Monitoring & Alerting: Metrics to Action Flow

## Introduction

Effective monitoring and alerting are critical for maintaining reliable systems. Without proper observability, you're flying blind when issues occur in production.

This guide visualizes the complete monitoring and alerting flow:

- **Metrics Collection**: From instrumentation to storage
- **Alert Evaluation**: When metrics cross thresholds
- **Notification Routing**: Getting alerts to the right people
- **Incident Response**: From alert to resolution
- **The Three Pillars**: Metrics, Logs, and Traces

## Part 1: Complete Monitoring & Alerting Flow

### End-to-End Overview

```mermaid
flowchart TD
    subgraph Apps["Application Layer"]
        App1["Application 1<br/>Exposes /metrics"]
        App2["Application 2<br/>Exposes /metrics"]
        App3["Application 3<br/>Exposes /metrics"]
    end
    subgraph Collection["Metrics Collection"]
        Prometheus["Prometheus Server<br/>Scrapes metrics every 15s<br/>Stores time-series data"]
    end
    subgraph Rules["Alert Rules Engine"]
        Rules1["Alert Rule 1: High Error Rate<br/>rate > 5%"]
        Rules2["Alert Rule 2: High Latency<br/>p95 > 500ms"]
        Rules3["Alert Rule 3: Low Availability<br/>uptime < 99%"]
    end
    subgraph AlertMgr["Alertmanager"]
        Routing["Alert Routing<br/>- Group similar alerts<br/>- Deduplicate<br/>- Apply silences"]
        Throttle["Throttling<br/>- Rate limiting<br/>- Grouping window<br/>- Repeat interval"]
    end
    subgraph Notification["Notification Channels"]
        PagerDuty["PagerDuty<br/>Critical alerts<br/>On-call engineer"]
        Slack["Slack<br/>Warning alerts<br/>Team channel"]
        Email["Email<br/>Info alerts<br/>Distribution list"]
    end
    subgraph Response["Incident Response"]
        OnCall["On-Call Engineer<br/>Receives alert"]
        Investigate["Investigate Issue<br/>- Check dashboards<br/>- Review logs<br/>- Analyze traces"]
        Fix["Apply Fix<br/>- Deploy patch<br/>- Scale resources<br/>- Restart service"]
        Resolve["Resolve Alert<br/>Metrics return to normal"]
    end

    App1 -->|Scrape /metrics| Prometheus
    App2 -->|Scrape /metrics| Prometheus
    App3 -->|Scrape /metrics| Prometheus
    Prometheus -->|Evaluate every 1m| Rules1
    Prometheus -->|Evaluate every 1m| Rules2
    Prometheus -->|Evaluate every 1m| Rules3
    Rules1 -->|Trigger if true| Routing
    Rules2 -->|Trigger if true| Routing
    Rules3 -->|Trigger if true| Routing
    Routing --> Throttle
    Throttle -->|Severity: Critical| PagerDuty
    Throttle -->|Severity: Warning| Slack
    Throttle -->|Severity: Info| Email
    PagerDuty --> OnCall
    Slack --> OnCall
    OnCall --> Investigate
    Investigate --> Fix
    Fix --> Resolve
    Resolve -.->|Metrics normalized| Prometheus

    style Prometheus fill:#1e3a8a,stroke:#3b82f6
    style Routing fill:#1e3a8a,stroke:#3b82f6
    style PagerDuty fill:#7f1d1d,stroke:#ef4444
    style Resolve fill:#064e3b,stroke:#10b981
```

## Part 2: Metrics Collection Process

### Prometheus Scrape Flow

```mermaid
sequenceDiagram
    participant App as Application
    participant Metrics as /metrics Endpoint
    participant Prom as Prometheus
    participant TSDB as Time-Series Database
    participant Grafana as Grafana Dashboard

    Note over App: Application running<br/>Incrementing counters<br/>Recording histograms
    App->>Metrics: Update in-memory metrics<br/>http_requests_total++<br/>http_request_duration_seconds

    loop Every 15 seconds
        Prom->>Metrics: HTTP GET /metrics
        Metrics-->>Prom: Return current metrics<br/># TYPE http_requests_total counter<br/>http_requests_total{method="GET",status="200"} 1523<br/>http_requests_total{method="GET",status="500"} 12
        Note over Prom: Parse metrics<br/>Add labels:<br/>- job="myapp"<br/>- instance="pod-1:8080"<br/>- timestamp
        Prom->>TSDB: Store time-series data<br/>Append to existing series<br/>Create new series if needed
        Note over TSDB: Compress and store:<br/>http_requests_total{job="myapp", instance="pod-1:8080", method="GET", status="200"} = 1523 @ timestamp
    end

    Note over Prom,TSDB: Data retained for 15 days<br/>Older data deleted automatically

    Grafana->>Prom: PromQL query:<br/>rate(http_requests_total[5m])
    Prom->>TSDB: Fetch time-series data<br/>for last 5 minutes
    TSDB-->>Prom: Return raw data points
    Note over Prom: Calculate rate:<br/>Δ value / Δ time
    Prom-->>Grafana: Return computed values
    Grafana->>Grafana: Render graph<br/>Display on dashboard
```
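The 15-second scrape loop shown above is driven by Prometheus's scrape configuration. Here is a minimal sketch of what it might look like for the applications in the diagram — the job name, target addresses, and file name are illustrative, not taken from a real deployment:

```yaml
# prometheus.yml (illustrative sketch)
global:
  scrape_interval: 15s     # how often to pull /metrics from each target
  evaluation_interval: 1m  # how often to evaluate alert rules (see Part 3)

scrape_configs:
  - job_name: "myapp"
    metrics_path: /metrics        # the default, shown for clarity
    static_configs:
      - targets:
          - "app1.example.internal:8080"
          - "app2.example.internal:8080"
          - "app3.example.internal:8080"

rule_files:
  - "prometheus-rules.yaml"       # the alert rules defined in Part 3
```

In Kubernetes you would typically replace `static_configs` with service discovery, but the scrape interval and rule evaluation settings work the same way.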
### Metrics Instrumentation Example

```go
package main

import (
	"fmt"
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// Define metrics
var (
	// Counter - only goes up
	httpRequestsTotal = prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "http_requests_total",
			Help: "Total number of HTTP requests",
		},
		[]string{"method", "endpoint", "status"},
	)

	// Histogram - for request durations
	httpRequestDuration = prometheus.NewHistogramVec(
		prometheus.HistogramOpts{
			Name:    "http_request_duration_seconds",
			Help:    "HTTP request duration in seconds",
			Buckets: prometheus.DefBuckets, // 0.005, 0.01, 0.025, 0.05, ...
		},
		[]string{"method", "endpoint"},
	)

	// Gauge - current value (can go up or down)
	activeConnections = prometheus.NewGauge(
		prometheus.GaugeOpts{
			Name: "active_connections",
			Help: "Number of active connections",
		},
	)
)

func init() {
	// Register metrics with Prometheus
	prometheus.MustRegister(httpRequestsTotal)
	prometheus.MustRegister(httpRequestDuration)
	prometheus.MustRegister(activeConnections)
}

func trackMetrics(method, endpoint string, statusCode int, duration time.Duration) {
	// Increment request counter
	httpRequestsTotal.WithLabelValues(
		method, endpoint, fmt.Sprintf("%d", statusCode),
	).Inc()

	// Record request duration
	httpRequestDuration.WithLabelValues(
		method, endpoint,
	).Observe(duration.Seconds())
}

// processRequest stands in for your application logic.
func processRequest(w http.ResponseWriter, r *http.Request) {
	w.Write([]byte("ok"))
}

func handleRequest(w http.ResponseWriter, r *http.Request) {
	start := time.Now()

	// Increment active connections
	activeConnections.Inc()
	defer activeConnections.Dec()

	// Your application logic here
	processRequest(w, r)

	// Track metrics
	duration := time.Since(start)
	trackMetrics(r.Method, r.URL.Path, http.StatusOK, duration)
}

func main() {
	// Expose /metrics endpoint for Prometheus
	http.Handle("/metrics", promhttp.Handler())

	// Application endpoints
	http.HandleFunc("/api/users", handleRequest)

	http.ListenAndServe(":8080", nil)
}
```
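Hand-rolling `trackMetrics` works, but `client_golang` also ships HTTP instrumentation middleware in the `promhttp` package. The sketch below shows one way the same counters and histograms could be wired up with those helpers; the handler path and variable names are illustrative, and the helpers use the `code` and `method` labels rather than the custom `status`/`endpoint` labels above:

```go
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var (
	// Counter partitioned by status code and method; promhttp fills the
	// "code" and "method" labels automatically.
	requests = promauto.NewCounterVec(
		prometheus.CounterOpts{
			Name: "http_requests_total",
			Help: "Total number of HTTP requests",
		},
		[]string{"code", "method"},
	)

	// Histogram of request durations, also labelled by code and method.
	durations = promauto.NewHistogramVec(
		prometheus.HistogramOpts{
			Name:    "http_request_duration_seconds",
			Help:    "HTTP request duration in seconds",
			Buckets: prometheus.DefBuckets,
		},
		[]string{"code", "method"},
	)
)

func main() {
	// Plain business-logic handler, free of metrics code.
	users := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})

	// Chain the instrumentation helpers around the handler.
	instrumented := promhttp.InstrumentHandlerCounter(requests,
		promhttp.InstrumentHandlerDuration(durations, users))

	http.Handle("/api/users", instrumented)
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":8080", nil)
}
```

A side benefit of the middleware approach: it avoids a cardinality trap in the hand-rolled version, where using the raw `r.URL.Path` as a label value can create one time series per unique URL.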
## Part 3: Alert Evaluation and Firing

### Alert Rule Decision Tree

```mermaid
flowchart TD
    Start(["Prometheus evaluates<br/>alert rules every 1m"]) --> Query["Execute PromQL query:<br/>rate over 5m > threshold"]
    Query --> Result{"Query returns data?"}
    Result -->|No data| Inactive["Alert: Inactive<br/>No time-series match<br/>No notification"]
    Result -->|Data exists| CheckCondition{"Condition true?"}
    CheckCondition -->|False| Resolved{"Alert was firing?"}
    Resolved -->|Yes| SendResolved["Alert: Resolved<br/>Send resolved notification<br/>Green alert to channel"]
    Resolved -->|No| Inactive
    CheckCondition -->|True| Duration{"Condition true<br/>for 'for' duration?"}
    Duration -->|No| Pending["Alert: Pending<br/>Waiting for duration<br/>e.g., 5 minutes<br/>No notification yet"]
    Pending -.->|Check again| Start
    Duration -->|Yes| Firing["Alert: Firing 🔥<br/>Send to Alertmanager"]
    Firing --> Dedupe{"Already firing?"}
    Dedupe -->|Yes| Throttle["Respect repeat_interval<br/>e.g., every 4 hours<br/>Don't spam"]
    Dedupe -->|No| NewAlert["New alert!<br/>Send notification immediately"]
    Throttle --> TimeCheck{"Repeat interval elapsed?"}
    TimeCheck -->|No| Wait["Wait...<br/>Don't send yet"]
    TimeCheck -->|Yes| Reminder["Send reminder notification"]
    NewAlert --> AlertManager["Send to Alertmanager"]
    Reminder --> AlertManager
    AlertManager --> Route["Route based on labels<br/>Apply routing rules"]

    style Inactive fill:#1e3a8a,stroke:#3b82f6
    style Pending fill:#78350f,stroke:#f59e0b
    style Firing fill:#7f1d1d,stroke:#ef4444
    style SendResolved fill:#064e3b,stroke:#10b981
    style NewAlert fill:#7f1d1d,stroke:#ef4444
```

### Alert Rule Configuration

```yaml
# prometheus-rules.yaml
groups:
  - name: application_alerts
    interval: 60s  # Evaluate every 60 seconds
    rules:
      # High Error Rate Alert
      - alert: HighErrorRate
        expr: |
          (
            rate(http_requests_total{status=~"5.."}[5m])
            /
            rate(http_requests_total[5m])
          ) > 0.05
        for: 5m  # Must be true for 5 minutes before firing
        labels:
          severity: critical
          team: backend
        annotations:
          summary: "High error rate on {{ $labels.instance }}"
          description: "Error rate is {{ $value | humanizePercentage }} (threshold: 5%)"
          dashboard: "https://grafana.example.com/d/app"

      # High Latency Alert
      - alert: HighLatency
        expr: |
          histogram_quantile(0.95,
            rate(http_request_duration_seconds_bucket[5m])
          ) > 0.5
        for: 10m
        labels:
          severity: warning
          team: backend
        annotations:
          summary: "High latency on {{ $labels.instance }}"
          description: "P95 latency is {{ $value }}s (threshold: 0.5s)"

      # Service Down Alert
      - alert: ServiceDown
        expr: up{job="myapp"} == 0
        for: 1m
        labels:
          severity: critical
          team: sre
        annotations:
          summary: "Service {{ $labels.instance }} is down"
          description: "Cannot scrape metrics from {{ $labels.instance }}"

      # Memory Usage Alert
      - alert: HighMemoryUsage
        expr: |
          (
            container_memory_usage_bytes{pod=~"myapp-.*"}
            /
            container_spec_memory_limit_bytes{pod=~"myapp-.*"}
          ) > 0.90
        for: 5m
        labels:
          severity: warning
          team: platform
        annotations:
          summary: "High memory usage on {{ $labels.pod }}"
          description: "Memory usage is {{ $value | humanizePercentage }} of limit"
```
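Rules like these are easy to get subtly wrong (inverted thresholds, label mismatches, a `for` duration that never elapses), so it is worth unit-testing them with `promtool test rules` before they ever page anyone. A sketch of a test file for the `ServiceDown` rule above — the file name, sample series, and timings are invented for illustration:

```yaml
# alerts_test.yaml (illustrative; run with `promtool test rules alerts_test.yaml`)
rule_files:
  - prometheus-rules.yaml

evaluation_interval: 1m

tests:
  - interval: 1m
    # Simulate a target that has been unreachable for five evaluation steps.
    input_series:
      - series: 'up{job="myapp", instance="pod-1:8080"}'
        values: '0 0 0 0 0'
    alert_rule_test:
      - eval_time: 2m    # past the 1m "for" duration, so the alert should fire
        alertname: ServiceDown
        exp_alerts:
          - exp_labels:
              severity: critical
              team: sre
              job: myapp
              instance: pod-1:8080
            exp_annotations:
              summary: "Service pod-1:8080 is down"
              description: "Cannot scrape metrics from pod-1:8080"
```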
## Part 4: Alert Routing and Notification

### Alertmanager Processing Flow

```mermaid
flowchart TD
    Start(["Alert received from Prometheus"]) --> Inhibit{"Inhibition rules match?"}
    Inhibit -->|Yes| Suppress["Alert suppressed<br/>Higher-priority alert already firing<br/>e.g., NodeDown inhibits all pod alerts on that node"]
    Inhibit -->|No| Silence{"Silence matches?"}
    Silence -->|Yes| Silenced["Alert silenced<br/>Manual suppression<br/>During maintenance window<br/>No notification sent"]
    Silence -->|No| Group["Group alerts<br/>By: cluster, alertname<br/>Combine similar alerts"]
    Group --> Wait["Wait for group_wait<br/>Default: 30s<br/>Collect more alerts"]
    Wait --> Batch["Create notification batch<br/>Multiple alerts grouped<br/>Single notification"]
    Batch --> Route{"Match routing tree?"}
    Route --> Critical{"severity: critical?"}
    Route --> Warning{"severity: warning?"}
    Route --> Default["Default route"]
    Critical --> Team1{"team: backend?"}
    Team1 -->|Yes| PagerDuty["PagerDuty<br/>Page on-call engineer<br/>Escalate if no ack in 5 minutes"]
    Team1 -->|No| Team2["Other team's PagerDuty"]
    Warning --> SlackRoute{"team: backend?"}
    SlackRoute -->|Yes| Slack["Slack #backend-alerts<br/>Post message<br/>@here mention"]
    SlackRoute -->|No| SlackOther["Other team's Slack"]
    Default --> Email["Email<br/>Send to mailing list<br/>Low priority"]
    PagerDuty --> Track["Track notification<br/>Set repeat_interval timer<br/>4 hours until resolved"]
    Slack --> Track
    Email --> Track
    Track --> Resolved{"Alert resolved?"}
    Resolved -->|No| RepeatCheck{"repeat_interval elapsed?"}
    RepeatCheck -->|Yes| Resend["Resend notification<br/>Reminder that alert is still firing"]
    Resend -.-> Track
    RepeatCheck -->|No| Wait2["Wait..."]
    Wait2 -.-> Resolved
    Resolved -->|Yes| SendResolved["Send resolved notification<br/>All is well ✓"]

    style Suppress fill:#1e3a8a,stroke:#3b82f6
    style Silenced fill:#1e3a8a,stroke:#3b82f6
    style PagerDuty fill:#7f1d1d,stroke:#ef4444
    style SendResolved fill:#064e3b,stroke:#10b981
```

### Alertmanager Configuration

```yaml
# alertmanager.yaml
global:
  resolve_timeout: 5m
  slack_api_url: 'https://hooks.slack.com/services/XXX'
  pagerduty_url: 'https://events.pagerduty.com/v2/enqueue'

# Inhibition rules - suppress alerts when a higher-priority alert is firing
inhibit_rules:
  # If a node is down, don't alert on pods on that node
  - source_match:
      alertname: 'NodeDown'
    target_match:
      alertname: 'PodDown'
    equal: ['node']

  # If the entire cluster is down, don't alert on individual services
  - source_match:
      severity: 'critical'
      alertname: 'ClusterDown'
    target_match_re:
      severity: 'warning|info'
    equal: ['cluster']

# Route tree - how to send alerts
route:
  receiver: 'default-email'
  group_by: ['alertname', 'cluster', 'service']
  group_wait: 30s        # Wait 30s to collect more alerts
  group_interval: 5m     # Send updates every 5m for grouped alerts
  repeat_interval: 4h    # Resend if still firing after 4h

  routes:
    # Critical alerts to PagerDuty
    - match:
        severity: critical
      receiver: 'pagerduty-critical'
      group_wait: 10s    # Page quickly for critical
      continue: true     # Also send to Slack

    - match:
        severity: critical
      receiver: 'slack-critical'

    # Warning alerts to Slack
    - match:
        severity: warning
      receiver: 'slack-warnings'
      group_wait: 1m

    # Team-specific routing
    - match:
        team: backend
      receiver: 'backend-team'

    - match:
        team: frontend
      receiver: 'frontend-team'

# Receivers - where to send alerts
receivers:
  - name: 'default-email'
    email_configs:
      - to: '[email protected]'
        headers:
          Subject: '{{ .GroupLabels.alertname }}: {{ .Status | toUpper }}'

  - name: 'pagerduty-critical'
    pagerduty_configs:
      - service_key: 'your-pagerduty-key'
        description: '{{ .GroupLabels.alertname }}: {{ .CommonAnnotations.summary }}'
        severity: 'critical'

  - name: 'slack-critical'
    slack_configs:
      - channel: '#alerts-critical'
        title: '🚨 CRITICAL: {{ .GroupLabels.alertname }}'
        text: |
          {{ range .Alerts }}
          *Alert:* {{ .Annotations.summary }}
          *Description:* {{ .Annotations.description }}
          *Severity:* {{ .Labels.severity }}
          *Dashboard:* {{ .Annotations.dashboard }}
          {{ end }}
        color: 'danger'
        send_resolved: true

  - name: 'slack-warnings'
    slack_configs:
      - channel: '#alerts-warning'
        title: '⚠️ WARNING: {{ .GroupLabels.alertname }}'
        text: '{{ range .Alerts }}{{ .Annotations.summary }}{{ end }}'
        color: 'warning'

  - name: 'backend-team'
    slack_configs:
      - channel: '#backend-alerts'
```
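The processing flow above mentions silencing alerts during maintenance windows. Ad-hoc silences are usually created in the Alertmanager UI, but recurring windows can also be expressed in the configuration itself via time intervals (available in recent Alertmanager releases). A sketch of how that might be added to the config above — the interval name, schedule, and the route it is attached to are invented for illustration:

```yaml
# Illustrative sketch: a recurring maintenance window.
# Defined at the top level of alertmanager.yaml...
time_intervals:
  - name: weekend-maintenance
    time_intervals:
      - weekdays: ['saturday', 'sunday']
        times:
          - start_time: '02:00'
            end_time: '04:00'

# ...and referenced from a sub-route inside the existing route tree,
# so matching warnings are muted while the window is active.
route:
  routes:
    - match:
        team: backend
        severity: warning
      receiver: 'slack-warnings'
      mute_time_intervals:
        - weekend-maintenance
```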
## Part 5: Incident Response Workflow

### From Alert to Resolution

```mermaid
sequenceDiagram
    participant Alert as Alert System
    participant PD as PagerDuty
    participant Eng as On-Call Engineer
    participant Dash as Grafana Dashboard
    participant Logs as Log Aggregator
    participant Trace as Tracing System
    participant K8s as Kubernetes
    participant Incident as Incident Channel

    Alert->>PD: 🚨 Critical Alert<br/>HighErrorRate firing<br/>Service: myapp<br/>Error rate: 12%
    PD->>Eng: 📱 Phone call + SMS + Push<br/>Incident created
    Note over Eng: Engineer woken up at 3 AM 😴
    Eng->>PD: Acknowledge incident<br/>Stop escalation
    Eng->>Incident: Create #incident-123<br/>Post initial status
    Note over Eng: Open laptop<br/>Start investigation

    Eng->>Dash: Open dashboard<br/>Check error rate graph
    Dash-->>Eng: Graph shows spike<br/>Started 5 minutes ago<br/>Only affects /api/payment
    Eng->>Logs: Query logs:<br/>level=error AND path=/api/payment
    Logs-->>Eng: Errors:<br/>"Database connection timeout"<br/>"Cannot connect to db:5432"
    Note over Eng: Database issue suspected

    Eng->>K8s: kubectl get pods -n database
    K8s-->>Eng: postgres-0: CrashLoopBackOff<br/>Restart count: 8
    Eng->>K8s: kubectl describe pod postgres-0
    K8s-->>Eng: Event: Liveness probe failed<br/>Event: OOMKilled<br/>Memory: 2.1Gi / 2Gi limit
    Note over Eng: Database OOMKilled!<br/>Needs more memory

    Eng->>Incident: Update: Database OOM<br/>Action: Increasing memory limit
    Eng->>K8s: kubectl edit statefulset postgres<br/>Change: 2Gi → 4Gi memory
    K8s-->>Eng: StatefulSet updated
    Note over K8s: Rolling restart<br/>postgres-0 recreated with 4Gi memory
    Eng->>K8s: kubectl get pods -n database -w<br/>Watch pod status
    K8s-->>Eng: postgres-0: Running ✓<br/>Ready: 1/1
    Note over Eng: Wait for metrics to normalize

    Eng->>Dash: Refresh dashboard
    Dash-->>Eng: Error rate: 0.3% ✓<br/>Latency: normal ✓<br/>Back to baseline
    Note over Alert: Metrics normalized<br/>Alert condition now false
    Alert->>PD: ✅ Alert resolved
    PD->>Eng: Incident auto-resolved
    Eng->>Incident: Incident resolved ✓<br/>Root cause: DB OOM<br/>Fix: Increased memory<br/>Duration: 23 minutes
    Eng->>Eng: Create follow-up tasks:<br/>1. Set memory alerts<br/>2. Review query performance<br/>3. Consider connection pooling
    Note over Eng: Back to sleep 😴<br/>Post-mortem tomorrow
```

## Part 6: The Three Pillars of Observability

### Metrics, Logs, and Traces Integration

```mermaid
flowchart TD
    Issue(["Production Issue Detected"]) --> Which{"Which pillar to start with?"}
    Which --> Metrics["1️⃣ METRICS<br/>What is broken?"]
    Which --> Logs["2️⃣ LOGS<br/>Why is it broken?"]
    Which --> Traces["3️⃣ TRACES<br/>Where is it broken?"]

    Metrics --> M1["Check Grafana<br/>- Error rate spiking?<br/>- Latency increased?<br/>- Which service?<br/>- Which endpoint?"]
    M1 --> M2["Identify:<br/>✓ Service: payment-api<br/>✓ Endpoint: /checkout<br/>✓ Metric: p95 latency 5000ms<br/>✓ Time: started 10m ago"]
    M2 --> UseTrace{"Need to see request flow?"}
    UseTrace -->|Yes| Traces

    Logs --> L1["Search logs in ELK/Loki<br/>service=payment-api AND<br/>path=/checkout AND level=error"]
    L1 --> L2["Find errors:<br/>'Database query timeout'<br/>'SELECT * FROM orders WHERE user_id=123'<br/>execution time: 5200ms"]
    L2 --> L3["Context found:<br/>✓ Specific query is slow<br/>✓ Affecting user_id=123<br/>✓ No index on user_id?"]
    L3 --> UseMetrics{"Verify with metrics?"}
    UseMetrics -->|Yes| Metrics

    Traces --> T1["Open Jaeger/Tempo<br/>Search trace_id or service=payment-api"]
    T1 --> T2["View distributed trace:<br/>payment-api: 5100ms<br/>├─ auth-svc: 20ms ✓<br/>├─ inventory-svc: 30ms ✓<br/>└─ database: 5000ms ❌<br/>(query: SELECT * FROM orders)"]
    T2 --> T3["Identify bottleneck:<br/>✓ Database query is slow<br/>✓ Affects only /checkout<br/>✓ Other services healthy"]
    T3 --> UseLogs{"Need error details?"}
    UseLogs -->|Yes| Logs

    M2 --> RootCause["Combine insights:<br/>METRICS: latency spike on /checkout<br/>LOGS: specific query timeout<br/>TRACES: database is the bottleneck"]
    L3 --> RootCause
    T3 --> RootCause
    RootCause --> Fix["Root Cause Found:<br/>Missing database index on orders.user_id<br/>Fix: CREATE INDEX idx_user_id ON orders"]

    style Metrics fill:#1e3a8a,stroke:#3b82f6
    style Logs fill:#78350f,stroke:#f59e0b
    style Traces fill:#064e3b,stroke:#10b981
    style RootCause fill:#064e3b,stroke:#10b981
    style Fix fill:#064e3b,stroke:#10b981
```

### When to Use Each Pillar

| Pillar | Best For | Example Questions | Tools |
|--------|----------|-------------------|-------|
| Metrics | Detecting issues, trends | Is the service up?<br/>What's the error rate?<br/>Is latency increasing? | Prometheus, Grafana, Datadog |
| Logs | Understanding what happened | What was the error message?<br/>Which user was affected?<br/>What was the input? | ELK, Loki, Splunk |
| Traces | Finding bottlenecks | Which service is slow?<br/>Where is the delay?<br/>How do requests flow? | Jaeger, Tempo, Zipkin |
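The three pillars pay off most when they can be cross-referenced. One common pattern (not specific to any particular stack) is to stamp every log line with the active trace ID, so a log hit in Loki/ELK can be opened directly in Jaeger/Tempo. A minimal Go sketch of that idea, assuming OpenTelemetry HTTP middleware has already placed a span in the request context — the handler path, service name, and log fields are illustrative:

```go
package main

import (
	"log/slog"
	"net/http"
	"os"

	"go.opentelemetry.io/otel/trace"
)

// JSON logs are easiest to search and filter in Loki/ELK.
var logger = slog.New(slog.NewJSONHandler(os.Stdout, nil))

// handleCheckout stands in for the payment-api /checkout handler in the
// example above. If no span is recording, the trace ID is all zeros.
func handleCheckout(w http.ResponseWriter, r *http.Request) {
	traceID := trace.SpanFromContext(r.Context()).SpanContext().TraceID().String()

	// Every log line carries the trace ID, linking "why" (logs) to "where" (traces).
	logger.Error("database query timeout",
		"trace_id", traceID,
		"service", "payment-api",
		"path", r.URL.Path,
	)
	http.Error(w, "internal error", http.StatusInternalServerError)
}

func main() {
	http.HandleFunc("/checkout", handleCheckout)
	http.ListenAndServe(":8080", nil)
}
```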
## Part 7: Setting Up Effective Alerts

### Alert Quality Framework

```mermaid
flowchart TD
    Start(["New Alert Idea"]) --> Question1{"Does this require immediate action?"}
    Question1 -->|No| Ticket["Create a ticket instead<br/>Not an alert<br/>Review during business hours"]
    Question1 -->|Yes| Question2{"Can it be automated away?"}
    Question2 -->|Yes| Automate["Build automation<br/>Auto-scaling<br/>Auto-healing<br/>Self-recovery"]
    Question2 -->|No| Question3{"Is it actionable?"}
    Question3 -->|No| Rethink["Rethink the alert<br/>What action should the engineer take?<br/>If none, it is not an alert"]
    Question3 -->|Yes| Question4{"Is the signal clear?"}
    Question4 -->|No| Refine["Refine the threshold<br/>Add a 'for' duration<br/>Adjust sensitivity<br/>Reduce false positives"]
    Question4 -->|Yes| Question5{"Provides enough context?"}
    Question5 -->|No| AddContext["Add context:<br/>- Dashboard link<br/>- Runbook link<br/>- Query to debug<br/>- Recent changes"]
    Question5 -->|Yes| Question6{"Correct severity?"}
    Question6 -->|No| Severity["Adjust severity:<br/>Critical = Page<br/>Warning = Slack<br/>Info = Email"]
    Question6 -->|Yes| GoodAlert["✅ Good Alert!<br/>- Actionable<br/>- Clear signal<br/>- Right severity<br/>- Good context"]
    GoodAlert --> Deploy["Deploy alert<br/>Monitor for:<br/>- False positives<br/>- Alert fatigue<br/>- Resolution time"]

    style Ticket fill:#1e3a8a,stroke:#3b82f6
    style Automate fill:#064e3b,stroke:#10b981
    style GoodAlert fill:#064e3b,stroke:#10b981
    style Rethink fill:#7f1d1d,stroke:#ef4444
```

## Part 8: Best Practices

### DO's and DON'Ts

✅ DO: ...

    January 23, 2025 · 12 min · Rafiul Alam