Release Management: From Semantic Versioning to Production Deployment

Introduction

Release management is the process of planning, scheduling, and controlling software releases through different stages and environments. It ensures that software is released reliably, predictably, and with minimal disruption. This guide visualizes key release management concepts:

- Semantic Versioning: deciding when to bump major, minor, or patch versions
- Release Train: a structured release cadence with quality gates
- Hotfix Process: fast-tracking critical fixes to production
- Release Checklist: ensuring nothing is missed during deployment
- Environment Promotion: moving code through dev, staging, and production

Part 1: Semantic Versioning Decision Tree

Understanding Version Numbers: MAJOR.MINOR.PATCH

Semantic versioning (SemVer) uses a three-part version number: MAJOR.MINOR.PATCH ...
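The bump decision itself is mechanical once you know whether a change is breaking, additive, or a fix. As a rough illustration (not code from the post; the `Version` type and `Bump` function are hypothetical names), a minimal Go sketch:

```go
package main

import "fmt"

// Version models a MAJOR.MINOR.PATCH semantic version.
type Version struct {
	Major, Minor, Patch int
}

// Bump applies the SemVer rules: a breaking change bumps MAJOR and resets
// MINOR/PATCH; a backward-compatible feature bumps MINOR and resets PATCH;
// a backward-compatible fix bumps PATCH only.
func Bump(v Version, breaking, feature bool) Version {
	switch {
	case breaking:
		return Version{Major: v.Major + 1}
	case feature:
		return Version{Major: v.Major, Minor: v.Minor + 1}
	default:
		return Version{Major: v.Major, Minor: v.Minor, Patch: v.Patch + 1}
	}
}

func main() {
	v := Version{1, 4, 2}
	next := Bump(v, false, true) // new feature, no breaking change
	fmt.Printf("%d.%d.%d\n", next.Major, next.Minor, next.Patch) // 1.5.0
}
```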

    January 24, 2025 · 19 min · Rafiul Alam

    CI/CD Pipeline: Git Push to Production Deployment

Introduction

CI/CD (Continuous Integration/Continuous Deployment) automates the software delivery process from code commit to production deployment. This automation reduces manual errors, speeds up releases, and improves software quality. This guide visualizes the complete CI/CD pipeline:

- Code Commit: developer pushes code
- Continuous Integration: automated testing and building
- Continuous Deployment: automated deployment to production
- Quality Gates: checkpoints ensuring code quality
- Rollback Mechanisms: handling deployment failures

Part 1: Complete CI/CD Pipeline Overview

End-to-End Flow

[Mermaid flowchart: the developer commits and pushes, a webhook triggers the pipeline, and the stages run in order: Stage 1 checkout, Stage 2 lint (ESLint, Prettier, golangci-lint), Stage 3 unit tests with a coverage report, Stage 4 build (compile, Docker image), Stage 5 integration tests against real dependencies (database, APIs), Stage 6 security scan (OWASP, Snyk, Trivy), Stage 7 push the image (myapp:abc123) to the container registry, Stage 8 deploy to staging, Stage 9 smoke tests (critical paths, health checks), an optional manual approval, and Stage 10 deploy to production via rolling update or blue-green. Any gate failure stops the pipeline and notifies the developer; an unhealthy production deploy triggers an auto-rollback and alerts the on-call team.]

Part 2: Continuous Integration (CI) Stages

CI Pipeline Detailed Flow

[Mermaid sequence diagram: the developer pushes a feature branch and the webhook triggers the CI server, which creates an Ubuntu 22.04 build environment, does a shallow clone of the commit, and installs dependencies (npm install, go mod download). It then runs the linter (eslint, golangci-lint), unit tests with a coverage threshold (e.g., failing at 72% against an 80% minimum), the application and Docker image builds, integration tests against docker-compose services (postgres, redis), and security scans (npm audit, snyk test, trivy image). Each failure notifies the developer via Slack/email; on success the image is pushed to the Docker registry as myapp:abc123 and myapp:latest, and the developer is told the pipeline succeeded, e.g., in 8m 32s.]
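Stage 3 is where `go test ./...` (or `npm test`) runs the suite that the coverage gate measures. As a small illustration of the kind of unit test that stage executes, here is a table-driven Go test; the `Add` function and package name are hypothetical, not from the post:

```go
package calc

import "testing"

// Add is a trivial function under test.
func Add(a, b int) int { return a + b }

// TestAdd is a table-driven test, the idiomatic Go style that
// `go test ./... -cover` runs and measures in CI.
func TestAdd(t *testing.T) {
	cases := []struct {
		name       string
		a, b, want int
	}{
		{"positive", 2, 3, 5},
		{"negative", -2, -3, -5},
		{"zero", 0, 0, 0},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			if got := Add(tc.a, tc.b); got != tc.want {
				t.Errorf("Add(%d, %d) = %d, want %d", tc.a, tc.b, got, tc.want)
			}
		})
	}
}
```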
GitHub Actions CI Configuration

```yaml
# .github/workflows/ci.yml
name: CI Pipeline

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  # Job 1: Code Quality Checks
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run ESLint
        run: npm run lint
      - name: Run Prettier
        run: npm run format:check

  # Job 2: Unit Tests
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run tests
        run: npm test -- --coverage
      - name: Check coverage threshold
        run: |
          COVERAGE=$(cat coverage/coverage-summary.json | jq '.total.lines.pct')
          if (( $(echo "$COVERAGE < 80" | bc -l) )); then
            echo "Coverage $COVERAGE% is below 80%"
            exit 1
          fi
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v3

  # Job 3: Build
  build:
    runs-on: ubuntu-latest
    needs: [lint, test]  # Wait for lint and test to pass
    steps:
      - uses: actions/checkout@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v2
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v4
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=sha,prefix={{branch}}-
            type=ref,event=branch
            type=ref,event=pr
      - name: Build and push Docker image
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

  # Job 4: Integration Tests
  integration-test:
    runs-on: ubuntu-latest
    needs: build
    services:
      postgres:
        image: postgres:15
        env:
          POSTGRES_PASSWORD: postgres
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
      redis:
        image: redis:7
        options: >-
          --health-cmd "redis-cli ping"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v3
      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run integration tests
        run: npm run test:integration
        env:
          DATABASE_URL: postgresql://postgres:postgres@localhost:5432/test
          REDIS_URL: redis://localhost:6379

  # Job 5: Security Scan
  security:
    runs-on: ubuntu-latest
    needs: build
    steps:
      - uses: actions/checkout@v3
      - name: Run npm audit
        run: npm audit --audit-level=high
      - name: Run Snyk security scan
        uses: snyk/actions/node@master
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
      - name: Scan Docker image with Trivy
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}
          format: 'sarif'
          output: 'trivy-results.sarif'
      - name: Upload Trivy results to GitHub Security
        uses: github/codeql-action/upload-sarif@v2
        with:
          sarif_file: 'trivy-results.sarif'
```

Part 3: Continuous Deployment (CD) Stages

Deployment Pipeline Flow

[Mermaid flowchart: after CI passes, the branch decides the target. Feature branches skip deployment; develop auto-deploys to the dev namespace with basic smoke tests; main deploys to staging. Staging updates the Kubernetes manifests to the new image, applies them (kubectl apply -f k8s/staging/), waits for all pods to become ready, and rolls back (kubectl rollout undo) with a team notification if pods are unhealthy or smoke tests (health endpoint, critical API endpoints, database connectivity) fail. Once staging is ready, an optional manual approval gates production. Production deploys take a backup (current deployment state, database snapshot, config), apply a rolling update (maxSurge: 1, maxUnavailable: 0), monitor the rollout, auto-rollback and page on-call on failure, and watch error rates, latency, and business KPIs for 10 minutes before declaring the deployment complete and pruning old replica sets and images.]
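Stages 8 and 9 hinge on a smoke test that can fail the pipeline fast. A minimal version is just an HTTP check against the freshly deployed service; the sketch below is illustrative only (the staging URLs and /health path are assumptions, not from the post):

```go
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

// A smoke test should answer one question quickly: is the critical
// path alive? Exiting non-zero fails the CI job and triggers rollback.
func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	urls := []string{
		"https://staging.example.com/health",
		"https://staging.example.com/api/users?limit=1",
	}
	for _, url := range urls {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Fprintf(os.Stderr, "smoke test failed: %s: %v\n", url, err)
			os.Exit(1)
		}
		resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			fmt.Fprintf(os.Stderr, "smoke test failed: %s returned %d\n", url, resp.StatusCode)
			os.Exit(1)
		}
		fmt.Printf("ok: %s\n", url)
	}
}
```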
Part 4: Quality Gates

Quality Gate Decision Flow

[Mermaid flowchart: four sequential quality gates, each of which blocks deployment on failure. Gate 1, code quality: linting passes (ESLint, Prettier), cyclomatic complexity under 10 per function, code duplication under 3%. Gate 2, testing: line coverage >= 80% and branch coverage >= 75%, all unit and integration tests pass, performance tests stay within the response-time baseline. Gate 3, security: no high/critical CVEs, no hardcoded credentials, dependencies up to date. Gate 4, production readiness: liveness and readiness health checks, CPU and memory limits defined, README and API docs in place. Only when all four gates pass is the build ready for deployment.]

Part 5: GitLab CI/CD Example

.gitlab-ci.yml Configuration

```yaml
# .gitlab-ci.yml
stages:
  - lint
  - test
  - build
  - security
  - deploy-staging
  - deploy-production

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"
  IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA

# Template for Docker jobs
.docker-login: &docker-login
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY

# Stage 1: Linting
lint:code:
  stage: lint
  image: node:18
  script:
    - npm ci
    - npm run lint
    - npm run format:check
  cache:
    paths:
      - node_modules/

# Stage 2: Testing
test:unit:
  stage: test
  image: node:18
  script:
    - npm ci
    - npm test -- --coverage
    - |
      COVERAGE=$(cat coverage/coverage-summary.json | jq '.total.lines.pct')
      if (( $(echo "$COVERAGE < 80" | bc -l) )); then
        echo "Coverage $COVERAGE% is below threshold"
        exit 1
      fi
  coverage: '/Lines\s*:\s*(\d+\.\d+)%/'
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage/cobertura-coverage.xml

test:integration:
  stage: test
  image: node:18
  services:
    - name: postgres:15
      alias: postgres
    - name: redis:7
      alias: redis
  variables:
    DATABASE_URL: postgresql://postgres:postgres@postgres:5432/test
    REDIS_URL: redis://redis:6379
  script:
    - npm ci
    - npm run test:integration

# Stage 3: Build
build:image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  <<: *docker-login
  script:
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
    - docker tag $IMAGE_TAG $CI_REGISTRY_IMAGE:latest
    - docker push $CI_REGISTRY_IMAGE:latest
  only:
    - main
    - develop

# Stage 4: Security Scanning
security:scan:
  stage: security
  image: aquasec/trivy:latest
  script:
    - trivy image --severity HIGH,CRITICAL --exit-code 1 $IMAGE_TAG
  allow_failure: true

security:sast:
  stage: security
  image: node:18
  script:
    - npm audit --audit-level=high
    - npx snyk test --severity-threshold=high
  allow_failure: true

# Stage 5: Deploy to Staging
deploy:staging:
  stage: deploy-staging
  image: bitnami/kubectl:latest
  script:
    - kubectl config set-cluster k8s --server="$K8S_SERVER"
    - kubectl config set-credentials admin --token="$K8S_TOKEN"
    - kubectl config set-context default --cluster=k8s --user=admin
    - kubectl config use-context default
    - |
      kubectl set image deployment/myapp \
        myapp=$IMAGE_TAG \
        -n staging
    - kubectl rollout status deployment/myapp -n staging --timeout=5m
    - kubectl get pods -n staging
  environment:
    name: staging
    url: https://staging.example.com
  only:
    - main

# Stage 6: Deploy to Production
deploy:production:
  stage: deploy-production
  image: bitnami/kubectl:latest
  script:
    - kubectl config set-cluster k8s --server="$K8S_SERVER"
    - kubectl config set-credentials admin --token="$K8S_TOKEN"
    - kubectl config set-context default --cluster=k8s --user=admin
    - kubectl config use-context default
    - |
      kubectl set image deployment/myapp \
        myapp=$IMAGE_TAG \
        -n production
    - kubectl rollout status deployment/myapp -n production --timeout=10m
    - |
      # Check pod health
      READY=$(kubectl get deployment myapp -n production -o jsonpath='{.status.readyReplicas}')
      DESIRED=$(kubectl get deployment myapp -n production -o jsonpath='{.spec.replicas}')
      if [ "$READY" != "$DESIRED" ]; then
        echo "Deployment unhealthy: $READY/$DESIRED pods ready"
        kubectl rollout undo deployment/myapp -n production
        exit 1
      fi
  environment:
    name: production
    url: https://example.com
  when: manual  # Require manual approval
  only:
    - main
```
Part 6: Pipeline Best Practices

Pipeline Optimization

Fast Feedback Loop: ...

    January 23, 2025 · 11 min · Rafiul Alam

    Deployment Strategies: Blue-Green, Canary, Rolling Updates

Introduction

Choosing the right deployment strategy is critical for minimizing downtime and risk when releasing new versions of your application. Different strategies offer different trade-offs between speed, safety, and resource usage. This guide visualizes three essential deployment strategies:

- Rolling Updates: gradual replacement of instances
- Blue-Green Deployments: instant cutover between versions
- Canary Deployments: progressive rollout with traffic splitting
- Comparison and Use Cases: when to use each strategy

Part 1: Rolling Update Deployment

Rolling updates gradually replace old version pods with new version pods, ensuring continuous availability. ...
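The mechanics of that Part 1 flow can be summarized in a few lines: add one new-version instance, wait for readiness, retire one old instance, repeat, so capacity never drops. This Go sketch only simulates the sequencing (it prints the steps rather than talking to any orchestrator; all names are illustrative):

```go
package main

import "fmt"

// rollingUpdate replaces old-version instances one at a time
// (maxSurge: 1, maxUnavailable: 0): start a new instance, wait for it
// to pass readiness, then retire an old one.
func rollingUpdate(instances []string, newVersion string) {
	for i := range instances {
		fmt.Printf("surge:  start %s instance #%d\n", newVersion, i)
		fmt.Printf("wait:   readiness probe passes for #%d\n", i)
		fmt.Printf("retire: old instance #%d (%s)\n", i, instances[i])
		instances[i] = newVersion
		fmt.Printf("state:  %v\n\n", instances)
	}
}

func main() {
	pods := []string{"v1", "v1", "v1"}
	rollingUpdate(pods, "v2") // ends with [v2 v2 v2], never below 3 ready pods
}
```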

    January 23, 2025 · 11 min · Rafiul Alam

    Docker Build Process: Dockerfile to Image to Container

Introduction

Docker has revolutionized how we build, ship, and run applications by providing a standardized way to package software with all its dependencies. Understanding the Docker build process is fundamental to modern DevOps practices. This guide visualizes the complete journey from writing a Dockerfile to running a container:

- Dockerfile Creation: writing instructions for your application
- Image Building: creating a reusable image from the Dockerfile
- Container Execution: running instances of your image
- Multi-Stage Builds: optimizing image size and security

Part 1: The Docker Build Process Flow

Complete Build to Run Lifecycle

[Mermaid flowchart: the developer writes a Dockerfile, prepares the build context, and runs `docker build -t app:v1 .`. Each instruction (FROM, COPY/ADD, RUN, ENV/EXPOSE/WORKDIR, CMD/ENTRYPOINT) is processed in turn; if a layer already exists in the cache it is reused for a fast build, otherwise a new layer is built. The finished image is tagged app:v1 and stored in the local registry. `docker run -p 8080:8080 app:v1` then creates a container (writable layer on top, network configuration, volume mounts), starts the CMD/ENTRYPOINT process, and serves the application on port 8080 until the container is stopped, after which it can be restarted or removed.]

Image Layer Structure

Docker images are built in layers, with each instruction in the Dockerfile creating a new layer: ...
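The cache check in that flow is essentially a lookup keyed on the parent layer plus the instruction (and, for COPY/ADD, a checksum of the copied files). The Go sketch below is a conceptual approximation of that idea, not Docker's actual implementation:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// layerKey approximates how a build cache decides whether a layer is
// reusable: if the parent layer, the instruction, and any copied-file
// content are unchanged, the key matches and the cached layer is reused.
func layerKey(parent, instruction, contentHash string) string {
	sum := sha256.Sum256([]byte(parent + "\x00" + instruction + "\x00" + contentHash))
	return fmt.Sprintf("%x", sum[:8])
}

func main() {
	base := layerKey("", "FROM golang:1.22", "")
	deps := layerKey(base, "COPY go.mod go.sum ./", "h1:abc")
	build := layerKey(deps, "RUN go build -o app", "")
	fmt.Println(base, deps, build)

	// Changing only the copied files invalidates that layer and every
	// layer after it, which is why Dockerfiles copy dependency manifests
	// before the rest of the source tree.
	deps2 := layerKey(base, "COPY go.mod go.sum ./", "h1:def")
	fmt.Println(deps2 != deps) // true
}
```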

    January 23, 2025 · 9 min · Rafiul Alam

    GitOps Workflow: Git as Single Source of Truth

Introduction

GitOps is a modern approach to continuous deployment where Git serves as the single source of truth for both application code and infrastructure. Changes are made through Git commits, and automated agents ensure the live environment matches the desired state in Git. This guide visualizes the GitOps workflow:

- Declarative Infrastructure: everything defined as code in Git
- Automated Sync: agents continuously reconcile live state with Git
- Drift Detection: automatic detection and correction of manual changes
- Pull-Based Deployment: agents pull changes from Git (vs push-based CI/CD)
- Audit Trail: complete history of changes in Git

Part 1: GitOps Overview

GitOps Principles

[Mermaid diagram of the four core principles:]

1. Declarative: all config lives in Git as YAML, describing desired state rather than steps (Kubernetes manifests, Terraform code, Helm charts).
2. Versioned & Immutable: Git history is the truth and every change is tracked (git log shows who changed what, git revert rolls back, git blame gives accountability).
3. Pulled Automatically: agents pull from Git, so no push access to the cluster is needed (ArgoCD polls Git every 3 minutes, FluxCD watches the repo).
4. Continuously Reconciled: agents detect drift and auto-heal to the Git state (a manual kubectl edit is detected, reverted, and a drift alert is sent).

Part 2: GitOps vs Traditional CI/CD

Architecture Comparison

[Mermaid comparison diagram. Traditional push model: developer → git push → CI server (GitHub Actions, Jenkins) → build & test → Docker build → push image → deploy script (kubectl apply) → Kubernetes cluster. Issues: the CI system needs cluster credentials, push access is a security risk, there is no drift detection, and manual changes persist. GitOps pull model: developer → git push → Git repository (Kubernetes manifests, Helm charts); a GitOps agent (ArgoCD/Flux) running inside the cluster polls Git every ~3 minutes, compares desired state with live state, applies changes only when they differ, and watches for drift. Benefits: no external cluster access, pull-based security, automatic drift detection, self-healing, and an audit trail in Git.]

Part 3: Complete GitOps Flow

End-to-End Workflow

[Mermaid flowchart: a developer updates deployment.yaml (image: myapp:v2.0, replicas: 5), commits, and pushes to the k8s-configs repository. ArgoCD learns of the change via webhook or its 3-minute poll, compares Git state against the live cluster, and marks the application OutOfSync when they differ. With auto-sync enabled (or after a manual "Sync" in the UI), it applies the changes, Kubernetes performs a rolling update, and ArgoCD runs a health-check loop against the new pods. On timeout the sync is marked failed and, if auto-rollback is enabled, the previous Git commit is reverted to trigger a new sync; on success the app is Synced and Healthy. Continuous monitoring then watches for manual drift (e.g., a kubectl edit): with self-healing enabled the change is reverted so the cluster matches Git again, otherwise the team is notified and decides whether to update Git instead.]
Part 4: ArgoCD Sync Process

Detailed Sync Flow

[Mermaid sequence diagram: the developer pushes an updated deployment.yaml (image: myapp:v2.0). ArgoCD is notified by webhook or its polling loop, fetches the latest commit, parses the manifests, and queries the Kubernetes API for the live resources (image: v1.0, replicas: 3). The diff marks the app OutOfSync; auto-sync (or a manual "Sync" click) applies the manifests, Kubernetes creates a new ReplicaSet and rolls pods over one at a time with readiness probes, and ArgoCD reports Synced and Healthy once 5/5 pods run v2.0 and notifies the developer. Later, someone manually scales the deployment to 10 replicas; ArgoCD detects the drift against Git's 5 replicas, and with self-healing enabled it re-applies the Git state, otherwise it alerts the developer about the manual change.]
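At its core, the agent in that diagram runs a reconcile loop: fetch desired state, read live state, diff, apply. The Go sketch below shows the shape of that loop under stated assumptions (hypothetical `State` type and stub functions; real controllers like ArgoCD and Flux are far more involved):

```go
package main

import (
	"fmt"
	"time"
)

// State is a simplified desired/live state: resource name -> image tag.
type State map[string]string

// fetchDesired would read manifests from Git; fetchLive would query the
// Kubernetes API. Both are stubbed here for illustration.
func fetchDesired() State { return State{"myapp": "v2.0"} }
func fetchLive() State    { return State{"myapp": "v1.0"} }

// apply would run the equivalent of `kubectl apply` for one resource.
func apply(name, image string) {
	fmt.Printf("apply: %s -> %s\n", name, image)
}

// reconcile diffs desired vs live and applies only the differences,
// which is the "continuously reconciled" GitOps principle: both a new
// version in Git and manual drift in the cluster converge the same way.
func reconcile() {
	desired, live := fetchDesired(), fetchLive()
	for name, want := range desired {
		if got := live[name]; got != want {
			apply(name, want)
		}
	}
}

func main() {
	ticker := time.NewTicker(3 * time.Minute) // ArgoCD-style poll interval
	defer ticker.Stop()
	reconcile()
	for range ticker.C {
		reconcile()
	}
}
```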
Part 5: GitOps Repository Structure

Recommended Directory Layout

```
gitops-repo/
├── apps/
│   ├── production/
│   │   ├── myapp/
│   │   │   ├── deployment.yaml
│   │   │   ├── service.yaml
│   │   │   ├── ingress.yaml
│   │   │   └── kustomization.yaml
│   │   └── database/
│   │       ├── statefulset.yaml
│   │       └── service.yaml
│   ├── staging/
│   │   └── myapp/
│   │       ├── deployment.yaml
│   │       ├── service.yaml
│   │       └── kustomization.yaml
│   └── dev/
│       └── myapp/
│           └── ...
├── infrastructure/
│   ├── namespaces/
│   │   ├── production.yaml
│   │   ├── staging.yaml
│   │   └── dev.yaml
│   ├── ingress-controller/
│   │   └── nginx-ingress.yaml
│   └── monitoring/
│       ├── prometheus/
│       └── grafana/
├── argocd/
│   ├── applications/
│   │   ├── myapp-production.yaml
│   │   ├── myapp-staging.yaml
│   │   └── infrastructure.yaml
│   └── projects/
│       └── default-project.yaml
└── README.md
```

ArgoCD Application Definition

```yaml
# argocd/applications/myapp-production.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp-production
  namespace: argocd
spec:
  # Where the app is defined in Git
  source:
    repoURL: https://github.com/myorg/gitops-repo
    targetRevision: main
    path: apps/production/myapp

  # Where to deploy
  destination:
    server: https://kubernetes.default.svc
    namespace: production

  # Sync policy
  syncPolicy:
    automated:
      prune: true       # Delete resources not in Git
      selfHeal: true    # Revert manual changes
      allowEmpty: false
    syncOptions:
      - CreateNamespace=true
      - PrunePropagationPolicy=foreground
      - PruneLast=true
    retry:
      limit: 5
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 3m

  # Health assessment
  ignoreDifferences:
    - group: apps
      kind: Deployment
      jsonPointers:
        - /spec/replicas  # Ignore HPA changes

  # Notifications
  notifications:
    - when: on-sync-succeeded
      destination: slack
    - when: on-sync-failed
      destination: pagerduty
```

Part 6: Drift Detection and Self-Healing

Drift Handling Flow

[Mermaid flowchart: ArgoCD compares Git desired state with cluster live state every ~3 minutes. When states match, the app is in sync and no action is needed. When they differ, the drift is classified: a manual kubectl edit (e.g., replicas changed from 5 to 10), a new resource added outside Git, or a resource deleted from the cluster that Git still expects. With self-healing enabled the edit is reverted to the Git state and a deleted resource is recreated from Git; with prune enabled the extra resource is deleted. With those options off, ArgoCD only alerts, and the team decides whether to update Git to match the cluster or revert the cluster to Git. Corrected drift is logged and monitoring continues.]
Part 7: GitOps Workflow Best Practices

Git Branching Strategy

[Mermaid gitGraph: main carries the initial infrastructure and myapp v1.0; a feature/upgrade-v2 branch updates the app to v2.0 and adds new config while a security hotfix lands on main and is merged back into the branch. The PR merge to main is tagged "Deploy to staging", and once staging is validated a follow-up commit is tagged "Deploy to prod".]

Approval Process

```yaml
# PR approval workflow
name: GitOps PR Validation

on:
  pull_request:
    paths:
      - 'apps/**'
      - 'infrastructure/**'

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Validate Kubernetes YAML
        run: |
          kubeval apps/**/*.yaml
          kustomize build apps/production/myapp | kubeval -
      - name: Dry-run in staging
        run: |
          kubectl apply --dry-run=server -k apps/staging/myapp
      - name: Security scan
        run: |
          kubesec scan apps/production/myapp/deployment.yaml
      - name: Policy check
        run: |
          conftest test apps/production/myapp/*.yaml

  require-approval:
    needs: validate
    runs-on: ubuntu-latest
    steps:
      - name: Check required approvals
        if: contains(github.event.pull_request.files, 'apps/production')
        run: |
          # Require 2 approvals for production changes
          APPROVALS=$(gh pr view ${{ github.event.pull_request.number }} --json reviews -q '.reviews | length')
          if [ "$APPROVALS" -lt 2 ]; then
            echo "Production changes require 2 approvals"
            exit 1
          fi
```

Part 8: Comparison Table

GitOps Tools Comparison

| Feature | ArgoCD | Flux | Jenkins X |
| --- | --- | --- | --- |
| Architecture | Controller + UI | Set of controllers | Full platform |
| UI | Rich web UI | No UI (CLI only) | Web UI |
| Multi-cluster | ✅ Native support | ✅ Via Flux controllers | ✅ Supported |
| Helm support | ✅ Native | ✅ Via Helm controller | ✅ Native |
| Kustomize support | ✅ Native | ✅ Via Kustomize controller | ✅ Supported |
| SSO/RBAC | ✅ Built-in | ❌ Use K8s RBAC | ✅ Built-in |
| Notifications | ✅ Slack, email, webhook | ✅ Via providers | ✅ Various channels |
| Drift detection | ✅ Visual in UI | ✅ CLI/metrics | ✅ Supported |
| Learning curve | Medium | Low | High |
| Best for | Teams wanting UI | GitOps purists | Full CI/CD platform |

Conclusion

GitOps provides: ...

    January 23, 2025 · 9 min · Rafiul Alam

    Kubernetes Pod Lifecycle: Pending → Running → Succeeded

Introduction

Kubernetes Pods are the smallest deployable units in Kubernetes, representing one or more containers that share resources. Understanding the Pod lifecycle is crucial for debugging, monitoring, and managing applications in Kubernetes. This guide visualizes the complete Pod lifecycle:

- Pod Creation: from YAML manifest to scheduling
- State Transitions: Pending → Running → Succeeded/Failed
- Init Containers: pre-application setup
- Container Restart Policies: how Kubernetes handles failures
- Termination: graceful shutdown process

Part 1: Pod Lifecycle Overview

Complete Pod State Machine

[Mermaid state diagram: a created Pod starts in Pending (accepted by the cluster; waiting for scheduling, pulling images, running init containers, creating the container runtime) and moves to Running once all containers start, or to Failed on scheduling failure, image pull failure, or invalid config. Running Pods serve traffic with active health checks, and containers restart in place per restartPolicy (Always/OnFailure). A Running Pod transitions to Succeeded when all containers exit 0 (restartPolicy Never/OnFailure, e.g., a completed Job/CronJob) and to Failed on non-zero exit codes, OOMKills, exceeded restart limits, or node failure. A delete request moves Running to Terminating: SIGTERM is sent, the Pod is removed from endpoints, and the grace period runs; a graceful shutdown ends in Succeeded, while force termination after the grace period ends in Failed. Succeeded and Failed Pods are then cleaned up.]

Pod Creation to Running Flow

[Mermaid flowchart: kubectl apply sends the manifest to the API server, which validates the YAML and writes to etcd. If the scheduler finds no suitable node (insufficient resources, node selector mismatch, taints/tolerations) the Pod stays Pending with reason Unschedulable; otherwise spec.nodeName is set and the kubelet on the target node takes over. Image pull failures leave the Pod Pending with ImagePullBackOff (missing image, registry auth failure, network issues). Init containers, if defined, run sequentially, and any failure shows as Init:Error or Init:CrashLoopBackOff. The main containers are then created with networking and volume mounts, a startup probe (if defined) must pass, with repeated failures becoming CrashLoopBackOff, and the Pod reaches Running with liveness and readiness probes active and is added to Service endpoints to receive traffic.]
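The startup, readiness, and liveness probes in that flow are typically just HTTP endpoints your application exposes. Here is a minimal, illustrative Go server; the /healthz and /readyz paths are conventional names assumed for the sketch, not mandated by Kubernetes:

```go
package main

import (
	"net/http"
	"sync/atomic"
)

var ready atomic.Bool // flipped on once dependencies are warmed up

func main() {
	// Liveness: the process is alive and responding. A failing liveness
	// probe makes the kubelet restart the container.
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	// Readiness: the pod may receive traffic. Failing readiness removes
	// the pod from Service endpoints without restarting it.
	http.HandleFunc("/readyz", func(w http.ResponseWriter, r *http.Request) {
		if !ready.Load() {
			http.Error(w, "warming up", http.StatusServiceUnavailable)
			return
		}
		w.WriteHeader(http.StatusOK)
	})

	// Simulate initialization (DB connections, caches) completing.
	go func() { ready.Store(true) }()

	http.ListenAndServe(":8080", nil)
}
```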
Part 2: Pod Creation Sequence

API Server to Kubelet Communication

[Mermaid sequence diagram: kubectl apply submits the Pod spec; the API server validates required fields, resource limits, and security context, then writes the Pod (status Pending, empty nodeName) to etcd and acknowledges creation. The scheduler watches for unscheduled Pods, scores nodes on available CPU/memory, affinity rules, and taints/tolerations, and binds the Pod to the best node (e.g., node-1) by updating spec.nodeName. The kubelet on that node picks up the spec, has the container runtime pull the image (e.g., nginx:1.21) from the registry, creates and starts the container, and reports Phase: Running with ready containerStatuses back to the API server, which persists the status to etcd. The kubelet then runs startup, readiness, and liveness probes continuously.]

Part 3: Init Containers

Init containers run before app containers and must complete successfully before the main containers start. ...

    January 23, 2025 · 11 min · Rafiul Alam

    Monitoring & Alerting: Metrics to Action Flow

Introduction

Effective monitoring and alerting are critical for maintaining reliable systems. Without proper observability, you're flying blind when issues occur in production. This guide visualizes the complete monitoring and alerting flow:

- Metrics Collection: from instrumentation to storage
- Alert Evaluation: when metrics cross thresholds
- Notification Routing: getting alerts to the right people
- Incident Response: from alert to resolution
- The Three Pillars: metrics, logs, and traces

Part 1: Complete Monitoring & Alerting Flow

End-to-End Overview

[Mermaid flowchart: applications expose /metrics endpoints that a Prometheus server scrapes every 15s into time-series storage. Alert rules (error rate > 5%, p95 latency > 500ms, uptime < 99%) are evaluated every minute; firing alerts go to Alertmanager, which groups similar alerts, deduplicates, applies silences, and throttles (rate limiting, grouping window, repeat interval) before routing by severity: critical to PagerDuty and the on-call engineer, warning to the team Slack channel, info to an email distribution list. The on-call engineer investigates via dashboards, logs, and traces, applies a fix (deploy patch, scale resources, restart service), and the alert resolves once metrics return to normal.]

Part 2: Metrics Collection Process

Prometheus Scrape Flow

[Mermaid sequence diagram: the running application increments counters and records histograms in memory and serves them on /metrics. Every 15 seconds Prometheus issues an HTTP GET, parses the exposition format (e.g., http_requests_total{method="GET",status="200"} 1523), attaches job/instance labels and a timestamp, and appends the samples to its time-series database, where data is compressed and retained for 15 days. Grafana issues PromQL queries such as rate(http_requests_total[5m]); Prometheus fetches the raw points for the window, computes the rate (Δ value / Δ time), and returns values for the dashboard to render.]
Metrics Instrumentation Example

```go
package main

import (
	"fmt"
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// Define metrics
var (
	// Counter - only goes up
	httpRequestsTotal = prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "http_requests_total",
			Help: "Total number of HTTP requests",
		},
		[]string{"method", "endpoint", "status"},
	)

	// Histogram - for request durations
	httpRequestDuration = prometheus.NewHistogramVec(
		prometheus.HistogramOpts{
			Name:    "http_request_duration_seconds",
			Help:    "HTTP request duration in seconds",
			Buckets: prometheus.DefBuckets, // 0.005, 0.01, 0.025, 0.05, ...
		},
		[]string{"method", "endpoint"},
	)

	// Gauge - current value (can go up or down)
	activeConnections = prometheus.NewGauge(
		prometheus.GaugeOpts{
			Name: "active_connections",
			Help: "Number of active connections",
		},
	)
)

func init() {
	// Register metrics with Prometheus
	prometheus.MustRegister(httpRequestsTotal)
	prometheus.MustRegister(httpRequestDuration)
	prometheus.MustRegister(activeConnections)
}

func trackMetrics(method, endpoint string, statusCode int, duration time.Duration) {
	// Increment request counter
	httpRequestsTotal.WithLabelValues(
		method,
		endpoint,
		fmt.Sprintf("%d", statusCode),
	).Inc()

	// Record request duration
	httpRequestDuration.WithLabelValues(
		method,
		endpoint,
	).Observe(duration.Seconds())
}

// processRequest stands in for the real handler logic
// (stub added so the example compiles).
func processRequest(w http.ResponseWriter, r *http.Request) {
	w.Write([]byte("ok"))
}

func handleRequest(w http.ResponseWriter, r *http.Request) {
	start := time.Now()

	// Increment active connections
	activeConnections.Inc()
	defer activeConnections.Dec()

	// Your application logic here
	processRequest(w, r)

	// Track metrics
	duration := time.Since(start)
	trackMetrics(r.Method, r.URL.Path, http.StatusOK, duration)
}

func main() {
	// Expose /metrics endpoint for Prometheus
	http.Handle("/metrics", promhttp.Handler())

	// Application endpoints
	http.HandleFunc("/api/users", handleRequest)
	http.ListenAndServe(":8080", nil)
}
```

Part 3: Alert Evaluation and Firing

Alert Rule Decision Tree

[Mermaid flowchart: Prometheus evaluates each alert rule's PromQL expression every minute. No matching data means the alert is Inactive with no notification. If the condition is false but the alert had been firing, a resolved notification is sent. If the condition is true, the alert sits in Pending until it has held for the rule's `for` duration (e.g., 5 minutes), then moves to Firing and is sent to Alertmanager. Already-firing alerts respect repeat_interval (e.g., every 4 hours) so reminders are sent without spamming; new alerts notify immediately. Alertmanager then routes the alert based on its labels.]
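The Pending → Firing logic above boils down to "the condition must stay true for the whole `for` window." This toy Go sketch mimics that evaluation over a series of one-minute ticks (all names are illustrative, not Prometheus internals):

```go
package main

import "fmt"

// evaluate walks per-minute condition results and reports the alert
// state after each tick: the alert fires only once the condition has
// been continuously true for forMinutes evaluations.
func evaluate(condition []bool, forMinutes int) []string {
	states := make([]string, len(condition))
	trueFor := 0
	for i, ok := range condition {
		if !ok {
			trueFor = 0
			states[i] = "inactive"
			continue
		}
		trueFor++
		if trueFor >= forMinutes {
			states[i] = "firing"
		} else {
			states[i] = "pending"
		}
	}
	return states
}

func main() {
	// Error rate above threshold, with a one-minute blip at index 3.
	cond := []bool{false, true, true, false, true, true, true, true}
	fmt.Println(evaluate(cond, 3))
	// [inactive pending pending inactive pending pending firing firing]
	// The blip resets the window, which is exactly how `for` suppresses
	// flapping alerts.
}
```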
Alert Rule Configuration

```yaml
# prometheus-rules.yaml
groups:
  - name: application_alerts
    interval: 60s  # Evaluate every 60 seconds
    rules:
      # High Error Rate Alert
      - alert: HighErrorRate
        expr: |
          (
            rate(http_requests_total{status=~"5.."}[5m])
            /
            rate(http_requests_total[5m])
          ) > 0.05
        for: 5m  # Must be true for 5 minutes before firing
        labels:
          severity: critical
          team: backend
        annotations:
          summary: "High error rate on {{ $labels.instance }}"
          description: "Error rate is {{ $value | humanizePercentage }} (threshold: 5%)"
          dashboard: "https://grafana.example.com/d/app"

      # High Latency Alert
      - alert: HighLatency
        expr: |
          histogram_quantile(0.95,
            rate(http_request_duration_seconds_bucket[5m])
          ) > 0.5
        for: 10m
        labels:
          severity: warning
          team: backend
        annotations:
          summary: "High latency on {{ $labels.instance }}"
          description: "P95 latency is {{ $value }}s (threshold: 0.5s)"

      # Service Down Alert
      - alert: ServiceDown
        expr: up{job="myapp"} == 0
        for: 1m
        labels:
          severity: critical
          team: sre
        annotations:
          summary: "Service {{ $labels.instance }} is down"
          description: "Cannot scrape metrics from {{ $labels.instance }}"

      # Memory Usage Alert
      - alert: HighMemoryUsage
        expr: |
          (
            container_memory_usage_bytes{pod=~"myapp-.*"}
            /
            container_spec_memory_limit_bytes{pod=~"myapp-.*"}
          ) > 0.90
        for: 5m
        labels:
          severity: warning
          team: platform
        annotations:
          summary: "High memory usage on {{ $labels.pod }}"
          description: "Memory usage is {{ $value | humanizePercentage }} of limit"
```

Part 4: Alert Routing and Notification

Alertmanager Processing Flow

[Mermaid flowchart: an incoming alert is first checked against inhibition rules (e.g., a firing NodeDown suppresses all pod alerts on that node) and active silences (manual suppression during maintenance windows). Surviving alerts are grouped by cluster and alertname, held for group_wait (default 30s) to batch related alerts into a single notification, then routed down the tree: severity critical for the backend team pages PagerDuty with escalation if unacknowledged in 5 minutes, warnings post to the team's Slack channel with an @here mention, and everything else falls through to the email distribution list. Each notification tracks repeat_interval (e.g., 4 hours) to resend reminders while still firing, and a resolved notification is sent when the alert clears.]
Alertmanager Configuration

```yaml
# alertmanager.yaml
global:
  resolve_timeout: 5m
  slack_api_url: 'https://hooks.slack.com/services/XXX'
  pagerduty_url: 'https://events.pagerduty.com/v2/enqueue'

# Inhibition rules - suppress alerts when higher priority alert is firing
inhibit_rules:
  # If node is down, don't alert on pods on that node
  - source_match:
      alertname: 'NodeDown'
    target_match:
      alertname: 'PodDown'
    equal: ['node']

  # If entire cluster is down, don't alert on individual services
  - source_match:
      severity: 'critical'
      alertname: 'ClusterDown'
    target_match_re:
      severity: 'warning|info'
    equal: ['cluster']

# Route tree - how to send alerts
route:
  receiver: 'default-email'
  group_by: ['alertname', 'cluster', 'service']
  group_wait: 30s       # Wait 30s to collect more alerts
  group_interval: 5m    # Send updates every 5m for grouped alerts
  repeat_interval: 4h   # Resend if still firing after 4h

  routes:
    # Critical alerts to PagerDuty
    - match:
        severity: critical
      receiver: 'pagerduty-critical'
      group_wait: 10s   # Page quickly for critical
      continue: true    # Also send to Slack
    - match:
        severity: critical
      receiver: 'slack-critical'

    # Warning alerts to Slack
    - match:
        severity: warning
      receiver: 'slack-warnings'
      group_wait: 1m

    # Team-specific routing
    - match:
        team: backend
      receiver: 'backend-team'
    - match:
        team: frontend
      receiver: 'frontend-team'

# Receivers - where to send alerts
receivers:
  - name: 'default-email'
    email_configs:
      - to: '[email protected]'
        headers:
          Subject: '{{ .GroupLabels.alertname }}: {{ .Status | toUpper }}'

  - name: 'pagerduty-critical'
    pagerduty_configs:
      - service_key: 'your-pagerduty-key'
        description: '{{ .GroupLabels.alertname }}: {{ .CommonAnnotations.summary }}'
        severity: 'critical'

  - name: 'slack-critical'
    slack_configs:
      - channel: '#alerts-critical'
        title: '🚨 CRITICAL: {{ .GroupLabels.alertname }}'
        text: |
          {{ range .Alerts }}
          *Alert:* {{ .Annotations.summary }}
          *Description:* {{ .Annotations.description }}
          *Severity:* {{ .Labels.severity }}
          *Dashboard:* {{ .Annotations.dashboard }}
          {{ end }}
        color: 'danger'
        send_resolved: true

  - name: 'slack-warnings'
    slack_configs:
      - channel: '#alerts-warning'
        title: '⚠️ WARNING: {{ .GroupLabels.alertname }}'
        text: '{{ range .Alerts }}{{ .Annotations.summary }}{{ end }}'
        color: 'warning'

  - name: 'backend-team'
    slack_configs:
      - channel: '#backend-alerts'
```

Part 5: Incident Response Workflow

From Alert to Resolution

[Mermaid sequence diagram: Alertmanager sends a critical HighErrorRate alert (service myapp, 12% errors) to PagerDuty, which pages the on-call engineer at 3 AM by phone, SMS, and push. The engineer acknowledges to stop escalation, opens an #incident-123 channel, and checks the Grafana dashboard: the spike started five minutes ago and only affects /api/payment. Error logs show database connection timeouts ("Cannot connect to db:5432"); kubectl reveals postgres-0 in CrashLoopBackOff, OOMKilled at its 2Gi memory limit with 8 restarts. The engineer raises the StatefulSet memory limit to 4Gi, watches the pod come back Ready, and confirms the error rate drops to 0.3% with latency back to baseline. The alert auto-resolves, the incident is closed with root cause (database OOM), fix (increased memory), and duration (23 minutes), and follow-up tasks are filed: memory alerts, query performance review, connection pooling.]
Part 5: Incident Response Workflow

From Alert to Resolution

%%{init: {'theme':'dark', 'themeVariables': {'primaryTextColor':'#e5e7eb','secondaryTextColor':'#e5e7eb','tertiaryTextColor':'#e5e7eb','textColor':'#e5e7eb','nodeTextColor':'#e5e7eb','edgeLabelText':'#e5e7eb','clusterTextColor':'#e5e7eb','actorTextColor':'#e5e7eb'}}}%% sequenceDiagram participant Alert as Alert System participant PD as PagerDuty participant Eng as On-Call Engineer participant Dash as Grafana Dashboard participant Logs as Log Aggregator participant Trace as Tracing System participant K8s as Kubernetes participant Incident as Incident Channel Alert->>PD: 🚨 Critical AlertHighErrorRate firingService: myappError rate: 12% PD->>Eng: 📱 Phone call + SMS + PushIncident created Note over Eng: Engineer woken up at 3 AM 😴 Eng->>PD: Acknowledge incidentStop escalation Eng->>Incident: Create #incident-123Post initial status Note over Eng: Open laptopStart investigation Eng->>Dash: Open dashboardCheck error rate graph Dash-->>Eng: Graph shows spikeStarted 5 minutes agoOnly affects /api/payment Eng->>Logs: Query logs:level=error ANDpath=/api/payment Logs-->>Eng: Errors:"Database connection timeout""Cannot connect to db:5432" Note over Eng: Database issue suspected Eng->>K8s: kubectl get pods -n database K8s-->>Eng: postgres-0: CrashLoopBackOffRestart count: 8 Eng->>K8s: kubectl describe pod postgres-0 K8s-->>Eng: Event: Liveness probe failedEvent: OOMKilledMemory: 2.1Gi / 2Gi limit Note over Eng: Database OOMKilled!Need more memory Eng->>Incident: Update: Database OOMAction: Increasing memory limit Eng->>K8s: kubectl edit statefulset postgresChange: 2Gi → 4Gi memory K8s-->>Eng: Statefulset updated Note over K8s: Rolling restartpostgres-0 recreatedwith 4Gi memory Eng->>K8s: kubectl get pods -n database -wWatch pod status K8s-->>Eng: postgres-0: Running ✓Ready: 1/1 Note over Eng: Wait for metricsto normalize Eng->>Dash: Refresh dashboard Dash-->>Eng: Error rate: 0.3% ✓Latency: normal ✓Back to baseline Note over Alert: Metrics normalizedAlert conditions false Alert->>PD: ✅ Alert resolved PD->>Eng: Incident auto-resolved Eng->>Incident: Incident resolved ✓Root cause: DB OOMFix: Increased memoryDuration: 23 minutes Eng->>Eng: Create follow-up tasks:1. Set memory alerts2. Review query performance3. Consider connection pooling Note over Eng: Back to sleep 😴Post-mortem tomorrow
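The fix in this walkthrough is applied imperatively with `kubectl edit`, which is fine mid-incident but leaves the cluster out of sync with version control. A sketch of the equivalent declarative change to commit afterwards (the StatefulSet layout is assumed from the diagram, not shown in the article):

```yaml
# postgres-statefulset.yaml (excerpt, hypothetical) - the limit that caused the OOMKill, raised
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
  namespace: database
spec:
  template:
    spec:
      containers:
        - name: postgres
          resources:
            requests:
              memory: "4Gi"
            limits:
              memory: "4Gi"   # was 2Gi; pod was OOMKilled at 2.1Gi usage
```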
Part 6: The Three Pillars of Observability

Metrics, Logs, and Traces Integration

%%{init: {'theme':'dark', 'themeVariables': {'primaryTextColor':'#e5e7eb','secondaryTextColor':'#e5e7eb','tertiaryTextColor':'#e5e7eb','textColor':'#e5e7eb','nodeTextColor':'#e5e7eb','edgeLabelText':'#e5e7eb','clusterTextColor':'#e5e7eb','actorTextColor':'#e5e7eb'}}}%% flowchart TD Issue([Production Issue Detected]) --> Which{Which pillarto start with?} Which --> Metrics[1️⃣ METRICSWhat is broken?] Which --> Logs[2️⃣ LOGSWhy is it broken?] Which --> Traces[3️⃣ TRACESWhere is it broken?] Metrics --> M1[Check Grafana- Error rate spiking?- Latency increased?- Which service?- Which endpoint?] M1 --> M2[Identify:✓ Service: payment-api✓ Endpoint: /checkout✓ Metric: p95 latency 5000ms✓ Time: Started 10m ago] M2 --> UseTrace{Need to seerequest flow?} UseTrace -->|Yes| Traces Logs --> L1[Search logs in ELK/Lokiservice=payment-api ANDpath=/checkout ANDlevel=error] L1 --> L2[Find errors:"Database query timeout""SELECT * FROM ordersWHERE user_id=123execution time: 5200ms"] L2 --> L3[Context found:✓ Specific query is slow✓ Affecting user_id=123✓ No index on user_id?] L3 --> UseMetrics{Verify withmetrics?} UseMetrics -->|Yes| Metrics Traces --> T1[Open Jaeger/TempoSearch trace_id orservice=payment-api] T1 --> T2[View distributed trace:┌─ payment-api: 5100ms│ ├─ auth-svc: 20ms ✓│ ├─ inventory-svc: 30ms ✓│ └─ database: 5000ms ❌│ └─ query: SELECT * FROM orders] T2 --> T3[Identify bottleneck:✓ Database query is slow✓ Affects only /checkout✓ Other services healthy] T3 --> UseLogs{Need errordetails?} UseLogs -->|Yes| Logs M2 --> RootCause[Combine insights:METRICS: Latency spike on /checkoutLOGS: Specific query timeoutTRACES: Database is bottleneck] L3 --> RootCause T3 --> RootCause RootCause --> Fix[Root Cause Found:Missing database indexon orders.user_idFix: CREATE INDEXidx_user_id ON orders] style Metrics fill:#1e3a8a,stroke:#3b82f6 style Logs fill:#78350f,stroke:#f59e0b style Traces fill:#064e3b,stroke:#10b981 style RootCause fill:#064e3b,stroke:#10b981 style Fix fill:#064e3b,stroke:#10b981

When to Use Each Pillar

| Pillar | Best For | Example Questions | Tools |
|---|---|---|---|
| Metrics | Detecting issues, trends | Is the service up? What’s the error rate? Is latency increasing? | Prometheus, Grafana, Datadog |
| Logs | Understanding what happened | What was the error message? Which user was affected? What was the input? | ELK, Loki, Splunk |
| Traces | Finding bottlenecks | Which service is slow? Where is the delay? How do requests flow? | Jaeger, Tempo, Zipkin |

Part 7: Setting Up Effective Alerts

Alert Quality Framework

%%{init: {'theme':'dark', 'themeVariables': {'primaryTextColor':'#e5e7eb','secondaryTextColor':'#e5e7eb','tertiaryTextColor':'#e5e7eb','textColor':'#e5e7eb','nodeTextColor':'#e5e7eb','edgeLabelText':'#e5e7eb','clusterTextColor':'#e5e7eb','actorTextColor':'#e5e7eb'}}}%% flowchart TD Start([New Alert Idea]) --> Question1{Does this requireimmediate action?} Question1 -->|No| Ticket[Create ticket insteadNot an alertReview during business hours] Question1 -->|Yes| Question2{Can it beautomated away?} Question2 -->|Yes| Automate[Build automationAuto-scalingAuto-healingSelf-recovery] Question2 -->|No| Question3{Is it actionable?} Question3 -->|No| Rethink[Rethink the alertWhat action shouldthe engineer take?If none, not an alert] Question3 -->|Yes| Question4{Is the signalclear?} Question4 -->|No| Refine[Refine the thresholdAdd 'for' durationAdjust sensitivityReduce false positives] Question4 -->|Yes| Question5{Provides enoughcontext?} Question5 -->|No| AddContext[Add context:- Dashboard link- Runbook link- Query to debug- Recent changes] Question5 -->|Yes| Question6{Correctseverity?} Question6 -->|No| Severity[Adjust severity:Critical = PageWarning = SlackInfo = Email] Question6 -->|Yes| GoodAlert[✅ Good Alert!- Actionable- Clear signal- Right severity- Good context] GoodAlert --> Deploy[Deploy alertMonitor for:- False positives- Alert fatigue- Resolution time] style Ticket fill:#1e3a8a,stroke:#3b82f6 style Automate fill:#064e3b,stroke:#10b981 style GoodAlert fill:#064e3b,stroke:#10b981 style Rethink fill:#7f1d1d,stroke:#ef4444

Part 8: Best Practices

DO’s and DON’Ts ✅ DO: ...
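Most of the boxes in the framework above map to concrete fields in a rule definition: `for` sharpens the signal, annotations carry the context, and the severity label picks the notification channel. A hypothetical "good alert" by these criteria (metric names and URLs are illustrative):

```yaml
# Example rule shaped by the framework above (illustrative values)
groups:
  - name: alert-quality
    rules:
      - alert: HighLatencyP95
        expr: histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket[5m])) by (le)) > 0.5
        for: 10m                    # 'for' duration filters transient spikes -> clear signal
        labels:
          severity: warning         # degraded but functional -> Slack, not a page
        annotations:
          summary: "p95 latency above 500ms for 10 minutes"
          runbook: "https://wiki.example.com/runbooks/high-latency"   # context: what to do
          dashboard: "https://grafana.example.com/d/latency"          # context: where to look
```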

    January 23, 2025 · 12 min · Rafiul Alam

    Multi-Environment Pipeline: Dev → Staging → Production

    Introduction Multi-environment pipelines enable safe, progressive deployment of code changes through isolated environments. Each environment serves a specific purpose in validating changes before they reach production users. This guide visualizes the multi-environment deployment flow: Environment Hierarchy: Dev → Staging → Production Environment Isolation: Separate configs, databases, resources Progressive Promotion: Automated testing at each stage Approval Gates: Manual checkpoints for production Configuration Management: Environment-specific settings Part 1: Multi-Environment Architecture Complete Environment Flow %%{init: {'theme':'dark', 'themeVariables': {'primaryTextColor':'#e5e7eb','secondaryTextColor':'#e5e7eb','tertiaryTextColor':'#e5e7eb','textColor':'#e5e7eb','nodeTextColor':'#e5e7eb','edgeLabelText':'#e5e7eb','clusterTextColor':'#e5e7eb','actorTextColor':'#e5e7eb'}}}%% flowchart TD Dev([👨‍💻 Developer]) --> LocalDev[Local DevelopmentLaptop/Docker DesktopFast iteration] LocalDev --> Push[git push origin feature/new-api] Push --> CI[CI Pipeline TriggeredBuild + Test + Lint] CI --> CIPass{CIPassed?} CIPass -->|No| FixLocal[❌ Fix locallyCheck logsRun tests] FixLocal -.-> LocalDev CIPass -->|Yes| FeatureBranch{Branchtype?} FeatureBranch -->|feature/*| DevEnv[🔧 Dev EnvironmentNamespace: devAuto-deploy on push] FeatureBranch -->|main| StagingEnv[🎯 Staging EnvironmentNamespace: stagingAuto-deploy on merge] subgraph DevEnvironment[Development Environment] DevEnv --> DevConfig[Configuration:- Debug mode ON- Verbose logging- Mock external APIs- Dev database- Minimal replicas: 1] DevConfig --> DevTest[Basic Tests:- Smoke tests- Health checks- Manual QA] DevTest --> DevDone[✅ Dev validatedReady for staging] end DevDone --> MergePR[Merge Pull Requestto main branch] MergePR --> StagingEnv subgraph StagingEnvironment[Staging Environment] StagingEnv --> StagingConfig[Configuration:- Production-like setup- Staging database- Real external APIs test- Replicas: 2-3- Resource limits] StagingConfig --> StagingTest[Comprehensive Tests:- Integration tests- E2E tests- Performance tests- Security scans] StagingTest --> StagingResult{All testspassed?} StagingResult -->|No| StagingFail[❌ Staging failedRollback stagingFix issues] StagingFail -.-> FixLocal StagingResult -->|Yes| StagingMonitor[Monitor staging:- Error rates- Performance metrics- User acceptance testing] StagingMonitor --> StagingReady[✅ Staging validatedReady for production] end StagingReady --> ApprovalGate{ManualApprovalRequired} ApprovalGate --> ReviewTeam[Team Lead Review:- Code changes- Test results- Risk assessment- Deployment timing] ReviewTeam --> Approved{Approved?} Approved -->|No| Rejected[❌ RejectedMore testing neededor wrong timing] Approved -->|Yes| ProdEnv[🚀 Production EnvironmentNamespace: productionManual trigger only] subgraph ProductionEnvironment[Production Environment] ProdEnv --> ProdConfig[Configuration:- Production settings- Production database- High availability- Replicas: 5-10- Strict resource limits- Auto-scaling enabled] ProdConfig --> ProdDeploy[Deployment Strategy:- Blue-green or- Canary or- Rolling update] ProdDeploy --> ProdHealth{Productionhealthy?} ProdHealth -->|No| AutoRollback[🚨 Auto-rollbackRevert to previousAlert on-call team] ProdHealth -->|Yes| ProdMonitor[Monitor Production:- Real user metrics- Error rates- Business KPIs- SLO compliance] ProdMonitor --> ProdStable{Stable for15 minutes?} ProdStable -->|No| AutoRollback ProdStable -->|Yes| Success[✅ Deployment Complete!New version liveMonitor 
continues] end style DevEnv fill:#064e3b,stroke:#10b981 style StagingEnv fill:#78350f,stroke:#f59e0b style ProdEnv fill:#1e3a8a,stroke:#3b82f6 style Success fill:#064e3b,stroke:#10b981 style StagingFail fill:#7f1d1d,stroke:#ef4444 style AutoRollback fill:#7f1d1d,stroke:#ef4444 style Rejected fill:#7f1d1d,stroke:#ef4444

Part 2: Environment Comparison

Environment Characteristics

%%{init: {'theme':'dark', 'themeVariables': {'primaryTextColor':'#e5e7eb','secondaryTextColor':'#e5e7eb','tertiaryTextColor':'#e5e7eb','textColor':'#e5e7eb','nodeTextColor':'#e5e7eb','edgeLabelText':'#e5e7eb','clusterTextColor':'#e5e7eb','actorTextColor':'#e5e7eb'}}}%% graph TB subgraph Local[🏠 Local Development] LocalProps[Properties:✓ Fast iteration✓ Developer's laptop✓ Docker Compose✓ Mock services✓ Hot reload enabled] LocalData[Data:- SQLite or local DB- Seed data- No real user data- Quick reset] LocalAccess[Access:- localhost only- No authentication- Debug tools enabled] end subgraph Dev[🔧 Development Environment] DevProps[Properties:✓ Shared team env✓ Kubernetes cluster✓ Continuous deployment✓ Latest features✓ Can be unstable] DevData[Data:- Dev database- Synthetic test data- Reset weekly- No PII] DevAccess[Access:- VPN required- Basic auth- All developers- Debug mode ON] end subgraph Staging[🎯 Staging Environment] StagingProps[Properties:✓ Production mirror✓ Same infrastructure✓ Pre-production testing✓ Stable builds only✓ Performance testing] StagingData[Data:- Staging database- Anonymized prod data- Or realistic test data- Refreshed monthly] StagingAccess[Access:- VPN required- OAuth/SSO- Developers + QA- Debug mode OFF] end subgraph Prod[🚀 Production Environment] ProdProps[Properties:✓ Live customer traffic✓ High availability✓ Auto-scaling✓ Disaster recovery✓ Maximum stability] ProdData[Data:- Production database- Real user data- Encrypted at rest- Regular backups] ProdAccess[Access:- Public internet- Full authentication- Limited admin access- Audit logging enabled] end Local --> |git push feature/*| Dev Dev --> |Merge to main| Staging Staging --> |Manual approval| Prod style Local fill:#064e3b,stroke:#10b981 style Dev fill:#064e3b,stroke:#10b981 style Staging fill:#78350f,stroke:#f59e0b style Prod fill:#1e3a8a,stroke:#3b82f6

Environment Configuration Matrix

| Aspect | Local | Dev | Staging | Production |
|---|---|---|---|---|
| Purpose | Development | Feature testing | Pre-production validation | Live users |
| Deployment | Manual | Auto on push | Auto on merge | Manual approval |
| Replicas | 1 | 1-2 | 2-3 | 5-10+ |
| Database | Local SQLite | Shared dev DB | Staging DB (prod-like) | Production DB |
| Resources | Minimal | Low | Medium (prod-like) | High |
| Monitoring | None | Basic | Full | Full + Alerts |
| Debug Mode | Yes | Yes | No | No |
| Logging Level | DEBUG | DEBUG | INFO | WARN/ERROR |
| External APIs | Mocked | Test endpoints | Test endpoints | Production endpoints |
| Data | Seed data | Synthetic | Anonymized | Real user data |
| Access | localhost | VPN + Basic auth | VPN + SSO | Public + Full auth |
| Uptime SLA | N/A | None | None | 99.9%+ |
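The replica and resource rows in this matrix can also be enforced at the namespace level rather than per deployment. A sketch using a ResourceQuota (the numbers are illustrative, not from the article):

```yaml
# k8s/staging/resource-quota.yaml (hypothetical) - caps the whole namespace
apiVersion: v1
kind: ResourceQuota
metadata:
  name: env-quota
  namespace: staging
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"      # generous headroom above the 2-3 staging replicas
```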
Part 3: Progressive Promotion Pipeline

Promotion Flow with Quality Gates

%%{init: {'theme':'dark', 'themeVariables': {'primaryTextColor':'#e5e7eb','secondaryTextColor':'#e5e7eb','tertiaryTextColor':'#e5e7eb','textColor':'#e5e7eb','nodeTextColor':'#e5e7eb','edgeLabelText':'#e5e7eb','clusterTextColor':'#e5e7eb','actorTextColor':'#e5e7eb'}}}%% flowchart LR subgraph LocalStage[Local Stage] L1[Write Code] L2[Run Unit Tests] L3[Manual Testing] L1 --> L2 --> L3 end subgraph DevStage[Dev Stage] D1[Auto Deploy] D2[Smoke Tests] D3{TestsPass?} D4[Dev Validated ✓] D1 --> D2 --> D3 D3 -->|Yes| D4 D3 -->|No| D5[❌ Fix] D5 -.-> L1 end subgraph StagingStage[Staging Stage] S1[Auto Deploy] S2[Integration Tests] S3[E2E Tests] S4[Performance Tests] S5{All Pass?} S6[Staging Validated ✓] S1 --> S2 --> S3 --> S4 --> S5 S5 -->|Yes| S6 S5 -->|No| S7[❌ Fix] S7 -.-> L1 end subgraph ApprovalStage[Approval Gate] A1[Create Release] A2[Code Review] A3[Change Advisory] A4{Approved?} A1 --> A2 --> A3 --> A4 A4 -->|No| A5[❌ Rejected] A5 -.-> L1 end subgraph ProdStage[Production Stage] P1[Manual Deploy] P2[Canary 10%] P3{Healthy?} P4[Increase to 50%] P5{Healthy?} P6[Complete 100%] P7[Monitor] P8[Success ✓] P1 --> P2 --> P3 P3 -->|Yes| P4 --> P5 P5 -->|Yes| P6 --> P7 --> P8 P3 -->|No| P9[🚨 Rollback] P5 -->|No| P9 end L3 --> |git push| D1 D4 --> |Merge PR| S1 S6 --> A1 A4 -->|Yes| P1 style L3 fill:#064e3b,stroke:#10b981 style D4 fill:#064e3b,stroke:#10b981 style S6 fill:#064e3b,stroke:#10b981 style P8 fill:#064e3b,stroke:#10b981 style D5 fill:#7f1d1d,stroke:#ef4444 style S7 fill:#7f1d1d,stroke:#ef4444 style P9 fill:#7f1d1d,stroke:#ef4444

Part 4: Environment-Specific Configuration

Configuration Management Strategy

%%{init: {'theme':'dark', 'themeVariables': {'primaryTextColor':'#e5e7eb','secondaryTextColor':'#e5e7eb','tertiaryTextColor':'#e5e7eb','textColor':'#e5e7eb','nodeTextColor':'#e5e7eb','edgeLabelText':'#e5e7eb','clusterTextColor':'#e5e7eb','actorTextColor':'#e5e7eb'}}}%% flowchart TD Start([Application needs config]) --> Method{ConfigMethod?} Method --> EnvVars[Environment Variables] Method --> ConfigMaps[Kubernetes ConfigMaps] Method --> Secrets[Kubernetes Secrets] EnvVars --> EnvExample[Examples:- NODE_ENV=production- LOG_LEVEL=info- FEATURE_FLAGS=true] ConfigMaps --> CMExample[Examples:- app-config.yaml- nginx.conf- application.properties] Secrets --> SecretExample[Examples:- DATABASE_PASSWORD- API_KEYS- TLS certificates] EnvExample --> Override{Override perenvironment?} CMExample --> Override SecretExample --> Override Override --> DevOverride[Dev Environment:DEBUG=trueDB_HOST=dev-dbREPLICAS=1CACHE_TTL=60s] Override --> StagingOverride[Staging Environment:DEBUG=falseDB_HOST=staging-dbREPLICAS=3CACHE_TTL=300s] Override --> ProdOverride[Production Environment:DEBUG=falseDB_HOST=prod-dbREPLICAS=10CACHE_TTL=600s] DevOverride --> Inject[Inject at deployment:kubectl apply -f k8s/dev/- deployment.yaml- configmap.yaml- secrets.yaml] StagingOverride --> Inject ProdOverride --> Inject style EnvVars fill:#1e3a8a,stroke:#3b82f6 style ConfigMaps fill:#1e3a8a,stroke:#3b82f6 style Secrets fill:#7f1d1d,stroke:#ef4444

Kubernetes Configuration Example

```yaml
# k8s/base/deployment.yaml (Common base)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest   # Overridden per environment
          ports:
            - containerPort: 8080
          envFrom:
            - configMapRef:
                name: myapp-config
            - secretRef:
                name: myapp-secrets
          resources:            # Overridden per environment
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "200m"
---
# k8s/dev/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
  namespace: dev
data:
  NODE_ENV: "development"
  LOG_LEVEL: "debug"
  DATABASE_HOST: "postgres.dev.svc.cluster.local"
  REDIS_HOST: "redis.dev.svc.cluster.local"
  FEATURE_NEW_UI: "true"
  FEATURE_BETA_API: "true"
---
# k8s/staging/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
  namespace: staging
data:
  NODE_ENV: "staging"
  LOG_LEVEL: "info"
  DATABASE_HOST: "postgres.staging.svc.cluster.local"
  REDIS_HOST: "redis.staging.svc.cluster.local"
  FEATURE_NEW_UI: "true"
  FEATURE_BETA_API: "false"
---
# k8s/production/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
  namespace: production
data:
  NODE_ENV: "production"
  LOG_LEVEL: "warn"
  DATABASE_HOST: "postgres.production.svc.cluster.local"
  REDIS_HOST: "redis.production.svc.cluster.local"
  FEATURE_NEW_UI: "false"   # Gradual rollout
  FEATURE_BETA_API: "false"
---
# k8s/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: dev
resources:
  - ../base/deployment.yaml
  - configmap.yaml
  - secrets.yaml
images:
  - name: myapp
    newTag: dev-abc123
replicas:
  - name: myapp
    count: 1
patches:
  - patch: |-
      - op: replace
        path: /spec/template/spec/containers/0/resources/requests/memory
        value: 128Mi
      - op: replace
        path: /spec/template/spec/containers/0/resources/limits/memory
        value: 256Mi
    target:
      kind: Deployment
      name: myapp
---
# k8s/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: production
resources:
  - ../base/deployment.yaml
  - configmap.yaml
  - secrets.yaml
images:
  - name: myapp
    newTag: v1.2.3
replicas:
  - name: myapp
    count: 10
patches:
  - patch: |-
      - op: replace
        path: /spec/template/spec/containers/0/resources/requests/memory
        value: 512Mi
      - op: replace
        path: /spec/template/spec/containers/0/resources/limits/memory
        value: 1Gi
      - op: replace
        path: /spec/template/spec/containers/0/resources/requests/cpu
        value: 500m
      - op: replace
        path: /spec/template/spec/containers/0/resources/limits/cpu
        value: 1000m
    target:
      kind: Deployment
      name: myapp
```
Part 5: Database Migration Strategy

Multi-Environment Database Flow

%%{init: {'theme':'dark', 'themeVariables': {'primaryTextColor':'#e5e7eb','secondaryTextColor':'#e5e7eb','tertiaryTextColor':'#e5e7eb','textColor':'#e5e7eb','nodeTextColor':'#e5e7eb','edgeLabelText':'#e5e7eb','clusterTextColor':'#e5e7eb','actorTextColor':'#e5e7eb'}}}%% sequenceDiagram participant Dev as Developer participant DevDB as Dev Database participant StagingDB as Staging Database participant ProdDB as Production Database participant Migration as Migration Tool Note over Dev: Write migration:001_add_users_table.sql Dev->>DevDB: Run migration locallyCREATE TABLE users... DevDB-->>Dev: Migration applied ✓ Dev->>Dev: Test applicationwith new schema Dev->>Dev: git push feature/add-users Note over DevDB: CI/CD Pipeline triggered Dev->>DevDB: Auto-run migrationsin dev environment DevDB-->>Dev: Dev DB updated ✓ Note over Dev: Create Pull RequestMerge to main Dev->>StagingDB: Trigger staging deployment Note over Migration,StagingDB: Pre-deployment hook Migration->>StagingDB: Backup databasepg_dump > backup.sql Migration->>StagingDB: Run migrations001_add_users_table.sql StagingDB-->>Migration: Migration applied ✓ Note over StagingDB: Deploy applicationTest with new schema alt Migration Failed Migration->>StagingDB: Rollback migrationRestore from backup StagingDB-->>Migration: Rolled back end Note over Dev: Manual approvalfor production Dev->>ProdDB: Trigger production deployment Note over Migration,ProdDB: Pre-deployment steps Migration->>ProdDB: Full database backupSnapshot created Migration->>ProdDB: Check migration statusSELECT version FROM schema_migrations ProdDB-->>Migration: Current version: 000 Migration->>ProdDB: Run migrationsin transaction Note over Migration,ProdDB: BEGIN;CREATE TABLE users;INSERT INTO schema_migrationsVALUES ('001');COMMIT; ProdDB-->>Migration: Migration successful ✓ Note over ProdDB: Deploy new applicationversion alt Production Issues Migration->>ProdDB: Rollback migrationRun down migration:DROP TABLE users; Note over ProdDB: Deploy previousapplication version end Migration->>ProdDB: Verify data integrityCheck constraints ProdDB-->>Migration: All checks passed ✓ Note over Dev,ProdDB: Production updated successfully
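The backup → migrate-in-a-transaction → verify sequence in this diagram translates directly into pre-deployment pipeline steps. A sketch using the same kubectl/psql approach as the Part 6 pipeline below (`--single-transaction` wraps the file in BEGIN/COMMIT; the migration file path is an assumption):

```yaml
# Hypothetical pre-deployment steps (GitHub Actions syntax, matching Part 6)
- name: Backup database before migration
  run: |
    kubectl exec -n staging deployment/postgres -- \
      pg_dump -U postgres app > backup-$(date +%Y%m%d-%H%M%S).sql

- name: Run migration in a transaction
  run: |
    kubectl exec -n staging deployment/postgres -- \
      psql -U postgres -d app --single-transaction -f /migrations/001_add_users_table.sql

- name: Verify schema version
  run: |
    kubectl exec -n staging deployment/postgres -- \
      psql -U postgres -d app -tc \
      "SELECT version FROM schema_migrations ORDER BY version DESC LIMIT 1"
```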
Part 6: Multi-Environment CI/CD Pipeline

Complete Pipeline Configuration

```yaml
# .github/workflows/multi-env-deploy.yml
name: Multi-Environment Deployment

on:
  push:
    branches:
      - main
      - develop
  pull_request:
    branches:
      - main

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  # CI - Same for all environments
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Run linting
        run: npm run lint

      - name: Run unit tests
        run: npm test

      - name: Build Docker image
        run: docker build -t $REGISTRY/$IMAGE_NAME:${{ github.sha }} .

      - name: Run integration tests
        run: docker-compose -f docker-compose.test.yml up --abort-on-container-exit

      - name: Push image
        run: |
          echo ${{ secrets.GITHUB_TOKEN }} | docker login ghcr.io -u ${{ github.actor }} --password-stdin
          docker push $REGISTRY/$IMAGE_NAME:${{ github.sha }}

  # Deploy to Dev - Auto on feature branches
  deploy-dev:
    needs: build-and-test
    if: github.ref != 'refs/heads/main'
    runs-on: ubuntu-latest
    environment:
      name: development
      url: https://dev.example.com
    steps:
      - uses: actions/checkout@v3

      - name: Deploy to Dev
        run: |
          kubectl config set-cluster dev --server="${{ secrets.DEV_K8S_SERVER }}"
          kubectl config set-credentials admin --token="${{ secrets.DEV_K8S_TOKEN }}"
          kubectl set image deployment/myapp myapp=$REGISTRY/$IMAGE_NAME:${{ github.sha }} -n dev
          kubectl rollout status deployment/myapp -n dev

      - name: Run smoke tests
        run: |
          curl https://dev.example.com/health
          npm run test:smoke -- --env=dev

  # Deploy to Staging - Auto on main branch
  deploy-staging:
    needs: build-and-test
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    environment:
      name: staging
      url: https://staging.example.com
    steps:
      - uses: actions/checkout@v3

      - name: Run database migrations
        run: |
          kubectl exec -n staging deployment/postgres -- \
            psql -U postgres -d app -f /migrations/migrate.sql

      - name: Deploy to Staging
        run: |
          kubectl config set-cluster staging --server="${{ secrets.STAGING_K8S_SERVER }}"
          kubectl config set-credentials admin --token="${{ secrets.STAGING_K8S_TOKEN }}"
          kubectl apply -k k8s/staging/
          kubectl rollout status deployment/myapp -n staging --timeout=5m

      - name: Run E2E tests
        run: npm run test:e2e -- --env=staging

      - name: Run performance tests
        run: |
          k6 run --vus 10 --duration 30s tests/performance.js

      - name: Check staging health
        run: |
          curl https://staging.example.com/health | jq '.status' | grep -q "healthy"

  # Deploy to Production - Manual approval required
  deploy-production:
    needs: deploy-staging
    runs-on: ubuntu-latest
    environment:
      name: production
      url: https://example.com
    steps:
      - uses: actions/checkout@v3

      - name: Backup production database
        run: |
          kubectl exec -n production deployment/postgres -- \
            pg_dump -U postgres app > backup-$(date +%Y%m%d-%H%M%S).sql

      - name: Run database migrations
        run: |
          kubectl exec -n production deployment/postgres -- \
            psql -U postgres -d app -f /migrations/migrate.sql

      - name: Deploy to Production (Blue-Green)
        run: |
          kubectl config set-cluster prod --server="${{ secrets.PROD_K8S_SERVER }}"
          kubectl config set-credentials admin --token="${{ secrets.PROD_K8S_TOKEN }}"
          # Deploy green version
          kubectl apply -k k8s/production/
          kubectl rollout status deployment/myapp-green -n production --timeout=10m
          # Switch traffic to green
          kubectl patch service myapp -n production -p '{"spec":{"selector":{"version":"green"}}}'

      - name: Monitor production metrics
        run: |
          sleep 300  # Wait 5 minutes
          ERROR_RATE=$(curl -s prometheus.example.com/api/v1/query?query=rate5m)
          # float comparison via bc ('-gt' is integer-only); response parsing elided as in the original
          if [ "$(echo "$ERROR_RATE > 0.01" | bc -l)" = "1" ]; then
            echo "Error rate too high, rolling back"
            kubectl patch service myapp -n production -p '{"spec":{"selector":{"version":"blue"}}}'
            exit 1
          fi

      - name: Notify team
        if: success()
        uses: slackapi/slack-github-action@v1
        with:
          payload: |
            {
              "text": "✅ Production deployment successful!",
              "version": "${{ github.sha }}",
              "deployed_by": "${{ github.actor }}"
            }
```

Part 7: Best Practices

Environment Management Checklist ✅ DO: ...
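One file the pipeline references but never shows is `docker-compose.test.yml`, used in the integration-test step. A minimal sketch of what it might contain (service names, images, and the test command are all assumptions):

```yaml
# docker-compose.test.yml (hypothetical) - run with --abort-on-container-exit as in the pipeline
version: "3.8"
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: test
      POSTGRES_DB: app
  redis:
    image: redis:7
  tests:
    build: .
    command: npm run test:integration
    environment:
      DATABASE_HOST: postgres
      REDIS_HOST: redis
    depends_on:
      - postgres
      - redis
```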

    January 23, 2025 · 11 min · Rafiul Alam

    Rollback & Recovery: Detection to Previous Version

    Introduction Even with the best testing, production issues happen. Having a solid rollback and recovery strategy is critical for minimizing downtime and data loss when deployments go wrong. This guide visualizes the complete rollback process: Issue Detection: Monitoring alerts and health checks Rollback Decision: When to rollback vs forward fix Rollback Execution: Different rollback strategies Data Recovery: Handling database changes Post-Incident: Learning and prevention Part 1: Issue Detection Flow From Healthy to Incident %%{init: {'theme':'dark', 'themeVariables': {'primaryTextColor':'#e5e7eb','secondaryTextColor':'#e5e7eb','tertiaryTextColor':'#e5e7eb','textColor':'#e5e7eb','nodeTextColor':'#e5e7eb','edgeLabelText':'#e5e7eb','clusterTextColor':'#e5e7eb','actorTextColor':'#e5e7eb'}}}%% flowchart TD Start([Production deploymentcompleted]) --> Monitor[Monitoring Systems- Prometheus metrics- Application logs- User reports- Health checks] Monitor --> Baseline[Baseline Metrics:✓ Error rate: 0.1%✓ Latency p95: 150ms✓ Traffic: 10k req/min✓ CPU: 40%✓ Memory: 60%] Baseline --> Time[Time passes...Minutes after deployment] Time --> Detect{Issuedetected?} Detect -->|No issue| Healthy[✅ Deployment HealthyContinue monitoringAll metrics normal] Detect -->|Yes| IssueType{Issuetype?} IssueType --> ErrorSpike[🔴 Error Rate Spike0.1% → 15%Alert: HighErrorRate firing] IssueType --> LatencySpike[🟡 Latency Increasep95: 150ms → 5000msAlert: HighLatency firing] IssueType --> TrafficDrop[🟠 Traffic Drop10k → 1k req/minUsers can't access] IssueType --> ResourceIssue[🔴 Resource ExhaustionCPU: 40% → 100%OOMKilled events] IssueType --> DataCorruption[🔴 Data IssuesDatabase errorsInvalid data returned] ErrorSpike --> Severity1[Severity: CRITICALUser impact: HIGHAffecting all users] LatencySpike --> Severity2[Severity: WARNINGUser impact: MEDIUMSlow but functional] TrafficDrop --> Severity3[Severity: CRITICALUser impact: HIGHComplete outage] ResourceIssue --> Severity4[Severity: CRITICALUser impact: HIGHPods crashing] DataCorruption --> Severity5[Severity: CRITICALUser impact: CRITICALData integrity at risk] Severity1 --> AutoAlert[🚨 Automated Alerts:- PagerDuty page- Slack notification- Email alerts- Status page update] Severity2 --> AutoAlert Severity3 --> AutoAlert Severity4 --> AutoAlert Severity5 --> AutoAlert AutoAlert --> OnCall[On-Call EngineerReceives alertAcknowledges incident] OnCall --> Investigate[Quick Investigation:- Check deployment timeline- Review recent changes- Check logs- Verify metrics] Investigate --> RootCause{Root causeidentified?} RootCause -->|Yes - Recent deployment| Decision[Go to Rollback Decision] RootCause -->|Yes - Other cause| OtherFix[Different remediationNot deployment-related] RootCause -->|No - Time critical| Decision style Healthy fill:#064e3b,stroke:#10b981 style Severity1 fill:#7f1d1d,stroke:#ef4444 style Severity3 fill:#7f1d1d,stroke:#ef4444 style Severity4 fill:#7f1d1d,stroke:#ef4444 style Severity5 fill:#7f1d1d,stroke:#ef4444 style Severity2 fill:#78350f,stroke:#f59e0b Part 2: Rollback Decision Tree When to Rollback vs Forward Fix %%{init: {'theme':'dark', 'themeVariables': {'primaryTextColor':'#e5e7eb','secondaryTextColor':'#e5e7eb','tertiaryTextColor':'#e5e7eb','textColor':'#e5e7eb','nodeTextColor':'#e5e7eb','edgeLabelText':'#e5e7eb','clusterTextColor':'#e5e7eb','actorTextColor':'#e5e7eb'}}}%% flowchart TD Start([Production issue detected]) --> Assess[Assess situation:- User impact- Severity- Time deployed- Data changes] Assess --> Q1{Can issue befixed quickly?5 min} Q1 
-->|Yes - Simple config| QuickFix[Forward Fix:- Update config map- Restart pods- No rollback needed] Q1 -->|No| Q2{Is issue causedby latestdeployment?} Q2 -->|No - External issue| External[External Root Cause:- Third-party API down- Database issue- Infrastructure problem→ Fix underlying issue] Q2 -->|Yes| Q3{User impactseverity?} Q3 -->|Low - Minor bugs| Q4{Time sincedeployment?} Q4 -->|< 30 min| RollbackLow[Consider Rollback:Low risk, easy rollbackUsers barely affected] Q4 -->|> 30 min| ForwardFix[Forward Fix:Deploy hotfixMore data changesRollback riskier] Q3 -->|Medium - Degraded| Q5{Data changesmade?} Q5 -->|No DB changes| RollbackMed[Rollback:Safe to revertNo data migrationQuick recovery] Q5 -->|DB changes made| Q6{Can revertDB changes?} Q6 -->|Yes - Reversible| RollbackWithDB[Rollback + DB Revert:1. Revert application2. Run down migrationCoordinate carefully] Q6 -->|No - Irreversible| ForwardOnly[Forward Fix ONLY:Cannot rollbackFix bug in new versionData can't be reverted] Q3 -->|High - Outage| Q7{Rollbacktime?} Q7 -->|< 5 min| ImmediateRollback[IMMEDIATE Rollback:User impact too highRollback firstDebug later] Q7 -->|> 5 min| Q8{Forward fixfaster?} Q8 -->|Yes| HotfixDeploy[Deploy Hotfix:If fix is obviousand can deployfaster than rollback] Q8 -->|No| ImmediateRollback QuickFix --> Monitor[Monitor metricsVerify fix worked] RollbackLow --> ExecuteRollback[Execute Rollback] RollbackMed --> ExecuteRollback RollbackWithDB --> ExecuteRollback ImmediateRollback --> ExecuteRollback ForwardFix --> DeployFix[Deploy Forward Fix] HotfixDeploy --> DeployFix ForwardOnly --> DeployFix style ImmediateRollback fill:#7f1d1d,stroke:#ef4444 style RollbackWithDB fill:#78350f,stroke:#f59e0b style ForwardOnly fill:#78350f,stroke:#f59e0b style QuickFix fill:#064e3b,stroke:#10b981 Part 3: Rollback Execution Strategies Application Rollback Methods %%{init: {'theme':'dark', 'themeVariables': {'primaryTextColor':'#e5e7eb','secondaryTextColor':'#e5e7eb','tertiaryTextColor':'#e5e7eb','textColor':'#e5e7eb','nodeTextColor':'#e5e7eb','edgeLabelText':'#e5e7eb','clusterTextColor':'#e5e7eb','actorTextColor':'#e5e7eb'}}}%% flowchart TD Start([Decision: Rollback]) --> Method{Deploymentstrategyused?} Method --> K8sRolling[Kubernetes Rolling Update] Method --> BlueGreen[Blue-Green Deployment] Method --> Canary[Canary Deployment] subgraph RollingRollback[Kubernetes Rolling Rollback] K8sRolling --> K8s1[kubectl rollout undodeployment myapp] K8s1 --> K8s2[Kubernetes:- Find previous ReplicaSet- Rolling update to old version- maxSurge: 1, maxUnavailable: 1] K8s2 --> K8s3[Gradual Pod Replacement:1. Create 1 old version pod2. Wait for ready3. Terminate 1 new version pod4. Repeat until all replaced] K8s3 --> K8s4[Time to rollback: 2-5 minDowntime: NoneSome users see old, some new] end subgraph BGRollback[Blue-Green Rollback] BlueGreen --> BG1[Current state:Blue v1.0 IDLEGreen v2.0 ACTIVE 100%] BG1 --> BG2[Update Service selector:version: green → version: blue] BG2 --> BG3[Instant Traffic Switch:Blue v1.0 ACTIVE 100%Green v2.0 IDLE 0%] BG3 --> BG4[Time to rollback: 1-2 secDowntime: ~1 secAll users switched instantly] end subgraph CanaryRollback[Canary Rollback] Canary --> C1[Current state:v1.0: 0 replicasv2.0: 10 replicas 100%] C1 --> C2[Scale down v2.0:v2.0: 10 → 0 replicas] C2 --> C3[Scale up v1.0:v1.0: 0 → 10 replicas] C3 --> C4[Time to rollback: 1-3 minDowntime: MinimalGradual traffic shift] end K8s4 --> Verify[Verification Steps] BG4 --> Verify C4 --> Verify Verify --> V1[1. Check pod statuskubectl get podsAll running?] 
V1 --> V2[2. Run health checkscurl /healthAll healthy?] V2 --> V3[3. Monitor metricsError rate back to normal?Latency improved?] V3 --> V4[4. Check user reportsAre users reporting success?] V4 --> Success{Rollbacksuccessful?} Success -->|Yes| Complete[✅ Rollback CompleteService restoredMonitor closely] Success -->|No| StillBroken[🚨 Still Broken!Issue not deployment-relatedDeeper investigation needed] style K8s4 fill:#1e3a8a,stroke:#3b82f6 style BG4 fill:#064e3b,stroke:#10b981 style C4 fill:#1e3a8a,stroke:#3b82f6 style Complete fill:#064e3b,stroke:#10b981 style StillBroken fill:#7f1d1d,stroke:#ef4444 Part 4: Database Rollback Complexity Handling Database Migrations %%{init: {'theme':'dark', 'themeVariables': {'primaryTextColor':'#e5e7eb','secondaryTextColor':'#e5e7eb','tertiaryTextColor':'#e5e7eb','textColor':'#e5e7eb','nodeTextColor':'#e5e7eb','edgeLabelText':'#e5e7eb','clusterTextColor':'#e5e7eb','actorTextColor':'#e5e7eb'}}}%% flowchart TD Start([Need to rollbackwith DB changes]) --> Analyze[Analyze migration type] Analyze --> Type{Migrationtype?} Type --> AddColumn[Added ColumnALTER TABLE usersADD COLUMN email] Type --> DropColumn[Dropped ColumnALTER TABLE usersDROP COLUMN phone] Type --> ModifyColumn[Modified ColumnALTER TABLE usersALTER COLUMN age TYPE bigint] Type --> AddTable[Added TableCREATE TABLE orders] AddColumn --> AC1{Column hasdata?} AC1 -->|No data yet| AC2[Safe Rollback:1. Deploy old app version2. DROP COLUMN emailOld app doesn't use it] AC1 -->|Has data| AC3[⚠️ Data Loss Risk:1. Backup table first2. Consider keeping column3. Deploy old app versionColumn ignored by old app] DropColumn --> DC1[🚨 CANNOT Rollback:Data already lostForward fix ONLYOptions:1. Restore from backup2. Accept data loss3. Recreate from logs] ModifyColumn --> MC1{Datacompatible?} MC1 -->|Yes - reversible| MC2[Revert Column Type:ALTER COLUMN age TYPE intVerify no data truncationThen deploy old app] MC1 -->|No - data loss| MC3[🚨 Cannot Revert:bigint values exceed int rangeForward fix ONLY] AddTable --> AT1{Table hascritical data?} AT1 -->|No data| AT2[Safe Rollback:1. Deploy old app version2. DROP TABLE ordersNo data lost] AT1 -->|Has data| AT3[Risky Rollback:1. BACKUP TABLE orders2. DROP TABLE orders3. 
Deploy old app versionData preserved in backup] AC2 --> SafeProcess[Safe Rollback Process:✅ No data loss✅ Quick rollback✅ Reversible] AC3 --> RiskyProcess[Risky Rollback Process:⚠️ Potential data loss⚠️ Need backup⚠️ Manual intervention] DC1 --> NoRollback[Forward Fix Only:❌ Cannot rollback❌ Data already lost❌ Must fix forward] MC2 --> SafeProcess MC3 --> NoRollback AT2 --> SafeProcess AT3 --> RiskyProcess SafeProcess --> Execute1[Execute Safe Rollback] RiskyProcess --> Decision{Acceptablerisk?} Decision -->|Yes| Execute2[Execute with Caution] Decision -->|No| NoRollback NoRollback --> HotfixDeploy[Deploy Hotfix:New version with fixKeep new schema] style SafeProcess fill:#064e3b,stroke:#10b981 style RiskyProcess fill:#78350f,stroke:#f59e0b style NoRollback fill:#7f1d1d,stroke:#ef4444 style DC1 fill:#7f1d1d,stroke:#ef4444 style MC3 fill:#7f1d1d,stroke:#ef4444 Part 5: Complete Rollback Workflow From Detection to Recovery %%{init: {'theme':'dark', 'themeVariables': {'primaryTextColor':'#e5e7eb','secondaryTextColor':'#e5e7eb','tertiaryTextColor':'#e5e7eb','textColor':'#e5e7eb','nodeTextColor':'#e5e7eb','edgeLabelText':'#e5e7eb','clusterTextColor':'#e5e7eb','actorTextColor':'#e5e7eb'}}}%% sequenceDiagram participant Monitor as Monitoring participant Alert as Alerting participant Engineer as On-Call Engineer participant Incident as Incident Channel participant K8s as Kubernetes participant DB as Database participant Users as End Users Note over Monitor: 5 minutes after deployment Monitor->>Monitor: Detect anomaly:Error rate: 0.1% → 18%Latency p95: 150ms → 3000ms Monitor->>Alert: Trigger alert:HighErrorRate FIRING Alert->>Engineer: 🚨 PagerDuty callCritical alertProduction incident Engineer->>Alert: Acknowledge alertStop escalation Engineer->>Incident: Create #incident-456"High error rate after v2.5 deployment" Note over Engineer: Open laptopStart investigation Engineer->>Monitor: Check Grafana dashboardWhen did issue start?Which endpoints affected? 
Monitor-->>Engineer: Started 5 min agoRight after deploymentAll endpoints affected Engineer->>K8s: kubectl get podsCheck pod status K8s-->>Engineer: All pods RunningNo crashesHealth checks passing Engineer->>K8s: kubectl logs deployment/myappCheck application logs K8s-->>Engineer: ERROR: Cannot connect to cacheERROR: Redis timeoutERROR: Connection refused Note over Engineer: Root cause: New versionhas Redis connection bug Engineer->>Incident: Update: Redis connection issue in v2.5Decision: Rollback to v2.4 Note over Engineer: Check deployment history Engineer->>K8s: kubectl rollout history deployment/myapp K8s-->>Engineer: REVISION 10: v2.5 (current)REVISION 9: v2.4 (previous) Engineer->>Incident: Starting rollback to v2.4ETA: 3 minutes Engineer->>K8s: kubectl rollout undo deployment/myapp K8s->>K8s: Start rollback:- Create pods with v2.4- Wait for ready- Terminate v2.5 pods loop Rolling Update K8s->>Users: Some users on v2.4 ✓Some users on v2.5 ✗ Note over K8s: Pod 1: v2.4 ReadyTerminating v2.5 Pod 1 Engineer->>K8s: kubectl rollout statusdeployment/myapp --watch K8s-->>Engineer: Waiting for rollout:2/5 pods updated end K8s->>Users: All users now on v2.4 ✓ K8s-->>Engineer: Rollout complete:deployment "myapp" successfully rolled out Engineer->>Monitor: Check metrics Note over Monitor: Wait 2 minutesfor metrics to stabilize Monitor-->>Engineer: ✅ Error rate: 0.1%✅ Latency p95: 160ms✅ All metrics normal Note over Alert: Metrics normalized Alert->>Engineer: ✅ Alert resolved:HighErrorRate Engineer->>Users: Verify user experience Users-->>Engineer: No error reportsApplication working Engineer->>Incident: ✅ Incident resolvedService restored to v2.4Duration: 12 minutesRoot cause: Redis bug in v2.5 Engineer->>Incident: Next steps:1. Fix Redis bug2. Add integration test3. Post-mortem scheduled Note over Engineer: Create follow-up tasks Engineer->>Engineer: Create Jira tickets:- BUG-789: Fix Redis connection- TEST-123: Add cache integration test- DOC-456: Update deployment checklist Note over Engineer,Users: Service restored ✓Monitoring continues Part 6: Automated Rollback Auto-Rollback Decision Flow %%{init: {'theme':'dark', 'themeVariables': {'primaryTextColor':'#e5e7eb','secondaryTextColor':'#e5e7eb','tertiaryTextColor':'#e5e7eb','textColor':'#e5e7eb','nodeTextColor':'#e5e7eb','edgeLabelText':'#e5e7eb','clusterTextColor':'#e5e7eb','actorTextColor':'#e5e7eb'}}}%% flowchart TD Start([Deployment completed]) --> Monitor[Continuous MonitoringEvery 30 seconds] Monitor --> Collect[Collect Metrics:- Error rate- Latency p95/p99- Success rate- Pod health- Resource usage] Collect --> Check1{Error rate> 5%?} Check1 -->|Yes| Trigger1[🚨 Trigger auto-rollbackError threshold exceeded] Check1 -->|No| Check2{Latency p95> 2x baseline?} Check2 -->|Yes| Trigger2[🚨 Trigger auto-rollbackLatency degradation] Check2 -->|No| Check3{Pod crashrate > 50%?} Check3 -->|Yes| Trigger3[🚨 Trigger auto-rollbackPods failing] Check3 -->|No| Check4{Custom metricthreshold?} Check4 -->|Yes| Trigger4[🚨 Trigger auto-rollbackBusiness metric failed] Check4 -->|No| Healthy[✅ All checks passedContinue monitoring] Healthy --> TimeCheck{Monitoringduration?} TimeCheck -->|< 15 min| Monitor TimeCheck -->|>= 15 min| Stable[✅ Deployment STABLEPassed soak periodAuto-rollback disabled] Trigger1 --> Rollback[Execute Auto-Rollback] Trigger2 --> Rollback Trigger3 --> Rollback Trigger4 --> Rollback Rollback --> R1[1. Log rollback decisionMetrics that triggeredTimestamp] R1 --> R2[2. 
Alert team:PagerDuty criticalSlack notification"Auto-rollback initiated"] R2 --> R3[3. Execute rollback:kubectl rollout undodeployment/myapp] R3 --> R4[4. Wait for rollback:Monitor pod statusWait for all pods ready] R4 --> R5[5. Verify recovery:Check metrics againError rate normal?Latency normal?] R5 --> Verify{Recoverysuccessful?} Verify -->|Yes| Success[✅ Auto-Rollback SuccessService restoredNotify teamCreate incident report] Verify -->|No| StillFailing[🚨 Still Failing!Issue not deploymentPage on-call immediatelyManual intervention needed] style Healthy fill:#064e3b,stroke:#10b981 style Stable fill:#064e3b,stroke:#10b981 style Success fill:#064e3b,stroke:#10b981 style Trigger1 fill:#7f1d1d,stroke:#ef4444 style Trigger2 fill:#7f1d1d,stroke:#ef4444 style Trigger3 fill:#7f1d1d,stroke:#ef4444 style Trigger4 fill:#7f1d1d,stroke:#ef4444 style StillFailing fill:#7f1d1d,stroke:#ef4444

Auto-Rollback Configuration

```yaml
# Flagger auto-rollback configuration
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: myapp
  namespace: production
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  service:
    port: 8080

  # Canary analysis
  analysis:
    interval: 30s
    threshold: 5       # Rollback after 5 failed checks
    maxWeight: 50
    stepWeight: 10

    # Metrics for auto-rollback decision
    metrics:
      # HTTP error rate
      - name: request-success-rate
        thresholdRange:
          min: 95      # Rollback if success rate < 95%
        interval: 1m

      # HTTP latency
      - name: request-duration
        thresholdRange:
          max: 500     # Rollback if p95 > 500ms
        interval: 1m

      # Custom business metric
      - name: conversion-rate
        thresholdRange:
          min: 80      # Rollback if conversion < 80% of baseline
        interval: 2m

    # Webhooks for additional checks
    webhooks:
      - name: load-test
        url: http://flagger-loadtester/
        timeout: 5s
        metadata:
          type: bash
          cmd: "hey -z 1m -q 10 http://myapp-canary:8080/"

    # Alerting on rollback
    alerts:
      - name: slack
        severity: error
        providerRef:
          name: slack
          namespace: flagger
```

Part 7: Post-Incident Process

Learning from Rollbacks

%%{init: {'theme':'dark', 'themeVariables': {'primaryTextColor':'#e5e7eb','secondaryTextColor':'#e5e7eb','tertiaryTextColor':'#e5e7eb','textColor':'#e5e7eb','nodeTextColor':'#e5e7eb','edgeLabelText':'#e5e7eb','clusterTextColor':'#e5e7eb','actorTextColor':'#e5e7eb'}}}%% flowchart TD Start([Rollback completedService restored]) --> Timeline[Create Incident Timeline:- Deployment time- Issue detection time- Rollback decision time- Recovery timeTotal duration] Timeline --> PostMortem[Schedule Post-Mortem:Within 48 hoursAll stakeholders invitedBlameless culture] PostMortem --> Analyze[Root Cause Analysis:Why did issue occur?Why wasn't it caught?What can we learn?]
Analyze --> Categories{Issuecategory?} Categories --> Testing[Insufficient Testing:- Missing test case- Integration gap- Load testing needed] Categories --> Monitoring[Monitoring Gap:- Missing alert- Wrong threshold- Blind spot found] Categories --> Process[Process Issue:- Skipped step- Wrong timing- Communication gap] Categories --> Code[Code Quality:- Bug in code- Edge case- Dependency issue] Testing --> Actions1[Action Items:□ Add integration test□ Expand E2E coverage□ Add load test□ Test in staging first] Monitoring --> Actions2[Action Items:□ Add new alert□ Adjust thresholds□ Add dashboard□ Improve visibility] Process --> Actions3[Action Items:□ Update runbook□ Add checklist item□ Change deployment time□ Improve communication] Code --> Actions4[Action Items:□ Fix bug□ Add validation□ Update dependency□ Code review process] Actions1 --> Assign[Assign Owners:Each action has ownerEach action has deadlineTrack in project board] Actions2 --> Assign Actions3 --> Assign Actions4 --> Assign Assign --> Document[Document Learnings:- Update wiki- Share with team- Add to knowledge base- Update training] Document --> Prevent[Prevent Recurrence:✓ Tests added✓ Monitoring improved✓ Process updated✓ Team educated] Prevent --> Complete[✅ Post-Incident CompleteStronger systemBetter preparedContinuous improvement] style Complete fill:#064e3b,stroke:#10b981 Part 8: Rollback Checklist Pre-Deployment Rollback Readiness Before Every Deployment: ...
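One prerequisite worth adding to any rollback checklist: `kubectl rollout undo` can only revert to a revision Kubernetes still remembers. The Deployment's `revisionHistoryLimit` controls how many old ReplicaSets are kept around (a sketch; 10 is also the Kubernetes default):

```yaml
# deployment.yaml (excerpt) - keep enough rollout history for 'kubectl rollout undo'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  revisionHistoryLimit: 10   # old ReplicaSets retained; setting 0 would make undo impossible
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:v2.4
```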

    January 23, 2025 · 11 min · Rafiul Alam