Continuous Integration and Continuous Deployment (CI/CD) has transformed from a nice-to-have into an absolute requirement for modern software teams. In 2026, the question is no longer whether to implement CI/CD, but how to design pipelines that are fast, reliable, secure, and maintainable as your application and team grow.
A poorly designed pipeline becomes a bottleneck: slow feedback loops, flaky tests, manual approval gates that create queues, security scanning bolted on as an afterthought. A well-designed pipeline is invisible: code flows from developer to production seamlessly, with every necessary check happening automatically.
This guide covers CI/CD pipeline design patterns from simple to advanced, with concrete examples using GitHub Actions and GitLab CI. The patterns themselves are tool-agnostic; they work with any CI/CD platform.
Chapter 1: Pipeline Architecture Fundamentals
The Pipeline as a Directed Acyclic Graph (DAG)
Every CI/CD pipeline is fundamentally a DAG: a set of stages and jobs in which each job depends on the output of previous jobs and no circular dependencies exist. Understanding this model helps you design efficient pipelines.
# Linear Pipeline (simplest, slowest)
# Build → Test → Security Scan → Deploy Staging → Deploy Production
#
# Parallel Pipeline (faster)
#            ┌─ Unit Tests ──┐
# Build ─────┼─ Lint/Format ─┼──→ Deploy Staging → Deploy Production
#            └─ SAST Scan ───┘
#
# Diamond Pipeline (complex, with merge point)
#            ┌─ Unit Tests ────┐
# Build ─────┼─ Security Scan ─┼──→ Integration Tests → Deploy
#            └─ Build Docker ──┘
Core Principles of Pipeline Design
Fail fast: Run the cheapest, fastest checks first. Linting takes seconds; integration tests take minutes. If the code doesn't even compile, don't waste time running a 20-minute test suite. Order your pipeline stages so that quick failures happen early.
Parallelism: Independent jobs should run in parallel. Unit tests and linting don't depend on each other, so run them simultaneously. Parallel execution can cut pipeline time by 60-80%.
Idempotency: Every pipeline run with the same inputs should produce the same outputs. No reliance on external mutable state, no "works on the second run" situations. Build artifacts should be deterministic.
Immutable artifacts: Build your artifact once, deploy the same artifact to every environment. Never rebuild for staging versus production. The artifact deployed to production should be bit-for-bit identical to what was tested in staging.
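As a sketch of what build-once promotion looks like in practice, the deploy step can pin to the image digest rather than a mutable tag. The `image-digest` output wiring is an assumption here (docker/build-push-action does expose a `digest` step output you would pass through as a job output):

```yaml
# Sketch: deploy by immutable digest, never by a mutable tag.
# Assumes a prior "docker-build" job exposed the pushed digest, e.g.:
#   outputs:
#     image-digest: ${{ steps.push.outputs.digest }}
deploy-staging:
  needs: docker-build
  runs-on: ubuntu-latest
  steps:
    - run: |
        # The digest pins the exact bytes that were tested upstream
        kubectl set image deployment/myapp \
          myapp=ghcr.io/myorg/myapp@${{ needs.docker-build.outputs.image-digest }}
```

Because a digest is content-addressed, staging and production are guaranteed to run the identical artifact even if someone force-pushes a tag in between.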
GitHub Actions: Basic Multi-Stage Pipeline
# .github/workflows/ci.yml
name: CI/CD Pipeline

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

# Cancel previous runs on the same branch
concurrency:
  group: ci-pipeline-${{ github.ref }}
  cancel-in-progress: true

env:
  NODE_VERSION: '20'
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  # Stage 1: Build (runs first)
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      - run: npm ci
      - run: npm run build
      # Upload build artifacts for downstream jobs
      - uses: actions/upload-artifact@v4
        with:
          name: build-output
          path: dist/
          retention-days: 1

  # Stage 2: Parallel checks (run simultaneously)
  lint:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      - run: npm ci
      - run: npm run lint
      - run: npm run type-check

  unit-tests:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      - run: npm ci
      - run: npm run test -- --coverage
      - uses: actions/upload-artifact@v4
        with:
          name: coverage-report
          path: coverage/

  security-scan:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm audit --audit-level=high
      - uses: github/codeql-action/init@v3
        with:
          languages: javascript
      - uses: github/codeql-action/analyze@v3

  # Stage 3: Integration tests (after parallel checks pass)
  integration-tests:
    needs: [lint, unit-tests, security-scan]
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_DB: test_db
          POSTGRES_USER: test_user
          POSTGRES_PASSWORD: test_pass
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      - run: npm ci
      - run: npm run test:integration
        env:
          DATABASE_URL: postgresql://test_user:test_pass@localhost:5432/test_db

  # Stage 4: Build and push Docker image
  docker-build:
    needs: integration-tests
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    outputs:
      image-tag: ${{ steps.meta.outputs.tags }}
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=sha,prefix=
            type=ref,event=branch
      - uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

  # Stage 5: Deploy to staging (automatic)
  deploy-staging:
    needs: docker-build
    runs-on: ubuntu-latest
    environment: staging
    if: github.ref == 'refs/heads/main'
    steps:
      - uses: actions/checkout@v4
      - run: |
          echo "Deploying to staging..."
          # kubectl set image deployment/myapp myapp=${{ needs.docker-build.outputs.image-tag }}

  # Stage 6: Deploy to production (manual approval)
  deploy-production:
    needs: deploy-staging
    runs-on: ubuntu-latest
    environment: production  # Requires manual approval
    if: github.ref == 'refs/heads/main'
    steps:
      - uses: actions/checkout@v4
      - run: |
          echo "Deploying to production..."
Chapter 2: Advanced Testing Strategies in Pipelines
Test Splitting and Parallelism
Large test suites can take 30+ minutes to run sequentially. Test splitting distributes tests across multiple runners, reducing wall-clock time dramatically.
# GitLab CI: Parallel test execution with test splitting
unit-tests:
  stage: test
  parallel: 4
  script:
    - npm ci
    - npx jest --shard=$CI_NODE_INDEX/$CI_NODE_TOTAL --coverage
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage/cobertura-coverage.xml
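The same sharding idea works in GitHub Actions using a job matrix and Jest's built-in `--shard` flag; a sketch (the shard count of 4 is illustrative, tune it to your suite):

```yaml
# GitHub Actions: 4-way test sharding via a matrix
unit-tests:
  runs-on: ubuntu-latest
  strategy:
    fail-fast: false   # let every shard finish so you see all failures
    matrix:
      shard: [1, 2, 3, 4]
  steps:
    - uses: actions/checkout@v4
    - run: npm ci
    # Run 1/4 of the test suite on each runner
    - run: npx jest --shard=${{ matrix.shard }}/4 --coverage
```

Wall-clock time approaches the duration of the slowest shard, so balance shards by test runtime rather than file count when the distribution is skewed.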
Flaky Test Detection
Flaky tests (tests that pass and fail intermittently) destroy confidence in your pipeline. When developers can't trust test results, they start ignoring failures, which defeats the purpose of CI.
# GitHub Actions: Retry flaky tests with quarantine
unit-tests:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - run: npm ci
    # Run tests with retry for flaky detection
    - run: |
        npm run test -- --bail --forceExit 2>&1 || (echo "::warning::Tests failed, retrying once..." && npm run test -- --bail --forceExit)
    # Better approach: with the jest-circus runner, call
    #   jest.retryTimes(2, { retryImmediately: true })
    # in a setup file (retryTimes is a runtime API, not a jest.config.js option)
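Retries hide flakiness; measuring it is better. Below is a sketch of a repeat-runner that flags a command as flaky when identical runs disagree. `flaky_check` is an illustrative helper, not a feature of Jest or any CI vendor:

```shell
# flaky_check: run the same command several times; identical runs that
# disagree indicate flakiness (exit code 2). Illustrative helper only.
flaky_check() {
  local cmd="$1" runs="${2:-10}" pass=0 fail=0 i
  for i in $(seq 1 "$runs"); do
    if bash -c "$cmd" >/dev/null 2>&1; then
      pass=$((pass + 1))
    else
      fail=$((fail + 1))
    fi
  done
  echo "pass=$pass fail=$fail"
  # Mixed outcomes from identical inputs = flaky
  if [ "$pass" -gt 0 ] && [ "$fail" -gt 0 ]; then
    echo "FLAKY: $cmd" >&2
    return 2
  fi
}
```

For example, `flaky_check 'npx jest path/to/suspect.test.js' 5` runs the suite five times and exits non-zero only when the results disagree, which makes it usable as a nightly quarantine job.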
Chapter 3: Deployment Strategies
Blue-Green Deployments
Blue-green deployment maintains two identical production environments. Only one (let's say "blue") serves live traffic. You deploy the new version to the other ("green"), test it, then switch traffic over. If something goes wrong, switch back instantly.
#!/bin/bash
# Blue-Green deployment script
set -euo pipefail

CURRENT_COLOR=$(kubectl get service myapp-live -o jsonpath='{.spec.selector.deployment}')
if [ "$CURRENT_COLOR" = "blue" ]; then
  NEW_COLOR="green"
else
  NEW_COLOR="blue"
fi
echo "Current: $CURRENT_COLOR -> Deploying to: $NEW_COLOR"

# Deploy new version to the inactive environment
kubectl set image deployment/myapp-$NEW_COLOR myapp=$IMAGE_TAG

# Wait for rollout
kubectl rollout status deployment/myapp-$NEW_COLOR --timeout=300s

# Run smoke tests against the new deployment
curl -f http://myapp-$NEW_COLOR.internal/health || {
  echo "Smoke tests failed! Aborting."
  exit 1
}

# Switch traffic to the new deployment
kubectl patch service myapp-live \
  -p "{\"spec\":{\"selector\":{\"deployment\":\"$NEW_COLOR\"}}}"

echo "Traffic switched to $NEW_COLOR"
echo "Previous version ($CURRENT_COLOR) is still running"
echo "To rollback: switch service back to $CURRENT_COLOR"
Canary Deployments
Canary deployments route a small percentage of traffic to the new version. If error rates increase or performance degrades, the canary is killed and traffic returns to the stable version. This provides real-world validation with minimal blast radius.
# Kubernetes canary with traffic splitting
# Using Istio VirtualService for traffic management

# Step 1: Deploy canary (1 replica alongside 3 stable replicas)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-canary
  labels:
    app: myapp
    version: canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      version: canary
  template:
    metadata:
      labels:
        app: myapp
        version: canary
    spec:
      containers:
        - name: myapp
          image: myapp:2.0.0  # New version
---
# Step 2: Route 10% of traffic to canary
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
    - myapp.example.com
  http:
    - route:
        - destination:
            host: myapp
            subset: stable
          weight: 90
        - destination:
            host: myapp
            subset: canary
          weight: 10
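The `stable` and `canary` subsets referenced by the VirtualService must be defined in an Istio DestinationRule that maps each subset to pod labels; without it, Istio has nothing to route to. A minimal sketch:

```yaml
# DestinationRule: tell Istio which pods belong to each subset
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: myapp
spec:
  host: myapp
  subsets:
    - name: stable
      labels:
        version: stable   # matches the stable Deployment's pod labels
    - name: canary
      labels:
        version: canary   # matches the canary Deployment's pod labels
```

Promotion then becomes a sequence of weight changes on the VirtualService (10 → 25 → 50 → 100) gated on error-rate and latency metrics.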
Rolling Deployments
Rolling deployments gradually replace old instances with new ones. This is the default Kubernetes deployment strategy and works well for stateless applications.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 6
  selector:
    matchLabels:
      app: myapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2        # 2 extra pods during update
      maxUnavailable: 1  # At most 1 pod unavailable
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:2.0.0
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
Chapter 4: Security Scanning in the Pipeline
Shift-Left Security: SAST, DAST, SCA, and Container Scanning
Security scanning should happen at every stage of the pipeline, not bolted on at the end.
# GitLab CI: Comprehensive security scanning
stages:
  - build
  - test
  - security
  - deploy

# Static Application Security Testing (SAST)
# Analyzes source code for vulnerabilities
sast:
  stage: security
  image: returntocorp/semgrep
  script:
    # --gitlab-sast emits the schema GitLab's sast report artifact expects
    - semgrep scan --config auto --gitlab-sast --output semgrep-results.json .
  artifacts:
    reports:
      sast: semgrep-results.json

# Software Composition Analysis (SCA)
# Checks dependencies for known vulnerabilities
dependency-scan:
  stage: security
  script:
    - npm audit --json > npm-audit.json || true
    - npx audit-ci --high
  artifacts:
    paths:
      - npm-audit.json

# Container Image Scanning
container-scan:
  stage: security
  image:
    name: aquasec/trivy
    entrypoint: [""]
  script:
    # The bundled gitlab.tpl template produces GitLab's container-scanning schema
    - trivy image --exit-code 1 --severity HIGH,CRITICAL
      --format template --template "@/contrib/gitlab.tpl"
      --output trivy-results.json
      $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
  artifacts:
    reports:
      container_scanning: trivy-results.json

# Secret Detection
secret-detection:
  stage: security
  image: trufflesecurity/trufflehog
  script:
    - trufflehog git file://. --only-verified --json > secrets.json
  artifacts:
    paths:
      - secrets.json
Chapter 5: GitOps, Infrastructure as Code in the Pipeline
GitOps is a deployment pattern where the desired state of your infrastructure is stored in Git, and automated processes ensure the actual state matches the desired state. Instead of running deployment commands, you commit changes to a Git repository, and a GitOps operator (like ArgoCD or Flux) applies them.
# ArgoCD Application manifest
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/myorg/infrastructure.git
    targetRevision: main
    path: k8s/production/myapp
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
    retry:
      limit: 5
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 3m
The CI pipeline's job in a GitOps workflow changes: instead of deploying directly, the pipeline builds the artifact, pushes it to a registry, and then updates the infrastructure repository with the new image tag. ArgoCD detects the change and handles the actual deployment.
# GitHub Actions: GitOps-compatible pipeline
deploy:
  needs: docker-build
  runs-on: ubuntu-latest
  steps:
    - name: Checkout infrastructure repo
      uses: actions/checkout@v4
      with:
        repository: myorg/infrastructure
        token: ${{ secrets.INFRA_REPO_TOKEN }}
        path: infrastructure
    - name: Update image tag
      run: |
        cd infrastructure/k8s/production/myapp
        yq eval '.spec.template.spec.containers[0].image = "ghcr.io/myorg/myapp:${{ github.sha }}"' -i deployment.yaml
    - name: Commit and push
      run: |
        cd infrastructure
        git config user.name "CI Bot"
        git config user.email "ci@myorg.com"
        git add .
        git commit -m "deploy: myapp ${{ github.sha }}"
        git push
Chapter 6: Pipeline Performance Optimization
Caching Strategies
Effective caching can reduce pipeline time by 50-80%. Cache dependencies, build outputs, Docker layers, and test fixtures.
# GitHub Actions: Multi-layer caching
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Cache node_modules based on lockfile hash
      - uses: actions/cache@v4
        id: npm-cache
        with:
          path: node_modules
          key: npm-${{ runner.os }}-${{ hashFiles('package-lock.json') }}
          restore-keys: |
            npm-${{ runner.os }}-
      # Only install if cache missed
      - if: steps.npm-cache.outputs.cache-hit != 'true'
        run: npm ci
      # Cache Next.js build
      - uses: actions/cache@v4
        with:
          path: .next/cache
          key: nextjs-${{ runner.os }}-${{ hashFiles('package-lock.json') }}-${{ hashFiles('src/**') }}
          restore-keys: |
            nextjs-${{ runner.os }}-${{ hashFiles('package-lock.json') }}-
            nextjs-${{ runner.os }}-
      - run: npm run build
Conditional Pipeline Execution
Don't run every check on every commit. Use path-based triggers to run only relevant jobs:
# Only run backend tests when backend code changes
on:
  push:
    paths:
      - 'apps/backend/**'
      - 'packages/shared/**'
      - 'package-lock.json'

# GitLab CI equivalent
backend-tests:
  rules:
    - changes:
        - apps/backend/**
        - packages/shared/**
        - package-lock.json
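Workflow-level `paths` filters gate the entire workflow; to gate individual jobs inside one workflow, a common pattern is a small change-detection job. A sketch using the community dorny/paths-filter action (an assumption that your monorepo layout matches the paths shown):

```yaml
# Job-level path filtering within a single workflow
jobs:
  changes:
    runs-on: ubuntu-latest
    outputs:
      backend: ${{ steps.filter.outputs.backend }}
    steps:
      - uses: actions/checkout@v4
      - id: filter
        uses: dorny/paths-filter@v3
        with:
          filters: |
            backend:
              - 'apps/backend/**'
              - 'packages/shared/**'
  backend-tests:
    needs: changes
    # Skip entirely when no backend-relevant files changed
    if: needs.changes.outputs.backend == 'true'
    runs-on: ubuntu-latest
    steps:
      - run: echo "run backend tests"
```

This keeps one workflow (and one required status check) while still skipping expensive jobs on unrelated changes.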
Chapter 7: Multi-Environment Pipeline Patterns
Environment Promotion
The most common pattern for multi-environment pipelines is environment promotion: build once, then deploy the same artifact through dev → staging → production.
# Environment promotion pipeline
#
# PR Branch -> Dev (automatic)
# Main Branch -> Staging (automatic) -> Production (manual approval)
#
# CRITICAL: Same Docker image deployed everywhere
# Only configuration/env vars change per environment

deploy-dev:
  stage: deploy
  environment:
    name: development
    url: https://dev.myapp.com
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
  script:
    - helm upgrade --install myapp ./chart
      --namespace dev
      --set image.tag=$CI_COMMIT_SHA
      --values chart/values-dev.yaml

deploy-staging:
  stage: deploy
  environment:
    name: staging
    url: https://staging.myapp.com
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
  script:
    - helm upgrade --install myapp ./chart
      --namespace staging
      --set image.tag=$CI_COMMIT_SHA
      --values chart/values-staging.yaml

deploy-production:
  stage: deploy
  environment:
    name: production
    url: https://myapp.com
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: manual  # Requires manual click
  script:
    - helm upgrade --install myapp ./chart
      --namespace production
      --set image.tag=$CI_COMMIT_SHA
      --values chart/values-production.yaml
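One refinement worth considering for the production job: Helm's `--atomic` flag (paired with `--timeout`) rolls the release back automatically when an upgrade fails, so a bad deploy never lingers half-applied. A sketch of the adjusted job:

```yaml
# Production deploy with automatic rollback on a failed upgrade
deploy-production:
  stage: deploy
  environment:
    name: production
    url: https://myapp.com
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: manual
  script:
    - helm upgrade --install myapp ./chart
      --namespace production
      --set image.tag=$CI_COMMIT_SHA
      --values chart/values-production.yaml
      --atomic --timeout 5m
```

The trade-off is that `--atomic` waits for all resources to become ready before declaring success, so set the timeout above your slowest rollout.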
Chapter 8: Pipeline Observability and Debugging
Pipeline Metrics
Track these metrics to identify bottlenecks and improve pipeline performance over time:
- Lead time: Time from code commit to production deployment (target: under 30 minutes).
- Pipeline duration: Total wall-clock time (target: under 15 minutes).
- Success rate: Percentage of pipeline runs that succeed (target: above 95%).
- Mean time to recovery (MTTR): Time from detecting a failed deployment to rolling back or fixing (target: under 15 minutes).
- Flaky test rate: Percentage of test runs that fail intermittently (target: under 1%).
- Queue time: Time jobs spend waiting for runners (target: under 2 minutes).
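Most of these metrics can be derived from exported pipeline-run data. As a starting point, here is a sketch that computes success rate and mean duration from lines of `status,duration_seconds` (the CSV layout is an assumption; adapt it to whatever your CI's API export actually emits):

```shell
# pipeline_stats: summarize pipeline runs fed on stdin as
# "status,duration_seconds" lines (illustrative helper)
pipeline_stats() {
  awk -F, '
    { total++; dur += $2 }          # count every run, accumulate duration
    $1 == "success" { ok++ }        # count successful runs
    END {
      if (total) printf "success_rate=%.0f%% mean_duration=%.0fs\n",
                        100 * ok / total, dur / total
    }'
}
```

Piping a week of runs through this (e.g. from `gitlab-ci` or GitHub's runs API, converted to CSV) gives you a baseline to compare against after each pipeline optimization.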
Investing in CI/CD pipeline design pays compound returns. Every improvement to your pipeline (faster feedback loops, better parallelism, automated security scanning, reliable deployment strategies) multiplies across every developer and every commit. A team making 50 commits per day with a 20-minute pipeline wastes 16+ hours daily in pipeline time. Cutting that to 8 minutes saves the team 10 hours per day.
ZeonEdge helps engineering teams design and implement CI/CD pipelines that are fast, secure, and reliable. From GitHub Actions to GitLab CI, from Docker builds to Kubernetes deployments, we architect pipelines that scale with your team. Contact our DevOps engineers to optimize your software delivery pipeline.
Marcus Rodriguez
Lead DevOps Engineer specializing in CI/CD pipelines, container orchestration, and infrastructure automation.