You push a one-line code change and wait. And wait. Fifteen minutes later, your GitHub Actions workflow finally completes. Most of that time was spent downloading dependencies, pulling Docker images, and rebuilding artifacts that have not changed. The irony is that CI/CD exists to accelerate development, but a poorly configured pipeline actively slows you down.
The typical GitHub Actions workflow for a Node.js application spends 3-4 minutes installing npm dependencies, 2-3 minutes pulling Docker base images, 3-5 minutes building the application, and 2-3 minutes running tests. With proper caching, the same workflow completes in 60-90 seconds because the only work that happens is compiling the changed code and running the affected tests.
Problem 1: Reinstalling Dependencies Every Run
The single biggest time sink in most CI pipelines is downloading and installing dependencies from scratch on every run. A Node.js project with 500+ dependencies spends 2-4 minutes in npm install, and most runs change zero dependencies.
Use the actions/cache action to cache the node_modules directory (or your package manager's global cache) between runs:
name: CI
on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Cache node_modules
        uses: actions/cache@v4
        id: npm-cache
        with:
          path: node_modules
          key: node-modules-${{ hashFiles('package-lock.json') }}
          restore-keys: |
            node-modules-

      - name: Install dependencies
        if: steps.npm-cache.outputs.cache-hit != 'true'
        run: npm ci

      - name: Build
        run: npm run build

      - name: Test
        run: npm test
The key is based on the hash of package-lock.json. When the lockfile has not changed (meaning no dependency changes), the cached node_modules is restored and the install step is skipped entirely. When a dependency changes, the lockfile hash changes, the cache misses, and a fresh install runs.
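If you would rather not cache node_modules directly, setup-node has built-in caching that stores npm's global cache directory and keys it on the lockfile automatically; npm ci then runs on every build but installs from the restored cache instead of the network. A minimal sketch:

- uses: actions/setup-node@v4
  with:
    node-version: '20'
    cache: 'npm'  # caches ~/.npm, keyed on package-lock.json
- run: npm ci     # still runs every time, but pulls packages from the cache

This trades a slightly slower install step for immunity to node_modules layout differences across runner images and Node versions.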
For monorepos using Turborepo or Nx, also cache the build system's internal cache:
- name: Cache Turbo
  uses: actions/cache@v4
  with:
    path: .turbo
    key: turbo-${{ github.sha }}
    restore-keys: |
      turbo-
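The Nx equivalent is the same pattern pointed at Nx's cache directory; a sketch, assuming the default location (.nx/cache in recent Nx versions, node_modules/.cache/nx in older ones):

- name: Cache Nx
  uses: actions/cache@v4
  with:
    path: .nx/cache
    key: nx-${{ github.sha }}
    restore-keys: |
      nx-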
Problem 2: Docker Image Pulls on Every Build
If your workflow builds Docker images, it pulls the base image (node:20-alpine, python:3.12-slim, etc.) on every run. These images are 50-300 MB and take 30-60 seconds to pull. Use Docker layer caching to avoid this:
- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v3

- name: Cache Docker layers
  uses: actions/cache@v4
  with:
    path: /tmp/.buildx-cache
    key: docker-${{ hashFiles('Dockerfile') }}-${{ hashFiles('package-lock.json') }}
    restore-keys: |
      docker-${{ hashFiles('Dockerfile') }}-
      docker-

- name: Build Docker image
  uses: docker/build-push-action@v5
  with:
    context: .
    push: false
    tags: myapp:latest
    cache-from: type=local,src=/tmp/.buildx-cache
    cache-to: type=local,dest=/tmp/.buildx-cache-new,mode=max

# Prevent cache from growing indefinitely
- name: Move cache
  run: |
    rm -rf /tmp/.buildx-cache
    mv /tmp/.buildx-cache-new /tmp/.buildx-cache
With layer caching, only the layers that actually changed are rebuilt. If you changed application code but not the Dockerfile or package.json, only the COPY and build layers are rebuilt while the base image and dependency installation layers are cached. This typically reduces Docker build time from 3-5 minutes to 20-40 seconds.
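If you are on GitHub-hosted runners, the gha cache backend stores layers in the GitHub Actions cache service directly and removes the need for the cache-move step; a sketch equivalent to the local-cache setup above:

- name: Build Docker image
  uses: docker/build-push-action@v5
  with:
    context: .
    push: false
    tags: myapp:latest
    cache-from: type=gha
    cache-to: type=gha,mode=max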
Problem 3: Running Everything Sequentially
Many workflows run lint, test, build, and deploy as sequential steps in a single job. If lint takes 30 seconds, tests take 2 minutes, and build takes 2 minutes, the total is 4.5 minutes even though lint and tests are independent and could run in parallel.
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: '20' }
      - uses: actions/cache@v4
        id: cache
        with:
          path: node_modules
          key: nm-${{ hashFiles('package-lock.json') }}
      - run: npm ci
        if: steps.cache.outputs.cache-hit != 'true'
      - run: npm run lint

  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: '20' }
      - uses: actions/cache@v4
        id: cache
        with:
          path: node_modules
          key: nm-${{ hashFiles('package-lock.json') }}
      - run: npm ci
        if: steps.cache.outputs.cache-hit != 'true'
      - run: npm test

  build:
    runs-on: ubuntu-latest
    needs: [lint, test]  # Only build if both pass
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: '20' }
      - uses: actions/cache@v4
        id: cache
        with:
          path: node_modules
          key: nm-${{ hashFiles('package-lock.json') }}
      - run: npm ci
        if: steps.cache.outputs.cache-hit != 'true'
      - run: npm run build
Now lint and test run simultaneously, and build only starts after both pass. Total time drops from 4.5 minutes to about 4 minutes: the longest parallel job (tests, at 2 minutes) plus the build step. The savings grow as more independent jobs (type checking, security scanning) are split out.
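A side effect of splitting jobs is that the checkout, setup, cache, and install boilerplate now appears three times. A local composite action can fold that into one reusable step; a sketch, assuming it lives at .github/actions/setup/action.yml (the path is illustrative):

# .github/actions/setup/action.yml
name: Setup Node and dependencies
description: Restore Node.js and cached node_modules, installing only on a cache miss
runs:
  using: composite
  steps:
    - uses: actions/setup-node@v4
      with:
        node-version: '20'
    - uses: actions/cache@v4
      id: cache
      with:
        path: node_modules
        key: nm-${{ hashFiles('package-lock.json') }}
    - run: npm ci
      if: steps.cache.outputs.cache-hit != 'true'
      shell: bash  # composite run steps must declare a shell

Each job then reduces to actions/checkout, uses: ./.github/actions/setup, and its own lint, test, or build command.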
Problem 4: Not Using Path Filters
If you change a README or a documentation file, there is no reason to run the full test suite and build pipeline. Use path filters to skip workflows when only irrelevant files changed:
on:
  push:
    branches: [main]
    paths-ignore:
      - '**.md'
      - 'docs/**'
      - '.github/ISSUE_TEMPLATE/**'
      - 'LICENSE'
  pull_request:
    branches: [main]
    paths-ignore:
      - '**.md'
      - 'docs/**'
For monorepos, use path filters to only build the packages that changed:
on:
  push:
    paths:
      - 'apps/web-company/**'
      - 'packages/ui/**'
      - 'packages/utils/**'
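Workflow-level paths filters are all-or-nothing: the entire workflow either runs or it does not. To skip individual jobs per package, the community dorny/paths-filter action detects which paths changed and exposes the result as job outputs; a sketch (the filter name and build command are illustrative):

jobs:
  changes:
    runs-on: ubuntu-latest
    outputs:
      web: ${{ steps.filter.outputs.web }}
    steps:
      - uses: actions/checkout@v4
      - uses: dorny/paths-filter@v3
        id: filter
        with:
          filters: |
            web:
              - 'apps/web-company/**'
              - 'packages/**'

  build-web:
    needs: changes
    if: needs.changes.outputs.web == 'true'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build --workspace=apps/web-company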
Problem 5: Not Reusing Build Artifacts
If your build step produces artifacts (compiled JavaScript, Docker images, static files) that are needed by multiple downstream jobs (deploy to staging, deploy to production, run E2E tests), build once and share the artifact:
build:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - run: npm ci && npm run build
    - uses: actions/upload-artifact@v4
      with:
        name: build-output
        path: .next/
        retention-days: 1

deploy-staging:
  needs: build
  runs-on: ubuntu-latest
  steps:
    - uses: actions/download-artifact@v4
      with:
        name: build-output
        path: .next/
    - run: ./deploy.sh staging

deploy-production:
  needs: deploy-staging
  runs-on: ubuntu-latest
  if: github.ref == 'refs/heads/main'
  steps:
    - uses: actions/download-artifact@v4
      with:
        name: build-output
        path: .next/
    - run: ./deploy.sh production
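The same build-once pattern applies when the artifact is a Docker image: save it to a tarball, upload it, and load it in downstream jobs instead of rebuilding. A sketch (the image name and E2E command are illustrative):

build-image:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - run: |
        docker build -t myapp:ci .
        docker save myapp:ci -o myapp-image.tar
    - uses: actions/upload-artifact@v4
      with:
        name: myapp-image
        path: myapp-image.tar
        retention-days: 1

e2e:
  needs: build-image
  runs-on: ubuntu-latest
  steps:
    - uses: actions/download-artifact@v4
      with:
        name: myapp-image
    - run: docker load -i myapp-image.tar
    - run: docker run --rm myapp:ci npm run test:e2e

For large images, pushing to a registry and pulling by digest is often faster than the artifact round-trip.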
Problem 6: Large Repository Checkout
Since v2, actions/checkout defaults to a shallow fetch of the latest commit, but many workflows override it with fetch-depth: 0 because some tool (a changelog generator, SonarQube) wants full history. For large repositories with years of history, that full clone can take minutes. Unless a tool genuinely needs history, keep the checkout shallow:
- uses: actions/checkout@v4
  with:
    fetch-depth: 1  # Shallow clone, only latest commit
For monorepos, consider a sparse checkout so that only the relevant directories are materialized in the working tree:
- uses: actions/checkout@v4
  with:
    fetch-depth: 1
    sparse-checkout: |
      apps/web-company
      packages/ui
      packages/utils
Real-World Results
Applying all of these optimizations to a real Next.js monorepo project reduced CI/CD time from 14 minutes to 1 minute 45 seconds: dependency caching saved 3 minutes, Docker layer caching saved 3 minutes, parallel jobs saved 2 minutes, path filters eliminated unnecessary runs entirely, shallow checkout saved 30 seconds, and artifact reuse eliminated duplicate builds.
The cost savings are significant too. GitHub Actions bills private repositories per minute of runner time. At 14 minutes per run and 40 pushes per day, you consume 560 minutes daily. At 1.75 minutes per run, that drops to 70 minutes: an 87.5 percent reduction in CI/CD compute costs.
ZeonEdge optimizes CI/CD pipelines for speed and cost efficiency. Learn more about our DevOps services.
Marcus Rodriguez
Lead DevOps Engineer specializing in CI/CD pipelines, container orchestration, and infrastructure automation.