
Docker Security in 2026: Rootless Containers, Secrets Management, and Supply Chain Protection

Docker security has evolved dramatically. Learn rootless containers, BuildKit secrets, Sigstore image signing, SBOM generation, and runtime protection with Falco for hardened production deployments.


Marcus Rodriguez

Lead DevOps Engineer specializing in CI/CD pipelines, container orchestration, and infrastructure automation.

March 14, 2026
22 min read

The Container Security Landscape in 2026

Container security breaches have cost enterprises billions in 2025. The attack surface has expanded dramatically: compromised base images, secrets baked into layers, overprivileged containers running as root, and poisoned dependencies in multi-stage builds. Docker's response has been a comprehensive security framework — but most teams are still running containers the same way they did in 2018.

This guide covers every layer of the modern Docker security stack: rootless containers that eliminate privilege escalation, BuildKit's secrets API that prevents credential leakage, Sigstore/Cosign for cryptographic image verification, SBOM generation for compliance, and Falco for runtime threat detection. Each section includes production-ready configurations, not just theory.

Why Root Containers Are a Critical Risk

The default docker run command runs your process as root (UID 0) inside the container. When a container escape occurs — and they do, via kernel vulnerabilities, misconfigured volumes, or privileged mode — that root maps to root on the host. The impact is total host compromise.

The statistics are stark: 78% of containers in production still run as root (Sysdig 2025 report). For virtually every workload, this is unnecessary.
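A quick way to measure this in your own repositories is a text-level audit: any Dockerfile whose final stage never issues a USER instruction produces an image that runs as root. A minimal sketch (heuristic only — it does not distinguish multi-stage builds, where only the final stage's USER matters):

```shell
# Heuristic audit: flag Dockerfiles that never set a USER instruction,
# meaning the resulting image runs as root by default.
find_root_dockerfiles() {
  for f in "$@"; do
    grep -q '^USER ' "$f" || echo "$f: no USER instruction (runs as root)"
  done
  return 0
}
```

Run it as `find_root_dockerfiles $(git ls-files '*Dockerfile*')` to sweep a repository.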

Rootless Docker: Complete Setup Guide

Rootless Docker runs the entire Docker daemon — not just containers — as a non-root user. The daemon itself has no elevated privileges. Container escapes cannot escalate to host root because the daemon doesn't have it.

Installing Rootless Docker

# Prerequisites: newuidmap and newgidmap (uidmap package)
sudo apt-get install -y uidmap

# Install rootless Docker for current user
curl -fsSL https://get.docker.com/rootless | sh

# Add to PATH
export PATH=/home/$USER/bin:$PATH
export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock

# Enable lingering so service persists after logout
sudo loginctl enable-linger $USER

# Start rootless daemon
systemctl --user start docker
systemctl --user enable docker

# Verify: daemon PID should be owned by your user
ps aux | grep dockerd | head -1

Migrating Existing Workloads to Rootless

# Check which containers currently run as root
docker ps -q | xargs -I{} docker inspect {} \
  --format '{{.Name}}: User={{.Config.User}} Privileged={{.HostConfig.Privileged}}'

# Containers showing empty User or User=root need fixing

Dockerfile Changes for Non-Root Operation

# BAD: runs as root implicitly
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --production
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]

# GOOD: explicit non-root user
FROM node:20-alpine

# Create app directory with correct ownership
RUN addgroup -g 1001 -S nodejs \
 && adduser -S nextjs -u 1001 -G nodejs

WORKDIR /app

# Copy package files as root (for npm ci), then chown
COPY --chown=nextjs:nodejs package*.json ./
RUN npm ci --production --ignore-scripts

# Copy source with correct ownership
COPY --chown=nextjs:nodejs . .

# Switch to non-root user
USER nextjs

# Use unprivileged port (>1024)
EXPOSE 3000
CMD ["node", "server.js"]

Handling Capabilities Without Root

# Some apps need specific capabilities without full root
# Example: app needs to bind port 80 (normally requires root)

FROM python:3.13-slim

# Install libcap2-bin for setcap
RUN apt-get update && apt-get install -y --no-install-recommends \
      libcap2-bin \
 && rm -rf /var/lib/apt/lists/*

RUN adduser --disabled-password --gecos '' appuser

# Grant specific capability to Python binary
RUN setcap 'cap_net_bind_service=+ep' /usr/local/bin/python3.13

USER appuser
EXPOSE 80
CMD ["python", "-m", "uvicorn", "main:app", "--port", "80"]
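To confirm what a process actually holds after dropping root, you can read the effective capability bitmask straight from /proc (Linux only; a fully unprivileged process shows all zeros, root in a default container shows a large mask — decode it with `capsh --decode=<mask>` where available):

```shell
# Print the effective capability bitmask (CapEff) of the current process.
# CapEff is a hex bitmask from /proc/self/status on Linux.
cap_mask() {
  awk '/^CapEff/ {print $2}' /proc/self/status
}

cap_mask
```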

BuildKit Secrets: Zero Credential Leakage

The most common Docker security mistake: credentials in build args or ENV statements that persist in image layers. Every docker history command reveals them. BuildKit's --secret flag mounts secrets at build time without writing them to any layer.

BuildKit Secret Mounts

# syntax=docker/dockerfile:1.7

FROM python:3.13-slim

# Secret is mounted at /run/secrets/pip_token during build
# but NEVER written to any image layer
RUN --mount=type=secret,id=pip_token \
    pip install --extra-index-url \
    "https://user:$(cat /run/secrets/pip_token)@pypi.company.com/simple/" \
    private-package==1.2.3

# For SSH keys (private git repos)
RUN --mount=type=ssh \
    pip install "git+ssh://git@github.com/company/private-lib.git@v2.1"

# For apt with authenticated sources
RUN --mount=type=secret,id=apt_auth,target=/etc/apt/auth.conf.d/private.conf \
    apt-get update && apt-get install -y private-package

# Build with secret injection
docker buildx build \
  --secret id=pip_token,src=$HOME/.tokens/pip_token \
  --ssh default=$SSH_AUTH_SOCK \
  -t myapp:latest .

# Verify: secret should NOT appear in history
docker history myapp:latest --no-trunc | grep -i token
# Should return empty

Multi-Stage Builds for Credential Isolation

# syntax=docker/dockerfile:1.7

# Stage 1: Builder with credentials
FROM node:20-alpine AS builder

ARG NPM_TOKEN
# Write .npmrc only in this stage
RUN echo "//registry.npmjs.org/:_authToken=${NPM_TOKEN}" > .npmrc

WORKDIR /app
COPY package*.json ./
RUN npm ci

# .npmrc is NOT copied to final stage
FROM node:20-alpine AS runtime

WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY . .

RUN adduser -D -u 1001 appuser
USER appuser

EXPOSE 3000
CMD ["node", "server.js"]

Image Signing with Sigstore/Cosign

Sigstore has become the standard for container image signing. Cosign handles signing and verification; Rekor provides the transparency log. In 2026, major registries (Docker Hub, ECR, GCR) all support Cosign verification natively.

Setting Up Cosign Signing Pipeline

# Install cosign
brew install sigstore/tap/cosign

# Generate signing key pair
cosign generate-key-pair
# Creates cosign.key (private, protect this!) and cosign.pub

# Sign an image after pushing
IMAGE="registry.company.com/myapp:v1.2.3"
cosign sign --key cosign.key "$IMAGE"

# Sign with OIDC (keyless signing — recommended for CI)
cosign sign "$IMAGE"
# Opens browser for OIDC flow, records identity in Rekor transparency log

# Verify before deployment
cosign verify --key cosign.pub "$IMAGE"

# Verify keyless signature
cosign verify \
  --certificate-identity developer@company.com \
  --certificate-oidc-issuer https://accounts.google.com \
  "$IMAGE"

GitHub Actions Signing Workflow

name: Build and Sign

on:
  push:
    tags: ['v*']

permissions:
  contents: read
  packages: write
  id-token: write  # Required for keyless signing

jobs:
  build-sign:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      
      - name: Install Cosign
        uses: sigstore/cosign-installer@v3
        
      - name: Build and push
        uses: docker/build-push-action@v5
        id: build
        with:
          push: true
          tags: ghcr.io/${{ github.repository }}:${{ github.ref_name }}
          
      - name: Sign image (keyless OIDC)
        run: |
          cosign sign --yes \
            ghcr.io/${{ github.repository }}@${{ steps.build.outputs.digest }}

Enforcing Signature Verification in Kubernetes

# Using Kyverno policy to enforce signed images
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-signed-images
spec:
  validationFailureAction: Enforce
  background: false
  rules:
    - name: verify-image-signature
      match:
        any:
          - resources:
              kinds: [Pod]
              namespaces: [production]
      verifyImages:
        - imageReferences:
            - "registry.company.com/*"
          attestors:
            - count: 1
              entries:
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      MFkwEwYH...
                      -----END PUBLIC KEY-----

SBOM Generation for Compliance

Software Bill of Materials (SBOM) is now legally required for US government contracts (Executive Order 14028) and increasingly mandated by enterprise security policies. Docker BuildKit can generate SBOMs natively.

# Generate SBOM during build (SPDX format)
docker buildx build \
  --sbom=true \
  --output type=local,dest=./sbom \
  -t myapp:latest .

# Generate SBOM for existing image
syft myapp:latest -o spdx-json > sbom.spdx.json

# Scan SBOM for vulnerabilities with Grype
grype sbom:./sbom.spdx.json

# Attach SBOM to image with Cosign
cosign attach sbom --sbom sbom.spdx.json myapp:latest

# Verify SBOM attestation
cosign verify-attestation \
  --type spdxjson \
  --key cosign.pub \
  myapp:latest

Vulnerability Scanning: Trivy in CI/CD

Trivy is the industry standard for container vulnerability scanning — it scans OS packages, language dependencies, IaC files, and secrets in a single tool.

# Install Trivy
curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh

# Full image scan (CVEs, secrets, misconfigs)
trivy image --format table myapp:latest

# Fail CI on CRITICAL/HIGH CVEs
trivy image \
  --exit-code 1 \
  --severity CRITICAL,HIGH \
  --ignore-unfixed \
  myapp:latest

# Scan Dockerfile for misconfigurations
trivy config ./Dockerfile

# Generate CycloneDX SBOM
trivy image --format cyclonedx --output sbom.cdx.json myapp:latest

Trivy in GitHub Actions

- name: Run Trivy vulnerability scan
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: 'myapp:${{ github.sha }}'
    format: 'sarif'
    output: 'trivy-results.sarif'
    severity: 'CRITICAL,HIGH'
    exit-code: '1'
    ignore-unfixed: true

- name: Upload scan results to Security tab
  uses: github/codeql-action/upload-sarif@v3
  if: always()
  with:
    sarif_file: 'trivy-results.sarif'

Runtime Security with Falco

Falco is the de facto standard for container runtime threat detection. It hooks into the Linux kernel via eBPF to detect anomalous behavior: shells spawning in containers, unexpected network connections, file system writes to sensitive paths, privilege escalations.

Deploying Falco on Kubernetes

# Add Falco Helm repo
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update

# Install with eBPF driver (preferred over kernel module)
helm install falco falcosecurity/falco \
  --namespace falco \
  --create-namespace \
  --set driver.kind=ebpf \
  --set falcosidekick.enabled=true \
  --set falcosidekick.config.slack.webhookurl="https://hooks.slack.com/..." \
  --set falcosidekick.config.slack.minimumpriority=warning

Custom Falco Rules for Your Environment

# /etc/falco/rules.d/company-rules.yaml

# Alert on any shell in production containers
- rule: Shell spawned in production container
  desc: A shell was spawned in a production container
  condition: >
    spawned_process and container
    and container.label.env = "production"
    and proc.name in (shell_binaries)
  output: >
    Shell spawned in production (user=%user.name container=%container.name
    image=%container.image.repository shell=%proc.name parent=%proc.pname)
  priority: CRITICAL
  tags: [production, shell]

# Alert on crypto miners
- rule: Cryptominer detected
  desc: Crypto mining process or network connection detected
  condition: >
    spawned_process and container
    and (proc.name in (known_miners) or
         proc.cmdline contains "stratum+tcp" or
         proc.cmdline contains "xmrig")
  output: Cryptominer in container (container=%container.name proc=%proc.cmdline)
  priority: CRITICAL

# Alert on sensitive file access
- rule: Container accessing host secrets
  desc: Container is reading /etc/shadow or /etc/passwd from host
  condition: >
    open_read and container
    and (fd.name = /etc/shadow or fd.name = /etc/passwd)
    and not proc.name in (known_safe_processes)
  output: Sensitive file read in container (container=%container.name file=%fd.name)
  priority: WARNING

Docker Compose Security Hardening

Production Compose files need security constraints that are never set by default.

version: "3.9"

services:
  api:
    image: registry.company.com/api:v2.1.0@sha256:abc123...  # Pin by digest!
    
    security_opt:
      - no-new-privileges:true          # Prevent privilege escalation
      - seccomp:./seccomp-profile.json  # Custom seccomp profile
    
    cap_drop:
      - ALL                             # Drop all capabilities
    cap_add:
      - NET_BIND_SERVICE               # Add back only what's needed
    
    read_only: true                     # Read-only root filesystem
    
    tmpfs:
      - /tmp:size=100m,noexec          # Writable tmp without exec
      - /var/run:size=10m
    
    user: "1001:1001"                   # Non-root user:group
    
    ulimits:
      nproc: 65535
      nofile:
        soft: 65535
        hard: 65535
    
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 512M
        reservations:
          cpus: '0.5'
          memory: 128M
    
    networks:
      - app-internal  # Isolated network, not default bridge
    
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
    
    environment:
      - NODE_ENV=production
    
    secrets:
      - db_password
      - jwt_secret

secrets:
  db_password:
    external: true   # From Docker Swarm secrets or Vault
  jwt_secret:
    external: true

networks:
  app-internal:
    driver: bridge
    driver_opts:
      com.docker.network.bridge.enable_icc: "false"  # No inter-container comms by default

Seccomp Profiles: Restricting System Calls

The Linux kernel exposes ~350 system calls. A typical Node.js application uses fewer than 50. Restricting to only needed syscalls dramatically reduces attack surface.

# Generate a syscall profile using strace
strace -c -f -e trace=all node server.js 2>&1 | head -50

# Use docker/labs-make-runbook to auto-generate seccomp
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
  docker/labs-make-runbook:latest

# Apply default seccomp (Docker's built-in — blocks 44 dangerous syscalls)
docker run --security-opt seccomp=/etc/docker/seccomp-default.json myapp

# Create custom minimal profile
cat > seccomp-node.json << 'EOF'
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": [
        "read", "write", "open", "close", "stat", "fstat", "lstat",
        "poll", "lseek", "mmap", "mprotect", "munmap", "brk",
        "rt_sigaction", "rt_sigprocmask", "ioctl", "pread64",
        "access", "pipe", "select", "sched_yield", "mremap",
        "accept", "bind", "connect", "getpeername", "getsockname",
        "setsockopt", "socket", "socketpair", "recvfrom", "sendto",
        "listen", "shutdown", "epoll_wait", "epoll_ctl", "epoll_create1",
        "clone", "fork", "execve", "wait4", "exit", "exit_group",
        "getpid", "getuid", "getgid", "geteuid", "getegid",
        "nanosleep", "clock_gettime", "futex", "set_robust_list"
      ],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
EOF
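The strace summary above can feed the "names" array mechanically. A rough sketch (assumes strace's `-c` summary table format, where the syscall name is the last column of each data row — verify against your strace version's output):

```shell
# Extract syscall names from `strace -c` summary output and emit a JSON
# array suitable for the "names" field of a seccomp profile.
strace_to_allowlist() {
  awk '
    # Data rows start with a "% time" percentage and end with a syscall name;
    # skip the header, separator, and "total" rows.
    NF >= 5 && $1 ~ /^[0-9]+\.[0-9]+$/ && $NF ~ /^[a-z_][a-z0-9_]*$/ && $NF != "total" {
      if (!seen[$NF]++) names[n++] = $NF
    }
    END {
      printf "["
      for (i = 0; i < n; i++) printf "%s\"%s\"", (i ? ", " : ""), names[i]
      print "]"
    }'
}
```

Usage: `strace -c -f node server.js 2>&1 | strace_to_allowlist`, then paste the array into the profile and retest the app under it.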

Container Registry Security

Private Registry with Authentication

# Configure Docker credential store (don't store in plain JSON)
# Install pass or docker-credential-helpers

# On macOS, credentials go to Keychain automatically
# On Linux, install helper:
sudo apt-get install gnupg2 pass
gpg2 --gen-key
pass init <your-gpg-key-id>

# Configure Docker to use credential store
cat ~/.docker/config.json
{
  "credsStore": "pass"
}
# Use "credsStore": "desktop" on macOS (Docker Desktop Keychain helper)

# Rotate registry credentials
docker logout registry.company.com
docker login registry.company.com -u ci-bot -p "$(vault kv get -field=password secret/ci/registry)"

Image Policy with OPA/Gatekeeper

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRepos
metadata:
  name: allow-only-internal-registry
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    namespaces: ["production", "staging"]
  parameters:
    repos:
      - "registry.company.com/"
      - "gcr.io/company-project/"
    # Blocks: docker.io, quay.io, etc. in production

Docker Bench Security Audit

# Run Docker Bench for Security — comprehensive host/daemon/container audit
docker run --rm -it \
  --net host \
  --pid host \
  --userns host \
  --cap-add audit_control \
  -e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST \
  -v /etc:/etc:ro \
  -v /lib/systemd/system:/lib/systemd/system:ro \
  -v /usr/bin/containerd:/usr/bin/containerd:ro \
  -v /usr/bin/runc:/usr/bin/runc:ro \
  -v /usr/lib/systemd:/usr/lib/systemd:ro \
  -v /var/lib:/var/lib:ro \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  --label docker_bench_security \
  docker/docker-bench-security

# Key checks: daemon config, container runtime, Docker files permissions

Security Checklist for Production Docker

Use this checklist before any production deployment:

  • Image provenance: All images are from trusted registries, pinned by digest (image@sha256:...), and cosign-verified
  • Non-root execution: All containers run as non-root user (UID > 1000)
  • Read-only filesystem: read_only: true with explicit tmpfs mounts
  • Capability dropping: cap_drop: ALL with only required caps added back
  • No privileged mode: privileged: false always
  • Seccomp profile: Custom or default Docker seccomp applied
  • No host networking: Never network_mode: host in production
  • Resource limits: CPU and memory limits always set
  • Secrets management: No ENV or ARG with credentials; use secrets API
  • Trivy scanning: No unfixed CRITICAL/HIGH CVEs
  • Falco deployed: Runtime threat detection active
  • SBOM generated: And attached to image via Cosign attestation
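Several of these items can be spot-checked mechanically before a deploy. A rough grep-based sketch over a Compose file (heuristic only — field names match the hardened example above; a policy engine like Kyverno or OPA Gatekeeper remains the real enforcement layer):

```shell
# Heuristic audit of a docker-compose file against a few checklist items.
audit_compose() {
  f="$1"
  if grep -q 'privileged: *true' "$f";     then echo "FAIL: privileged mode enabled"; fi
  if grep -q 'network_mode: *host' "$f";   then echo "FAIL: host networking in use"; fi
  if ! grep -q 'read_only: *true' "$f";    then echo "WARN: no read-only root filesystem"; fi
  if ! grep -q 'cap_drop' "$f";            then echo "WARN: capabilities not dropped"; fi
  if ! grep -Eq 'image:.*@sha256:' "$f";   then echo "WARN: image not pinned by digest"; fi
}
```

Wire it into CI as `audit_compose docker-compose.prod.yml` and fail the pipeline on any FAIL line.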

Conclusion

Container security in 2026 is a multi-layer discipline. No single control is sufficient: rootless containers prevent privilege escalation, BuildKit secrets prevent credential leakage, Sigstore prevents supply chain compromise, Trivy catches known vulnerabilities, and Falco catches runtime anomalies that bypass all static checks.

The good news: every tool here is open-source, battle-tested, and integrates cleanly into existing CI/CD pipelines. The organizations with the best container security posture aren't those with the biggest budgets — they're the ones who've methodically applied each layer of defense. Start with non-root containers and Trivy scanning this week, and add the remaining layers incrementally.
