Docker transformed how we build, ship, and run software. But Docker's default configuration prioritizes convenience over security. Out of the box, Docker containers run as root, share the host kernel, have unrestricted network access, and can mount any host directory. A single misconfigured container can compromise your entire infrastructure.
In 2025, container-related security incidents increased 47% year-over-year (Sysdig Cloud-Native Security Report). The most common attack vectors: vulnerable base images (32%), misconfigured containers running as root (28%), exposed Docker sockets (18%), hardcoded secrets in images (12%), and excessive container privileges (10%). Every one of these is preventable with proper configuration.
This guide covers Docker security from the ground up: image selection, build process hardening, runtime security configuration, secrets management, network isolation, and monitoring. Whether you're running a single Docker host or orchestrating thousands of containers, these practices apply.
Chapter 1: Image Security – The Foundation of Container Security
Your container is only as secure as the image it's built from. If your base image contains known vulnerabilities, every container you launch inherits those vulnerabilities. Image security starts with choosing the right base image and continues through your entire build pipeline.
Choosing Secure Base Images
The first decision in any Dockerfile is the base image. This decision has more security impact than any other choice you'll make.
Alpine Linux (5MB): The most popular minimal base image. Alpine uses musl libc instead of glibc, which means a smaller attack surface but occasional compatibility issues with software compiled against glibc. Alpine's package manager (apk) provides regularly updated packages. Best for: Go binaries, Node.js applications, and any workload that doesn't require glibc-specific libraries.
Google Distroless (2-15MB): Distroless images contain only your application and its runtime dependencies: no shell, no package manager, no utilities. An attacker who compromises a distroless container can't spawn a shell, can't install tools, and can't easily exfiltrate data. This makes post-exploitation significantly harder. Best for: Java, Python, Node.js, and Go applications in production. Not suited to development or debugging, since there is no shell to exec into (the :debug image variants add a BusyBox shell for troubleshooting).
Chainguard Images (varies): A newer option that provides hardened base images built to ship with zero known CVEs, with FIPS-validated variants available. Chainguard rebuilds images daily with the latest security patches and provides an SBOM (Software Bill of Materials) for every image. Best for: regulated environments that require compliance certifications.
Ubuntu/Debian (75-125MB): Full-featured base images with comprehensive package repositories. These images contain hundreds of packages, many of which your application doesn't need, and each one is a potential vulnerability. Only use these if your application specifically requires packages not available in minimal distributions, and even then, consider a multi-stage build that copies only the needed binaries into a minimal final image.
Images to avoid in production: anything pinned only to a latest tag (mutable, unpredictable), images from unknown publishers on Docker Hub, images without regular security updates, and images based on end-of-life operating systems.
Image Scanning: Finding Vulnerabilities Before Deployment
Every image should be scanned for known vulnerabilities before it reaches production. Image scanning tools compare the packages in your image against vulnerability databases (CVE, NVD) and report findings by severity.
# Trivy: the most popular open-source scanner
# Scan a local image
trivy image myapp:v1.2.3
# Scan with severity filter (only critical and high)
trivy image --severity CRITICAL,HIGH myapp:v1.2.3
# Scan and fail CI if critical vulnerabilities found
trivy image --exit-code 1 --severity CRITICAL myapp:v1.2.3
# Scan a Dockerfile for misconfigurations
trivy config Dockerfile
# Scan a running container's filesystem
trivy rootfs /path/to/container/rootfs
# Generate SBOM (Software Bill of Materials)
trivy image --format spdx-json --output sbom.json myapp:v1.2.3
Integrate scanning into your CI/CD pipeline. The scan should run after the image is built but before it's pushed to the registry. If critical or high-severity vulnerabilities are found, the pipeline should fail and the image should not be deployed.
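As a sketch, a CI job along these lines enforces that gate (shown for GitHub Actions; the job name, image name, and registry are placeholders):

```yaml
# Hypothetical CI job: build, scan, and only push if the scan passes
jobs:
  build-and-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Scan with Trivy (fail on CRITICAL/HIGH)
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: myapp:${{ github.sha }}
          severity: CRITICAL,HIGH
          exit-code: '1'
      - name: Push image   # only reached if the scan step passed
        run: docker push registry.example.com/myapp:${{ github.sha }}
```

Because the scan step exits non-zero on findings, the push step never runs for a vulnerable image.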
But scanning alone isn't enough. Scanners only detect known vulnerabilities; they can't find zero-days, logic flaws, or misconfigurations specific to your application. Scanning is one layer of a defense-in-depth strategy.
Image Digests: Immutable References
Docker tags are mutable. nginx:1.25 today might be a different image than nginx:1.25 tomorrow. If someone pushes a compromised image with the same tag, you'll pull the compromised version without knowing.
Image digests are immutable SHA256 hashes that uniquely identify an image. Use digests in production to guarantee you're running the exact image you tested:
# Bad: tag is mutable
FROM nginx:1.25
# Good: digest is immutable
FROM nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764e...
# Find the digest of an image
docker inspect --format='{{index .RepoDigests 0}}' nginx:1.25
# Pin images in docker-compose.yml
services:
web:
image: nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764e...
Multi-Stage Builds: Minimizing Attack Surface
Multi-stage builds are the single most impactful technique for reducing image size and attack surface. The idea: use a full-featured image for building your application, then copy only the compiled artifact into a minimal runtime image.
# Multi-stage build example for a Go application
# Stage 1: Build (full Go toolchain image, several hundred MB)
FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
# Build a statically linked binary
RUN CGO_ENABLED=0 GOOS=linux go build -ldflags='-w -s' -o /app/server ./cmd/server
# Stage 2: Runtime (distroless static base, ~2MB)
FROM gcr.io/distroless/static-debian12:nonroot
COPY --from=builder /app/server /server
USER nonroot:nonroot
EXPOSE 8080
ENTRYPOINT ["/server"]
# Result: ~10MB image with only your binary
# No shell, no package manager, no utilities for attackers
For Node.js applications, the multi-stage approach copies only the production node_modules and compiled assets:
# Multi-stage build for Next.js
FROM node:20-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
CMD ["node", "server.js"]
Chapter 2: Dockerfile Hardening – Secure Build Practices
Never Run as Root
By default, Docker containers run as root (UID 0). If an attacker exploits a vulnerability in your application, they have root access inside the container. Combined with a kernel vulnerability or misconfiguration, this can escalate to root access on the host.
Always create a non-root user and switch to it:
# Create a non-root user
RUN groupadd --system --gid 1001 appgroup && useradd --system --uid 1001 --gid appgroup appuser
# Set ownership of application files
COPY --chown=appuser:appgroup . /app
# Switch to non-root user
USER appuser
# Verify: container should not run as root
# docker run myapp whoami   (should output "appuser")
Some applications require root during setup (e.g., installing packages) but not during runtime. Use multi-stage builds to install as root in the build stage, then run as non-root in the runtime stage.
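A sketch of that split (the native build tools and file paths here are illustrative):

```dockerfile
# Build stage: root is fine here; nothing from this stage ships except the artifacts
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
# Root-only setup: native toolchain for node-gyp dependencies
RUN apk add --no-cache python3 make g++
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: never root after startup
FROM node:20-alpine
WORKDIR /app
RUN addgroup --system --gid 1001 appgroup
RUN adduser --system --uid 1001 appuser
COPY --from=builder --chown=appuser:appgroup /app/dist ./dist
COPY --from=builder --chown=appuser:appgroup /app/node_modules ./node_modules
USER appuser
CMD ["node", "dist/server.js"]
```

The compilers and package manager state stay behind in the builder stage, so the runtime image is both smaller and root-free.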
Read-Only Filesystem
If your application doesn't need to write to the filesystem, make it read-only. This prevents attackers from writing malware, backdoors, or configuration changes to the container filesystem:
# Run with read-only filesystem
docker run --read-only myapp
# If your app needs temporary write access, use tmpfs
docker run --read-only --tmpfs /tmp:rw,noexec,nosuid myapp
# In docker-compose.yml
services:
app:
image: myapp:v1.2.3
read_only: true
tmpfs:
- /tmp:rw,noexec,nosuid
- /var/run:rw,noexec,nosuid
Drop All Capabilities
Linux capabilities grant specific privileges to processes. Docker containers start with a subset of capabilities (including NET_RAW, which allows packet sniffing, and SYS_CHROOT). Most applications don't need any capabilities.
# Drop all capabilities, add back only what's needed
docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE myapp
# In docker-compose.yml
services:
app:
image: myapp:v1.2.3
cap_drop:
- ALL
cap_add:
- NET_BIND_SERVICE # Only if binding to ports below 1024
security_opt:
- no-new-privileges:true # Prevent privilege escalation
The no-new-privileges security option prevents processes inside the container from gaining additional privileges through setuid/setgid binaries or capability escalation. Always enable this.
Minimize Layers and Clean Up
Each Dockerfile instruction creates a layer. Files deleted in a later layer still exist in the previous layer; they're just hidden. If you install packages, copy secrets, or download files and then delete them in a separate RUN instruction, they're still in the image.
# Bad: secret exists in layer 2 even though it's deleted in layer 3
COPY secret.key /app/secret.key
RUN /app/setup.sh
RUN rm /app/secret.key
# Good: single layer, file never persists
RUN --mount=type=secret,id=mykey,target=/tmp/secret.key /app/setup.sh
# Good: combine commands to reduce layers and clean up
RUN apt-get update && apt-get install -y --no-install-recommends curl ca-certificates && curl -fsSL https://example.com/install.sh | sh && apt-get purge -y curl && apt-get autoremove -y && rm -rf /var/lib/apt/lists/*
Use .dockerignore
The .dockerignore file prevents sensitive files from being copied into the image. Without it, COPY . . copies everything β including .env files, .git directories, private keys, and other sensitive data.
# .dockerignore
.git
.gitignore
.env
.env.*
*.md
docker-compose*.yml
Dockerfile*
node_modules
.next
coverage
tests
__tests__
*.pem
*.key
id_rsa*
Chapter 3: Runtime Security – Protecting Running Containers
Resource Limits: Preventing Resource Exhaustion
A container without resource limits can consume all available CPU, memory, and disk I/O on the host, affecting every other container. This is both a stability concern (a memory leak crashes the host) and a security concern (denial-of-service attacks are trivial without limits).
# Set memory and CPU limits
docker run --memory=512m --memory-swap=512m --cpus=0.5 myapp
# In docker-compose.yml
services:
app:
image: myapp:v1.2.3
deploy:
resources:
limits:
cpus: '0.50'
memory: 512M
reservations:
cpus: '0.25'
memory: 256M
# Prevent fork bombs
ulimits:
nproc: 100
nofile:
soft: 1024
hard: 2048
# Limit container restarts (prevents restart loops)
restart: on-failure:5
Setting --memory-swap equal to --memory disables swap, which prevents the container from using disk as virtual memory (slower, and can exhaust disk space).
Network Isolation
By default, all containers on the same Docker network can communicate with each other. In production, containers should only communicate with the services they need.
# docker-compose.yml with network isolation
services:
frontend:
image: frontend:v1.0
networks:
- frontend-net
ports:
- "443:3000"
backend:
image: backend:v1.0
networks:
- frontend-net # Can talk to frontend
- backend-net # Can talk to database
# No ports exposed to host; only accessible via frontend-net
database:
image: postgres:16-alpine
networks:
- backend-net # Only accessible from backend
# No ports exposed to host
# Frontend cannot reach database directly
redis:
image: redis:7-alpine
networks:
- backend-net
# Only backend can access Redis
networks:
frontend-net:
driver: bridge
backend-net:
driver: bridge
internal: true # No external access at all
The internal: true flag on the backend-net network prevents any container on that network from accessing the internet. The database and Redis have no route to the outside world β they can only communicate with the backend.
Protect the Docker Socket
The Docker socket (/var/run/docker.sock) is the API endpoint that controls Docker. Any container with access to the socket has full control over the Docker daemon: it can create new containers, read secrets from other containers, mount the host filesystem, and effectively has root access to the host.
Never mount the Docker socket into a container unless absolutely necessary (e.g., for CI/CD runners or monitoring tools). If you must mount it, use a Docker socket proxy (like Tecnativa's docker-socket-proxy) that filters API calls and only allows specific operations.
# NEVER do this in production:
docker run -v /var/run/docker.sock:/var/run/docker.sock myapp
# If you must (e.g., for Portainer, monitoring):
# Use a socket proxy that restricts API access
services:
socket-proxy:
image: tecnativa/docker-socket-proxy
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
environment:
CONTAINERS: 1 # Allow listing containers
IMAGES: 0 # Deny image operations
EXEC: 0 # Deny exec into containers
VOLUMES: 0 # Deny volume operations
POST: 0 # Deny all POST requests (read-only)
networks:
- proxy-net
monitoring:
image: monitoring-tool
environment:
DOCKER_HOST: tcp://socket-proxy:2375
networks:
- proxy-net
# No direct access to Docker socket
Seccomp Profiles: Restricting System Calls
Seccomp (Secure Computing Mode) restricts which Linux system calls a container can make. Docker's default seccomp profile blocks about 44 dangerous syscalls (including reboot, mount, keyctl), but you can create a more restrictive custom profile tailored to your application.
# Generate a seccomp profile by tracing your application
# 1. Run the app with seccomp disabled and record the syscalls it uses.
#    strace must be installed inside the image, and ptrace needs SYS_PTRACE:
docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined myapp strace -f -c <your-app-command>
# 2. Create a custom profile that only allows those syscalls
# custom-seccomp.json
{
"defaultAction": "SCMP_ACT_ERRNO",
"architectures": ["SCMP_ARCH_X86_64"],
"syscalls": [
{
"names": ["read", "write", "open", "close", "stat", "fstat",
"mmap", "mprotect", "munmap", "brk", "rt_sigaction",
"rt_sigprocmask", "ioctl", "access", "pipe", "select",
"sched_yield", "clone", "execve", "exit", "wait4",
"fcntl", "getdents64", "getcwd", "chdir", "rename",
"mkdir", "link", "unlink", "readlink", "chmod",
"getuid", "getgid", "geteuid", "getegid",
"epoll_create1", "epoll_ctl", "epoll_wait",
"socket", "connect", "accept", "bind", "listen",
"sendto", "recvfrom", "setsockopt", "getsockopt",
"futex", "set_robust_list", "clock_gettime",
"exit_group", "openat", "newfstatat"],
"action": "SCMP_ACT_ALLOW"
}
]
}
# 3. Apply the custom profile
docker run --security-opt seccomp=custom-seccomp.json myapp
Chapter 4: Secrets Management – Keeping Credentials Out of Images
The Problem with Environment Variables
The most common way to pass secrets to containers is environment variables. While better than hardcoding secrets in images, environment variables have security issues: they're visible in docker inspect, they appear in process listings (/proc/*/environ), they're included in crash dumps and error reports, and they persist in the container's metadata.
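The process-listing leak is easy to demonstrate on a Linux host. This sketch spawns a child process with a fake secret in its environment, the way docker run -e would, and reads it straight back out of /proc; docker inspect exposes the same data for containers:

```python
import subprocess

# Spawn a child with a "secret" in its environment, analogous to
# `docker run -e DB_PASSWORD=...`
child = subprocess.Popen(["sleep", "30"], env={"DB_PASSWORD": "hunter2"})

# Any process with same-user access to /proc can read the child's environment
with open(f"/proc/{child.pid}/environ", "rb") as f:
    env_blob = f.read()
child.terminate()

leaked = b"DB_PASSWORD=hunter2" in env_blob
print(leaked)  # prints True on Linux: the secret is readable outside the process
```

No exploit is involved; this is simply how environment variables work, which is why secret files and secret managers are preferred.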
Docker Secrets (Swarm Mode)
Docker Swarm provides built-in secrets management. Secrets are encrypted at rest, encrypted in transit, and mounted as files in the container's filesystem (at /run/secrets/). They're only available to services that explicitly request them.
# Create a secret
echo "my_database_password" | docker secret create db_password -
# Use in docker-compose.yml (Swarm mode)
services:
app:
image: myapp:v1.2.3
secrets:
- db_password
- api_key
environment:
DB_PASSWORD_FILE: /run/secrets/db_password
API_KEY_FILE: /run/secrets/api_key
secrets:
db_password:
external: true
api_key:
external: true
Your application reads from the file instead of an environment variable:
// Read secret from file (Node.js)
import { readFileSync } from 'fs';
function getSecret(name: string): string {
const filePath = process.env[`${name}_FILE`];
if (filePath) {
return readFileSync(filePath, 'utf-8').trim();
}
// Fallback to environment variable for development
return process.env[name] || '';
}
const dbPassword = getSecret('DB_PASSWORD');
const apiKey = getSecret('API_KEY');
External Secret Managers
For production environments, use a dedicated secret manager: HashiCorp Vault, AWS Secrets Manager, Azure Key Vault, or Google Secret Manager. These provide: encryption at rest and in transit, access control and audit logging, automatic rotation, versioning, and dynamic secrets (credentials generated on demand with automatic expiration).
The pattern is: your container starts, authenticates with the secret manager (using a short-lived token, IAM role, or service account), retrieves the secrets it needs, and uses them. The secrets never appear in environment variables, Docker metadata, or image layers.
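As an illustrative sketch of that startup flow (the secret name and the AWS CLI are assumptions; the same shape works with vault kv get or the other managers' CLIs), an entrypoint script might look like:

```sh
#!/bin/sh
# entrypoint.sh (hypothetical): fetch secrets at startup, write them to a
# tmpfs-backed file, then exec the real process so it receives signals as PID 1.
set -eu
umask 077

# Authentication comes from the task/instance IAM role; nothing is baked
# into the image or passed as a build argument.
mkdir -p /run/secrets
aws secretsmanager get-secret-value \
  --secret-id prod/myapp/db_password \
  --query SecretString --output text > /run/secrets/db_password

# Point the app at the file, not the value, to keep it out of the environment
export DB_PASSWORD_FILE=/run/secrets/db_password
exec node server.js
```

Mount /run as tmpfs so the secret file lives in memory only and disappears when the container stops.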
Build-Time Secrets with BuildKit
Sometimes you need secrets during the build process (e.g., to pull private packages). Docker BuildKit provides a secure way to pass secrets that are never stored in image layers:
# Dockerfile using BuildKit secrets
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
# Mount npm token as a secret during npm install
# The secret is available during this RUN instruction only
# It is NOT stored in any layer
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm ci --omit=dev
COPY . .
USER node
CMD ["node", "server.js"]
# Build with the secret
DOCKER_BUILDKIT=1 docker build --secret id=npmrc,src=.npmrc -t myapp:v1.2.3 .
Chapter 5: Docker Compose Security for Production
Many production Docker deployments, particularly on single hosts, use Docker Compose. Here's a hardened docker-compose.yml that pulls together the security practices covered in this guide:
version: '3.8'
services:
app:
image: myapp@sha256:abc123... # Pinned by digest
build:
context: .
dockerfile: Dockerfile.production
read_only: true # Read-only filesystem
tmpfs:
- /tmp:rw,noexec,nosuid,size=100m
cap_drop:
- ALL # Drop all capabilities
cap_add:
- NET_BIND_SERVICE # Only if needed
security_opt:
- no-new-privileges:true # Prevent privilege escalation
deploy:
resources:
limits:
cpus: '1.0'
memory: 1G
reservations:
cpus: '0.25'
memory: 256M
ulimits:
nproc: 200
nofile:
soft: 1024
hard: 4096
networks:
- frontend
- backend
ports:
- "127.0.0.1:8080:8080" # Bind to localhost only
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
logging:
driver: json-file
options:
max-size: "10m"
max-file: "3"
restart: on-failure:5
database:
image: postgres:16-alpine@sha256:def456...
read_only: true
tmpfs:
- /tmp:rw,noexec,nosuid
- /run/postgresql:rw,noexec,nosuid
cap_drop:
- ALL
security_opt:
- no-new-privileges:true
deploy:
resources:
limits:
cpus: '2.0'
memory: 2G
volumes:
- pgdata:/var/lib/postgresql/data:rw
networks:
- backend # Only accessible from backend network
# No ports exposed to host
environment:
POSTGRES_PASSWORD_FILE: /run/secrets/db_password
secrets:
- db_password
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 10s
timeout: 5s
retries: 5
logging:
driver: json-file
options:
max-size: "10m"
max-file: "5"
networks:
frontend:
driver: bridge
backend:
driver: bridge
internal: true # No internet access
volumes:
pgdata:
driver: local
secrets:
db_password:
file: ./secrets/db_password.txt # Or use external secrets
Chapter 6: Monitoring and Incident Response
Container Monitoring
Monitoring running containers is essential for detecting security incidents. Key signals to monitor:
Process monitoring: Alert on unexpected processes running inside containers. If a container running a Node.js application suddenly spawns a Python process or a shell, that's suspicious. Tools like Falco (open-source, CNCF) monitor syscalls in real-time and alert on anomalous behavior.
Network monitoring: Alert on unexpected network connections. If your backend container suddenly connects to an IP in a known-malicious range, or makes DNS requests to unusual domains, that could indicate compromise. Monitor both ingress (who's connecting to your containers) and egress (what your containers connect to).
File integrity monitoring: In read-only containers, any filesystem write attempt is suspicious. In writable containers, monitor for changes to configuration files, new executable files, and modifications to application code.
Resource usage anomalies: A container suddenly consuming 100% CPU might be running a cryptominer. A container with unusual memory growth might be exfiltrating data. Baseline normal resource usage and alert on deviations.
# Falco rule examples for Docker monitoring
# Alert when a shell is spawned in a container
- rule: Shell Spawned in Container
desc: Detect shell spawned in a container
condition: >
spawned_process and container and
proc.name in (bash, sh, zsh, dash, ash, csh, ksh, fish)
output: >
Shell spawned in container
(user=%user.name container=%container.name
shell=%proc.name parent=%proc.pname)
priority: WARNING
# Alert when a container makes an outbound connection
# to an unusual port
- rule: Unexpected Outbound Connection
desc: Detect outbound connections on non-standard ports
condition: >
outbound and container and
fd.sport not in (80, 443, 53, 5432, 6379, 3306)
output: >
Unexpected outbound connection from container
(container=%container.name port=%fd.sport ip=%fd.sip)
priority: NOTICE
# Alert when sensitive files are read
- rule: Sensitive File Read
desc: Detect reads of sensitive files in containers
condition: >
open_read and container and
(fd.name startswith /etc/shadow or
fd.name startswith /etc/passwd or
fd.name startswith /proc/self/environ)
output: >
Sensitive file read in container
(file=%fd.name container=%container.name user=%user.name)
priority: WARNING
Docker Logging Best Practices
Container logs are essential for security investigations. Configure centralized logging to ensure logs survive container restarts and are searchable during incidents.
Use the json-file logging driver with size limits to prevent disk exhaustion. For production, forward logs to a centralized logging system (ELK Stack, Loki, Datadog) using the syslog, fluentd, or GELF driver.
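A compose-level sketch of both patterns (the fluentd address and service names are placeholders):

```yaml
services:
  app:
    image: myapp:v1.2.3
    logging:
      driver: fluentd              # forward to a central collector
      options:
        fluentd-address: "fluentd.internal:24224"
        tag: "docker.{{.Name}}"
  batch-worker:
    image: worker:v1.0
    logging:
      driver: json-file            # local fallback with rotation
      options:
        max-size: "10m"
        max-file: "3"
```

If the collector is unreachable, the fluentd driver can block or drop logs depending on its mode, so test failure behavior before relying on it during an incident.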
What to log from a security perspective: all authentication events (successful and failed), authorization decisions (who accessed what), data access patterns (which records were read or modified), configuration changes, error events (especially unexpected errors that could indicate attacks), and API request metadata (source IP, user agent, request path).
Chapter 7: Docker Host Security
The Docker host (the machine running the Docker daemon) is the last line of defense. If the host is compromised, every container on it is compromised.
Host Hardening
Keep Docker updated. Docker releases include security patches. Run the latest stable version and apply updates promptly. Subscribe to Docker security advisories.
Restrict Docker daemon access. The Docker daemon runs as root. Only trusted users should be in the docker group. Membership in the docker group is effectively equivalent to root access on the host.
Enable user namespaces. User namespaces remap the root user inside the container to a non-root user on the host. Even if a container runs as root internally, it maps to an unprivileged user on the host, significantly limiting the impact of container escapes.
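The remap is enabled with a one-line daemon setting. The daemon must be restarted afterwards, and note that Docker keeps per-remapping storage under /var/lib/docker, so existing images and containers won't be visible after the switch:

```json
{
  "userns-remap": "default"
}
```

Place this in /etc/docker/daemon.json; "default" tells Docker to create and use a dockremap user for the UID/GID mapping.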
Use a minimal host OS. Consider Container-Optimized OS (Google), Bottlerocket (AWS), or Flatcar Container Linux instead of a general-purpose Linux distribution. These operating systems are designed specifically for running containers and have minimal attack surface.
Enable audit logging. Configure auditd to log Docker-related activities: daemon configuration changes, container creation and deletion, image pulls, and volume mounts. These logs are invaluable during security investigations.
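The CIS Docker Benchmark recommends watch rules along these lines (paths may vary by distribution):

```
# /etc/audit/rules.d/docker.rules: watch Docker binaries, data, and config
-w /usr/bin/dockerd -k docker
-w /var/lib/docker -k docker
-w /etc/docker -k docker
-w /etc/docker/daemon.json -k docker
-w /usr/bin/containerd -k docker
-w /usr/bin/runc -k docker
-w /var/run/docker.sock -k docker
```

Load the rules with augenrules --load, then query events during an investigation with ausearch -k docker.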
Docker Bench Security
Docker Bench for Security is an open-source script that checks your Docker installation against dozens of security best practices from the CIS Docker Benchmark:
# Run Docker Bench Security
docker run --rm --net host --pid host --userns host --cap-add audit_control -e DOCKER_CONTENT_TRUST=1 -v /etc:/etc:ro -v /usr/bin/containerd:/usr/bin/containerd:ro -v /usr/bin/runc:/usr/bin/runc:ro -v /usr/lib/systemd:/usr/lib/systemd:ro -v /var/lib:/var/lib:ro -v /var/run/docker.sock:/var/run/docker.sock:ro docker/docker-bench-security
# Review the output and address WARN findings
Chapter 8: The Complete Docker Security Checklist
Use this checklist for every Docker deployment. Items are ordered by impact, so address the top items first.
Image Security:
- Use minimal base images (Alpine, distroless, or Chainguard).
- Pin images by digest, not tag.
- Scan all images for vulnerabilities in CI/CD.
- Use multi-stage builds to minimize final image size.
- Never include secrets, credentials, or private keys in images.
- Use .dockerignore to exclude sensitive files from the build context.
- Rebuild images regularly to pick up base image security patches.

Runtime Security:
- Run containers as non-root (USER directive in Dockerfile).
- Use a read-only filesystem where possible.
- Drop all capabilities, then add back only what's needed.
- Enable the no-new-privileges security option.
- Set memory and CPU limits.
- Never mount the Docker socket into containers.
- Use custom seccomp profiles to restrict system calls.

Network Security:
- Isolate containers using Docker networks.
- Use internal: true for backend networks that don't need internet access.
- Bind ports to 127.0.0.1 unless external access is needed.
- Don't use --net=host unless absolutely necessary.

Secrets:
- Never hardcode secrets in Dockerfiles or images.
- Use Docker secrets, mounted files, or external secret managers.
- Use BuildKit secrets for build-time credentials.
- Rotate secrets regularly.

Monitoring:
- Configure centralized logging with size limits.
- Monitor for unexpected processes, network connections, and file changes.
- Run Docker Bench Security regularly.
- Set up alerts for security-relevant events.
Docker security isn't a one-time configuration; it's an ongoing practice. New vulnerabilities are discovered daily, attack techniques evolve, and your application changes over time. Review your security posture quarterly, stay updated on Docker security advisories, and continuously improve your defenses.
ZeonEdge provides Docker security assessments, hardening implementation, and ongoing container security monitoring. From image scanning pipeline setup to runtime protection with Falco, we secure your containerized infrastructure. Contact our security team for a container security audit.
Sarah Chen
Senior Cybersecurity Engineer with 12+ years of experience in penetration testing and security architecture.