Cloud & Infrastructure

Kubernetes 1.32 "Penelope" Released: Every New Feature, Breaking Change & Upgrade Guide

Kubernetes 1.32 brings sidecar containers graduating to stable, in-place pod resource resize, dynamic resource allocation improvements, and major changes to cloud provider integrations. Here is everything you need to know before upgrading.


Marcus Rodriguez

Lead DevOps Engineer specializing in CI/CD pipelines, container orchestration, and infrastructure automation.

January 22, 2026
28 min read

Kubernetes 1.32 "Penelope" — Overview

Kubernetes 1.32, named "Penelope," was released with 44 enhancements: 13 graduating to stable, 12 moving to beta, and 19 in alpha. This release continues the Kubernetes project's focus on stability, resource efficiency, and developer experience. For most production clusters, this is a recommended upgrade — but as always, several deprecations and breaking changes require careful attention.

In this guide we cover every significant change, explain the technical details behind why each was made, and provide a tested upgrade procedure for production clusters.

Stable and Graduating Features in 1.32

1. Sidecar Containers (SidecarContainers) — Now Stable

After several releases behind a feature gate, native sidecar container support has graduated to stable. The syntax is unchanged (an init container with restartPolicy: Always), but sidecars now have first-class lifecycle guarantees: they start before the main containers, keep running alongside them, receive SIGTERM only after the main containers have stopped, and the pod waits for sidecars to exit cleanly.

# Before 1.32: Hack using restartPolicy on init container
apiVersion: v1
kind: Pod
spec:
  initContainers:
  - name: log-agent
    image: fluent/fluent-bit:latest
    restartPolicy: Always  # Made it behave like a sidecar

# After 1.32: Native sidecar support (stable)
apiVersion: v1
kind: Pod
spec:
  initContainers:
  - name: log-agent
    image: fluent/fluent-bit:latest
    restartPolicy: Always  # Same syntax, now officially stable
  containers:
  - name: app
    image: my-app:latest
# log-agent starts first, stays running, gets SIGTERM on pod termination

The real-world impact is significant for service mesh and observability deployments. Istio's Envoy sidecar, Datadog agents, Fluentd/Fluent Bit log collectors, and Vault agent injectors all benefit from proper lifecycle ordering.
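Startup ordering can also be gated on the sidecar actually being ready: main containers do not start until a restartable init container's startup probe succeeds. A minimal sketch, assuming Fluent Bit's built-in HTTP server is enabled on its default port 2020 (the probe path and image tag here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-gated-sidecar
spec:
  initContainers:
  - name: log-agent
    image: fluent/fluent-bit:latest
    restartPolicy: Always
    startupProbe:              # main containers wait until this succeeds
      httpGet:
        path: /api/v1/health   # assumes Fluent Bit's HTTP server is enabled
        port: 2020
      periodSeconds: 2
      failureThreshold: 30
  containers:
  - name: app
    image: my-app:latest
```

This gives log collectors and mesh proxies a chance to come up before the application emits its first request or log line.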

2. In-Place Pod Vertical Scaling — Beta (Moving to Stable)

You can now resize CPU and memory requests/limits on running pods without restarting them. This was previously impossible — any resource change required pod deletion and recreation. For stateful workloads this is game-changing:

# Enable the feature gate (required until full GA)
# kube-apiserver and kubelet flags:
# --feature-gates=InPlacePodVerticalScaling=true

# Resize a running pod's CPU limit without restart
# (on newer clusters this change may need to go through the resize
#  subresource: kubectl patch pod my-pod --subresource resize ...)
kubectl patch pod my-pod --type='json' -p='[
  {
    "op": "replace",
    "path": "/spec/containers/0/resources/requests/cpu",
    "value": "500m"
  },
  {
    "op": "replace", 
    "path": "/spec/containers/0/resources/limits/cpu",
    "value": "1000m"
  }
]'

# Check resize status
kubectl get pod my-pod -o jsonpath='{.status.containerStatuses[0].resources}'

# Resize policy per container (default for both resources: NotRequired)
spec:
  containers:
  - name: app
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired     # Resize CPU without restart
    - resourceName: memory
      restartPolicy: RestartContainer # Memory change still requires restart

3. Structured Authorization Configuration — Stable

The --authorization-config flag for kube-apiserver is now stable, replacing the older --authorization-mode flag. This allows multiple authorizers with ordering, failure modes, and webhook configuration in a single YAML file:

# /etc/kubernetes/authorization-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AuthorizationConfiguration
authorizers:
- type: Node
  name: node
- type: RBAC
  name: rbac
- type: Webhook
  name: my-opa-webhook
  webhook:
    timeout: 5s
    failurePolicy: Deny   # Deny if webhook fails (vs old NoOpinion)
    connectionInfo:
      type: KubeConfigFile
      kubeConfigFile: /etc/kubernetes/opa-webhook.kubeconfig
    matchConditions:
    - expression: "request.user != 'system:serviceaccount:kube-system:default'"

Breaking Changes & Deprecations

1. Removal of In-Tree Cloud Provider Code

Kubernetes 1.32 removes the remaining in-tree cloud provider implementations for AWS, Azure, GCP, and OpenStack. If you're still using --cloud-provider=aws on your kubelet or kube-controller-manager, your cluster will break after upgrading.

# BEFORE (1.31 and earlier — still works):
# kube-controller-manager flags
--cloud-provider=aws
--cloud-config=/etc/kubernetes/cloud.conf

# AFTER (1.32 — this will FAIL):
# In-tree providers removed. You MUST use external CCM.

# CORRECT approach for AWS:
# Deploy AWS Cloud Controller Manager as a DaemonSet
helm repo add aws-cloud-controller-manager \
  https://kubernetes.github.io/cloud-provider-aws

helm install aws-cloud-controller-manager \
  aws-cloud-controller-manager/aws-cloud-controller-manager \
  --namespace kube-system \
  --set image.tag=v1.32.0

# Remove old flags from kube-controller-manager
# Add to kube-controller-manager: --cloud-provider=external

Check if you're affected:

# Check if in-tree cloud provider is still in use
kubectl get nodes -o jsonpath='{.items[*].spec.providerID}' | tr ' ' '\n'
# If output starts with "aws://", "azure://", or "gce://" you need to migrate

# Check kube-controller-manager flags
kubectl get pod kube-controller-manager-<node-name> -n kube-system \
  -o jsonpath='{.spec.containers[0].command}' | tr ',' '\n' | grep cloud
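If you capture the providerID dump, a quick filter flags the nodes still carrying in-tree provider prefixes. A sketch over sample data (the node IDs here are made up):

```shell
# Print only providerIDs that indicate an in-tree cloud provider is in use
check_provider_ids() {
  tr ' ' '\n' | grep -E '^(aws|azure|gce)://' || true
}

# Sample data standing in for the kubectl jsonpath output above
sample='aws:///us-east-1a/i-0abc123 external:///node-2 gce://my-proj/us-central1-a/node-3'
echo "$sample" | check_provider_ids
# aws:///us-east-1a/i-0abc123
# gce://my-proj/us-central1-a/node-3
```

Any line printed is a node that must be migrated to the external CCM before the upgrade.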

2. Removal of SHA-1 Certificate Support

Kubernetes 1.32 drops support for SHA-1 signed certificates in TLS connections. Any certificates signed with SHA-1 (which you shouldn't have anyway, but many old clusters do) will cause TLS handshake failures.

# Audit your certificates before upgrading
for cert in /etc/kubernetes/pki/*.crt; do
  echo "=== $cert ==="
  openssl x509 -in "$cert" -noout -text | grep "Signature Algorithm"
done

# Any "sha1WithRSAEncryption" must be rotated before upgrading
# Rotate with kubeadm:
kubeadm certs renew all --config /etc/kubernetes/kubeadm-config.yaml

# Verify new certs use SHA-256
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep "Signature Algorithm"
# Expected: sha256WithRSAEncryption
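To sanity-check what a clean certificate looks like, you can generate a throwaway self-signed SHA-256 cert and run the same grep against it (the paths and CN here are arbitrary):

```shell
# Generate a throwaway SHA-256 self-signed cert and inspect its signature algorithm
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -sha256 -days 1 \
  -subj "/CN=sha-check" \
  -keyout "$tmpdir/key.pem" -out "$tmpdir/cert.pem" 2>/dev/null

openssl x509 -in "$tmpdir/cert.pem" -noout -text | grep "Signature Algorithm" | head -1
# Expect: sha256WithRSAEncryption
```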

3. Removed API Versions

The following API versions are removed in 1.32 and will return HTTP 404:

  • flowcontrol.apiserver.k8s.io/v1beta3 → use v1
  • Several internal alpha API versions cleaned up
# Check manifests against the live API server without persisting anything
kubectl apply --dry-run=server -f your-manifest.yaml

# Or use pluto to scan your entire cluster
# (note: release asset names on GitHub include the version number,
#  so adjust the filename below to match the current release)
wget https://github.com/FairwindsOps/pluto/releases/latest/download/pluto_linux_amd64.tar.gz
tar xzf pluto_linux_amd64.tar.gz
./pluto detect-all-in-cluster --target-versions k8s=v1.32.0
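A plain grep over your manifest repository also catches the removed beta group versions. A minimal sketch using a throwaway file in place of your real repo:

```shell
# Flag manifests that still reference a removed FlowSchema beta API version
mkdir -p /tmp/manifest-scan
cat > /tmp/manifest-scan/flowschema.yaml <<'EOF'
apiVersion: flowcontrol.apiserver.k8s.io/v1beta3
kind: FlowSchema
EOF

grep -rl 'flowcontrol.apiserver.k8s.io/v1beta' /tmp/manifest-scan/
# Any file listed must be migrated to flowcontrol.apiserver.k8s.io/v1 before upgrading
```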

New Alpha Features Worth Watching

Dynamic Resource Allocation (DRA) v2

DRA received major improvements in 1.32, making it the recommended path for GPU and accelerator scheduling. The new ResourceClaim API allows fine-grained allocation of hardware resources:

# DRA ResourceClaimTemplate for GPU workloads
apiVersion: resource.k8s.io/v1alpha3
kind: ResourceClaimTemplate
metadata:
  name: gpu-claim-template
spec:
  spec:
    devices:
      requests:
      - name: gpu
        deviceClassName: gpu.nvidia.com
        allocationMode: ExactCount
        count: 1
---
apiVersion: v1
kind: Pod
spec:
  resourceClaims:
  - name: gpu
    resourceClaimTemplateName: gpu-claim-template
  containers:
  - name: ml-training
    image: pytorch/pytorch:latest
    resources:
      claims:
      - name: gpu

Production Upgrade Procedure

This is a tested procedure for upgrading a production kubeadm cluster from 1.31 to 1.32:

# === PRE-UPGRADE CHECKLIST ===

# 1. Run pluto to find deprecated APIs
./pluto detect-all-in-cluster --target-versions k8s=v1.32.0

# 2. Check cloud provider migration status
kubectl get pod -n kube-system | grep cloud-controller

# 3. Audit all certificates
kubeadm certs check-expiration

# 4. Take etcd backup
ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-pre-132-$(date +%Y%m%d).db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# === CONTROL PLANE UPGRADE ===

# 5. Upgrade kubeadm on first control plane node
apt-get update && apt-get install -y kubeadm=1.32.0-1.1
kubeadm upgrade plan
kubeadm upgrade apply v1.32.0

# 6. Upgrade kubelet and kubectl on control plane
apt-get install -y kubelet=1.32.0-1.1 kubectl=1.32.0-1.1
systemctl daemon-reload && systemctl restart kubelet

# 7. Repeat for additional control plane nodes
# kubeadm upgrade node (not apply) on additional control planes

# === WORKER NODE UPGRADE ===

# 8. Drain worker node
kubectl drain worker-1 --ignore-daemonsets --delete-emptydir-data

# 9. Upgrade packages on worker
ssh worker-1 "apt-get update && apt-get install -y kubelet=1.32.0-1.1 kubectl=1.32.0-1.1 kubeadm=1.32.0-1.1"
ssh worker-1 "kubeadm upgrade node"
ssh worker-1 "systemctl daemon-reload && systemctl restart kubelet"

# 10. Uncordon and verify
kubectl uncordon worker-1
kubectl get node worker-1  # Should show v1.32.0

# 11. Repeat for all worker nodes (do not drain all at once!)
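Step 11 is easy to script. A sketch of the rolling loop with a dry-run guard so you can review the plan before executing anything (the worker names and the run helper are illustrative):

```shell
# Rolling worker upgrade, one node at a time.
# DRY_RUN defaults to 1 (print the plan only); set DRY_RUN=0 to execute for real.
DRY_RUN="${DRY_RUN:-1}"
WORKERS="worker-1 worker-2 worker-3"   # illustrative node names
K8S_PKG_VERSION="1.32.0-1.1"

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}

for node in $WORKERS; do
  run kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
  run ssh "$node" "apt-get update && apt-get install -y kubeadm=$K8S_PKG_VERSION kubelet=$K8S_PKG_VERSION kubectl=$K8S_PKG_VERSION"
  run ssh "$node" "kubeadm upgrade node && systemctl daemon-reload && systemctl restart kubelet"
  run kubectl uncordon "$node"
done
```

Draining sequentially keeps disruption bounded to one node's worth of capacity; pair it with PodDisruptionBudgets so the drain itself cannot evict below your availability floor.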

Performance Improvements

Kubernetes 1.32 includes several scheduler and API server performance improvements worth noting for large clusters:

  • Scheduler throughput: 15–20% improvement in pod scheduling throughput for clusters with 5000+ nodes through optimized queue processing
  • API server memory: Reduced memory usage for watch connections — significant for clusters with many controllers
  • etcd watch cache: Improved watch cache efficiency reducing etcd read load
  • Node startup latency: New nodes join the cluster ~40% faster due to optimized certificate provisioning

Summary

Kubernetes 1.32 is a solid release. The graduation of sidecar containers to stable alone justifies upgrading for most organizations running service meshes or observability stacks. The in-tree cloud provider removal is the most disruptive change, but if you're running a managed Kubernetes service (EKS, GKE, AKS) you won't notice it at all — those managed services handle CCM for you.

For self-managed clusters: audit your cloud provider configuration before upgrading, use pluto to catch deprecated API usage, and follow the procedure above with etcd backup in hand.
