# Kubernetes Deployment Guide

This guide covers production-ready deployment strategies, best practices, and patterns for deploying applications on Kubernetes clusters.
## Deployment Strategies

### 1. Rolling Update (Default)

Gradually replaces old Pods with new ones. Provides zero downtime, but old and new versions serve traffic side by side while the rollout is in progress.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # Can have 1 extra pod during update
      maxUnavailable: 0  # Must have all pods available
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
```
### 2. Recreate Strategy

Kills all old Pods before creating new ones. Causes downtime, but guarantees that only one version runs at a time.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
```
### 3. Blue/Green Deployment

Maintains two identical environments (blue and green). Traffic is switched between them instantly by repointing the Service selector.
```yaml
# Blue deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-blue
  labels:
    version: blue
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
      version: blue
  template:
    metadata:
      labels:
        app: nginx
        version: blue
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
---
# Green deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-green
  labels:
    version: green
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
      version: green
  template:
    metadata:
      labels:
        app: nginx
        version: green
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
---
# Service switching between blue/green
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
    version: blue  # Change to 'green' to switch
  ports:
  - port: 80
    targetPort: 80
```
💡 Tip: Use this strategy for critical applications that cannot tolerate mixed versions.
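The switch itself can be done with a single `kubectl patch` against the `nginx-service` defined above; a sketch, assuming both Deployments are healthy before cutting over:

```shell
# Cut traffic over from blue to green by repointing the Service selector
kubectl patch service nginx-service \
  -p '{"spec":{"selector":{"app":"nginx","version":"green"}}}'

# Verify which version the Service now selects
kubectl get service nginx-service -o jsonpath='{.spec.selector.version}'

# Roll back instantly by patching the selector back to blue
kubectl patch service nginx-service \
  -p '{"spec":{"selector":{"app":"nginx","version":"blue"}}}'
```

Because the selector change is a single API call, the cutover (and rollback) is effectively instantaneous for new connections.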
### 4. Canary Deployment

Releases the new version to a small subset of users first, then gradually increases traffic.
```yaml
# Main deployment (~90% of traffic)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-main
spec:
  replicas: 9
  selector:
    matchLabels:
      app: nginx
      track: stable
  template:
    metadata:
      labels:
        app: nginx
        track: stable
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
---
# Canary deployment (~10% of traffic)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
      track: canary
  template:
    metadata:
      labels:
        app: nginx
        track: canary
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
```

A single Service selecting only `app: nginx` load-balances across both Deployments, so the split is approximated by the replica ratio (9:1 above). Exact percentage-based traffic splitting requires a service mesh such as Istio or an ingress controller that supports weighted routing.
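Without a mesh, the share of traffic the canary sees is just its fraction of the total replicas; the arithmetic can be sketched in shell:

```shell
# Approximate traffic share of the canary: canary / (stable + canary)
stable_replicas=9
canary_replicas=1
total=$((stable_replicas + canary_replicas))
canary_pct=$((100 * canary_replicas / total))
echo "canary receives ~${canary_pct}% of traffic"
```

To widen the canary, scale `nginx-canary` up (and optionally `nginx-main` down) until the ratio matches the exposure you want.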
## Production-Ready Deployment

### Complete Deployment YAML
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  namespace: production
  labels:
    app: myapp
    version: v1.0.0
    environment: production
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
        version: v1.0.0
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
        prometheus.io/path: "/metrics"
    spec:
      # Security context
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        fsGroup: 2000
      # Service account
      serviceAccountName: myapp-sa
      # Init containers
      initContainers:
      - name: init-migrations
        image: myapp:migrations
        command: ['sh', '-c', 'python manage.py migrate']
        envFrom:
        - configMapRef:
            name: myapp-config
        - secretRef:
            name: myapp-secrets
      # Main containers
      containers:
      - name: myapp
        image: myapp:v1.0.0
        imagePullPolicy: Always
        # Ports
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
        # Environment variables
        env:
        - name: NODE_ENV
          value: "production"
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: myapp-config
              key: log_level
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: myapp-secrets
              key: db_password
        # Resource requests and limits
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        # Health checks
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
          timeoutSeconds: 3
          failureThreshold: 3
        # Startup probe (for slow-starting apps)
        startupProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 10
          failureThreshold: 30
        # Volume mounts
        volumeMounts:
        - name: config
          mountPath: /etc/config
          readOnly: true
        - name: tmp
          mountPath: /tmp
      # Volumes
      volumes:
      - name: config
        configMap:
          name: myapp-config
      - name: tmp
        emptyDir: {}
      # Affinity rules: prefer spreading replicas across nodes
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - myapp
              topologyKey: kubernetes.io/hostname
      # Tolerations
      tolerations:
      - key: "critical"
        operator: "Equal"
        value: "true"
        effect: "NoSchedule"
      # Node selector
      nodeSelector:
        kubernetes.io/os: linux
        node-type: compute-optimized
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  namespace: production
spec:
  selector:
    app: myapp
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
  type: ClusterIP
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
```
## Deployment Best Practices

### 1. Resource Management

✅ Always set:

- Resource requests for guaranteed scheduling allocation
- Resource limits to prevent resource starvation
- Appropriate CPU/memory values based on measured workload
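Requests and limits together also determine the Pod's QoS class: setting limits equal to requests yields `Guaranteed`, which is evicted last under node pressure. A minimal sketch (the values are illustrative):

```yaml
resources:
  requests:
    memory: "512Mi"
    cpu: "500m"
  limits:          # equal to requests => QoS class "Guaranteed"
    memory: "512Mi"
    cpu: "500m"
```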
### 2. Health Checks

💡 Health Check Tips:

- Use `livenessProbe` to restart unhealthy containers
- Use `readinessProbe` to remove Pods from the Service load balancer
- Use `startupProbe` for slow-starting applications
- Set appropriate delays and timeouts
### 3. Security

```yaml
# Container-level security context
securityContext:
  runAsNonRoot: true
  runAsUser: 1000
  allowPrivilegeEscalation: false
  capabilities:
    drop:
    - ALL
  readOnlyRootFilesystem: true
  seccompProfile:
    type: RuntimeDefault
```
### 4. Image Management

- ✅ Use specific image tags (avoid `:latest`)
- ✅ Set image pull policies appropriately
- ✅ Scan images for vulnerabilities
- ✅ Use private registries for production
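For immutability stronger than a tag, an image can be pinned by digest; the registry name and digest below are placeholders, not real values:

```yaml
containers:
- name: myapp
  # A digest pins exact image content; a tag can be re-pushed, a digest cannot
  image: registry.example.com/myapp@sha256:<digest>
  imagePullPolicy: IfNotPresent
```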
### 5. Rolling Updates Configuration

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 25%        # Allow extra pods during the update
    maxUnavailable: 25%  # Allow some pods to be unavailable
```
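Percentage values resolve to absolute pod counts against `spec.replicas`: `maxSurge` rounds up, `maxUnavailable` rounds down. A quick sketch of that arithmetic for 10 replicas at 25%:

```shell
replicas=10
pct=25
# maxSurge rounds up: ceil(10 * 25 / 100) = 3 extra pods allowed
max_surge=$(( (replicas * pct + 99) / 100 ))
# maxUnavailable rounds down: floor(10 * 25 / 100) = 2 pods may be down
max_unavailable=$(( replicas * pct / 100 ))
echo "surge=${max_surge} unavailable=${max_unavailable}"
```

So during the rollout the Deployment may run between 8 and 13 pods at once.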
## Deployment Commands

### Deploy and Update

```bash
# Create or update resources from YAML (apply is idempotent)
kubectl apply -f deployment.yaml

# Apply into a specific namespace
kubectl apply -f deployment.yaml -n production

# Update the container image directly
kubectl set image deployment/myapp-deployment myapp=myapp:v2.0.0

# Update an environment variable
kubectl set env deployment/myapp-deployment NODE_ENV=production
```
### Rollout Management

```bash
# Check rollout status
kubectl rollout status deployment/myapp-deployment

# View rollout history
kubectl rollout history deployment/myapp-deployment

# View details of a specific revision
kubectl rollout history deployment/myapp-deployment --revision=2

# Roll back to the previous revision
kubectl rollout undo deployment/myapp-deployment

# Roll back to a specific revision
kubectl rollout undo deployment/myapp-deployment --to-revision=2

# Pause a rollout
kubectl rollout pause deployment/myapp-deployment

# Resume a paused rollout
kubectl rollout resume deployment/myapp-deployment
```
### Scale Deployment

```bash
# Scale to a fixed replica count
kubectl scale deployment/myapp-deployment --replicas=5

# Scale only if the current replica count matches (optimistic precondition)
kubectl scale deployment/myapp-deployment --replicas=5 --current-replicas=3

# Auto-scale (requires the metrics server)
kubectl autoscale deployment/myapp-deployment --min=2 --max=10 --cpu-percent=80
```
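The `kubectl autoscale` one-liner above creates a HorizontalPodAutoscaler behind the scenes; a declarative equivalent using the `autoscaling/v2` API (the HPA name here is an assumption) looks like this:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```

Keeping the HPA in version control alongside the Deployment makes the scaling policy reviewable and reproducible.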
## Multi-Environment Deployment

### Using Namespaces

```bash
# Create namespaces
kubectl create namespace development
kubectl create namespace staging
kubectl create namespace production

# Deploy to different environments
kubectl apply -f deployment.yaml -n development
kubectl apply -f deployment.yaml -n staging
kubectl apply -f deployment.yaml -n production

# Label namespaces
kubectl label namespace development environment=dev
kubectl label namespace staging environment=staging
kubectl label namespace production environment=prod
```
### Using Kustomize

```yaml
# base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:latest
---
# overlays/development/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
replicas:
- name: myapp
  count: 2
images:
- name: myapp
  newTag: dev
---
# overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
replicas:
- name: myapp
  count: 5
images:
- name: myapp
  newTag: v1.0.0
```
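Assuming the directory layout above, kubectl's built-in Kustomize support renders and applies the overlays:

```shell
# Preview the rendered manifests for an overlay
kubectl kustomize overlays/development

# Build and apply an overlay in one step (-k instead of -f)
kubectl apply -k overlays/production
```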
## Troubleshooting Deployments

### Check Deployment Status

```bash
# Get deployment status
kubectl get deployment myapp-deployment

# Describe the deployment
kubectl describe deployment myapp-deployment

# Get rollout status
kubectl rollout status deployment/myapp-deployment

# View recent events
kubectl get events --sort-by='.lastTimestamp'
```
### Check Pod Status

```bash
# Get pods by label
kubectl get pods -l app=myapp

# Describe a pod
kubectl describe pod <pod-name>

# Check pod logs
kubectl logs <pod-name>

# Check the previous container's logs (after a crash)
kubectl logs <pod-name> --previous

# Open a shell inside the pod
kubectl exec -it <pod-name> -- sh
```
### Common Issues

⚠️ **Issue: Pods stuck in Pending**

Possible causes:
- Insufficient cluster resources
- No matching nodes (node selectors/affinity/taints)
- Unbound PersistentVolumeClaim

Solution:
```bash
kubectl describe pod <pod-name>
```

⚠️ **Issue: Pods crashing**

Possible causes:
- Application errors
- Resource limits exceeded (OOMKilled)
- Missing configuration or secrets

Solution:
```bash
kubectl logs <pod-name>
kubectl describe pod <pod-name>
```

⚠️ **Issue: Deployment not updating**

Possible causes:
- Image pull errors
- Readiness probe failing
- Rollout paused

Solution:
```bash
kubectl rollout status deployment/myapp-deployment
kubectl rollout resume deployment/myapp-deployment
```
## CI/CD Integration

### GitHub Actions Example

```yaml
name: Deploy to Kubernetes
on:
  push:
    branches: [ main ]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - name: Build Docker image
      run: |
        docker build -t myapp:${{ github.sha }} .
        docker tag myapp:${{ github.sha }} myapp:latest
    - name: Push to registry
      run: |
        docker login -u ${{ secrets.DOCKER_USERNAME }} -p ${{ secrets.DOCKER_PASSWORD }}
        docker push myapp:${{ github.sha }}
        docker push myapp:latest
    - name: Set up kubectl
      uses: azure/setup-kubectl@v1
    - name: Configure kubectl
      run: |
        echo "${{ secrets.KUBECONFIG }}" | base64 -d > kubeconfig
        # export does not persist across steps; publish via GITHUB_ENV instead
        echo "KUBECONFIG=$PWD/kubeconfig" >> "$GITHUB_ENV"
    - name: Deploy to Kubernetes
      run: |
        kubectl set image deployment/myapp-deployment myapp=myapp:${{ github.sha }}
        kubectl rollout status deployment/myapp-deployment
```
## Next Steps

- Scaling & Monitoring - Auto-scaling and monitoring strategies
- Core Concepts - Deep dive into Kubernetes objects
- Getting Started - Back to basics