If your Pod is stuck in CrashLoopBackOff, it means Kubernetes is repeatedly starting the container, but the container keeps crashing. After a few failed restarts, Kubernetes applies a backoff delay and shows the status CrashLoopBackOff.
In this guide, you’ll learn the exact commands to debug CrashLoopBackOff and the 7 most common real-world causes with fixes.
What is CrashLoopBackOff in Kubernetes?
Quick Commands to Debug CrashLoopBackOff
Fix #1: Application is crashing on startup
Fix #2: Wrong command or entrypoint
Fix #3: Missing environment variables / secrets
Fix #4: Liveness probe killing the pod
Fix #5: Image issues or wrong architecture
Fix #6: OOMKilled (memory limit exceeded)
Fix #7: Volume mount or permission issues
Best Practices to Prevent CrashLoopBackOff
1) What is CrashLoopBackOff in Kubernetes?
CrashLoopBackOff is not the error itself.
It is Kubernetes telling you:
“Your container keeps crashing. I will restart it, but with increasing delay.”
The delay starts at around 10 seconds, roughly doubles after each crash, and caps at 5 minutes; it resets once the container has been running cleanly for a while.
So your real job is to find out why the container exits.
2) Quick Commands to Debug CrashLoopBackOff
Step 1: Check pod status
kubectl get pods -n <namespace>
Step 2: Describe the pod (most important)
kubectl describe pod <pod-name> -n <namespace>
Look at:
Events section
Container restart count
Probe failures
OOMKilled
Step 3: Check logs
kubectl logs <pod-name> -n <namespace>
If the pod restarted, check previous container logs:
kubectl logs <pod-name> -n <namespace> --previous
Step 4: Check container exit code
kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.status.containerStatuses[*].state.terminated.exitCode}'
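As a rough guide, the most common exit codes map to:
0 = the process finished normally (it still restarts if restartPolicy is Always)
1 = general application error (check the logs)
126 = command found but not executable
127 = command not found
137 = killed by SIGKILL (128 + 9), usually OOMKilled
139 = segmentation fault (128 + 11)
143 = terminated by SIGTERM (128 + 15)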
3) Fix #1: Application is Crashing on Startup
Symptoms
Logs show a stack trace
Exit code is often 1
Pod restarts continuously
Fix
Check the logs:
kubectl logs <pod-name> -n <namespace> --previous
Then fix the app issue:
missing config file
invalid config format
app cannot connect to DB
missing dependency
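If the container dies too fast to read anything, one common trick is to run the same image with an interactive shell and check config files and dependencies by hand. A sketch, with the image and namespace as placeholders:
kubectl run debug-shell --rm -it -n <namespace> --image=<your-image> --command -- sh
Alternatively, temporarily override the deployment's command with something like sleep 3600 (if the image has a shell) and kubectl exec into the running pod.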
4) Fix #2: Wrong Command or Entrypoint
This is extremely common in Docker + Kubernetes.
Symptoms
Logs show:
exec format error
command not found
Container exits instantly
Check your deployment YAML
kubectl get deploy <deploy-name> -n <namespace> -o yaml
Look for:
command: ["..."]
args: ["..."]
Fix
Remove the incorrect command/args
Ensure the binary exists in the image
Verify the Dockerfile ENTRYPOINT and CMD
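For reference, command in the pod spec replaces the image's ENTRYPOINT and args replaces CMD. A minimal sketch with a hypothetical binary and flags:
containers:
  - name: app
    image: myapp:1.0
    command: ["/usr/local/bin/myapp"]               # replaces ENTRYPOINT
    args: ["--config", "/etc/myapp/config.yaml"]    # replaces CMD
If the image's ENTRYPOINT is already correct, the safest fix is usually to remove command and args from the manifest entirely.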
5) Fix #3: Missing Environment Variables / Secrets
Symptoms
Logs show:
ENV VAR not set
unable to load config
permission denied when reading a secret
Check env vars
kubectl describe pod <pod-name> -n <namespace>
Fix
If using secret:
kubectl get secret -n <namespace>
kubectl describe secret <secret-name> -n <namespace>
Also confirm secret is referenced correctly:
envFrom:
  - secretRef:
      name: my-secret
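If you only need a single variable, the per-variable form also works. A sketch where DB_PASSWORD and db-password are hypothetical names; the key must exist in the secret:
env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: my-secret
        key: db-password
When the container stays up long enough, kubectl exec <pod-name> -n <namespace> -- env shows what it actually received.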
6) Fix #4: Liveness Probe Killing the Pod
Many people confuse probe failure with “app crash”.
Your container might be running, but Kubernetes kills it due to liveness probe failures.
Symptoms
Events show:
Liveness probe failed
Readiness probe failed
(Only liveness probe failures restart the container; readiness failures just remove the pod from Service endpoints.)
Check pod events
kubectl describe pod <pod-name> -n <namespace>
Fix
Increase initialDelaySeconds
Increase timeoutSeconds
Fix health endpoint
Example:
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
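To check whether the endpoint itself is the problem, hit it directly while the container is up; this assumes the path and port from the example above:
kubectl port-forward <pod-name> -n <namespace> 8080:8080
curl -v http://localhost:8080/health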
7) Fix #5: Image Issues or Wrong Architecture
Symptoms
exec format error
Works locally, fails in the cluster
Image pulled successfully but crashes instantly
Fix
Check node architecture:
kubectl get nodes -o wide
If your nodes are amd64 but the image is arm64, the container will fail with exec format error.
Solution:
Build a multi-arch image
Or build the image for the correct architecture
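A sketch of checking the image architecture and building a multi-arch image, assuming Docker with buildx and hypothetical image names:
# Check which architecture the image was built for
docker image inspect myapp:1.0 --format '{{.Architecture}}'
# Build and push a multi-arch image
docker buildx build --platform linux/amd64,linux/arm64 -t registry.example.com/myapp:1.0 --push .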
8) Fix #6: OOMKilled (Memory Limit Exceeded)
This is one of the most common reasons in production.
Symptoms
Exit code is often 137
Pod restarts repeatedly
Last State shows OOMKilled in kubectl describe
Check pod status
kubectl describe pod <pod-name> -n <namespace>
Fix
Increase memory limit:
resources:
  requests:
    memory: "256Mi"
  limits:
    memory: "512Mi"
Or optimize application memory usage.
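Before raising the limit blindly, check how much memory the pod actually uses (this requires metrics-server in the cluster):
kubectl top pod <pod-name> -n <namespace>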
9) Fix #7: Volume Mount or Permission Issues
Symptoms
Container crashes when writing to a directory
Logs show permission denied
Errors related to volume mount paths
Check volume mount in YAML
kubectl get pod <pod-name> -n <namespace> -o yaml
Fix
Ensure mountPath is correct
Use the correct securityContext
If running the container as a non-root user, fix permissions on the volume
Example:
securityContext:
  runAsUser: 1000
  fsGroup: 1000
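Note that fsGroup is only valid at the pod level, not on an individual container. A sketch of where the fields sit, with hypothetical names:
spec:
  securityContext:            # pod level: fsGroup belongs here
    runAsUser: 1000
    fsGroup: 1000             # mounted volumes get this group so the user can write
  containers:
    - name: app
      image: myapp:1.0
      volumeMounts:
        - name: data
          mountPath: /var/lib/myapp
  volumes:
    - name: data
      emptyDir: {}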
10) Best Practices to Prevent CrashLoopBackOff
✅ Always use readiness + liveness probes correctly
✅ Add proper resource requests/limits
✅ Validate config before deploy
✅ Use structured logs
✅ Use kubectl logs --previous during debugging
✅ Add alerting for restart count
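Even before alerting is wired up, you can spot restart-heavy pods quickly:
kubectl get pods -A --sort-by='.status.containerStatuses[0].restartCount'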
11) FAQ
Q1. How long does CrashLoopBackOff last?
It continues until:
the pod becomes healthy
or you delete the pod or scale the deployment down
Q2. How do I stop CrashLoopBackOff immediately?
Scale deployment to 0:
kubectl scale deploy <deploy-name> -n <namespace> --replicas=0
Q3. What is the fastest way to debug?
Run:
kubectl describe pod <pod-name> -n <namespace>
kubectl logs <pod-name> -n <namespace> --previous
Conclusion
CrashLoopBackOff is common in Kubernetes, but it is always solvable if you debug systematically.
Start with:
kubectl describe pod
kubectl logs --previous
Events + exit codes
Then apply the fix based on the root cause.