CRITICAL
K8s CrashLoopBackOff
A pod status indicating that the application inside the container is repeatedly crashing. Kubernetes restarts it with an exponentially increasing delay (capped at five minutes), but it keeps failing, so the pod stays stuck in a restart backoff loop.
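A quick way to confirm the state is to list pods and watch the STATUS and RESTARTS columns; a crash-looping pod shows CrashLoopBackOff with a climbing restart count:
BASH
$ kubectl get pods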
Common Causes
- Application crashes due to bugs, missing dependencies, or incorrect startup commands.
- Misconfigured liveness or readiness probes causing the container to be killed.
- Insufficient resources (CPU/Memory) or hitting resource limits (OOMKilled); a quick check for this case follows this list.
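To rule out the OOMKilled case, the kill reason can be read from the container's last terminated state. A minimal check, assuming a single-container pod (index 0):
BASH
$ kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'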
How to Fix
1 Check Pod Logs for Application Errors
Examine the logs of the crashing container to identify the root cause of the application failure.
BASH
$ kubectl logs <pod-name> --previous
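For multi-container pods, name the crashing container explicitly (here <container-name> is a placeholder); -f streams logs from the current attempt:
BASH
$ kubectl logs <pod-name> -c <container-name> --previous
$ kubectl logs -f <pod-name>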
2 Describe the Pod for Events
Use `kubectl describe` to see recent events, state changes, and potential system-level issues like image pull errors or resource constraints.
BASH
$ kubectl describe pod <pod-name>
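The same events can also be pulled directly, which is handy when the describe output gets long. A sketch that filters events to the failing pod:
BASH
$ kubectl get events --field-selector involvedObject.name=<pod-name> --sort-by=.lastTimestamp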
3 Debug with an Interactive Shell
If the image supports it, run a temporary debug pod to inspect the filesystem and environment manually.
BASH
$ kubectl run -i --tty debug --image=<your-image> --restart=Never -- sh
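If the image is minimal (e.g. distroless) and has no shell, the approach above won't work; on clusters that support ephemeral containers, kubectl debug can attach a throwaway container to the live pod instead. busybox here is just an example debug image:
BASH
$ kubectl debug -it <pod-name> --image=busybox --target=<container-name>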