RunContainerError
CRITICAL
K8s Container Startup Failure
RunContainerError is a Kubernetes error indicating that the kubelet failed to start a container within a pod. It occurs after the pod has been scheduled but before the container's processes begin execution.
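The failure typically shows up as the pod's STATUS in kubectl get pods. The output below is illustrative only; the pod name, restart count, and age are placeholders.
BASH
$ kubectl get pods -n <namespace>
# Illustrative output:
# NAME       READY   STATUS              RESTARTS   AGE
# my-app-0   0/1     RunContainerError   2          3m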
Common Causes
- Image pull failures (wrong name, tag, registry auth)
- Volume mount issues (missing ConfigMap/Secret, wrong path, permission denied)
- Invalid container configuration (command/args, security context, resources); see the sketch after this list
- Missing or inaccessible runtime dependencies
- Container runtime failures (Docker/containerd issues)
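As a concrete illustration of the invalid-configuration case, the sketch below creates a throwaway pod whose command points at a binary that does not exist in the image. The pod name runcontainer-demo and the busybox tag are arbitrary choices; depending on the runtime and Kubernetes version, the status may surface as RunContainerError or a closely related start error.
BASH
# Reproduce the failure with a deliberately broken command (illustrative only)
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: runcontainer-demo
spec:
  restartPolicy: Never
  containers:
  - name: app
    image: busybox:1.36
    command: ["/no/such/binary"]
EOF
$ kubectl get pod runcontainer-demo
$ kubectl delete pod runcontainer-demo   # clean up
Running kubectl describe against the demo pod should show the runtime's exec error in the Events section.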
How to Fix
1 Check Pod Events and Logs
Examine detailed error messages from kubectl describe to identify the specific failure cause.
BASH
$ kubectl describe pod <pod-name> -n <namespace>
$ kubectl logs <pod-name> --previous
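When the describe output is long, the waiting reason and message can be pulled out directly. This is a hedged convenience, assuming the failing container is listed under containerStatuses; for RunContainerError the message usually names the exact exec, mount, or runtime failure.
BASH
$ kubectl get pod <pod-name> -n <namespace> \
    -o jsonpath='{range .status.containerStatuses[*]}{.name}{": "}{.state.waiting.reason}{" "}{.state.waiting.message}{"\n"}{end}'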
2 Verify Image Availability
Ensure the container image exists and is accessible from the cluster nodes.
BASH
$ kubectl get events --field-selector involvedObject.name=<pod-name>
$ docker pull <image-name:tag>   # On node if using Docker
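It can also help to confirm the exact image reference and pull secrets the pod actually uses before pulling by hand. The commands below are a sketch and assume a plain pod spec.
BASH
$ kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.spec.containers[*].image}{"\n"}'
$ kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.spec.imagePullSecrets[*].name}{"\n"}'
$ crictl pull <image-name:tag>   # on a containerd node; private registries may need --creds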
3 Validate Volume Mounts
Check that referenced ConfigMaps, Secrets, and PersistentVolumeClaims exist and are mounted correctly.
BASH
$ kubectl get configmap,secret,pvc -n <namespace>
$ kubectl get pod <pod-name> -o yaml | grep -A5 -B5 volumeMount
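To map every volume to the object backing it, and to catch mount failures reported as events, something like the following can be used; the jsonpath is a sketch that only covers ConfigMap, Secret, and PVC sources.
BASH
$ kubectl get pod <pod-name> -n <namespace> -o jsonpath='{range .spec.volumes[*]}{.name}{"\t"}{.configMap.name}{.secret.secretName}{.persistentVolumeClaim.claimName}{"\n"}{end}'
$ kubectl get events -n <namespace> --field-selector reason=FailedMount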
4 Test Container Configuration
Run the container image locally with a similar configuration to isolate runtime issues.
BASH
$ docker run --rm -it <image-name:tag> <command>
$ docker run --rm -it -v $(pwd):/app <image-name:tag>
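If the pod overrides command or args, comparing against what the image itself declares often exposes the mismatch. The sketch below assumes the image ships a shell at /bin/sh, which distroless images do not.
BASH
$ docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}} {{.Config.WorkingDir}}' <image-name:tag>
$ docker run --rm -it --entrypoint /bin/sh <image-name:tag>
# inside the container: confirm the binary from the pod's command exists and is executable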
5 Inspect Container Runtime
Check the container runtime (Docker/containerd) status and logs on the affected node.
BASH
$ journalctl -u docker --no-pager | tail -50
$ crictl ps -a | grep <pod-name>
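On nodes running containerd instead of Docker, the equivalent checks look roughly like this; unit and socket names can vary by distribution.
BASH
$ journalctl -u containerd --no-pager | tail -50
$ crictl pods | grep <pod-name>
$ crictl inspect <container-id>   # detailed status, including the last runtime error
$ crictl logs <container-id>      # stdout/stderr, if the process ever started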