Kubernetes Pod Evicted
A Kubernetes pod enters the 'Evicted' status when the node it was running on experiences resource pressure (e.g., disk, memory, or inode exhaustion), when the node is cordoned or drained, or when a taint is applied to the node that the pod does not tolerate.
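Evicted pods are usually not cleaned up automatically: they remain listed with a Failed phase and an Evicted reason, and the kubelet's message records what triggered the eviction. A quick way to find them and read that message (a command sketch; adjust namespaces as needed):

$ kubectl get pods --all-namespaces --field-selector=status.phase=Failed
$ kubectl describe pod <pod-name>   # Status, Reason, and Message show the eviction details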
Common Causes
- Node resource exhaustion (disk pressure, memory pressure, inode pressure); see the condition check after this list
- Node cordoned or drained by an administrator
- Node taints preventing pod scheduling or forcing eviction
- Kubelet garbage collection policies (e.g., image cleanup)
- Failed volume mounts or storage issues
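Most of these causes are visible on the node object itself, either as conditions (MemoryPressure, DiskPressure, PIDPressure) or as taints, and as Evicted events in the cluster. A quick check (a sketch; the grep line count may need adjusting for your kubectl output):

$ kubectl describe node <node-name> | grep -A 6 Conditions
$ kubectl get events --all-namespaces --field-selector reason=Evicted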
How to Fix
1 Monitor Node Resources
Check the resource utilization (CPU, memory, disk, inodes) of the node where the pod was evicted. High utilization often triggers eviction policies.
$ kubectl describe node <node-name>
$ kubectl top node <node-name>
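To see which pods are actually driving that usage, and how much of the node's capacity is already committed in requests and limits, the following can help (the top command assumes metrics-server is installed):

$ kubectl top pods --all-namespaces --sort-by=memory
$ kubectl describe node <node-name> | grep -A 10 "Allocated resources"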

2 Optimize Pod Resource Requests and Limits
Review and adjust the resource requests and limits for your pods. Setting appropriate limits prevents pods from consuming too many resources and causing pressure on the node.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"

3 Verify Taints and Tolerations
Ensure that the pod has the necessary tolerations for any taints present on the node. A NoSchedule taint prevents new pods from being scheduled onto the node, while a NoExecute taint evicts running pods that do not tolerate it.
$ kubectl describe node <node-name> | grep Taints
# Example pod toleration
spec:
  tolerations:
  - key: "key"
    operator: "Exists"
    effect: "NoSchedule"

4 Examine Node Events
Look at the events associated with the node to understand why the eviction occurred. Kubelet often logs specific reasons for eviction.
$ kubectl describe node <node-name> | grep -A 10 Events
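Events are only retained for a limited time (one hour by default), so for an older eviction the kubelet log on the node itself is often the better source. Assuming a systemd-managed kubelet, run on the node:

$ journalctl -u kubelet | grep -i evict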

5 Clear Node Disk Space
If disk pressure is the cause, investigate and clear unnecessary files, old images, or logs on the node. Kubelet's garbage collection might not be aggressive enough.
$ # SSH into the node and check disk usage
$ df -h
$ docker system prune -a   # if using Docker runtime
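Most current clusters use containerd rather than Docker; there, crictl can prune unused images, and the kubelet's image garbage collection thresholds (imageGCHighThresholdPercent / imageGCLowThresholdPercent in the kubelet configuration) can be lowered so cleanup kicks in earlier. A sketch, assuming a reasonably recent crictl is installed on the node:

$ crictl rmi --prune        # remove unused images (containerd / CRI-O)
$ du -sh /var/log/pods/*    # check how much space pod logs are using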