Pod Status: Pending

Unless it lasts longer than a few minutes, a Pending status is normal: it usually means there aren't enough nodes in the cluster to run the new pod and the autoscaler is in the process of spinning up a new one. The pod should move on to the ContainerCreating step shortly. If it does not, check the nodeName of the pod in question.

# kubectl prints the nodeName without a trailing newline, so we append one
# `{.spec.nodeName}` is a JSONPath field specifier and should be typed exactly as shown.
kubectl get pod ${POD_NAME} -n ${NAMESPACE} -o jsonpath="{.spec.nodeName}{'\n'}"
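The query either prints a node name or nothing at all, and that distinction decides which of the two sections below applies. As a minimal sketch (the `check_node` helper is hypothetical, not part of kubectl):

```shell
# Hypothetical helper: branch on whether the pod has been assigned a node.
# Expects the output of the jsonpath query above as its only argument.
check_node() {
  if [ -z "$1" ]; then
    echo "no node assigned"      # scheduler has not placed the pod yet
  else
    echo "node assigned: $1"     # pod is stuck on a specific node/VM
  fi
}

# Example usage:
#   check_node "$(kubectl get pod ${POD_NAME} -n ${NAMESPACE} -o jsonpath='{.spec.nodeName}')"
```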

There IS NO node assigned to the pod

This usually means you have constraints and affinities set up on the pod that prevent any node, current or potential, from being a candidate for scheduling. It can often be resolved by manually deleting the old ReplicaSet to clean up old pods, which can allow the scheduler to find a valid candidate node for your new pod. This can be the case even if all the pods in the old ReplicaSet are failing; all that matters to the scheduler is whether those old pods are assigned to a node. It may also be caused by VM capacity issues with your cloud provider, such as quota limits or subnet IP exhaustion. Ideally, you should resolve the issue with the cloud provider. Alternatively, deleting the old ReplicaSet will free up resources on a node, allowing the scheduler to assign it to your pod.
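To see exactly which constraint is blocking placement, look at the scheduler's reasons in the pod's events (`kubectl describe pod`). A small, hypothetical filter for that output might look like this; the pattern list is an assumption and can be extended:

```shell
# Hypothetical filter: surface scheduler complaints from `kubectl describe` output.
# Usage: kubectl describe pod ${POD_NAME} -n ${NAMESPACE} | scheduling_errors
scheduling_errors() {
  grep -E 'FailedScheduling|Insufficient|affinity|taint' \
    || echo "no scheduling errors found"
}
```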

# Find old ReplicaSet
kubectl get pod ${OLD_POD_NAME} -n ${NAMESPACE} -o jsonpath='{.metadata.ownerReferences[?(@.kind=="ReplicaSet")].name}{"\n"}'

# Delete old ReplicaSet
kubectl delete replicaset ${OLD_REPLICA_SET} -n ${NAMESPACE}

There IS a node assigned to the pod

This usually indicates a problem with the node/VM: the cloud provider ran into trouble provisioning the VM after it had already notified the autoscaler to add the node to Kubernetes. The VM may never have been provisioned, or was provisioned and promptly deleted, or the cloud provider failed to image it properly. You will need to clear the node from Kubernetes as shown below.

# Cordon the node to prevent other pods from being scheduled on it.
kubectl cordon ${NODE_NAME}

# Drain the node to evict any pods still assigned to it
kubectl drain --ignore-daemonsets --delete-emptydir-data ${NODE_NAME}

# Delete the node to remove it from the Kubernetes API
kubectl delete node ${NODE_NAME}
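After the delete, the autoscaler should provision a replacement VM and the pending pod can be scheduled onto it. A hypothetical check that the bad node has actually left the cluster, reading `kubectl get nodes -o name` output from stdin:

```shell
# Hypothetical check: succeeds once the named node no longer appears in the list.
# Usage: kubectl get nodes -o name | node_gone "node/${NODE_NAME}" && echo "node removed"
node_gone() {
  ! grep -qx "$1"
}
```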