kubectl describe pods
kubectl describe pods
kubectl describe pods | grep Unhealthy
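The command can also be narrowed to a single pod, a namespace, or a label selector. A minimal sketch of common variations (the pod name and namespace below are placeholders; the label is taken from the dashboard example that follows):

kubectl describe pod <pod-name> -n <namespace>                          # one pod in a given namespace
kubectl describe pods -l app.kubernetes.io/name=kubernetes-dashboard    # all pods matching a label
kubectl describe pods --all-namespaces | grep -B 2 Unhealthy            # look for failing probes everywhere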
Example: kubernetes-dashboard in EKS
kubectl describe pods
Name:         my-release-kubernetes-dashboard-b8c4f9c87-vljqm
Namespace:    default
Priority:     0
Node:         ip-10-0-3-153.us-east-2.compute.internal/10.0.3.153
Start Time:   Tue, 30 Nov 2021 00:09:06 +0300
Labels:       app.kubernetes.io/component=kubernetes-dashboard
              app.kubernetes.io/instance=my-release
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=kubernetes-dashboard
              app.kubernetes.io/version=2.4.0
              helm.sh/chart=kubernetes-dashboard-5.0.4
              pod-template-hash=b8c4f9c87
Annotations:  kubernetes.io/psp: eks.privileged
              seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status:       Running
IP:           10.0.3.167
IPs:
  IP:           10.0.3.167
Controlled By:  ReplicaSet/my-release-kubernetes-dashboard-b8c4f9c87
Containers:
  kubernetes-dashboard:
    Container ID:  docker://d09b4d275744f4f456f3cef5d1fa3ea29beb57e7695ced2cc84e37b1a52943e5
    Image:         kubernetesui/dashboard:v2.4.0
    Image ID:      docker-pullable://kubernetesui/dashboard@sha256:526850ae4ea9aba360e72b6df69fd3126b129d446efe83ac5250282b85f95b7f
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --namespace=default
      --auto-generate-certificates
      --metrics-provider=none
    State:          Running
      Started:      Tue, 30 Nov 2021 00:09:14 +0300
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  200Mi
    Requests:
      cpu:     100m
      memory:  200Mi
    Liveness:     http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /certs from kubernetes-dashboard-certs (rw)
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from my-release-kubernetes-dashboard-token-v9fvr (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kubernetes-dashboard-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-release-kubernetes-dashboard-certs
    Optional:    false
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  my-release-kubernetes-dashboard-token-v9fvr:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-release-kubernetes-dashboard-token-v9fvr
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:          <none>
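In a cluster with many pods it is usually easier to describe just the pod of interest by name. A minimal follow-up, using the pod name shown above, with the fields that are typically worth reading first:

kubectl describe pod my-release-kubernetes-dashboard-b8c4f9c87-vljqm -n default
# Check: State / Ready / Restart Count, the Liveness probe definition,
# Limits and Requests (QoS Class is Burstable here because they differ),
# and the Events section at the end (empty here, which means no recent problems).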
Example: Grafana in EKS
kubectl describe pods
Name:         grafana-65b996b88c-dxg4l
Namespace:    default
Priority:     0
Node:         ip-192-168-71-222.us-east-2.compute.internal/192.168.71.222
Start Time:   Mon, 29 Nov 2021 15:01:50 +0300
Labels:       app.kubernetes.io/instance=grafana
              app.kubernetes.io/name=grafana
              pod-template-hash=65b996b88c
Annotations:  checksum/config: a4050f488319bf769d1c8afa79d3cce1dc01de73d491b3516a39582a12f82c44
              checksum/dashboards-json-config: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
              checksum/sc-dashboard-provider-config: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
              checksum/secret: 7076c3f3658be7da8787fea30b2f227b4779eb4f071097eef69e82a413105f3d
              kubernetes.io/psp: eks.privileged
Status:       Running
IP:           192.168.66.98
IPs:
  IP:           192.168.66.98
Controlled By:  ReplicaSet/grafana-65b996b88c
Init Containers:
  init-chown-data:
    Container ID:  docker://3b6ec1fa25e20658dbb6cfa4ece98bbed657859bcba43c1de5ef0949c1a61f44
    Image:         busybox:1.31.1
    Image ID:      docker-pullable://busybox@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209
    Port:          <none>
    Host Port:     <none>
    Command:
      chown
      -R
      472:472
      /var/lib/grafana
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 29 Nov 2021 15:02:01 +0300
      Finished:     Mon, 29 Nov 2021 15:02:01 +0300
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/lib/grafana from storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ngqrq (ro)
Containers:
  grafana:
    Container ID:  docker://6f3add583b4726e8acb0bb35663c84204469743eaf7af26b3945391ac5c1f2c2
    Image:         grafana/grafana:8.2.5
    Image ID:      docker-pullable://grafana/grafana@sha256:00568d89c4f8a2cfa0d56f0fcd875b23ec8000b743a62f442e1ee91fce9a6e24
    Ports:         80/TCP, 3000/TCP
    Host Ports:    0/TCP, 0/TCP
    State:          Running
      Started:      Mon, 29 Nov 2021 15:02:08 +0300
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:3000/api/health delay=60s timeout=30s period=10s #success=1 #failure=10
    Readiness:      http-get http://:3000/api/health delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      GF_SECURITY_ADMIN_USER:      <set to the key 'admin-user' in secret 'grafana'>      Optional: false
      GF_SECURITY_ADMIN_PASSWORD:  <set to the key 'admin-password' in secret 'grafana'>  Optional: false
      GF_PATHS_DATA:               /var/lib/grafana/
      GF_PATHS_LOGS:               /var/log/grafana
      GF_PATHS_PLUGINS:            /var/lib/grafana/plugins
      GF_PATHS_PROVISIONING:       /etc/grafana/provisioning
    Mounts:
      /etc/grafana/grafana.ini from config (rw,path="grafana.ini")
      /var/lib/grafana from storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ngqrq (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      grafana
    Optional:  false
  storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  grafana
    ReadOnly:   false
  kube-api-access-ngqrq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age                    From                     Message
  ----     ------                  ----                   ----                     -------
  Normal   Scheduled               6m54s                  default-scheduler        Successfully assigned default/grafana-65b996b88c-dxg4l to ip-192-168-71-222.us-east-2.compute.internal
  Normal   SuccessfulAttachVolume  6m52s                  attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-87ffbfb1-0e66-4633-84df-9d3c655ee338"
  Normal   Pulling                 6m44s                  kubelet                  Pulling image "busybox:1.31.1"
  Normal   Pulled                  6m43s                  kubelet                  Successfully pulled image "busybox:1.31.1" in 795.579016ms
  Normal   Created                 6m43s                  kubelet                  Created container init-chown-data
  Normal   Started                 6m43s                  kubelet                  Started container init-chown-data
  Normal   Pulling                 6m42s                  kubelet                  Pulling image "grafana/grafana:8.2.5"
  Normal   Pulled                  6m36s                  kubelet                  Successfully pulled image "grafana/grafana:8.2.5" in 5.640256581s
  Normal   Created                 6m36s                  kubelet                  Created container grafana
  Normal   Started                 6m36s                  kubelet                  Started container grafana
  Warning  Unhealthy               6m34s (x2 over 6m36s)  kubelet                  Readiness probe failed: Get "http://192.168.66.98:3000/api/health": dial tcp 192.168.66.98:3000: connect: connection refused
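The Warning Unhealthy event above only fired twice, while Grafana was still starting, so it resolved itself; if it keeps repeating, the pod never becomes Ready. A hedged sketch of commands to isolate such probe failures (the pod name is taken from the output above):

kubectl describe pod grafana-65b996b88c-dxg4l | grep Unhealthy
kubectl get events --field-selector type=Warning --sort-by=.lastTimestamp   # warning events across the namespace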
Example: Redis in minikube
kubectl describe pod
Name:         redis
Namespace:    default
Priority:     0
Node:         minikube/192.168.99.100
Start Time:   Sat, 17 Jul 2021 20:07:06 +0400
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           172.17.0.5
IPs:
  IP:  172.17.0.5
Containers:
  redis:
    Container ID:   docker://f1bf24ad3f84de2d7bdd6a6d734d8f8b051c99ab7b138abfcdfd6af2283c4116
    Image:          redis
    Image ID:       docker-pullable://redis@sha256:b6a9fc3535388a6fc04f3bdb83fb4d9d0b4ffd85e7609a6ff2f0f731427823e3
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 17 Jul 2021 20:07:42 +0400
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /data/redis from redis-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sck6w (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  redis-storage:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kube-api-access-sck6w:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:          <none>
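This pod has no probes, limits, or events, so describe mainly confirms the image, the emptyDir mount at /data/redis, and the BestEffort QoS class. A minimal follow-up check, assuming the pod is still running and the redis image provides a shell with ls:

kubectl describe pod redis -n default
kubectl exec redis -- ls /data/redis    # confirm the emptyDir volume is mounted where describe says it is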
Events
.../...
Normal   Pulled   7m50s (x4 over 9m12s)   kubelet  Container image "123243534.dkr.ecr.eu-central-1.amazonaws.com/your_proyect/elasticsearch:latest" already present on machine
Warning  BackOff  4m32s (x23 over 9m11s)  kubelet  Back-off restarting failed container
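A repeating BackOff warning like this usually means the container is in CrashLoopBackOff: the image is already present and the container starts, but it keeps exiting. The usual next step is to read the logs of the previous attempt; a hedged sketch, with the pod name left as a placeholder because it is truncated in the output above:

kubectl describe pod <elasticsearch-pod-name> | grep -A 10 Events   # last state, exit code and recent events
kubectl logs <elasticsearch-pod-name> --previous                    # logs from the crashed attempt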
See also
- Pods: kubectl apply, kubectl [ pod get | top | delete | describe pods ], InitContainers, PodInitializing, CrashLoopBackOff, ImagePullPolicy:, NodeAffinity, NodeSelector, Terminated
- kubectl describe [ nodes | pods | deployment | pv | pvc | secrets | configmaps | networkpolicy | job ]
- kubectl: [ cp | config | create | delete | edit | explain | apply | exec | get | set | drain | uncordon | rolling-update | rollout | logs | run | auth | label | annotate | version | top | diff | debug | replace | describe | port-forward | proxy | scale | api-resources | expose deployment | expose | patch | attach | get endpoints | ~/.kube/config | kubectl logs --help | kubectl --help ], kubectl-convert, kubectl autoscale, kubectl.kubernetes.io