Kubectl get events
Revision as of 10:24, 14 December 2023
TOMERGE: Kubernetes node events
kubectl get events --help
kubectl get events -A
kubectl get events -A | grep Warning
kubectl get events
kubectl get events -o yaml
kubectl get events --sort-by=.metadata.creationTimestamp
kubectl get events --sort-by=.metadata.creationTimestamp -A
kubectl get events --sort-by='.lastTimestamp'
kubectl get events -A | grep Warning | egrep "FailedMount|FailedAttachVolume|Unhealthy|ClusterUnhealthy|FailedScheduling"
kubectl get events -A | grep Normal
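The grep pipelines above can also be scripted against the JSON form of the same data. A minimal sketch, assuming `kubectl` is on `PATH` and configured for a reachable cluster; the function name `warning_reasons` is illustrative, not part of any tool:

```python
import json
import subprocess
from collections import Counter

def warning_reasons(events: dict) -> Counter:
    """Count type=Warning events by reason, given parsed
    `kubectl get events -A -o json` output."""
    return Counter(
        item.get("reason", "<none>")
        for item in events.get("items", [])
        if item.get("type") == "Warning"
    )

if __name__ == "__main__":
    # Hypothetical usage: requires a live cluster.
    raw = subprocess.run(
        ["kubectl", "get", "events", "-A", "-o", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for reason, count in warning_reasons(json.loads(raw)).most_common():
        print(f"{count:4d}  {reason}")
```

Note that `kubectl get events -A --field-selector type=Warning` achieves a similar filter natively, without grep.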
Events examples
Warning
BackoffLimitExceeded, CalculateExpectedPodCountFailed, ClusterUnhealthy, FailedMount, FailedScheduling, InvalidDiskCapacity, Unhealthy, .../...
your_namespace 28s Warning FailedScheduling pod/kibana-kibana-654ccb45bd-pbp4r 0/2 nodes are available: 2 Insufficient cpu.
your_namespace 4m53s Warning ProbeWarning pod/metabase-prod-f8f4b765b-h4pgs Readiness probe warning:
your_namespace 30m Warning BackoffLimitExceeded job/your-job27740460 Job has reached the specified backoff limit
your_namespace 26m Warning Unhealthy pod/elasticsearch-master-1 Readiness probe failed: Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )...
your_namespace 99s Warning BackOff pod/elasticsearch-master-0 Back-off restarting failed container
your_namespace 108s Warning BackOff pod/elasticsearch-master-1 Back-off restarting failed container
your_namespace 12m Warning PresentError challenge/prod-admin-tls-cert-dzmbt-2545 Error presenting challenge: error getting clouddns service account: secret "clouddns-dns01-solver-svc-acct" not found
your_namespace 27m Warning OOMKilling node/gke-you-pool4 Memory cgroup out of memory: Killed process 2768158 (python) total-vm:5613088kB, anon-rss:3051580kB, file-rss:65400kB, shmem-rss:0kB, UID:0 pgtables:7028kB oom_score_adj:997
your_namespace 8m51s Warning FailedScheduling pod/myprometheus-alertmanager-5967d4ff85-5glkh running PreBind plugin "VolumeBinding": binding volumes: timed out waiting for the condition
default 4m58s Normal ExternalProvisioning persistentvolumeclaim/myprometheus-alertmanager waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator
Solution: Install aws-ebs-csi-driver
default 107s Warning ProvisioningFailed persistentvolumeclaim/myprometheus-server (combined from similar events): failed to provision volume with StorageClass "gp2": rpc error: code = Internal desc = Could not create volume "pvc-4e14416c-c9c2-4d39-b749-9ce0fa98d597": could not create volume in EC2: UnauthorizedOperation: You are not authorized to perform this operation. Encoded authorization failure message: Goz6E3qExxxxx.../...
kube-system 9m44s Warning FailedMount pod/kube-dns-85df8994db-v8qdg MountVolume.SetUp failed for volume "kube-dns-config" : failed to sync configmap cache: timed out waiting for the condition
kube-system 43m Warning ClusterUnhealthy configmap/cluster-autoscaler-status Cluster has no ready nodes.
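Event ages in the examples above (28s, 4m53s, 30m, ...) come from `lastTimestamp`, which can be filtered on directly. A minimal sketch of selecting only recent warnings from parsed `kubectl get events -A -o json` output; `recent_warnings` is an illustrative name, not a kubectl feature:

```python
from datetime import datetime, timedelta, timezone

def recent_warnings(events: dict, minutes: int = 30) -> list:
    """Return (lastTimestamp, reason, message) tuples for Warning
    events seen within the last `minutes`, oldest first."""
    cutoff = datetime.now(timezone.utc) - timedelta(minutes=minutes)
    out = []
    for item in events.get("items", []):
        if item.get("type") != "Warning":
            continue
        ts = item.get("lastTimestamp")
        if not ts:
            # Aggregated events may carry timestamps elsewhere; skip here.
            continue
        when = datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
        if when >= cutoff:
            out.append((ts, item.get("reason"), item.get("message", "")))
    return sorted(out)
```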
Normal
Started, Created, Pulled, Pulling, Scheduled, Killing, Evict, SandboxChanged, SuccessfulCreate (ReplicaSet), SuccessfulDelete, NodeNotSchedulable, RemovingNode, WaitForFirstConsumer, ExternalProvisioning, TaintManagerEviction (Cancelling deletion of pod)
default 4s Normal Provisioning persistentvolumeclaim/myprometheus-alertmanager External provisioner is provisioning volume for claim "default/myprometheus-alertmanager"
Related: kubectl get pvc
ingress-nginx 53m Normal UpdatedLoadBalancer service/nginx-ingress-controller Updated load balancer with new hosts
ingress-nginx 54m Warning UnAvailableLoadBalancer service/nginx-ingress-controller There are no available nodes for LoadBalancer
Events
BackOff
Completed
Created
DeadlineExceeded
Failed
FailedAttachVolume
FailedCreatePodSandBox
FailedMount
FailedKillPod
FailedScheduling
FailedToUpdateEndpoint
FailedToUpdateEndpointSlices
Generated
PresentError
Pulled
Pulling
Requested
SawCompletedJob
Scheduled
Started
SuccessfulCreate
SuccessfulDelete
NetworkNotReady
NodeNotReady
NodeAllocatableEnforced
NoPods
NodeHasNoDiskPressure
UnAvailableLoadBalancer
Unhealthy
VolumeFailedDelete
Related
kubectl top
kubectl logs
gcloud logging read resource.labels.cluster_name
- job-controller
- GCP Node logs
gcloud logging read projects/yourproject/logs/kubelet
kubectl describe nodes (conditions:)
kubectl describe nodes | grep KubeletReady
--event-ttl
defines how long the kube-apiserver retains events before deleting them (default: 1h).
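To keep events longer, the flag is passed to the kube-apiserver. A hypothetical static-pod manifest fragment (the path and surrounding fields vary by distribution; shown for illustration only):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (illustrative path)
spec:
  containers:
  - command:
    - kube-apiserver
    - --event-ttl=24h   # retain events for 24 hours instead of the default 1h
```

On managed clusters (GKE, EKS, AKS) the apiserver flags are generally not user-configurable; exporting events to a logging backend is the usual alternative.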
See also
- Kubernetes events, Kubernetes node events: [ Normal | Warning ], kubectl events, kubectl get events: BackOff, FailedMount, FailedAttachVolume, TaintManagerEviction, InvalidDiskCapacity, FailedScheduling, kubectl describe events, Unhealthy, conditions:, UpdatedLoadBalancer, BackoffLimitExceeded
- K8s troubleshooting: kubectl logs, kubectl top, kubectl get events -A, kubectl describe pod, Liveness, Readiness, Kubernetes events, Pulling image, OOMKilled, ProbeWarning, Reason, FailedScheduling, errImagePull, ImagePullBackOff, Kubelet conditions: MemoryPressure, DiskPressure, KubeletHasSufficientPID, KubeletReady, kubectl [ debug | attach | exec ], kubectl cluster-info dump, SimKube, KWOK
- Kubernetes monitoring, node conditions, Kube-state-metrics (KSM), Prometheus, VictoriaMetrics, node-problem-detector, Thanos, log collection, ProbeWarning, Pixie, OpenMetrics, kind: PodMonitor, Jaeger