<code>FailedScheduling</code> is the [[event]] reason emitted by the [[default-scheduler]] when it cannot place a pod on any node. Common messages:
* <code>[[Insufficient cpu]]</code>
* <code>[[Insufficient memory]]</code>
* <code>[[timed out waiting for the condition]]</code>
* <code>[[unbound immediate PersistentVolumeClaims]]</code>
* <code>[[volume node affinity conflict]]</code>
* <code>[[didn't match Pod's node affinity/selector]]</code>

 [[kubectl get events -A]]
 your-namespace 22s Warning [[FailedScheduling]] pod/your-elasticsearch-master-1 0/8 nodes are available: 1 [[Insufficient cpu]], 1 node(s) had taint {risk: etl}, that the pod didn't tolerate, 1 node(s) were unschedulable, 3 node(s) had [[taint]] {NAME: run-your-command}, that the pod didn't tolerate, 7 node(s) had [[volume node affinity conflict]].
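Several of the reasons above are [[taint]]s. To see which taints each node carries, and to let a pod tolerate one, something like the following can work; the <code>NoSchedule</code> effect is an assumption, since the event message does not include the effect:

 # List taints per node
 kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints

 # Pod spec fragment tolerating the {risk: etl} taint (effect assumed)
 tolerations:
 - key: "risk"
   operator: "Equal"
   value: "etl"
   effect: "NoSchedule"

Removing the taint instead (<code>kubectl taint nodes your-node risk-</code>) also works when the taint is no longer needed.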

 kube-system 10m Warning FailedScheduling pod/[[storage-provisioner]] 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..

 0/1 nodes are available: 1 node(s) had taint {[[node.kubernetes.io]]/not-ready: }, [[that the pod didn't tolerate]].
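The <code>node.kubernetes.io/not-ready</code> taint is added automatically to nodes whose <code>Ready</code> condition is false, so the fix is usually to repair the node rather than to tolerate the taint. A quick check (<code>your-node</code> is a placeholder):

 kubectl get nodes
 kubectl describe node your-node | grep -A10 Conditions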

 your-namespace 2m16s Warning FailedScheduling pod/your-elasticsearch-master-1 0/13 nodes are available: 1 Insufficient cpu, 1 node(s) had taint {risk: etl}, that the pod didn't tolerate, 3 node(s) had taint {Name: run-your-command}, that the pod didn't tolerate, 4 node(s) had volume node affinity conflict, 4 node(s) were unschedulable.

 default 8m51s Warning FailedScheduling pod/myprometheus-alertmanager-5967d4ff85-5glkh running [[PreBind plugin]] "[[VolumeBinding]]": binding volumes: [[timed out waiting for the condition]]
 default 4m58s Normal ExternalProvisioning persistentvolumeclaim/myprometheus-alertmanager waiting for a volume to be created, either by external provisioner "[[ebs.csi.aws.com]]" or manually created by system administrator
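This pair means the PersistentVolumeClaim never bound within the scheduler's timeout, usually because the external provisioner is missing or failing. The claim's own status and events are the place to look:

 # Is the claim Bound or Pending, and which StorageClass does it use?
 kubectl get pvc myprometheus-alertmanager
 kubectl get storageclass
 # Provisioner errors usually surface as events on the claim
 kubectl describe pvc myprometheus-alertmanager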
+ | |||
+ | default 32s Warning FailedScheduling pod/elasticsearch-master-0 0/2 nodes are available: 2 [[Insufficient cpu]], 2 [[Insufficient memory]]. | ||
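<code>Insufficient cpu</code> / <code>Insufficient memory</code> means no node has enough unreserved capacity for the pod's resource requests: either lower the requests or add capacity. To compare the two, a sketch (<code>your-node</code> is a placeholder):

 # What the node can still hand out
 kubectl describe node your-node | grep -A6 Allocatable
 # What the pod is asking for
 kubectl get pod elasticsearch-master-0 -o jsonpath='{.spec.containers[*].resources.requests}'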
+ | |||
+ | default 4m7s Warning FailedScheduling pod/[[kibana]]-kibana-654ccb45bd-pbp4r [[0/2 nodes are available: 2 Insufficient cpu.]] | ||
+ | |||
+ | default 18m Warning FailedScheduling pod/[[sentry-web]]-79b8bbccdf-r2qr4 0/4 nodes are available: 4 [[pod has unbound immediate PersistentVolumeClaims]]. | ||
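<code>unbound immediate PersistentVolumeClaims</code> points at claims whose StorageClass binds volumes immediately, before scheduling. One common remedy, sketched below, is a StorageClass that defers binding until a pod is scheduled; the name is made up here and the provisioner is borrowed from the EBS example above, so adjust both for your cluster:

 apiVersion: storage.k8s.io/v1
 kind: StorageClass
 metadata:
   name: ebs-wait-for-consumer   # hypothetical name
 provisioner: ebs.csi.aws.com
 volumeBindingMode: WaitForFirstConsumer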
+ | |||
+ | 2m22s Warning FailedScheduling pod/[[kotsadm]]-d686d8485-d2fhg 0/1 nodes are available: 1 node(s) [[didn't match Pod's node affinity/selector]]. [[preemption]]: 0/1 nodes are available: 1 [[Preemption is not helpful for scheduling]].. | ||
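For affinity/selector mismatches, compare what the pod demands with what the nodes actually offer:

 # The pod's nodeSelector (node affinity lives under .spec.affinity)
 kubectl get pod kotsadm-d686d8485-d2fhg -o jsonpath='{.spec.nodeSelector}'
 # Labels available on the nodes
 kubectl get nodes --show-labels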
+ | |||
+ | == [[kubectl get events -o yaml]] == | ||
+ | |||
+ | [[kubectl get events]] -o yaml | grep message | ||
+ | message: '0/2 nodes are available: 2 Insufficient cpu.' | ||
+ | message: '[[pod didn''t trigger scale-up]]: ' | ||
+ | |||
+ | |||
 apiVersion: v1
 items:
 - apiVersion: v1
   count: 83
   eventTime: null
   firstTimestamp: "2022-11-01T06:21:01Z"
   involvedObject:
     apiVersion: v1
     kind: Pod
     name: kibana-kibana-654ccb45bd-pbp4r
     namespace: default
     resourceVersion: "800158"
     uid: b10009ea-51df-47b8-be45-3e616f192f1a
   [[kind: Event]]
   lastTimestamp: "2022-11-01T07:46:28Z"
   message: '0/2 nodes are available: 2 Insufficient cpu.'
   metadata:
     creationTimestamp: "2022-11-01T06:21:01Z"
     name: kibana-kibana-654ccb45bd-pbp4r.172361a82959aa95
     namespace: default
     resourceVersion: "2825"
     uid: 13937a60-c494-47fa-aaa0-5380a6e1412f
   reason: FailedScheduling
   reportingComponent: ""
   reportingInstance: ""
   source:
     component: default-scheduler
   type: Warning
 - apiVersion: v1
   count: 511
   eventTime: null
   firstTimestamp: "2022-11-01T06:21:04Z"
   involvedObject:
     apiVersion: v1
     kind: Pod
     name: kibana-kibana-654ccb45bd-pbp4r
     namespace: default
     resourceVersion: "800163"
     uid: b10009ea-51df-47b8-be45-3e616f192f1a
   kind: Event
   lastTimestamp: "2022-11-01T07:46:18Z"
   message: 'pod didn''t trigger scale-up: '
   metadata:
     creationTimestamp: "2022-11-01T06:21:04Z"
     name: kibana-kibana-654ccb45bd-pbp4r.172361a8a65f467d
     namespace: default
     resourceVersion: "2824"
     uid: 38724dae-0dde-42e3-848b-fe24bbbccda3
   reason: [[NotTriggerScaleUp]]
   reportingComponent: ""
   reportingInstance: ""
   source:
     component: [[cluster-autoscaler]]
   type: Normal
 kind: List
 metadata:
   resourceVersion: ""
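The second event is from the [[cluster-autoscaler]], not the scheduler: <code>[[NotTriggerScaleUp]]</code> normally lists, per node group, why scaling out would not help, and here that list is empty. The autoscaler's own logs usually explain the decision; the deployment name and namespace below are typical defaults, not guaranteed:

 # Deployment name/namespace are assumptions; adjust for your install
 kubectl -n kube-system logs deployment/cluster-autoscaler | grep -i scale-up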

== Related ==
* <code>[[kubectl get pvc]]</code>
* [[GCP machine families]]: <code>[[e2-medium]]</code>
* [[Killing]]
* <code>[[kubectl get events -A]]</code>
* Source: <code>[[default-scheduler]]</code>
* <code>[[SchedulingDisabled]]</code>
* <code>[[NodeHasInsufficientMemory]]</code>
* [[FailedMount]]
* [[Karpenter]]
* [[Unable to schedule pod]]

== See also ==
* {{FailedScheduling}}
* {{kubectl events}}
* {{K8s tr}}

[[Category:K8s]]