Kubernetes pod affinity and anti affinity
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
- Elasticsearch PodAntiAffinity official example
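The Elasticsearch case is a typical use of podAntiAffinity: keeping Elasticsearch pods on different nodes so a single node failure cannot take out several data replicas. Below is a minimal sketch of that pattern, not the literal official example; the StatefulSet name, the app: elasticsearch label and the image tag are assumptions for illustration.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch            # hypothetical name, for illustration only
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch         # assumed label; the official example keys on its own cluster label
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      affinity:
        podAntiAffinity:
          # require each Elasticsearch pod to land on a different node (hostname)
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: elasticsearch
            topologyKey: kubernetes.io/hostname
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:8.13.0   # illustrative tag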
Example FailedScheduling messages emitted when node affinity, pod anti-affinity, or other constraints cannot be satisfied:

0/11 nodes are available: 1 Insufficient cpu, 1 Too many pods, 10 node(s) didn't match Pod's node affinity/selector
0/2 nodes are available: 1 Too many pods, 1 node(s) didn't match Pod's node affinity/selector
0/24 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) were unschedulable, 2 node(s) had taint {node.kubernetes.io/not-ready}, that the pod did not tolerate, 20 node(s) didn't match Pod's node affinity/selector.
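These messages appear in the Events section of the pending Pod. A minimal way to surface them (pod and namespace names are placeholders):

kubectl describe pod my-pod -n my-namespace                              # see Events, reason: FailedScheduling
kubectl get events -n my-namespace --field-selector reason=FailedScheduling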
Official examples
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - antarctica-east1
            - antarctica-west1
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: another-node-label-key
            operator: In
            values:
            - another-node-label-value
  containers:
  - name: with-node-affinity
    image: registry.k8s.io/pause:2.0

apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: topology.kubernetes.io/zone
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S2
          topologyKey: topology.kubernetes.io/zone
  containers:
  - name: with-pod-affinity
    image: registry.k8s.io/pause:2.0
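Both manifests come from the Kubernetes documentation linked above: the first one requires nodes in the antarctica zones and adds a weighted preference for another node label, the second one requires co-location with pods labelled security=S1 and prefers to avoid zones already running pods labelled security=S2. A minimal way to try them out (file names are assumptions):

kubectl apply -f pod-with-node-affinity.yaml
kubectl apply -f pod-with-pod-affinity.yaml
kubectl get pods -o wide          # the NODE column shows where each pod was scheduled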
Related
Activities
See also
- Kubernetes pod affinity and anti affinity, Kubernetes Node Affinity, default-scheduler, affinity:, NodeAffinity, spec.affinity.podAntiAffinity
- Pods: kubectl apply, kubectl [ pod get | top | delete | describe pods ], InitContainers, PodInitializing, CrashLoopBackOff, ImagePullPolicy:, NodeAffinity, NodeSelector, Terminated
- Karpenter: karpenter.sh, provisioners.karpenter.sh, Karpenter releases, best practices, karpenter.sh/capacity-type, karpenter.sh/discovery, kind: Provisioner, kind: AWSNodeTemplate, kubectl provisioner, TopologyKey, FailedDraining, Evict, DisruptionBlocked, Karpenter logs, controller., ttlSecondsUntilExpired, KEDA, NodePools, Kind: NodePool, Workload Consolidation, Disruption controls