Kubectl drain
{{lc}}
 [[kubectl]] drain

 [[kubectl drain]] [[your-node-name]] [[--ignore-daemonsets]]
 node/your-node-name already cordoned

 [[kubectl drain]] [[your-node-name]]

 [[kubectl drain]] mynode [[--as]]=superman --as-group=system:masters

* https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#drain
* https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/
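The commands above compose into a typical maintenance workflow. A minimal sketch, assuming the placeholder node name from the examples; <code>--delete-emptydir-data</code> is the replacement for the deprecated <code>--delete-local-data</code> flag:

```shell
# Hypothetical node name; pick a real one from `kubectl get nodes`.
NODE="ip-10-0-3-48.us-east-2.compute.internal"

# Cordon the node and evict its pods. DaemonSet-managed pods cannot be
# evicted, so --ignore-daemonsets is needed on most real clusters.
kubectl drain "$NODE" --ignore-daemonsets --delete-emptydir-data

# ... perform maintenance on the node ...

# Re-enable scheduling; drain never does this by itself.
kubectl uncordon "$NODE"
```

Without the final <code>kubectl uncordon</code>, the node stays <code>Ready,SchedulingDisabled</code> indefinitely.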
== Examples ==

[[Pod Disruption Budget]] example:
 [[kubectl]] drain ip-10-0-3-48.us-east-2.compute.internal --ignore-daemonsets
 error when evicting pods/"your-pod-name" (will retry after 5s): Cannot [[evict]] pod as it would violate the pod's [[disruption budget]]
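When drain fails as above, the blocking budget can be inspected with the <code>kubectl get pdb</code> and <code>kubectl describe pdb</code> subcommands; the namespace and budget name below are placeholders:

```shell
# Placeholders for the budget that refused the eviction.
PDB_NAMESPACE="your-namespace"
PDB_NAME="your-pdb-name"

# List every PodDisruptionBudget and how many voluntary
# disruptions each one currently allows.
kubectl get pdb --all-namespaces

# Show the selector, minAvailable/maxUnavailable, and recent
# events for the specific budget.
kubectl describe pdb "$PDB_NAME" -n "$PDB_NAMESPACE"
```

As the error message says, drain retries every 5 seconds; the eviction succeeds once the budget again has headroom.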
 [[kubectl]] drain ip-10-0-3-48.us-east-2.compute.internal --ignore-daemonsets --delete-local-data
 Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
 node/ip-10-0-3-48.us-east-2.compute.internal cordoned
 WARNING: ignoring DaemonSet-managed Pods: kube-system/aws-node-4zqph, kube-system/kube-proxy-w9vms
 evicting pod kube-system/coredns-5c778788f4-6hgp2
 evicting pod default/grafana-97d4b5896-ksvz5
 evicting pod default/metrics-server-679944f8f6-mbfns
 evicting pod default/my-release-kubernetes-dashboard-77db8d9694-s47rs
 pod/my-release-kubernetes-dashboard-77db8d9694-s47rs evicted
 pod/metrics-server-679944f8f6-mbfns evicted
 pod/coredns-5c778788f4-6hgp2 evicted
 pod/grafana-97d4b5896-ksvz5 evicted
 node/ip-10-0-3-48.us-east-2.compute.internal drained
 [[kubectl get nodes]]
 NAME                                       STATUS                         ROLES    AGE    VERSION
 ip-10-0-1-25.us-east-2.compute.internal    Ready                          <none>   3d1h   v1.20.11-eks-f17b81
 ip-10-0-3-240.us-east-2.compute.internal   Ready                          <none>   3d1h   v1.20.11-eks-f17b81
 ip-10-0-3-48.us-east-2.compute.internal    Ready,[[SchedulingDisabled]]   <none>   3d1h   v1.20.11-eks-f17b81
 [[kubectl get pods]]
 NAME                                               READY   STATUS    RESTARTS   AGE
 grafana-97d4b5896-qm9dg                            0/1     Pending   0          2m5s
 metrics-server-679944f8f6-45jvn                    0/1     Pending   0          78s
 my-release-kubernetes-dashboard-77db8d9694-4p7cg   0/1     Pending   0          78s
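The evicted pods are re-created but stay <code>Pending</code> until the scheduler finds room for them; the Events section of <code>kubectl describe pod</code> states the reason. A sketch using a pod name from the listing above:

```shell
# Pod name copied from the `kubectl get pods` output above.
POD="grafana-97d4b5896-qm9dg"

# The Events section at the bottom of the output explains why
# the pod is still Pending.
kubectl describe pod "$POD"
```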
== Related ==
* <code>[[eksctl drain nodegroup]]</code>
* <code>[[kubectl uncordon]]</code>
* [[How can I check, scale, delete, or drain my worker nodes in Amazon EKS?]]
* <code>[[kubectl scale deployment]]</code>
* [[Kubernetes Pod Disruptions]]
* 1.21 [[Graceful node shutdown]]
== See also ==
* {{kubectl drain}}
* {{kubectl nodes}}

[[Category:K8s]]
Latest revision as of 11:41, 28 February 2024