= Installation =
 
[[Kubernetes]], as of April 2019, can be installed in more than 40 different ways<ref>https://linuxacademy.com/blog/linux-academy/top-ten-ways-not-to-sink-the-kubernetes-ship/?utm_source=intercom&utm_medium=email&utm_campaign=AprilNewsletter2019</ref>; in particular, it can be installed using your Linux distribution packages or using the Kubernetes [[upstream]] version.
 
It is also possible to use any of the managed Kubernetes solutions offered by cloud computing providers, such as [[EKS]] from AWS, [[Google Kubernetes Engine]] (GKE) in [[Google Cloud Platform]] (GCP), or GKE on-prem<ref>https://cloud.google.com/gke-on-prem/</ref>, as well as CI/CD tools like [[Jenkins X]] and [[GitLab]]<ref>https://about.gitlab.com/solutions/kubernetes/</ref> that support integration with different Kubernetes cloud providers.
  
== Install Kubernetes on Debian/Ubuntu using upstream<ref>https://www.techrepublic.com/article/how-to-quickly-install-kubernetes-on-ubuntu/</ref> ==
  
 
* Our first step is to download and add the key for the '''Kubernetes and Docker''' install. Back at the terminal, issue the following command:
* Add the Docker repository on all your servers:
<pre>curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
 $(lsb_release -cs) \
 stable"</pre>
* Add the Kubernetes repository to your apt <code>sources.list</code> on all your servers:
<pre>curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF</pre>
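If the repositories were added correctly, apt should now be able to see candidate Docker packages; a quick, optional check using standard apt tooling:
<pre>apt-cache policy docker-ce</pre>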
  
* And now, '''install Docker, [[kubeadm]], [[kubelet]]<ref>https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/</ref>, and [[kubectl]]''' on all your servers:
<pre>sudo apt-get update
sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu kubelet=1.12.2-00 kubeadm=1.12.2-00 kubectl=1.12.2-00
sudo apt-mark hold docker-ce kubelet kubeadm kubectl</pre>
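To confirm that the packages are pinned and will not be upgraded accidentally, you can list the held packages (standard [[apt-mark hold|apt-mark]] usage):
<pre>apt-mark showhold</pre>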
  
==Initialize your [[master node]]==
  
 
* Enable '''net.bridge.bridge-nf-call-iptables''' on all your nodes:
<pre>echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p</pre>
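To verify that the kernel parameter took effect, you can read it back (standard sysctl usage):
<pre>sysctl net.bridge.bridge-nf-call-iptables</pre>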
  
 
* On only '''the Kube Master server''', initialize the cluster and configure '''kubectl''':
<code>sudo [[kubeadm init]] --pod-network-cidr=10.244.0.0/16</code>
  
 
When this completes, you'll be presented with the exact command you need to join the nodes to the master.
If you make a mistake and want to undo your changes, you can use the <code>[[kubeadm reset]]</code><ref>https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-reset/</ref> command.
  
 
* Before you join a node, you need to issue the following commands:
<pre>mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config</pre>
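Once the kubeconfig is in place, kubectl should be able to reach the API server; a minimal sanity check:
<pre>kubectl cluster-info</pre>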
  
 
* Install '''the Flannel networking''' plugin in the cluster by running this command '''on the Kube Master''' server:
<pre>kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml</pre>
  
* The <code>[[kubeadm init]]</code> command that you ran on the master should output a <code>kubeadm join</code> command containing a '''token and hash'''. You will need to copy that command '''from the master''' and run it on both worker nodes with '''sudo''':
<pre>sudo kubeadm join $controller_private_ip:6443 --token $token --discovery-token-ca-cert-hash $hash</pre>
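If you no longer have the join command from the <code>kubeadm init</code> output, you can regenerate it on the master (a standard kubeadm subcommand; the output will contain a fresh token):
<pre>sudo kubeadm token create --print-join-command</pre>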
  
* Now you are ready to verify that the cluster is up and running. On the Kube Master server, check the list of nodes:
<pre>kubectl get nodes
NAME                      STATUS   ROLES    AGE   VERSION
wboyd1c.mylabserver.com   Ready    master   54m   v1.12.2
wboyd2c.mylabserver.com   Ready    <none>   49m   v1.12.2
wboyd3c.mylabserver.com   Ready    <none>   49m   v1.12.2</pre>
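You can also verify that the control plane and Flannel pods came up correctly; all pods in the <code>kube-system</code> namespace should eventually reach the Running state:
<pre>kubectl get pods -n kube-system</pre>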
 
===Containers and Pods===

'''[[Pods]]'''<ref>https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/</ref> are the smallest and most basic building block of the Kubernetes model.
A pod consists of one or more containers, storage resources, and a unique IP address in the Kubernetes cluster network.

In order to run containers, Kubernetes '''schedules''' pods to run on servers in the cluster. When a pod is scheduled, the server will run the containers that are part of that pod.

Create a simple pod running an nginx container; for more configuration options, check the official Kubernetes Pod documentation<ref>https://kubernetes.io/docs/tasks/configure-pod-container/</ref>:

* Create a basic Pod definition file, <code>mypod.yml</code>, with your container image (note that Kubernetes object names must be lowercase):
<pre>
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
</pre>
* Create the Pod: <code>kubectl create -f mypod.yml</code>
* Get a list of pods and verify that your new nginx pod is in the Running state:
<pre>kubectl get pods</pre>
* Get more information about your nginx pod:
<pre>kubectl describe pod nginx</pre>
* Delete the pod:
<pre>kubectl delete pod nginx</pre>

See also the ReplicaSet<ref>https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/</ref> concept, sketched below.
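As an illustration of that concept, here is a minimal ReplicaSet sketch that keeps three copies of an nginx pod running; the name <code>nginx-replicaset</code> and the replica count are illustrative choices, not part of the original article:
<pre>
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
</pre>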
 
  
 
===Clustering and Nodes===
 
Kubernetes implements a clustered architecture. In a typical production environment, you will have multiple servers that are able to run your workloads (containers).
These servers, which actually run the containers, are called '''nodes'''.
A Kubernetes cluster has one or more '''control servers''', which manage and control the cluster and host the '''[[Kubernetes API]]'''. These control servers are usually separate from the worker nodes, which run applications within the cluster.
  
* Get a list of nodes:
::<code>[[kubectl get nodes]]</code>
  
 
* Get more information about a specific node:
::<code>[[kubectl describe node]] $node_name</code>
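For node IP addresses, OS image, and container runtime versions in a single listing, the built-in wide output format is handy:
<pre>kubectl get nodes -o wide</pre>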
 
 
===[[Networking in Kubernetes]]===

The Kubernetes networking model involves creating a '''virtual network''' across the whole cluster. This means that every pod in the cluster has a unique IP address and can communicate with any other pod, even if that other pod is running on a different node.

Kubernetes supports a variety of networking plugins that implement this model in various ways. One of the most popular and easiest to use<ref>https://linuxacademy.com/blog/linux-academy/top-ten-ways-not-to-sink-the-kubernetes-ship/?utm_source=intercom&utm_medium=email&utm_campaign=AprilNewsletter2019</ref> is '''Flannel''', although as of April 2019 it does not support network policies.
 
 
* Create a deployment with two nginx pods:
<pre>cat << EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80
EOF</pre>
* Create a busybox pod to use for testing:
<pre>cat << EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: radial/busyboxplus:curl
    args:
    - sleep
    - "1000"
EOF</pre>
  
* Get the IP addresses of your pods:
<pre>kubectl get pods -o wide</pre>
* Get the IP address of one of the nginx pods, then contact that nginx pod from the busybox pod using the nginx pod's IP address:
<pre>kubectl exec busybox -- curl $nginx_pod_ip</pre>
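When you are done testing, you can remove the test resources created above (names taken from the examples in this section):
<pre>kubectl delete deployment nginx
kubectl delete pod busybox</pre>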
== Activities ==
* [[CKA v1.18]]: [[Install Kubernetes master and nodes]]

== Related terms ==
* [[Kubernetes (snap install)]]: <code>[[juju deploy charmed-kubernetes]]</code>
* <code>[[eksctl create cluster]]</code>
* [[Deploy EKS cluster using Terraform]]
* [[Deploy GKE cluster using Terraform]]
* <code>[[kubeadm init]]</code>
* <code>[[kubectl version --short]]</code>
  
 
== See also ==
* {{kubectl}}
* {{K8s installation}}
  
  
