Creating a Kubernetes Cluster
Install Docker and Kubernetes on All Servers
Nodes
Once we have logged in, we need to elevate privileges using sudo:
sudo su

Disable SELinux:
setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

Enable the br_netfilter module for cluster communication:
modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables

Ensure that the Docker dependencies are satisfied:
yum install -y yum-utils device-mapper-persistent-data lvm2

Add the Docker repo and install Docker:
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce

Set the cgroup driver for Docker to systemd, reload systemd, then enable and start Docker:
sed -i '/^ExecStart/ s/$/ --exec-opt native.cgroupdriver=systemd/' /usr/lib/systemd/system/docker.service
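This sed appends the cgroup-driver option to the unit file's ExecStart line. A minimal illustration of the same edit, run against a throwaway sample unit file (a hypothetical path, not the real /usr/lib/systemd/system/docker.service):

```shell
# Create a sample unit file with a typical ExecStart line (illustration only)
cat << 'EOF' > /tmp/sample-docker.service
[Service]
ExecStart=/usr/bin/dockerd -H fd://
EOF

# The same sed edit: append the cgroup-driver option to the ExecStart line
sed -i '/^ExecStart/ s/$/ --exec-opt native.cgroupdriver=systemd/' /tmp/sample-docker.service

grep '^ExecStart' /tmp/sample-docker.service
# ExecStart=/usr/bin/dockerd -H fd:// --exec-opt native.cgroupdriver=systemd
```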
systemctl daemon-reload
systemctl enable docker --now

Add the Kubernetes repo:
cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Install Kubernetes v1.14.0:
yum install -y kubelet-1.14.0-0 kubeadm-1.14.0-0 kubectl-1.14.0-0 kubernetes-cni-0.7.5

Enable the kubelet service. It will fail to start until the cluster is initialized; this is expected:
systemctl enable kubelet

Master
Once we have logged in, we need to elevate privileges using sudo:
sudo su

Disable SELinux:
setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

Enable the br_netfilter module for cluster communication:
modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables

Ensure that the Docker dependencies are satisfied:
yum install -y yum-utils device-mapper-persistent-data lvm2

Add the Docker repo and install Docker:
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce

Set the cgroup driver for Docker to systemd, reload systemd, then enable and start Docker:
sed -i '/^ExecStart/ s/$/ --exec-opt native.cgroupdriver=systemd/' /usr/lib/systemd/system/docker.service
systemctl daemon-reload
systemctl enable docker --now

Add the Kubernetes repo:
cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Install Kubernetes v1.14.0:
yum install -y kubelet-1.14.0-0 kubeadm-1.14.0-0 kubectl-1.14.0-0 kubernetes-cni-0.7.5

Enable the kubelet service. It will fail to start until the cluster is initialized; this is expected:
systemctl enable kubelet

Initialize the cluster using the IP range for Flannel:
kubeadm init --pod-network-cidr=10.244.0.0/16

Copy the kubeadm join command from the output. We will need it later.
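If the join command is lost, or the bootstrap token expires (the default TTL is 24 hours), a fresh join command can be printed on the master at any time:

```shell
# Generate a new bootstrap token and print the full join command for it
kubeadm token create --print-join-command
```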

Nodes
Log back in to the node and run the kubeadm join command from the master:
kubeadm join 10.0.1.100:6443 --token 8j9ng5.9dmce3s6rnhyx6ez \
  --discovery-token-ca-cert-hash sha256:48adc8e2ae7e83d92ec143f1720c4652da7c4703f0ca0197bf9d7a3bf6b5210b

Master
Exit sudo, copy the admin.conf to your home directory, and take ownership:
exit
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Deploy Flannel:
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel-old.yaml

Check the cluster state:
kubectl get pods --all-namespaces
kubectl get nodes
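The admin.conf steps above can also be wrapped in a small helper, sketched here with the source file and destination directory as parameters so it can be exercised with throwaway paths (the function name install_kubeconfig is our own; the real paths are /etc/kubernetes/admin.conf and $HOME/.kube):

```shell
# Sketch: copy a kubeconfig into place and take ownership of it.
install_kubeconfig() {
  src="$1"
  dest_dir="$2"
  mkdir -p "$dest_dir"
  cp "$src" "$dest_dir/config"
  chown "$(id -u):$(id -g)" "$dest_dir/config"
}

# Example with throwaway paths instead of the real cluster files:
echo 'kind: Config' > /tmp/admin.conf
install_kubeconfig /tmp/admin.conf /tmp/kube-test
cat /tmp/kube-test/config
# kind: Config
```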
Create and Scale a Deployment Using kubectl
Master
Create a simple deployment:
kubectl create deployment nginx --image=nginx

Inspect the pod:
kubectl get pods

Scale the deployment:
kubectl scale deployment nginx --replicas=4

Inspect the pods. We should have four now:
kubectl get pods
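The create and scale commands above can also be captured declaratively. A sketch of an equivalent manifest, written out with the same heredoc style used earlier (the file name nginx-deployment.yaml is an arbitrary choice):

```shell
# Write a Deployment manifest equivalent to: create deployment + scale to 4
cat << EOF > nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
EOF
```

Applying it with kubectl apply -f nginx-deployment.yaml reaches the same state, and the replica count can then be changed by editing the file and re-applying.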