Setup a CentOS 7 Kubernetes/Docker Cluster (Tutorial)


Pre-steps: install 3 to 5 VMs or physical nodes running CentOS 7.x, and add every node's hostname and IP to the /etc/hosts file on each of them.
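For illustration, the /etc/hosts entries might look like this. The node names and the master address 192.168.57.59 match the examples later in this tutorial; the addresses for node5 and node6 are assumptions, so substitute your own:

```
192.168.57.59  centos-k8-node4   # master (address used later in this tutorial)
192.168.57.60  centos-k8-node5   # worker (assumed address)
192.168.57.61  centos-k8-node6   # worker (assumed address)
```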

Step 1:  Run the setup script below to configure the system, add the Docker and Kubernetes repositories, and install the packages on each k8s node:

# cat install_docker_k8.sh
#!/bin/bash
#
# Disable SELinux
setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

# Enable br_netfilter_kernel_module
modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables

# Disable SWAP
swapoff -a
cp /etc/fstab /etc/fstab.orig.BK ### back up this file before changing it
sed -i 's/^\/dev\/mapper\/centos-swap/#\/dev\/mapper\/centos-swap/g' /etc/fstab
cat /etc/fstab ### verify that the swap line got commented out
sleep 15

# update the system and install Docker CE from the Docker repository
yum update -y
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce

# setup kubernetes repository and install k8
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

#cat /etc/yum.repos.d/kubernetes.repo ## uncomment to verify the repo contents
yum install -y kubelet kubeadm kubectl

# ./install_docker_k8.sh
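To see what the swap-disabling sed in the script actually does, here is a small sketch that applies the same substitution to a scratch copy of an fstab (the file content below is a made-up sample, not your real /etc/fstab):

```shell
# Build a scratch fstab with a typical CentOS swap line
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF

# Same substitution the install script applies to the real /etc/fstab:
# prefix the centos-swap line with '#'
sed -i 's/^\/dev\/mapper\/centos-swap/#\/dev\/mapper\/centos-swap/g' "$tmp"

grep swap "$tmp"   # the swap line should now begin with '#'
```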

# Step 2:  reboot each node after the install

# Step 3:  start the Docker and kubelet services on each node:

# cat start_docker_k8_service.sh
systemctl start docker && systemctl enable docker
systemctl start kubelet && systemctl enable kubelet

systemctl status docker
systemctl status kubelet

# ./start_docker_k8_service.sh

# The script above will produce output like the following:

Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2018-12-07 20:26:32 EST; 121ms ago
Docs: https://docs.docker.com
Main PID: 3824 (dockerd)
CGroup: /system.slice/docker.service
├─3824 /usr/bin/dockerd -H unix://
└─3847 containerd --config /var/run/docker/containerd/containerd.toml --log-level info

……

Hint: Some lines were ellipsized, use -l to show in full.
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Fri 2018-12-07 20:26:32 EST; 76ms ago
Docs: https://kubernetes.io/docs/
Main PID: 4018 (kubelet)
CGroup: /system.slice/kubelet.service
└─4018 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/va…

Dec 07 20:26:32 centos_k8_node4 systemd[1]: Started kubelet: The Kubernetes Node Agent.

# Step 4:  change the cgroup driver (Docker and the kubelet must use the same one)

docker info | grep -i cgroup
sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

[root@centos_k8_master ~]# docker info | grep -i cgroup
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Cgroup Driver: cgroupfs
[root@centos_k8_master ~]# sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[root@centos_k8_master ~]# docker info | grep -i cgroup
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Cgroup Driver: cgroupfs
[root@centos_k8_master ~]# systemctl daemon-reload
[root@centos_k8_master ~]# systemctl restart kubelet
[root@centos_k8_master ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Fri 2018-12-07 20:37:30 EST; 8s ago
Docs: https://kubernetes.io/docs/
Process: 4889 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
Main PID: 4889 (code=exited, status=255)

Dec 07 20:37:30 centos_k8_master systemd[1]: Unit kubelet.service entered failed state.
Dec 07 20:37:30 centos_k8_master systemd[1]: kubelet.service failed.

Note: the kubelet crash-looping here is expected; it keeps restarting until "kubeadm init" (or "kubeadm join") hands it a configuration in Step 5.

[root@centos_k8_master ~]# docker info | grep -i cgroup
Cgroup Driver: cgroupfs
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
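The point of this step: kubeadm requires Docker and the kubelet to agree on the cgroup driver, otherwise the kubelet keeps exiting. A minimal sketch of the comparison, parsing the same "Cgroup Driver" line that `docker info` printed above (the line is hard-coded here as sample input):

```shell
# Sample 'docker info' line, copied from the output above
docker_line='Cgroup Driver: cgroupfs'
docker_driver=$(printf '%s\n' "$docker_line" | sed 's/^Cgroup Driver: //')

# Value written into 10-kubeadm.conf by the sed in this step
kubelet_driver=cgroupfs

if [ "$docker_driver" = "$kubelet_driver" ]; then
    echo "cgroup drivers match: $docker_driver"
else
    echo "mismatch: docker=$docker_driver kubelet=$kubelet_driver" >&2
fi
```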

Step 5:  Kubernetes Cluster Initialization

# cat config_k8_master.sh
# setup firewall to allow k8 process
firewall-cmd --zone=public --permanent --add-port=6443/tcp
firewall-cmd --zone=public --permanent --add-port=10250/tcp
firewall-cmd --zone=public --permanent --add-port=6443/udp
firewall-cmd --zone=public --permanent --add-port=10250/udp
firewall-cmd --zone=public --permanent --list-ports
firewall-cmd --reload
firewall-cmd --get-services

## setup k8 iptables to allow net.bridge traffic
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# create the k8s cluster with "kubeadm init"
echo "kubeadm init" > k8_init.log
#kubeadm init | tee -a k8_init.log
kubeadm init --apiserver-advertise-address=192.168.57.59 --pod-network-cidr=10.244.0.0/16 | tee -a k8_init.log

# deploy the flannel network to the kubernetes cluster using the kubectl command
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# run the script above:

# ./config_k8_master.sh

[root@centos-k8-node4 ~]# cat k8_init.log
kubeadm init
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [centos-k8-node4 localhost] and IPs [192.168.57.59 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [centos-k8-node4 localhost] and IPs [192.168.57.59 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [centos-k8-node4 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.57.59]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.506171 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "centos-k8-node4" as an annotation
[mark-control-plane] Marking the node centos-k8-node4 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node centos-k8-node4 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 94q0sq.1erns4x5ail242dv
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 192.168.57.59:6443 --token 94q0sq.1erns4x5ail242dv --discovery-token-ca-cert-hash sha256:8ab6c17b87d60e4fc5eaad8739e557c4788ab8e753b79ccc47bfb2ab5ed61dc6
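If this join command scrolls away, it is also saved in k8_init.log by the tee in the script above, and it can be regenerated on the master with `kubeadm token create --print-join-command`. A small sketch of pulling it back out of a saved log (the log content, token, and hash below are placeholders, not real credentials):

```shell
# Placeholder log; the real token/hash come from your own k8_init.log
cat > /tmp/k8_init_sample.log <<'EOF'
Your Kubernetes master has initialized successfully!
kubeadm join 192.168.57.59:6443 --token aaaaaa.bbbbbbbbbbbbbbbb --discovery-token-ca-cert-hash sha256:0000
EOF

# Grab the join line so it can be pasted onto each worker node
join_cmd=$(grep '^kubeadm join' /tmp/k8_init_sample.log)
echo "$join_cmd"
```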

[root@centos-k8-node4 ~]# mkdir -p $HOME/.kube
[root@centos-k8-node4 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: overwrite '/root/.kube/config'? y
[root@centos-k8-node4 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@centos-k8-node4 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
[root@centos-k8-node4 ~]#

[root@centos-k8-node4 ~]# systemctl restart docker
[root@centos-k8-node4 ~]# systemctl restart kubelet
[root@centos-k8-node4 ~]# systemctl status docker kubelet
[root@centos-k8-node4 ~]# kubectl get nodes
NAME              STATUS   ROLES    AGE   VERSION
centos-k8-node4   Ready    master   12m   v1.13.0
[root@centos-k8-node4 ~]#

[root@centos-k8-node4 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-4nkc6                  1/1     Running   1          12m
kube-system   coredns-86c58d9df4-q8bdp                  1/1     Running   1          12m
kube-system   etcd-centos-k8-node4                      1/1     Running   1          12m
kube-system   kube-apiserver-centos-k8-node4            1/1     Running   1          11m
kube-system   kube-controller-manager-centos-k8-node4   1/1     Running   2          11m
kube-system   kube-flannel-ds-amd64-fb427               1/1     Running   2          12m
kube-system   kube-proxy-8pct6                          1/1     Running   2          12m
kube-system   kube-scheduler-centos-k8-node4            1/1     Running   1          11m
[root@centos-k8-node4 ~]#

# Now add 2 more nodes to the cluster and verify.

# cat config_k8_slave.sh  ## run this script from each k8s worker node
# setup firewall to allow k8 process
firewall-cmd --zone=public --permanent --add-port=6443/tcp
firewall-cmd --zone=public --permanent --add-port=10250/tcp
firewall-cmd --zone=public --permanent --add-port=6443/udp
firewall-cmd --zone=public --permanent --add-port=10250/udp
firewall-cmd --zone=public --permanent --list-ports
firewall-cmd --reload
firewall-cmd --get-services

# setup k8 iptables to allow net.bridge traffic
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables

Run the script above on each worker node:

./config_k8_slave.sh

Then join each worker node to the master with the command that kubeadm init printed:

kubeadm join 192.168.57.59:6443 --token 94q0sq.1erns4x5ail242dv --discovery-token-ca-cert-hash sha256:8ab6c17b87d60e4fc5eaad8739e557c4788ab8e753b79ccc47bfb2ab5ed61dc6

## verify

[root@centos-k8-node4 ~]# kubectl get nodes
NAME              STATUS   ROLES    AGE     VERSION
centos-k8-node4   Ready    master   47m     v1.13.0
centos-k8-node5   Ready    <none>   4m41s   v1.13.0
centos-k8-node6   Ready    <none>   27s     v1.13.0
[root@centos-k8-node4 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-4nkc6                  1/1     Running   1          47m
kube-system   coredns-86c58d9df4-q8bdp                  1/1     Running   1          47m
kube-system   etcd-centos-k8-node4                      1/1     Running   1          46m
kube-system   kube-apiserver-centos-k8-node4            1/1     Running   1          46m
kube-system   kube-controller-manager-centos-k8-node4   1/1     Running   2          46m
kube-system   kube-flannel-ds-amd64-4w6kh               1/1     Running   0          5m30s
kube-system   kube-flannel-ds-amd64-fb427               1/1     Running   2          46m
kube-system   kube-flannel-ds-amd64-tp6df               1/1     Running   0          76s
kube-system   kube-proxy-8pct6                          1/1     Running   2          47m
kube-system   kube-proxy-djqp5                          1/1     Running   0          76s
kube-system   kube-proxy-v4rwl                          1/1     Running   0          5m30s
kube-system   kube-scheduler-centos-k8-node4            1/1     Running   1          46m
[root@centos-k8-node4 ~]#
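A quick way to script the "all nodes Ready" check is to count the STATUS column; the sample rows below are copied from the `kubectl get nodes` output above rather than read from a live cluster:

```shell
# Sample rows as 'kubectl get nodes --no-headers' would print them
nodes='centos-k8-node4 Ready master 47m v1.13.0
centos-k8-node5 Ready <none> 4m41s v1.13.0
centos-k8-node6 Ready <none> 27s v1.13.0'

# Count rows whose second column is exactly "Ready"
ready=$(printf '%s\n' "$nodes" | awk '$2 == "Ready" { n++ } END { print n+0 }')
echo "$ready of 3 nodes Ready"
```

On a live master the same pipeline would be fed from `kubectl get nodes --no-headers` instead of the sample string.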

[root@centos-k8-node4 ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

[root@centos-k8-node4 ~]# kubectl describe deployment nginx
Name:                   nginx
Namespace:              default
CreationTimestamp:      Fri, 14 Dec 2018 18:20:18 -0500
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=nginx
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-5c7588df (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  15s   deployment-controller  Scaled up replica set nginx-5c7588df to 1
[root@centos-k8-node4 ~]#

[root@centos-k8-node4 ~]# kubectl create service nodeport nginx --tcp=80:80
service/nginx created
[root@centos-k8-node4 ~]#

[root@centos-k8-node4 ~]# kubectl get nodes
NAME              STATUS   ROLES    AGE     VERSION
centos-k8-node4   Ready    master   50m     v1.13.0
centos-k8-node5   Ready    <none>   7m38s   v1.13.0
centos-k8-node6   Ready    <none>   3m24s   v1.13.0
[root@centos-k8-node4 ~]#

[root@centos-k8-node4 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        50m
nginx        NodePort    10.109.14.203   <none>        80:32543/TCP   44s
[root@centos-k8-node4 ~]#
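The 32543 in the PORT(S) column is the NodePort, allocated from Kubernetes' default 30000-32767 range, and it is what the curl below targets. A sketch of extracting it in a script, with the PORT(S) value hard-coded from the table above:

```shell
# PORT(S) value for the nginx service, copied from the output above
ports='80:32543/TCP'

# Strip the leading 'servicePort:' and the trailing '/protocol'
nodeport=${ports#*:}
nodeport=${nodeport%/*}
echo "$nodeport"

# It could then be used as, e.g.:  curl centos-k8-node6:"$nodeport"
```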

[root@centos-k8-node4 ~]# curl centos-k8-node6:32543
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@centos-k8-node4 ~]#

 

Reference:

https://www.howtoforge.com/tutorial/centos-kubernetes-docker-cluster/