[kubernetes] Installing k8s with kubeadm


Installing kubeadm

Official docs: https://kubernetes.io/ko/docs/setup/

kubeadm install script

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
apt-cache madison kubeadm
apt-cache madison kubectl
apt-cache madison kubelet
sudo apt-get update
sudo apt-get install kubeadm=1.22.8-00 kubelet=1.22.8-00 kubectl=1.22.8-00 -y
sudo apt-mark hold kubelet kubeadm kubectl

Here, the Control Plane and a Node are both installed on a single VM.

kubeadm installation official docs: https://kubernetes.io/ko/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

Before installing kubeadm, check the prerequisites first.
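
One of those prerequisites is that swap must be disabled on every node. A minimal sketch (the sed line is only a rough way to keep swap off across reboots, so review /etc/fstab before and after):

# turn swap off immediately
sudo swapoff -a
# comment out any swap entry so it stays off after a reboot
sudo sed -i '/swap/ s/^/#/' /etc/fstab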

Update the apt package index and install the packages needed to use the Kubernetes apt repository.

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

Download the Google Cloud public signing key:

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

Add the Kubernetes apt repository:

echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

Check the available kubeadm, kubectl, and kubelet package versions:

apt-cache madison kubeadm
apt-cache madison kubectl
apt-cache madison kubelet

Update the apt package index, install kubelet, kubeadm, and kubectl, and pin their versions:

sudo apt-get update
sudo apt-get install kubeadm=1.22.8-00 kubelet=1.22.8-00 kubectl=1.22.8-00 -y
sudo apt-mark hold kubelet kubeadm kubectl
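
apt-mark hold pins the packages so a later apt upgrade cannot move them out of step with the cluster; you can confirm the pin with:

apt-mark showhold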

Cgroup Driver Error

Running sudo kubeadm init fails with the error below. The "already exists" preflight errors appear when init is re-run after a previous attempt failed; here, the underlying cause was a Cgroup driver mismatch (Docker was using cgroupfs while the kubelet expects systemd). See the issue linked below.

vagrant@docker ~ sudo kubeadm init --control-plane-endpoint 192.168.100.100 --pod-network-cidr 172.16.0.0/16 --apiserver-advertise-address 192.168.100.100
I0513 06:45:22.350921   11596 version.go:255] remote version is much newer: v1.24.0; falling back to: stable-1.22
[init] Using Kubernetes version: v1.22.9
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

https://github.com/kubernetes/kubeadm/issues/2605

Check Docker's current cgroup driver:

$ docker info | grep 'Cgroup Driver'  
Cgroup Driver: cgroupfs

Set it to systemd in /etc/docker/daemon.json so it matches the kubelet, restart Docker, reset the failed init, and run kubeadm init again:

# /etc/docker/daemon.json
{  
  "exec-opts": ["native.cgroupdriver=systemd"]
}
$ sudo systemctl restart docker
$ docker info | grep 'Cgroup Driver'
Cgroup Driver: systemd
$ sudo systemctl daemon-reload && sudo systemctl restart kubelet
$ sudo kubeadm reset
$ sudo kubeadm init --control-plane-endpoint 192.168.100.100 --pod-network-cidr 172.16.0.0/16 --apiserver-advertise-address 192.168.100.100

Creating the k8s Cluster

Official docs: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

You need to configure the networking the cluster will use (the pod network CIDR, the control plane endpoint address, the api-server advertise address, and so on).

kubeadm init --control-plane-endpoint <control_plane_ip> --pod-network-cidr 172.16.0.0/16 --apiserver-advertise-address <api_server_ip>
sudo kubeadm init --control-plane-endpoint 192.168.100.100 --pod-network-cidr 172.16.0.0/16 --apiserver-advertise-address 192.168.100.100
sudo kubeadm init --pod-network-cidr 172.16.0.0/16 --apiserver-advertise-address 192.168.100.100

kubeadm output

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.100.100:6443 --token htblfl.yyv25zv9bsq73faq \
        --discovery-token-ca-cert-hash sha256:7a2d8972ab81310f67a5abd5b59d2efca4064b8fb132bd6428decd1deb0f657a \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.100.100:6443 --token htblfl.yyv25zv9bsq73faq \
        --discovery-token-ca-cert-hash sha256:7a2d8972ab81310f67a5abd5b59d2efca4064b8fb132bd6428decd1deb0f657a

To authenticate as a regular user, copy the admin credential file to ~/.kube/config.

Kubernetes always looks for this configuration file in the <home directory>/.kube directory.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
vagrant@docker ~/.kube  kubectl get nodes
NAME     STATUS     ROLES                  AGE     VERSION
docker   NotReady   control-plane,master   5m14s   v1.22.8


The node reports NotReady because no network add-on is installed yet; that is addressed below. In the kubeadm join command printed at the end of the kubeadm init output, 6443 is the api-server's port, so 192.168.100.100:6443 is the api-server's IP and port.
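
A quick way to confirm the api-server endpoint is kubectl cluster-info, which prints the control plane URL:

kubectl cluster-info
# expected to show: Kubernetes control plane is running at https://192.168.100.100:6443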

--token is the key used to authenticate with the cluster.

Every Kubernetes component (kubectl included) must authenticate with the api-server. Internally, this authentication is done with certificates issued by the cluster CA.

--discovery-token-ca-cert-hash lets the joining node verify that the CA certificate it discovers has the expected hash.

A token is valid for only 24 hours, so if none is available you can create a new one with a command.

In short, the join command adds a new Worker Node or Control Plane node to the cluster, authenticating with the api-server to prove the node is legitimate.

kubeadm token create
cfc4zg.li7km2lv8z4vswfj
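
If you need the complete join command rather than just a token, kubeadm can generate the token and the matching CA certificate hash in one step:

kubeadm token create --print-join-command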

The command to compute the CA certificate's hash is shown below.

(To be precise, /etc/kubernetes/pki holds all of the cluster's certificates and private keys.)

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
   openssl dgst -sha256 -hex | sed 's/^.* //'
7a2d8972ab81310f67a5abd5b59d2efca4064b8fb132bd6428decd1deb0f657a

Implementing the Network Model

Official docs: https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model

The network model has to be implemented with an add-on; Calico is commonly used. (The DNS add-on, CoreDNS, does not become Ready until a network add-on is running.)

Calico official docs: https://projectcalico.docs.tigera.io/about/about-calico

On-premises install: https://projectcalico.docs.tigera.io/getting-started/kubernetes/self-managed-onprem/onpremises

Calico Network Add-On

Deploy the Tigera operator to the cluster:

kubectl create -f https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml

Download the custom-resources.yaml file:

curl https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml -O

In custom-resources.yaml, change the cidr block to the pod network CIDR that was passed to kubeadm init (172.16.0.0/16 here).
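
For example, a sketch of the edit, assuming the manifest still ships with Calico's default 192.168.0.0/16 pool (check your copy of the file first):

# replace Calico's default pod CIDR with the one passed to kubeadm init
sed -i 's|cidr: 192.168.0.0/16|cidr: 172.16.0.0/16|' custom-resources.yaml
grep -n 'cidr' custom-resources.yaml   # verify the change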

Create the Calico resources from custom-resources.yaml:

kubectl create -f custom-resources.yaml
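
The Calico pods can take a few minutes to start; one way to watch them come up:

watch kubectl get pods -n calico-system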

Check the cluster status.

(The NAMESPACE column here has nothing to do with the Linux namespaces covered with Docker.)

vagrant@docker ~ kubectl get pods -A
NAMESPACE          NAME                                       READY   STATUS    RESTARTS   AGE
calico-apiserver   calico-apiserver-85654f6c94-jb67q          1/1     Running   0          54s
calico-apiserver   calico-apiserver-85654f6c94-nlxts          1/1     Running   0          54s
calico-system      calico-kube-controllers-5d74cd74bc-mmqcc   1/1     Running   0          2m56s
calico-system      calico-node-fvh2z                          1/1     Running   0          2m56s
calico-system      calico-typha-7cf4d47df6-n82zm              1/1     Running   0          2m56s
kube-system        coredns-78fcd69978-llx42                   1/1     Running   0          18m
kube-system        coredns-78fcd69978-rzsdn                   1/1     Running   0          18m
kube-system        etcd-docker                                1/1     Running   1          18m
kube-system        kube-apiserver-docker                      1/1     Running   1          18m
kube-system        kube-controller-manager-docker             1/1     Running   1          18m
kube-system        kube-proxy-b9ghr                           1/1     Running   0          18m
kube-system        kube-scheduler-docker                      1/1     Running   1          18m
tigera-operator    tigera-operator-7cf4df8fc7-rvtdw           1/1     Running   0          4m46s

Finally, kubectl get nodes should show the node with STATUS Ready.

vagrant@docker ~ kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
docker   Ready    control-plane,master   21m   v1.22.8

Because this single VM serves as both control plane and worker, remove the master taint so regular pods can be scheduled on it:

$ kubectl taint node docker node-role.kubernetes.io/master-
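
To confirm the taint is gone (a Taints value of <none> means pods can now be scheduled on this node):

kubectl describe node docker | grep Taints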

간단한 예제

$ kubectl create deployment myweb --image=ghcr.io/c1t1d0s7/go-myweb

$ kubectl get deployments,replicasets,pods
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/myweb   1/1     1            1           4m40s

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/myweb-97dbf5749   1         1         1       4m40s

NAME                        READY   STATUS    RESTARTS   AGE
pod/myweb-97dbf5749-8tq2l   1/1     Running   0          4m40s

$ kubectl expose deployment myweb --port=80 --protocol=TCP --target-port=8080 --name myweb-svc --type=NodePort
$ kubectl get services
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        40m
myweb-svc    NodePort    10.96.114.201   <none>        80:31891/TCP   5s

$ curl 192.168.100.100:31891

Hello World!
myweb-97dbf5749-8tq2l

$ kubectl scale deployment myweb --replicas=3
$ kubectl get pods

myweb-97dbf5749-8tq2l   1/1     Running       0          12m
myweb-97dbf5749-9bm8l   1/1     Running       0          3m13s
myweb-97dbf5749-n29m2   1/1     Running       0          3m13s
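
Requests through the NodePort are load-balanced across the three pods, so repeating the request should return a mix of pod names:

for i in 1 2 3 4 5; do curl -s 192.168.100.100:31891; done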

$ kubectl delete service myweb-svc
$ kubectl delete deployment myweb
  • kubectl create deployment myweb --image=ghcr.io/c1t1d0s7/go-myweb
    • Creates a Deployment named myweb, which in turn creates a ReplicaSet and a Pod running the image.
  • kubectl get deployments,replicasets,pods
    • Shows the Deployment, ReplicaSet, and Pod resources.
  • kubectl expose deployment myweb --port=80 --protocol=TCP --target-port=8080 --name myweb-svc --type=NodePort
    • Creates a Service named myweb-svc that load-balances port 80 to the pods' port 8080 and, being of type NodePort, exposes it outside the cluster (see the endpoints check below).
  • kubectl scale deployment myweb --replicas=3
    • Scales the Deployment out to 3 pods.
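
To see which pod IPs actually back myweb-svc, inspect its endpoints:

kubectl get endpoints myweb-svc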

Adding a Worker Node

Create a new VM and register it with the cluster as a Worker Node.

The steps are as follows:

  1. Add a VM to the Vagrantfile (cpu: 2, mem: 2G).
  2. Install Docker.
  3. Install kubeadm, kubectl, and kubelet; the versions must match the control plane.
  4. Run the kubeadm join command on the worker node.
  5. Run kubectl get nodes on the control plane to confirm the node joined.

install_docker.sh script for installing Docker on Linux:

sudo apt update
sudo apt-get install -y \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
sudo usermod -aG docker vagrant
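
After the script finishes, log out and back in so the docker group membership takes effect, then sanity-check the installation:

docker version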

install_kubeadm.sh script for installing kubeadm (same as on the control plane):

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
apt-cache madison kubeadm
apt-cache madison kubectl
apt-cache madison kubelet
sudo apt-get update
sudo apt-get install kubeadm=1.22.8-00 kubelet=1.22.8-00 kubectl=1.22.8-00 -y
sudo apt-mark hold kubelet kubeadm kubectl

Fixing the Cgroup Driver Error

As on the control plane, set Docker's cgroup driver to systemd in /etc/docker/daemon.json, restart the services, and reboot:

sudo vi /etc/docker/daemon.json
{
   "exec-opts": ["native.cgroupdriver=systemd"]
}
sudo systemctl restart docker
sudo systemctl daemon-reload && sudo systemctl restart kubelet
sudo reboot

$ docker info | grep 'Cgroup Driver'
Cgroup Driver: systemd

Run the kubeadm join command on the worker node:

vagrant@worker2:~$ sudo kubeadm join 192.168.100.100:6443 --token cfc4zg.li7km2lv8z4vswfj \
 --discovery-token-ca-cert-hash sha256:7a2d8972ab81310f67a5abd5b59d2efca4064b8fb132bd6428decd1deb0f657a
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
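
Back on the control plane, the new node should now be listed; it will report NotReady until the Calico pods finish starting on it:

kubectl get nodes -o wide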

 
