Installing Kubernetes 1.26 on Amazon Linux 2 (1)
This guide is based on the kubeadm installation document in the official Kubernetes documentation. (Reference: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/)
- The OS used is Amazon Linux 2.
[root@minhangk8s-01 ~]# cat /etc/*release
NAME="Amazon Linux"
VERSION="2"
ID="amzn"
ID_LIKE="centos rhel fedora"
VERSION_ID="2"
PRETTY_NAME="Amazon Linux 2"
ANSI_COLOR="0;33"
CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2"
HOME_URL="https://amazonlinux.com/"
SUPPORT_END="2025-06-30"
Amazon Linux release 2 (Karoo)
- The installation requirements include 2 CPUs and 2 GB of memory. This walkthrough uses machines with 2 CPUs and 4 GB of memory, and the cluster will consist of 3 nodes in total.
- The prerequisites also call for turning off swap and SELinux, but Amazon Linux 2 already ships that way, so nothing needed to be done. As for SELinux, the manual does not mention it by name, but it does require full network connectivity between all machines in the cluster. If you can manage SELinux policies precisely, installation may still be possible with it enabled.
- For the remaining detailed installation requirements, please refer to the official documentation.
Installing Kubernetes with kubeadm
(Reference: https://v1-26.docs.kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/) Amazon Linux is a Red Hat-family OS, so the installation follows the instructions for Red Hat-based distributions.
- Setting SELinux to permissive mode: as mentioned above, SELinux is disabled out of the box on Amazon Linux.
[root@minhangk8s-01 ~]# getenforce
Disabled
Therefore there is no need to raise it to permissive mode (a stricter setting than disabled); you can simply leave it disabled and proceed.
- Adding the Kubernetes yum repository
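For reference, on a system where SELinux is actually Enforcing, the official kubeadm guide has you drop it to permissive before installing; a sketch of those commands (run as root, as in the rest of this walkthrough):

```shell
# Switch SELinux to permissive mode for the current boot
setenforce 0
# Persist the change across reboots
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
```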
[root@minhangk8s-01 ~]# cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://pkgs.k8s.io/core:/stable:/v1.26/rpm/
> enabled=1
> gpgcheck=1
> gpgkey=https://pkgs.k8s.io/core:/stable:/v1.26/rpm/repodata/repomd.xml.key
> exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
> EOF
-> Create the repo file so that yum can be used; everything up to EOF is a single command. Since this is being run as root, the sudo in the manual's commands has been dropped.
[root@minhangk8s-01 ~]# ls -l /etc/yum.repos.d/kubernetes.repo
-rw-r--r-- 1 root root 235 Oct 24 20:01 /etc/yum.repos.d/kubernetes.repo
-> Confirm that the repo file was created.
[root@minhangk8s-01 ~]# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.26/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.26/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
-> Confirm that the file contents match what was run above.
- Installing kubelet, kubeadm, and kubectl
[root@minhangk8s-01 ~]# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
Loaded plugins: langpacks, priorities, update-motd
amzn2-core | 3.6 kB 00:00:00
amzn2extra-docker | 2.9 kB 00:00:00
kubernetes | 1.7 kB 00:00:00
(1/5): amzn2extra-docker/2/x86_64/primary_db | 101 kB 00:00:00
(2/5): amzn2-core/2/x86_64/updateinfo | 729 kB 00:00:00
(3/5): amzn2extra-docker/2/x86_64/updateinfo | 13 kB 00:00:00
(4/5): amzn2-core/2/x86_64/primary_db | 67 MB 00:00:01
(5/5): kubernetes/primary | 19 kB 00:00:01
kubernetes 185/185
2 packages excluded due to repository priority protections
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.26.10-150500.1.1 will be installed
--> Processing Dependency: kubernetes-cni >= 1.1.1 for package: kubeadm-1.26.10-150500.1.1.x86_64
--> Processing Dependency: cri-tools >= 1.25.0 for package: kubeadm-1.26.10-150500.1.1.x86_64
---> Package kubectl.x86_64 0:1.26.10-150500.1.1 will be installed
---> Package kubelet.x86_64 0:1.26.10-150500.1.1 will be installed
--> Processing Dependency: socat for package: kubelet-1.26.10-150500.1.1.x86_64
--> Processing Dependency: ebtables for package: kubelet-1.26.10-150500.1.1.x86_64
--> Processing Dependency: conntrack for package: kubelet-1.26.10-150500.1.1.x86_64
--> Running transaction check
---> Package conntrack-tools.x86_64 0:1.4.4-5.amzn2.2 will be installed
--> Processing Dependency: libnetfilter_cthelper.so.0(LIBNETFILTER_CTHELPER_1.0)(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64
--> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.0)(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64
--> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.1)(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64
--> Processing Dependency: libnetfilter_cthelper.so.0()(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64
--> Processing Dependency: libnetfilter_cttimeout.so.1()(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64
--> Processing Dependency: libnetfilter_queue.so.1()(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64
---> Package cri-tools.x86_64 0:1.26.1-1.amzn2.0.2 will be installed
---> Package ebtables.x86_64 0:2.0.10-16.amzn2.0.1 will be installed
---> Package kubernetes-cni.x86_64 0:1.2.0-150500.2.1 will be installed
---> Package socat.x86_64 0:1.7.3.2-2.amzn2.0.1 will be installed
--> Running transaction check
---> Package libnetfilter_cthelper.x86_64 0:1.0.0-10.amzn2.1 will be installed
---> Package libnetfilter_cttimeout.x86_64 0:1.0.0-6.amzn2.1 will be installed
---> Package libnetfilter_queue.x86_64 0:1.0.2-2.amzn2.0.2 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
======================================================================================================================================================================================================================================
Package Arch Version Repository Size
======================================================================================================================================================================================================================================
Installing:
kubeadm x86_64 1.26.10-150500.1.1 kubernetes 9.5 M
kubectl x86_64 1.26.10-150500.1.1 kubernetes 9.8 M
kubelet x86_64 1.26.10-150500.1.1 kubernetes 20 M
Installing for dependencies:
conntrack-tools x86_64 1.4.4-5.amzn2.2 amzn2-core 186 k
cri-tools x86_64 1.26.1-1.amzn2.0.2 amzn2-core 18 M
ebtables x86_64 2.0.10-16.amzn2.0.1 amzn2-core 122 k
kubernetes-cni x86_64 1.2.0-150500.2.1 kubernetes 6.2 M
libnetfilter_cthelper x86_64 1.0.0-10.amzn2.1 amzn2-core 18 k
libnetfilter_cttimeout x86_64 1.0.0-6.amzn2.1 amzn2-core 18 k
libnetfilter_queue x86_64 1.0.2-2.amzn2.0.2 amzn2-core 24 k
socat x86_64 1.7.3.2-2.amzn2.0.1 amzn2-core 291 k
Transaction Summary
======================================================================================================================================================================================================================================
Install 3 Packages (+8 Dependent packages)
Total download size: 64 M
Installed size: 333 M
Downloading packages:
(1/11): conntrack-tools-1.4.4-5.amzn2.2.x86_64.rpm | 186 kB 00:00:01
(2/11): ebtables-2.0.10-16.amzn2.0.1.x86_64.rpm | 122 kB 00:00:00
(3/11): cri-tools-1.26.1-1.amzn2.0.2.x86_64.rpm | 18 MB 00:00:02
warning: /var/cache/yum/x86_64/2/kubernetes/packages/kubectl-1.26.10-150500.1.1.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID 9a296436: NOKEY
Public key for kubectl-1.26.10-150500.1.1.x86_64.rpm is not installed
(4/11): kubectl-1.26.10-150500.1.1.x86_64.rpm | 9.8 MB 00:00:03
(5/11): kubeadm-1.26.10-150500.1.1.x86_64.rpm | 9.5 MB 00:00:04
(6/11): libnetfilter_cttimeout-1.0.0-6.amzn2.1.x86_64.rpm | 18 kB 00:00:00
(7/11): libnetfilter_cthelper-1.0.0-10.amzn2.1.x86_64.rpm | 18 kB 00:00:00
(8/11): libnetfilter_queue-1.0.2-2.amzn2.0.2.x86_64.rpm | 24 kB 00:00:00
(9/11): socat-1.7.3.2-2.amzn2.0.1.x86_64.rpm | 291 kB 00:00:01
(10/11): kubernetes-cni-1.2.0-150500.2.1.x86_64.rpm | 6.2 MB 00:00:03
(11/11): kubelet-1.26.10-150500.1.1.x86_64.rpm | 20 MB 00:00:04
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 6.9 MB/s | 64 MB 00:00:09
Retrieving key from https://pkgs.k8s.io/core:/stable:/v1.26/rpm/repodata/repomd.xml.key
Importing GPG key 0x9A296436:
Userid : "isv:kubernetes OBS Project <isv:kubernetes@build.opensuse.org>"
Fingerprint: de15 b144 86cd 377b 9e87 6e1a 2346 54da 9a29 6436
From : https://pkgs.k8s.io/core:/stable:/v1.26/rpm/repodata/repomd.xml.key
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : kubernetes-cni-1.2.0-150500.2.1.x86_64 1/11
Installing : libnetfilter_queue-1.0.2-2.amzn2.0.2.x86_64 2/11
Installing : libnetfilter_cthelper-1.0.0-10.amzn2.1.x86_64 3/11
Installing : libnetfilter_cttimeout-1.0.0-6.amzn2.1.x86_64 4/11
Installing : conntrack-tools-1.4.4-5.amzn2.2.x86_64 5/11
Installing : kubectl-1.26.10-150500.1.1.x86_64 6/11
Installing : ebtables-2.0.10-16.amzn2.0.1.x86_64 7/11
Installing : socat-1.7.3.2-2.amzn2.0.1.x86_64 8/11
Installing : kubelet-1.26.10-150500.1.1.x86_64 9/11
Installing : cri-tools-1.26.1-1.amzn2.0.2.x86_64 10/11
Installing : kubeadm-1.26.10-150500.1.1.x86_64 11/11
Verifying : kubernetes-cni-1.2.0-150500.2.1.x86_64 1/11
Verifying : cri-tools-1.26.1-1.amzn2.0.2.x86_64 2/11
Verifying : socat-1.7.3.2-2.amzn2.0.1.x86_64 3/11
Verifying : kubeadm-1.26.10-150500.1.1.x86_64 4/11
Verifying : ebtables-2.0.10-16.amzn2.0.1.x86_64 5/11
Verifying : kubectl-1.26.10-150500.1.1.x86_64 6/11
Verifying : conntrack-tools-1.4.4-5.amzn2.2.x86_64 7/11
Verifying : libnetfilter_cttimeout-1.0.0-6.amzn2.1.x86_64 8/11
Verifying : kubelet-1.26.10-150500.1.1.x86_64 9/11
Verifying : libnetfilter_cthelper-1.0.0-10.amzn2.1.x86_64 10/11
Verifying : libnetfilter_queue-1.0.2-2.amzn2.0.2.x86_64 11/11
Installed:
kubeadm.x86_64 0:1.26.10-150500.1.1 kubectl.x86_64 0:1.26.10-150500.1.1 kubelet.x86_64 0:1.26.10-150500.1.1
Dependency Installed:
conntrack-tools.x86_64 0:1.4.4-5.amzn2.2 cri-tools.x86_64 0:1.26.1-1.amzn2.0.2 ebtables.x86_64 0:2.0.10-16.amzn2.0.1 kubernetes-cni.x86_64 0:1.2.0-150500.2.1 libnetfilter_cthelper.x86_64 0:1.0.0-10.amzn2.1
libnetfilter_cttimeout.x86_64 0:1.0.0-6.amzn2.1 libnetfilter_queue.x86_64 0:1.0.2-2.amzn2.0.2 socat.x86_64 0:1.7.3.2-2.amzn2.0.1
Complete!
- Verify the installation
[root@minhangk8s-01 ~]# kubelet --version
Kubernetes v1.26.10
[root@minhangk8s-01 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.10", GitCommit:"b8609d4dd75c5d6fba4a5eaa63a5507cb39a6e99", GitTreeState:"clean", BuildDate:"2023-10-18T11:42:11Z", GoVersion:"go1.20.10", Compiler:"gc", Platform:"linux/amd64"}
[root@minhangk8s-01 ~]# kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.10", GitCommit:"b8609d4dd75c5d6fba4a5eaa63a5507cb39a6e99", GitTreeState:"clean", BuildDate:"2023-10-18T11:44:31Z", GoVersion:"go1.20.10", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
The connection to the server localhost:8080 was refused - did you specify the right host or port?
- Starting kubelet
[root@minhangk8s-01 ~]# systemctl enable --now kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
-> Enabling it via systemctl ensures it starts automatically on every reboot.
[root@minhangk8s-01 ~]# ps -ef | grep kubelet
root 35977 35665 0 20:12 pts/1 00:00:00 grep --color=auto kubelet
-> However, checking shows that the process is not actually running.
[root@minhangk8s-01 ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Tue 2023-10-24 20:10:52 UTC; 6s ago
Docs: https://kubernetes.io/docs/
Process: 35918 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
Main PID: 35918 (code=exited, status=1/FAILURE)
Oct 24 20:10:52 minhangk8s-01 systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
Oct 24 20:10:52 minhangk8s-01 systemd[1]: Unit kubelet.service entered failed state.
Oct 24 20:10:52 minhangk8s-01 systemd[1]: kubelet.service failed.
-> The status output shows that it keeps failing to start with an error. This is because initialization has not been performed yet.
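When kubelet fails like this, the truncated systemctl output is rarely enough; the full logs can be pulled from the journal instead, for example:

```shell
# Show the most recent kubelet log entries without truncating lines
journalctl -u kubelet --no-pager | tail -n 20
```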
Initializing Kubernetes with kubeadm
(Reference: https://v1-26.docs.kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/)
[root@minhangk8s-01 ~]# kubeadm init --pod-network-cidr=10.244.0.0/16
I1024 20:55:19.376527 35860 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.26
[init] Using Kubernetes version: v1.26.10
[preflight] Running pre-flight checks
[WARNING FileExisting-tc]: tc not found in system path
[WARNING Hostname]: hostname "minhangk8s-01" could not be reached
[WARNING Hostname]: hostname "minhangk8s-01": lookup minhangk8s-01 on 168.126.63.1:53: no such host
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: E1024 20:55:19.862528 35868 remote_runtime.go:616] "Status from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/containerd/containerd.sock: connect: no such file or directory\""
time="2023-10-24T20:55:19Z" level=fatal msg="getting status of runtime: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/containerd/containerd.sock: connect: no such file or directory\""
, error: exit status 1
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
-> Running the init command straight away like this produced a number of errors...
Fixing the errors
- tc command not found
[WARNING FileExisting-tc]: tc not found in system path
Referring to the warning above, install the tc command with yum (on Amazon Linux 2 it is provided by the iproute-tc package, as the output below shows).
[root@minhangk8s-01 ~]# yum install tc
Loaded plugins: langpacks, priorities, update-motd
amzn2-core | 3.6 kB 00:00:00
Resolving Dependencies
--> Running transaction check
---> Package iproute-tc.x86_64 0:5.10.0-2.amzn2.0.3 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
======================================================================================================================================================================================================================================
Package Arch Version Repository Size
======================================================================================================================================================================================================================================
Installing:
iproute-tc x86_64 5.10.0-2.amzn2.0.3 amzn2-core 432 k
Transaction Summary
======================================================================================================================================================================================================================================
Install 1 Package
Total download size: 432 k
Installed size: 815 k
Is this ok [y/d/N]: y
Downloading packages:
iproute-tc-5.10.0-2.amzn2.0.3.x86_64.rpm | 432 kB 00:00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : iproute-tc-5.10.0-2.amzn2.0.3.x86_64 1/1
Verifying : iproute-tc-5.10.0-2.amzn2.0.3.x86_64 1/1
Installed:
iproute-tc.x86_64 0:5.10.0-2.amzn2.0.3
Complete!
- hostname could not be reached
[WARNING Hostname]: hostname "minhangk8s-01" could not be reached
Referring to the warning above -> add the hostname to the /etc/hosts file.
[root@minhangk8s-01 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost6 localhost6.localdomain6
192.168.100.161 MinhangK8S-01
192.168.100.162 MinhangK8S-02
192.168.100.163 MinhangK8S-03
192.168.100.164 MinhangK8S-04
192.168.100.165 MinhangK8S-05
-> The hostnames to be used for the k8s installation were added as above using the vi editor.
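Instead of editing interactively, the same entries can be appended with a heredoc (the IPs and hostnames below are this lab's values; substitute your own):

```shell
# Append the cluster hosts to /etc/hosts
cat <<EOF >> /etc/hosts
192.168.100.161 MinhangK8S-01
192.168.100.162 MinhangK8S-02
192.168.100.163 MinhangK8S-03
192.168.100.164 MinhangK8S-04
192.168.100.165 MinhangK8S-05
EOF
```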
- bridge-nf-call-iptables file missing
/proc/sys/net/bridge/bridge-nf-call-iptables does not exist
Referring to the error above:
[root@minhangk8s-01 ~]# ls -l /proc/sys/net/bridge/bridge-nf-call-iptables
ls: cannot access /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
-> The file really does not exist.
[root@minhangk8s-01 ~]# modprobe br_netfilter
-> Load the br_netfilter kernel module with modprobe.
[root@minhangk8s-01 ~]# ls -l /proc/sys/net/bridge/bridge-nf-call-iptables
-rw-r--r-- 1 root root 0 Oct 24 20:29 /proc/sys/net/bridge/bridge-nf-call-iptables
-> The file now exists.
[root@minhangk8s-01 ~]# cat /proc/sys/net/bridge/bridge-nf-call-iptables
0
-> The file's initial value is 0; if left as is, another error will demand that it be set to 1.
[root@minhangk8s-01 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
[root@minhangk8s-01 ~]# cat /proc/sys/net/bridge/bridge-nf-call-iptables
1
-> Set it to 1 in advance.
- /proc/sys/net/ipv4/ip_forward contents must be set to 1
/proc/sys/net/ipv4/ip_forward contents are not set to 1
Referring to the error above: this is the same situation as the bridge-nf-call-iptables file just fixed, except that this file already exists, so it only needs to be changed to 1.
[root@minhangk8s-01 ~]# ls -l /proc/sys/net/ipv4/ip_forward
-rw-r--r-- 1 root root 0 Oct 24 06:33 /proc/sys/net/ipv4/ip_forward
[root@minhangk8s-01 ~]# cat /proc/sys/net/ipv4/ip_forward
0
[root@minhangk8s-01 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
[root@minhangk8s-01 ~]# cat /proc/sys/net/ipv4/ip_forward
1
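Note that values written directly under /proc/sys do not survive a reboot, and the module load does not either. The official kubeadm prerequisites persist both roughly as follows:

```shell
# Load br_netfilter on every boot
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF

# Persist the bridge and forwarding sysctls
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply the settings without rebooting
sysctl --system
```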
- containerd.sock file missing
/var/run/containerd/containerd.sock: connect: no such file or directory
Referring to the error above -> this happened because the installation was started without first installing a container runtime. Install containerd as the runtime (Docker or CRI-O are also options).
[root@minhangk8s-01 ~]# yum install containerd
Loaded plugins: langpacks, priorities, update-motd
amzn2-core | 3.6 kB 00:00:00
Resolving Dependencies
--> Running transaction check
---> Package containerd.x86_64 0:1.6.19-1.amzn2.0.5 will be installed
--> Processing Dependency: runc for package: containerd-1.6.19-1.amzn2.0.5.x86_64
--> Running transaction check
---> Package runc.x86_64 0:1.1.7-4.amzn2 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
======================================================================================================================================================================================================================================
Package Arch Version Repository Size
======================================================================================================================================================================================================================================
Installing:
containerd x86_64 1.6.19-1.amzn2.0.5 amzn2extra-docker 28 M
Installing for dependencies:
runc x86_64 1.1.7-4.amzn2 amzn2extra-docker 3.0 M
Transaction Summary
======================================================================================================================================================================================================================================
Install 1 Package (+1 Dependent package)
Total download size: 31 M
Installed size: 111 M
Is this ok [y/d/N]: y
Downloading packages:
(1/2): runc-1.1.7-4.amzn2.x86_64.rpm | 3.0 MB 00:00:00
(2/2): containerd-1.6.19-1.amzn2.0.5.x86_64.rpm | 28 MB 00:00:00
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 66 MB/s | 31 MB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : runc-1.1.7-4.amzn2.x86_64 1/2
Installing : containerd-1.6.19-1.amzn2.0.5.x86_64 2/2
Verifying : runc-1.1.7-4.amzn2.x86_64 1/2
Verifying : containerd-1.6.19-1.amzn2.0.5.x86_64 2/2
Installed:
containerd.x86_64 0:1.6.19-1.amzn2.0.5
Dependency Installed:
runc.x86_64 0:1.1.7-4.amzn2
Complete!
-> The installation itself is done with yum, but an additional initialization step is required.
[root@minhangk8s-01 ~]# containerd config default | tee /etc/containerd/config.toml
disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
... (output truncated) ...
"io.containerd.timeout.task.state" = "2s"
[ttrpc]
address = ""
gid = 0
uid = 0
-> Initialize the config.toml file with the defaults.
The file then needs a change so that systemd is used as the cgroup driver. Open it with vi /etc/containerd/config.toml.
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
BinaryName = ""
CriuImagePath = ""
CriuPath = ""
CriuWorkPath = ""
IoGid = 0
IoUid = 0
NoNewKeyring = false
NoPivotRoot = false
Root = ""
ShimCgroup = ""
SystemdCgroup = false
-> Searching config.toml for SystemdCgroup finds exactly one occurrence, inside the block shown above. Change it to SystemdCgroup = true and save the file.
Then start containerd.
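If you prefer not to edit the file by hand, the same change can be scripted; a sketch, assuming the freshly generated default config where SystemdCgroup appears exactly once:

```shell
# Flip the cgroup driver setting in containerd's generated default config
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# Confirm the change took effect
grep 'SystemdCgroup' /etc/containerd/config.toml
```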
[root@minhangk8s-01 ~]# systemctl enable --now containerd
[root@minhangk8s-01 ~]# systemctl status containerd
● containerd.service - containerd container runtime
Loaded: loaded (/usr/lib/systemd/system/containerd.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2023-10-24 21:06:28 UTC; 3s ago
Docs: https://containerd.io
Process: 36662 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
Main PID: 36677 (containerd)
Tasks: 9
Memory: 13.3M
CGroup: /system.slice/containerd.service
└─36677 /usr/bin/containerd
Oct 24 21:06:28 minhangk8s-01 containerd[36677]: time="2023-10-24T21:06:28.919122697Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="c...load cni config"
Oct 24 21:06:28 minhangk8s-01 containerd[36677]: time="2023-10-24T21:06:28.919275118Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 24 21:06:28 minhangk8s-01 containerd[36677]: time="2023-10-24T21:06:28.919309596Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 24 21:06:28 minhangk8s-01 containerd[36677]: time="2023-10-24T21:06:28.919345366Z" level=info msg="containerd successfully booted in 0.021143s"
Oct 24 21:06:28 minhangk8s-01 containerd[36677]: time="2023-10-24T21:06:28.924024978Z" level=info msg="Start subscribing containerd event"
Oct 24 21:06:28 minhangk8s-01 containerd[36677]: time="2023-10-24T21:06:28.924225838Z" level=info msg="Start recovering state"
Oct 24 21:06:28 minhangk8s-01 containerd[36677]: time="2023-10-24T21:06:28.924360597Z" level=info msg="Start event monitor"
Oct 24 21:06:28 minhangk8s-01 containerd[36677]: time="2023-10-24T21:06:28.924519519Z" level=info msg="Start snapshots syncer"
Oct 24 21:06:28 minhangk8s-01 containerd[36677]: time="2023-10-24T21:06:28.924643966Z" level=info msg="Start cni network conf syncer for default"
Oct 24 21:06:28 minhangk8s-01 containerd[36677]: time="2023-10-24T21:06:28.924744136Z" level=info msg="Start streaming server"
Hint: Some lines were ellipsized, use -l to show in full.
[root@minhangk8s-01 ~]# ls -l /var/run/containerd/containerd.sock
srw-rw---- 1 root root 0 Oct 24 21:06 /var/run/containerd/containerd.sock
-> The containerd socket file now exists as well.
- Re-running kubeadm init
[root@minhangk8s-01 ~]# kubeadm init --pod-network-cidr=10.244.0.0/16
I1024 21:08:35.047954 37233 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.26
[init] Using Kubernetes version: v1.26.10
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local minhangk8s-01] and IPs [10.96.0.1 192.168.100.61]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost minhangk8s-01] and IPs [192.168.100.61 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost minhangk8s-01] and IPs [192.168.100.61 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.002525 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node minhangk8s-01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node minhangk8s-01 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: g2veat.u59md2i64tn12f71
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.100.61:6443 --token g2veat.u59md2i64tn12f71 \
--discovery-token-ca-cert-hash sha256:0c16e7ea8aa0142a482744df184d9f2282ed6070239af64b2e460dbf889108cd
-> The installation is complete. The --pod-network-cidr option is, as the name suggests, the CIDR used for pod-to-pod communication; since flannel will be used here, it is set to 10.244.0.0/16. Using a different value will cause errors when configuring flannel.
-> The discovery-token-ca-cert-hash value is needed when joining the other nodes, so be sure to save it.
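If the join command is lost (bootstrap tokens also expire after 24 hours by default), a fresh one can be printed on the control plane at any time:

```shell
# Generate a new token and print the full join command, including the CA cert hash
kubeadm token create --print-join-command
```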
- Verifying kubelet is running: confirm that kubelet, which was not running before initialization, is now up.
[root@minhangk8s-01 ~]# ps -ef | grep kubelet
root 37990 37612 3 21:09 ? 00:00:08 kube-apiserver --advertise-address=192.168.100.61 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
root 38127 1 1 21:09 ? 00:00:03 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.9
root 38644 35654 0 21:13 pts/1 00:00:00 grep --color=auto kubelet
[root@minhangk8s-01 ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Tue 2023-10-24 21:09:31 UTC; 4min 3s ago
Docs: https://kubernetes.io/docs/
Main PID: 38127 (kubelet)
Tasks: 10
Memory: 37.3M
CGroup: /system.slice/kubelet.service
└─38127 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cont...
Oct 24 21:12:45 minhangk8s-01 kubelet[38127]: E1024 21:12:45.517650 38127 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network pl...not initialized"
Oct 24 21:12:50 minhangk8s-01 kubelet[38127]: E1024 21:12:50.518731 38127 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network pl...not initialized"
Oct 24 21:12:55 minhangk8s-01 kubelet[38127]: E1024 21:12:55.520286 38127 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network pl...not initialized"
Oct 24 21:13:00 minhangk8s-01 kubelet[38127]: E1024 21:13:00.520868 38127 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network pl...not initialized"
Oct 24 21:13:05 minhangk8s-01 kubelet[38127]: E1024 21:13:05.522334 38127 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network pl...not initialized"
Oct 24 21:13:10 minhangk8s-01 kubelet[38127]: E1024 21:13:10.524541 38127 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network pl...not initialized"
Oct 24 21:13:15 minhangk8s-01 kubelet[38127]: E1024 21:13:15.525152 38127 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network pl...not initialized"
Oct 24 21:13:20 minhangk8s-01 kubelet[38127]: E1024 21:13:20.525628 38127 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network pl...not initialized"
Oct 24 21:13:25 minhangk8s-01 kubelet[38127]: E1024 21:13:25.526819 38127 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network pl...not initialized"
Oct 24 21:13:30 minhangk8s-01 kubelet[38127]: E1024 21:13:30.528932 38127 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network pl...not initialized"
Hint: Some lines were ellipsized, use -l to show in full.
-> The repeated "Container runtime network not ready" errors in the log above are expected at this point; they disappear once a pod network (CNI) add-on is installed.
- Setting up KUBECONFIG
[root@minhangk8s-01 ~]# kubectl get all
E1024 21:26:52.507964 39046 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E1024 21:26:52.508320 39046 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E1024 21:26:52.509325 39046 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E1024 21:26:52.509597 39046 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E1024 21:26:52.511074 39046 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right host or port?
-> Even though the installation is complete, kubectl commands will fail like this at first: without a kubeconfig, kubectl falls back to localhost:8080, where nothing is listening.
[root@minhangk8s-01 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@minhangk8s-01 ~]# source ~/.bash_profile
-> Add the KUBECONFIG environment variable to .bash_profile so kubectl can find the admin kubeconfig.
[root@minhangk8s-01 ~]# kubectl get all
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17m
-> kubectl commands now work as expected.
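As an aside, kubeadm's init output suggests an equivalent setup that also works for non-root users: copying admin.conf to the default path kubectl checks, ~/.kube/config, instead of exporting KUBECONFIG. A sketch (ADMIN_CONF is parameterized here so the snippet can be tried outside a control-plane node; on a real node it is /etc/kubernetes/admin.conf):

```shell
# Equivalent to the KUBECONFIG export: place the admin kubeconfig at the
# default location kubectl checks. On a control-plane node, set
# ADMIN_CONF=/etc/kubernetes/admin.conf; a throwaway file stands in here
# so the snippet runs anywhere.
ADMIN_CONF="${ADMIN_CONF:-$(mktemp)}"
mkdir -p "$HOME/.kube"
cp "$ADMIN_CONF" "$HOME/.kube/config"
chown "$(id -u):$(id -g)" "$HOME/.kube/config"
```

The copy approach has the advantage of not depending on a shell profile, so it works the same from cron jobs or other shells.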
Command list for copy & paste
For anyone who wants to follow the steps above, here are the commands that were used, collected so they can be conveniently copied and pasted.
getenforce
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.26/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.26/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
ls -l /etc/yum.repos.d/kubernetes.repo
cat /etc/yum.repos.d/kubernetes.repo
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
kubelet --version
kubeadm version
kubectl version
systemctl enable --now kubelet
ps -ef | grep kubelet
systemctl status kubelet
kubeadm init --pod-network-cidr=10.244.0.0/16
yum install tc
ls -l /proc/sys/net/bridge/bridge-nf-call-iptables
modprobe br_netfilter
ls -l /proc/sys/net/bridge/bridge-nf-call-iptables
cat /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
cat /proc/sys/net/bridge/bridge-nf-call-iptables
ls -l /proc/sys/net/ipv4/ip_forward
cat /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/ip_forward
cat /proc/sys/net/ipv4/ip_forward
yum install containerd
containerd config default | tee /etc/containerd/config.toml
vi /etc/containerd/config.toml
# inside vi: search with /SystemdCgroup and change the line to SystemdCgroup = true
systemctl enable --now containerd
systemctl status containerd
ls -l /var/run/containerd/containerd.sock
kubeadm init --pod-network-cidr=10.244.0.0/16
ps -ef | grep kubelet
systemctl status kubelet
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
kubectl get all
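Two caveats about the list above. First, the echo writes to /proc/sys and the modprobe take effect immediately but are lost on reboot; the official kubeadm prerequisites make them persistent with config files instead. Second, the vi edit is interactive, so it cannot be pasted as-is. A sketch of scripted equivalents for both, to be run as root on a node (the sed line assumes the stock containerd default config, which contains SystemdCgroup = false):

```shell
# Persist the br_netfilter module load across reboots.
cat <<EOF | tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

# Persist the sysctl settings across reboots, and apply them now.
cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system

# Non-interactive equivalent of the vi edit: flip SystemdCgroup to true.
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
```

With these in place the settings survive a reboot, whereas the echo-based versions above would silently revert.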