2020.07.14
1. K8s Installation Overview
- Contents: install K8s 1.18.1 / 1.15.11 (single control plane), K8s Dashboard v2.0.0, and Weave Scope
- Environment: Google Compute Engine, CentOS 7.7
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
- Reference: requirements for installing Kubeflow v1.0
compatible Kubernetes versions: 1.14, 1.15 (https://www.kubeflow.org/docs/started/k8s/overview/)
resources: 4 CPUs, 12 GB memory, 50 GB storage
2. Creating the Google Compute Engine VMs
- Connect to Cloud Shell
Google Account: ysjeon71.kubeflow3@gmail.com
- Create the VMs
The master node must have at least 2 CPUs.
$ gcloud config set project my-kubeflow-274101
$ gcloud compute instances create master --image-family centos-7 --image-project=centos-cloud --machine-type n1-standard-2 --zone=us-east1-b
…
$ gcloud compute instances create worker-1 --image-family centos-7 --image-project=centos-cloud --machine-type n1-standard-1 --zone=us-east1-c
…
$ gcloud compute instances create worker-2 --image-family centos-7 --image-project=centos-cloud --machine-type n1-standard-1 --zone=us-east1-d
…
$ gcloud compute instances list
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
master us-east1-b n1-standard-2 10.142.0.2 34.74.61.217 RUNNING
worker-1 us-east1-c n1-standard-1 10.142.0.3 34.73.0.109 RUNNING
worker-2 us-east1-d n1-standard-1 10.142.0.4 34.74.253.81 RUNNING
$
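The boot disk created above is well under the 50 GB storage guideline from section 1; if that guideline matters, gcloud's --boot-disk-size flag can be added to the create command (illustrative example, not part of the original setup):
$ gcloud compute instances create master --image-family centos-7 --image-project=centos-cloud --machine-type n1-standard-2 --boot-disk-size=50GB --zone=us-east1-b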
- VM start / stop commands
gcloud compute instances stop master --zone=us-east1-b && gcloud compute instances stop worker-1 --zone=us-east1-c && gcloud compute instances stop worker-2 --zone=us-east1-d
gcloud compute instances start master --zone=us-east1-b && gcloud compute instances start worker-1 --zone=us-east1-c && gcloud compute instances start worker-2 --zone=us-east1-d
- VM SSH access commands
gcloud compute ssh master --zone=us-east1-b
gcloud compute ssh worker-1 --zone=us-east1-c
gcloud compute ssh worker-2 --zone=us-east1-d
3. Preliminary setup
- Preliminary setup on each VM (master, worker-1, worker-2)
$ sudo su -
# swapoff -a && echo 0 > /proc/sys/vm/swappiness
# systemctl disable firewalld && systemctl stop firewalld
# setenforce 0 && sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# modprobe br_netfilter
# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# sysctl --system
# yum install docker -y && systemctl enable docker.service && systemctl start docker.service
- Additional step when installing K8s 1.15.11
# echo 1 > /proc/sys/net/ipv4/ip_forward
# echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
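As a quick sanity check (not in the original notes), the kernel parameters set above can be read back and the Docker daemon confirmed before installing K8s:
# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
# docker version --format '{{.Server.Version}}'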
4. Installing K8s
- Install the K8s packages (kubeadm, kubelet, kubectl) on all nodes
# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
#
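If you want to see which versions the repo offers before choosing one, yum can list them (optional check):
# yum list --showduplicates kubeadm --disableexcludes=kubernetes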
- When installing K8s 1.15.11
# yum install -y kubeadm-1.15.11 kubelet-1.15.11 kubectl-1.15.11 --disableexcludes=kubernetes
…
================================================================================================
Package Arch Version Repository Size
================================================================================================
Installing:
kubeadm x86_64 1.15.11-0 kubernetes 8.9 M
kubectl x86_64 1.15.11-0 kubernetes 9.5 M
kubelet x86_64 1.15.11-0 kubernetes 22 M
Installing for dependencies:
conntrack-tools x86_64 1.4.4-5.el7_7.2 updates 187 k
cri-tools x86_64 1.13.0-0 kubernetes 5.1 M
kubernetes-cni x86_64 0.7.5-0 kubernetes 10 M
libnetfilter_cthelper x86_64 1.0.0-10.el7_7.1 updates 18 k
libnetfilter_cttimeout x86_64 1.0.0-6.el7_7.1 updates 18 k
libnetfilter_queue x86_64 1.0.2-2.el7_2 base 23 k
socat x86_64 1.7.3.2-2.el7 base 290 k
…
# systemctl enable kubelet && systemctl start kubelet
- When installing K8s 1.18.1
# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# systemctl enable kubelet && systemctl start kubelet
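Either way, it is worth confirming that all three nodes ended up with the same versions (quick check, output omitted):
# kubeadm version -o short
# kubelet --version
# kubectl version --client --short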
- Initializing your control-plane node (Master node)
[root@master ~]# kubeadm config images pull
I0417 00:45:09.653314 2715 version.go:248] remote version is much newer: v1.18.2; falling back to: stable-1.15
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.15.11
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.15.11
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.15.11
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.15.11
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd:3.3.10
[config/images] Pulled k8s.gcr.io/coredns:1.3.1
[root@master ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.15.11 7cd3972af624 5 weeks ago 82.5 MB
k8s.gcr.io/kube-apiserver v1.15.11 0eaa5e1d871a 5 weeks ago 207 MB
k8s.gcr.io/kube-controller-manager v1.15.11 4d53b9ec5d96 5 weeks ago 159 MB
k8s.gcr.io/kube-scheduler v1.15.11 e671c2a84bb9 5 weeks ago 81.2 MB
k8s.gcr.io/coredns 1.3.1 eb516548c180 15 months ago 40.3 MB
k8s.gcr.io/etcd 3.3.10 2c4adeb21b4f 16 months ago 258 MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 2 years ago 742 kB
[root@master ~]# kubeadm init # --kubernetes-version=1.15.11
I0417 02:05:03.746158 12445 version.go:248] remote version is much newer: v1.18.2; falling back to: stable-1.15
…
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.142.0.2:6443 --token h2xfds.b2332sosatagu2yz \
--discovery-token-ca-cert-hash sha256:7f0491180db0b2bbdebd75c0c726d72cbab2f498f64980ee6c26583473b39040
[root@master ~]#
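If the join command above gets lost, or its token (valid for 24 hours by default) expires before the workers join, a fresh one can be printed on the master:
[root@master ~]# kubeadm token create --print-join-command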
- Setup for K8s users
[ysjeon71@master ~]$ mkdir -p $HOME/.kube
[ysjeon71@master ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[ysjeon71@master ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
[ysjeon71@master ~]$ kubectl version --short=true
Client Version: v1.15.11
Server Version: v1.15.11
[ysjeon71@master ~]$
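Optionally (not part of the original steps), kubectl bash completion can be enabled for this user; this assumes the bash-completion package is available from the base repos:
[ysjeon71@master ~]$ sudo yum install -y bash-completion
[ysjeon71@master ~]$ echo 'source <(kubectl completion bash)' >> ~/.bashrc
[ysjeon71@master ~]$ source ~/.bashrc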
- Joining the worker nodes (worker-1, worker-2) to your cluster
# kubeadm join 10.142.0.2:6443 --token h2xfds.b2332sosatagu2yz \
--discovery-token-ca-cert-hash sha256:7f0491180db0b2bbdebd75c0c726d72cbab2f498f64980ee6c26583473b39040
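After the join finishes, kubelet should be active on each worker (first command below), and the new nodes appear on the master (second command); they usually report NotReady until the pod network add-on in the next step is applied:
# systemctl status kubelet --no-pager
[root@master ~]# kubectl get nodes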
- Installing a Pod network add-on (Weave Net) on the master node
[root@master ~]# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 9m15s v1.18.1
worker-1 Ready <none> 2m35s v1.18.1
worker-2 Ready <none> 2m26s v1.18.1
[root@master ~]#
- Verifying the installation
[root@master ~]# kubectl get componentstatuses
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}
[root@master ~]#
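It also helps to confirm that the Weave Net and other system pods are all Running (output omitted):
[root@master ~]# kubectl get pods -n kube-system -o wide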
5. Testing K8s
- Deploying sock-shop
[ysjeon71_kubeflow3@master ~]$ kubectl create ns sock-shop
namespace/sock-shop created
$ curl "https://github.com/microservices-demo/microservices-demo/blob/master/deploy/kubernetes/complete-demo.yaml?raw=true" -o sock-shop.yaml -L
$ cp sock-shop.yaml sock-shop.yaml_org
$ sed 's/extensions\/v1beta1/apps\/v1/' -i sock-shop.yaml # ==> the Deployment apiVersion moved to apps/v1 from K8s v1.9.0 onward
$ vi sock-shop.yaml
# ===> For each Deployment, add a 'selector:' block under its 'spec:' section
spec:
  selector:
    matchLabels:
      name: <name of each Deployment, e.g. carts-db>
$ diff sock-shop.yaml_org sock-shop.yaml | head
1c1
< apiVersion: extensions/v1beta1
---
> apiVersion: apps/v1
8a9,11
> selector:
> matchLabels:
> name: carts-db
55c58
< apiVersion: extensions/v1beta1
$ kubectl apply -f sock-shop.yaml -n sock-shop
…
$ kubectl get svc front-end -n sock-shop -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
front-end NodePort 10.99.85.50 <none> 80:30001/TCP 19s name=front-end
$
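Before opening the firewall it is worth waiting until every sock-shop pod is Running; images are pulled on the first deploy, so this can take a few minutes:
$ kubectl get pods -n sock-shop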
- Google Cloud firewall configuration
ysjeon71@cloudshell:~ (my-kubeflow-274101)$ gcloud compute firewall-rules create allow-sock-shop --allow=tcp:30001
Creating firewall...⠹Created [https://www.googleapis.com/compute/v1/projects/my-kubeflow-274101/global/firewalls/allow-sock-shop].
Creating firewall...done.
NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
allow-sock-shop default INGRESS 1000 tcp:30001 False
$ gcloud compute instances list
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
master us-east1-b n1-standard-2 10.142.0.2 34.74.61.51 RUNNING
worker-1 us-east1-c n1-standard-1 10.142.0.3 34.74.61.217 RUNNING
worker-2 us-east1-d n1-standard-1 10.142.0.4 34.74.253.81 RUNNING
$
- Accessing sock-shop
http://34.74.61.217:30001/
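A quick curl from Cloud Shell (or anywhere outside the cluster) confirms the NodePort answers before opening it in a browser; any node's external IP works, since a NodePort is exposed on every node, and an HTTP 200 response is expected:
$ curl -I http://34.74.61.217:30001/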