1. Red Hat Installation and Subscription
https://developers.redhat.com/
Time synchronization is required before registering the subscription:
systemctl restart chronyd
subscription-manager unregister      # clear any previous registration
subscription-manager register        # prompts for Red Hat account credentials
subscription-manager attach --auto   # attach an available subscription automatically
dnf update
2. Network Configuration with nmcli
nmcli connection modify <profile_name> ipv4.addresses <ip_address>/<prefix_length>
nmcli connection modify <profile_name> ipv4.gateway <gateway_address>
nmcli connection modify <profile_name> ipv4.dns <dns_server_address>
nmcli connection modify <profile_name> ipv4.method manual
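A concrete sketch of the commands above with hypothetical values (profile name ens160; addresses from the 172.16.80.0/24 network these notes use later):

```shell
# Hypothetical profile name and addresses; substitute your own.
nmcli connection modify ens160 ipv4.addresses 172.16.80.130/24
nmcli connection modify ens160 ipv4.gateway 172.16.80.2
nmcli connection modify ens160 ipv4.dns 8.8.8.8
nmcli connection modify ens160 ipv4.method manual
nmcli connection up ens160    # re-activate the profile so the changes take effect
```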
3. Installing CRI-O and Kubernetes
See the official CRI-O documentation: https://cri-o.io/
Define the Kubernetes version and the CRI-O stream to use:
KUBERNETES_VERSION=v1.30
CRIO_VERSION=v1.30
Distributions using rpm packages
Add the Kubernetes repository
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/$KUBERNETES_VERSION/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/$KUBERNETES_VERSION/rpm/repodata/repomd.xml.key
EOF
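Note the unquoted EOF delimiter above: the shell expands $KUBERNETES_VERSION inside the heredoc before tee writes the repo file. A minimal self-contained check of that expansion:

```shell
# Unquoted EOF: the shell substitutes $KUBERNETES_VERSION inside the heredoc.
KUBERNETES_VERSION=v1.30
cat <<EOF
baseurl=https://pkgs.k8s.io/core:/stable:/$KUBERNETES_VERSION/rpm/
EOF
# prints: baseurl=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/
```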
Add the CRI-O repository
cat <<EOF | tee /etc/yum.repos.d/cri-o.repo
[cri-o]
name=CRI-O
baseurl=https://pkgs.k8s.io/addons:/cri-o:/stable:/$CRIO_VERSION/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/addons:/cri-o:/stable:/$CRIO_VERSION/rpm/repodata/repomd.xml.key
EOF
Install package dependencies from the official repositories
dnf install -y container-selinux
Install the packages
dnf install -y cri-o kubelet kubeadm kubectl
Start CRI-O
systemctl start crio.service
Bootstrap a cluster
swapoff -a
modprobe br_netfilter
sysctl -w net.ipv4.ip_forward=1
kubeadm init   # clone the node (step 4) before running this
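The swapoff, modprobe, and sysctl commands above are one-shot and do not survive a reboot. A sketch of persisting them on RHEL, using the standard file locations (paths and the k8s.conf file names are conventional choices, not from the original notes):

```shell
# Persist the kubeadm prerequisites across reboots.
sed -i '/swap/s/^/#/' /etc/fstab                         # comment out swap entries
echo 'br_netfilter' > /etc/modules-load.d/k8s.conf       # load the module at boot
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/k8s.conf  # enable forwarding at boot
sysctl --system                                          # apply sysctl files now
```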
4. Writing /etc/hosts and Cloning Nodes
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.80.130 master
172.16.80.131 worker1
172.16.80.134 worker2
5. Control-Plane Setup and Node Join
kubeadm init    # on the control-plane (CP) node
kubeadm join ...    # on each worker node; use the full command printed by kubeadm init
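If the join command printed by kubeadm init has been lost, it can be regenerated on the control-plane node with a standard kubeadm subcommand:

```shell
# Prints a complete "kubeadm join ..." command with a fresh token.
kubeadm token create --print-join-command
```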
6. Installing a CNI Network Plugin
Install Helm first:
curl -sSL https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
export PATH=$PATH:/usr/local/bin   # add if /usr/local/bin is not on the default PATH
helm repo add cilium https://helm.cilium.io/
helm repo update
helm install cilium cilium/cilium --namespace=kube-system
7. Installing Nginx Ingress
What is an Ingress?
An Ingress exposes HTTP and HTTPS routes from outside the cluster to Services inside the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
The Service for the Ingress controller normally uses the LoadBalancer type, which clouds such as AWS or OpenStack can provision dynamically.
In an on-premises environment, however, you must supply the externally exposed IPs yourself before the LoadBalancer type can be used.
Installing MetalLB
Used to provide load balancing and external IPs in an on-premises environment.
A CNCF open-source project that provides LoadBalancer functionality.
Pre-installation configuration:
kubectl edit configmap -n kube-system kube-proxy
mode: "ipvs"
ipvs:
  strictARP: true
Edit the existing mode and ipvs.strictARP entries (modify values that are already present).
Installation:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.8/config/manifests/metallb-native.yaml
vi metalLB.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
  - 172.16.80.20-172.16.80.30
  autoAssign: true
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
  - default
Set addresses to an unused range inside the host subnet (here: 172.16.80.0/24) that does not collide with existing hosts.
kubectl create -f metalLB.yaml
Installing ingress-nginx
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace
kubectl get pods -n ingress-nginx
Create the example Pods and Services (two sets):
vi example.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: hashicorp/http-echo
        args:
        - "-text=Hello from Kubernetes"
        ports:
        - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: example-app-service
  namespace: default
spec:
  selector:
    app: example-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5678
An app that returns the text given in args for every HTTP request it receives.
vi another.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: another-app
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: another-app
  template:
    metadata:
      labels:
        app: another-app
    spec:
      containers:
      - name: another-app
        image: hashicorp/http-echo
        args:
        - "-text=Hello from Another App"
        ports:
        - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: another-app-service
  namespace: default
spec:
  selector:
    app: another-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5678
Create an Ingress that maps these Services to a domain:
vi ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-service-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: "nginx"
  rules:
  - http:
      paths:
      - path: /app1
        pathType: Prefix
        backend:
          service:
            name: example-app-service
            port:
              number: 80
      - path: /app2
        pathType: Prefix
        backend:
          service:
            name: another-app-service
            port:
              number: 80
If you specify host under rules, the Ingress becomes reachable only through that host name (not via the IP).
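A sketch of that host-based variant (app.example.com is a hypothetical name, not from the original setup):

```yaml
spec:
  ingressClassName: "nginx"
  rules:
  - host: app.example.com   # hypothetical host; requests must now carry this Host header
    http:
      paths:
      - path: /app1
        pathType: Prefix
        backend:
          service:
            name: example-app-service
            port:
              number: 80
```

Testing it by IP then requires the name to resolve, e.g. curl --resolve app.example.com:80:172.16.80.20 http://app.example.com/app1.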
k apply -f example.yaml    # k is an alias for kubectl
k apply -f another.yaml
k apply -f ingress.yaml
Verify that the Ingress works:
k get svc -n ingress-nginx
Result:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.105.122.114 172.16.80.20 80:31924/TCP,443:31114/TCP 6h22m
ingress-nginx-controller-admission ClusterIP 10.102.75.246 <none> 443/TCP 6h22m
If EXTERNAL-IP is populated, the setup succeeded.
Test
curl 172.16.80.20/app1
curl 172.16.80.20/app2
If a host name was specified, test with that domain instead.
8. NFS for the Test App
Install on every node:
dnf install -y nfs-utils
Install on the server:
dnf install -y nfs-server
Create the shared directory on the master (as a regular user):
sudo apt-get update && sudo apt-get install -y nfs-kernel-server   # Ubuntu equivalent
sudo mkdir -m 1777 /share   # same as mkdir /share && chmod 1777 /share
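The notes never show the NFS export itself; a minimal sketch, assuming /share is exported to the 172.16.80.0/24 subnet these nodes live on (the export options are assumptions, not from the original):

```shell
# Assumed export options; adjust the subnet and options for your environment.
echo '/share 172.16.80.0/24(rw,sync,no_root_squash)' | sudo tee -a /etc/exports
sudo exportfs -ra                        # re-export everything in /etc/exports
sudo systemctl enable --now nfs-server   # start NFS now and at boot
```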
Installing nfs-provisioner
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
# change the server name to match your environment
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=cp --set nfs.path=/share
Set the default StorageClass
kubectl edit sc nfs-client
# add under annotations
storageclass.kubernetes.io/is-default-class: "true"
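Instead of editing interactively, the same annotation can be applied with a one-line patch (the standard pattern from the Kubernetes documentation):

```shell
# Mark the nfs-client StorageClass as the cluster default.
kubectl patch storageclass nfs-client -p \
  '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```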
9. MySQL Setup
# helm shell completion
source <(helm completion bash)
echo "source <(helm completion bash)" >> ~/.bashrc
tail -1 ~/.bashrc
# install MySQL
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm fetch bitnami/mysql
tar xf mysql-${version}.tgz
cd mysql
vi values.yaml # edit around lines 122 (password), 130 (test), 193 (mysql)
Insert the table data.
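The notes stop before the chart is actually installed. Section 10 reads MYSQL_HOST=mysql.mysql.svc.cluster.local, which implies a release named mysql in a mysql namespace — so the install was presumably:

```shell
# Assumed release name and namespace, inferred from MYSQL_HOST in section 10.
helm install mysql ./mysql -n mysql --create-namespace
```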
10. Deploying the 3-Tier App
backend deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: back
  namespace: bank
spec:
  replicas: 1
  selector:
    matchLabels:
      app: back
  template:
    metadata:
      labels:
        app: back
    spec:
      containers:
      - name: back
        image: readytodev/5team-back:2.0
        ports:
        - containerPort: 8080
        env:
        - name: MYSQL_HOST
          value: mysql.mysql.svc.cluster.local
        - name: MYSQL_PORT
          value: "3306"
        - name: PORT
          value: "8080"
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              name: back-secret
              key: mysql-id
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: back-secret
              key: mysql-pw
        - name: MYSQL_DB
          valueFrom:
            secretKeyRef:
              name: back-secret
              key: MYSQL_DB
        - name: MYSQL_SALT_DB
          valueFrom:
            secretKeyRef:
              name: back-secret
              key: MYSQL_SALT_DB
        - name: NAVER_EMAIL
          valueFrom:
            secretKeyRef:
              name: back-secret
              key: NAVER_EMAIL
        - name: NAVER_EMAIL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: back-secret
              key: NAVER_EMAIL_PASSWORD
        - name: CORS_ORIGIN
          valueFrom:
            secretKeyRef:
              name: back-secret
              key: CORS_ORIGIN
        - name: SESSION_SECRET_KEY
          valueFrom:
            secretKeyRef:
              name: back-secret
              key: SESSION_SECRET_KEY
        - name: reCAPTCHA_SITE_KEY
          valueFrom:
            secretKeyRef:
              name: back-secret
              key: reCAPTCHA_SITE_KEY
        - name: reCAPTCHA_SECRET_KEY
          valueFrom:
            secretKeyRef:
              name: back-secret
              key: reCAPTCHA_SECRET_KEY
vi back-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: bank-back
  namespace: bank
spec:
  selector:
    app: back
  ports:
  - protocol: TCP
    port: 8080        # port exposed externally
    targetPort: 8080  # port inside the container
  type: LoadBalancer
vi front.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: front
  namespace: bank
spec:
  replicas: 1
  selector:
    matchLabels:
      app: front
  template:
    metadata:
      labels:
        app: front
    spec:
      containers:
      - name: front
        image: readytodev/5team-front:2.0
        imagePullPolicy: Always
        ports:
        - containerPort: 80
vi front-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: bank-front
  namespace: bank
spec:
  selector:
    app: front
  ports:
  - protocol: TCP
    port: 3000      # port exposed externally
    targetPort: 80  # port inside the container
  type: LoadBalancer
vi front-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress
  namespace: bank
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: "nginx"
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: bank-front
            port:
              number: 80
Create the Secret with values appropriate to your environment.
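A sketch of creating that Secret with the exact key names the back Deployment references; every value here is a placeholder:

```shell
# All values are placeholders; substitute real credentials.
kubectl create namespace bank 2>/dev/null || true
kubectl create secret generic back-secret -n bank \
  --from-literal=mysql-id='<mysql_user>' \
  --from-literal=mysql-pw='<mysql_password>' \
  --from-literal=MYSQL_DB='<db_name>' \
  --from-literal=MYSQL_SALT_DB='<salt_db_name>' \
  --from-literal=NAVER_EMAIL='<naver_email>' \
  --from-literal=NAVER_EMAIL_PASSWORD='<naver_email_password>' \
  --from-literal=CORS_ORIGIN='<allowed_origin>' \
  --from-literal=SESSION_SECRET_KEY='<session_key>' \
  --from-literal=reCAPTCHA_SITE_KEY='<site_key>' \
  --from-literal=reCAPTCHA_SECRET_KEY='<secret_key>'
```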
11. Monitoring
Installation
Use the Helm chart provided by prometheus-community:
Prometheus + Grafana + Prometheus Rules
k create ns monitoring
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm pull prometheus-community/kube-prometheus-stack
tar xvfz kube-prometheus-stack-${version}.tgz
cd kube-prometheus-stack
vi values.yaml   # change whatever settings you want
helm install prometheus . -n monitoring   # assumed release name "prometheus", matching the Service names below
Ingress setup (can also be done in values.yaml):
prometheus
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prometheus-ingress
  namespace: monitoring
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: "nginx"
  rules:
  - host: prometheus.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: prometheus-kube-prometheus-prometheus
            port:
              number: 9090
grafana
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana-ingress
  namespace: monitoring
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: "nginx"
  rules:
  - host: grafana.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: prometheus-grafana
            port:
              number: 3000
Connect using the host names configured above.
Edit /etc/hosts on your local PC:
<EXTERNAL-IP> prometheus.example.com
<EXTERNAL-IP> grafana.example.com
Use the ingress-nginx controller EXTERNAL-IP (172.16.80.20 in the output above).
If the clocks disagree, configure NTP:
systemctl restart chronyd
Install the adapter needed to consume the collected metrics:
helm install prometheus-adapter prometheus-community/prometheus-adapter -n monitoring
kubectl edit deployment prometheus-adapter -n monitoring
Change the URL to point at the Prometheus Service:
- --prometheus-url=http://prometheus-kube-prometheus-prometheus.monitoring.svc:9090
List the available custom metrics:
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .
12. HPA
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: pod-autoscale
  namespace: bank
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: back
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Pods
    pods:
      metric:
        name: node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate
      target:
        type: AverageValue
        averageValue: 0.4
Pulls the metric from Prometheus and applies it to deployment/back (scales out when average CPU usage per Pod exceeds 0.4 cores).
13. Argo CD
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'
kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath='{.data.password}' | base64 -d
Access Argo CD through the NodePort.
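The secret command above pipes the stored password through base64 -d; a self-contained illustration of that last step with a dummy value:

```shell
# Dummy value for illustration; the real password comes from
# argocd-initial-admin-secret as shown above.
printf 'c2VjcmV0LXBhc3M=' | base64 -d   # prints: secret-pass
```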