Deploying a CI/CD Pipeline on K8s (Part 1): Deploying the k8s Cluster

Architecture Diagram

[architecture diagram image]

Deployment Environment

Hardware Environment

  • i7-12700, 32 GB RAM, 1 TB HDD

Software Environment

  • Virtualization: libvirt 8.0.0
  • Operating system: Deepin 20.9
  • Kernel version: 5.15.77-amd64-desktop

Main Content

Deploy the k8s Cluster

Virtual Machine Deployment

5 virtual machines
OS image: ubuntu-22.04.5-live-server-amd64.iso

VM name   hostname   IP               Spec   Notes
k8s-m1    master1    192.168.122.11   2C4G
k8s-m2    master2    192.168.122.12   2C4G
k8s-m3    master3    192.168.122.13   2C4G
k8s-w1    worker1    192.168.122.21   6C8G
k8s-w2    worker2    192.168.122.22   6C8G
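The VMs can be created with libvirt's virt-install. Below is a minimal sketch for the first node; the ISO path, the 40 GB disk size, and the use of the default NAT network (192.168.122.0/24) are assumptions — adjust vCPU/RAM per node according to the table above.

# create the first VM (k8s-m1); ISO path and disk size are assumptions
virt-install \
    --name k8s-m1 \
    --vcpus 2 \
    --memory 4096 \
    --disk size=40 \
    --cdrom /var/lib/libvirt/images/ubuntu-22.04.5-live-server-amd64.iso \
    --os-variant ubuntu22.04 \
    --network network=default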

Install base packages (run all subsequent commands as root)

apt-get update
apt-get install -y apt-transport-https ca-certificates curl gpg containerd vim

Install the Kubernetes packages

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
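Optionally confirm that the packages installed and are held at v1.30.x before continuing:

kubeadm version
kubectl version --client
kubelet --version
apt-mark showhold   # should list kubeadm, kubectl and kubelet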

Disable swap

sed -i '/swap/d' /etc/fstab
swapoff -a
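A quick check that swap is really off:

swapon --show   # no output means no active swap devices
free -h         # the Swap line should read 0B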

Enable IPv4 forwarding

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
modprobe br_netfilter
echo '1' | sudo tee /proc/sys/net/bridge/bridge-nf-call-iptables
cat <<EOF | sudo tee /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
echo "br_netfilter" >> /etc/modules-load.d/br_netfilter.conf
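To verify the settings took effect (each sysctl should print 1, and the br_netfilter module should be loaded):

sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
lsmod | grep br_netfilter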

Configure the CRI (container runtime)

mkdir -p /etc/containerd/ && containerd config default > /etc/containerd/config.toml
sed -i 's|registry.k8s.io|k8s.dockerproxy.net|g' /etc/containerd/config.toml
sed -i 's|SystemdCgroup = false|SystemdCgroup = true|g' /etc/containerd/config.toml
tee /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: true
EOF
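Before restarting containerd, it is worth confirming that both sed edits actually landed in the config:

# expect the sandbox/pause image to reference k8s.dockerproxy.net and SystemdCgroup = true
grep -n 'k8s.dockerproxy.net\|SystemdCgroup' /etc/containerd/config.toml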

Enable and start the runtime and services

systemctl enable kubelet
systemctl enable containerd
systemctl restart containerd
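A quick health check of the runtime (kubelet will keep restarting until kubeadm init/join runs, which is expected at this point):

systemctl is-active containerd            # should print "active"
systemctl is-enabled kubelet containerd   # both should print "enabled"
crictl info > /dev/null && echo "CRI endpoint reachable"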

Clone the machine four times (clone while powered off), as sketched below
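If the VMs run under libvirt, the offline clones can be made with virt-clone; a sketch using the names from the table above:

# clone the powered-off template VM into the remaining four nodes
virt-clone --original k8s-m1 --name k8s-m2 --auto-clone
virt-clone --original k8s-m1 --name k8s-m3 --auto-clone
virt-clone --original k8s-m1 --name k8s-w1 --auto-clone
virt-clone --original k8s-m1 --name k8s-w2 --auto-clone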

Configure each VM's IP address

sudo vim /etc/cloud/cloud.cfg.d/90-installer-network.cfg
Contents:
# This is the network config written by 'subiquity'
network:
  ethernets:
    ens3:
      addresses:
        - 192.168.122.11/24
      nameservers:
        addresses:
          - 114.114.114.114
        search: []
      routes:
        - to: default
          via: 192.168.122.1
  version: 2
Note: the change takes effect after a reboot. Do not edit the files under /etc/netplan/ directly — cloud-init regenerates them, so those edits are lost on reboot. Here 192.168.122.11 is each node's own IP; make sure you do not set the same address on every node.
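After the reboot, confirm each node came up with its own address and default gateway, e.g. on master1:

ip addr show ens3         # should show 192.168.122.11/24 (the node's own IP)
ip route | grep default   # should point at 192.168.122.1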

Set the hostname

echo "master1" > /etc/hostname # master1 各个节点的 hostname 值,请看上面的虚拟机部署配置表格

Configure static name resolution

echo "192.168.122.11 k8s.ljtian.com" >> /etc/hosts
Note: point this at the master1 node IP for now; it will be switched to the kube-vip IP later. kube-vip is a VIP (virtual IP) + LB (load balancer) component that provides high availability for the k8s API.
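Verify the entry resolves before moving on:

getent hosts k8s.ljtian.com   # should print 192.168.122.11
ping -c 2 k8s.ljtian.com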

Pull the images

kubeadm config images pull \
    --image-repository=registry.aliyuncs.com/google_containers \
    --kubernetes-version="v1.30.6"
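To see exactly which images kubeadm expects for this version, and to confirm they are now cached locally:

kubeadm config images list \
    --image-repository=registry.aliyuncs.com/google_containers \
    --kubernetes-version=v1.30.6
crictl images   # the listed images should appear here after the pull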

Reboot the virtual machine

reboot

Bring up the first control plane (run this only on the master1 node)

kubeadm init \
    --control-plane-endpoint="k8s.ljtian.com:6443" \
    --kubernetes-version=v1.30.6 \
    --pod-network-cidr=10.244.0.0/16 \
    --service-cidr=10.96.0.0/12 \
    --image-repository=registry.aliyuncs.com/google_containers \
    --upload-certs

Output:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join k8s.ljtian.com:6443 --token brxssc.gwje8ft83cgh78op \
    --discovery-token-ca-cert-hash sha256:4235d1dbe303ad642358d2c5fd1780f50c468b869948877d82ac11a12b3df3e3 \
    --control-plane --certificate-key 0e02922c4d67183a79d42d4b14d379518e183cbf374e47028e5e08492a1cd6eb

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s.ljtian.com:6443 --token brxssc.gwje8ft83cgh78op \
    --discovery-token-ca-cert-hash sha256:4235d1dbe303ad642358d2c5fd1780f50c468b869948877d82ac11a12b3df3e3

Configure default kubectl access as described in the output

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run kubectl get pod -A to check the deployment status.

Using the output above, run the matching join command on each of the remaining nodes

How a master node joins:
# You can now join any number of the control-plane node running the following command on each as root:
kubeadm join k8s.ljtian.com:6443 --token brxssc.gwje8ft83cgh78op \
    --discovery-token-ca-cert-hash sha256:4235d1dbe303ad642358d2c5fd1780f50c468b869948877d82ac11a12b3df3e3 \
    --control-plane --certificate-key 0e02922c4d67183a79d42d4b14d379518e183cbf374e47028e5e08492a1cd6eb
How a worker node joins:
# Then you can join any number of worker nodes by running the following on each as root:
kubeadm join k8s.ljtian.com:6443 --token brxssc.gwje8ft83cgh78op \
    --discovery-token-ca-cert-hash sha256:4235d1dbe303ad642358d2c5fd1780f50c468b869948877d82ac11a12b3df3e3
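As the init output notes, the uploaded certificates are deleted after two hours, and the bootstrap token also expires eventually. If either has lapsed before all nodes are joined, fresh values can be generated on master1:

kubeadm token create --print-join-command        # prints a new worker join command
kubeadm init phase upload-certs --upload-certs   # prints a new --certificate-key for control-plane joins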

Add the CNI Plugin

Download the YAML file

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Rewrite the image registry

sed -i 's|docker.io|dockerproxy.net|g' kube-flannel.yml

Apply the CNI

kubectl apply -f kube-flannel.yml

Check the CNI status

kubectl get pod -n kube-flannel   # with 5 nodes there should be 5 pods, all in Running state

Output:

NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-5b6nl   1/1     Running   0          4h37m
kube-flannel-ds-l7l8m   1/1     Running   0          4h28m
kube-flannel-ds-qgwqb   1/1     Running   0          4h37m
kube-flannel-ds-qtgdw   1/1     Running   0          4h37m
kube-flannel-ds-tntlh   1/1     Running   0          4h28m
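Once the flannel pods are all Running, the nodes themselves should also report Ready:

kubectl get nodes -o wide   # all 5 nodes should be in Ready state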

Deploy the VIP (run the following on the master1 node)

Confirm the cluster is in a healthy state

kubectl get pod -A

Output:

NAMESPACE      NAME                               READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-5b6nl              1/1     Running   0          4h37m
kube-flannel   kube-flannel-ds-l7l8m              1/1     Running   0          4h28m
kube-flannel   kube-flannel-ds-qgwqb              1/1     Running   0          4h37m
kube-flannel   kube-flannel-ds-qtgdw              1/1     Running   0          4h37m
kube-flannel   kube-flannel-ds-tntlh              1/1     Running   0          4h28m
kube-system    coredns-cb4864fb5-d5qqj            1/1     Running   0          4h42m
kube-system    coredns-cb4864fb5-klsj9            1/1     Running   0          4h42m
kube-system    etcd-master1                       1/1     Running   1          4h42m
kube-system    etcd-master2                       1/1     Running   0          4h40m
kube-system    etcd-master3                       1/1     Running   0          4h40m
kube-system    kube-apiserver-master1             1/1     Running   1          4h42m
kube-system    kube-apiserver-master2             1/1     Running   0          4h40m
kube-system    kube-apiserver-master3             1/1     Running   0          4h40m
kube-system    kube-controller-manager-master1    1/1     Running   1          4h42m
kube-system    kube-controller-manager-master2    1/1     Running   0          4h40m
kube-system    kube-controller-manager-master3    1/1     Running   0          4h40m
kube-system    kube-proxy-9p6mf                   1/1     Running   0          4h28m
kube-system    kube-proxy-p49k8                   1/1     Running   0          4h42m
kube-system    kube-proxy-szgqx                   1/1     Running   0          4h28m
kube-system    kube-proxy-xjng9                   1/1     Running   0          4h40m
kube-system    kube-proxy-zjr2z                   1/1     Running   0          4h40m
kube-system    kube-scheduler-master1             1/1     Running   1          4h42m
kube-system    kube-scheduler-master2             1/1     Running   0          4h40m
kube-system    kube-scheduler-master3             1/1     Running   0          4h40m

Pull the image

ctr image pull dockerproxy.net/plndr/kube-vip:v0.7.2

Set the virtual IP and the interface name

export VIP=192.168.122.100
export INTERFACE=ens3

Generate the static pod manifest and wait for it to be deployed

ctr run --rm --net-host dockerproxy.net/plndr/kube-vip:v0.7.2 vip \
    /kube-vip manifest pod \
    --interface $INTERFACE \
    --vip $VIP \
    --controlplane \
    --services \
    --arp \
    --leaderElection | tee /etc/kubernetes/manifests/kube-vip.yaml
Note: version 0.7.2 is used here. Versions 0.8 and above have a subnet-handling issue: if the 192.168.122.0/24 subnet is not configured, the network becomes unreachable, and the upstream docs do not yet explain how to pass that parameter, so I am not using those versions.
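Since this is a static pod, the kubelet picks the manifest up automatically. A quick sanity check on the node where it was generated:

ls -l /etc/kubernetes/manifests/kube-vip.yaml
ip addr show ens3 | grep 192.168.122.100   # present only on the node currently holding the VIP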

Check the deployment

kubectl get pod -n kube-system | grep '^kube-vip-'

Output:

kube-vip-master1    1/1     Running   0          4h37m
kube-vip-master2    1/1     Running   0          4h36m
kube-vip-master3    1/1     Running   0          4h36m

Update the static name resolution

sed -i 's|192.168.122.11|192.168.122.100|' /etc/hosts

Verify the VIP

kubectl get pod -A   # if this still connects, the VIP is working
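An optional extra check that the API is actually being reached through the virtual IP:

ping -c 2 192.168.122.100
kubectl cluster-info   # the control plane URL should show https://k8s.ljtian.com:6443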

 
 
End of Part 1