
Deploying a K8s Cluster with Ansible

This post documents how Debian systems were configured with Ansible to successfully deploy a Docker-based K8s v1.30.3 cluster.
Follow-up: Using containerd as the K8s Runtime

Kubernetes, also known as K8s, is an open-source system for automating the deployment, scaling, and management of containerized applications.
A K8s cluster typically consists of one master and several nodes. The master does not run workload containers; it only provides the API entry point and stores the cluster's state.
Containers mainly run on the nodes and are scheduled there by the master.
Building a K8s cluster involves configuring static IPs on multiple machines, installing the K8s management tools and a container runtime, and pulling the required images. To avoid repeating these steps by hand, Ansible can carry them out as repeatable, consistent batch operations.

Ansible is an open-source automation tool for configuration management, application deployment, and task automation.
No agent needs to be installed on the managed hosts; they only need an SSH service and a Python environment. The control node copies the appropriate Python scripts (modules) to the managed hosts and runs them according to the automation scripts (playbooks).

The experiment uses four Debian systems: one runs Ansible, one serves as the K8s master node, and the remaining two act as K8s worker nodes.
The installation image is debian-live-12.6.0-amd64-standard.iso. Take care not to create a swap partition during installation; after installing, log in to confirm the IP address and start the SSH service.
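If an existing installation already has swap enabled, it should be turned off before the machine joins the cluster, since kubelet expects swap to be disabled by default. A minimal check and fix, assuming the usual swapoff/fstab approach:

# should print nothing if swap is off
swapon --show
# otherwise, disable any active swap and comment out the fstab entries so it stays off after reboot
sudo swapoff -a
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab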

Configuring the Ansible Environment

After installing Ansible on the control machine, add the following to ~/.ansible.cfg:

[defaults]
inventory = ~/.ansible/inventory

This tells Ansible to look up managed-host information in ~/.ansible/inventory, whose content is:

[Master]
node1 ansible_host=192.168.1.201

[Worker]
node2 ansible_host=192.168.1.202
node3 ansible_host=192.168.1.203

[all:vars]
; default user name
ansible_user=k8s
; default password
; ansible_ssh_pass=k8s
; default private key
ansible_ssh_private_key_file=~/.ssh/k8s_key

Adjust the configuration to match your environment, then run ansible -m ping all to check that the hosts are reachable.
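The inventory above refers to the key pair ~/.ssh/k8s_key; if it does not exist yet, it can be generated on the control machine, for example:

# generate an ed25519 key pair at the path referenced by ansible_ssh_private_key_file
ssh-keygen -t ed25519 -f ~/.ssh/k8s_key -N ""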

Configuring the System

Create the file 00-sudo-static-ip.yml:

---
- hosts: all
  become: true
  become_method: su
  become_user: root

  vars:
    user_name: k8s
    ip_addresses:
      node1: 192.168.1.201
      node2: 192.168.1.202
      node3: 192.168.1.203

  tasks:
    - name: Add the SSH public key for the user
      authorized_key:
        user: "{{ user_name }}"
        key: "{{ lookup('file', '~/.ssh/k8s_key.pub') }}"

    - name: Allow the user to use sudo without a password
      copy:
        dest: /etc/sudoers.d/k8s
        content: "{{ user_name }} ALL=(ALL) NOPASSWD:ALL"

    - name: Update /etc/hosts with node IP and hostname mappings
      blockinfile:
        path: /etc/hosts
        block: |
          192.168.1.201 node1
          192.168.1.202 node2
          192.168.1.203 node3

    - name: Remove any ens33 configuration from /etc/network/interfaces
      lineinfile:
        path: /etc/network/interfaces
        state: absent
        regexp: "^.*ens33"

    - name: Create a static IP configuration for the ens33 interface
      copy:
        dest: /etc/network/interfaces.d/ens33
        content: |
          allow-hotplug ens33
          iface ens33 inet static
          address {{ ip_addresses[inventory_hostname] }}
          netmask 255.255.255.0
          gateway 192.168.1.1
          dns-nameservers 119.29.29.29

On a freshly installed system, no static IP is configured and the user has no sudo privileges.
This playbook sets both up to simplify later management.
Because the user does not yet have sudo when this playbook runs, privilege escalation is done via su to root: run ansible-playbook 00-sudo-static-ip.yml -K and enter the root password when prompted.

After that finishes, create 01-system-config.yml:

---
- hosts: all
  become: true

  handlers:
    - name: Update GRUB configuration
      command: update-grub

  tasks:
    - name: Set GRUB timeout to 0
      lineinfile:
        path: /etc/default/grub
        regexp: "^GRUB_TIMEOUT="
        line: "GRUB_TIMEOUT=0"
      notify: Update GRUB configuration

    - name: Configure SSH to disable password authentication
      copy:
        dest: /etc/ssh/sshd_config.d/sshd_nopass.conf
        content: |
          PasswordAuthentication no
          PermitEmptyPasswords no
          ChallengeResponseAuthentication no
          X11Forwarding no
          PrintMotd no
          ClientAliveInterval 120

    - name: Set the APT sources to the Tsinghua University mirror
      copy:
        dest: /etc/apt/sources.list
        content: |
          deb https://mirrors.tuna.tsinghua.edu.cn/debian/ stable main contrib non-free non-free-firmware
          deb https://mirrors.tuna.tsinghua.edu.cn/debian/ stable-updates main contrib non-free non-free-firmware
          deb https://mirrors.tuna.tsinghua.edu.cn/debian/ stable-backports main contrib non-free non-free-firmware
          deb https://mirrors.tuna.tsinghua.edu.cn/debian-security stable-security main contrib non-free non-free-firmware

    - name: Clean and update the APT cache and packages
      apt:
        autoclean: true
        autoremove: true
        clean: true
        update_cache: true
        upgrade: full

    - name: Install required packages
      apt:
        name:
          - apt-transport-https
          - ca-certificates
          - curl
          - gpg
          - vim
          - wget
          - systemd-timesyncd

    - name: Enable IPv4 forwarding
      sysctl:
        name: net.ipv4.ip_forward
        value: "1"
        sysctl_set: true
        sysctl_file: /etc/sysctl.d/k8s.conf
        reload: true

Run it with ansible-playbook 01-system-config.yml.
The previous playbook configured the static IPs, and this one restricts SSH to key-based logins; a reboot is needed for the changes to take effect.
You can reboot all hosts with ansible all -m shell -a "sudo reboot".
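Alternatively, Ansible's built-in reboot module restarts the hosts and waits until they come back, which makes it easier to tell when they are ready again. A sketch of the equivalent ad-hoc call:

# -b escalates with sudo; the reboot module blocks until each host is reachable again
ansible all -b -m ansible.builtin.reboot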

Installing K8s and the Container Runtime

Since version 1.24, Dockershim has been removed from the Kubernetes project. K8s can no longer use Docker directly as its runtime; the cri-dockerd component is required.
See the details here: Container Runtimes
Besides Docker, containerd is also available on Debian, but the version in the apt repositories did not seem to work properly, so Docker + cri-dockerd is used here.
After further testing, containerd does work fine; it just needs to be initialized and its configuration file adjusted.
Follow-up: Using containerd as the K8s Runtime

Run the following playbook:

---
- hosts: all
  become: true
  tasks:
    - name: Create the APT keyring directory
      file:
        path: /etc/apt/keyrings
        state: directory
        mode: "0755"

    - name: Add the Kubernetes APT key
      apt_key:
        url: https://mirrors.tuna.tsinghua.edu.cn/kubernetes/core:/stable:/v1.30/deb/Release.key
        keyring: /etc/apt/keyrings/kubernetes-apt-keyring.gpg

    - name: Add the Kubernetes APT repository
      apt_repository:
        repo: "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.tuna.tsinghua.edu.cn/kubernetes/core:/stable:/v1.30/deb/ /"
        filename: kubernetes

    - name: Install the Kubernetes components
      apt:
        name:
          - kubelet
          - kubeadm
          - kubectl
        update_cache: true

    - name: Set the Kubernetes packages to hold
      dpkg_selections:
        name: "{{ item }}"
        selection: hold
      loop:
        - kubelet
        - kubeadm
        - kubectl

    - name: Add the Docker APT key
      apt_key:
        url: https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/debian/gpg
        keyring: /etc/apt/keyrings/docker.gpg

    - name: Add the Docker APT repository
      apt_repository:
        repo: "deb [signed-by=/etc/apt/keyrings/docker.gpg] https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/debian bookworm stable"
        filename: docker

    - name: Install Docker CE
      apt:
        name: docker-ce
        update_cache: true

    - name: Install cri-dockerd
      apt:
        deb: https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.15/cri-dockerd_0.3.15.3-0.debian-bookworm_amd64.deb

Preparations Before Creating the K8s Cluster

K8s is a container orchestration system: both creating a cluster and joining one require running certain containers in the container runtime. When creating or joining a cluster, K8s first asks the runtime to pull the corresponding images.
If the machines do not have stable internet access, cluster setup will get stuck at the image-pulling step.
Options include using a reliable registry mirror, configuring a proxy for the container runtime, or importing the images from files.

Configuring a Proxy for Docker

If the LAN has an HTTP proxy with stable internet access, the following playbook configures the Docker service to use it:

---
- hosts: all
  become: true

  vars:
    # Address of the HTTP proxy
    proxy_address: http://192.168.1.100:8080
    service_override_dir: /etc/systemd/system/docker.service.d
    service_override_file: http-proxy.conf

  handlers:
    - name: Restart the Docker service
      systemd_service:
        state: restarted
        daemon_reload: true
        name: docker

  tasks:
    - name: Create the Docker service override directory
      file:
        path: "{{ service_override_dir }}"
        state: directory
        mode: "0755"

    - name: Check whether the Docker service override file exists
      stat:
        path: "{{ service_override_dir }}/{{ service_override_file }}"
      register: file_stat

    - name: Create the HTTP proxy configuration file for the Docker service
      when: not file_stat.stat.exists
      copy:
        dest: "{{ service_override_dir }}/{{ service_override_file }}"
        content: |
          [Service]
          Environment="HTTP_PROXY={{ proxy_address }}"
          Environment="HTTPS_PROXY={{ proxy_address }}"
      notify: Restart the Docker service

    - name: Remove the HTTP proxy configuration file for the Docker service
      when: file_stat.stat.exists
      file:
        path: "{{ service_override_dir }}/{{ service_override_file }}"
        state: absent
      notify: Restart the Docker service

Running this playbook the first time configures http://192.168.1.100:8080 as the HTTP proxy for the Docker service; running it again removes the configuration.
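To confirm whether the drop-in is currently active, you can query the service environment on all hosts, for example:

# expect HTTP_PROXY/HTTPS_PROXY in the output while the proxy configuration is applied
ansible all -b -m command -a "systemctl show docker -p Environment"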

Importing Images from Files

If the above approach is not available, you can pull the required images with Docker on a machine that does have stable internet access, export them to files, and have the target machines import them, so that each machine does not have to download everything itself.
Use sudo kubeadm config images list to list the images K8s needs:

registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.12-0

After the cluster is created, a Container Network Interface (CNI) and a corresponding network plugin also need to be installed.
flannel is used here; inspecting its installation manifest kube-flannel.yml shows that the following images are required:

docker.io/flannel/flannel:v0.25.5
docker.io/flannel/flannel-cni-plugin:v1.5.1-flannel1

Pull the images and export them to files:

# K8s images
docker pull registry.k8s.io/kube-apiserver:v1.30.3
docker pull registry.k8s.io/kube-controller-manager:v1.30.3
docker pull registry.k8s.io/kube-scheduler:v1.30.3
docker pull registry.k8s.io/kube-proxy:v1.30.3
docker pull registry.k8s.io/coredns/coredns:v1.11.1
docker pull registry.k8s.io/pause:3.9
docker pull registry.k8s.io/etcd:3.5.12-0
# flannel images
docker pull docker.io/flannel/flannel:v0.25.5
docker pull docker.io/flannel/flannel-cni-plugin:v1.5.1-flannel1
# save images to tar file
docker save -o kube-apiserver.tar registry.k8s.io/kube-apiserver:v1.30.3
docker save -o kube-controller-manager.tar registry.k8s.io/kube-controller-manager:v1.30.3
docker save -o kube-scheduler.tar registry.k8s.io/kube-scheduler:v1.30.3
docker save -o kube-proxy.tar registry.k8s.io/kube-proxy:v1.30.3
docker save -o coredns.tar registry.k8s.io/coredns/coredns:v1.11.1
docker save -o pause.tar registry.k8s.io/pause:3.9
docker save -o etcd.tar registry.k8s.io/etcd:3.5.12-0
docker save -o flannel.tar docker.io/flannel/flannel:v0.25.5
docker save -o flannel-cni-plugin.tar docker.io/flannel/flannel-cni-plugin:v1.5.1-flannel1

Next, transfer the exported files to the target machines, or place them on any HTTP server. Here miniserve provides the HTTP service so that the target machines can fetch the files by URL.
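For reference, serving the directory of .tar files with miniserve might look like this (the directory path is just a placeholder; port 80 matches the URL used in the playbook below):

# serve the exported images over HTTP on port 80 (binding to port 80 requires root)
sudo miniserve --port 80 ./k8s-images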
Run the following playbook to import the image files:

---
- hosts: all
  become: true

  vars:
    # URL serving the image files
    images_file_src: http://192.168.1.100

  tasks:
    # A cleaner approach would be the docker_image module, but it cannot load tar files from a URL
    - name: Download and load Docker images from the URL
      command: sh -c "curl -sSL {{ images_file_src }}/{{ item }} | docker load"
      loop:
        - kube-apiserver.tar
        - kube-controller-manager.tar
        - kube-scheduler.tar
        - kube-proxy.tar
        - coredns.tar
        - pause.tar
        - etcd.tar
        - flannel.tar
        - flannel-cni-plugin.tar

Once the images have been imported, the cluster can be created.
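As noted in the playbook comment, the docker_image module cannot load a tar file straight from a URL, but it can load one that has already been downloaded. A rough sketch of that alternative for a single image, assuming the community.docker collection is installed (the other images would be handled the same way):

# download the archive first, then load it with docker_image (source: load)
- name: Download the pause image archive
  get_url:
    url: "{{ images_file_src }}/pause.tar"
    dest: /tmp/pause.tar

- name: Load the pause image from the downloaded archive
  community.docker.docker_image:
    name: registry.k8s.io/pause
    tag: "3.9"
    source: load
    load_path: /tmp/pause.tar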

Initializing the K8s Cluster

With the environment ready, the K8s cluster can be initialized.
With Docker + cri-dockerd installed, kubeadm detects multiple runtime sockets, so the --cri-socket unix:///var/run/cri-dockerd.sock argument is needed to select cri-dockerd.
flannel is used as the network plugin, and it requires the podCIDR range to be set accordingly, so also add the --pod-network-cidr=10.244.0.0/16 argument.

If you went with the proxy approach, you can pull the images before initialization with sudo kubeadm config images pull.
Since the file-import approach is used here and the images are already in place, initialization can start right away.
The complete output:

$ sudo kubeadm init --cri-socket unix:///var/run/cri-dockerd.sock --pod-network-cidr=10.244.0.0/16

[init] Using Kubernetes version: v1.30.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local node1] and IPs [10.96.0.1 192.168.1.201]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost node1] and IPs [192.168.1.201 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost node1] and IPs [192.168.1.201 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.00300509s
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is healthy after 5.003449169s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node node1 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 3g0usv.klakwqbix8lg32xz
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.201:6443 --token 3g0usv.klakwqbix8lg32xz \
--discovery-token-ca-cert-hash sha256:36c362932da0dc232a42535bac9a79c214231b62c41471067c733320f58ef9d0

With the container images prepared in advance, initializing the K8s cluster took only about a minute.
As the output suggests, run the following on the machine where the cluster was initialized:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Only after this is configured can kubectl be used on this machine to manage the cluster.

Install the flannel plugin:

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

The other machines, serving as worker nodes, join the cluster with:

sudo kubeadm join 192.168.1.201:6443 --token 3g0usv.klakwqbix8lg32xz \
--discovery-token-ca-cert-hash sha256:36c362932da0dc232a42535bac9a79c214231b62c41471067c733320f58ef9d0 \
--cri-socket unix:///var/run/cri-dockerd.sock

Note that the runtime-selection argument is also required here.
After a few minutes, once everything has synced, check the status of the cluster and its containers:

$ kubectl get nodes
NAME    STATUS   ROLES           AGE     VERSION
node1   Ready    control-plane   9m26s   v1.30.3
node2   Ready    <none>          4m1s    v1.30.3
node3   Ready    <none>          4m2s    v1.30.3

$ kubectl get pods -A -o wide
NAMESPACE      NAME                            READY   STATUS    RESTARTS   AGE     IP              NODE    NOMINATED NODE   READINESS GATES
kube-flannel   kube-flannel-ds-7hbv4           1/1     Running   0          2m10s   192.168.1.203   node3   <none>           <none>
kube-flannel   kube-flannel-ds-fkcmv           1/1     Running   0          2m10s   192.168.1.202   node2   <none>           <none>
kube-flannel   kube-flannel-ds-mz5g9           1/1     Running   0          2m10s   192.168.1.201   node1   <none>           <none>
kube-system    coredns-7db6d8ff4d-h5kqc        1/1     Running   0          10m     10.244.0.3      node1   <none>           <none>
kube-system    coredns-7db6d8ff4d-zw746        1/1     Running   0          10m     10.244.0.2      node1   <none>           <none>
kube-system    etcd-node1                      1/1     Running   0          10m     192.168.1.201   node1   <none>           <none>
kube-system    kube-apiserver-node1            1/1     Running   0          10m     192.168.1.201   node1   <none>           <none>
kube-system    kube-controller-manager-node1   1/1     Running   0          10m     192.168.1.201   node1   <none>           <none>
kube-system    kube-proxy-2mqrv                1/1     Running   0          4m59s   192.168.1.202   node2   <none>           <none>
kube-system    kube-proxy-6qqbc                1/1     Running   0          10m     192.168.1.201   node1   <none>           <none>
kube-system    kube-proxy-rtzhn                1/1     Running   0          5m      192.168.1.203   node3   <none>           <none>
kube-system    kube-scheduler-node1            1/1     Running   0          10m     192.168.1.201   node1   <none>           <none>

All nodes and Pods are working properly. With that, the cluster deployment is complete.
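As an optional smoke test, a small deployment can confirm that Pods get scheduled onto the worker nodes; a minimal sketch (the nginx-test name is just an example):

# create a two-replica nginx deployment and see which nodes the Pods land on
kubectl create deployment nginx-test --image=nginx:alpine --replicas=2
kubectl get pods -l app=nginx-test -o wide
# remove the test deployment afterwards
kubectl delete deployment nginx-test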