Building a multi-master, multi-worker Kubernetes cluster with Ansible
Source: cnblogs  Author: Baocang  Date: 2023/12/5 12:08:03


Kubernetes + Istio is currently one of the most powerful, and also easiest to use, service mesh solutions. To use Kubernetes + Istio, you first need a Kubernetes cluster. There are many ways to build one; among them, automating the setup with Ansible is particularly convenient and reliable.

Server list

VIP 192.168.2.111

HOST ROLE IP CPU MEMORY
k8s-lvs-01 LVS MASTER 192.168.2.58 2C 4G
k8s-lvs-02 LVS BACKUP 192.168.2.233 2C 4G
k8s-main-01 K8S MASTER 192.168.2.85 4C 8G
k8s-main-02 K8S MASTER 192.168.2.155 4C 8G
k8s-main-03 K8S MASTER 192.168.2.254 4C 8G
k8s-node-01 K8S WORKER 192.168.2.110 4C 8G
k8s-node-02 K8S WORKER 192.168.2.214 4C 8G
k8s-node-03 K8S WORKER 192.168.2.36 4C 8G

1. Install Ansible on the workstation

GitHub: https://github.com/ansible/ansible

1.1. Installing Ansible on Linux

Before installing, update the apt package index:

  sudo apt-get update

Install Ansible:

  sudo apt-get install ansible

To manage the cluster with password-based SSH authentication, Ansible requires sshpass; if you use key-based authentication instead, you can skip it:

  sudo apt-get install sshpass

If apt cannot find ansible or sshpass, configure the http://mirrors.aliyun.com/ubuntu mirror and try again; see https://developer.aliyun.com/mirror/ubuntu/ for reference.

1.2. Installing Ansible on macOS

After installing Xcode from the App Store, run:

  xcode-select --install

Install Homebrew:

  /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Install Ansible:

  brew install --verbose ansible

To install sshpass, you need the baocang/delicious tap:

  brew tap baocang/delicious

Install sshpass:

  brew install --verbose sshpass

sshpass is open source; the baocang/delicious tap builds it from source.

1.3. Writing the Ansible hosts file

Edit hosts.ini with your preferred text editor and enter the following:

  [all:vars]
  kubernetes_vip=192.168.2.111
  keepalived_master_ip=192.168.2.58

  [lvs]
  k8s-lvs-01 ansible_host=192.168.2.58 ansible_ssh_port=22 ansible_ssh_user=ansible ansible_ssh_pass="P@ssw0rd" ansible_sudo_pass="P@ssw0rd"
  k8s-lvs-02 ansible_host=192.168.2.233 ansible_ssh_port=22 ansible_ssh_user=ansible ansible_ssh_pass="P@ssw0rd" ansible_sudo_pass="P@ssw0rd"

  [main]
  k8s-main-01 ansible_host=192.168.2.85 ansible_ssh_port=22 ansible_ssh_user=ansible ansible_ssh_pass="P@ssw0rd" ansible_sudo_pass="P@ssw0rd"

  [masters]
  k8s-main-02 ansible_host=192.168.2.155 ansible_ssh_port=22 ansible_ssh_user=ansible ansible_ssh_pass="P@ssw0rd" ansible_sudo_pass="P@ssw0rd"
  k8s-main-03 ansible_host=192.168.2.254 ansible_ssh_port=22 ansible_ssh_user=ansible ansible_ssh_pass="P@ssw0rd" ansible_sudo_pass="P@ssw0rd"

  [workers]
  k8s-node-01 ansible_host=192.168.2.110 ansible_ssh_port=22 ansible_ssh_user=ansible ansible_ssh_pass="P@ssw0rd" ansible_sudo_pass="P@ssw0rd"
  k8s-node-02 ansible_host=192.168.2.214 ansible_ssh_port=22 ansible_ssh_user=ansible ansible_ssh_pass="P@ssw0rd" ansible_sudo_pass="P@ssw0rd"
  k8s-node-03 ansible_host=192.168.2.36 ansible_ssh_port=22 ansible_ssh_user=ansible ansible_ssh_pass="P@ssw0rd" ansible_sudo_pass="P@ssw0rd"

  [kubernetes:children]
  main
  masters
  workers

If a server's SSH port is 22, ansible_ssh_port can be omitted.

The file above uses [lvs], [main], [masters], and [workers] to group the servers. Each host line follows this format:

hostname, IP address, SSH port, SSH user, SSH login password, sudo password

[kubernetes:children] merges the [main], [masters], and [workers] groups into a single group named kubernetes.

There is also an implicit group named all that contains every server in the inventory.

[all:vars] defines variables: here kubernetes_vip stores the VIP, and keepalived_master_ip stores the IP of the keepalived MASTER node.
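Before writing any playbooks, the inventory can be sanity-checked with Ansible's built-in ad-hoc tools. A minimal sketch (guarded so it degrades to a message when ansible or hosts.ini is not present):

```shell
# Show the group tree and ping every host over SSH; both commands are read-only.
if command -v ansible >/dev/null 2>&1 && [ -f hosts.ini ]; then
    ansible-inventory -i hosts.ini --graph   # prints the [lvs], [main], [masters], [workers], [kubernetes] tree
    ansible -i hosts.ini all -m ping         # connects to each host and runs the ping module
    inv_check=ran
else
    echo "ansible or hosts.ini not available; skipping inventory check"
    inv_check=skipped
fi
```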

1.4. Writing a simple Ansible playbook

This playbook creates a file named .test.txt on each server in the lvs group.

Later playbooks will not repeat the file name; the complete playbook can be found at the end of this article.

File name: demo-anisble-playbook.yml

  ---
  - name: Demo
    hosts: lvs
    become: yes
    tasks:
      - name: Write current user to file named .test.txt
        shell: |
          echo `whoami` > .test.txt

Then run:

  ansible-playbook -i hosts.ini demo-anisble-playbook.yml

Which produces output like:

  PLAY [Demo] *********************************************************************************
  TASK [Gathering Facts] **********************************************************************
  ok: [k8s-lvs-02]
  ok: [k8s-lvs-01]
  TASK [Write current user to file named .test.txt] *******************************************
  changed: [k8s-lvs-01]
  changed: [k8s-lvs-02]
  PLAY RECAP **********************************************************************************
  k8s-lvs-01 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
  k8s-lvs-02 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

After the run, both lvs servers have a file named .test.txt containing root, because become: yes runs tasks with sudo by default.

To clean up, change echo `whoami` > .test.txt to rm -rf .test.txt and run the playbook again.
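Alternatively, the cleanup can be done without editing the playbook, using an Ansible ad-hoc command with the file module (a sketch; guarded so it is a no-op when ansible or hosts.ini is absent):

```shell
# Remove .test.txt from every host in the lvs group without writing a playbook.
# -b escalates with sudo (become), mirroring become: yes in the playbook.
if command -v ansible >/dev/null 2>&1 && [ -f hosts.ini ]; then
    ansible lvs -i hosts.ini -b -m file -a "path=.test.txt state=absent"
    adhoc=ran
else
    echo "ansible or hosts.ini not available; skipping"
    adhoc=skipped
fi
```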

2. Configuring the LVS servers

2.1. What is a VIP

A VIP (Virtual IP Address) is an IP address that is not bound directly to one specific network interface. Across many network services and applications, a VIP is typically used for high availability, load balancing, and failover.

How VIPs are used in different scenarios:

  1. Load balancing
    • In a load balancer such as LVS, the VIP is the single externally visible entry point. Clients connect to the VIP, and the load balancer distributes requests across the backend servers.
    • Even if one backend server fails, the load balancer keeps the service available by redirecting traffic to the remaining healthy servers.
  2. High-availability clusters
    • In a high-availability (HA) cluster, the VIP can migrate between nodes, usually together with a failure-detection mechanism.
    • When the primary node fails, the VIP moves quickly to a standby node, providing seamless failover.
  3. Network virtualization
    • In virtualized networks, a VIP abstracts away the actual network addresses of physical servers.
    • This allows virtual machines or services to migrate flexibly between physical servers.

Advantages of a VIP:

  • Flexibility: services can move between servers or network devices without changing client configuration.
  • Availability: combined with failure detection and failover, a VIP improves service availability and reliability.
  • Simpler management: a single external entry point simplifies network administration and client configuration.

Caveats:

  • A VIP must be configured and managed correctly to work in the intended network scenario.
  • Some setups require attention to security and access control to prevent unauthorized access.
  • Configuring and managing VIPs usually requires networking expertise.

2.2. LVS forwarding modes

Linux Virtual Server (LVS) offers several forwarding modes, each with its own use cases and way of handling traffic:

  1. NAT (Network Address Translation) mode
    • The LVS server rewrites the source or destination IP addresses of packets passing through it.
    • Backend servers see the LVS server's address as the source of client requests.
    • Replies go back through the LVS server, which then forwards them to the original client.
  2. DR (Direct Routing) mode
    • The destination IP of client packets is left unchanged and routed directly to a backend server.
    • Backend servers reply directly to the client, bypassing the LVS server.
    • The source IP address is preserved.
  3. TUN (IP Tunneling) mode
    • Packets are encapsulated in an IP tunnel and sent to the backend server.
    • The backend server decapsulates the packet and replies directly to the client.
    • The original packet's source and destination IPs are preserved.
  4. Masquerade (Masq) mode
    • A NAT variant in which the LVS server replaces a packet's source IP with its own.
    • Backend servers see the LVS server's IP rather than the client's.
    • Replies return to the LVS server first, which forwards them on to the client.

Each mode has its own use cases and strengths, and the choice depends on requirements such as whether the source IP must be preserved, the network topology, and performance. DR and TUN generally perform better because they take traffic load off the LVS server, but they may require more network configuration. NAT and Masquerade are easier to set up, yet can cost performance and do not preserve the original source IP.

2.3. LVS load-balancing algorithms

Linux Virtual Server (LVS) provides a range of scheduling algorithms that decide how incoming requests are distributed across the backend servers. Common ones include:

  1. Round Robin (RR)
    • Requests are assigned to each server in turn.
    • When the end of the list is reached, scheduling starts over from the beginning.
    • Suitable when all servers have similar capacity.
  2. Weighted Round Robin (WRR)
    • Like round robin, but each server has a weight.
    • Servers with higher capacity receive proportionally more requests.
    • Suitable for servers of unequal capacity.
  3. Least Connections (LC)
    • New requests go to the server with the fewest active connections.
    • Distributes load more fairly, especially when session lengths vary.
  4. Weighted Least Connections (WLC)
    • Combines least connections with server weights.
    • Chooses the server with the lowest connections-to-weight ratio.
    • Better suited to server pools with uneven capacity.
  5. Locality-Based Least Connections (LBLC)
    • Aimed at applications with session affinity or persistence.
    • Tries to send requests from the same client to the same server.
  6. Locality-Based Least Connections with Replication (LBLCR)
    • Like LBLC, but with server weights.
    • For session-affinity workloads on servers of unequal capacity.
  7. Destination Hashing (DH)
    • Assigns requests based on the destination IP address.
    • Each request maps to a fixed server; suits applications needing strong session affinity.
  8. Source Hashing (SH)
    • Assigns requests based on the source IP address.
    • Requests from the same source IP always reach the same server; suits session-affinity scenarios.

Choose the algorithm that matches the workload and server capacity: if the servers are roughly equal, round robin or weighted round robin is a good choice; if they differ, consider weighted least connections; and for applications that need session persistence, a hashing algorithm may fit best.

Both LVS servers need the ipvsadm and keepalived packages: ipvsadm is used to manage and inspect IPVS rules, while keepalived manages the VIP, generates the IPVS rules, and performs health checks.
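Once keepalived is running, the IPVS rules it generated can be inspected with ipvsadm. A read-only sketch (guarded, since ipvsadm requires root and normally only exists on the LVS hosts):

```shell
# List the virtual servers, their real servers, and per-service traffic counters.
if command -v ipvsadm >/dev/null 2>&1 && [ "$(id -u)" -eq 0 ]; then
    ipvsadm -Ln          # numeric listing of virtual/real servers and weights
    ipvsadm -Ln --stats  # connection and byte counters per service
    ipvs_check=ran
else
    echo "ipvsadm unavailable (not installed or not root)"
    ipvs_check=skipped
fi
```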

2.4. Writing the keepalived.conf.j2 template

Create keepalived.conf.j2 in the resources directory next to the k8s-setup.yml file.

If the machines have no hostnames, comparisons against ansible_host or ansible_default_ipv4.address (matching by IP, as done here) can be used instead of ansible_hostname.

File name: resources/keepalived.conf.j2

  vrrp_instance VI_1 {
      state {{ 'MASTER' if ansible_host == keepalived_master_ip else 'BACKUP' }}
      interface ens160
      virtual_router_id 51
      priority {{ 255 if ansible_host == keepalived_master_ip else 254 }}
      advert_int 1
      authentication {
          auth_type PASS
          auth_pass 123456
      }
      virtual_ipaddress {
          {{ kubernetes_vip }}/24
      }
  }

  # masters with port 6443
  virtual_server {{ kubernetes_vip }} 6443 {
      delay_loop 6
      lb_algo wlc
      lb_kind DR
      persistence_timeout 360
      protocol TCP
  {% for host in groups['main'] %}
      # {{ host }}
      real_server {{ hostvars[host]['ansible_host'] }} 6443 {
          weight 1
          SSL_GET {
              url {
                  path /livez?verbose
                  status_code 200
              }
              connect_timeout 3
              nb_get_retry 3
              delay_before_retry 3
          }
      }
  {% endfor %}
  {% for host in groups['masters'] %}
      # {{ host }}
      real_server {{ hostvars[host]['ansible_host'] }} 6443 {
          weight 1
          SSL_GET {
              url {
                  path /livez?verbose
                  status_code 200
              }
              connect_timeout 3
              nb_get_retry 3
              delay_before_retry 3
          }
      }
  {% endfor %}
  }

  # workers with port 80
  virtual_server {{ kubernetes_vip }} 80 {
      delay_loop 6
      lb_algo wlc
      lb_kind DR
      persistence_timeout 7200
      protocol TCP
  {% for host in groups['workers'] %}
      # {{ host }}
      real_server {{ hostvars[host]['ansible_host'] }} 80 {
          weight 1
          TCP_CHECK {
              connect_timeout 10
              connect_port 80
          }
      }
  {% endfor %}
  }

  # workers with port 443
  virtual_server {{ kubernetes_vip }} 443 {
      delay_loop 6
      lb_algo wlc
      lb_kind DR
      persistence_timeout 7200
      protocol TCP
  {% for host in groups['workers'] %}
      # {{ host }}
      real_server {{ hostvars[host]['ansible_host'] }} 443 {
          weight 1
          TCP_CHECK {
              connect_timeout 10
              connect_port 443
          }
      }
  {% endfor %}
  }

vrrp_instance defines an instance named VI_1, configured as MASTER on k8s-lvs-01 and as BACKUP on the other machines.

interface ens160 is the network interface name on the current host; you can find it with ip addr or ifconfig — it is the interface that carries the node's IP address. A fresh machine typically has a loopback (lo) interface plus one interface named something like ens160 or en0.

advert_int 1 sets the interval between keepalived's VRRP advertisements to 1 second.

priority on a BACKUP node should generally be lower than on the MASTER node.

virtual_ipaddress holds the VIP (Virtual IP).

virtual_server defines a virtual server; each virtual server fronts several real servers (real_server).

lb_algo wlc selects the load-balancing algorithm.
lb_kind DR selects Direct Routing mode for forwarding traffic.

The {% for host in groups['masters'] %} statements are written flush-left to avoid indentation errors in the rendered file.
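Before restarting the service, recent keepalived releases can also syntax-check the rendered configuration; a guarded sketch (assuming a keepalived version that supports the --config-test/-t flag):

```shell
# Syntax-check the deployed keepalived configuration without restarting it.
if command -v keepalived >/dev/null 2>&1; then
    keepalived -t -f /etc/keepalived/keepalived.conf && echo "config OK"
    kad_check=ran
else
    echo "keepalived not installed; skipping config check"
    kad_check=skipped
fi
```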

2.5. Ansible playbook for ipvsadm and keepalived

  ---
  - name: Setup Load Balancer with IPVS and Keepalived
    hosts: lvs
    become: yes
    tasks:
      # Upgrade all installed packages to their latest versions
      - name: Upgrade all installed apt packages
        apt:
          upgrade: 'yes'
          update_cache: yes
          cache_valid_time: 3600 # Cache is considered valid for 1 hour
      # Install IP Virtual Server (IPVS) administration utility
      - name: Install ipvsadm for IPVS management
        apt:
          name: ipvsadm
          state: present
      # Install keepalived for high availability
      - name: Install Keepalived for load balancing
        apt:
          name: keepalived
          state: present
      # Deploy keepalived configuration from a Jinja2 template
      - name: Deploy keepalived configuration file
        template:
          src: resources/keepalived.conf.j2
          dest: /etc/keepalived/keepalived.conf
      # Restart keepalived to apply changes
      - name: Restart Keepalived service
        service:
          name: keepalived
          state: restarted

3. Installing Kubernetes and containerd.io

3.1. What each component does

  1. kubelet
    • kubelet is an agent that runs on every node of a Kubernetes cluster and manages the containers on that node.
    • It watches the PodSpecs (Pod specifications) handed down by the API server and ensures the state of the containers matches those specs.
    • kubelet manages the container lifecycle: starting, stopping, and restarting containers, and running health checks.
  2. kubeadm
    • kubeadm is a tool for bootstrapping Kubernetes clusters quickly.
    • It initializes the cluster (sets up the master nodes), joins new nodes, and performs the configuration needed to get a working cluster.
    • kubeadm does not manage the cluster's nodes or Pods; it is only responsible for bootstrapping and growing the cluster.
  3. kubectl
    • kubectl is the Kubernetes command-line tool for interacting with a cluster.
    • It deploys applications, inspects and manages cluster resources, and views logs.
    • kubectl mainly talks to the Kubernetes API server to carry out user commands.
  4. containerd.io
    • containerd.io is a container runtime, which Kubernetes uses to run containers.
    • It pulls images, runs containers, and manages their lifecycle.
    • containerd is a core component of Docker, but in Kubernetes it can be used standalone, without the full Docker toolchain.

3.2. Installing Kubernetes dependencies

  ---
  - name: Install kubernetes packages and containerd.io
    hosts: kubernetes
    become: yes
    tasks:
      # Upgrade all installed packages to their latest versions
      - name: Upgrade all installed apt packages
        apt:
          upgrade: 'yes'
          update_cache: yes
          cache_valid_time: 3600 # Cache is considered valid for 1 hour
      # Install required packages for Kubernetes and Docker setup
      - name: Install prerequisites for Kubernetes and Docker
        apt:
          name:
            - ca-certificates
            - curl
            - gnupg
          update_cache: yes
          cache_valid_time: 3600
      # Ensure the keyring directory exists for storing GPG keys
      - name: Create /etc/apt/keyrings directory for GPG keys
        file:
          path: /etc/apt/keyrings
          state: directory
          mode: '0755'
      # Add Docker's official GPG key
      - name: Add official Docker GPG key to keyring
        apt_key:
          url: https://download.docker.com/linux/ubuntu/gpg
          keyring: /etc/apt/keyrings/docker.gpg
          state: present
      # Add Docker's apt repository
      - name: Add Docker repository to apt sources
        apt_repository:
          # repo: "deb [arch={{ ansible_architecture }} signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
          repo: "deb [signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
          filename: docker
          update_cache: yes
        notify: Update apt cache
      # Add Kubernetes' GPG key
      - name: Add Kubernetes GPG key to keyring
        apt_key:
          url: https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key
          keyring: /etc/apt/keyrings/kubernetes-apt-keyring.gpg
          state: present
      # Add Kubernetes' apt repository
      - name: Add Kubernetes repository to apt sources
        lineinfile:
          path: /etc/apt/sources.list.d/kubernetes.list
          line: 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /'
          create: yes
        notify: Update apt cache
      # Install Kubernetes packages
      - name: Install Kubernetes packages (kubelet, kubeadm, kubectl) and containerd.io
        apt:
          name:
            - kubelet
            - kubeadm
            - kubectl
            - containerd.io
          state: present
      # Hold the installed packages to prevent automatic updates
      - name: Hold Kubernetes packages and containerd.io
        dpkg_selections:
          name: "{{ item }}"
          selection: hold
        loop:
          - kubelet
          - kubeadm
          - kubectl
          - containerd.io
    handlers:
      # Handler to update apt cache when notified
      - name: Update apt cache
        apt:
          update_cache: yes

To pin the repository's arch explicitly, use the following form (Docker shown as an example):

  repo: "deb [arch={{ ansible_architecture }} signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"

3.3. Forwarding IPv4 and letting iptables see bridged traffic

  ---
  - name: Configure Kubernetes prerequisites
    hosts: kubernetes
    become: yes # to run tasks that require sudo
    tasks:
      - name: Load Kernel Modules
        copy:
          content: |
            overlay
            br_netfilter
          dest: /etc/modules-load.d/k8s.conf
        notify: Load Modules
      - name: Set Sysctl Parameters
        copy:
          content: |
            net.bridge.bridge-nf-call-iptables = 1
            net.bridge.bridge-nf-call-ip6tables = 1
            net.ipv4.ip_forward = 1
          dest: /etc/sysctl.d/k8s.conf
        notify: Apply Sysctl
    handlers:
      - name: Load Modules
        modprobe:
          name: "{{ item }}"
          state: present
        loop:
          - overlay
          - br_netfilter
      - name: Apply Sysctl
        command: sysctl --system
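After this playbook runs, the result can be verified on a node. A sketch (guarded: on machines where the modules or keys are absent it simply reports so):

```shell
# Report whether the two kernel modules are loaded and what ip_forward is set to.
status=""
for mod in overlay br_netfilter; do
    if lsmod 2>/dev/null | grep -q "^$mod"; then
        status="$status $mod=loaded"
    else
        status="$status $mod=missing"
    fi
done
echo "modules:$status"
fwd="$(sysctl -n net.ipv4.ip_forward 2>/dev/null || echo unknown)"
echo "ip_forward=$fwd"
```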

4. Pre-pulling the images Kubernetes needs

containerd itself also uses one of the images that Kubernetes needs (just possibly at an older version, which can be changed to the matching one), so the Kubernetes images are pre-pulled before containerd is configured.

  ---
  - name: Prefetch kubernetes images
    hosts: kubernetes
    become: true
    tasks:
      - name: Get kubeadm version
        command: kubeadm version -o short
        register: kubeadm_version
      - name: List Kubernetes images for the specific kubeadm version
        command: "kubeadm config images list --kubernetes-version={{ kubeadm_version.stdout }}"
        register: kubernetes_images
      - name: Pull and retag Kubernetes images from Aliyun registry
        block:
          - name: List old images in k8s.io namespace
            command: ctr -n k8s.io images list -q
            register: old_images_list
          - name: Pull Kubernetes image from Aliyun
            command: "ctr -n k8s.io images pull registry.aliyuncs.com/google_containers/{{ item.split('/')[-1] }}"
            loop: "{{ kubernetes_images.stdout_lines }}"
            when: item not in old_images_list.stdout
            loop_control:
              label: "{{ item }}"
          - name: Retag Kubernetes image
            command: "ctr -n k8s.io images tag registry.aliyuncs.com/google_containers/{{ item.split('/')[-1] }} {{ item }}"
            loop: "{{ kubernetes_images.stdout_lines }}"
            when: item not in old_images_list.stdout
            loop_control:
              label: "{{ item }}"
          - name: List new images in k8s.io namespace
            command: ctr -n k8s.io images list -q
            register: new_images_list
          - name: Remove images from Aliyun registry
            command: "ctr -n k8s.io images remove {{ item }}"
            loop: "{{ new_images_list.stdout_lines }}"
            when: item.startswith('registry.aliyuncs.com/google_containers')
            loop_control:
              label: "{{ item }}"
          # # Optional: Remove old SHA256 tags if necessary
          # - name: Remove old SHA256 tags
          #   command: "ctr -n k8s.io images remove {{ item }}"
          #   loop: "{{ new_images_list.stdout_lines }}"
          #   when: item.startswith('sha256:')
          #   loop_control:
          #     label: "{{ item }}"
          - name: Restart containerd service
            service:
              name: containerd
              state: restarted

Running this with ansible-playbook completes all the image pre-pulling. Alternatively, this step can be skipped by overriding imageRepository (default registry.k8s.io) in the kubeadm init configuration file.

The commented-out tasks delete the images whose names start with sha256:; it is recommended not to delete them.

  ctr -n k8s.io images ls -q
  registry.k8s.io/coredns/coredns:v1.10.1
  registry.k8s.io/etcd:3.5.9-0
  registry.k8s.io/kube-apiserver:v1.28.4
  registry.k8s.io/kube-controller-manager:v1.28.4
  registry.k8s.io/kube-proxy:v1.28.4
  registry.k8s.io/kube-scheduler:v1.28.4
  registry.k8s.io/pause:3.9
  sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
  sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
  sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
  sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
  sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
  sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
  sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc

SHA256 tags play an important role in Docker and container technology, mainly around image integrity and version control. Here is what they do, and the potential pros and cons of deleting them:

What SHA256 tags are for:

  1. Integrity verification: a SHA256 tag is a cryptographic hash of the image contents, used to verify that an image has not been tampered with in transit.
  2. Unique identification: every image has a unique SHA256 hash, which distinguishes image versions even when they share a human-readable tag such as latest.
  3. Version control: across repeated builds and updates, the SHA256 hash makes it possible to track down a specific image version.

Benefits of deleting SHA256 tags:

  1. Simpler image management: removing the long SHA256 tags leaves only human-readable tags such as v1.0.0 to identify images by.
  2. Less clutter: where precise per-version tracking is unnecessary, removing the long hash tags reduces visual noise.

Drawbacks of deleting SHA256 tags:

  1. Loss of exact version information: deleting SHA256 tags removes the precise pointer to a specific image version, which can be a problem for rollback or auditing.
  2. Potential security risk: in some environments, keeping SHA256 tags acts as an extra safeguard that the image stays unchanged between pull and deployment.

In short, whether to delete SHA256 tags depends on your needs and management preferences. In production it is usually better to keep them for versioning and security auditing; in development or test environments, where the number of tags makes management unwieldy, removing them is an option.

5. Configuring containerd

containerd's default configuration file disables the cri plugin and sets SystemdCgroup = false. Before configuring Kubernetes, the cri plugin must be enabled and SystemdCgroup set to true.

SystemdCgroup = true is a configuration option that typically appears in container-runtime configuration files, particularly in Docker- or Kubernetes-related setups. It concerns cgroup (control group) management in Linux, specifically cgroup v2.

5.1. A brief introduction to cgroups

  • Control groups (cgroups): a Linux kernel feature for allocating, prioritizing, limiting, accounting, and isolating resource usage (CPU, memory, disk I/O, and so on).
  • cgroup v1 vs. v2: cgroup v1 is the original version; cgroup v2 is a newer version that provides a more consistent, simplified interface.

5.2. Systemd and cgroups

  • Systemd: a system and service manager that provides init and service management for modern Linux distributions. Systemd can also manage cgroups.
  • Systemd-cgroup integration: as the process manager, Systemd can use cgroups to manage system resources; with cgroup v2, it offers tighter integration and better resource management.

5.3. What SystemdCgroup = true means

Setting SystemdCgroup = true in the container runtime's configuration tells the runtime (such as Docker or containerd) to let Systemd manage the containers' cgroups. The advantages include:

  1. Better resource management: Systemd offers fine-grained control over resource limits and priorities, allowing container resources to be managed more effectively.
  2. Unified management: using Systemd for cgroups keeps container management consistent with how other services on the system are managed.
  3. Better compatibility: particularly on cgroup v2 systems, Systemd-managed cgroups ensure better compatibility and stability.

5.4. When it applies

In a Kubernetes environment this setting usually pairs with the kubelet's configuration, keeping container resource management consistent with the rest of the system — important for efficient resource usage and stable operation.

5.5. Modifying the containerd configuration file

Emptying the disabled_plugins = ["cri"] list enables the cri plugin; setting SystemdCgroup = true lets Systemd manage the containers' cgroups; and sandbox_image is changed to match the version Kubernetes expects.

  ---
  - name: Configure containerd
    hosts: kubernetes
    become: true
    tasks:
      - name: Get Kubernetes images list
        command: kubeadm config images list
        register: kubernetes_images
      - name: Set pause image variable
        set_fact:
          pause_image: "{{ kubernetes_images.stdout_lines | select('match', '^registry.k8s.io/pause:') | first }}"
      - name: Generate default containerd config
        command: containerd config default
        register: containerd_config
        changed_when: false
      - name: Write containerd config to file
        copy:
          dest: /etc/containerd/config.toml
          content: "{{ containerd_config.stdout }}"
          mode: '0644'
      - name: Replace 'sandbox_image' and 'SystemdCgroup' in containerd config
        lineinfile:
          path: /etc/containerd/config.toml
          regexp: "{{ item.regexp }}"
          line: "{{ item.line }}"
        loop:
          - { regexp: '^\s*sandbox_image\s*=.*$', line: '    sandbox_image = "{{ pause_image }}"' }
          - { regexp: 'SystemdCgroup =.*', line: '            SystemdCgroup = true' }
      - name: Restart containerd service
        service:
          name: containerd
          state: restarted
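For reference, the two lineinfile substitutions are roughly equivalent to the sed commands below, shown against a small sample file rather than the real /etc/containerd/config.toml (the pause:3.9 tag is only an example here; the playbook derives the actual tag from kubeadm):

```shell
# Sample of the two lines the playbook rewrites in the generated config
cat > /tmp/config-sample.toml <<'EOF'
    sandbox_image = "registry.k8s.io/pause:3.8"
            SystemdCgroup = false
EOF
# Point sandbox_image at the pause image kubeadm expects (example tag)
sed -i 's|sandbox_image = .*|sandbox_image = "registry.k8s.io/pause:3.9"|' /tmp/config-sample.toml
# Hand cgroup management to systemd
sed -i 's|SystemdCgroup = .*|SystemdCgroup = true|' /tmp/config-sample.toml
cat /tmp/config-sample.toml
```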

6. Initializing the main master node

When initializing a Kubernetes cluster with kubeadm, only one master node needs to be initialized; the remaining masters and workers can join with the kubeadm join command. So the main master is initialized first.

6.1. Generating a token for kubeadm init

kubeadm's default token is abcdef.0123456789abcdef; the required format is "[a-z0-9]{6}.[a-z0-9]{16}". A token can be generated with:

  LC_CTYPE=C tr -dc 'a-z0-9' </dev/urandom | head -c 6; echo -n '.'; LC_CTYPE=C tr -dc 'a-z0-9' </dev/urandom | head -c 16
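As a quick check, a generated token can be validated against the required pattern (a self-contained sketch of the same pipeline):

```shell
# Generate a token of the form [a-z0-9]{6}.[a-z0-9]{16} and validate it.
token="$(LC_CTYPE=C tr -dc 'a-z0-9' </dev/urandom | head -c 6).$(LC_CTYPE=C tr -dc 'a-z0-9' </dev/urandom | head -c 16)"
echo "$token"
# grep -E exits 0 only when the token matches kubeadm's expected format
echo "$token" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$' && echo "token format OK"
```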

6.2. Writing the kubeadm-init.yaml.j2 template

File name: resources/kubeadm-init.yaml.j2

Print the default configuration with kubeadm config print init-defaults, then modify it as follows:

  apiVersion: kubeadm.k8s.io/v1beta3
  bootstrapTokens:
  - token: {{ token }}
    ttl: 0s
    usages:
    - signing
    - authentication
    description: "kubeadm bootstrap token"
    groups:
    - system:bootstrappers:kubeadm:default-node-token
  kind: InitConfiguration
  localAPIEndpoint:
    advertiseAddress: {{ ansible_host }}
    bindPort: 6443
  nodeRegistration:
    criSocket: unix:///var/run/containerd/containerd.sock
    imagePullPolicy: IfNotPresent
    name: k8s-main-01
    taints: null
  ---
  apiServer:
    certSANs:
    - {{ kubernetes_vip }}
    - {{ ansible_host }}
  {% for host in groups['masters'] %}
    - {{ hostvars[host]['ansible_host'] }}
  {% endfor %}
    - k8s-main-01
    - k8s-main-02
    - k8s-main-03
    - kubernetes.cluster
    timeoutForControlPlane: 4m0s
  apiVersion: kubeadm.k8s.io/v1beta3
  certificatesDir: /etc/kubernetes/pki
  clusterName: kubernetes
  controllerManager: {}
  dns: {}
  etcd:
    local:
      dataDir: /var/lib/etcd
  imageRepository: registry.k8s.io
  kind: ClusterConfiguration
  kubernetesVersion: 1.28.4
  controlPlaneEndpoint: "{{ kubernetes_vip }}:6443"
  networking:
    dnsDomain: cluster.local
    podSubnet: 10.244.0.0/12
    serviceSubnet: 10.96.0.0/12
  scheduler: {}
  ---
  kind: KubeletConfiguration
  apiVersion: kubelet.config.k8s.io/v1beta1
  cgroupDriver: systemd
  ---
  apiVersion: kubeproxy.config.k8s.io/v1alpha1
  kind: KubeProxyConfiguration
  mode: ipvs

ttl: 0s means the token never expires.

The recommended practice is to set a reasonable expiry and obtain join tokens later with kubeadm token create --print-join-command.

6.3. Generating the kubeadm init configuration file

A shell task generates the token, register captures its stdout, and the Generate kubeadm config file task passes that output to the template through a variable named token.

How register works:

  • Capturing output: when you use the register keyword on a task, Ansible captures that task's output and stores it in the variable you name.
  • Using the variable: the registered variable can be used in subsequent tasks of the playbook, letting later steps depend on earlier output.

The Ansible playbook looks like this:

  ---
  - name: Initialize Kubernetes Cluster on Main Master
    hosts: main
    become: true
    tasks:
      - name: Generate Kubernetes init token
        shell: >
          LC_CTYPE=C tr -dc 'a-z' </dev/urandom | head -c 1;
          LC_CTYPE=C tr -dc 'a-z0-9' </dev/urandom | head -c 5;
          echo -n '.';
          LC_CTYPE=C tr -dc 'a-z0-9' </dev/urandom | head -c 16
        register: k8s_init_token
      - name: Generate kubeadm config file
        template:
          src: resources/kubeadm-init.yaml.j2
          dest: kubeadm-init.yaml
        vars:
          token: "{{ k8s_init_token.stdout }}"

This playbook generates a file named kubeadm-init.yaml in /home/ansible — the user name comes from the ansible_ssh_user set for the main group servers in hosts.ini. You can log in to the server to inspect the contents of kubeadm-init.yaml.

6.4. Creating the primary master node with kubeadm

For how to obtain the resources/cilium-linux-amd64.tar.gz file, see:

https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/#install-cilium-cli

On Linux, download cilium-cli like this:

  CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
  CLI_ARCH=amd64
  if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
  curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
  # On Linux, verify the sha256 like this:
  # sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
  # On macOS, verify the sha256 like this:
  # shasum -a 256 -c cilium-linux-${CLI_ARCH}.tar.gz.sha256sum

On macOS, download cilium-cli like this:

  CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
  CLI_ARCH=amd64
  if [ "$(uname -m)" = "arm64" ]; then CLI_ARCH=arm64; fi
  curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-darwin-${CLI_ARCH}.tar.gz{,.sha256sum}
  shasum -a 256 -c cilium-darwin-${CLI_ARCH}.tar.gz.sha256sum

The playbook below builds on the configuration shown earlier.

It is best to keep it in the same file as the later steps that join the other masters and workers, so that the kubeadm join commands can use the variables set via set_fact directly.

The playbook also writes the join commands needed on masters and workers to .master_join_command.txt and .worker_join_command.txt; those commands can be used when adding nodes individually, or when running kubeadm join by hand later.

The playbook installs cilium-cli directly on the Kubernetes main master; place the downloaded archive at resources/cilium-linux-amd64.tar.gz:

  ---
  - name: Initialize Kubernetes Cluster on Main Master
    hosts: main
    become: true
    tasks:
      - name: Check if IP address is already present
        shell: "ip addr show dev lo | grep {{ kubernetes_vip }}"
        register: ip_check
        ignore_errors: yes
        failed_when: false
        changed_when: false
      - name: Debug print ip_check result
        debug:
          msg: "{{ ip_check }}"
      - name: Add IP address to loopback interface
        command:
          cmd: "ip addr add {{ kubernetes_vip }}/32 dev lo"
        when: ip_check.rc != 0
      - name: Generate Kubernetes init token
        shell: >
          LC_CTYPE=C tr -dc 'a-z0-9' </dev/urandom | head -c 6;
          echo -n '.';
          LC_CTYPE=C tr -dc 'a-z0-9' </dev/urandom | head -c 16
        register: k8s_init_token
      - name: Generate kubeadm config file
        template:
          src: resources/kubeadm-init.yaml.j2
          dest: kubeadm-init.yaml
        vars:
          token: "{{ k8s_init_token.stdout }}"
      - name: Initialize the Kubernetes cluster using kubeadm
        command:
          cmd: kubeadm init --v=5 --skip-phases=addon/kube-proxy --upload-certs --config kubeadm-init.yaml
        register: kubeadm_init
      - name: Set fact for master join command
        set_fact:
          master_join_command: "{{ kubeadm_init.stdout | regex_search('kubeadm join(.*\\n)+?.*--control-plane.*', multiline=True) }}"
          cacheable: yes
        run_once: true
      - name: Set fact for worker join command
        set_fact:
          worker_join_command: "{{ kubeadm_init.stdout | regex_search('kubeadm join(.*\\n)+?.*sha256:[a-z0-9]{64}', multiline=True) }}"
          cacheable: yes
        run_once: true
      # - name: Create the target directory if it doesn't exist
      #   file:
      #     path: ~/.kube
      #     state: directory
      #     owner: "{{ ansible_user_id }}"
      #     group: "{{ ansible_user_id }}"
      #     mode: '0755'
      #   when: not ansible_check_mode # This ensures it only runs when not in check mode
      # - name: Copy kube admin config to ansible user directory
      #   copy:
      #     src: /etc/kubernetes/admin.conf
      #     dest: ~/.kube/config
      #     remote_src: yes
      #     owner: "{{ ansible_user_id }}"
      #     group: "{{ ansible_user_id }}"
      #     mode: '0644'
      - name: Write master join command to .master_join_command.txt
        copy:
          content: "{{ master_join_command }}"
          dest: ".master_join_command.txt"
          mode: '0664'
        delegate_to: localhost
      - name: Append worker join command to .worker_join_command.txt
        lineinfile:
          path: ".worker_join_command.txt"
          line: "{{ worker_join_command }}"
          create: yes
        delegate_to: localhost

  - name: Install cilium on Main Master
    hosts: main
    become: true
    tasks:
      - name: Ensure tar is installed (Debian/Ubuntu)
        apt:
          name: tar
          state: present
        when: ansible_os_family == "Debian"
      - name: Check for Cilium binary in /usr/local/bin
        stat:
          path: /usr/local/bin/cilium
        register: cilium_binary
      - name: Transfer and Extract Cilium
        unarchive:
          src: resources/cilium-linux-amd64.tar.gz
          dest: /usr/local/bin
          remote_src: no
        when: not cilium_binary.stat.exists
      - name: Install cilium to the Kubernetes cluster
        environment:
          KUBECONFIG: /etc/kubernetes/admin.conf
        command:
          cmd: cilium install --version 1.14.4 --set kubeProxyReplacement=true
      - name: Wait for Kubernetes cluster to become ready
        environment:
          KUBECONFIG: /etc/kubernetes/admin.conf
        command: kubectl get nodes
        register: kubectl_output
        until: kubectl_output.stdout.find("Ready") != -1
        retries: 20
        delay: 30

To keep using kube-proxy instead, remove the --skip-phases=addon/kube-proxy flag from the kubeadm init command and the --set kubeProxyReplacement=true flag from the cilium install command.
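Outside Ansible, the same extraction that the regex_search filters perform can be mimicked with standard text tools. A self-contained sketch over a fabricated sample that imitates kubeadm's output format (the token and hashes below are made up):

```shell
# Extract the control-plane join command from saved kubeadm init output.
cat > /tmp/kubeadm-init.log <<'EOF'
You can now join any number of control-plane nodes by running:

  kubeadm join 192.168.2.111:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:1111111111111111111111111111111111111111111111111111111111111111 \
        --control-plane --certificate-key 2222222222222222222222222222222222222222222222222222222222222222

Then you can join any number of worker nodes by running the following:

kubeadm join 192.168.2.111:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:1111111111111111111111111111111111111111111111111111111111111111
EOF
# grep -A2 takes each "kubeadm join" line plus its two continuation lines;
# the group containing --control-plane is the master join command.
master_cmd="$(grep -A2 '^ *kubeadm join' /tmp/kubeadm-init.log | grep -B2 -e '--control-plane' | head -n 3)"
echo "$master_cmd"
```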

6.5. Joining the remaining masters to the cluster

  ---
  - name: Join Masters to the Cluster
    hosts: masters
    become: true
    tasks:
      - name: Joining master node to the Kubernetes cluster
        shell:
          cmd: "{{ hostvars['k8s-main-01']['master_join_command'] }}"
        ignore_errors: yes
      - name: Wait for node to become ready
        environment:
          KUBECONFIG: /etc/kubernetes/admin.conf
        command: kubectl get nodes
        register: kubectl_output
        until: kubectl_output.stdout.find("NotReady") == -1
        retries: 20
        delay: 30
      - name: Check if IP address is already present
        shell: "ip addr show dev lo | grep {{ kubernetes_vip }}"
        register: ip_check
        ignore_errors: yes
        failed_when: false
        changed_when: false
      - name: Debug print ip_check result
        debug:
          msg: "{{ ip_check }}"
      - name: Add IP address to loopback interface
        command:
          cmd: "ip addr add {{ kubernetes_vip }}/32 dev lo"
        when: ip_check.rc != 0

6.6. Joining the workers to the cluster

  ---
  - name: Join Worker Nodes to the Cluster
    hosts: workers
    become: true
    tasks:
      - name: Joining worker node to the Kubernetes cluster
        shell:
          cmd: "{{ hostvars['k8s-main-01']['worker_join_command'] }}"
        ignore_errors: yes
      - name: Check if IP address is already present
        shell: "ip addr show dev lo | grep {{ kubernetes_vip }}"
        register: ip_check
        ignore_errors: yes
        failed_when: false
        changed_when: false
      - name: Debug print ip_check result
        debug:
          msg: "{{ ip_check }}"
      - name: Add IP address to loopback interface
        command:
          cmd: "ip addr add {{ kubernetes_vip }}/32 dev lo"
        when: ip_check.rc != 0

The complete ansible playbook configuration

Running just this one playbook, together with the three files in the resources directory, performs all of the steps above.

Files it depends on:

  • The keepalived.conf.j2 template file from section 2.4
  • The kubeadm-init.yaml.j2 template file from section 6.2
  • The cilium-cli archive downloaded in section 6.4 (creating the primary master node with kubeadm); the Linux build is required
---
- name: Setup Load Balancer with IPVS and Keepalived
  hosts: lvs
  become: yes
  tasks:
    # Upgrade all installed packages to their latest versions
    - name: Upgrade all installed apt packages
      apt:
        upgrade: 'yes'
        update_cache: yes
        cache_valid_time: 3600  # Cache is considered valid for 1 hour
    # Install IP Virtual Server (IPVS) administration utility
    - name: Install ipvsadm for IPVS management
      apt:
        name: ipvsadm
        state: present
    # Install keepalived for high availability
    - name: Install Keepalived for load balancing
      apt:
        name: keepalived
        state: present
    # Deploy keepalived configuration from a Jinja2 template
    - name: Deploy keepalived configuration file
      template:
        src: resources/keepalived.conf.j2
        dest: /etc/keepalived/keepalived.conf
    # Restart keepalived to apply changes
    - name: Restart Keepalived service
      service:
        name: keepalived
        state: restarted

- name: Install kubernetes packages and containerd.io
  hosts: kubernetes
  become: yes
  tasks:
    # Upgrade all installed packages to their latest versions
    - name: Upgrade all installed apt packages
      apt:
        upgrade: 'yes'
        update_cache: yes
        cache_valid_time: 3600  # Cache is considered valid for 1 hour
    # Install required packages for Kubernetes and Docker setup
    - name: Install prerequisites for Kubernetes and Docker
      apt:
        name:
          - ca-certificates
          - curl
          - gnupg
        update_cache: yes
        cache_valid_time: 3600
    # Ensure the keyring directory exists for storing GPG keys
    - name: Create /etc/apt/keyrings directory for GPG keys
      file:
        path: /etc/apt/keyrings
        state: directory
        mode: '0755'
    # Add Docker's official GPG key
    - name: Add official Docker GPG key to keyring
      apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        keyring: /etc/apt/keyrings/docker.gpg
        state: present
    # Add Docker's apt repository
    - name: Add Docker repository to apt sources
      apt_repository:
        # repo: "deb [arch={{ ansible_architecture }} signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
        repo: "deb [signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
        filename: docker
        update_cache: yes
      notify: Update apt cache
    # Add Kubernetes' GPG key
    - name: Add Kubernetes GPG key to keyring
      apt_key:
        url: https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key
        keyring: /etc/apt/keyrings/kubernetes-apt-keyring.gpg
        state: present
    # Add Kubernetes' apt repository
    - name: Add Kubernetes repository to apt sources
      lineinfile:
        path: /etc/apt/sources.list.d/kubernetes.list
        line: 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /'
        create: yes
      notify: Update apt cache
    # Install Kubernetes packages
    - name: Install Kubernetes packages (kubelet, kubeadm, kubectl) and containerd.io
      apt:
        name:
          - kubelet
          - kubeadm
          - kubectl
          - containerd.io
        state: present
    # Hold the installed packages to prevent automatic updates
    - name: Hold Kubernetes packages and containerd.io
      dpkg_selections:
        name: "{{ item }}"
        selection: hold
      loop:
        - kubelet
        - kubeadm
        - kubectl
        - containerd.io
  handlers:
    # Handler to update apt cache when notified
    - name: Update apt cache
      apt:
        update_cache: yes

- name: Configure Kubernetes prerequisites
  hosts: kubernetes
  become: yes  # to run tasks that require sudo
  tasks:
    - name: Load Kernel Modules
      copy:
        content: |
          overlay
          br_netfilter
        dest: /etc/modules-load.d/k8s.conf
      notify: Load Modules
    - name: Set Sysctl Parameters
      copy:
        content: |
          net.bridge.bridge-nf-call-iptables = 1
          net.bridge.bridge-nf-call-ip6tables = 1
          net.ipv4.ip_forward = 1
        dest: /etc/sysctl.d/k8s.conf
      notify: Apply Sysctl
  handlers:
    - name: Load Modules
      modprobe:
        name: "{{ item }}"
        state: present
      loop:
        - overlay
        - br_netfilter
    - name: Apply Sysctl
      command: sysctl --system

- name: Prefetch kubernetes images
  hosts: kubernetes
  become: true
  tasks:
    - name: Get kubeadm version
      command: kubeadm version -o short
      register: kubeadm_version
    - name: List Kubernetes images for the specific kubeadm version
      command: "kubeadm config images list --kubernetes-version={{ kubeadm_version.stdout }}"
      register: kubernetes_images
    - name: Pull and retag Kubernetes images from Aliyun registry
      block:
        - name: List old images in k8s.io namespace
          command: ctr -n k8s.io images list -q
          register: old_images_list
        - name: Pull Kubernetes image from Aliyun
          command: "ctr -n k8s.io images pull registry.aliyuncs.com/google_containers/{{ item.split('/')[-1] }}"
          loop: "{{ kubernetes_images.stdout_lines }}"
          when: item not in old_images_list.stdout
          loop_control:
            label: "{{ item }}"
        - name: Retag Kubernetes image
          command: "ctr -n k8s.io images tag registry.aliyuncs.com/google_containers/{{ item.split('/')[-1] }} {{ item }}"
          loop: "{{ kubernetes_images.stdout_lines }}"
          when: item not in old_images_list.stdout
          loop_control:
            label: "{{ item }}"
        - name: List new images in k8s.io namespace
          command: ctr -n k8s.io images list -q
          register: new_images_list
        - name: Remove images from Aliyun registry
          command: "ctr -n k8s.io images remove {{ item }}"
          loop: "{{ new_images_list.stdout_lines }}"
          when: item.startswith('registry.aliyuncs.com/google_containers')
          loop_control:
            label: "{{ item }}"
        # # Optional: Remove old SHA256 tags if necessary
        # - name: Remove old SHA256 tags
        #   command: "ctr -n k8s.io images remove {{ item }}"
        #   loop: "{{ new_images_list.stdout_lines }}"
        #   when: item.startswith('sha256:')
        #   loop_control:
        #     label: "{{ item }}"

- name: Configure containerd
  hosts: kubernetes
  become: true
  tasks:
    - name: Get Kubernetes images list
      command: kubeadm config images list
      register: kubernetes_images
    - name: Set pause image variable
      set_fact:
        pause_image: "{{ kubernetes_images.stdout_lines | select('match', '^registry.k8s.io/pause:') | first }}"
    - name: Generate default containerd config
      command: containerd config default
      register: containerd_config
      changed_when: false
    - name: Write containerd config to file
      copy:
        dest: /etc/containerd/config.toml
        content: "{{ containerd_config.stdout }}"
        mode: '0644'
    - name: Replace 'sandbox_image' and 'SystemdCgroup' in containerd config
      lineinfile:
        path: /etc/containerd/config.toml
        regexp: "{{ item.regexp }}"
        line: "{{ item.line }}"
      loop:
        - { regexp: '^\s*sandbox_image\s*=.*$', line: '    sandbox_image = "{{ pause_image }}"' }
        - { regexp: 'SystemdCgroup =.*', line: '            SystemdCgroup = true' }
    - name: Restart containerd service
      service:
        name: containerd
        state: restarted

- name: Initialize Kubernetes Cluster on Main Master
  hosts: main
  become: true
  tasks:
    - name: Check if IP address is already present
      shell: "ip addr show dev lo | grep {{ kubernetes_vip }}"
      register: ip_check
      ignore_errors: yes
      failed_when: false
      changed_when: false
    - name: Debug print ip_check result
      debug:
        msg: "{{ ip_check }}"
    - name: Add IP address to loopback interface
      command:
        cmd: "ip addr add {{ kubernetes_vip }}/32 dev lo"
      when: ip_check.rc != 0
    - name: Generate Kubernetes init token
      shell: >
        LC_CTYPE=C tr -dc 'a-z0-9' </dev/urandom | head -c 6;
        echo -n '.';
        LC_CTYPE=C tr -dc 'a-z0-9' </dev/urandom | head -c 16
      register: k8s_init_token
    - name: Generate kubeadm config file
      template:
        src: resources/kubeadm-init.yaml.j2
        dest: kubeadm-init.yaml
      vars:
        token: "{{ k8s_init_token.stdout }}"
    - name: Initialize the Kubernetes cluster using kubeadm
      command:
        cmd: kubeadm init --v=5 --skip-phases=addon/kube-proxy --config kubeadm-init.yaml --upload-certs
      register: kubeadm_init
    - name: Set fact for master join command
      set_fact:
        master_join_command: "{{ kubeadm_init.stdout | regex_search('kubeadm join(.*\\n)+?.*--control-plane', multiline=True) }}"
        cacheable: yes
      run_once: true
    - name: Set fact for worker join command
      set_fact:
        worker_join_command: "{{ kubeadm_init.stdout | regex_search('kubeadm join(.*\\n)+?.*sha256:[a-z0-9]{64}', multiline=True) }}"
        cacheable: yes
      run_once: true
    # - name: Create the target directory if it doesn't exist
    #   file:
    #     path: ~/.kube
    #     state: directory
    #     owner: "{{ ansible_user_id }}"
    #     group: "{{ ansible_user_id }}"
    #     mode: '0755'
    #   when: not ansible_check_mode  # This ensures it only runs when not in check mode
    # - name: Copy kube admin config to ansible user directory
    #   copy:
    #     src: /etc/kubernetes/admin.conf
    #     dest: ~/.kube/config
    #     remote_src: yes
    #     owner: "{{ ansible_user_id }}"
    #     group: "{{ ansible_user_id }}"
    #     mode: '0644'
    - name: Write master join command to .master_join_command.txt
      copy:
        content: "{{ master_join_command }}"
        dest: ".master_join_command.txt"
        mode: '0664'
      delegate_to: localhost
    - name: Append worker join command to .worker_join_command.txt
      lineinfile:
        path: ".worker_join_command.txt"
        line: "{{ worker_join_command }}"
        create: yes
      delegate_to: localhost

- name: Install cilium on Main Master
  hosts: main
  become: true
  tasks:
    - name: Ensure tar is installed (Debian/Ubuntu)
      apt:
        name: tar
        state: present
      when: ansible_os_family == "Debian"
    - name: Check for Cilium binary in /usr/local/bin
      stat:
        path: /usr/local/bin/cilium
      register: cilium_binary
    - name: Transfer and Extract Cilium
      unarchive:
        src: resources/cilium-linux-amd64.tar.gz
        dest: /usr/local/bin
        remote_src: no
      when: not cilium_binary.stat.exists
    - name: Install cilium to the Kubernetes cluster
      environment:
        KUBECONFIG: /etc/kubernetes/admin.conf
      command:
        cmd: cilium install --version 1.14.4 --set kubeProxyReplacement=true
    - name: Wait for Kubernetes cluster to become ready
      environment:
        KUBECONFIG: /etc/kubernetes/admin.conf
      command: kubectl get nodes
      register: kubectl_output
      until: kubectl_output.stdout.find("Ready") != -1
      retries: 20
      delay: 30

- name: Join Masters to the Cluster
  hosts: masters
  become: true
  tasks:
    - name: Joining master node to the Kubernetes cluster
      shell:
        cmd: "{{ hostvars['k8s-main-01']['master_join_command'] }}"
      ignore_errors: yes

- name: Join Worker Nodes to the Cluster
  hosts: workers
  become: true
  tasks:
    - name: Joining worker node to the Kubernetes cluster
      shell:
        cmd: "{{ hostvars['k8s-main-01']['worker_join_command'] }}"
      ignore_errors: yes
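The two regex_search filters in the "Initialize Kubernetes Cluster on Main Master" play cut the master and worker join commands out of kubeadm init's stdout. The same extraction can be reproduced in plain Python; the sample output below is abbreviated and illustrative, not real kubeadm output:

```python
import re

# Abbreviated, illustrative stand-in for kubeadm init output.
fake_hash = "0" * 64
kubeadm_stdout = (
    "You can now join any number of control-plane nodes by running:\n"
    "\n"
    "  kubeadm join 192.168.2.111:8443 --token abc123.0123456789abcdef \\\n"
    "        --discovery-token-ca-cert-hash sha256:" + fake_hash + " \\\n"
    "        --control-plane --certificate-key deadbeef\n"
    "\n"
    "Then you can join any number of worker nodes by running:\n"
    "\n"
    "  kubeadm join 192.168.2.111:8443 --token abc123.0123456789abcdef \\\n"
    "        --discovery-token-ca-cert-hash sha256:" + fake_hash + "\n"
)

# Same patterns as the playbook's regex_search filters.
master_join = re.search(r"kubeadm join(.*\n)+?.*--control-plane", kubeadm_stdout)
worker_join = re.search(r"kubeadm join(.*\n)+?.*sha256:[a-z0-9]{64}", kubeadm_stdout)

print(master_join.group(0))
print(worker_join.group(0))
```

Note that because the search returns the first match, the worker pattern stops at the first sha256 hash it sees, which in this sample sits inside the control-plane join block; the captured text is nevertheless a complete, valid worker join command (endpoint, token, and CA-cert hash), which is why the playbook works.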

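Resetting Kubernetes nodes

To tear the nodes down before a rebuild, a reset playbook is useful. The following is only a minimal sketch, not taken from the original article: it assumes kubeadm reset -f plus the usual cleanup of the CNI config directory and the VIP that the setup playbook added to the loopback interface.

```yaml
---
- name: Reset Kubernetes nodes
  hosts: kubernetes
  become: true
  tasks:
    # Undo everything kubeadm init/join set up on this node
    - name: Reset kubeadm state
      command: kubeadm reset -f
      ignore_errors: yes
    # Remove leftover CNI configuration (cilium drops its files here)
    - name: Remove CNI configuration
      file:
        path: /etc/cni/net.d
        state: absent
    # Remove the VIP that the setup playbook added to the loopback interface
    - name: Remove VIP from loopback interface
      command: "ip addr del {{ kubernetes_vip }}/32 dev lo"
      failed_when: false
      changed_when: false
    # Restart containerd so no stale sandboxes survive the reset
    - name: Restart containerd service
      service:
        name: containerd
        state: restarted
```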

Original article: https://www.cnblogs.com/javennie/p/setup-k8s-cluster-via-anisble.html
