Installing a single-master, two-node Kubernetes cluster with kubeadm

Source: cnblogs | Author: 飞到问情 | Date: 2019/1/7 9:31:22

Hosts:
master:172.16.40.97
node1:172.16.40.98
node2:172.16.40.99

# 1. Initialize the environment (all three hosts)

Disable the firewall and SELinux:

  systemctl stop firewalld && systemctl disable firewalld
  sed -ri '/^[^#]*SELINUX=/s#=.+$#=disabled#' /etc/selinux/config
  setenforce 0
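The sed expression above rewrites the `SELINUX=` line in place; it can be previewed safely by running the same expression on a throwaway copy first (the sample file content below is illustrative):

```shell
# Preview the sed edit on a scratch file before touching /etc/selinux/config.
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > /tmp/selinux.demo
sed -ri '/^[^#]*SELINUX=/s#=.+$#=disabled#' /tmp/selinux.demo
cat /tmp/selinux.demo
# SELINUX=disabled
# SELINUXTYPE=targeted    (untouched: the pattern requires a literal "SELINUX=")
```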

Set up time synchronization (chrony client):

  yum install chrony -y
  cat <<EOF > /etc/chrony.conf
  server ntp.aliyun.com iburst
  stratumweight 0
  driftfile /var/lib/chrony/drift
  rtcsync
  makestep 10 3
  bindcmdaddress 127.0.0.1
  bindcmdaddress ::1
  keyfile /etc/chrony.keys
  commandkey 1
  generatecommandkey
  logchange 0.5
  logdir /var/log/chrony
  EOF
  systemctl restart chronyd

Make sure the hosts can resolve each other's names and log in to each other over SSH.
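If no internal DNS is available, static `/etc/hosts` entries on every machine are one way to satisfy mutual name resolution (the hostnames master/node1/node2 are an assumption matching the host list at the top; adjust to your actual hostnames):

```shell
# Append static name entries on each of the three machines (hostnames
# assumed to be master/node1/node2).
cat >> /etc/hosts <<'EOF'
172.16.40.97 master
172.16.40.98 node1
172.16.40.99 node2
EOF
```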

Upgrade the kernel:

  yum install wget git jq psmisc -y
  wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
  yum install -y https://mirrors.aliyun.com/saltstack/yum/redhat/salt-repo-latest-2.el7.noarch.rpm
  sed -i "s/repo.saltstack.com/mirrors.aliyun.com\/saltstack/g" /etc/yum.repos.d/salt-latest.repo
  yum update -y

Update the system and reboot.

Choose a kernel version:

  export Kernel_Version=4.18.9-1
  wget http://mirror.rc.usf.edu/compute_lock/elrepo/kernel/el7/x86_64/RPMS/kernel-ml{,-devel}-${Kernel_Version}.el7.elrepo.x86_64.rpm
  yum localinstall -y kernel-ml*

Check that this kernel ships the nf_conntrack_ipv4 module:

  find /lib/modules -name '*nf_conntrack_ipv4*' -type f

Adjust the boot order: the default entry is normally 1, and a newly installed kernel is inserted at the front as entry 0. (Skip this step if you are happy to pick the kernel manually at each boot.)

  grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg

Use the following command to confirm that the default boot entry points at the kernel installed above:

  grubby --default-kernel

Docker's official kernel check script recommends enabling user namespaces on RHEL7/CentOS7 ("User namespaces disabled; add 'user_namespace.enable=1' to boot command line"). Enable them with:

  grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"

Reboot to load the new kernel:

  reboot

Set the required kernel parameters in /etc/sysctl.d/k8s.conf:

  cat <<EOF > /etc/sysctl.d/k8s.conf
  net.ipv4.ip_forward = 1
  net.bridge.bridge-nf-call-ip6tables = 1
  net.bridge.bridge-nf-call-iptables = 1
  fs.may_detach_mounts = 1
  vm.overcommit_memory=1
  vm.panic_on_oom=0
  fs.inotify.max_user_watches=89100
  fs.file-max=52706963
  fs.nr_open=52706963
  net.netfilter.nf_conntrack_max=2310720
  EOF
  sysctl --system
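After `sysctl --system`, the applied values can be spot-checked against what the file sets. A small sketch (the key list is abbreviated to three of the entries above):

```shell
# Spot-check a few of the keys set above against the running kernel.
# Prints want/got pairs; "missing" means the key is unknown to this kernel.
for kv in net.ipv4.ip_forward=1 vm.overcommit_memory=1 vm.panic_on_oom=0; do
  key=${kv%%=*}; want=${kv##*=}
  got=$(sysctl -n "$key" 2>/dev/null || echo missing)
  printf '%-24s want=%s got=%s\n' "$key" "$want" "$got"
done
```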

Check that the kernel and its modules are suitable for running Docker (Linux only):

  curl https://raw.githubusercontent.com/docker/docker/master/contrib/check-config.sh > check-config.sh
  bash ./check-config.sh

Install docker-ce:

  curl -fsSL "https://get.docker.com/" | bash -s -- --mirror Aliyun
  mkdir -p /etc/docker/
  cat > /etc/docker/daemon.json <<EOF
  {
    "registry-mirrors": ["https://fz5yth0r.mirror.aliyuncs.com"],
    "storage-driver": "overlay2",
    "storage-opts": [
      "overlay2.override_kernel_check=true"
    ],
    "log-driver": "json-file",
    "log-opts": {
      "max-size": "100m",
      "max-file": "3"
    }
  }
  EOF
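A malformed daemon.json stops dockerd from starting at all, so it is worth validating before enabling the service. Here python3's json.tool is used purely as a convenient JSON validator:

```shell
# Validate daemon.json before (re)starting Docker: any parse error in this
# file is fatal to the daemon.
if python3 -m json.tool /etc/docker/daemon.json > /dev/null 2>&1; then
  echo "daemon.json: valid JSON"
else
  echo "daemon.json: missing or invalid" >&2
fi
```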

Enable Docker at boot. On CentOS, Docker command completion also has to be set up manually:

  yum install -y epel-release bash-completion && cp /usr/share/bash-completion/completions/docker /etc/bash_completion.d/
  systemctl enable --now docker

# 2. Install the Kubernetes cluster

Install kubelet, kubeadm, and kubectl on all three hosts. (Note that yum installs the latest release; to match the v1.13.1 used by kubeadm init below, pin the versions, e.g. `kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1`.)

  cat <<EOF > /etc/yum.repos.d/kubernetes.repo
  [kubernetes]
  name=Kubernetes
  baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
  enabled=1
  gpgcheck=1
  repo_gpgcheck=1
  gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
  EOF
  yum install -y kubelet kubeadm kubectl
  systemctl enable kubelet

On the master, tell kubelet to ignore the swap-is-enabled warning:

  cat <<EOF > /etc/sysconfig/kubelet
  KUBELET_EXTRA_ARGS=--fail-swap-on=false
  EOF
  systemctl daemon-reload
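Instead of ignoring the warning, swap can simply be disabled: `swapoff -a`, then comment out the swap entry in /etc/fstab so it stays off after reboot. The fstab edit is sketched here on a scratch file with an illustrative swap line; point sed at /etc/fstab for real use:

```shell
# Comment out the swap entry so swap stays disabled across reboots.
# Demonstrated on a scratch copy; the fstab content is illustrative.
printf '/dev/sda1 / xfs defaults 0 0\n/dev/sda2 swap swap defaults 0 0\n' > /tmp/fstab.demo
sed -ri '/\sswap\s/s/^/#/' /tmp/fstab.demo
cat /tmp/fstab.demo
# /dev/sda1 / xfs defaults 0 0
# #/dev/sda2 swap swap defaults 0 0
```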

Run kubeadm init on the master:

  kubeadm init --kubernetes-version=v1.13.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/16 --ignore-preflight-errors=Swap

  [init] Using Kubernetes version: v1.13.1
  [preflight] Running pre-flight checks
  [preflight] Pulling images required for setting up a Kubernetes cluster
  [preflight] This might take a minute or two, depending on the speed of your internet connection
  [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
  [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
  [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
  [kubelet-start] Activating the kubelet service
  [certs] Using certificateDir folder "/etc/kubernetes/pki"
  [certs] Generating ca certificate and key
  [certs] Generating apiserver-kubelet-client certificate and key
  [certs] Generating apiserver certificate and key
  [certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.40.97]
  [certs] Generating front-proxy-ca certificate and key
  [certs] Generating front-proxy-client certificate and key
  [certs] Generating etcd/ca certificate and key
  [certs] Generating etcd/server certificate and key
  [certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [172.16.40.97 127.0.0.1 ::1]
  [certs] Generating etcd/peer certificate and key
  [certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [172.16.40.97 127.0.0.1 ::1]
  [certs] Generating etcd/healthcheck-client certificate and key
  [certs] Generating apiserver-etcd-client certificate and key
  [certs] Generating sa key and public key
  [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
  [kubeconfig] Writing admin.conf kubeconfig file
  [kubeconfig] Writing kubelet.conf kubeconfig file
  [kubeconfig] Writing controller-manager.conf kubeconfig file
  [kubeconfig] Writing scheduler.conf kubeconfig file
  [control-plane] Using manifest folder "/etc/kubernetes/manifests"
  [control-plane] Creating static Pod manifest for kube-apiserver
  [control-plane] Creating static Pod manifest for kube-controller-manager
  [control-plane] Creating static Pod manifest for kube-scheduler
  [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
  [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
  [apiclient] All control plane components are healthy after 20.003620 seconds
  [uploadconfig] storing the configuration used in ConfigMap kubeadm-config in the kube-system Namespace
  [kubelet] Creating a ConfigMap kubelet-config-1.13 in namespace kube-system with the configuration for the kubelets in the cluster
  [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object master as an annotation
  [mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
  [mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
  [bootstrap-token] Using token: 2s9xxt.8lgyw6yzt21qq8xf
  [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
  [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
  [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
  [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
  [bootstraptoken] creating the cluster-info ConfigMap in the kube-public namespace
  [addons] Applied essential addon: CoreDNS
  [addons] Applied essential addon: kube-proxy

  Your Kubernetes master has initialized successfully!

  To start using your cluster, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

  You should now deploy a pod network to the cluster.
  Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

  You can now join any number of machines by running the following on each node
  as root:

    kubeadm join 172.16.40.97:6443 --token 2s9xxt.8lgyw6yzt21qq8xf --discovery-token-ca-cert-hash sha256:c141fb0608b4b83136272598d2623589d73546762abc987391479e8e049b0d76

Configure kubectl access to the cluster on each node:

  mkdir -p $HOME/.kube
  cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  chown $(id -u):$(id -g) $HOME/.kube/config

Next, install the flannel network plugin:

  wget https://raw.githubusercontent.com/sky-daiji/k8s-yaml/master/kube-flannel.yml
  kubectl apply -f kube-flannel.yml

Check cluster health from the master:

  [root@master ~]# kubectl get cs
  NAME                 STATUS    MESSAGE              ERROR
  controller-manager   Healthy   ok
  scheduler            Healthy   ok
  etcd-0               Healthy   {"health": "true"}

Join each node to the cluster:

  kubeadm join 172.16.40.97:6443 --token 2s9xxt.8lgyw6yzt21qq8xf --discovery-token-ca-cert-hash sha256:c141fb0608b4b83136272598d2623589d73546762abc987391479e8e049b0d76
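If the token or CA hash from the init output has been lost, new ones can be generated on the master: `kubeadm token create` issues a fresh token, and the hash is a sha256 digest of the cluster CA's public key. The hash pipeline is demonstrated below on a throwaway self-signed certificate so it runs anywhere; for a real cluster, point it at /etc/kubernetes/pki/ca.crt instead:

```shell
# Recompute a --discovery-token-ca-cert-hash value. The throwaway cert only
# makes the pipeline demonstrable; use /etc/kubernetes/pki/ca.crt on the master.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj '/CN=demo-ca' \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```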

Verify that all nodes have joined the cluster:

  [root@master ~]# kubectl get node
  NAME     STATUS   ROLES    AGE   VERSION
  master   Ready    master   15m   v1.13.1
  node1    Ready    <none>   13m   v1.13.1
  node2    Ready    <none>   13m   v1.13.1

Check that each Kubernetes component is running correctly.

Install the kubernetes-dashboard add-on (fetch the manifests as raw files, not GitHub HTML pages):

  wget https://raw.githubusercontent.com/sky-daiji/k8s-yaml/master/kubernetes-dashboard.yaml
  wget https://raw.githubusercontent.com/sky-daiji/k8s-yaml/master/admin-token.yaml
  kubectl apply -f kubernetes-dashboard.yaml -f admin-token.yaml

Verify that the kubernetes-dashboard pod is running:

  kubectl get pod -n kube-system |grep kubernetes-dashboard

Access the Dashboard:

https://172.16.40.97:30091

Choose the Token login mode. Print the admin token with:

  kubectl describe secret/$(kubectl get secret -n kube-system |grep admin|awk '{print $1}') -n kube-system
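What that one-liner does: `kubectl get secret` lists the secrets, grep selects the admin service-account token, and awk prints its name (column 1), which is then handed to `kubectl describe`. The text-processing step, shown on canned output with illustrative secret names:

```shell
# The grep/awk step from the pipeline above, run against sample
# `kubectl get secret -n kube-system` output (secret names are made up).
sample='admin-token-x7k2q     kubernetes.io/service-account-token   3   10m
default-token-abcde   kubernetes.io/service-account-token   3   15m'
echo "$sample" | grep admin | awk '{print $1}'
# admin-token-x7k2q
```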

Note: every host needs unrestricted internet access (a proxy may be required to pull images from Google's registries).
