A clean CentOS 7.4 machine with 4 GB of RAM and 2 CPUs
Minimal installation, preferably with virtualization support
Note: the static network configuration in the script below must match your actual NIC name; I use ens33.
If your NIC has a different name, replace it with sed -i "s/ens33/<your NIC name>/g" <file path> (see the usage sketch after the script).
#!/bin/bash
echo "Starting k8s environment initialization..."

# Disable the firewall
/usr/bin/iptables -F >/dev/null 2>&1
/usr/bin/iptables -X >/dev/null 2>&1
/usr/bin/systemctl disable firewalld.service >/dev/null 2>&1
/usr/bin/systemctl stop firewalld.service >/dev/null 2>&1
echo "Firewall disabled..."

# Disable SELinux
/usr/sbin/setenforce 0
/usr/bin/sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
echo "SELinux disabled..."

# Disable swap
/usr/sbin/swapoff -a
echo "vm.swappiness = 0" >> /etc/sysctl.conf
echo "Swap disabled..."

# Configure a static network
echo "Configuring static network..."
cat << EOF >/etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
BOOTPROTO="static"
IPADDR="`ifconfig ens33 | grep broadcast | awk -F " " '{print $2}'`"
NETMASK="`ifconfig ens33 | grep broadcast | awk -F " " '{print $4}'`"
GATEWAY="`route -n | grep UG | awk -F " " '{print $2}'`"
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
PEERDNS="yes"
DNS1="114.114.114.114"
DNS2="8.8.8.8"
DNS3="`route -n | grep UG | awk -F " " '{print $2}'`"
EOF
cat << EOF >/etc/sysconfig/network
GATEWAY=`route -n | grep UG | awk -F " " '{print $2}'`
EOF

# Configure yum repositories
cat << EOF >/etc/yum.repos.d/163.repo
[163]
name=163
baseurl=http://mirrors.163.com/centos/7/os/x86_64/
gpgcheck=0
enabled=1
EOF
cat << EOF >/etc/yum.repos.d/epel.repo
[epel]
name=epel
baseurl=https://mirrors.aliyun.com/epel/7/x86_64/
enabled=1
gpgcheck=0
EOF
echo "Yum repositories written..."

# Refresh the yum cache
yum clean all >/dev/null 2>&1
yum makecache >/dev/null 2>&1
echo "Yum cache refreshed..."

# Install wget and ansible
yum install -y wget >/dev/null 2>&1
yum install -y ansible >/dev/null 2>&1
echo "wget and ansible installed..."

# Define the ansible host group
cat << EOF >>/etc/ansible/hosts
[k8s]
`ifconfig ens33 | grep broadcast | awk -F " " '{print $2}'`
EOF
echo "Ansible group configured..."

# Set the hostname
read -p "Enter a hostname: " name
/usr/bin/hostnamectl --static set-hostname $name
if [ $? = 0 ]
then
  echo "Hostname changed successfully..."
else
  echo "Failed to change hostname..."
  exit 1
fi

echo "Initialization complete..."
echo "Rebooting..."
sleep 3
/usr/sbin/init 6
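As a concrete usage sketch (the file name k8s-init.sh and the NIC name eth0 are assumptions for illustration, not names from the original post), saving, adapting, and running the script might look like this:

# Save the script above as k8s-init.sh (hypothetical name), then:
sed -i "s/ens33/eth0/g" k8s-init.sh    # only if your NIC is eth0 instead of ens33
chmod +x k8s-init.sh
bash k8s-init.sh                       # prompts for a hostname, then reboots the machine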
ssh root@<this host's IP>
If you skip this step, the ansible run will fail, because the host key has not yet been accepted into known_hosts.
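If you prefer not to log in interactively, a rough alternative (my addition, assuming root's default known_hosts location) is to pre-accept the host key with ssh-keyscan:

# Append this host's key to root's known_hosts so ansible's SSH connection is not blocked
mkdir -p /root/.ssh
ssh-keyscan -H <this host's IP> >> /root/.ssh/known_hosts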
Note: I keep all shell scripts under /root/start-sh/
mkdir -p /root/start-sh/
cd /root/start-sh/
vim docker-k8s.sh
Create the docker-k8s.sh script; it pulls the k8s images from mirror registries, re-tags them with the names kubeadm and flannel expect, and removes the original tags.
#!/bin/bash
# Pull the images from mirror registries
docker pull mirrorgooglecontainers/kube-apiserver:v1.14.0
docker pull mirrorgooglecontainers/kube-controller-manager:v1.14.0
docker pull mirrorgooglecontainers/kube-scheduler:v1.14.0
docker pull mirrorgooglecontainers/kube-proxy:v1.14.0
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.3.10
docker pull coredns/coredns:1.3.1
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
# Re-tag the images with the names kubeadm and flannel expect
docker tag mirrorgooglecontainers/kube-apiserver:v1.14.0 k8s.gcr.io/kube-apiserver:v1.14.0
docker tag mirrorgooglecontainers/kube-controller-manager:v1.14.0 k8s.gcr.io/kube-controller-manager:v1.14.0
docker tag mirrorgooglecontainers/kube-scheduler:v1.14.0 k8s.gcr.io/kube-scheduler:v1.14.0
docker tag mirrorgooglecontainers/kube-proxy:v1.14.0 k8s.gcr.io/kube-proxy:v1.14.0
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
# Remove the original tags
docker rmi mirrorgooglecontainers/kube-apiserver:v1.14.0
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.14.0
docker rmi mirrorgooglecontainers/kube-scheduler:v1.14.0
docker rmi mirrorgooglecontainers/kube-proxy:v1.14.0
docker rmi mirrorgooglecontainers/pause:3.1
docker rmi mirrorgooglecontainers/etcd:3.3.10
docker rmi coredns/coredns:1.3.1
docker rmi registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
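After the script runs, a quick check (a minimal sketch; the exact listing will vary) confirms the re-tagged images are present:

# All control-plane images should now carry k8s.gcr.io names, plus the flannel image
docker images | grep -E "k8s.gcr.io|quay.io/coreos/flannel"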
Create a directory for the kube-system manifests and write the kube-flannel.yaml file (based on https://www.codesheep.cn/kube-flannel-yml/?spm=a2c4e.11153940.blogcont682810.11.6f853974iYJ4BU)
This manifest sets up the flannel pod network so pods can communicate with each other; note that its Network setting (10.244.0.0/16) matches the --pod-network-cidr passed to kubeadm init later.
mkdir -p /root/kube-system/
cd /root/kube-system/
vim kube-flannel.yaml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: arm64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.10.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: arm
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.10.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: ppc64le
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.10.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: s390x
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.10.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
Once the yaml file is ready, write a small script to apply it. The ansible playbook later calls it as /root/start-sh/start-pod-network.sh, so create it there:
cd /root/start-sh/
vim start-pod-network.sh
#!/bin/bash
# Make kubectl usable for root
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile
# Install the pod network
/usr/sbin/sysctl net.bridge.bridge-nf-call-iptables=1
kubectl apply -f /root/kube-system/kube-flannel.yaml
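Note that the sysctl call above only sets the value for the running system; if you want it to survive a reboot, one option (my addition, not part of the original script) is:

# Persist the bridge netfilter setting across reboots
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
sysctl -p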
With the environment files k8s needs in place, write the ansible playbook.
mkdir -p /root/ansible
cd /root/ansible
vim k8s.yml
Note: the "init k8s" task uses an awk pipeline to obtain the local IP, again based on the ens33 NIC (see the snippet below).
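For reference, this is the pipeline the playbook uses to pick up the node's IPv4 address (the output shown is illustrative):

# The second field of the "broadcast" line is the IPv4 address of ens33
ifconfig ens33 | grep broadcast | awk -F " " '{print $2}'
# e.g. 192.168.1.10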
---
- hosts: k8s
  remote_user: root
  tasks:
  - name: off iptables
    shell: iptables -F && iptables -X
  - name: wget CentOS-Base.repo
    shell: wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
  - name: update CentOS-Base.repo
    shell: sed -i "s/[$]releasever/7/g" /etc/yum.repos.d/CentOS-Base.repo
  - name: k8s.repo
    shell: echo -e [kubernetes]"\n"name=Kubernetes Repo"\n"baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/"\n"gpgcheck=0"\n"gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg >/etc/yum.repos.d/k8s.repo
  - name: wget docker-ce.repo
    shell: wget http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker.repo
  - name: yum makecache
    shell: yum clean all && yum makecache
  - name: install epel-release
    yum: name=epel-release state=present
  - name: install container-selinux
    yum: name=container-selinux state=present
  - name: install docker
    yum: name=docker state=present
  - name: update docker-selinux
    shell: sed -i "s/OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'/OPTIONS='--log-driver=journald --signature-verification=false'/g" /etc/sysconfig/docker
  - name: start docker
    service: name=docker enabled=yes state=restarted
  - name: install kubelet
    yum: name=kubelet state=present
  - name: install kubeadm
    yum: name=kubeadm state=present
  - name: install kubectl
    yum: name=kubectl state=present
  - name: start kubelet
    service: name=kubelet enabled=yes state=restarted
  - name: pull k8s images
    shell: bash /root/start-sh/docker-k8s.sh
  - name: off swap
    shell: swapoff -a
  - name: init k8s
    ignore_errors: yes
    shell: kubeadm init --kubernetes-version=v1.14.0 --apiserver-advertise-address `ifconfig ens33 | grep broadcast | awk -F " " '{print $2}'` --pod-network-cidr=10.244.0.0/16
  - name: install pod network
    script: /root/start-sh/start-pod-network.sh
  - name: source kubectl
    shell: source /etc/profile
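Before launching the full run, two cheap checks I find useful (my additions, not from the original post) are a syntax check and a connectivity test against the k8s group:

ansible-playbook k8s.yml --syntax-check      # catches YAML/indentation mistakes
ansible k8s -m ping --ask-pass               # verifies SSH connectivity to the [k8s] group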
cd /root/ansible/
ansible-playbook k8s.yml --ask-pass
Enter the root password when prompted.
The playbook starts running; it takes a while because the images have to be pulled.
When it finishes, run kubectl get pod -n kube-system to check the pod status.
If the kubectl command is not recognized, run source /etc/profile once more.
The k8s environment is now deployed.
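As a final sanity check (my addition; the exact output will differ on your machine), confirm the node and the flannel pods are healthy:

kubectl get nodes                                 # the node should eventually report Ready
kubectl -n kube-system get pods -l app=flannel    # flannel pods should be Running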
------------------------------------------------------------------------------------------------------------------------------------
My skills are limited, so please bear with me; if you have a better way to optimize the ansible playbook, feel free to leave a comment.
Thanks, and best wishes for your work and health.
Original post: http://www.cnblogs.com/linux-cbr/p/10694767.html