K8S Advanced Scheduling

Source: cnblogs  Author: klvchen  Date: 2018/11/30 9:20:44

The advanced scheduling mechanisms fall into these categories:

  • Node selectors: nodeSelector, nodeName
  • Node affinity: nodeAffinity
  • Pod affinity: podAffinity
  • Pod anti-affinity: podAntiAffinity

nodeSelector, nodeName

```shell
cd; mkdir schedule; cd schedule/
vi pod-demo.yaml
```

Contents of pod-demo.yaml:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  nodeSelector:
    disktype: harddisk
```

```shell
kubectl apply -f pod-demo.yaml
kubectl get pods
kubectl describe pod pod-demo
# Result: no node carries the disktype=harddisk label yet, so scheduling fails:
#   Warning  FailedScheduling  2m3s (x25 over 3m15s)  default-scheduler  0/3 nodes are available: 3 node(s) didn't match node selector.

# Add the label to a node
kubectl label node node2 disktype=harddisk
# The pod now starts normally
kubectl get pods
```
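The other node selector listed above, nodeName, is not demonstrated in the original walkthrough. As a sketch: it bypasses the scheduler entirely, binding the pod directly to the named node with no label matching involved (the node name node1 is an assumption based on this cluster's nodes):

```yaml
# Hypothetical pod pinned to one specific node; nodeName skips the
# scheduler, so no nodeSelector or affinity rules are evaluated.
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodename-demo
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  nodeName: node1
```

Because scheduling is skipped, the pod fails outright if the named node does not exist or lacks resources, so nodeName is rarely used outside of debugging.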

nodeAffinity

requiredDuringSchedulingIgnoredDuringExecution — hard affinity: the rule must be satisfied, or the pod is not scheduled.
preferredDuringSchedulingIgnoredDuringExecution — soft affinity: satisfied if possible, but the pod is scheduled even when it cannot be.

Hard affinity:
matchExpressions: label expressions. For example, with key zone, operator In (membership test), and values foo and bar, the pod may only be scheduled onto nodes whose zone label is foo or bar.
matchFields: like the above, but matches node fields rather than node labels.
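As a sketch of matchFields (this term is not used in the original examples, and the node name node1 is an assumption), the field usually matched is metadata.name, which pins scheduling to nodes by name:

```yaml
# Hypothetical hard affinity term using matchFields instead of
# matchExpressions; metadata.name is the field typically supported here.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchFields:
        - key: metadata.name
          operator: In
          values:
          - node1
```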

```shell
# Run the pod on a node whose zone label is foo or bar
vi pod-nodeaffinity-demo.yaml
```

Contents of pod-nodeaffinity-demo.yaml:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-node-affinity-demo
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone
            operator: In
            values:
            - foo
            - bar
```

```shell
kubectl apply -f pod-nodeaffinity-demo.yaml
kubectl describe pod pod-node-affinity-demo
# Result: no node has a zone label yet, so scheduling fails:
#   Warning  FailedScheduling  2s (x8 over 20s)  default-scheduler  0/3 nodes are available: 3 node(s) didn't match node selector.

# Label one of the nodes with zone=foo
kubectl label node node1 zone=foo
# The pod now starts normally
kubectl get pods
```

Soft affinity:

```shell
cp pod-nodeaffinity-demo.yaml pod-nodeaffinity-demo-2.yaml
vi pod-nodeaffinity-demo-2.yaml
```

Contents of pod-nodeaffinity-demo-2.yaml:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-node-affinity-demo-2
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - preference:
          matchExpressions:
          - key: zone
            operator: In
            values:
            - foo
            - bar
        weight: 60
```

```shell
kubectl apply -f pod-nodeaffinity-demo-2.yaml
```
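Each preferred term carries a weight in the range 1 to 100. For every candidate node, the scheduler adds up the weights of the terms that node satisfies and favors the highest total. A sketch with two hypothetical terms (the label keys zone and disktype are reused from this article's examples):

```yaml
# Hypothetical: a node matching both terms scores 60 + 20 = 80 and is
# preferred over a node matching only one; a node matching neither can
# still be chosen, since soft affinity never blocks scheduling.
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 60
  preference:
    matchExpressions:
    - {key: zone, operator: In, values: ["foo"]}
- weight: 20
  preference:
    matchExpressions:
    - {key: disktype, operator: In, values: ["harddisk"]}
```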

podAffinity

A typical pod-affinity scenario: the cluster's nodes are spread across different zones or data centers, and service A and service B must be deployed in the same zone or data center. Pod affinity handles this.

labelSelector: selects the group of pods to be co-located with
namespaces: the namespaces in which that label selector applies
topologyKey: the node label key that defines the topology domain

```shell
kubectl get pods
kubectl delete pod pod-node-affinity-demo pod-node-affinity-demo-2 pod-demo
cd ~/schedule/
vi pod-required-affinity-demo.yaml
```

Contents of pod-required-affinity-demo.yaml:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  labels:
    app: db
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","sleep 3600"]
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["myapp"]}
        topologyKey: kubernetes.io/hostname
```

```shell
kubectl apply -f pod-required-affinity-demo.yaml
kubectl get pods -o wide
# Result: both pods land on the same node
#   NAME        READY  STATUS   RESTARTS  AGE  IP          NODE
#   pod-first   1/1    Running  0         11s  10.244.1.6  node1
#   pod-second  1/1    Running  0         11s  10.244.1.5  node1
```

podAntiAffinity

A typical pod-anti-affinity scenario: application service A and database service B should not run on the same node.

```shell
kubectl delete -f pod-required-affinity-demo.yaml
cp pod-required-affinity-demo.yaml pod-required-anti-affinity-demo.yaml
vi pod-required-anti-affinity-demo.yaml
```

Contents of pod-required-anti-affinity-demo.yaml:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  labels:
    app: backend
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","sleep 3600"]
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["myapp"]}
        topologyKey: kubernetes.io/hostname
```

```shell
kubectl apply -f pod-required-anti-affinity-demo.yaml
kubectl get pods -o wide
# Result: the two pods land on different nodes
#   NAME        READY  STATUS   RESTARTS  AGE  IP          NODE
#   pod-first   1/1    Running  0         5s   10.244.2.4  node2
#   pod-second  1/1    Running  0         5s   10.244.1.7  node1
kubectl delete -f pod-required-anti-affinity-demo.yaml

# If every node carries the label named by a hard anti-affinity rule's
# topologyKey, the second pod cannot be scheduled at all, as with zone=foo below.
# Put the same zone=foo label on both nodes:
kubectl label nodes node2 zone=foo
kubectl label nodes node1 zone=foo
vi pod-required-anti-affinity-demo.yaml
```

The manifest is the same as above except that the topologyKey changes from kubernetes.io/hostname to the node label:

```yaml
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["myapp"]}
        topologyKey: zone
```

```shell
kubectl apply -f pod-required-anti-affinity-demo.yaml
kubectl get pods -o wide
# Result: pod-second cannot start, since both nodes are in the same zone topology domain
#   NAME        READY  STATUS   RESTARTS  AGE  IP      NODE
#   pod-first   1/1    Running  0         12s  10.244.1.8  node1
#   pod-second  0/1    Pending  0         12s  <none>  <none>
kubectl delete -f pod-required-anti-affinity-demo.yaml
```
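Hard anti-affinity leaves a pod Pending when no compliant node exists, as just shown. A softer variant (a sketch, not part of the original walkthrough) uses preferredDuringSchedulingIgnoredDuringExecution, so replicas spread across nodes when possible but still schedule when they cannot:

```yaml
# Hypothetical soft anti-affinity: prefer, but do not require, that pods
# labeled app=myapp avoid sharing a node. Note the extra podAffinityTerm
# wrapper and weight that the preferred form requires.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["myapp"]}
        topologyKey: kubernetes.io/hostname
```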

Taint and Toleration Scheduling

Taints and tolerations let you mark a node so that no pod is scheduled onto it by default. A pod that explicitly specifies a matching toleration, however, can still be scheduled onto the tainted node.

```shell
# Add a taint to a node from the command line:
kubectl taint nodes node1 key=value:NoSchedule
```

In a toleration, operator can be:
Equal: the toleration's key and value must equal the taint's; this is the default
Exists: only the key must exist; no value needs to be defined

A taint's effect defines how pods are repelled:
NoSchedule: affects only scheduling; existing pods on the node are untouched
NoExecute: affects both scheduling and pods already running on the node; pods that do not tolerate the taint are evicted
PreferNoSchedule: the scheduler tries to avoid the node but may still use it
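For NoExecute, a toleration may additionally set tolerationSeconds, which bounds how long an already-running pod remains on the node after the taint appears. A sketch (the node-type key and dev value follow this article's later examples):

```yaml
# Hypothetical: tolerate the NoExecute taint, but only for 300 seconds;
# after that window the pod is evicted anyway.
tolerations:
- key: "node-type"
  operator: "Equal"
  value: "dev"
  effect: "NoExecute"
  tolerationSeconds: 300
```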

```shell
# Inspect a node's taints
kubectl describe node master
kubectl get pods -n kube-system
kubectl describe pods kube-apiserver-master -n kube-system
# Taint node1
kubectl taint node node1 node-type=production:NoSchedule
vi deploy-demo.yaml
```

Contents of deploy-demo.yaml:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
```

```shell
kubectl apply -f deploy-demo.yaml
kubectl get pods -o wide
# Result: both pods avoid the tainted node1
#   NAME                           READY  STATUS   RESTARTS  AGE  IP          NODE
#   myapp-deploy-69b47bc96d-cwt79  1/1    Running  0         5s   10.244.2.6  node2
#   myapp-deploy-69b47bc96d-qqrwq  1/1    Running  0         5s   10.244.2.5  node2

# Taint node2 as well
kubectl taint node node2 node-type=dev:NoExecute
# NoExecute evicts pods that do not tolerate the taint. Both nodes are now
# tainted and the pods define no tolerations, so no node can run them.
kubectl get pods -o wide
# Result:
#   NAME                           READY  STATUS   RESTARTS  AGE  IP      NODE
#   myapp-deploy-69b47bc96d-psl8f  0/1    Pending  0         14s  <none>  <none>
#   myapp-deploy-69b47bc96d-q296k  0/1    Pending  0         14s  <none>  <none>

# Define a toleration
vi deploy-demo.yaml
```

The Deployment is the same as above except that the image becomes ikubernetes/myapp:v2 and the pod template's spec gains a tolerations list alongside containers:

```yaml
      tolerations:
      - key: "node-type"
        operator: "Equal"
        value: "production"
        effect: "NoSchedule"
```

```shell
kubectl apply -f deploy-demo.yaml
# The pods tolerate node1's taint and can run on node1
kubectl get pods -o wide
#   NAME                           READY  STATUS   RESTARTS  AGE  IP           NODE
#   myapp-deploy-65cc47f858-tmpnz  1/1    Running  0         10s  10.244.1.10  node1
#   myapp-deploy-65cc47f858-xnklh  1/1    Running  0         13s  10.244.1.9   node1
```

Next, define a toleration that matches any taint whose key is node-type and whose effect is NoSchedule. Edit deploy-demo.yaml again, changing only the tolerations:

```yaml
      tolerations:
      - key: "node-type"
        operator: "Exists"
        value: ""
        effect: "NoSchedule"
```

```shell
kubectl apply -f deploy-demo.yaml
kubectl get pods -o wide
#   NAME                           READY  STATUS   RESTARTS  AGE  IP           NODE
#   myapp-deploy-559f559bcc-6jfqq  1/1    Running  0         10s  10.244.1.11  node1
#   myapp-deploy-559f559bcc-rlwp2  1/1    Running  0         9s   10.244.1.12  node1
```

Finally, define a toleration that matches any taint whose key is node-type, with an empty effect so that all effects are tolerated:

```yaml
      tolerations:
      - key: "node-type"
        operator: "Exists"
        value: ""
        effect: ""
```

```shell
kubectl apply -f deploy-demo.yaml
# The two pods are now spread across both nodes
kubectl get pods -o wide
#   NAME                           READY  STATUS   RESTARTS  AGE  IP           NODE
#   myapp-deploy-5d9c6985f5-hn4k2  1/1    Running  0         2m   10.244.1.13  node1
#   myapp-deploy-5d9c6985f5-lkf9q  1/1    Running  0         2m   10.244.2.7   node2
```