A ReplicaSet creates multiple Pod replicas from a single Pod template; apart from their names and IP addresses, the replicas are identical. If the Pod template references a PVC, every Pod the ReplicaSet creates shares that same PVC and PV.
Clustered applications often require each instance to have a stable, unique network identity. One workaround is to create a separate Service per instance (a Service's cluster IP is stable), but a Pod has no way of knowing which Service is its own, so it cannot use that IP to register itself with its peers.
A StatefulSet together with its governing headless Service solves this: each Pod gets its own stable DNS entry. For Pod a-0 of a StatefulSet governed by headless Service foo in namespace default:

a-0.foo.default.svc.cluster.local   # A record of the individual Pod
foo.default.svc.cluster.local       # an SRV lookup on the Service name lists all Pods
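The naming scheme above is purely mechanical; a minimal sketch (the helper name is made up, not a Kubernetes API) of how a StatefulSet Pod's stable DNS name is composed from its ordinal, the governing headless Service, and the namespace:

```python
def pod_fqdn(statefulset_name: str, ordinal: int,
             service_name: str, namespace: str = "default",
             cluster_domain: str = "cluster.local") -> str:
    """Per-Pod A record: <set>-<ordinal>.<svc>.<ns>.svc.<cluster-domain>."""
    return f"{statefulset_name}-{ordinal}.{service_name}.{namespace}.svc.{cluster_domain}"

print(pod_fqdn("a", 0, "foo"))   # a-0.foo.default.svc.cluster.local
```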
Volume claim templates
A StatefulSet can declare one or more volume claim templates. Before creating each Pod, the StatefulSet creates the corresponding PersistentVolumeClaims and binds them to that Pod instance.
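The PVCs produced this way follow a fixed naming convention, <templateName>-<statefulSetName>-<ordinal>, which is why the claims later in this post are called data-kubia-0 and data-kubia-1. A trivial sketch (the helper is illustrative only):

```python
def pvc_name(template_name: str, statefulset_name: str, ordinal: int) -> str:
    """PVC created from a volumeClaimTemplate: <template>-<set>-<ordinal>."""
    return f"{template_name}-{statefulset_name}-{ordinal}"

print(pvc_name("data", "kubia", 0))   # data-kubia-0
```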
① Container preparation
docker.io/luksa/kubia-pet
② Manually create the PersistentVolumes
apiVersion: v1
kind: List
items:
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: pv-a                              # PV names: pv-a, pv-b, pv-c
  spec:
    capacity:
      storage: 1Mi                          # PV size
    accessModes:
    - ReadWriteOnce
    persistentVolumeReclaimPolicy: Recycle  # after the claim is released, the space is recycled for reuse
    nfs:                                    # backed by an NFS export; see https://www.cnblogs.com/lb477/p/14713883.html
      server: 192.168.11.210
      path: "/nfs/pv-a"
...
③ Create the governing Service
apiVersion: v1
kind: Service
metadata:
  name: kubia
spec:
  clusterIP: None   # a StatefulSet's governing Service must be headless
  selector:
    app: kubia
  ports:
  - name: http
    port: 80
④ Create the StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kubia
spec:
  selector:
    matchLabels:
      app: kubia
  serviceName: kubia
  replicas: 2
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: luksa/kubia-pet
        ports:
        - name: http
          containerPort: 8080
        volumeMounts:
        - name: data
          mountPath: /var/data    # the container mounts the claimed volume at this path
  volumeClaimTemplates:           # template for the PVCs; one PVC is created and bound per Pod
  - metadata:
      name: data
    spec:
      resources:
        requests:
          storage: 1Mi
      accessModes:
      - ReadWriteOnce
⑤ Inspect the result
$ kubectl get pod -w
NAME READY STATUS RESTARTS AGE
kubia-0 0/1 ContainerCreating 0 35s
kubia-0 1/1 Running 0 53s
kubia-1 0/1 Pending 0 0s
kubia-1 0/1 ContainerCreating 0 3s
kubia-1 1/1 Running 0 20s
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-a 1Mi RWO Recycle Bound default/data-kubia-0 18m
pv-b 1Mi RWO Recycle Bound default/data-kubia-1 18m
pv-c 1Mi RWO Recycle Available 18m
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-kubia-0 Bound pv-a 1Mi RWO 2m3s
data-kubia-1 Bound pv-b 1Mi RWO 70s
Individual Pods can be reached through the API server's proxy endpoint:

<apiServerHost>:<port>/api/v1/namespaces/default/pods/kubia-0/proxy/<path>
$ kubectl proxy
Starting to serve on 127.0.0.1:8001
$ curl localhost:8001/api/v1/namespaces/default/pods/kubia-0/proxy/
You've hit kubia-0
Data stored on this pod: No data posted yet
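The proxy URL used above can be assembled programmatically; a minimal sketch assuming `kubectl proxy` is listening on its default localhost:8001 (the helper name is made up):

```python
def pod_proxy_url(pod: str, path: str = "/", namespace: str = "default",
                  api_server: str = "localhost:8001") -> str:
    """Build the API-server proxy URL for a Pod, as exposed by `kubectl proxy`."""
    return f"http://{api_server}/api/v1/namespaces/{namespace}/pods/{pod}/proxy{path}"

print(pod_proxy_url("kubia-0"))
# http://localhost:8001/api/v1/namespaces/default/pods/kubia-0/proxy/
```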
Testing
# 1. Each instance keeps its own independent state
$ curl -X POST -d "Hello kubia-0" localhost:8001/api/v1/namespaces/default/pods/kubia-0/proxy/
Data stored on pod kubia-0
$ curl localhost:8001/api/v1/namespaces/default/pods/kubia-0/proxy/
You've hit kubia-0
Data stored on this pod: Hello kubia-0
$ curl localhost:8001/api/v1/namespaces/default/pods/kubia-1/proxy/
You've hit kubia-1
Data stored on this pod: No data posted yet
# 2. A deleted Pod is replaced by an identical one (the new Pod may be scheduled to another node)
$ kubectl delete pod kubia-0
pod "kubia-0" deleted
$ kubectl get pod
NAME      READY   STATUS              RESTARTS   AGE
kubia-0   0/1     ContainerCreating   0          1s
kubia-1   1/1     Running             0          106m
$ curl localhost:8001/api/v1/namespaces/default/pods/kubia-0/proxy/
You've hit kubia-0
Data stored on this pod: Hello kubia-0
Exposing the StatefulSet's Pods
# A regular ClusterIP Service, accessible only from inside the cluster
apiVersion: v1
kind: Service
metadata:
  name: kubia-public
spec:
  selector:
    app: kubia
  ports:
  - port: 80
    targetPort: 8080
Requests through this Service land on a randomly chosen Pod:

$ curl localhost:8001/api/v1/namespaces/default/services/kubia-public/proxy/
You've hit kubia-1 / 0
An SRV record points to the hostname and port of a server providing a given service.
Listing all Pods of a StatefulSet
# Run a throwaway Pod named srvlookup: attach a console to it and delete it as soon as it terminates
$ kubectl run -it srvlookup --image=tutum/dnsutils --rm --restart=Never -- dig SRV kubia.default.svc.cluster.local
;; ANSWER SECTION:
kubia.default.svc.cluster.local. 30 IN SRV 0 50 80 kubia-0.kubia.default.svc.cluster.local.
kubia.default.svc.cluster.local. 30 IN SRV 0 50 80 kubia-1.kubia.default.svc.cluster.local.
;; ADDITIONAL SECTION:
kubia-0.kubia.default.svc.cluster.local. 30 IN A 10.244.0.15
kubia-1.kubia.default.svc.cluster.local. 30 IN A 10.244.0.16
# The SRV records are returned in random order
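An application inside the cluster can use such SRV answers to discover all of its peers. A minimal parsing sketch over dig-style answer lines (it assumes the whitespace-separated format shown above and is not a real DNS client):

```python
def parse_srv_lines(lines):
    """Extract (target-host, port) pairs from dig-style SRV answer lines."""
    peers = []
    for line in lines:
        # e.g. "kubia.default.svc.cluster.local. 30 IN SRV 0 50 80 kubia-0.kubia.default.svc.cluster.local."
        fields = line.split()
        if len(fields) >= 8 and fields[3] == "SRV":
            port = int(fields[6])                 # SRV fields: priority weight port target
            target = fields[7].rstrip(".")        # strip the trailing dot of the FQDN
            peers.append((target, port))
    return peers

answers = [
    "kubia.default.svc.cluster.local. 30 IN SRV 0 50 80 kubia-0.kubia.default.svc.cluster.local.",
    "kubia.default.svc.cluster.local. 30 IN SRV 0 50 80 kubia-1.kubia.default.svc.cluster.local.",
]
print(parse_srv_lines(answers))
# [('kubia-0.kubia.default.svc.cluster.local', 80), ('kubia-1.kubia.default.svc.cluster.local', 80)]
```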
Making a node return the data of all cluster nodes
A node's network disconnection can be simulated by shutting down its eth0 network interface (e.g. `ip link set eth0 down` on the node).
A Pod stuck on an unreachable node can be deleted forcibly (use with care: the container may still be running on the node):

kubectl delete pod kubia-0 --force --grace-period 0
Original article: http://www.cnblogs.com/lb477/p/14898850.html