Contents
Pod affinity and anti-affinity
Pod affinity
Pod anti-affinity
Pod states and restart policy
Pod state
Pod restart policy
This article covers Pod affinity and anti-affinity, Pod states, and Pod restart policies.
Pod affinity and anti-affinity
Inter-pod scheduling rules come in two flavors:
1. podAffinity: pods that work closely together prefer to run in the same topology domain. Co-locating e.g. tomcat and nginx makes resource usage more efficient.
2. podAntiAffinity: two identical copies of a service (so one failure does not hurt disaster tolerance), or two services that would interfere with each other, prefer not to share a topology domain.
Deciding what counts as "the same domain" is therefore the key question; it is answered by node labels, as the quick check below shows.
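A topology domain is simply the set of nodes that share a value for the label named in topologyKey. A minimal way to list the candidate labels, assuming the node names ws-k8s-node1/ws-k8s-node2 used in the demos below:
#Any node label can serve as a topologyKey
kubectl get nodes --show-labels
#The hostname label that the first demo uses as its topology key
kubectl get node ws-k8s-node1 -o jsonpath='{.metadata.labels.kubernetes\.io/hostname}'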
#View the built-in help
kubectl explain pods.spec.affinity.podAffinity
preferredDuringSchedulingIgnoredDuringExecution #soft affinity: co-locate if possible
requiredDuringSchedulingIgnoredDuringExecution #hard affinity: must be co-located
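The demos below only exercise the hard (required) form. For reference, a minimal sketch of the soft form, where weight (1-100) ranks competing preferences; the pod name is illustrative and the user=ws label matches the demos below:

apiVersion: v1
kind: Pod
metadata:
  name: soft-affinity-demo        # illustrative name
spec:
  containers:
  - name: app
    image: docker.io/library/nginx
    imagePullPolicy: IfNotPresent
  affinity:
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50                # 1-100; higher weights win when preferences conflict
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - {key: user, operator: In, values: ["ws"]}
          topologyKey: kubernetes.io/hostname

If no node satisfies the term, the pod is still scheduled; the preference only influences ranking.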
Pod affinity
#Hard affinity
kubectl explain pods.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution
labelSelector <Object>            #select the group of pods to be co-located with, by label
namespaceSelector <Object>        #select the namespaces to search, by label
namespaces <[]string>             #explicit list of namespaces to search
topologyKey <string> -required-   #topology key: the node label that defines "the same location"

cat > qinhe-pod1.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  name: qinhe1
  namespace: default
  labels:
    user: ws
spec:
  containers:
  - name: qinhe1
    image: docker.io/library/nginx
    imagePullPolicy: IfNotPresent
EOF
kubectl apply -f qinhe-pod1.yaml #create an initial pod for the following pods to reference

echo "
apiVersion: v1
kind: Pod
metadata:
  name: qinhe2
  labels:
    app: app1
spec:
  containers:
  - name: qinhe2
    image: docker.io/library/nginx
    imagePullPolicy: IfNotPresent
  affinity:
    podAffinity:                       # pod affinity
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:                 # filter by label
          matchExpressions:            # match with an expression
          - {key: user, operator: In, values: [\"ws\"]}
        topologyKey: kubernetes.io/hostname
#Nodes with the same value of the kubernetes.io/hostname label are treated
#as the same location, i.e. locations are distinguished by hostname
" > qinhe-pod2.yaml
kubectl apply -f qinhe-pod2.yaml
kubectl get pods -owide #node1 and node2 have different hostnames, so qinhe2 can only go to node1
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
qinhe1 1/1 Running 0 68s 10.10.179.9 ws-k8s-node1 <none> <none>
qinhe2 1/1 Running 0 21s 10.10.179.10 ws-k8s-node1 <none> <none>

#Change the topologyKey
...topologyKey: beta.kubernetes.io/arch
... #node1 and node2 have identical values for this label
kubectl delete -f qinhe-pod2.yaml
kubectl apply -f qinhe-pod2.yaml
kubectl get pods -owide #this time qinhe2 has been scheduled to node2
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
qinhe1 1/1 Running 0 4m55s 10.10.179.9 ws-k8s-node1 <none> <none>
qinhe2 1/1 Running 0 15s 10.10.234.68 ws-k8s-node2 <none> <none>

#Clean up
kubectl delete -f qinhe-pod1.yaml
kubectl delete -f qinhe-pod2.yaml
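The explain output above also lists namespaces and namespaceSelector, which the demo never uses. A hedged sketch of restricting the match to pods in a single namespace (the backend namespace is illustrative):

affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - {key: user, operator: In, values: ["ws"]}
      namespaces: ["backend"]     # illustrative: only pods in "backend" are matched
      topologyKey: kubernetes.io/hostname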
Pod anti-affinity
kubectl explain pods.spec.affinity.podAntiAffinity
preferredDuringSchedulingIgnoredDuringExecution <[]Object>
requiredDuringSchedulingIgnoredDuringExecution <[]Object>

#Hard anti-affinity
#Create qinhe-pod3.yaml
cat > qinhe-pod3.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  name: qinhe3
  namespace: default
  labels:
    user: ws
spec:
  containers:
  - name: qinhe3
    image: docker.io/library/nginx
    imagePullPolicy: IfNotPresent
EOF

#Create qinhe-pod4.yaml
echo "
apiVersion: v1
kind: Pod
metadata:
  name: qinhe4
  labels:
    app: app1
spec:
  containers:
  - name: qinhe4
    image: docker.io/library/nginx
    imagePullPolicy: IfNotPresent
  affinity:
    podAntiAffinity:                   # pod anti-affinity
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:                 # filter by label
          matchExpressions:            # match with an expression
          - {key: user, operator: In, values: [\"ws\"]}  # expression user=ws
        topologyKey: kubernetes.io/hostname  # hostname distinguishes locations
" > qinhe-pod4.yaml
kubectl apply -f qinhe-pod3.yaml
kubectl apply -f qinhe-pod4.yaml
#The two pods land on different nodes
kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
qinhe3 1/1 Running 0 9s 10.10.179.11 ws-k8s-node1 <none> <none>
qinhe4 1/1 Running 0 8s 10.10.234.70 ws-k8s-node2 <none> <none>

#Change the topologyKey
In qinhe-pod4.yaml, change the key to topologyKey: user
kubectl label nodes ws-k8s-node1 user=xhy
kubectl label nodes ws-k8s-node2 user=xhy
#Now pod4 treats node1 and node2 as the same location, because both nodes carry the same value for the user label
kubectl delete -f qinhe-pod4.yaml
kubectl apply -f qinhe-pod4.yaml
#qinhe4 goes straight to Pending
kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
qinhe3 1/1 Running 0 9m59s 10.10.179.12 ws-k8s-node1 <none> <none>
qinhe4 0/1 Pending 0 2s <none> <none> <none> <none>
#Check the events (kubectl describe pod qinhe4)
Warning FailedScheduling 74s default-scheduler 0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling..

#Soft pod anti-affinity works the same way as soft node affinity; see the sketch after the cleanup

#Clean up
kubectl label nodes ws-k8s-node1 user-
kubectl label nodes ws-k8s-node2 user-
kubectl delete -f qinhe-pod3.yaml
kubectl delete -f qinhe-pod4.yaml
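As noted above, anti-affinity also has a soft (preferred) form. A minimal sketch that spreads pods across hostnames when possible but still schedules them when it cannot (the user=ws label matches the demo above):

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100                 # strongest preference
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - {key: user, operator: In, values: ["ws"]}
        topologyKey: kubernetes.io/hostname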
Pod states and restart policy
Reference: Pod Lifecycle | Kubernetes
Pod state
1. Pending
(1) The pod is being created: storage, networking, image downloads, etc. are still in progress
(2) A scheduling condition is not met, e.g. a hard affinity rule or a taint
2. Failed
At least one container terminated in failure, i.e. exited with a non-zero status
3. Unknown
The apiserver cannot reach the kubelet on the node, usually a network problem
4. Error
5. Succeeded
All containers in the pod terminated successfully
6. Unschedulable
The pod cannot be scheduled
7. PodScheduled
The pod is in the process of being scheduled
8. Initialized
Pod initialization has completed
9. ImagePullBackOff
The container image could not be pulled
10. Evicted
The pod was evicted because the node ran out of resources
11. CrashLoopBackOff
The container started but keeps exiting abnormally
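A hedged way to inspect these values on a live pod (phase, conditions, and per-container state are separate status fields; qinhe1 is the pod from the earlier demo):
#Top-level phase: Pending/Running/Succeeded/Failed/Unknown
kubectl get pod qinhe1 -o jsonpath='{.status.phase}'
#Conditions such as PodScheduled and Initialized
kubectl get pod qinhe1 -o jsonpath='{.status.conditions}'
#Container-level reasons such as ImagePullBackOff or CrashLoopBackOff show up in describe output
kubectl describe pod qinhe1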
Pod restart policy
When a container exits abnormally, the pod's restartPolicy field controls whether and how the kubelet restarts its containers.
#View the built-in help
kubectl explain pod.spec.restartPolicy
KIND: Pod
VERSION: v1
FIELD: restartPolicy <string>
DESCRIPTION:
    Restart policy for all containers within the pod. One of Always, OnFailure,
    Never. Default to Always. More info:
    https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy
    Possible enum values:
    - `"Always"`    #restart the container whenever it exits, regardless of exit code
    - `"Never"`     #never restart the container
    - `"OnFailure"` #restart only when the container fails, i.e. exits with a non-zero code

#Test the Always policy: create always.yaml
cat > always.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  name: always-pod
  namespace: default
spec:
  restartPolicy: Always
  containers:
  - name: test-pod
    image: docker.io/library/tomcat
    imagePullPolicy: IfNotPresent
EOF
kubectl apply -f always.yaml
kubectl get po #check the status
NAME READY STATUS RESTARTS AGE
always-pod 1/1 Running 0 22s
#Exec into the container and shut it down
kubectl exec -it always-pod -- /bin/bash
shutdown.sh
#Check again: the restart counter of always-pod is now 1
kubectl get po
NAME READY STATUS RESTARTS AGE
always-pod 1/1 Running 1 (5s ago) 70s

#Test the Never policy: create never.yaml
cat > never.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  name: never-pod
  namespace: default
spec:
  restartPolicy: Never
  containers:
  - name: test-pod
    image: docker.io/library/tomcat
    imagePullPolicy: IfNotPresent
EOF
kubectl apply -f never.yaml
kubectl exec -it never-pod -- /bin/bash
shutdown.sh
#No restart; the pod goes to Completed
kubectl get pods | grep never
never-pod 0/1 Completed 0 73s

#Test the OnFailure policy: create onfailure.yaml
cat > onfailure.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  name: onfailure-pod
  namespace: default
spec:
  restartPolicy: OnFailure
  containers:
  - name: test-pod
    image: docker.io/library/tomcat
    imagePullPolicy: IfNotPresent
EOF
kubectl apply -f onfailure.yaml
#Exec in and force an abnormal exit
kubectl exec -it onfailure-pod -- /bin/bash
kill 1
#Check the pod: it has been restarted
kubectl get po | grep onfailure
onfailure-pod 1/1 Running 1 (43s ago) 2m11s
#Exec in again and exit cleanly
kubectl exec -it onfailure-pod -- /bin/bash
shutdown.sh
#Check the pod: no restart this time, it enters Completed
kubectl get po | grep onfailure
onfailure-pod 0/1 Completed 1 3m58s

#Clean up
kubectl delete -f always.yaml
kubectl delete -f never.yaml
kubectl delete -f onfailure.yaml
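OnFailure keys off the container's exit code, which you can confirm directly. A hedged check, to be run before the cleanup above deletes the pod (the jsonpath targets the standard containerStatuses field):
#Exit code recorded for the previous termination of the first container
kubectl get pod onfailure-pod -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'
#kill 1 yields a non-zero code, so OnFailure restarts; shutdown.sh exits 0, so it does not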