Preface
Note: for cluster setup, see:
Kubernetes (k8s) cluster setup — complete, pitfall-free, no VPN required
Controllers official documentation: https://kubernetes.io/docs/concepts/workloads/controllers/
I. ReplicationController (RC)
1. Official definition
Official docs: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/
Quoted from the docs: A ReplicationController ensures that a specified number of pod replicas are running at any one time. In other words, a ReplicationController makes sure that a pod or a homogeneous set of pods is always up and available.
A ReplicationController declares a desired state: the replica count of a given kind of Pod matches an expected value at all times. An RC definition therefore contains the following parts:
- The expected number of Pod replicas (replicas)
- A Label Selector used to select the target Pods
- A Pod template (template) used to create new Pods when the replica count falls below the expected number
In other words, the RC provides high availability for Pods in the cluster and reduces the manual operations work of a traditional IT environment.
2. An example
(1) Create a file named nginx_replication.yaml
kind: the type of object to create
spec.selector: the label of the Pods to manage; here every Pod carrying the label app: nginx is managed by this RC
spec.replicas: the number of replicas that the Pods managed by this RC should run; the replica count is always kept at this number
spec.template: the template used to define the Pods, e.g. the Pod name, its labels, and the application running inside the Pod
By changing the image version in the RC's Pod template, the Pods can be upgraded.
Run kubectl apply -f nginx_replication.yaml; k8s then creates 3 Pods across the available Nodes, each carrying the label app: nginx and each running an nginx container.
If a Pod fails, the Controller Manager detects it promptly and creates a new Pod according to the RC definition.
Scaling: kubectl scale rc nginx --replicas=5
apiVersion: v1
kind: ReplicationController
metadata:
name: nginx
spec:
replicas: 3
selector:
app: nginx
template:
metadata:
name: nginx
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
# Edit the yaml file
[root@m ~]# vi nginx_replication.yaml
# Create the pods
[root@m ~]# kubectl apply -f nginx_replication.yaml
replicationcontroller/nginx created
# List the pods
[root@m ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-2fw2t 0/1 ContainerCreating 0 15s
nginx-hqcwh 0/1 ContainerCreating 0 15s
nginx-sks62 0/1 ContainerCreating 0 15s
# Show detailed information
[root@m ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-2fw2t 1/1 Running 0 75s 192.168.80.196 w2 <none> <none>
nginx-hqcwh 1/1 Running 0 75s 192.168.190.68 w1 <none> <none>
nginx-sks62 1/1 Running 0 75s 192.168.190.67 w1 <none> <none>
# Delete a specific pod (a replacement starts automatically; the replica count is always kept constant, even after a crash)
kubectl delete pods nginx-2fw2t
kubectl get pods
# Scale out to 5 replicas
kubectl scale rc nginx --replicas=5
kubectl get pods
nginx-8fctt 0/1 ContainerCreating 0 2s
nginx-9pgwk 0/1 ContainerCreating 0 2s
nginx-hksg8 1/1 Running 0 6m50s
nginx-q7bw5 1/1 Running 0 6m50s
nginx-wzqkf 1/1 Running 0 99s
# To delete the pods for good, delete the RC itself, e.g. via its yaml file (deleting individual pods only triggers recreation)
kubectl delete -f nginx_replication.yaml
3. Summary
A ReplicationController manages the Pods created from its template through the selector; the key-value pairs in the selector must match the labels in the template, otherwise no Pods are matched.
Scaling up and down is supported, and the replica count is always maintained.
II. ReplicaSet (RS)
1. Official definition
Official docs: https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/
Quoted from the docs: A ReplicaSet’s purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.
In Kubernetes v1.2, the RC concept was upgraded into ReplicaSet, officially described as the "next-generation RC".
A ReplicaSet is not fundamentally different from an RC, and most kubectl commands that work on RCs also work on RSs.
The only difference is that RS supports set-based label selectors, while RC only supports equality-based label selectors, which makes ReplicaSet more powerful.
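To make the difference concrete, the two selector styles can be sketched side by side (the tier and environment labels here are illustrative, not taken from the examples in this article):

```yaml
# Equality-based selector (the only style an RC supports):
selector:
  app: nginx                # matches pods whose label app equals nginx

# Set-based selector (supported by ReplicaSet and Deployment):
selector:
  matchLabels:
    app: nginx
  matchExpressions:
    - {key: tier, operator: In, values: [frontend, backend]}
    - {key: environment, operator: NotIn, values: [dev]}
```

Operators such as In, NotIn, and Exists let one selector match whole sets of label values, which an equality-based selector cannot express.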
2. An example
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: frontend
spec:
  selector:
    matchLabels:
      tier: frontend
    matchExpressions:
      - {key: tier, operator: In, values: [frontend]}
  template:
    ...
The operations are the same as for ReplicationController (RC). Note: we rarely use a ReplicaSet on its own in practice; it is mainly used by the higher-level Deployment object, and together they form a complete orchestration mechanism for creating, deleting, and updating Pods. When using a Deployment there is no need to care about how it creates and maintains the ReplicaSet, as all of that happens automatically. It also spares us incompatibilities with other mechanisms (for example, ReplicaSet does not support rolling-update but Deployment does).
III. Deployment (the most commonly used)
1. Official definition
Official docs: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
A Deployment provides declarative updates for Pods and ReplicaSets.
You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.
The biggest upgrade of Deployment over RC is that we can check the progress of the Pod "deployment" at any time.
2. An example
Create a Deployment object to generate the corresponding ReplicaSet and carry out the creation of the Pod replicas, then check the Deployment status to see whether the rollout has completed (whether the number of Pod replicas has reached the expected value).
The pod count is always kept at 3, and the rollout progress can be checked at any time!
(1) Create the nginx_deployment.yaml file
apiVersion: apps/v1 # API version
kind: Deployment # object type
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3 # replica count
  selector: # matches the pods' label
    matchLabels:
      app: nginx
  template: # pod template
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
# Create the pods from the nginx_deployment.yaml file
[root@m ~]# kubectl apply -f nginx_deployment.yaml
deployment.apps/nginx-deployment created
# List the pods
[root@m ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-6dd86d77d-6q66c 1/1 Running 0 75s 192.168.190.70 w1 <none> <none>
nginx-deployment-6dd86d77d-f98jt 1/1 Running 0 75s 192.168.80.199 w2 <none> <none>
nginx-deployment-6dd86d77d-wcxlf 1/1 Running 0 75s 192.168.80.198 w2 <none> <none>
# Check the deployment
[root@m ~]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 0/3 3 0 18s
# Check the ReplicaSet
[root@m ~]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-6dd86d77d 3 3 0 23s
[root@m ~]# kubectl get deployment -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
nginx-deployment 0/3 3 0 29s nginx nginx:1.7.9 app=nginx
(2) Rolling version update
# The current nginx version
[root@m ~]# kubectl get deployment -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
nginx-deployment 3/3 3 3 2m36s nginx nginx:1.7.9 app=nginx
# Update the nginx image version
[root@m ~]# kubectl set image deployment nginx-deployment nginx=nginx:1.9.1
deployment.extensions/nginx-deployment image updated
# Check the updated version
[root@m ~]# kubectl get deployment -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
nginx-deployment 3/3 1 3 3m21s nginx nginx:1.9.1 app=nginx
# The old ReplicaSet has been scaled down to 0 replicas; the new version is up and running
[root@m ~]# kubectl get rs -o wide
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
nginx-deployment-6dd86d77d 0 0 0 4m41s nginx nginx:1.7.9 app=nginx,pod-template-hash=6dd86d77d
nginx-deployment-784b7cc96d 3 3 3 96s nginx nginx:1.9.1 app=nginx,pod-template-hash=784b7cc96d
3. Notes
As a rule, one Deployment manages one pod template, that is, one application.
IV. Labels and Selectors
1. Official definition
A label, as the name suggests, tags a resource and consists of key-value pairs.
Official docs: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
Quoted from the docs: Labels are key/value pairs that are attached to objects, such as pods.
2. An example
The Deployment below, named nginx-deployment, carries one label with key app and value nginx, and so do its pods.
Pods carrying the same label can be handed over to a selector to manage.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector: # matches pods that carry the same label
    matchLabels:
      app: nginx
  template: # defines the pod template
    metadata:
      labels:
        app: nginx # the pod's label: key app, value nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
# Show the pods with their labels
[root@m ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx-deployment-784b7cc96d-25js4 1/1 Running 0 4m39s app=nginx,pod-template-hash=784b7cc96d
nginx-deployment-784b7cc96d-792lj 1/1 Running 0 5m24s app=nginx,pod-template-hash=784b7cc96d
nginx-deployment-784b7cc96d-h5x2k 1/1 Running 0 3m54s app=nginx,pod-template-hash=784b7cc96d
V. Namespace
1. What is a Namespace
[root@m ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-784b7cc96d-25js4 1/1 Running 0 8m19s
nginx-deployment-784b7cc96d-792lj 1/1 Running 0 9m4s
nginx-deployment-784b7cc96d-h5x2k 1/1 Running 0 7m34s
[root@m ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-f67d5b96f-7p9cg 1/1 Running 2 17h
calico-node-6pvpg 1/1 Running 0 141m
calico-node-m9d5l 1/1 Running 0 141m
calico-node-pvvt8 1/1 Running 2 17h
coredns-fb8b8dccf-bbvtp 1/1 Running 2 17h
coredns-fb8b8dccf-hhfb5 1/1 Running 2 17h
etcd-m 1/1 Running 2 17h
kube-apiserver-m 1/1 Running 2 17h
kube-controller-manager-m 1/1 Running 2 17h
kube-proxy-5hmwn 1/1 Running 0 141m
kube-proxy-bv4z4 1/1 Running 0 141m
kube-proxy-rn8sq 1/1 Running 2 17h
kube-scheduler-m 1/1 Running 2 17h
The two pod listings above differ because the pods belong to different Namespaces.
# Check the current namespaces
[root@m ~]# kubectl get namespaces
NAME STATUS AGE
default Active 17h
kube-node-lease Active 17h
kube-public Active 17h
kube-system Active 17h
Put simply, namespaces isolate resources such as Pods, Services, and Deployments from each other. A namespace can be specified on the command line with -n; if omitted, the default namespace is used: default.
2. Create a namespace
Create myns-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name: myns
# Create the namespace
kubectl apply -f myns-namespace.yaml
# List the namespaces (kubectl get ns also works)
[root@m ~]# kubectl get namespaces
NAME STATUS AGE
default Active 17h
kube-node-lease Active 17h
kube-public Active 17h
kube-system Active 17h
myns Active 11s
3. Create a pod in a specific namespace
# For example, create a pod that belongs to the myns namespace
vi nginx-pod.yaml
kubectl apply -f nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: nginx-pod
namespace: myns
spec:
containers:
- name: nginx-container
image: nginx
ports:
- containerPort: 80
# Check the pods and resources in the myns namespace
# By default the default namespace is queried
kubectl get pods
# Specify the namespace
kubectl get pods -n myns
kubectl get all -n myns
kubectl get pods --all-namespaces # list the pods across all namespaces
VI. Network
1. A recap of docker networking
(1) Single-host docker
On a single docker host, containers communicate with each other through a network bridge.
See: docker networking explained; custom docker networks
(2) docker-swarm multi-host cluster
How does a docker-swarm multi-host cluster communicate? Through an overlay network, which carries the packets across the network between hosts.
(3) Pod networking in k8s
k8s raises the networking complexity yet another level.
We know that the Pod is the smallest unit of operation in K8S. First question: can multiple containers inside the same Pod communicate with each other?
The passage below from the docs shows that containers in the same pod share the network IP address and ports, so communication is clearly no problem:
Each Pod is assigned a unique IP address. Every container in a Pod shares the network namespace, including the IP address and network ports.
What about communicating by container name? Then all the containers of the pod must be joined into the network of one shared container, which we call the pod's pause container.
Indeed, every pod has a pause container, and every container created in the pod connects to it.
[root@w1 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
559a6e5ab486 94ec7e53edfc "nginx -g 'daemon of…" 2 hours ago Up 2 hours k8s_nginx_nginx-deployment-784b7cc96d-h5x2k_default_f730b118-1a17-11ee-ad40-5254004d77d3_0
60f048b660b1 k8s.gcr.io/pause:3.1 "/pause" 2 hours ago Up 2 hours
2. Pod-to-Node communication inside the cluster
(1) Case study
We prepare an nginx pod and a busybox pod:
# nginx_pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: nginx-pod
labels:
app: nginx
spec:
containers:
- name: nginx-container
image: nginx
ports:
- containerPort: 80
# busybox_pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: busybox
labels:
app: busybox
spec:
containers:
- name: busybox
image: busybox
command: ['sh', '-c', 'echo The app is running! && sleep 3600']
# Run both pods and check how they are doing
kubectl apply -f nginx_pod.yaml
kubectl apply -f busybox_pod.yaml
[root@m ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
busybox 1/1 Running 0 2m1s 192.168.190.73 w1 <none> <none>
nginx-pod 1/1 Running 0 2m25s 192.168.80.201 w2 <none> <none>
We can see that the two applications were scheduled onto nodes w1 and w2 respectively, each with its own IP: 192.168.190.73 and 192.168.80.201.
Moreover, every pod is reachable from the master node and from the worker nodes alike. That is the work of the network plugin (calico here): it not only assigns each pod an IP but also wires up the networking between the pods.
These IPs, however, are reachable only from inside the cluster.
(2) How to implement the Kubernetes cluster networking model: Calico
Official docs: https://kubernetes.io/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model
- pods on a node can communicate with all pods on all nodes without NAT
- agents on a node (e.g. system daemons, kubelet) can communicate with all pods on that node
- pods in the host network of a node can communicate with all pods on all nodes without NAT
Thanks to the network plugin, inside the cluster pod-to-pod, pod-to-node, and node-to-pod traffic all work.
3. In-cluster Service: ClusterIP
The pods above can reach each other inside the cluster, but pods are not stable: when they are managed by a Deployment, for example, they may be scaled out or in at any time, and their IP addresses change when that happens.
We would like a fixed IP that is reachable inside the cluster. The idea is to label pods that are identical or related and group them into a Service. The Service has a fixed IP, and no matter how the pods are created and destroyed, they can always be reached through the Service's IP.
(1) Official description
Service docs: https://kubernetes.io/docs/concepts/services-networking/service/
An abstract way to expose an application running on a set of Pods as a network service.
With Kubernetes you don’t need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.
You can think of a Service as an nginx sitting in front of the pods.
(2) An example: pod addresses are unstable
Create the whoami-deployment.yaml file and apply it
apiVersion: apps/v1
kind: Deployment
metadata:
name: whoami-deployment
labels:
app: whoami
spec:
replicas: 3
selector:
matchLabels:
app: whoami
template:
metadata:
labels:
app: whoami
spec:
containers:
- name: whoami
image: jwilder/whoami
ports:
- containerPort: 8000
# Create
kubectl apply -f whoami-deployment.yaml
# Show details
[root@m ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
whoami-deployment-678b64444d-6wltg 1/1 Running 0 100s 192.168.80.202 w2 <none> <none>
whoami-deployment-678b64444d-cjpzr 1/1 Running 0 100s 192.168.190.74 w1 <none> <none>
whoami-deployment-678b64444d-v7zfg 1/1 Running 0 100s 192.168.80.203 w2 <none> <none>
# Delete one pod
[root@m ~]# kubectl delete pod whoami-deployment-678b64444d-6wltg
pod "whoami-deployment-678b64444d-6wltg" deleted
# A replacement pod is generated automatically, but its address has clearly changed
[root@m ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
whoami-deployment-678b64444d-cjpzr 1/1 Running 0 2m59s 192.168.190.74 w1 <none> <none>
whoami-deployment-678b64444d-l4dgz 1/1 Running 0 20s 192.168.190.75 w1 <none> <none>
whoami-deployment-678b64444d-v7zfg 1/1 Running 0 2m59s 192.168.80.203 w2 <none> <none>
Our test confirms that pod addresses do keep changing; they are not stable.
(3) Create a service
# Check the current services; by default there is only the kubernetes service
[root@m ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 20h
# Create a service
[root@m ~]# kubectl expose deployment whoami-deployment
service/whoami-deployment exposed
# Check the services
[root@m ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 20h
whoami-deployment ClusterIP 10.109.104.247 <none> 8000/TCP 9s
# Delete the service
#kubectl delete service whoami-deployment
The service we just created has a cluster-ip of its own. Accessing that IP reaches the three whoami pods, with load balancing applied automatically:
[root@m ~]# curl 10.109.104.247:8000
I'm whoami-deployment-678b64444d-l4dgz
[root@m ~]# curl 10.109.104.247:8000
I'm whoami-deployment-678b64444d-cjpzr
[root@m ~]# curl 10.109.104.247:8000
I'm whoami-deployment-678b64444d-v7zfg
(4) Inspect the service details
[root@m ~]# kubectl describe svc whoami-deployment
Name: whoami-deployment
Namespace: default
Labels: app=whoami
Annotations: <none>
Selector: app=whoami
Type: ClusterIP
IP: 10.109.104.247
Port: <unset> 8000/TCP
TargetPort: 8000/TCP
Endpoints: 192.168.190.74:8000,192.168.190.75:8000,192.168.80.203:8000
Session Affinity: None
Events: <none>
Three pods hang off the service. Now scale out:
# Scale out
kubectl scale deployment whoami-deployment --replicas=5
[root@m ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
whoami-deployment-678b64444d-btvm4 1/1 Running 0 27s 192.168.80.204 w2 <none> <none>
whoami-deployment-678b64444d-cjpzr 1/1 Running 0 14m 192.168.190.74 w1 <none> <none>
whoami-deployment-678b64444d-l4dgz 1/1 Running 0 11m 192.168.190.75 w1 <none> <none>
whoami-deployment-678b64444d-nfg4b 1/1 Running 0 27s 192.168.190.76 w1 <none> <none>
whoami-deployment-678b64444d-v7zfg 1/1 Running 0 14m 192.168.80.203 w2 <none> <none>
Now inspect the service details again:
[root@m ~]# kubectl describe svc whoami-deployment
Name: whoami-deployment
Namespace: default
Labels: app=whoami
Annotations: <none>
Selector: app=whoami
Type: ClusterIP
IP: 10.109.104.247
Port: <unset> 8000/TCP
TargetPort: 8000/TCP
Endpoints: 192.168.190.74:8000,192.168.190.75:8000,192.168.190.76:8000 + 2 more...
Session Affinity: None
Events: <none>
The service IP is reachable from any node and any pod inside the cluster (but not from outside).
(5) Create a service with yaml
A Service can be created not only with kubectl expose but also by defining a yaml file:
apiVersion: v1
kind: Service # object type
metadata:
  name: my-service # service name
spec:
  selector:
    app: MyApp # matches the deployment's selector and labels
  ports:
  - protocol: TCP
    port: 80 # the service's own port
    targetPort: 9376 # target port, i.e. the deployment's container port
  type: ClusterIP
(6) Summary
The reason Service exists is the instability of Pods. What we explored above is one Service type, ClusterIP, which is reachable only inside the cluster.
4. Accessing pods in the cluster from outside: Service NodePort (not recommended)
NodePort exposes a port on each Node and binds that port to the pod-backed service.
(1) An example
Create the pods from whoami-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: whoami-deployment
labels:
app: whoami
spec:
replicas: 3
selector:
matchLabels:
app: whoami
template:
metadata:
labels:
app: whoami
spec:
containers:
- name: whoami
image: jwilder/whoami
ports:
- containerPort: 8000
# Create
kubectl apply -f whoami-deployment.yaml
# Show details
[root@m ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
whoami-deployment-678b64444d-cn462 1/1 Running 0 10s 192.168.80.206 w2 <none> <none>
whoami-deployment-678b64444d-j8r7f 1/1 Running 0 10s 192.168.190.77 w1 <none> <none>
whoami-deployment-678b64444d-wwl47 1/1 Running 0 10s 192.168.80.205 w2 <none> <none>
# 創(chuàng)建一個(gè)NodePort類(lèi)型的service
[root@m ~]# kubectl expose deployment whoami-deployment --type=NodePort
service/whoami-deployment exposed
# Check the service
[root@m ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 21h
whoami-deployment NodePort 10.100.197.231 <none> 8000:31222/TCP 6s
This service's type is now NodePort. It still has a cluster-ip, which remains usable inside the cluster:
[root@m ~]# curl 10.100.197.231:8000
I'm whoami-deployment-678b64444d-j8r7f
[root@m ~]# curl 10.100.197.231:8000
I'm whoami-deployment-678b64444d-wwl47
[root@m ~]# curl 10.100.197.231:8000
I'm whoami-deployment-678b64444d-cn462
Also note the PORT(S) column: the entry 8000:31222/TCP means that port 8000 has been mapped to node port 31222.
We can now reach the service inside the cluster from the outside world.
(2) Summary
NodePort lets external clients reach pods in the cluster, but it uses up too many ports; it is not recommended in production.
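As a side note, kubectl expose --type=NodePort picks a random node port (31222 above). When a fixed port is preferred, the service can be declared in yaml with an explicit nodePort; this is only a sketch, and the name whoami-nodeport is made up for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: whoami-nodeport # hypothetical name
spec:
  type: NodePort
  selector:
    app: whoami
  ports:
  - protocol: TCP
    port: 8000       # the service's own port
    targetPort: 8000 # the container port
    nodePort: 31222  # the fixed port opened on every node (must fall in the default 30000-32767 range)
```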
[root@m ~]# kubectl delete -f whoami-deployment.yaml
deployment.apps "whoami-deployment" deleted
[root@m ~]# kubectl delete svc whoami-deployment
service "whoami-deployment" deleted
(3) One-shot deployment with a yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
name: tomcat-deployment
labels:
app: tomcat
spec:
replicas: 3
selector:
matchLabels:
app: tomcat
template:
metadata:
labels:
app: tomcat
spec:
containers:
- name: tomcat
image: tomcat
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: tomcat-service
spec:
ports:
- port: 80
protocol: TCP
targetPort: 8080
selector:
app: tomcat
type: NodePort
vi my-tomcat.yaml
kubectl apply -f my-tomcat.yaml
kubectl get pods -o wide
kubectl get deployment
kubectl get svc
For a browser, i.e. an external client, to reach this tomcat, the earlier Service NodePort approach works: expose a port such as 32008, then simply visit 192.168.0.61:32008.
The Service NodePort approach is not recommended for production.
5. Accessing pods in the cluster from outside: Service LoadBalancer (not recommended)
Service LoadBalancer usually requires support from a third-party cloud provider, which is constraining, so we do not recommend it either.
6. Accessing pods in the cluster from outside: Ingress (recommended)
(1) Official definition
Official docs: https://kubernetes.io/docs/concepts/services-networking/ingress/
An API object that manages external access to the services in a cluster, typically HTTP.
Ingress can provide load balancing, SSL termination and name-based virtual hosting.
Ingress helps us expose access to the services inside the cluster.
Ingress docs: https://kubernetes.io/docs/concepts/services-networking/ingress/
GitHub Ingress Nginx: https://github.com/kubernetes/ingress-nginx
Nginx Ingress Controller: https://kubernetes.github.io/ingress-nginx/
(2) An example
① First delete the previous tomcat and its service
# First delete the previous tomcat and its service
kubectl delete -f my-tomcat.yaml
② Deploy tomcat
apiVersion: apps/v1
kind: Deployment
metadata:
name: tomcat-deployment
labels:
app: tomcat
spec:
replicas: 1
selector:
matchLabels:
app: tomcat
template:
metadata:
labels:
app: tomcat
spec:
containers:
- name: tomcat
image: tomcat
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: tomcat-service
spec:
ports:
- port: 80
protocol: TCP
targetPort: 8080
selector:
app: tomcat
# Deploy tomcat
vi tomcat.yaml
kubectl apply -f tomcat.yaml
[root@m ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
tomcat-deployment-6b9d6f8547-6d4z4 1/1 Running 0 2m22s 192.168.80.208 w2 <none> <none>
[root@m ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 22h
tomcat-service ClusterIP 10.102.167.248 <none> 80/TCP 2m36s
At this point, tomcat is reachable from inside the cluster:
curl 192.168.80.208:8080
curl 10.102.167.248
③ Deploy the ingress-controller
The NodePort approach occupies a host port on every node; with ingress we only need a port on one designated host.
# Make sure the nginx-ingress-controller runs on node w1
kubectl label node w1 name=ingress
# Run it in HostPort mode, which requires the following setting in mandatory.yaml
#hostNetwork: true
# Search mandatory.yaml for nodeSelector, and make sure ports 80 and 443 on node w1 are free; note that pulling the image can take quite a while
kubectl apply -f mandatory.yaml
kubectl get all -n ingress-nginx
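The settings mentioned above live in the Deployment's pod spec inside mandatory.yaml; the relevant excerpt looks like this:

```yaml
# Excerpt from the nginx-ingress-controller Deployment (pod spec level):
spec:
  hostNetwork: true   # HostPort mode: bind directly to the node's ports 80/443
  nodeSelector:
    name: ingress     # schedule onto the node we labeled name=ingress (w1)
    kubernetes.io/os: linux
```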
# mandatory.yaml
apiVersion: v1
kind: Namespace
metadata:
name: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-configuration
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tcp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: udp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: nginx-ingress-clusterrole
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- "extensions"
- "networking.k8s.io"
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
- "networking.k8s.io"
resources:
- ingresses/status
verbs:
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: nginx-ingress-role
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
# Defaults to "<election-id>-<ingress-class>"
# Here: "<ingress-controller-leader>-<nginx>"
# This has to be adapted if you change either parameter
# when launching the nginx-ingress-controller.
- "ingress-controller-leader-nginx"
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: nginx-ingress-role-nisa-binding
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: nginx-ingress-role
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: nginx-ingress-clusterrole-nisa-binding
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-ingress-controller
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
annotations:
prometheus.io/port: "10254"
prometheus.io/scrape: "true"
spec:
# wait up to five minutes for the drain of connections
terminationGracePeriodSeconds: 300
serviceAccountName: nginx-ingress-serviceaccount
hostNetwork: true
nodeSelector:
name: ingress
kubernetes.io/os: linux
containers:
- name: nginx-ingress-controller
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
args:
- /nginx-ingress-controller
- --configmap=$(POD_NAMESPACE)/nginx-configuration
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
- --publish-service=$(POD_NAMESPACE)/ingress-nginx
- --annotations-prefix=nginx.ingress.kubernetes.io
securityContext:
allowPrivilegeEscalation: true
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
# www-data -> 33
runAsUser: 33
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
---
# With our nodeSelector configuration, it is guaranteed to be deployed on node w1
[root@m ~]# kubectl get all -n ingress-nginx
NAME READY STATUS RESTARTS AGE
pod/nginx-ingress-controller-7c66dcdd6c-qrgpg 1/1 Running 0 113s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-ingress-controller 1/1 1 1 113s
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-ingress-controller-7c66dcdd6c 1 1 1 113s
[root@m ~]# kubectl get pods -o wide -n ingress-nginx
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-ingress-controller-7c66dcdd6c-qrgpg 1/1 Running 0 116s 192.168.56.101 w1 <none> <none>
At this point, ports 80 and 443 on node w1 are open.
④ Create the ingress and define the forwarding rules
#nginx-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: tomcat.cxf.com # domain name
    http:
      paths:
      - path: /
        backend:
          serviceName: tomcat-service
          servicePort: 80
# Create the ingress
kubectl apply -f nginx-ingress.yaml
[root@m ~]# kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
nginx-ingress tomcat.cxf.com 80 18s
[root@m ~]# kubectl describe ingress nginx-ingress
Name: nginx-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
tomcat.cxf.com
/ tomcat-service:80 (192.168.80.208:8080)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"nginx-ingress","namespace":"default"},"spec":{"rules":[{"host":"tomcat.cxf.com","http":{"paths":[{"backend":{"serviceName":"tomcat-service","servicePort":80},"path":"/"}]}}]}}
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 29s nginx-ingress-controller Ingress default/nginx-ingress
Now http://tomcat.cxf.com/ reaches the tomcat (access must go through the domain name).
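When more services are added later, a single ingress can fan the traffic out by host (or by path). Here is a sketch in the same extensions/v1beta1 style, where the second host and the whoami-service backend are hypothetical:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fanout-ingress
spec:
  rules:
  - host: tomcat.cxf.com # routes to the tomcat service from this example
    http:
      paths:
      - path: /
        backend:
          serviceName: tomcat-service
          servicePort: 80
  - host: whoami.cxf.com # hypothetical second domain
    http:
      paths:
      - path: /
        backend:
          serviceName: whoami-service # hypothetical service
          servicePort: 8000
```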