
Kubernetes HA Cluster Binary Deployment (5): kubelet, kube-proxy, Calico, CoreDNS

This article covers part five of deploying a highly available Kubernetes cluster from binaries: kubelet, kube-proxy, Calico, and CoreDNS. I hope it is useful; if anything is wrong or incomplete, corrections are welcome.

Kubernetes overview
Quickly deploy a k8s cluster with kubeadm
Kubernetes HA Cluster Binary Deployment (1): host preparation and load balancer installation
Kubernetes HA Cluster Binary Deployment (2): etcd cluster deployment
Kubernetes HA Cluster Binary Deployment (3): deploying the api-server
Kubernetes HA Cluster Binary Deployment (4): deploying kubectl, kube-controller-manager, and kube-scheduler
Kubernetes HA Cluster Binary Deployment (5): kubelet, kube-proxy, Calico, CoreDNS (this article)
Kubernetes HA Cluster Binary Deployment (6): adding nodes to the Kubernetes cluster

1. Worker node deployment

1.1 Install and configure Docker

wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install docker-ce
systemctl enable docker
systemctl start docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://8i185852.mirror.aliyuncs.com"]
}
EOF

native.cgroupdriver must be set; without it, kubelet fails to start.

systemctl restart docker
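After the restart, you can confirm the change took effect (a quick sanity check, not part of the original steps; `docker info` exposes the active driver):

```shell
# The daemon.json we wrote must request the systemd driver...
grep cgroupdriver /etc/docker/daemon.json
# ...and a restarted Docker should report the same. A cgroupfs/systemd
# mismatch between Docker and kubelet is exactly what makes kubelet
# fail to start.
docker info --format '{{.CgroupDriver}}'
```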

1.2 Deploy kubelet

Run the following on k8s-master1 (which serves as both a control-plane and a worker node).

1.2.1 Create kubelet-bootstrap.kubeconfig
BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)
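For reference, token.csv was generated in the api-server part of this series; its first comma-separated field is the bootstrap token, which the awk above extracts. A quick sanity check (the format shown in the comment is an assumption based on the common layout of this file):

```shell
# token.csv is assumed to have the shape:
#   <token>,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)
# The token must be non-empty, otherwise every later kubectl config
# command silently produces an unusable kubeconfig.
[ -n "$BOOTSTRAP_TOKEN" ] && echo "token: $BOOTSTRAP_TOKEN"
```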

#192.168.10.100 is the VIP (virtual IP)
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.10.100:6443 --kubeconfig=kubelet-bootstrap.kubeconfig

kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig

kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig

kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
#Create the cluster role bindings
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=kubelet-bootstrap

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
kubectl describe clusterrolebinding cluster-system-anonymous

kubectl describe clusterrolebinding kubelet-bootstrap
1.2.2 Create the kubelet configuration file
[root@k8s-master1 k8s-work]# cat > kubelet.json << "EOF"
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "192.168.10.103",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",                    
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.96.0.2"]
}
EOF
1.2.3 Create the kubelet service unit file
cat > kubelet.service << "EOF"
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
  --cert-dir=/etc/kubernetes/ssl \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --config=/etc/kubernetes/kubelet.json \
  --network-plugin=cni \
  --rotate-certificates \
  --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.2 \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
1.2.4 Sync files to the cluster nodes
cp kubelet-bootstrap.kubeconfig /etc/kubernetes/
cp kubelet.json /etc/kubernetes/
cp kubelet.service /usr/lib/systemd/system/
for i in  k8s-master2 k8s-master3 k8s-worker1;do scp kubelet-bootstrap.kubeconfig kubelet.json $i:/etc/kubernetes/;done

for i in  k8s-master2 k8s-master3 k8s-worker1;do scp ca.pem $i:/etc/kubernetes/ssl/;done

for i in k8s-master2 k8s-master3 k8s-worker1;do scp kubelet.service $i:/usr/lib/systemd/system/;done
Note:
The address field in kubelet.json must be changed to the current host's IP on each node (JSON does not allow comments, so edit the file directly):

vim /etc/kubernetes/kubelet.json
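Instead of editing each node by hand, a loop like the following can patch the field remotely (a hypothetical convenience; the host/IP pairs are the ones used throughout this series):

```shell
# Rewrite the "address" field in kubelet.json on each node to that node's own IP.
for pair in k8s-master2:192.168.10.104 k8s-master3:192.168.10.105 k8s-worker1:192.168.10.106; do
  host=${pair%%:*}; ip=${pair##*:}
  ssh "$host" "sed -i 's/\"address\": \".*\"/\"address\": \"$ip\"/' /etc/kubernetes/kubelet.json"
done
```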
1.2.5 Create directories and start the service

Run on all worker nodes:

mkdir -p /var/lib/kubelet
mkdir -p /var/log/kubernetes
systemctl daemon-reload
systemctl enable --now kubelet

systemctl status kubelet
# kubectl get nodes
NAME          STATUS     ROLES    AGE   VERSION
k8s-master1   NotReady   <none>   12s   v1.21.10
k8s-master2   NotReady   <none>   19s   v1.21.10
k8s-master3   NotReady   <none>   19s   v1.21.10
k8s-worker1   NotReady   <none>   18s   v1.21.10

NotReady is expected at this point because the pod network has not been deployed yet.

# kubectl get csr
NAME        AGE     SIGNERNAME                                    REQUESTOR           CONDITION
csr-b949p   7m55s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
csr-c9hs4   3m34s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
csr-r8vhp   5m50s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
csr-zb4sr   3m40s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
Note:
After confirming that the kubelet service started successfully, approve the bootstrap requests on the master (in the output above they have already been approved and issued).
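If any request were still shown as Pending, it could be approved in bulk (a convenience sketch, not from the original text):

```shell
# Collect the names of Pending CSRs (CONDITION is the last column)
# and feed them to "kubectl certificate approve".
kubectl get csr --no-headers | awk '$NF == "Pending" {print $1}' \
  | xargs -r kubectl certificate approve
```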

1.3 Deploy kube-proxy

1.3.1 Create the kube-proxy certificate signing request file
[root@k8s-master1 k8s-work]# cat > kube-proxy-csr.json << "EOF"
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "kubemsb",
      "OU": "CN"
    }
  ]
}
EOF
1.3.2 Generate the certificates
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
# ls kube-proxy*
kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem
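Optionally, confirm the certificate carries the expected identity; kube-proxy authenticates as the user named in the CN, which Kubernetes maps to the built-in system:node-proxier role:

```shell
# Print the subject of the freshly generated client certificate;
# it should contain CN = system:kube-proxy.
openssl x509 -in kube-proxy.pem -noout -subject
```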
1.3.3 Create the kubeconfig file
#Set the cluster
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.10.100:6443 --kubeconfig=kube-proxy.kubeconfig
#Set the credentials
kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
#Set the context
kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
#Use the context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
1.3.4 Create the service configuration file
cat > kube-proxy.yaml << "EOF"
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.10.103 #this host's IP
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 10.244.0.0/16 #pod network CIDR; the same on every node
healthzBindAddress: 192.168.10.103:10256 #this host's IP
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.10.103:10249 #this host's IP
mode: "ipvs" #ipvs scales better than iptables on large clusters
EOF
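ipvs mode only works if the ip_vs kernel modules are loaded. This is usually done during host preparation, but for completeness here is a sketch that loads them now and persists the list across reboots (on kernels older than 4.19 the conntrack module is named nf_conntrack_ipv4 instead):

```shell
# Load the ipvs-related kernel modules now...
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do modprobe "$m"; done
# ...and have systemd load them on every boot.
cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
lsmod | grep ip_vs
```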
1.3.5 Create the service unit file
cat >  kube-proxy.service << "EOF"
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
1.3.6 Sync files to the worker nodes
cp kube-proxy*.pem /etc/kubernetes/ssl/
cp kube-proxy.kubeconfig kube-proxy.yaml /etc/kubernetes/
cp kube-proxy.service /usr/lib/systemd/system/
for i in k8s-master2 k8s-master3 k8s-worker1;do scp kube-proxy.kubeconfig kube-proxy.yaml $i:/etc/kubernetes/;done
for i in k8s-master2 k8s-master3 k8s-worker1;do scp  kube-proxy.service $i:/usr/lib/systemd/system/;done
Note:
Change the IP addresses in kube-proxy.yaml to the current host's IP on each node:

vim /etc/kubernetes/kube-proxy.yaml 
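As with kubelet.json, the per-node edit can be scripted. Since the template was written with k8s-master1's IP, replacing every occurrence of it works (a hypothetical helper; host/IP pairs are the ones used in this series):

```shell
for pair in k8s-master2:192.168.10.104 k8s-master3:192.168.10.105 k8s-worker1:192.168.10.106; do
  host=${pair%%:*}; ip=${pair##*:}
  # bindAddress, healthzBindAddress and metricsBindAddress all carry the
  # template IP, so one global substitution fixes all three at once.
  ssh "$host" "sed -i 's/192\.168\.10\.103/$ip/g' /etc/kubernetes/kube-proxy.yaml"
done
```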
1.3.7 Start the service
#Create the WorkingDirectory
mkdir -p /var/lib/kube-proxy

systemctl daemon-reload
systemctl enable --now kube-proxy

systemctl status kube-proxy

2. Network component deployment: Calico

2.1 Download

wget https://docs.projectcalico.org/v3.19/manifests/calico.yaml

2.2 Modify the manifest

vim calico.yaml
#Uncomment and edit the following two lines:
3683             - name: CALICO_IPV4POOL_CIDR
3684               value: "10.244.0.0/16"  #pod network CIDR
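The line numbers above vary between calico.yaml versions, so it is safer to match on content. The following sed performs the same edit, assuming the upstream file still ships the commented default pool of 192.168.0.0/16:

```shell
# Uncomment CALICO_IPV4POOL_CIDR and point it at the pod network.
sed -i 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|; s|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' calico.yaml
grep -A1 CALICO_IPV4POOL_CIDR calico.yaml
```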

2.3 Apply the manifest

kubectl apply -f calico.yaml

2.4 Verify the result

[root@k8s-master1 k8s-work]# kubectl get pods -n kube-system
NAME                                       READY   STATUS              RESTARTS   AGE
calico-kube-controllers-7cc8dd57d9-dcwjv   0/1     ContainerCreating   0          94s
calico-node-2pmqz                          0/1     Init:0/3            0          94s
calico-node-9ms2r                          0/1     Init:0/3            0          94s
calico-node-tj5rt                          0/1     Init:0/3            0          94s
calico-node-wnjcv                          0/1     PodInitializing     0          94s
[root@k8s-master1 k8s-work]# kubectl get pods -n kube-system -o wide
NAME                                       READY   STATUS                  RESTARTS   AGE     IP               NODE          NOMINATED NODE   READINESS GATES
calico-kube-controllers-7cc8dd57d9-dcwjv   0/1     ContainerCreating       0          2m29s   <none>           k8s-master2   <none>           <none>
calico-node-2pmqz                          0/1     Init:0/3                0          2m29s   192.168.10.103   k8s-master1   <none>           <none>
calico-node-9ms2r                          0/1     Init:ImagePullBackOff   0          2m29s   192.168.10.105   k8s-master3   <none>           <none>
calico-node-tj5rt                          0/1     Init:0/3                0          2m29s   192.168.10.106   k8s-worker1   <none>           <none>
calico-node-wnjcv                          0/1     PodInitializing         0          2m29s   192.168.10.104   k8s-master2   <none>           <none>
[root@k8s-master1 k8s-work]# 

If a pod's STATUS does not change for a long time, inspect it for details:

kubectl describe pod calico-node-gndtg -n kube-system


If a pod sits in Init:ImagePullBackOff and never reaches Running after a long wait, try downloading the image archives locally and uploading them to the server (e.g. over FTP).

Find the version you need at https://github.com/projectcalico/calico/releases?page=3, download it, and load the images from its images directory on each server:

docker load -i calico-pod2daemon-flexvol.tar
docker load -i calico-kube-controllers.tar 
docker load -i calico-cni.tar 
docker load -i calico-node.tar

docker images

Of my four nodes, one pulled the images and reached Running normally, while the other three sat in the pulling state for a long time; loading the images manually as above resolved it, so in the end it came down to a network problem.

If a pod stays Pending, check whether the node has been tainted:

kubectl describe node k8s-master2 |grep Taint
#Remove the taint
kubectl taint nodes k8s-master2 key:NoSchedule-

A taint effect takes one of three values:

NoSchedule: pods will never be scheduled onto the node
PreferNoSchedule: the scheduler tries to avoid the node, but may still place pods there
NoExecute: new pods are not scheduled onto the node, and existing pods are evicted from it

Finally, everything is Ready:

# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-7cc8dd57d9-pd44j   1/1     Running   0          70m
kube-system   calico-node-bpqfr                          1/1     Running   0          70m
kube-system   calico-node-f8c6t                          1/1     Running   0          70m
kube-system   calico-node-gndtg                          1/1     Running   0          70m
kube-system   calico-node-pptqm                          1/1     Running   0          70m


# kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
k8s-master1   Ready    <none>   5h    v1.21.10
k8s-master2   Ready    <none>   5h    v1.21.10
k8s-master3   Ready    <none>   5h    v1.21.10
k8s-worker1   Ready    <none>   5h    v1.21.10


3. Deploy CoreDNS

CoreDNS provides name resolution between services inside the cluster, for example when two services deployed in k8s need to reach each other by name, or when in-cluster services need to resolve names on the public internet.

Run the following in /data/k8s-work/ on k8s-master1:

cat >  coredns.yaml << "EOF"
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
  - apiGroups:
    - ""
    resources:
    - endpoints
    - services
    - pods
    - namespaces
    verbs:
    - list
    - watch
  - apiGroups:
    - discovery.k8s.io
    resources:
    - endpointslices
    verbs:
    - list
    - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local  in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
         podAntiAffinity:
           preferredDuringSchedulingIgnoredDuringExecution:
           - weight: 100
             podAffinityTerm:
               labelSelector:
                 matchExpressions:
                   - key: k8s-app
                     operator: In
                     values: ["kube-dns"]
               topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: coredns/coredns:1.8.4
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.96.0.2 #must match the clusterDNS IP configured in kubelet.json
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
 
EOF
kubectl apply -f coredns.yaml
# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-7cc8dd57d9-pd44j   1/1     Running   1          24h
kube-system   calico-node-bpqfr                          1/1     Running   1          24h
kube-system   calico-node-f8c6t                          1/1     Running   1          24h
kube-system   calico-node-gndtg                          1/1     Running   2          24h
kube-system   calico-node-pptqm                          1/1     Running   1          24h
kube-system   coredns-675db8b7cc-xlwsp                   1/1     Running   0          3m21s
#kubectl get pods -n kube-system -o wide
NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE          NOMINATED NODE   READINESS GATES
calico-kube-controllers-7cc8dd57d9-pd44j   1/1     Running   1          24h     10.244.224.2     k8s-master2   <none>           <none>
calico-node-bpqfr                          1/1     Running   1          24h     192.168.10.103   k8s-master1   <none>           <none>
calico-node-f8c6t                          1/1     Running   1          24h     192.168.10.104   k8s-master2   <none>           <none>
calico-node-gndtg                          1/1     Running   2          24h     192.168.10.106   k8s-worker1   <none>           <none>
calico-node-pptqm                          1/1     Running   1          24h     192.168.10.105   k8s-master3   <none>           <none>
coredns-675db8b7cc-xlwsp                   1/1     Running   0          3m47s   10.244.159.129   k8s-master1   <none>           <none>

As with Calico, a pod stuck in ImagePullBackOff means the image could not be pulled; try downloading the image locally and loading it on the server:

Search Docker Hub for the image and version you need, download it locally, and upload it to the server:

docker load -i coredns-coredns-1.8.4-.tar
docker images
#Re-tag if the loaded tag does not match the one in the manifest (the Deployment references coredns/coredns:1.8.4)
docker tag <image-id> coredns/coredns:1.8.4

At this point mine still failed to start, with the following message:

kubectl describe pod coredns-675db8b7cc-q6l95 -n kube-system


After deleting the pod so that it was recreated, CoreDNS came up normally:

# Check the logs
kubectl logs -f coredns-675db8b7cc-q6l95 -n kube-system

# Delete the pod and re-apply so it is recreated
kubectl delete pod coredns-675db8b7cc-q6l95 -n kube-system
kubectl apply -f coredns.yaml
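Once the CoreDNS pod is Running, in-cluster resolution can be exercised with a throwaway pod (busybox:1.28 is a common choice because its nslookup works reliably; the answer should come from the cluster DNS at 10.96.0.2):

```shell
# Resolve the apiserver's service name from inside the cluster.
kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never \
  -- nslookup kubernetes.default.svc.cluster.local
```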

4. Deploy a test application

Create pods on k8s-master1:

[root@k8s-master1 k8s-work]# cat >  nginx.yaml  << "EOF"
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-web
spec:
  replicas: 2
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.19.6
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service #exposes the workload; the cluster can be accessed in several ways
metadata:
  name: nginx-service-nodeport
spec:
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30001 #maps port 80 of the application to port 30001 on every node
      protocol: TCP
  type: NodePort
  selector:
    name: nginx
EOF
kubectl apply -f nginx.yaml
# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP              NODE          NOMINATED NODE   READINESS GATES
nginx-web-qzvw4   1/1     Running   0          58s   10.244.194.65   k8s-worker1   <none>           <none>
nginx-web-spw5t   1/1     Running   0          58s   10.244.224.1    k8s-master2   <none>           <none>
# kubectl get all
NAME                  READY   STATUS    RESTARTS   AGE
pod/nginx-web-jnbhx   1/1     Running   1          23h

NAME                              DESIRED   CURRENT   READY   AGE
replicationcontroller/nginx-web   1         1         1       2d

NAME                             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
service/kubernetes               ClusterIP   10.96.0.1     <none>        443/TCP        3d6h
service/nginx-service-nodeport   NodePort    10.96.72.89   <none>        80:30001/TCP   2d

Check whether port 30001 is listening:

ss -anput | grep ":30001"

It is listening on every worker node.
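A quick loop confirms that every node answers on the NodePort, regardless of which nodes actually run the pods (kube-proxy forwards the traffic):

```shell
# Expect an HTTP 200 from port 30001 on each node.
for ip in 192.168.10.103 192.168.10.104 192.168.10.105 192.168.10.106; do
  curl -s -o /dev/null -w "$ip -> HTTP %{http_code}\n" "http://$ip:30001"
done
```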

Visit http://192.168.10.103:30001, http://192.168.10.104:30001, http://192.168.10.105:30001, or http://192.168.10.106:30001.

#Check component status
kubectl get cs
#Check pods
kubectl get pods

This concludes the deployment of kubelet, kube-proxy, Calico, and CoreDNS; part six of this series covers adding nodes to the cluster.
