k8s etcd fails to start

This post walks through diagnosing and fixing an etcd startup failure in a Kubernetes cluster; hopefully it is a useful reference. If anything here is wrong or incomplete, feedback is welcome.

Background

The host lost power unexpectedly while suspended, so the virtual machines were shut down uncleanly.

Problem

kubectl reports an error

[root@master01 ~]#kubectl get node 
The connection to the server master01.kktb.org:6443 was refused - did you specify the right host or port?

The kubelet service reports errors

Oct 15 08:39:37 master01.kktb.org kubelet[747]: E1015 08:39:37.251784     747 kubelet.go:2424] "Error getting node" err="node \"master01.kktb.org\" not found"
Oct 15 08:39:37 master01.kktb.org kubelet[747]: E1015 08:39:37.329952     747 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://master01.kktb.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master01.kktb.org?timeout=10s": dial tcp 10.0.6.5:6443: connect: connection refused
Oct 15 08:39:37 master01.kktb.org kubelet[747]: E1015 08:39:37.352614     747 kubelet.go:2424] "Error getting node" err="node \"master01.kktb.org\" not found"
Oct 15 08:39:37 master01.kktb.org kubelet[747]: I1015 08:39:37.384900     747 kubelet_node_status.go:70] "Attempting to register node" node="master01.kktb.org"
Oct 15 08:39:37 master01.kktb.org kubelet[747]: E1015 08:39:37.385258     747 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://master01.kktb.org:6443/api/v1/nodes\": dial tcp 10.0.6.5:6443: connect: connection refused" node="master01.kktb.org"

Nothing is listening on port 6443, so kube-apiserver is down; the likely root cause is a failure in etcd, which the API server depends on.
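To confirm that nothing is listening on 6443 you can use ss -tlnp, or check the kernel's socket table directly. A minimal sketch (the port_in_use helper is hypothetical, not from the original troubleshooting session; Linux-only, since it reads /proc/net/tcp):

```shell
# Hypothetical helper: is any TCP socket bound to this port?
# /proc/net/tcp and /proc/net/tcp6 list local ports in hex, e.g. 6443 -> 192B.
# Note: this matches sockets in any state, not only LISTEN.
port_in_use() {
  hex=$(printf '%04X' "$1")
  grep -qi ":${hex} " /proc/net/tcp /proc/net/tcp6 2>/dev/null
}

if port_in_use 6443; then
  echo "something is listening on 6443"
else
  echo "nothing is listening on 6443"   # the situation here: kube-apiserver is down
fi
```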

master節(jié)點(diǎn)上的容器

[root@master01 kubelet.service.d]#docker ps 
CONTAINER ID   IMAGE                       COMMAND                  CREATED          STATUS          PORTS     NAMES
9924571324d2   e3ed7dee73e9                "kube-scheduler --au…"   18 minutes ago   Up 18 minutes             k8s_kube-scheduler_kube-scheduler-master01.kktb.org_kube-system_bca604f760be36f9a20d9b66b0bf821d_7
9d6171792af2   88784fb4ac2f                "kube-controller-man…"   18 minutes ago   Up 18 minutes             k8s_kube-controller-manager_kube-controller-manager-master01.kktb.org_kube-system_13461bcfe1de9f0e85e957cde36c42d2_10
eb3fdcf0b392   registry.k8s.io/pause:3.6   "/pause"                 18 minutes ago   Up 18 minutes             k8s_POD_etcd-master01.kktb.org_kube-system_67c8baa117269300c9b5e18d5b0a0e44_5
302e36ffb616   registry.k8s.io/pause:3.6   "/pause"                 18 minutes ago   Up 18 minutes             k8s_POD_kube-scheduler-master01.kktb.org_kube-system_bca604f760be36f9a20d9b66b0bf821d_5
a8f8f02011c7   registry.k8s.io/pause:3.6   "/pause"                 18 minutes ago   Up 18 minutes             k8s_POD_kube-controller-manager-master01.kktb.org_kube-system_13461bcfe1de9f0e85e957cde36c42d2_5
3fde7a0d5aae   registry.k8s.io/pause:3.6   "/pause"                 18 minutes ago   Up 18 minutes             k8s_POD_kube-apiserver-master01.kktb.org_kube-system_5846def9d7d3ade7311eb0f023db33ff_5

All the images are present

[root@master01 kubelet.service.d]#docker images 
REPOSITORY                           TAG       IMAGE ID       CREATED         SIZE
istio/proxyv2                        1.19.1    b3547b3ef18b   2 weeks ago     251MB
ubuntu                               jammy     3565a89d9e81   2 weeks ago     77.8MB
ubuntu                               latest    3565a89d9e81   2 weeks ago     77.8MB
flannel/flannel                      v0.22.3   e23f7ca36333   3 weeks ago     70.2MB
flannel/flannel-cni-plugin           v1.2.0    a55d1bad692b   2 months ago    8.04MB
hello-world                          latest    9c7a54a9a43c   5 months ago    13.3kB
k8s.gcr.io/kube-apiserver            v1.24.0   529072250ccc   17 months ago   130MB
k8s.gcr.io/kube-proxy                v1.24.0   77b49675beae   17 months ago   110MB
k8s.gcr.io/kube-scheduler            v1.24.0   e3ed7dee73e9   17 months ago   51MB
k8s.gcr.io/kube-controller-manager   v1.24.0   88784fb4ac2f   17 months ago   119MB
k8s.gcr.io/etcd                      3.5.3-0   aebe758cef4c   18 months ago   299MB
k8s.gcr.io/pause                     3.7       221177c6082a   19 months ago   711kB
ikubernetes/proxy                    v0.1.1    a4fcedf206e8   23 months ago   62.4MB
k8s.gcr.io/coredns/coredns           v1.8.6    a4ca41631cc7   2 years ago     46.8MB
registry.k8s.io/pause                3.6       6270bb605e12   2 years ago     683kB
ikubernetes/demoapp                  v1.0      3342b7518915   3 years ago     92.7MB

docker ps -a shows the containers that exited

[root@master01 kubelet.service.d]#docker ps -a 
CONTAINER ID   IMAGE                       COMMAND                  CREATED          STATUS                      PORTS     NAMES
07195c9438d3   aebe758cef4c                "etcd --advertise-cl…"   34 seconds ago   Exited (2) 33 seconds ago             k8s_etcd_etcd-master01.kktb.org_kube-system_67c8baa117269300c9b5e18d5b0a0e44_30
532fabab4479   529072250ccc                "kube-apiserver --ad…"   3 minutes ago    Exited (1) 3 minutes ago              k8s_kube-apiserver_kube-apiserver-master01.kktb.org_kube-system_5846def9d7d3ade7311eb0f023db33ff_27

The etcd container's error log

[root@master01 kubelet.service.d]#docker logs  07195c9438d3
{"level":"info","ts":"2023-10-15T09:31:34.713Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://10.0.6.5:2379","--cert-file=/etc/kubernetes/pki/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/etcd","--experimental-initial-corrupt-check=true","--initial-advertise-peer-urls=https://10.0.6.5:2380","--initial-cluster=master01.kktb.org=https://10.0.6.5:2380","--key-file=/etc/kubernetes/pki/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://10.0.6.5:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://10.0.6.5:2380","--name=master01.kktb.org","--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/etc/kubernetes/pki/etcd/peer.key","--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt"]}
{"level":"info","ts":"2023-10-15T09:31:34.714Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/etcd","dir-type":"member"}
{"level":"info","ts":"2023-10-15T09:31:34.714Z","caller":"embed/etcd.go:131","msg":"configuring peer listeners","listen-peer-urls":["https://10.0.6.5:2380"]}
{"level":"info","ts":"2023-10-15T09:31:34.714Z","caller":"embed/etcd.go:479","msg":"starting with peer TLS","tls-info":"cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-10-15T09:31:34.714Z","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["https://10.0.6.5:2379","https://127.0.0.1:2379"]}
{"level":"info","ts":"2023-10-15T09:31:34.714Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.3","git-sha":"0452feec7","go-version":"go1.16.15","go-os":"linux","go-arch":"amd64","max-cpu-set":4,"max-cpu-available":4,"member-initialized":true,"name":"master01.kktb.org","data-dir":"/var/lib/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://10.0.6.5:2380"],"listen-peer-urls":["https://10.0.6.5:2380"],"advertise-client-urls":["https://10.0.6.5:2379"],"listen-client-urls":["https://10.0.6.5:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-size-bytes":2147483648,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
{"level":"info","ts":"2023-10-15T09:31:34.716Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/etcd/member/snap/db","took":"655.189μs"}
{"level":"info","ts":"2023-10-15T09:31:35.189Z","caller":"etcdserver/server.go:508","msg":"recovered v2 store from snapshot","snapshot-index":450049,"snapshot-size":"11 kB"}
{"level":"warn","ts":"2023-10-15T09:31:35.190Z","caller":"snap/db.go:88","msg":"failed to find [SNAPSHOT-INDEX].snap.db","snapshot-index":450049,"snapshot-file-path":"/var/lib/etcd/member/snap/000000000006de01.snap.db","error":"snap: snapshot file doesn't exist"}
{"level":"panic","ts":"2023-10-15T09:31:35.190Z","caller":"etcdserver/server.go:515","msg":"failed to recover v3 backend from snapshot","error":"failed to find database snapshot file (snap: snapshot file doesn't exist)","stacktrace":"go.etcd.io/etcd/server/v3/etcdserver.NewServer\n\t/go/src/go.etcd.io/etcd/release/etcd/server/etcdserver/server.go:515\ngo.etcd.io/etcd/server/v3/embed.StartEtcd\n\t/go/src/go.etcd.io/etcd/release/etcd/server/embed/etcd.go:245\ngo.etcd.io/etcd/server/v3/etcdmain.startEtcd\n\t/go/src/go.etcd.io/etcd/release/etcd/server/etcdmain/etcd.go:228\ngo.etcd.io/etcd/server/v3/etcdmain.startEtcdOrProxyV2\n\t/go/src/go.etcd.io/etcd/release/etcd/server/etcdmain/etcd.go:123\ngo.etcd.io/etcd/server/v3/etcdmain.Main\n\t/go/src/go.etcd.io/etcd/release/etcd/server/etcdmain/main.go:40\nmain.main\n\t/go/src/go.etcd.io/etcd/release/etcd/server/main.go:32\nruntime.main\n\t/go/gos/go1.16.15/src/runtime/proc.go:225"}
panic: failed to recover v3 backend from snapshot

The panic comes from a failed snapshot recovery: etcd cannot find the [SNAPSHOT-INDEX].snap.db file that the v3 backend needs. Next, check whether the other two nodes hit the same error.

幸運(yùn)的是節(jié)點(diǎn)2的etcd啟動(dòng)正常

[root@master02 snap]#docker ps 
CONTAINER ID   IMAGE                       COMMAND                  CREATED          STATUS          PORTS     NAMES
daa59123cbbf   aebe758cef4c                "etcd --advertise-cl…"   50 seconds ago   Up 50 seconds             k8s_etcd_etcd-master02.kktb.org_kube-system_3564c10547c1c588ab6c79bcad0e90d0_19

一些關(guān)鍵字日志

[root@master02 snap]#docker logs -f daa59123cbbf

{"level":"info","ts":"2023-10-15T09:39:06.599Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/etcd/member/snap/db","took":"2.526769ms"}
{"level":"warn","ts":"2023-10-15T09:39:06.599Z","caller":"wal/util.go:90","msg":"ignored file in WAL directory","path":"0000000000000004-00000000000685b1.wal.broken"}
{"level":"info","ts":"2023-10-15T09:39:07.119Z","caller":"etcdserver/server.go:508","msg":"recovered v2 store from snapshot","snapshot-index":450048,"snapshot-size":"11 kB"}
{"level":"info","ts":"2023-10-15T09:39:07.119Z","caller":"etcdserver/server.go:521","msg":"recovered v3 backend from snapshot","backend-size-bytes":6635520,"backend-size":"6.6 MB","backend-size-in-use-bytes":2555904,"backend-size-in-use":"2.6 MB"}
{"level":"warn","ts":"2023-10-15T09:39:07.119Z","caller":"wal/util.go:90","msg":"ignored file in WAL directory","path":"0000000000000004-00000000000685b1.wal.broken"}
{"level":"info","ts":"2023-10-15T09:39:07.161Z","caller":"etcdserver/raft.go:483","msg":"restarting local member","cluster-id":"e3d7c840b56e7ccb","local-member-id":"77c05bfe945af1a1","commit-index":458027}
{"level":"info","ts":"2023-10-15T09:39:07.161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"77c05bfe945af1a1 switched to configuration voters=(6311186738797484145 8628998035010679201 11257876798449243399)"}
{"level":"info","ts":"2023-10-15T09:39:07.161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"77c05bfe945af1a1 became follower at term 16"}



{"level":"warn","ts":"2023-10-15T09:40:06.425Z","caller":"etcdhttp/metrics.go:86","msg":"/health error","output":"{\"health\":\"false\",\"reason\":\"RAFT NO LEADER\"}","status-code":503}
{"level":"warn","ts":"2023-10-15T09:40:07.179Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"5795d6e69d2e1871","rtt":"0s","error":"dial tcp 10.0.6.5:2380: connect: connection refused"}
{"level":"warn","ts":"2023-10-15T09:40:07.179Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"9c3c034d28add507","rtt":"0s","error":"dial tcp 10.0.6.7:2380: connect: connection refused"}
{"level":"warn","ts":"2023-10-15T09:40:07.179Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"9c3c034d28add507","rtt":"0s","error":"dial tcp 10.0.6.7:2380: connect: connection refused"}
{"level":"warn","ts":"2023-10-15T09:40:07.179Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"5795d6e69d2e1871","rtt":"0s","error":"dial tcp 10.0.6.5:2380: connect: connection refused"}

Fix

The fix: move the etcd data directory off node 1, then copy the data over from node 2's healthy etcd.

# 先將節(jié)點(diǎn)1的數(shù)據(jù)文件移動(dòng)至opt中
[root@master01 ~]#mv /var/lib/etcd/member /opt/
# 節(jié)點(diǎn)2拷貝數(shù)據(jù)到節(jié)點(diǎn)1
[root@master02 snap]#scp -r /var/lib/etcd/member 10.0.6.5:/var/lib/etcd/
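A slightly safer variant of the mv step is to park the broken member directory under a timestamped name rather than dropping it loose into /opt. A sketch using scratch directories so it can run anywhere (the temp paths stand in for /var/lib/etcd and /opt; this illustrates the pattern, it is not the exact commands run above):

```shell
# Scratch dirs stand in for /var/lib/etcd and /opt on master01.
ETCD_DIR=$(mktemp -d)
BACKUP_ROOT=$(mktemp -d)

# Fake a broken member dir so the demo has something to move.
mkdir -p "$ETCD_DIR/member/snap"
echo dummy > "$ETCD_DIR/member/snap/db"

# Move it aside under a timestamped name instead of a bare 'mv ... /opt/'.
STAMP=$(date +%Y%m%d-%H%M%S)
mv "$ETCD_DIR/member" "$BACKUP_ROOT/member.$STAMP"

ls "$BACKUP_ROOT"
```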

The etcd and kube-apiserver services recover.
Cluster status

sh-5.1# etcdctl --endpoints=https://10.0.6.5:2379,https://10.0.6.6:2379,https://10.0.6.7:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt   --cert=/etc/kubernetes/pki/etcd/server.crt  --key=/etc/kubernetes/pki/etcd/server.key member list --write-out=table
+------------------+---------+-------------------+-----------------------+-----------------------+------------+
|        ID        | STATUS  |       NAME        |      PEER ADDRS       |     CLIENT ADDRS      | IS LEARNER |
+------------------+---------+-------------------+-----------------------+-----------------------+------------+
| 5795d6e69d2e1871 | started | master01.kktb.org | https://10.0.6.5:2380 | https://10.0.6.5:2379 |      false |
| 77c05bfe945af1a1 | started | master02.kktb.org | https://10.0.6.6:2380 | https://10.0.6.6:2379 |      false |
| 9c3c034d28add507 | started | master03.kktb.org | https://10.0.6.7:2380 | https://10.0.6.7:2379 |      false |
+------------------+---------+-------------------+-----------------------+-----------------------+------------+
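Besides member list, etcdctl endpoint health is the usual check that every member actually answers. A sketch that only assembles the invocation from this cluster's endpoints and certificate paths (echoed rather than executed here, since running it needs the live cluster):

```shell
# Endpoints and TLS material from this cluster (see the member list above).
ENDPOINTS=https://10.0.6.5:2379,https://10.0.6.6:2379,https://10.0.6.7:2379
PKI=/etc/kubernetes/pki/etcd

CMD="etcdctl --endpoints=$ENDPOINTS \
  --cacert=$PKI/ca.crt --cert=$PKI/server.crt --key=$PKI/server.key \
  endpoint health"

# Run the assembled command on a control-plane node to check each member.
echo "$CMD"
```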

kubectl works normally again

[root@master01 ~]#kubectl get node 
NAME                STATUS   ROLES           AGE   VERSION
master01.kktb.org   Ready    control-plane   22d   v1.24.3
master02.kktb.org   Ready    control-plane   22d   v1.24.3
master03.kktb.org   Ready    control-plane   22d   v1.24.3
