
k8s - kubelet startup failure troubleshooting log

This post records how a kubelet startup failure on a stale test cluster was diagnosed and fixed. If anything here is wrong or incomplete, corrections are welcome.

The test environment had not been used for a long time, and starting kubelet failed. Checking the service status did not reveal the specific error:

[root@node1 ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; disabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Thu 2023-08-03 22:24:50 CST; 5s ago
     Docs: https://kubernetes.io/docs/
  Process: 2651 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
 Main PID: 2651 (code=exited, status=1/FAILURE)

Aug 03 22:24:50 node1 kubelet[2651]: Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_3DES_E...
Aug 03 22:24:50 node1 kubelet[2651]: --tls-min-version string                    Minimum TLS version supported. Possible values: VersionTLS...
Aug 03 22:24:50 node1 kubelet[2651]: --tls-private-key-file string               File containing x509 private key matching --tls-cert-file....
Aug 03 22:24:50 node1 kubelet[2651]: --topology-manager-policy string            Topology Manager policy to use. Possible values: 'none', '...
Aug 03 22:24:50 node1 kubelet[2651]: --topology-manager-scope string             Scope to which topology hints applied. Topology Manager co...
Aug 03 22:24:50 node1 kubelet[2651]: -v, --v Level                               number for the log level verbosity
Aug 03 22:24:50 node1 kubelet[2651]: --version version[=true]                    Print version information and quit
Aug 03 22:24:50 node1 kubelet[2651]: --vmodule pattern=N,...                     comma-separated list of pattern=N settings for fi...g format)
Aug 03 22:24:50 node1 kubelet[2651]: --volume-plugin-dir string                  The full path of the directory in which to search for addi...
Aug 03 22:24:50 node1 kubelet[2651]: --volume-stats-agg-period duration          Specifies interval for kubelet to calculate and cache the ...
Hint: Some lines were ellipsized, use -l to show in full.

Let's look at the logs instead: journalctl -xu kubelet

Aug 03 22:05:14 node1 kubelet[1391]: Error: failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such
Aug 03 22:05:14 node1 systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
Aug 03 22:05:14 node1 kubelet[1391]: Usage:
Aug 03 22:05:14 node1 kubelet[1391]: kubelet [flags]
Aug 03 22:05:14 node1 kubelet[1391]: Flags:
Aug 03 22:05:14 node1 kubelet[1391]: --add-dir-header                                           If true, adds the file directory to the header of the log mes
Aug 03 22:05:14 node1 kubelet[1391]: --address ip                                               The IP address for the Kubelet to serve on (set to '0.0.0.0'
Aug 03 22:05:14 node1 kubelet[1391]: --allowed-unsafe-sysctls strings                           Comma-separated whitelist of unsafe sysctls or unsafe sysctl
Aug 03 22:05:14 node1 kubelet[1391]: --alsologtostderr                                          log to standard error as well as files (DEPRECATED: will be r
Aug 03 22:05:14 node1 kubelet[1391]: --anonymous-auth                                           Enables anonymous requests to the Kubelet server. Requests th
Aug 03 22:05:14 node1 systemd[1]: Unit kubelet.service entered failed state.
Aug 03 22:05:14 node1 kubelet[1391]: --application-metrics-count-limit int                      Max number of application metrics to store (per container) (d
Aug 03 22:05:14 node1 kubelet[1391]: --authentication-token-webhook                             Use the TokenReview API to determine authentication for beare
Aug 03 22:05:14 node1 kubelet[1391]: --authentication-token-webhook-cache-ttl duration          The duration to cache responses from the webhook token authen
Aug 03 22:05:14 node1 kubelet[1391]: --authorization-mode string                                Authorization mode for Kubelet server. Valid options are Alwa
Aug 03 22:05:14 node1 kubelet[1391]: --authorization-webhook-cache-authorized-ttl duration      The duration to cache 'authorized' responses from the webhook
Aug 03 22:05:14 node1 systemd[1]: kubelet.service failed.
Aug 03 22:05:14 node1 kubelet[1391]: --authorization-webhook-cache-unauthorized-ttl duration    The duration to cache 'unauthorized' responses from the webho
Aug 03 22:05:14 node1 kubelet[1391]: --azure-container-registry-config string                   Path to the file containing Azure container registry configur
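The actual error is the very first line (kubelet cannot find /etc/kubernetes/bootstrap-kubelet.conf), but it is easy to miss inside the flag-usage dump. On future runs, filtering the journal helps surface it (a small sketch using standard journalctl and grep options):

[root@node1 ~]# journalctl -u kubelet --no-pager | grep -iE 'error|fail' | tail -n 5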

A quick Google search suggested the certificates had expired.
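You can confirm expiry directly with openssl against the default kubeadm certificate location (a quick check; /etc/kubernetes/pki/apiserver.crt is the standard path on a kubeadm node):

[root@node1 ~]# openssl x509 -noout -enddate -in /etc/kubernetes/pki/apiserver.crt

kubeadm can also report all certificates at once: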

[root@node1 ~]# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[check-expiration] Error reading configuration from the Cluster. Falling back to default configuration

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Jul 14, 2023 15:36 UTC   <invalid>       ca                      no
apiserver                  Jul 14, 2023 15:36 UTC   <invalid>       ca                      no
apiserver-etcd-client      Jul 14, 2023 15:36 UTC   <invalid>       etcd-ca                 no
apiserver-kubelet-client   Jul 14, 2023 15:36 UTC   <invalid>       ca                      no
controller-manager.conf    Jul 14, 2023 15:36 UTC   <invalid>       ca                      no
etcd-healthcheck-client    Jul 14, 2023 15:36 UTC   <invalid>       etcd-ca                 no
etcd-peer                  Jul 14, 2023 15:36 UTC   <invalid>       etcd-ca                 no
etcd-server                Jul 14, 2023 15:36 UTC   <invalid>       etcd-ca                 no
front-proxy-client         Jul 14, 2023 15:36 UTC   <invalid>       front-proxy-ca          no
scheduler.conf             Jul 14, 2023 15:36 UTC   <invalid>       ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Jul 11, 2032 15:36 UTC   8y              no
etcd-ca                 Jul 11, 2032 15:36 UTC   8y              no
front-proxy-ca          Jul 11, 2032 15:36 UTC   8y              no

Every leaf certificate shows <invalid>, while the CAs themselves are valid until 2032, so only the leaf certificates need to be renewed:

[root@node1 ~]# kubeadm certs renew all
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[renew] Error reading configuration from the Cluster. Falling back to default configuration

certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed

Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.
[root@node1 ~]#
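As the output says, the renewed certificates only take effect once kube-apiserver, kube-controller-manager, kube-scheduler and etcd restart. Since kubeadm runs these as static pods, one common approach is to move their manifests out of the manifest directory and back (a sketch, assuming the default /etc/kubernetes/manifests path and a kubelet that is actually running; the reboot further below achieves the same effect):

[root@node1 ~]# mkdir -p /tmp/manifests-backup
[root@node1 ~]# mv /etc/kubernetes/manifests/*.yaml /tmp/manifests-backup/
[root@node1 ~]# sleep 30    # give kubelet time to stop the static pods
[root@node1 ~]# mv /tmp/manifests-backup/*.yaml /etc/kubernetes/manifests/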

Verify again:

[root@node1 ~]# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[check-expiration] Error reading configuration from the Cluster. Falling back to default configuration

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Aug 02, 2024 14:59 UTC   364d            ca                      no
apiserver                  Aug 02, 2024 14:59 UTC   364d            ca                      no
apiserver-etcd-client      Aug 02, 2024 14:59 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Aug 02, 2024 14:59 UTC   364d            ca                      no
controller-manager.conf    Aug 02, 2024 14:59 UTC   364d            ca                      no
etcd-healthcheck-client    Aug 02, 2024 14:59 UTC   364d            etcd-ca                 no
etcd-peer                  Aug 02, 2024 14:59 UTC   364d            etcd-ca                 no
etcd-server                Aug 02, 2024 14:59 UTC   364d            etcd-ca                 no
front-proxy-client         Aug 02, 2024 14:59 UTC   364d            front-proxy-ca          no
scheduler.conf             Aug 02, 2024 14:59 UTC   364d            ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Jul 11, 2032 15:36 UTC   8y              no
etcd-ca                 Jul 11, 2032 15:36 UTC   8y              no
front-proxy-ca          Jul 11, 2032 15:36 UTC   8y              no

但發(fā)現(xiàn)還是沒有:/etc/kubernetes/bootstrap-kubelet.conf? 繼續(xù)執(zhí)行

$ cd /etc/kubernetes/pki/
# The old certificates MUST be moved out of the way first; kubeadm skips any file that already exists:
$ mv {apiserver.crt,apiserver-etcd-client.key,apiserver-kubelet-client.crt,front-proxy-ca.crt,front-proxy-client.crt,front-proxy-client.key,front-proxy-ca.key,apiserver-kubelet-client.key,apiserver.key,apiserver-etcd-client.crt} /etc/kubernetes/pki/backup1
$ kubeadm init --apiserver-advertise-address=192.168.56.101 phase certs all
$ cd /etc/kubernetes/
# Likewise, the old kubeconfig files MUST be moved out of the way:
$ mv {admin.conf,controller-manager.conf,kubelet.conf,scheduler.conf} /etc/kubernetes/backup1
$ kubeadm init --apiserver-advertise-address=192.168.56.101 phase kubeconfig all
$ reboot
$ cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

The new kubelet.conf and ca.crt also have to be copied to the other nodes; for some reason the write-ups found on Google/Baidu all skip this step:

[root@node1 kubernetes]# scp -rp kubelet.conf node2:/etc/kubernetes
[root@node1 kubernetes]# scp -rp pki/ca.crt node2:/etc/kubernetes/pki
[root@node1 kubernetes]# scp -rp /etc/kubernetes/admin.conf node2:/root/.kube/config
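Once the files land on node2, kubelet there needs a restart to pick up the new kubeconfig and CA (a sketch; node2 is the worker node from the scp commands above):

[root@node1 kubernetes]# ssh node2 'systemctl restart kubelet && systemctl is-active kubelet'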

Finally, verify:
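Two quick checks confirm that kubelet is running again and the nodes are healthy (a minimal sketch; output omitted):

[root@node1 ~]# systemctl is-active kubelet
[root@node1 ~]# kubectl get nodes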
