
Notes on kube-controller-manager-k8s-master and kube-scheduler-k8s-master restarts in a k8s cluster


1. The errors are as follows

I0529 01:47:12.679312       1 event.go:307] "Event occurred" object="k8s-node-1" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="CIDRNotAvailable" message="Node k8s-node-1 status is now: CIDRNotAvailable"
E0529 01:48:44.516760       1 controller_utils.go:262] Error while processing Node Add/Delete: failed to allocate cidr from cluster cidr at idx:0: CIDR allocation failed; there are no remaining CIDRs left to allocate in the accepted range
I0529 01:48:44.516885       1 event.go:307] "Event occurred" object="k8s-master" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="CIDRNotAvailable" message="Node k8s-master status is now: CIDRNotAvailable"
E0529 01:49:28.020461       1 controller_utils.go:262] Error while processing Node Add/Delete: failed to allocate cidr from cluster cidr at idx:0: CIDR allocation failed; there are no remaining CIDRs left to allocate in the accepted range
I0529 01:49:28.020839       1 event.go:307] "Event occurred" object="k8s-node-2" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="CIDRNotAvailable" message="Node k8s-node-2 status is now: CIDRNotAvailable"

2. Probable cause: the cluster-cidr (pod network) and the service-cluster-ip-range chosen when the k8s cluster was initialized overlap

(Original kubeadm init parameters: --apiserver-advertise-address 10.19.3.15  --service-cidr 10.245.0.0/12  --pod-network-cidr 10.244.0.0/16)
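Why these two ranges clash: 10.245.0.0/12 is not aligned on a /12 boundary, so it really denotes the network 10.240.0.0/12 (10.240.0.0 through 10.255.255.255), which already contains the whole pod CIDR 10.244.0.0/16. Since the node IPAM controller reserves the service range inside the cluster CIDR, every per-node pod range is then considered occupied, which matches the "no remaining CIDRs left to allocate" error. Below is a minimal bash sketch (not part of the original setup, purely illustrative) that confirms the overlap:

#!/usr/bin/env bash
# Rough overlap check for the two ranges passed to kubeadm init (pure bash, no extra tools).
ip_to_int() {                        # "a.b.c.d" -> 32-bit integer
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}
cidr_range() {                       # "x.x.x.x/len" -> "first_addr last_addr" as integers
  local ip=${1%/*} len=${1#*/} base mask
  base=$(ip_to_int "$ip")
  mask=$(( (0xFFFFFFFF << (32 - len)) & 0xFFFFFFFF ))
  echo "$(( base & mask )) $(( (base & mask) | (0xFFFFFFFF >> len) ))"
}
read -r s1 e1 <<< "$(cidr_range 10.245.0.0/12)"   # --service-cidr from the original init command
read -r s2 e2 <<< "$(cidr_range 10.244.0.0/16)"   # --pod-network-cidr from the original init command
if (( s1 <= e2 && s2 <= e1 )); then
  echo "ranges overlap"              # this is what 10.245.0.0/12 vs 10.244.0.0/16 prints
else
  echo "ranges are disjoint"
fi

Picking a service CIDR that sits completely outside the pod CIDR (or vice versa) avoids the allocation failure in the first place.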

3. Fix: edit /etc/kubernetes/manifests/kube-controller-manager.yaml

- --cluster-cidr=10.96.0.0/16                # change this line

After saving the change there is no need to restart any component or the whole cluster: this is a static pod manifest, so the kubelet recreates the pod by itself after a short wait (see the checks below).
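A couple of commands to confirm the change was picked up (the file path and pod name follow the kubeadm defaults used in this article):

# confirm the flag is now present in the static pod manifest
grep -n 'cluster-cidr' /etc/kubernetes/manifests/kube-controller-manager.yaml

# watch the kubelet recreate the static pod after the manifest change
kubectl -n kube-system get pod kube-controller-manager-k8s-master -w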

Afterwards, kubectl logs kube-controller-manager-k8s-master -n kube-system shows no more errors.

(Before the change, the pod logs kept showing the errors above even though the pod itself appeared to run normally.)

4. No problems have been found since the change, but please verify it in a test environment first.

///

Update: a freshly built test environment still shows a similar problem

CentOS 7, Kubernetes 1.27.1, containerd 1.6.19 (the master is an ESXi virtual machine)

The cluster status was reported as healthy and system disk I/O also looked normal. Following the usual online advice (increasing heartbeat/lease timeouts, tuning disk parameters) did not solve it.

In the end the problem most likely lies in the storage backing this ESXi cluster, so the VM's disks were migrated to a different storage host. The kube-scheduler and kube-controller-manager components have now run for 24 hours without restarting; it cannot be completely ruled out yet, but this is very likely where the problem was.
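One way to measure whether the datastore really is the bottleneck is the fdatasync benchmark recommended by the etcd documentation. The sketch below assumes fio is installed and uses a scratch directory on the same disk that holds /var/lib/etcd:

mkdir -p /var/lib/etcd-fio-test
fio --name=etcd-disk-check --directory=/var/lib/etcd-fio-test \
    --rw=write --ioengine=sync --fdatasync=1 --size=22m --bs=2300
rm -rf /var/lib/etcd-fio-test

etcd generally wants the 99th percentile of fdatasync latency to stay around 10ms or less; results far above that point at the underlying ESXi datastore rather than at Kubernetes itself.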

The logs are attached below in the hope that they help someone hitting the same issue:

1. containerd log output

Jun 15 11:56:54 k8s-master containerd[34171]: time="2023-06-15T11:56:54.394405539+08:00" level=info msg="StopPodSandbox for \"219a26b5fa3428801e99f2fc9b801a503d547536f66b6e659b3e6083df9e9340\""
Jun 15 11:56:54 k8s-master containerd[34171]: time="2023-06-15T11:56:54.394537456+08:00" level=info msg="TearDown network for sandbox \"219a26b5fa3428801e99f2fc9b801a503d547536f66b6e659b3e6083df9e9340\" successfully"
Jun 15 11:56:54 k8s-master containerd[34171]: time="2023-06-15T11:56:54.394591107+08:00" level=info msg="StopPodSandbox for \"219a26b5fa3428801e99f2fc9b801a503d547536f66b6e659b3e6083df9e9340\" returns successfully"
Jun 15 11:56:54 k8s-master containerd[34171]: time="2023-06-15T11:56:54.395525689+08:00" level=info msg="RemovePodSandbox for \"219a26b5fa3428801e99f2fc9b801a503d547536f66b6e659b3e6083df9e9340\""
Jun 15 11:56:54 k8s-master containerd[34171]: time="2023-06-15T11:56:54.395780557+08:00" level=info msg="Forcibly stopping sandbox \"219a26b5fa3428801e99f2fc9b801a503d547536f66b6e659b3e6083df9e9340\""
Jun 15 11:56:54 k8s-master containerd[34171]: time="2023-06-15T11:56:54.396142546+08:00" level=info msg="TearDown network for sandbox \"219a26b5fa3428801e99f2fc9b801a503d547536f66b6e659b3e6083df9e9340\" successfully"
Jun 15 11:56:54 k8s-master containerd[34171]: time="2023-06-15T11:56:54.401661796+08:00" level=info msg="RemovePodSandbox \"219a26b5fa3428801e99f2fc9b801a503d547536f66b6e659b3e6083df9e9340\" returns successfully"
Jun 15 11:59:21 k8s-master containerd[34171]: time="2023-06-15T11:59:21.140016886+08:00" level=info msg="shim disconnected" id=8fd74e39f764d17344f0d5a0cfb92d6ade56421b0ce54d73d2971477d7a49eec
Jun 15 11:59:21 k8s-master containerd[34171]: time="2023-06-15T11:59:21.140641397+08:00" level=warning msg="cleaning up after shim disconnected" id=8fd74e39f764d17344f0d5a0cfb92d6ade56421b0ce54d73d2971477d7a49eec namespace=k8s.io
Jun 15 11:59:21 k8s-master containerd[34171]: time="2023-06-15T11:59:21.140835000+08:00" level=info msg="cleaning up dead shim"
Jun 15 11:59:21 k8s-master containerd[34171]: time="2023-06-15T11:59:21.152377826+08:00" level=warning msg="cleanup warnings time=\"2023-06-15T11:59:21+08:00\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=989621 runtime=io.containerd.runc.v2\n"
Jun 15 11:59:21 k8s-master containerd[34171]: time="2023-06-15T11:59:21.642769894+08:00" level=info msg="CreateContainer within sandbox \"2d27aa3e82f08d67ab9c6b8b821a324b86ba717b5d18c5729b382c488bd2f23f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jun 15 11:59:21 k8s-master containerd[34171]: time="2023-06-15T11:59:21.741636615+08:00" level=info msg="CreateContainer within sandbox \"2d27aa3e82f08d67ab9c6b8b821a324b86ba717b5d18c5729b382c488bd2f23f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"53940ebcbf87d9313bb88b1656bc189745b04414888305a7ec47aef9c55fcdaf\""
Jun 15 11:59:21 k8s-master containerd[34171]: time="2023-06-15T11:59:21.742513549+08:00" level=info msg="StartContainer for \"53940ebcbf87d9313bb88b1656bc189745b04414888305a7ec47aef9c55fcdaf\""
Jun 15 11:59:21 k8s-master containerd[34171]: time="2023-06-15T11:59:21.855170471+08:00" level=info msg="StartContainer for \"53940ebcbf87d9313bb88b1656bc189745b04414888305a7ec47aef9c55fcdaf\" returns successfully"
Jun 15 11:59:26 k8s-master containerd[34171]: time="2023-06-15T11:59:26.392743400+08:00" level=info msg="shim disconnected" id=8eaef5a52f673266d0e141ae17a2d12ee377b7f08ad4a3f65d77f3abe0902c45
Jun 15 11:59:26 k8s-master containerd[34171]: time="2023-06-15T11:59:26.392850972+08:00" level=warning msg="cleaning up after shim disconnected" id=8eaef5a52f673266d0e141ae17a2d12ee377b7f08ad4a3f65d77f3abe0902c45 namespace=k8s.io
Jun 15 11:59:26 k8s-master containerd[34171]: time="2023-06-15T11:59:26.392869777+08:00" level=info msg="cleaning up dead shim"
Jun 15 11:59:26 k8s-master containerd[34171]: time="2023-06-15T11:59:26.405071189+08:00" level=warning msg="cleanup warnings time=\"2023-06-15T11:59:26+08:00\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=989732 runtime=io.containerd.runc.v2\n"
Jun 15 11:59:26 k8s-master containerd[34171]: time="2023-06-15T11:59:26.665183619+08:00" level=info msg="CreateContainer within sandbox \"24ebd020c4398151bdd87a97849fe02c4880f88bf132407e07ed5fad7c088932\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jun 15 11:59:26 k8s-master containerd[34171]: time="2023-06-15T11:59:26.719306795+08:00" level=info msg="CreateContainer within sandbox \"24ebd020c4398151bdd87a97849fe02c4880f88bf132407e07ed5fad7c088932\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"e9f21ce36cace252a65544577beda1c6573e7473afc25d059df5d9234f18308b\""
Jun 15 11:59:26 k8s-master containerd[34171]: time="2023-06-15T11:59:26.720429540+08:00" level=info msg="StartContainer for \"e9f21ce36cace252a65544577beda1c6573e7473afc25d059df5d9234f18308b\""
Jun 15 11:59:26 k8s-master containerd[34171]: time="2023-06-15T11:59:26.834912477+08:00" level=info msg="StartContainer for \"e9f21ce36cace252a65544577beda1c6573e7473afc25d059df5d9234f18308b\" returns successfully"
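The containerd messages above only show the old shim disappearing and a new container being created; they do not say why the previous kube-controller-manager and kube-scheduler containers exited. The checks below (assuming crictl is configured against this containerd instance) can recover the exit reason:

# exit code and reason of the previous container
kubectl -n kube-system describe pod kube-controller-manager-k8s-master | grep -A 7 'Last State'

# exited containers still known to containerd on this node
crictl ps -a | grep -E 'kube-controller-manager|kube-scheduler'

# recent events in kube-system, sorted by time
kubectl -n kube-system get events --sort-by=.metadata.creationTimestamp | tail -n 20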

2. etcd log output

{"level":"warn","ts":"2023-06-15T03:11:46.440Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"647.783649ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1773161178989142284 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/kube-controller-manager\" mod_revision:274019 > success:<request_put:<key:\"/registry/leases/kube-system/kube-controller-manager\" value_size:433 >> failure:<request_range:<key:\"/registry/leases/kube-system/kube-controller-manager\" > >>","response":"size:18"}
{"level":"info","ts":"2023-06-15T03:11:46.440Z","caller":"traceutil/trace.go:171","msg":"trace[1077021032] linearizableReadLoop","detail":"{readStateIndex:311409; appliedIndex:311408; }","duration":"393.007455ms","start":"2023-06-15T03:11:46.047Z","end":"2023-06-15T03:11:46.440Z","steps":["trace[1077021032] 'read index received' ?(duration: 42.34μs)","trace[1077021032] 'applied index is now lower than readState.Index' ?(duration: 392.963682ms)"],"step_count":2}
{"level":"warn","ts":"2023-06-15T03:11:46.440Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"393.171763ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:6"}
{"level":"info","ts":"2023-06-15T03:11:46.440Z","caller":"traceutil/trace.go:171","msg":"trace[2049818894] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:274023; }","duration":"393.252264ms","start":"2023-06-15T03:11:46.047Z","end":"2023-06-15T03:11:46.440Z","steps":["trace[2049818894] 'agreement among raft nodes before linearized reading' ?(duration: 393.104647ms)"],"step_count":1}
{"level":"warn","ts":"2023-06-15T03:11:46.440Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-06-15T03:11:46.047Z","time spent":"393.363051ms","remote":"127.0.0.1:48410","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":30,"request content":"key:\"/registry/health\" "}
{"level":"info","ts":"2023-06-15T03:11:46.440Z","caller":"traceutil/trace.go:171","msg":"trace[499563569] transaction","detail":"{read_only:false; response_revision:274023; number_of_response:1; }","duration":"709.954696ms","start":"2023-06-15T03:11:45.730Z","end":"2023-06-15T03:11:46.440Z","steps":["trace[499563569] 'process raft request' ?(duration: 61.343055ms)","trace[499563569] 'compare' ?(duration: 647.534581ms)"],"step_count":2}
{"level":"warn","ts":"2023-06-15T03:11:46.440Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-06-15T03:11:45.730Z","time spent":"710.061185ms","remote":"127.0.0.1:48342","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":493,"response count":0,"response size":42,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/kube-controller-manager\" mod_revision:274019 > success:<request_put:<key:\"/registry/leases/kube-system/kube-controller-manager\" value_size:433 >> failure:<request_range:<key:\"/registry/leases/kube-system/kube-controller-manager\" > >"}
{"level":"info","ts":"2023-06-15T03:16:07.151Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":273955}
{"level":"info","ts":"2023-06-15T03:16:07.153Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":273955,"took":"1.552321ms","hash":1950790470}
{"level":"info","ts":"2023-06-15T03:16:07.153Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1950790470,"revision":273955,"compact-revision":273469}
{"level":"info","ts":"2023-06-15T03:20:43.437Z","caller":"traceutil/trace.go:171","msg":"trace[126699866] transaction","detail":"{read_only:false; response_revision:274895; number_of_response:1; }","duration":"307.20387ms","start":"2023-06-15T03:20:43.130Z","end":"2023-06-15T03:20:43.437Z","steps":["trace[126699866] 'process raft request' ?(duration: 307.036766ms)"],"step_count":1}
{"level":"warn","ts":"2023-06-15T03:20:43.438Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-06-15T03:20:43.130Z","time spent":"307.376032ms","remote":"127.0.0.1:48342","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":465,"response count":0,"response size":42,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/kube-scheduler\" mod_revision:274893 > success:<request_put:<key:\"/registry/leases/kube-system/kube-scheduler\" value_size:414 >> failure:<request_range:<key:\"/registry/leases/kube-system/kube-scheduler\" > >"}
{"level":"warn","ts":"2023-06-15T03:21:05.415Z","caller":"etcdserver/v3_server.go:840","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":1773161178989145684,"retry-timeout":"500ms"}文章來(lái)源地址http://www.zghlxwxcb.cn/news/detail-705774.html
