
Unable to connect to the server: x509: certificate has expired or is not yet valid

This article describes how to resolve the error "Unable to connect to the server: x509: certificate has expired or is not yet valid" on a kubeadm-managed Kubernetes cluster. Hopefully it is helpful; if anything here is wrong or incomplete, corrections are welcome.

Manually renew all certificates by running:

kubeadm alpha certs renew all
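
The alpha subcommand above matches the kubeadm v1.18 used on this cluster. On newer kubeadm releases (roughly v1.19 and later) the certificate commands have graduated out of alpha, so the equivalent calls would look something like the following (a sketch; verify against kubeadm certs --help on your version):

kubeadm certs check-expiration
kubeadm certs renew all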

Regenerate the user kubeconfig files:

kubeadm alpha kubeconfig user --client-name=admin
kubeadm alpha kubeconfig user --org system:masters --client-name kubernetes-admin  > /etc/kubernetes/admin.conf
kubeadm alpha kubeconfig user --client-name system:kube-controller-manager > /etc/kubernetes/controller-manager.conf
kubeadm alpha kubeconfig user --org system:nodes --client-name system:node:$(hostname) > /etc/kubernetes/kubelet.conf
kubeadm alpha kubeconfig user --client-name system:kube-scheduler > /etc/kubernetes/scheduler.conf
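
These kubeadm alpha kubeconfig user commands print a fresh kubeconfig to stdout, which is why every call except the first redirects its output into /etc/kubernetes/. Note that kubeadm alpha certs renew all already renews the client certificates embedded in admin.conf, controller-manager.conf and scheduler.conf (the renew output further down confirms this), so the explicit regeneration mainly matters for kubelet.conf and for issuing a brand-new admin credential. On newer kubeadm versions a rough equivalent is the kubeconfig init phase, sketched below (assumption: the phase leaves existing kubeconfig files in place, so move the old ones aside first; the .bak name is just an example):

mv /etc/kubernetes/admin.conf /etc/kubernetes/admin.conf.bak
kubeadm init phase kubeconfig admin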

Replace /root/.kube/config with the renewed admin.conf:

cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
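
Before overwriting, it can be worth keeping a copy of the old kubeconfig and verifying the result right away; a minimal sketch (the .bak file name is arbitrary):

cp $HOME/.kube/config $HOME/.kube/config.bak    # optional backup of the expired kubeconfig
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
kubectl get nodes                               # should now succeed against the renewed certificates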
[root@hadoop101 ~]# kubectl get nodes
Unable to connect to the server: x509: certificate has expired or is not yet valid
[root@hadoop101 ~]# cd /etc/kubernetes/pki
[root@hadoop101 pki]# openssl x509 -in apiserver.crt -noout -text |grep ' Not '
            Not Before: Aug  7 13:30:11 2021 GMT
            Not After : Aug  7 13:30:11 2022 GMT
[root@hadoop101 pki]# kubeadm alpha certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[check-expiration] Error reading configuration from the Cluster. Falling back to default configuration

W0220 23:39:44.971317   11117 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Aug 07, 2022 13:30 UTC   <invalid>                               no
apiserver                  Aug 07, 2022 13:30 UTC   <invalid>       ca                      no
apiserver-etcd-client      Aug 07, 2022 13:30 UTC   <invalid>       etcd-ca                 no
apiserver-kubelet-client   Aug 07, 2022 13:30 UTC   <invalid>       ca                      no
controller-manager.conf    Aug 07, 2022 13:30 UTC   <invalid>                               no
etcd-healthcheck-client    Aug 07, 2022 13:30 UTC   <invalid>       etcd-ca                 no
etcd-peer                  Aug 07, 2022 13:30 UTC   <invalid>       etcd-ca                 no
etcd-server                Aug 07, 2022 13:30 UTC   <invalid>       etcd-ca                 no
front-proxy-client         Aug 07, 2022 13:30 UTC   <invalid>       front-proxy-ca          no
scheduler.conf             Aug 07, 2022 13:30 UTC   <invalid>                               no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Aug 05, 2031 13:30 UTC   8y              no
etcd-ca                 Aug 05, 2031 13:30 UTC   8y              no
front-proxy-ca          Aug 05, 2031 13:30 UTC   8y              no
[root@hadoop101 pki]# kubeadm alpha certs renew all
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[renew] Error reading configuration from the Cluster. Falling back to default configuration

W0220 23:41:15.686121   11419 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed
[root@hadoop101 pki]# ll
total 56
-rw-r--r-- 1 root root 1220 Feb 20 23:41 apiserver.crt
-rw-r--r-- 1 root root 1090 Feb 20 23:41 apiserver-etcd-client.crt
-rw------- 1 root root 1675 Feb 20 23:41 apiserver-etcd-client.key
-rw------- 1 root root 1679 Feb 20 23:41 apiserver.key
-rw-r--r-- 1 root root 1099 Feb 20 23:41 apiserver-kubelet-client.crt
-rw------- 1 root root 1675 Feb 20 23:41 apiserver-kubelet-client.key
-rw-r--r-- 1 root root 1025 Aug  7  2021 ca.crt
-rw------- 1 root root 1679 Aug  7  2021 ca.key
drwxr-xr-x 2 root root  162 Aug  7  2021 etcd
-rw-r--r-- 1 root root 1038 Aug  7  2021 front-proxy-ca.crt
-rw------- 1 root root 1675 Aug  7  2021 front-proxy-ca.key
-rw-r--r-- 1 root root 1058 Feb 20 23:41 front-proxy-client.crt
-rw------- 1 root root 1679 Feb 20 23:41 front-proxy-client.key
-rw------- 1 root root 1679 Aug  7  2021 sa.key
-rw------- 1 root root  451 Aug  7  2021 sa.pub
[root@hadoop101 pki]# openssl x509 -in apiserver.crt -noout -text |grep ' Not '
            Not Before: Aug  7 13:30:11 2021 GMT
            Not After : Feb 20 15:41:16 2024 GMT
[root@hadoop101 pki]# cd /etc/kubernetes
[root@hadoop101 kubernetes]# kubeadm alpha kubeconfig user --client-name=admin
I0220 23:44:51.209752   12126 version.go:252] remote version is much newer: v1.26.1; falling back to: stable-1.18
W0220 23:44:52.663332   12126 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EZ3dOekV6TXpBeE1Wb1hEVE14TURnd05URXpNekF4TVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTmo4CkgrZW5lSEdWek9KTUJyK3dhQkxoOVhtTXVqSFpXcmVWT0ZxT0EvMENPdnllSEJ4dkJqNGM4SjRNeEJQd0R1NmQKMmEydXdKbHZpUm1ndWJIMllyT3VvY1hMem5jUUV4S2tzeG5KdzhocHVpMVAxK2tYd2hpaWFlRjhib25iTnFMWgovZ0RVQnB6aUI4ZkxGSE1lTXBmS3lYc0pWMjcxNG4xUnhVb1VobFlJaHpaNWZ2TlFYZUVXRkZxM1k2Rm5IUEFBClVoNUxhakNzc21NSVpuL0E4UUU5QlpqYTFXeEhlcHNGeEJoUTRiUFhnWXRWa0UwbEF0MzFnNVV4blhHSEU3OCsKazFNc3lvK0dOdVJRS2JUZHRkQ0Q2UzFwOWhnQ0JFSmdSeTFWOFV6ZkhabXZFVCttMk9WME9EY2w0SkxHWng4RApwbVZMa2J0bldhQ1lYOElxcVRjQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHdnB5cjNTQzRCRy9wTDA3b0pNMFFKWTkrZHIKWHhkYkROWS9oQmRzRXpiZWRkQnptZ0hnekF1eUN1UDE1SmZRSVA3YmxUZjdMTGF5Qk9EOHRPQ3dXdGhvM1VZNgpMSjN3TnRzTmE2MEFlaDd6aVVFaEVWdWJLekJFOEZLTUNMeitOZzk2YSsxRUhjeWZjU2YybTVaeFFUNWxaRk90CllpMjUzdFQzckpVMFdWcmtCeE1KOXY5YkxlMkpKWFVtLzdzSWJCWmVpdExXTEhNdkhYMDZMUWkwbHhxWGVhU3YKY0tLaEJCeGN1Q3poRk4wKzFxODVFcVJkS2JvTVQ1K1hRV0s1TGVORHd2K1luQ1k5Z0FHS0NCM2JwZ3JaTXdEdgozTFJTMC9ZeC8yWFdOZDZUNklYSlk5SWhYTitrK3ozYzhpbVFzTWtHWVVwWDVFbnNtQk5CUUplaGkwTT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.10.101:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: admin
  name: admin@kubernetes
current-context: admin@kubernetes
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN6akNDQWJhZ0F3SUJBZ0lJWTJnV2ZpTEpBbmN3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRBNE1EY3hNek13TVRGYUZ3MHlOREF5TWpBeE5UUTBOVE5hTUJBeApEakFNQmdOVkJBTVRCV0ZrYldsdU1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBCnM3enpDb2oyZndGUEJQeVFtZHVoYjJ6Z1M5VktUdjhyVDM0eDhGYmhjREQ4cDJKUjdKWUhaSXU4Z0MrS09xa2EKcXRLTUZlUnRoN3UrQnFTUWdqQVBrdm5lREJ3Z2lyWnQ0Vnp5WFpZWUdHS3dQMVNpVnB1KzBERUpQMVNhYzhIVApWdDk0eHhjc09zTC95alYyZjRIY1R1NlpjNDB6Q1VETm1hR3ZpNzYxV0hUdEpzOE1iN01QVU9lNkdqN2dXR0xVCjRpcVlPbm1Pa2IzeUJuVUxIbjZKZVV2RVVJZERMMFZsOVYxc0RQLys0dmc3U0lKMEtjNE5MT01YQWprMzVUTVIKYmhVeW04SzdDK1p1cnNUUjlpbnNTejMzT1B6YVVnYzFSSGl3SFpFc3V1UWQweGh3clR0OW9sdFgveXNrYUplMwpkVW1jeTN3a0RUU2N5b2o3MmdndFRRSURBUUFCb3ljd0pUQU9CZ05WSFE4QkFmOEVCQU1DQmFBd0V3WURWUjBsCkJBd3dDZ1lJS3dZQkJRVUhBd0l3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUNiMUtwbHBxT2w4VHhzaUVwZHMKaEFxWms1TTlRV1hiZUxvWGkxMm1kNVQ1ZVRxR2FYa1ltYWVHM3k5WFN4UGhUSy9WdUx1NlNtUmc1THJPbThsYgpIL3AyOGw0bDA4SzRWTjAxTWI4TlM2OXlWcXN0KzIrakgzZEpuSkYxSlEwNlBMQjdpdjVVQXJGRVlmdGlhVFJtCnJxVFpoNndyUVBMd1dnWEQya3NzaXVreEdGTitOS3pqMGRtOTloZUtsdzhzdjVhWXVJNlpCR1NCYkY1cmJ3MW8KeU01R3laMDJPVWVzSEFXSUxHN2NGZnRMb1Y1R1NRMUt4YTV3VCtBa0RJbVAzOFFwSDhGdG45dEJ2TWFyb3RyNgozSTFhaFd4a3o0UGhuaTJVMGIvM2plTWJENnlIaVIwVmkyRFEzbnRCN2ovM2hHdUYycnJYaG45dXdoOWJ3QTFNCnhOcz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBczd6ekNvajJmd0ZQQlB5UW1kdWhiMnpnUzlWS1R2OHJUMzR4OEZiaGNERDhwMkpSCjdKWUhaSXU4Z0MrS09xa2FxdEtNRmVSdGg3dStCcVNRZ2pBUGt2bmVEQndnaXJadDRWenlYWllZR0dLd1AxU2kKVnB1KzBERUpQMVNhYzhIVFZ0OTR4eGNzT3NML3lqVjJmNEhjVHU2WmM0MHpDVURObWFHdmk3NjFXSFR0SnM4TQpiN01QVU9lNkdqN2dXR0xVNGlxWU9ubU9rYjN5Qm5VTEhuNkplVXZFVUlkREwwVmw5VjFzRFAvKzR2ZzdTSUowCktjNE5MT01YQWprMzVUTVJiaFV5bThLN0MrWnVyc1RSOWluc1N6MzNPUHphVWdjMVJIaXdIWkVzdXVRZDB4aHcKclR0OW9sdFgveXNrYUplM2RVbWN5M3drRFRTY3lvajcyZ2d0VFFJREFRQUJBb0lCQVFDVDJidzdVRXNrVWxsRQpDdGFRR2NFRVBaV01DOW5pZmJpTTNZd0szZ3o0RXZQaVpOaHJPMGE5aU16NHpTSngrcVQ3RzlNc053bDZmQTltCnUzdzcrM2owT0NKVjU5VkZCYWdCbUVtdWZrYzMyQWFQTWZtUU1QR1hwSjZzdjlXRm4wMVB5dWc1TFhDdXJiVm8KQ3U1OUdMKzNGa0tZY1BBb2ptd1NFcFNxNmFlWEtNYjFKTDlKL1BlQTQzdlZnQXVlbm5aRFdGWG14OWVoZERxZAp0N21PdVMxRTNEWUZLaitZaTEvWGpQRVBteGRCSmFXWW16YStoN1czd0lUblJFbW1Ob0FlR3F2aFB2Q3RKb2hzCkEvK24vMWY3cEs2RG04cWgyYnFNNEhUdmR0K2hsVmVnMHV4Wi9oRjZMV3JPam41eFlaWU5PV05oY0pveG9LdTgKazZaRlFFckJBb0dCQU5rcUxidlVzRk1tbDJVYzZLRUVGUkNWbzlRWTVhT2pnK05WQ1cxWEF4V2Y0Q2xtb2Y3UAo1REl6aXNEYVJjUXZ1bmQ1RWdXKzl3MlcvVzcweWNOOWY4a1pJTnJiTDd6cFJYS24zM01ocFdBVytrL1RjT3lECjZXek9oaFRhY1ppblNhQnhKKzJCOTZCVEFnRFluRUtJT25lQTE3VytjVmNVWWVhL1l2V0NjTHRKQW9HQkFOUGgKWDg5VEd2Y1BEVVNlL2I4Q0RCUkcxYTdRVzZkWGNFdlQxa1pKc2tWalpsUjNMNCtZMVhUNGc2T0RJWWNIWnZPQgpMWEM4WkI0a0ZJZzFnaVFjeU9wMUl4a2VMUUJnVHVIYjkxVFd5R3BoS2hZZGJDMmpaVTR1RTZZRjVTVlJHQXhIClA5dFk0UEZxSXZ1UEJib1h3SjkwdWs2aVdYZjRUbDU5elpBVHpuM2xBb0dCQUw4ZmZleUhQVCtSQVVEOTlrWnYKWDFLZlAvWVVpMVkvUEgzQWczRjFXTU9aVnlGWXNFMmdEVWVaVVE1MWkxMGtYRWwxaGtVRVVrM2xpdG95R2JneApKVnVJLy85ZFZHQkFOTnk1bmRDbjFmSUJodjdtS2NZZU9qdUdiejYvR2FhdDVBQ09WZ09UbEtuSEpFWTJYUis0CjRTdjNldUQ2NEtrd3lSRFpjM0I3QWxmeEFvR0FKNkJ5QzlOdUtxQzlDWVYyelo5elpPTnVtWGhNZS9xbGZQa00KalM3QlVhcnFlNGVpOUlkUC9NVngwVVg0SWtubkhrbWRsd1VVOEhJdENPQ0JDNEg2cmFia3ZwRGZONy9MWVFDRAp2SEZESUdvMXRkY2c0VlE2NFNsSzhYVU95ekRrZjM5ZjJRVkJaTVZGNzZockdNZlNkY0FlREJEZkRNbjYxajlQCkQ5QTBnV1VDZ1lCVGdicGFmVkpwWVA5UVU2YjdzV0Jpc3A1UDJZOHBJOGlNWW53ZUI4eGcvT2dlU0lEWk4yNUgKS2hKUW5IU0tMdFBWWGtPVXgrbG9LWnpudTBxZjhpMUhnU3dIdGpiZjNib1FhNGdmcFJOWVZFQlYzTGJ1eldTdApKVU5UR0FBRE9OV0lyMDN5RDZCMlNCMk9FUFpxYUNqUzVPZTZzV0RXalJvQ2Z0NmtGeU9nK2c9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=

[root@hadoop101 kubernetes]# kubeadm alpha kubeconfig user --org system:masters --client-name kubernetes-admin  > /etc/kubernetes/admin.conf
I0220 23:44:56.000673   12148 version.go:252] remote version is much newer: v1.26.1; falling back to: stable-1.18
W0220 23:44:57.284954   12148 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[root@hadoop101 kubernetes]# kubeadm alpha kubeconfig user --client-name system:kube-controller-manager > /etc/kubernetes/controller-manager.conf
I0220 23:44:58.420659   12164 version.go:252] remote version is much newer: v1.26.1; falling back to: stable-1.18
W0220 23:44:59.647838   12164 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[root@hadoop101 kubernetes]# kubeadm alpha kubeconfig user --org system:nodes --client-name system:node:$(hostname) > /etc/kubernetes/kubelet.conf
I0220 23:45:00.850679   12187 version.go:252] remote version is much newer: v1.26.1; falling back to: stable-1.18
W0220 23:45:02.317471   12187 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[root@hadoop101 kubernetes]# kubeadm alpha kubeconfig user --client-name system:kube-scheduler > /etc/kubernetes/scheduler.conf
I0220 23:45:06.973889   12208 version.go:252] remote version is much newer: v1.26.1; falling back to: stable-1.18
W0220 23:45:07.855108   12208 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[root@hadoop101 kubernetes]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: overwrite '/root/.kube/config'? y
[root@hadoop101 kubernetes]# kubectl get nodes
NAME        STATUS   ROLES    AGE    VERSION
hadoop101   Ready    master   562d   v1.18.0
hadoop102   Ready    <none>   562d   v1.18.0
hadoop103   Ready    <none>   562d   v1.18.0
[root@hadoop101 kubernetes]# systemctl restart kube-apiserver
Failed to restart kube-apiserver.service: Unit not found.
[root@hadoop101 kubernetes]# systemctl restart kube-apiserver
Failed to restart kube-apiserver.service: Unit not found.
[root@hadoop101 kubernetes]# kubectl -n david-test get po -o wide
No resources found in david-test namespace.
[root@hadoop101 kubernetes]# kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE    IP           NODE        NOMINATED NODE   READINESS GATES
nginx-f89759699-qnfxv   1/1     Running   0          562d   10.244.2.3   hadoop103   <none>           <none>
[root@hadoop101 kubernetes]# kubectl get modes
error: the server doesn't have a resource type "modes"
[root@hadoop101 kubernetes]# kubectl get nodes
NAME        STATUS   ROLES    AGE    VERSION
hadoop101   Ready    master   562d   v1.18.0
hadoop102   Ready    <none>   562d   v1.18.0
hadoop103   Ready    <none>   562d   v1.18.0
[root@hadoop101 kubernetes]# kubectl get po -o wide
NAME                    READY   STATUS    RESTARTS   AGE    IP           NODE        NOMINATED NODE   READINESS GATES
nginx-f89759699-qnfxv   1/1     Running   0          562d   10.244.2.3   hadoop103   <none>           <none>
[root@hadoop101 kubernetes]#
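
The "Failed to restart kube-apiserver.service: Unit not found" errors above are expected: on a kubeadm cluster the API server, controller manager, scheduler and etcd run as static pods managed by the kubelet, not as systemd units, so there is no kube-apiserver.service. If the control-plane components still serve the old certificates after renewal, the kubeadm documentation suggests restarting the static pods by temporarily moving their manifests out of the manifest directory; a sketch using the kubeadm default paths (manifests-backup is just a scratch directory):

mkdir -p /etc/kubernetes/manifests-backup
mv /etc/kubernetes/manifests/*.yaml /etc/kubernetes/manifests-backup/
sleep 20    # give the kubelet time to notice the missing manifests and stop the pods
mv /etc/kubernetes/manifests-backup/*.yaml /etc/kubernetes/manifests/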

Official Kubernetes documentation on renewing certificates with kubeadm: https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/
Additional reference: https://www.cnblogs.com/00986014w/p/13095628.html
