Preface:
While deploying KubeSphere, the Kubernetes cluster version did not match the KubeSphere version, so I wanted to roll back and redeploy. However, the namespace kubesphere-system could not be removed by the usual deletion method and stayed stuck in the Terminating state:
[root@centos1 ~]# kubectl get ns
NAME STATUS AGE
default Active 12h
kube-flannel Active 95m
kube-node-lease Active 12h
kube-public Active 12h
kube-system Active 12h
kubesphere-system Terminating 27m
The new deployment could not proceed because the namespace was still stuck in the deleting state:
[root@centos1 ~]# kubectl apply -f kubesphere-installer.yaml
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io created
Warning: Detected changes to resource kubesphere-system which is currently being deleted.
namespace/kubesphere-system unchanged
clusterrole.rbac.authorization.k8s.io/ks-installer configured
clusterrolebinding.rbac.authorization.k8s.io/ks-installer unchanged
Error from server (Forbidden): error when creating "kubesphere-installer.yaml": serviceaccounts "ks-installer" is forbidden: unable to create new content in namespace kubesphere-system because it is being terminated
Error from server (Forbidden): error when creating "kubesphere-installer.yaml": deployments.apps "ks-installer" is forbidden: unable to create new content in namespace kubesphere-system because it is being terminated
Concretely, the delete command simply hangs:
[root@centos1 ~]# kubectl delete ns kubesphere-system
namespace "kubesphere-system" deleted
^C
[root@centos1 ~]# kubectl delete ns kubesphere-system
namespace "kubesphere-system" deleted
^C
Below is a fairly detailed walkthrough of the troubleshooting and the final solution.
Solution One
This one is a little embarrassing, but it is also the most conventional move: 90% of problems can be fixed by restarting the service, and 99% by rebooting the server. Unfortunately, this abnormal namespace state falls into the remaining 1%.
Restarting the service and rebooting the server need no further explanation; this approach did not work.
Solution Two
Add the force-delete flags to the delete command:
kubectl delete ns kubesphere-system --force --grace-period=0
The result was disappointing; the deletion still did not complete.
The command thoughtfully (and uselessly) prints a warning that this is an immediate deletion and will not wait for the Terminating state to finish, yet it makes no difference:
[root@centos1 ~]# kubectl delete ns kubesphere-system --force --grace-period=0
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
namespace "kubesphere-system" force deleted
Solution Three
By now it is clear that the namespace cannot be removed by ordinary means, which leaves two options: delete it directly from etcd, or delete it through the kube-apiserver API.
Deleting directly from etcd carries real risk, so the API route is used here; the etcd variant is sketched below only for reference.
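For completeness, the etcd route would look roughly like the following. This is only a sketch: the endpoint and certificate paths are assumptions for a kubeadm-style cluster, not values taken from this incident, and bypassing the apiserver like this is exactly the risk being avoided in this article.
# Risky path, shown for reference only: delete the namespace key straight out of etcd.
# Endpoint and certificate paths are assumptions for a kubeadm cluster; adjust to your environment.
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  del /registry/namespaces/kubesphere-system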
Step 1: Export the namespace definition as JSON:
kubectl get ns kubesphere-system -o json > /tmp/kubesphere.json
The key parts of the file are as follows:
"spec": {
"finalizers": [
"kubernetes"
]
},
"apiVersion": "v1",
"kind": "Namespace",
"metadata": {
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"kubesphere-system\"}}\n"
},
"creationTimestamp": "2023-06-29T15:16:45Z",
"deletionGracePeriodSeconds": 0,
"deletionTimestamp": "2023-06-30T04:28:31Z",
"finalizers": [
"finalizers.kubesphere.io/namespaces"
],
After removing the finalizers fields:
"apiVersion": "v1",
"kind": "Namespace",
"metadata": {
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"kubesphere-system\"}}\n"
},
"creationTimestamp": "2023-06-29T15:16:45Z",
"deletionGracePeriodSeconds": 0,
"spec": {
},
About the finalizers field:
The finalizers field is part of Kubernetes garbage collection. It is a deletion interception mechanism that lets controllers implement asynchronous pre-delete callbacks. It lives in the metadata of every resource object and is declared as []string in the Kubernetes source; the slice holds the names of the finalizers that still have to run.
The first delete request against an object that carries finalizers sets its metadata.deletionTimestamp but does not actually remove the object. Once that timestamp is set, entries in the finalizers list can only be removed, not added.
When metadata.deletionTimestamp is set, the controllers watching the object execute the finalizers they are responsible for, polling the object for updates as they go. Once every finalizer has run, the resource is deleted.
The value of metadata.deletionGracePeriodSeconds controls the polling interval for those updates.
Each controller is responsible for removing its own finalizer from the list.
As each finalizer completes it is removed from finalizers; only when the list is empty is the owning resource actually deleted.
Therefore, all we need to do is delete the finalizers fields from the exported JSON. (Sometimes only spec contains finalizers, sometimes both spec and metadata do; in any case remove every finalizers entry. Usually only spec has one.) A scripted shortcut is sketched right after this paragraph.
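If jq is installed and your kubectl supports the --raw flag (both are assumptions on my part, not part of the original walkthrough), the hand-editing plus the proxy/curl steps below can be collapsed into one short pipeline; a minimal sketch:
# Export the namespace, strip every finalizers field, and push the result
# straight to the namespace's finalize subresource.
kubectl get ns kubesphere-system -o json \
  | jq 'del(.spec.finalizers) | del(.metadata.finalizers)' \
  > /tmp/kubesphere-clean.json
kubectl replace --raw "/api/v1/namespaces/kubesphere-system/finalize" -f /tmp/kubesphere-clean.json
The manual route that follows does the same thing but makes every step visible, which is why it is kept here.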
Step 2: Place the edited JSON file in root's home directory and start a proxy to the apiserver:
[root@centos1 ~]# kubectl proxy --port=8001
Starting to serve on 127.0.0.1:8001
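As an optional sanity check (my addition, not part of the original steps), the proxy should now serve the namespace object, still showing it as Terminating:
# Should return the kubesphere-system namespace object through the local proxy.
curl -s http://127.0.0.1:8001/api/v1/namespaces/kubesphere-system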
Step 3: Open a new shell window and call the API to finalize the deletion:
curl -k -H "Content-Type: application/json" -X PUT --data-binary @kubesphere.json http://127.0.0.1:8001/api/v1/namespaces/kubesphere-system/finalize
Output like the following indicates the deletion succeeded:
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "kubesphere-system",
    "uid": "7a1c9fed-dbe3-4d65-9f57-db93f7a358f7",
    "resourceVersion": "18113",
    "creationTimestamp": "2023-06-24T02:27:18Z",
    "deletionTimestamp": "2023-06-24T02:28:29Z",
    "labels": {
      "kubernetes.io/metadata.name": "kubesphere-system"
    },
    "annotations": {
      "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"kubesphere-system\"}}\n"
    },
    "managedFields": [
      {
        "manager": "kubectl-client-side-apply",
        "operation": "Update",
        "apiVersion": "v1",
        "time": "2023-06-24T02:27:18Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:kubernetes.io/metadata.name":{}}}}
      },
      {
        "manager": "kube-controller-manager",
        "operation": "Update",
        "apiVersion": "v1",
        "time": "2023-06-24T02:28:35Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {"f:status":{"f:conditions":{".":{},"k:{\"type\":\"NamespaceContentRemaining\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"NamespaceDeletionContentFailure\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"NamespaceDeletionDiscoveryFailure\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"NamespaceDeletionGroupVersionParsingFailure\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"NamespaceFinalizersRemaining\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}},
        "subresource": "status"
      }
    ]
  },
  "spec": {
  },
  "status": {
    "phase": "Terminating",
    "conditions": [
      {
        "type": "NamespaceDeletionDiscoveryFailure",
        "status": "True",
        "lastTransitionTime": "2023-06-24T02:28:34Z",
        "reason": "DiscoveryFailed",
        "message": "Discovery failed for some groups, 1 failing: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
      },
      {
        "type": "NamespaceDeletionGroupVersionParsingFailure",
        "status": "False",
        "lastTransitionTime": "2023-06-24T02:28:35Z",
        "reason": "ParsedGroupVersions",
        "message": "All legacy kube types successfully parsed"
      },
      {
        "type": "NamespaceDeletionContentFailure",
        "status": "False",
        "lastTransitionTime": "2023-06-24T02:28:35Z",
        "reason": "ContentDeleted",
        "message": "All content successfully deleted, may be waiting on finalization"
      },
      {
        "type": "NamespaceContentRemaining",
        "status": "False",
        "lastTransitionTime": "2023-06-24T02:28:35Z",
        "reason": "ContentRemoved",
        "message": "All content successfully removed"
      },
      {
        "type": "NamespaceFinalizersRemaining",
        "status": "False",
        "lastTransitionTime": "2023-06-24T02:28:35Z",
        "reason": "ContentHasNoFinalizers",
        "message": "All content-preserving finalizers finished"
      }
    ]
  }
}
Verify that the Terminating namespace kubesphere-system has been removed:
[root@centos1 ~]# kubectl get ns
NAME STATUS AGE
default Active 13h
kube-flannel Active 104m
kube-node-lease Active 13h
kube-public Active 13h
kube-system Active 13h
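With the namespace gone, the installer manifest from the preface can be applied again (the output is not reproduced here, since it depends on the environment):
kubectl apply -f kubesphere-installer.yaml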
Note: this is what the API call returns when the namespace name is mistyped (in my excitement, kubesphere-system was typed as kubespheer-system):
[root@centos1 ~]# curl -k -H "Content-Type: application/json" -X PUT --data-binary @kubesphere.json http://127.0.0.1:8001/api/v1/namespaces/kubespheer-system/finalize
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "the name of the object (kubesphere-system) does not match the name on the URL (kubespheer-system)",
  "reason": "BadRequest",
  "code": 400
}
By the way, a pod that refuses to be deleted can sometimes be handled with the same idea; I will expand on that the next time I actually run into one. A rough sketch of what I would try is below.
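This is only a hedged sketch under the assumption that the pod is likewise being held back by finalizers; the pod name and namespace are placeholders, not values from this incident:
# Clear any finalizers keeping the pod stuck in Terminating, then force-delete it.
kubectl patch pod <pod-name> -n <namespace> --type=merge -p '{"metadata":{"finalizers":null}}'
kubectl delete pod <pod-name> -n <namespace> --grace-period=0 --force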