Cloud Native | Kubernetes | Deploying KubeSphere online into a kubeadm-deployed cluster on CentOS 7 (external etcd)

Preface:

This post walks through installing KubeSphere online into an existing Kubernetes cluster that was deployed with kubeadm on CentOS 7. The cluster uses an external etcd cluster.

Building the Kubernetes cluster itself is not covered in detail here; for the full procedure, see my earlier post:

Cloud Native | Kubernetes | Deploying a highly available cluster with kubeadm (Part 1): using an external etcd cluster (晚風(fēng)_END's blog, CSDN)

The following is a detailed walkthrough of deploying KubeSphere into this existing cluster.

1. Kubernetes cluster status

The cluster uses an external etcd (there is no etcd pod inside the cluster), the network plugin to be installed is flannel, and the Kubernetes version is 1.22.16.
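
A minimal way to confirm this state from the control-plane node (a sketch; it assumes kubectl is already configured there):

[root@centos1 ~]# kubectl get nodes -o wide
[root@centos1 ~]# kubectl get pods -n kube-system     # note: no etcd-* static pod here, since etcd is external
[root@centos1 ~]# kubectl version --short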

2. Prerequisites for deploying KubeSphere

1. The cluster monitoring service, i.e. metrics-server, must be enabled.

For the detailed deployment procedure, see my post: Installing KubeSphere with the Metrics Server included (晚風(fēng)_END's blog, CSDN)
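
Once metrics-server is up, a quick sanity check is to query the resource metrics API (values will differ per environment; an error here means this prerequisite is not yet met):

[root@centos1 ~]# kubectl top nodes
[root@centos1 ~]# kubectl top pods -n kube-system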

2. A default StorageClass (i.e. a storage provisioner) is required.

For the installation guide, see my post:

Learning Kubernetes persistent storage: StorageClass (Part 4: NFS storage service) (晚風(fēng)_END's blog, CSDN)
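
To confirm a default StorageClass exists, look for the (default) marker in the output below; if your class is not marked default, the standard annotation can be patched on (a sketch; the class name nfs-client is an example from an NFS provisioner setup and may differ in your cluster):

[root@centos1 ~]# kubectl get sc
[root@centos1 ~]# kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'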

3. If you are practicing the deployment in virtual machines, give each VM at least 8 GB of memory; otherwise the installation will fail or KubeSphere will not run properly.
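
A quick pre-flight check of memory on each node (a sketch; master is the control-plane node name used in this cluster):

[root@centos1 ~]# free -h
[root@centos1 ~]# kubectl describe node master | grep -A 5 Allocatable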

4. Because the cluster's Kubernetes version, 1.22.16, is fairly recent, a correspondingly recent KubeSphere release is required; this walkthrough uses KubeSphere 3.3.2.

For the version dependencies between Kubernetes and KubeSphere, see:

Prerequisites (KubeSphere documentation)

3. Handling the etcd certificates

The certificates whose names begin with healthcheck are generated automatically when kubeadm runs an internal stacked etcd cluster; since this cluster uses an external etcd, they do not exist.

These certificates are used by the Prometheus liveness probe when KubeSphere starts. Because we cannot modify the installer's source code during installation, the workaround is to copy the external etcd certificates into the expected directory under the expected names, which lets KubeSphere install normally. (The kube-etcd-client-certs Secret created below is also mandatory; without it the installation fails at the verification step, because Prometheus cannot start.)

[root@centos1 ~]# cp /opt/etcd/ssl/server.pem  /etc/kubernetes/pki/etcd/healthcheck-client.crt
[root@centos1 ~]# cp /opt/etcd/ssl/server-key.pem  /etc/kubernetes/pki/etcd/healthcheck-client.key
[root@centos1 ~]# cp /opt/etcd/ssl/ca.pem  /etc/kubernetes/pki/etcd/ca.crt

[root@centos1 data]# scp -r /etc/kubernetes/pki/etcd/* slave1:/etc/kubernetes/pki/etcd/
apiserver-etcd-client-key.pem                                                                                                                                                   100% 1675     1.1MB/s   00:00    
apiserver-etcd-client.pem                                                                                                                                                       100% 1338     1.3MB/s   00:00    
ca.crt                                                                                                                                                                          100% 1265     2.0MB/s   00:00    
ca.pem                                                                                                                                                                          100% 1265     1.6MB/s   00:00    
healthcheck-client.crt                                                                                                                                                          100% 1338     2.6MB/s   00:00    
healthcheck-client.key                                                                                                                                                          100% 1675     2.6MB/s   00:00    
[root@centos1 data]# scp -r /etc/kubernetes/pki/etcd/* slave2:/etc/kubernetes/pki/etcd/
apiserver-etcd-client-key.pem                                                                                                                                                   100% 1675     1.0MB/s   00:00    
apiserver-etcd-client.pem                                                                                                                                                       100% 1338     2.0MB/s   00:00    
ca.crt                                                                                                                                                                          100% 1265     2.3MB/s   00:00    
ca.pem                                                                                                                                                                          100% 1265     2.0MB/s   00:00    
healthcheck-client.crt                                                                                                                                                          100% 1338     2.6MB/s   00:00    
healthcheck-client.key     
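
Before creating the Secret, you can optionally verify that the renamed certificates are valid and chain to the etcd CA (a sketch using openssl, which is present on CentOS 7 by default):

[root@centos1 ~]# openssl x509 -in /etc/kubernetes/pki/etcd/healthcheck-client.crt -noout -subject -dates
[root@centos1 ~]# openssl verify -CAfile /etc/kubernetes/pki/etcd/ca.crt /etc/kubernetes/pki/etcd/healthcheck-client.crt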

kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs \
  --from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt \
  --from-file=etcd-client.crt=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --from-file=etcd-client.key=/etc/kubernetes/pki/etcd/healthcheck-client.key
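
Note that the command above assumes the kubesphere-monitoring-system namespace already exists; if it does not, create it first. Either way, confirm afterwards that the Secret holds the three expected keys (a sketch):

[root@centos1 ~]# kubectl create ns kubesphere-monitoring-system     # only needed if the namespace is missing
[root@centos1 ~]# kubectl -n kubesphere-monitoring-system describe secret kube-etcd-client-certs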

4. Installing KubeSphere

The KubeSphere installation mainly consists of two manifest files; their full contents are listed below:

Download address:

https://www.kubesphere.io/docs/v3.3/quick-start/minimal-kubesphere-on-k8s/#prerequisites
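
If you would rather fetch the two files than copy them from this page, they are published with the ks-installer release (the v3.3.2 tag matches the version used here; adjust it for other versions):

[root@centos1 ~]# wget https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/kubesphere-installer.yaml
[root@centos1 ~]# wget https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/cluster-configuration.yaml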

[root@centos1 ~]# cat kubesphere-installer.yaml 
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: clusterconfigurations.installer.kubesphere.io
spec:
  group: installer.kubesphere.io
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              x-kubernetes-preserve-unknown-fields: true
            status:
              type: object
              x-kubernetes-preserve-unknown-fields: true
  scope: Namespaced
  names:
    plural: clusterconfigurations
    singular: clusterconfiguration
    kind: ClusterConfiguration
    shortNames:
      - cc

---
apiVersion: v1
kind: Namespace
metadata:
  name: kubesphere-system

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ks-installer
  namespace: kubesphere-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ks-installer
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - extensions
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - batch
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - tenant.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - certificates.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - devops.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - monitoring.coreos.com
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - logging.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - jaegertracing.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - storage.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - policy
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - autoscaling
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - networking.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - config.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - iam.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - notification.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - auditing.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - events.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - core.kubefed.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - installer.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - storage.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - security.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - monitoring.kiali.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - kiali.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - networking.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - edgeruntime.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - types.kubefed.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - monitoring.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - application.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'


---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ks-installer
subjects:
- kind: ServiceAccount
  name: ks-installer
  namespace: kubesphere-system
roleRef:
  kind: ClusterRole
  name: ks-installer
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    app: ks-installer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ks-installer
  template:
    metadata:
      labels:
        app: ks-installer
    spec:
      serviceAccountName: ks-installer
      containers:
      - name: installer
        image: kubesphere/ks-installer:v3.3.2
        imagePullPolicy: "Always"
        resources:
          limits:
            cpu: "1"
            memory: 1Gi
          requests:
            cpu: 20m
            memory: 100Mi
        volumeMounts:
        - mountPath: /etc/localtime
          name: host-time
          readOnly: true
      volumes:
      - hostPath:
          path: /etc/localtime
          type: ""
        name: host-time
[root@centos1 ~]# cat cluster-configuration.yaml 
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.3.2
spec:
  persistence:
    storageClass: ""        # If there is no default StorageClass in your cluster, you need to specify an existing StorageClass here.
  authentication:
    # adminPassword: ""     # Custom password of the admin user. If the parameter exists but the value is empty, a random password is generated. If the parameter does not exist, P@88w0rd is used.
    jwtSecret: ""           # Keep the jwtSecret consistent with the Host Cluster. Retrieve the jwtSecret by executing "kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret" on the Host Cluster.
  local_registry: ""        # Add your private registry address if it is needed.
  # dev_tag: ""               # Add your kubesphere image tag you want to install, by default it's same as ks-installer release version.
  etcd:
    monitoring: true       # Enable or disable etcd monitoring dashboard installation. You have to create a Secret for etcd before you enable it.
    endpointIps: 192.168.123.11  # etcd cluster EndpointIps. It can be a bunch of IPs here.
    port: 2379              # etcd port.
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true  # Enable or disable simultaneous logins. It allows different users to log in with the same account at the same time.
        port: 30880
        type: NodePort

    # apiserver:            # Enlarge the apiserver and controller manager's resource requests and limits for the large cluster
    #  resources: {}
    # controllerManager:
    #  resources: {}
    redis:
      enabled: true
      enableHA: false
      volumeSize: 2Gi # Redis PVC size.
    openldap:
      enabled: true
      volumeSize: 2Gi   # openldap PVC size.
    minio:
      volumeSize: 20Gi # Minio PVC size.
    monitoring:
      # type: external   # Whether to specify the external prometheus stack, and need to modify the endpoint at the next line.
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090 # Prometheus endpoint to get metrics data.
      GPUMonitoring:     # Enable or disable the GPU-related metrics. If you enable this switch but have no GPU resources, Kubesphere will set it to zero.
        enabled: false
    gpu:                 # Install GPUKinds. The default GPU kind is nvidia.com/gpu. Other GPU kinds can be added here according to your needs.
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:   # Storage backend for logging, events and auditing.
      # master:
      #   volumeSize: 4Gi  # The volume size of Elasticsearch master nodes.
      #   replicas: 1      # The total number of master nodes. Even numbers are not allowed.
      #   resources: {}
      # data:
      #   volumeSize: 20Gi  # The volume size of Elasticsearch data nodes.
      #   replicas: 1       # The total number of data nodes.
      #   resources: {}
      logMaxAge: 7             # Log retention time in built-in Elasticsearch. It is 7 days by default.
      elkPrefix: logstash      # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
  alerting:                # (CPU: 0.1 Core, Memory: 100 MiB) It enables users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
    enabled: true         # Enable or disable the KubeSphere Alerting System.
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:                # Provide a security-relevant chronological set of records,recording the sequence of activities happening on the platform, initiated by different tenants.
    enabled: true         # Enable or disable the KubeSphere Auditing Log System.
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:                  # (CPU: 0.47 Core, Memory: 8.6 G) Provide an out-of-the-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.
    enabled: true             # Enable or disable the KubeSphere DevOps System.
    # resources: {}
    jenkinsMemoryLim: 4Gi      # Jenkins memory limit.
    jenkinsMemoryReq: 2Gi   # Jenkins memory request.
    jenkinsVolumeSize: 8Gi     # Jenkins volume size.
  events:                  # Provide a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
    enabled: true         # Enable or disable the KubeSphere Events System.
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:                 # (CPU: 57 m, Memory: 2.76 G) Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
    enabled: true         # Enable or disable the KubeSphere Logging System.
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:                    # (CPU: 56 m, Memory: 44.35 MiB) It enables HPA (Horizontal Pod Autoscaler).
    enabled: false                   # Enable or disable metrics-server.
  monitoring:
    storageClass: ""                 # If there is an independent StorageClass you need for Prometheus, you can specify it here. The default StorageClass is used by default.
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1  # Prometheus replicas are responsible for monitoring different segments of data source and providing high availability.
    #   volumeSize: 20Gi  # Prometheus PVC size.
    #   resources: {}
    #   operator:
    #     resources: {}
    # alertmanager:
    #   replicas: 1          # AlertManager Replicas.
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:                           # GPU monitoring-related plug-in installation.
      nvidia_dcgm_exporter:        # Ensure that gpu resources on your hosts can be used normally, otherwise this plug-in will not work properly.
        enabled: false             # Check whether the labels on the GPU hosts contain "nvidia.com/gpu.present=true" to ensure that the DCGM pod is scheduled to these nodes.
        # resources: {}
  multicluster:
    clusterRole: none  # host | member | none  # You can install a solo cluster, or specify it as the Host or Member Cluster.
  network:
    networkpolicy: # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
      # Make sure that the CNI network plugin used by the cluster supports NetworkPolicy. There are a number of CNI network plugins that support NetworkPolicy, including Calico, Cilium, Kube-router, Romana and Weave Net.
      enabled: false # Enable or disable network policies.
    ippool: # Use Pod IP Pools to manage the Pod network address space. Pods to be created can be assigned IP addresses from a Pod IP Pool.
      type: none # Specify "calico" for this field if Calico is used as your CNI plugin. "none" means that Pod IP Pools are disabled.
    topology: # Use Service Topology to view Service-to-Service communication based on Weave Scope.
      type: none # Specify "weave-scope" for this field to enable Service Topology. "none" means that Service Topology is disabled.
  openpitrix: # An App Store that is accessible to all platform tenants. You can use it to manage apps across their entire lifecycle.
    store:
      enabled: true # Enable or disable the KubeSphere App Store.
  servicemesh:         # (0.3 Core, 300 MiB) Provide fine-grained traffic management, observability and tracing, and visualized traffic topology.
    enabled: true     # Base component (pilot). Enable or disable KubeSphere Service Mesh (Istio-based).
    istio:  # Customizing the istio installation configuration, refer to https://istio.io/latest/docs/setup/additional-setup/customize-installation/
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: true
        cni:
          enabled: true
  edgeruntime:          # Add edge nodes to your cluster and deploy workloads on edge nodes.
    enabled: false
    kubeedge:        # kubeedge configurations
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
            - ""            # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true 
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  gatekeeper:        # Provide admission policy and rule management, A validating (mutating TBA) webhook that enforces CRD-based policies executed by Open Policy Agent.
    enabled: false   # Enable or disable Gatekeeper.
    # controller_manager:
    #   resources: {}
    # audit:
    #   resources: {}
  terminal:
    # image: 'alpine:3.15' # There must be an nsenter program in the image
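
With the etcd section of cluster-configuration.yaml pointed at the external etcd (monitoring: true, endpointIps, port 2379 and tlsEnable: true, as shown above), apply the installer manifest first so that its CRD exists, then apply the configuration:

[root@centos1 ~]# kubectl apply -f kubesphere-installer.yaml
[root@centos1 ~]# kubectl apply -f cluster-configuration.yaml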

5. Watching the installation logs

[root@centos1 ~]# kubectl get po -A
NAMESPACE           NAME                                      READY   STATUS    RESTARTS        AGE
kube-flannel        kube-flannel-ds-679sl                     1/1     Running   0               81m
kube-flannel        kube-flannel-ds-g6xx5                     1/1     Running   0               81m
kube-flannel        kube-flannel-ds-mtq4v                     1/1     Running   0               81m
kube-system         coredns-7f6cbbb7b8-cndqt                  1/1     Running   0               4d14h
kube-system         coredns-7f6cbbb7b8-pk4mv                  1/1     Running   0               4d14h
kube-system         kube-apiserver-master                     1/1     Running   0               82m
kube-system         kube-controller-manager-master            1/1     Running   6 (27m ago)     4d14h
kube-system         kube-proxy-7bqs7                          1/1     Running   3 (4d13h ago)   4d14h
kube-system         kube-proxy-8hkdn                          1/1     Running   3 (4d13h ago)   4d14h
kube-system         kube-proxy-jkghf                          1/1     Running   3 (4d13h ago)   4d14h
kube-system         kube-scheduler-master                     1/1     Running   6 (27m ago)     4d14h
kube-system         metrics-server-55b9b69769-85nf6           1/1     Running   0               6m37s
kube-system         nfs-client-provisioner-686ddd45b9-nx85p   1/1     Running   0               15m
kubesphere-system   ks-installer-846c78ddbf-fvg7p             1/1     Running   0               22s
[root@centos1 ~]# kubectl logs -n kubesphere-system -f  ks-installer-846c78ddbf-fvg7p
2023-06-28T12:44:02+08:00 INFO     : shell-operator latest
2023-06-28T12:44:02+08:00 INFO     : Use temporary dir: /tmp/shell-operator
2023-06-28T12:44:02+08:00 INFO     : Initialize hooks manager ...
2023-06-28T12:44:02+08:00 INFO     : Search and load hooks ...
2023-06-28T12:44:02+08:00 INFO     : HTTP SERVER Listening on 0.0.0.0:9115
2023-06-28T12:44:02+08:00 INFO     : Load hook config from '/hooks/kubesphere/installRunner.py'
2023-06-28T12:44:02+08:00 INFO     : Load hook config from '/hooks/kubesphere/schedule.sh'
2023-06-28T12:44:02+08:00 INFO     : Initializing schedule manager ...
2023-06-28T12:44:02+08:00 INFO     : KUBE Init Kubernetes client
2023-06-28T12:44:02+08:00 INFO     : KUBE-INIT Kubernetes client is configured successfully
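
Rather than looking up the installer pod name by hand, the pod can also be resolved with a label selector; this is the generic form of the log command (equivalent to the one above):

[root@centos1 ~]# kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-installer -o jsonpath='{.items[0].metadata.name}') -f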

The final log output on a successful installation:

**************************************************
Collecting installation results ...
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.123.12:30880
Account: admin
Password: P@88w0rd
NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components 
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2023-06-28 13:42:14
#####################################################
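
The console Service is of type NodePort on port 30880 (as set in the ClusterConfiguration above), so any node IP should reach it, not only the address printed in the banner; a quick reachability check (a sketch, substitute one of your node IPs):

[root@centos1 ~]# curl -I http://192.168.123.12:30880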

[root@centos1 ~]# kubectl get po -A
NAMESPACE                      NAME                                                      READY   STATUS      RESTARTS         AGE
argocd                         devops-argocd-application-controller-0                    1/1     Running     0                49m
argocd                         devops-argocd-applicationset-controller-b88d4b875-6tjjl   1/1     Running     0                49m
argocd                         devops-argocd-dex-server-5f4c69cdb8-9stkk                 1/1     Running     0                49m
argocd                         devops-argocd-notifications-controller-6d86f8974f-dtb9n   1/1     Running     0                49m
argocd                         devops-argocd-redis-655969589d-8f9gt                      1/1     Running     0                49m
argocd                         devops-argocd-repo-server-f77687668-2xvbc                 1/1     Running     0                49m
argocd                         devops-argocd-server-6c55bbb84f-hqt7s                     1/1     Running     0                49m
istio-system                   istio-cni-node-9bfbn                                      1/1     Running     0                45m
istio-system                   istio-cni-node-cpcxg                                      1/1     Running     0                45m
istio-system                   istio-cni-node-fp4g4                                      1/1     Running     0                45m
istio-system                   istio-ingressgateway-68cb85486d-hnlxj                     1/1     Running     0                45m
istio-system                   istiod-1-11-2-6784498b47-bz4fb                            1/1     Running     0                50m
istio-system                   jaeger-collector-7cd595d96d-wcxcn                         1/1     Running     0                11m
istio-system                   jaeger-operator-6f94b6594f-z9wft                          1/1     Running     0                24m
istio-system                   jaeger-query-c9568b97c-7wj4g                              2/2     Running     0                6m32s
istio-system                   kiali-6558c65c47-k9tkw                                    1/1     Running     0                10m
istio-system                   kiali-operator-6648dcb67d-vxvtg                           1/1     Running     0                24m
kube-flannel                   kube-flannel-ds-679sl                                     1/1     Running     0                144m
kube-flannel                   kube-flannel-ds-g6xx5                                     1/1     Running     0                144m
kube-flannel                   kube-flannel-ds-mtq4v                                     1/1     Running     0                144m
kube-system                    coredns-7f6cbbb7b8-cndqt                                  1/1     Running     0                4d15h
kube-system                    coredns-7f6cbbb7b8-pk4mv                                  1/1     Running     0                4d15h
kube-system                    kube-apiserver-master                                     1/1     Running     0                145m
kube-system                    kube-controller-manager-master                            1/1     Running     11 (8m33s ago)   4d15h
kube-system                    kube-proxy-7bqs7                                          1/1     Running     3 (4d14h ago)    4d15h
kube-system                    kube-proxy-8hkdn                                          1/1     Running     3 (4d14h ago)    4d15h
kube-system                    kube-proxy-jkghf                                          1/1     Running     3 (4d14h ago)    4d15h
kube-system                    kube-scheduler-master                                     0/1     Error       10 (8m45s ago)   4d15h
kube-system                    metrics-server-55b9b69769-85nf6                           1/1     Running     0                69m
kube-system                    nfs-client-provisioner-686ddd45b9-nx85p                   0/1     Error       6 (5m56s ago)    78m
kube-system                    snapshot-controller-0                                     1/1     Running     0                52m
kubesphere-controls-system     default-http-backend-5bf68ff9b8-hdqh7                     1/1     Running     0                51m
kubesphere-controls-system     kubectl-admin-6dbcb94855-lgf9q                            1/1     Running     0                20m
kubesphere-devops-system       devops-28132140-r9p22                                     0/1     Completed   0                46m
kubesphere-devops-system       devops-28132170-vqz2p                                     0/1     Completed   0                16m
kubesphere-devops-system       devops-apiserver-54f87654c6-bqf67                         1/1     Running     2 (16m ago)      49m
kubesphere-devops-system       devops-controller-7f765f68d4-8x4kb                        1/1     Running     0                49m
kubesphere-devops-system       devops-jenkins-c8b495c5-8xhzq                             1/1     Running     4 (5m53s ago)    49m
kubesphere-devops-system       s2ioperator-0                                             1/1     Running     0                49m
kubesphere-logging-system      elasticsearch-logging-data-0                              1/1     Running     0                51m
kubesphere-logging-system      elasticsearch-logging-data-1                              1/1     Running     2 (5m55s ago)    48m
kubesphere-logging-system      elasticsearch-logging-discovery-0                         1/1     Running     0                51m
kubesphere-logging-system      fluent-bit-6r2f2                                          1/1     Running     0                46m
kubesphere-logging-system      fluent-bit-lwknk                                          1/1     Running     0                46m
kubesphere-logging-system      fluent-bit-wft6n                                          1/1     Running     0                46m
kubesphere-logging-system      fluentbit-operator-6fdb65899c-cp6xr                       1/1     Running     0                51m
kubesphere-logging-system      ks-events-exporter-f7f75f84d-6cx2t                        2/2     Running     0                45m
kubesphere-logging-system      ks-events-operator-684486db88-62kgt                       1/1     Running     0                50m
kubesphere-logging-system      ks-events-ruler-8596865dcf-9m4tl                          2/2     Running     0                45m
kubesphere-logging-system      ks-events-ruler-8596865dcf-ds5qn                          2/2     Running     0                45m
kubesphere-logging-system      kube-auditing-operator-84857bf967-6lpv7                   1/1     Running     0                50m
kubesphere-logging-system      kube-auditing-webhook-deploy-64cfb8c9f8-s4swb             1/1     Running     0                46m
kubesphere-logging-system      kube-auditing-webhook-deploy-64cfb8c9f8-xgm4k             1/1     Running     0                46m
kubesphere-logging-system      logsidecar-injector-deploy-586fb644fc-h4jsx               2/2     Running     0                5m31s
kubesphere-logging-system      logsidecar-injector-deploy-586fb644fc-qtt52               2/2     Running     0                5m31s
kubesphere-monitoring-system   alertmanager-main-0                                       2/2     Running     0                42m
kubesphere-monitoring-system   alertmanager-main-1                                       2/2     Running     0                42m
kubesphere-monitoring-system   alertmanager-main-2                                       2/2     Running     0                42m
kubesphere-monitoring-system   kube-state-metrics-687d66b747-9c2tg                       3/3     Running     0                49m
kubesphere-monitoring-system   node-exporter-4jkpr                                       2/2     Running     0                49m
kubesphere-monitoring-system   node-exporter-8fzzd                                       2/2     Running     0                49m
kubesphere-monitoring-system   node-exporter-wm27p                                       2/2     Running     0                49m
kubesphere-monitoring-system   notification-manager-deployment-78664576cb-fdgft          2/2     Running     0                11m
kubesphere-monitoring-system   notification-manager-deployment-78664576cb-ztqw4          2/2     Running     0                11m
kubesphere-monitoring-system   notification-manager-operator-7d44854f54-fkzrv            1/2     Error       2 (5m54s ago)    49m
kubesphere-monitoring-system   prometheus-k8s-0                                          2/2     Running     0                42m
kubesphere-monitoring-system   prometheus-k8s-1                                          2/2     Running     0                42m
kubesphere-monitoring-system   prometheus-operator-8955bbd98-7jv9m                       2/2     Running     0                49m
kubesphere-monitoring-system   thanos-ruler-kubesphere-0                                 2/2     Running     1 (8m16s ago)    42m
kubesphere-monitoring-system   thanos-ruler-kubesphere-1                                 2/2     Running     0                42m
kubesphere-system              ks-apiserver-7f4d67c7bc-wjwtg                             1/1     Running     0                51m
kubesphere-system              ks-console-5c9fcbc67b-rfnxp                               1/1     Running     0                51m
kubesphere-system              ks-controller-manager-75ccc66ccf-sl29v                    0/1     Error       3 (8m43s ago)    51m
kubesphere-system              ks-installer-846c78ddbf-fvg7p                             1/1     Running     0                62m
kubesphere-system              minio-859cb4d777-7pzsv                                    1/1     Running     0                52m
kubesphere-system              openldap-0                                                1/1     Running     1 (51m ago)      52m
kubesphere-system              openpitrix-import-job-2t2lp                               0/1     Completed   0                6m12s
kubesphere-system              redis-68d7fd7b96-nhcfx                                    1/1     Running     0                52m
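
A few pods show Error in this capture (kube-scheduler-master, nfs-client-provisioner, notification-manager-operator, ks-controller-manager); given the RESTARTS column and the 8 GB VM sizing discussed earlier, these are most likely transient memory-pressure restarts and usually recover on their own. If one does not, a generic way to investigate (a sketch, using a pod name from the listing above):

[root@centos1 ~]# kubectl -n kubesphere-system describe pod ks-controller-manager-75ccc66ccf-sl29v
[root@centos1 ~]# kubectl -n kubesphere-system logs --previous ks-controller-manager-75ccc66ccf-sl29v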

That concludes this walkthrough of deploying KubeSphere online into a kubeadm-deployed cluster (with external etcd) on CentOS 7.
