
Cloud Native | Kubernetes | Offline Deployment of KubeSphere 3.3.2 on CentOS 7, Based on Kubernetes 1.22.16 (Recorded from the Network Plugin Onward)


Preface:

Offline deployment of KubeSphere means pulling all images from a self-hosted Harbor private registry, so the deployment does not depend on external network access at all.

My Kubernetes cluster runs version 1.22.16 and has a single master node plus two worker nodes, three nodes in total.

The cluster has only been initialized; the network plugin and other add-ons are not installed yet. This article consolidates everything in one place: deploying metrics-server, the network plugin, and the NFS StorageClass storage plugin. Once these KubeSphere prerequisites are in place, the images are pushed to a self-hosted Harbor private registry secured with TLS certificates, and KubeSphere is then deployed from that private registry in minutes.

1.

Cluster environment overview

master 192.168.123.11

slave1 192.168.123.12

slave2 192.168.123.13

The cluster was deployed with kubeadm, etcd is an external self-built etcd cluster, and the operating system is CentOS 7.

Download link for the offline images:

Link: https://pan.baidu.com/s/1EjTX4gmhRb1c0JYMLWaL_w?pwd=xshe
Extraction code: xshe

Harbor private registry: https://192.168.123.14

[root@centos1 ~]# kubectl get no -owide
NAME      STATUS     ROLES                  AGE     VERSION    INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
centos2   NotReady   <none>                 6d19h   v1.22.16   192.168.123.12   <none>        CentOS Linux 7 (Core)   5.16.9-1.el7.elrepo.x86_64   docker://20.10.7
centos3   NotReady   <none>                 6d19h   v1.22.16   192.168.123.13   <none>        CentOS Linux 7 (Core)   5.16.9-1.el7.elrepo.x86_64   docker://20.10.7
master    NotReady   control-plane,master   6d19h   v1.22.16   192.168.123.11   <none>        CentOS Linux 7 (Core)   5.16.9-1.el7.elrepo.x86_64   docker://20.10.7
[root@centos1 ~]# kubectl get po -A -owide
NAMESPACE     NAME                             READY   STATUS    RESTARTS        AGE     IP               NODE      NOMINATED NODE   READINESS GATES
kube-system   coredns-7f6cbbb7b8-cndqt         0/1     Pending   0               6d19h   <none>           <none>    <none>           <none>
kube-system   coredns-7f6cbbb7b8-pk4mv         0/1     Pending   0               6d19h   <none>           <none>    <none>           <none>
kube-system   kube-apiserver-master            1/1     Running   6 (14m ago)     6d19h   192.168.123.11   master    <none>           <none>
kube-system   kube-controller-manager-master   1/1     Running   6 (14m ago)     6d19h   192.168.123.11   master    <none>           <none>
kube-system   kube-proxy-7bqs7                 1/1     Running   3 (6d18h ago)   6d19h   192.168.123.13   centos3   <none>           <none>
kube-system   kube-proxy-8hkdn                 1/1     Running   3 (6d18h ago)   6d19h   192.168.123.12   centos2   <none>           <none>
kube-system   kube-proxy-jkghf                 1/1     Running   6 (14m ago)     6d19h   192.168.123.11   master    <none>           <none>
kube-system   kube-scheduler-master            1/1     Running   6 (14m ago)     6d19h   192.168.123.11   master    <none>           <none>


2.

Installing the flannel network plugin (run on all three Kubernetes nodes)

#### Note: if your pod CIDR is not 10.244.0.0/16, change the Network field below to the actual value:

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
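
A quick way to double-check which pod CIDR the cluster was actually initialized with (a sketch; on kubeadm clusters the value lives in the kubeadm-config ConfigMap):

# podSubnet configured at kubeadm init time
kubectl -n kube-system get cm kubeadm-config -o yaml | grep -i podSubnet
# or read it off the running controller-manager flags
ps aux | grep kube-controller-manager | tr ' ' '\n' | grep cluster-cidr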

[root@centos1 ~]# tar Cvxf /opt/cni/bin/ cni-plugins-linux-amd64-v1.2.0.tgz 
./
./loopback
./bandwidth
./ptp
./vlan
./host-device
./tuning
./vrf
./sbr
./dhcp
./static
./firewall
./macvlan
./dummy
./bridge
./ipvlan
./portmap
./host-local
Apply the manifest (its full contents follow):

kubectl apply -f kube-flannel.yml

cat kube-flannel.yml
apiVersion: v1
kind: Namespace
metadata:
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
  name: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
kind: ConfigMap
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-cfg
  namespace: kube-flannel
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-ds
  namespace: kube-flannel
spec:
  selector:
    matchLabels:
      app: flannel
      k8s-app: flannel
  template:
    metadata:
      labels:
        app: flannel
        k8s-app: flannel
        tier: node
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      containers:
      - args:
        - --ip-masq
        - --kube-subnet-mgr
        command:
        - /opt/bin/flanneld
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        image: docker.io/flannel/flannel:v0.22.0
        name: kube-flannel
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
          privileged: false
        volumeMounts:
        - mountPath: /run/flannel
          name: run
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
        - mountPath: /run/xtables.lock
          name: xtables-lock
      hostNetwork: true
      initContainers:
      - args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        command:
        - cp
        image: docker.io/flannel/flannel-cni-plugin:v1.1.2
        name: install-cni-plugin
        volumeMounts:
        - mountPath: /opt/cni/bin
          name: cni-plugin
      - args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        command:
        - cp
        image: docker.io/flannel/flannel:v0.22.0
        name: install-cni
        volumeMounts:
        - mountPath: /etc/cni/net.d
          name: cni
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
      priorityClassName: system-node-critical
      serviceAccountName: flannel
      tolerations:
      - effect: NoSchedule
        operator: Exists
      volumes:
      - hostPath:
          path: /run/flannel
        name: run
      - hostPath:
          path: /opt/cni/bin
        name: cni-plugin
      - hostPath:
          path: /etc/cni/net.d
        name: cni
      - configMap:
          name: kube-flannel-cfg
        name: flannel-cfg
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock

After a short wait, the whole cluster is healthy:

[root@centos1 ~]# kubectl get po -A
NAMESPACE      NAME                             READY   STATUS    RESTARTS        AGE
kube-flannel   kube-flannel-ds-5c6qs            1/1     Running   0               119s
kube-flannel   kube-flannel-ds-gf966            1/1     Running   0               119s
kube-flannel   kube-flannel-ds-pklq5            1/1     Running   0               119s
kube-system    coredns-7f6cbbb7b8-cndqt         1/1     Running   0               6d19h
kube-system    coredns-7f6cbbb7b8-pk4mv         1/1     Running   0               6d19h
kube-system    kube-apiserver-master            1/1     Running   6 (21m ago)     6d19h
kube-system    kube-controller-manager-master   1/1     Running   6 (22m ago)     6d19h
kube-system    kube-proxy-7bqs7                 1/1     Running   3 (6d18h ago)   6d19h
kube-system    kube-proxy-8hkdn                 1/1     Running   3 (6d18h ago)   6d19h
kube-system    kube-proxy-jkghf                 1/1     Running   6 (22m ago)     6d19h
kube-system    kube-scheduler-master            1/1     Running   6 (22m ago)     6d19h

3.

Installing and deploying metrics-server

Edit /etc/kubernetes/manifests/kube-apiserver.yaml and add the flag - --enable-aggregator-routing=true. Because this is a static pod manifest, the kubelet restarts the apiserver automatically and the API aggregation routing feature takes effect.

The recommended insertion point is next to the other --enable-* flags, for example:

spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.123.11
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --enable-aggregator-routing=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.pem
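
After saving, the kubelet recreates the static pod on its own. A quick sanity check that the new flag is live (a sketch; the pod name is suffixed with the node name):

kubectl -n kube-system get pod kube-apiserver-master -o yaml | grep enable-aggregator-routing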

The deployment manifest is as follows:

[root@centos1 ~]# cat components-metrics.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP       # ExternalIP and Hostname removed from this list; already done here
        - --kubelet-use-node-status-port
        - --kubelet-insecure-tls                             # add this flag to skip kubelet certificate verification
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server:v0.4.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          periodSeconds: 10
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100

The apply output:

[root@centos1 ~]# kubectl apply -f components-metrics.yaml 
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

Wait a moment for the pod to start, then verify that metrics-server works:

[root@centos1 ~]# kubectl get po -A
NAMESPACE      NAME                              READY   STATUS    RESTARTS        AGE
kube-flannel   kube-flannel-ds-5c6qs             1/1     Running   0               9m34s
kube-flannel   kube-flannel-ds-gf966             1/1     Running   0               9m34s
kube-flannel   kube-flannel-ds-pklq5             1/1     Running   0               9m34s
kube-system    coredns-7f6cbbb7b8-cndqt          1/1     Running   0               6d19h
kube-system    coredns-7f6cbbb7b8-pk4mv          1/1     Running   0               6d19h
kube-system    kube-apiserver-master             1/1     Running   0               3m7s
kube-system    kube-controller-manager-master    1/1     Running   0               3m6s
kube-system    kube-proxy-7bqs7                  1/1     Running   3 (6d18h ago)   6d19h
kube-system    kube-proxy-8hkdn                  1/1     Running   3 (6d18h ago)   6d19h
kube-system    kube-proxy-jkghf                  1/1     Running   6 (29m ago)     6d19h
kube-system    kube-scheduler-master             1/1     Running   0               3m6s
kube-system    metrics-server-55b9b69769-gdgvp   1/1     Running   0               84s
[root@centos1 ~]# kubectl top no
NAME      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
centos2   44m          1%     428Mi           5%        
centos3   42m          1%     444Mi           5%        
master    82m          2%     938Mi           11%       
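
As one more check, the aggregated metrics APIService should report Available=True; this is exactly what kubectl top depends on (the commented output is roughly what to expect):

kubectl get apiservice v1beta1.metrics.k8s.io
# NAME                     SERVICE                      AVAILABLE   AGE
# v1beta1.metrics.k8s.io   kube-system/metrics-server   True        2m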

4.

Installing the NFS StorageClass storage plugin

Run on all three Kubernetes nodes:

yum install nfs-utils rpcbind -y
systemctl enable nfs rpcbind && systemctl start nfs rpcbind

The NFS server will be installed on the master node, so run the following on the master:

#### Note: the IP ranges in the NFS exports file must be adapted to your actual environment; do not forget to change them.

[root@centos1 ~]# mkdir -p /data/nfs-sc
[root@centos1 ~]# cat /etc/exports
/data/nfs-sc  10.244.0.0/16(rw,no_root_squash,no_subtree_check) 192.168.123.11(rw,no_root_squash,no_subtree_check) 192.168.123.0/24(rw,no_root_squash,no_subtree_check)

[root@centos1 ~]# systemctl restart nfs rpcbind
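
Before wiring the export into Kubernetes, it is worth a quick sanity check that NFS is actually serving it (both commands ship with nfs-utils):

exportfs -v                     # active exports on the server
showmount -e 192.168.123.11     # the export list as a client sees it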

Run on the .14 server (the Harbor host):

#### Note: this step prepares the Kubernetes nodes as Harbor clients; logging in requires the registry certificates.

[root@centos4 harbor]# scp -r /etc/docker/certs.d/ 192.168.123.11:/etc/docker/
root@192.168.123.11's password: 
192.168.123.14.cert                                                                                                                                        100% 2057     1.8MB/s   00:00    
192.168.123.14.key                                                                                                                                         100% 3243     3.4MB/s   00:00    
ca.crt                                                                                                                                                     100% 2033     2.2MB/s   00:00    
[root@centos4 harbor]# scp -r /etc/docker/certs.d/ 192.168.123.12:/etc/docker/
root@192.168.123.12's password: 
192.168.123.14.cert                                                                                                                                        100% 2057     2.4MB/s   00:00    
192.168.123.14.key                                                                                                                                         100% 3243     4.0MB/s   00:00    
ca.crt                                                                                                                                                     100% 2033     2.7MB/s   00:00    
[root@centos4 harbor]# scp -r /etc/docker/certs.d/ 192.168.123.13:/etc/docker/
root@192.168.123.13's password: 
192.168.123.14.cert                                                                                                                                        100% 2057     2.4MB/s   00:00    
192.168.123.14.key                                                                                                                                         100% 3243     4.5MB/s   00:00    
ca.crt                           

Back on the master node:

##### Note: this step generates the Harbor login credentials; as the output shows, they are saved to /root/.docker/config.json.

[root@centos1 ~]# docker login https://192.168.123.14
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

Base64-encode the credentials

#### Note: base64 is an encoding, not encryption; the encoded login info is what goes into the Secret

cat /root/.docker/config.json | base64 -w 0

Create a Secret manifest holding the string generated above

#### Note: copy the base64 string from above exactly into this file. The Secret lands in the default namespace here; keep in mind that an imagePullSecrets reference only works when the Secret exists in the same namespace as the pod.

[root@centos1 ~]# cat harbor_secret.yaml 
apiVersion: v1
kind: Secret
metadata:
  name: harbor-login
type: kubernetes.io/dockerconfigjson
data:
   # paste the base64-encoded string from above here
   .dockerconfigjson: ewoJImF1dGhzIjogewoJCSIxOTIuMTY4LjEyMy4xNCI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZVMmhwWjNWaGJtZGZNekk9IgoJCX0KCX0KfQ==
[root@centos1 ~]# kubectl apply -f harbor_secret.yaml 
secret/harbor-login created
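
Equivalently, you can let kubectl build the Secret for you instead of hand-encoding the JSON; a sketch, with <password> standing in for the real Harbor password. Since an imagePullSecrets reference only resolves in the pod's own namespace, repeat this for kube-system and kubesphere-system, which the later manifests use:

kubectl create secret docker-registry harbor-login \
  --docker-server=https://192.168.123.14 \
  --docker-username=admin \
  --docker-password=<password>
# the same Secret again in every namespace whose pods pull from Harbor
kubectl -n kube-system create secret docker-registry harbor-login \
  --docker-server=https://192.168.123.14 \
  --docker-username=admin \
  --docker-password=<password>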

The NFS provisioner manifests (apply the three files in order; adjust the IPs, the Secret, and so on to your environment):

#### Note: the first file needs no changes. The second pulls its image from the Harbor private registry, which doubles as an early test that Harbor works; check the IPs in that file carefully.

[root@centos1 ~]# cat serviceacount.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "list", "watch","create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
 
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
 
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: kube-system
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
[root@centos1 ~]# cat deploy-nfs.yaml 
apiVersion: v1 
kind: ServiceAccount 
metadata: 
  name: nfs-client-provisioner 
--- 
kind: Deployment 
apiVersion: apps/v1  
metadata: 
  name: nfs-client-provisioner 
  namespace: kube-system
spec: 
  replicas: 1 
  strategy: 
    type: Recreate 
  selector: 
    matchLabels: 
      app: nfs-client-provisioner 
  template: 
    metadata: 
      labels: 
        app: nfs-client-provisioner 
    spec: 
      serviceAccountName: nfs-client-provisioner 
      containers: 
        - name: nfs-client-provisioner 
          image: 192.168.123.14/library/registry.cn-shanghai.aliyuncs.com/c7n/nfs-client-provisioner:v3.1.0-k8s1.11
          imagePullPolicy: IfNotPresent
          volumeMounts: 
            - name: nfs-client-root 
              mountPath: /persistentvolumes 
          env: 
            - name: PROVISIONER_NAME 
              value: fuseim.pri/ifs 
            - name: NFS_SERVER 
              value: 192.168.123.11
            - name: NFS_PATH 
              value: /data/nfs-sc 
      volumes: 
        - name: nfs-client-root 
          nfs: 
            server: 192.168.123.11
            path: /data/nfs-sc

      imagePullSecrets:
      - name: harbor-login
[root@centos1 ~]# cat storageclass-nfs.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: fuseim.pri/ifs
reclaimPolicy: Delete
allowVolumeExpansion: true  # allow PVCs to be expanded after creation

Initial verification:

[root@centos1 ~]# kubectl get sc
NAME                            PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage (default)   fuseim.pri/ifs   Delete          Immediate           true                   10m
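
To confirm dynamic provisioning works end to end, a throwaway PVC is enough (a sketch; the claim name and size are arbitrary):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Mi
EOF
# a Bound status means the provisioner created a PV under /data/nfs-sc
kubectl get pvc test-claim
kubectl delete pvc test-claim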

5.

Setting up command completion

The steps from here on are heavy on command-line work, so set up kubectl completion to make things easier:

yum -y install bash-completion


source /usr/share/bash-completion/bash_completion
echo "source <(kubectl completion bash)" >>/etc/profile
echo "source /usr/share/bash-completion/bash_completion" >>/etc/profile
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null

Alias k to kubectl

#### Note: nothing special here, just run it:

echo "alias k=kubectl">>/etc/profile
echo "complete -F __start_kubectl k">>/etc/profile
source /etc/profile

6.

Handling the external etcd certificates

#### Note: most external etcd deployments use the same directory layout as mine; just mind which certificate maps to which destination file.

[root@centos1 ~]# cp /opt/etcd/ssl/server.pem  /etc/kubernetes/pki/etcd/healthcheck-client.crt
[root@centos1 ~]# cp /opt/etcd/ssl/server-key.pem  /etc/kubernetes/pki/etcd/healthcheck-client.key
[root@centos1 ~]# cp /opt/etcd/ssl/ca.pem  /etc/kubernetes/pki/etcd/ca.crt
 
[root@centos1 data]# scp -r /etc/kubernetes/pki/etcd/* slave1:/etc/kubernetes/pki/etcd/
                                                                                                                                                 100% 1675     1.1MB/s   00:00    
apiserver-etcd-client.pem                                                                                                                                                       100% 1338     1.3MB/s   00:00    
ca.crt                                                                                                                                                                          100% 1265     2.0MB/s   00:00    
ca.pem                                                                                                                                                                          100% 1265     1.6MB/s   00:00    
healthcheck-client.crt                                                                                                                                                          100% 1338     2.6MB/s   00:00    
healthcheck-client.key                                                                                                                                                          100% 1675     2.6MB/s   00:00    
[root@centos1 data]# scp -r /etc/kubernetes/pki/etcd/* slave2:/etc/kubernetes/pki/etcd/
                                                                                                                                                  100% 1675     1.0MB/s   00:00    
apiserver-etcd-client.pem                                                                                                                                                       100% 1338     2.0MB/s   00:00    
ca.crt                                                                                                                                                                          100% 1265     2.3MB/s   00:00    
ca.pem                                                                                                                                                                          100% 1265     2.0MB/s   00:00    
healthcheck-client.crt                                                                                                                                                          100% 1338     2.6MB/s   00:00    
healthcheck-client.key     

#### Note: the namespace is created in advance

[root@centos1 ~]# kubectl create ns kubesphere-monitoring-system
namespace/kubesphere-monitoring-system created
[root@centos1 ~]# kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs  --from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt  --from-file=etcd-client.crt=/etc/kubernetes/pki/etcd/healthcheck-client.crt  --from-file=etcd-client.key=/etc/kubernetes/pki/etcd/healthcheck-client.key
secret/kube-etcd-client-certs created
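
Optionally, verify that the copied certificates really authenticate against etcd (a sketch, assuming etcdctl v3 is installed on the master):

ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.123.11:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
  endpoint health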

7.

Uploading the offline images to the Harbor private registry

1.

The Baidu pan images are ones I collected. After downloading them, pick any server with a Docker environment and load the offline images; here I used the Kubernetes master node (against best practice, but convenient):

for i in `ls /root/image/*`;do docker load <$i;done

The output looks roughly like this:

7f4c27344f24: Loading layer [==================================================>]  3.072kB/3.072kB
Loaded image: quay.io/argoproj/argocd:v2.3.3
f424150e7bdd: Loading layer [==================================================>]  12.29kB/12.29kB
af2908c6d8d4: Loading layer [==================================================>]  2.192MB/2.192MB
eb1df1609b52: Loading layer [==================================================>]  22.53MB/22.53MB
9dc4b900734e: Loading layer [==================================================>]  2.048kB/2.048kB
658356a2e199: Loading layer [==================================================>]  3.584kB/3.584kB
Loaded image: redis:5.0.14-alpine
a0d30d692d38: Loading layer [==================================================>]  25.52MB/25.52MB
ea119ba57232: Loading layer [==================================================>]  2.048kB/2.048kB
4093453af757: Loading layer [==================================================>]  3.584kB/3.584kB
Loaded image: redis:6.2.6-alpine
Loaded image: registry.aliyuncs.com/google_containers/coredns:v1.8.4
Loaded image: registry.aliyuncs.com/google_containers/kube-apiserver:v1.22.16
Loaded image: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.22.16
Loaded image: registry.aliyuncs.com/google_containers/kube-proxy:v1.22.16
Loaded image: registry.aliyuncs.com/google_containers/kube-scheduler:v1.22.16
Loaded image: registry.aliyuncs.com/google_containers/pause:3.5

2.

Save the loaded images to a text file, one image-name:tag per line; the script looks like this:

#!/bin/bash
# print every local image as name:tag, skipping the header row and dangling <none> images
docker images | while read -r i t _; do
    [[ "${t}" == "TAG" || "${t}" == "<none>" ]] && continue
    echo "${i}:${t}"
done

Redirect the output into a file:

bash <script-name> > images-list-new.txt

The file contents:

[root@centos1 ~]# cat images-list-new.txt 
flannel/flannel:v0.22.0
kubesphere/ks-installer:v3.3.2
kubesphere/ks-controller-manager:v3.3.2
kubesphere/ks-console:v3.3.2
kubesphere/ks-apiserver:v3.3.2
kubesphere/devops-controller:ks-v3.3.2
kubesphere/devops-tools:ks-v3.3.2
kubesphere/devops-apiserver:ks-v3.3.2
kubesphere/openpitrix-jobs:v3.3.2
kubesphere/log-sidecar-injector:v1.2.0
flannel/flannel-cni-plugin:v1.1.2
registry.aliyuncs.com/google_containers/kube-apiserver:v1.22.16
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.22.16
registry.aliyuncs.com/google_containers/kube-proxy:v1.22.16
registry.aliyuncs.com/google_containers/kube-scheduler:v1.22.16
kubesphere/kube-state-metrics:v2.5.0
kubesphere/ks-jenkins:v3.3.0-2.319.1
kubesphere/fluent-bit:v1.8.11
kubesphere/s2ioperator:v3.2.1
quay.io/argoproj/argocd:v2.3.3
kubesphere/prometheus-config-reloader:v0.55.1
kubesphere/prometheus-operator:v0.55.1
thanosio/thanos:v0.25.2
prom/prometheus:v2.34.0
kubesphere/fluentbit-operator:v0.13.0
quay.io/argoproj/argocd-applicationset:v0.4.1
kubesphere/kube-events-ruler:v0.4.0
kubesphere/kube-events-operator:v0.4.0
kubesphere/kube-events-exporter:v0.4.0
kubesphere/elasticsearch-oss:6.8.22
prom/node-exporter:v1.3.1
redis:5.0.14-alpine
redis:6.2.6-alpine
haproxy:2.0.25-alpine
ghcr.io/dexidp/dex:v2.30.2
alpine:3.14
kubesphere/kubectl:v1.22.0
kubesphere/notification-manager:v1.4.0
jaegertracing/jaeger-operator:1.27
jaegertracing/jaeger-es-index-cleaner:1.27
jaegertracing/jaeger-query:1.27
jaegertracing/jaeger-collector:1.27
jaegertracing/jaeger-agent:1.27
kubesphere/notification-tenant-sidecar:v3.2.0
kubesphere/notification-manager-operator:v1.4.0
prom/alertmanager:v0.23.0
istio/proxyv2:1.11.1
istio/install-cni:1.11.1
istio/pilot:1.11.1
kubesphere/kube-auditing-operator:v0.2.0
kubesphere/kube-auditing-webhook:v0.2.0
kubesphere/kube-rbac-proxy:v0.11.0
kubesphere/kiali-operator:v1.38.1
kubesphere/kiali:v1.38
kubesphere/ks-installer:v3.1.1
registry.aliyuncs.com/google_containers/coredns:v1.8.4
docker:19.03
nginx:1.18
registry.aliyuncs.com/google_containers/pause:3.5
jimmidyson/configmap-reload:v0.5.0
csiplugin/snapshot-controller:v4.0.0
registry.cn-shanghai.aliyuncs.com/c7n/nfs-client-provisioner:v3.1.0-k8s1.11
registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server:v0.4.1
kubesphere/kube-rbac-proxy:v0.8.0
osixia/openldap:1.3.0
kubesphere/elasticsearch-curator:v5.7.6
minio/mc:RELEASE.2019-08-07T23-14-43Z
minio/minio:RELEASE.2019-08-07T01-59-21Z
tomcat:8.5.41-alpine
mirrorgooglecontainers/defaultbackend-amd64:1.4
ananwaresystems/webarchive:1.0

Using the image list generated above, retag every image and push it to the private Harbor registry.

vim push-images.sh

#!/bin/bash
# retag each image under the registry's library project, then push it
# (the echoed "chengong"/"chenggong" is pinyin for "success")
for i in `cat images-list-new.txt`;
do
docker tag $i 192.168.123.14/library/$i
echo "tag xiu gai chengong"
docker push 192.168.123.14/library/$i
echo "push chenggong"
done

The output:

push chenggong
tag xiu gai chengong
The push refers to repository [192.168.123.14/library/ananwaresystems/webarchive]
5f70bf18a086: Layer already exists 
bcd447c7ceca: Pushed 
3973dc7c145c: Pushed 
f8245f5490d6: Pushed 
7f02483a9752: Pushed 
1.0: digest: sha256:bd4ef0cff8106548b898b77c5c2d9b2a8b3b312efd236a84c354218e2445aa52 size: 1767
push chenggong
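
To spot-check that the pushes landed, the Harbor API can list the repositories in the project (a hedged example; the endpoint assumes Harbor 2.x, -k skips verification of the self-signed certificate, and <password> is the real admin password):

curl -sk -u admin:<password> \
  "https://192.168.123.14/api/v2.0/projects/library/repositories?page_size=100" \
  | grep -o '"name":"[^"]*"'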


Some images need their paths corrected and a re-push: the installer prefixes image references with local_registry, so Docker Hub official images need an extra library/ segment (for example library/redis), and registry-host prefixes such as quay.io and ghcr.io have to be dropped:


docker tag 192.168.123.14/library/docker:19.03 192.168.123.14/library/library/docker:19.03
docker push 192.168.123.14/library/library/docker:19.03

docker tag redis:5.0.14-alpine 192.168.123.14/library/library/redis:5.0.14-alpine
docker push 192.168.123.14/library/library/redis:5.0.14-alpine

docker tag 192.168.123.14/library/redis:6.2.6-alpine    192.168.123.14/library/library/redis:6.2.6-alpine
docker push 192.168.123.14/library/library/redis:6.2.6-alpine

docker tag 192.168.123.14/library/quay.io/argoproj/argocd-applicationset:v0.4.1  192.168.123.14/library/argoproj/argocd-applicationset:v0.4.1
docker push 192.168.123.14/library/argoproj/argocd-applicationset:v0.4.1

docker tag  192.168.123.14/library/ghcr.io/dexidp/dex:v2.30.2 192.168.123.14/library/dexidp/dex:v2.30.2
docker push 192.168.123.14/library/dexidp/dex:v2.30.2

docker tag  192.168.123.14/library/alpine:3.14 192.168.123.14/library/library/alpine:3.14
docker push 192.168.123.14/library/library/alpine:3.14

8.

Starting the actual deployment (it takes only about five minutes)

### Note: kubesphere-installer.yaml uses the private registry, so its image field was changed and an imagePullSecret was added.

[root@centos1 ~]# cat kubesphere-installer.yaml 
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: clusterconfigurations.installer.kubesphere.io
spec:
  group: installer.kubesphere.io
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              x-kubernetes-preserve-unknown-fields: true
            status:
              type: object
              x-kubernetes-preserve-unknown-fields: true
  scope: Namespaced
  names:
    plural: clusterconfigurations
    singular: clusterconfiguration
    kind: ClusterConfiguration
    shortNames:
      - cc

---
apiVersion: v1
kind: Namespace
metadata:
  name: kubesphere-system

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ks-installer
  namespace: kubesphere-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ks-installer
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - extensions
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - batch
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - tenant.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - certificates.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - devops.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - monitoring.coreos.com
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - logging.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - jaegertracing.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - storage.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - policy
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - autoscaling
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - networking.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - config.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - iam.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - notification.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - auditing.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - events.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - core.kubefed.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - installer.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - storage.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - security.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - monitoring.kiali.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - kiali.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - networking.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - edgeruntime.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - types.kubefed.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - monitoring.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - application.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'


---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ks-installer
subjects:
- kind: ServiceAccount
  name: ks-installer
  namespace: kubesphere-system
roleRef:
  kind: ClusterRole
  name: ks-installer
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    app: ks-installer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ks-installer
  template:
    metadata:
      labels:
        app: ks-installer
    spec:
      serviceAccountName: ks-installer
      containers:
      - name: installer
        image: 192.168.123.14/library/kubesphere/ks-installer:v3.3.2
        imagePullPolicy: "Always"
        resources:
          limits:
            cpu: "1"
            memory: 1Gi
          requests:
            cpu: 20m
            memory: 100Mi
        volumeMounts:
        - mountPath: /etc/localtime
          name: host-time
          readOnly: true
      volumes:
      - hostPath:
          path: /etc/localtime
          type: ""
        name: host-time
      imagePullSecrets:
      - name: harbor-login

The cluster configuration file follows. The IP addresses and the etcd endpoints in it must match your environment; since the images were pushed under library, set local_registry: 192.168.123.14/library.

[root@centos1 ~]# cat cluster-configuration.yaml 
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.3.2
spec:
  persistence:
    storageClass: ""        # If there is no default StorageClass in your cluster, you need to specify an existing StorageClass here.
  authentication:
    # adminPassword: ""     # Custom password of the admin user. If the parameter exists but the value is empty, a random password is generated. If the parameter does not exist, P@88w0rd is used.
    jwtSecret: ""           # Keep the jwtSecret consistent with the Host Cluster. Retrieve the jwtSecret by executing "kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret" on the Host Cluster.
  local_registry: 192.168.123.14/library        # Add your private registry address if it is needed.
  # dev_tag: ""               # Add your kubesphere image tag you want to install, by default it's same as ks-installer release version.
  etcd:
    monitoring: true       # Enable or disable etcd monitoring dashboard installation. You have to create a Secret for etcd before you enable it.
    endpointIps: 192.168.123.11  # etcd cluster EndpointIps. It can be a bunch of IPs here.
    port: 2379              # etcd port.
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true  # Enable or disable simultaneous logins. It allows different users to log in with the same account at the same time.
        port: 30880
        type: NodePort

    # apiserver:            # Enlarge the apiserver and controller manager's resource requests and limits for the large cluster
    #  resources: {}
    # controllerManager:
    #  resources: {}
    redis:
      enabled: true
      enableHA: false
      volumeSize: 2Gi # Redis PVC size.
    openldap:
      enabled: true
      volumeSize: 2Gi   # openldap PVC size.
    minio:
      volumeSize: 20Gi # Minio PVC size.
    monitoring:
      # type: external   # Whether to specify the external prometheus stack, and need to modify the endpoint at the next line.
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090 # Prometheus endpoint to get metrics data.
      GPUMonitoring:     # Enable or disable the GPU-related metrics. If you enable this switch but have no GPU resources, Kubesphere will set it to zero.
        enabled: false
    gpu:                 # Install GPUKinds. The default GPU kind is nvidia.com/gpu. Other GPU kinds can be added here according to your needs.
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:   # Storage backend for logging, events and auditing.
      # master:
      #   volumeSize: 4Gi  # The volume size of Elasticsearch master nodes.
      #   replicas: 1      # The total number of master nodes. Even numbers are not allowed.
      #   resources: {}
      # data:
      #   volumeSize: 20Gi  # The volume size of Elasticsearch data nodes.
      #   replicas: 1       # The total number of data nodes.
      #   resources: {}
      logMaxAge: 7             # Log retention time in built-in Elasticsearch. It is 7 days by default.
      elkPrefix: logstash      # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
  alerting:                # (CPU: 0.1 Core, Memory: 100 MiB) It enables users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
    enabled: true         # Enable or disable the KubeSphere Alerting System.
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:                # Provide a security-relevant chronological set of records,recording the sequence of activities happening on the platform, initiated by different tenants.
    enabled: true         # Enable or disable the KubeSphere Auditing Log System.
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:                  # (CPU: 0.47 Core, Memory: 8.6 G) Provide an out-of-the-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.
    enabled: true             # Enable or disable the KubeSphere DevOps System.
    # resources: {}
    jenkinsMemoryLim: 4Gi      # Jenkins memory limit.
    jenkinsMemoryReq: 2Gi   # Jenkins memory request.
    jenkinsVolumeSize: 8Gi     # Jenkins volume size.
  events:                  # Provide a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
    enabled: true         # Enable or disable the KubeSphere Events System.
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:                 # (CPU: 57 m, Memory: 2.76 G) Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
    enabled: true         # Enable or disable the KubeSphere Logging System.
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:                    # (CPU: 56 m, Memory: 44.35 MiB) It enables HPA (Horizontal Pod Autoscaler).
    enabled: false                   # Enable or disable metrics-server.
  monitoring:
    storageClass: ""                 # If there is an independent StorageClass you need for Prometheus, you can specify it here. The default StorageClass is used by default.
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1  # Prometheus replicas are responsible for monitoring different segments of data source and providing high availability.
    #   volumeSize: 20Gi  # Prometheus PVC size.
    #   resources: {}
    #   operator:
    #     resources: {}
    # alertmanager:
    #   replicas: 1          # AlertManager Replicas.
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:                           # GPU monitoring-related plug-in installation.
      nvidia_dcgm_exporter:        # Ensure that gpu resources on your hosts can be used normally, otherwise this plug-in will not work properly.
        enabled: false             # Check whether the labels on the GPU hosts contain "nvidia.com/gpu.present=true" to ensure that the DCGM pod is scheduled to these nodes.
        # resources: {}
  multicluster:
    clusterRole: none  # host | member | none  # You can install a solo cluster, or specify it as the Host or Member Cluster.
  network:
    networkpolicy: # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
      # Make sure that the CNI network plugin used by the cluster supports NetworkPolicy. There are a number of CNI network plugins that support NetworkPolicy, including Calico, Cilium, Kube-router, Romana and Weave Net.
      enabled: false # Enable or disable network policies.
    ippool: # Use Pod IP Pools to manage the Pod network address space. Pods to be created can be assigned IP addresses from a Pod IP Pool.
      type: none # Specify "calico" for this field if Calico is used as your CNI plugin. "none" means that Pod IP Pools are disabled.
    topology: # Use Service Topology to view Service-to-Service communication based on Weave Scope.
      type: none # Specify "weave-scope" for this field to enable Service Topology. "none" means that Service Topology is disabled.
  openpitrix: # An App Store that is accessible to all platform tenants. You can use it to manage apps across their entire lifecycle.
    store:
      enabled: true # Enable or disable the KubeSphere App Store.
  servicemesh:         # (0.3 Core, 300 MiB) Provide fine-grained traffic management, observability and tracing, and visualized traffic topology.
    enabled: true     # Base component (pilot). Enable or disable KubeSphere Service Mesh (Istio-based).
    istio:  # Customizing the istio installation configuration, refer to https://istio.io/latest/docs/setup/additional-setup/customize-installation/
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: true
        cni:
          enabled: true
  edgeruntime:          # Add edge nodes to your cluster and deploy workloads on edge nodes.
    enabled: false
    kubeedge:        # kubeedge configurations
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
            - ""            # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true 
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  gatekeeper:        # Provide admission policy and rule management, A validating (mutating TBA) webhook that enforces CRD-based policies executed by Open Policy Agent.
    enabled: false   # Enable or disable Gatekeeper.
    # controller_manager:
    #   resources: {}
    # audit:
    #   resources: {}
  terminal:
    # image: 'alpine:3.15' # There must be an nsenter program in the image

Watch the deployment log:

[root@centos1 ~]# k logs -n kubesphere-system ks-installer-85dcff96b4-7qpl5 -f

The tail of the output:
Collecting installation results ...
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.123.12:30880
Account: admin
Password: P@88w0rd
NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components 
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2023-06-30 18:35:27
#####################################################

The web console after installation completes:

(screenshots: the KubeSphere login page and cluster dashboard)

For both pages the account is admin and the initial password is P@88w0rd.

That completes the offline deployment of KubeSphere!
