
Deploying a Nacos Cluster on Kubernetes

This article walks through deploying a Nacos cluster on Kubernetes: NFS-backed dynamic PV provisioning, MySQL as the external datastore, and ingress-nginx for browser access. Hopefully it is useful; corrections and suggestions are welcome.

1. K8s Architecture

master: 11.0.1.3
node: 11.0.1.4, 11.0.1.5 (11.0.1.5 also serves as the NFS server)
nfs: 11.0.1.5

2. Install NFS

Install nfs-utils and rpcbind

Install the nfs-utils package on both the NFS client and the server:

yum install nfs-utils rpcbind -y

Create the shared directory:

mkdir -p /nfsdata
chmod 777 /nfsdata

Edit the /etc/exports file and add the following line:

vi /etc/exports

/nfsdata *(rw,sync,no_root_squash)

Start the services:

systemctl enable rpcbind.service --now
systemctl enable nfs.service --now

The services must be started in the order rpcbind first, then nfs; otherwise errors may occur. (On newer CentOS releases the NFS unit is named nfs-server.service.)
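
On the NFS server, you can re-export and confirm the share is visible (standard nfs-utils commands):

exportfs -rv
showmount -e 11.0.1.5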

3. Deploy the Automatic PV Provisioner

First, clone the nacos-k8s repository from GitHub:

git clone https://github.com/nacos-group/nacos-k8s.git

The directory structure is as follows:

[root@master1 ~]# tree nacos-k8s/deploy
nacos-k8s/deploy
├── ceph
│   ├── pvc.yaml
│   └── sc.yaml
├── ingress
│   ├── ingress-nginx.yaml
│   └── service-nodeport.yaml
├── mysql
│   ├── mysql-ceph.yaml
│   ├── mysql-local.yaml
│   ├── mysql-nfs.yaml
│   └── mysql.yaml
├── nacos
│   ├── nacos-ingress.yaml
│   ├── nacos-no-pvc-ingress.yaml
│   ├── nacos-pvc-ceph.yaml
│   ├── nacos-pvc-nfs.yaml
│   ├── nacos-quick-start.yaml
│   └── nacos-tmp.yaml
└── nfs
    ├── class.yaml
    ├── deployment.yaml
    └── rbac.yaml

Because a StorageClass enables dynamic provisioning, before using one we first need to install the storage driver's provisioner. This provisioner must have sufficient permissions to access the Kubernetes cluster (much like the dashboard, which needs access to the various APIs before it can manage anything).
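
Note that the manifests below all target the devops namespace, which neither the repository nor the steps so far create. Assuming you keep that namespace name, create it first:

kubectl create namespace devops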

Create the RBAC resources:

[root@master1 ~]# cat nacos-k8s/deploy/nfs/rbac.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: devops
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: devops
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: devops
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
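
Apply the RBAC manifest:

kubectl apply -f nacos-k8s/deploy/nfs/rbac.yaml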

Create the StorageClass:

[root@master1 ~]# cat nacos-k8s/deploy/nfs/class.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "false"
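
Apply it (note that the provisioner field must match the PROVISIONER_NAME environment variable in the Deployment below):

kubectl apply -f nacos-k8s/deploy/nfs/class.yaml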

Create the provisioner program (the NFS client provisioner):

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
  namespace: devops
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              # the IP of your NFS server
              value: 11.0.1.5
            - name: NFS_PATH
              # the exported path on the NFS server set up above
              value: /nfsdata
      volumes:
        - name: nfs-client-root
          nfs:
            # the IP of your NFS server
            server: 11.0.1.5
            # the exported path on the NFS server set up above
            path: /nfsdata
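
Apply it and confirm the provisioner pod comes up:

kubectl apply -f nacos-k8s/deploy/nfs/deployment.yaml
kubectl get pods -n devops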

4. Deploy MySQL

The MySQL manifest is as follows:

[root@master1 ~]# cat nacos-k8s/deploy/mysql/mysql-nfs.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql
  namespace: devops
  annotations:
    volume.beta.kubernetes.io/storage-class: managed-nfs-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: managed-nfs-storage
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql
  namespace: devops
  labels:
    name: mysql
spec:
  replicas: 1
  selector:
    name: mysql
  template:
    metadata:
      labels:
        name: mysql
    spec:
      containers:
      - name: mysql
        image: nacos/nacos-mysql:5.7
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "root"
        - name: MYSQL_DATABASE
          value: "nacos_devtest"
        - name: MYSQL_USER
          value: "nacos"
        - name: MYSQL_PASSWORD
          value: "nacos"
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-data
        persistentVolumeClaim:
          claimName: mysql
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: devops
  labels:
    name: mysql
spec:
  type: NodePort
  ports:
  - port: 3306
    targetPort: 3306
    nodePort: 30001
  selector:
    name: mysql
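
Apply the manifest:

kubectl apply -f nacos-k8s/deploy/mysql/mysql-nfs.yaml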

Check the result:

[root@master1 ~]# kubectl get pod,svc -n devops
NAME                                          READY   STATUS    RESTARTS   AGE
pod/mysql-bthlg                               1/1     Running   0          79m
pod/nfs-client-provisioner-5698b6944c-r4fqt   1/1     Running   0          31m

NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                               AGE
service/mysql            NodePort    10.103.249.249   <none>        3306:30001/TCP                        79m

5. Deploy the Nacos Cluster

[root@master1 ~]# cat nacos-k8s/deploy/nacos/nacos-nfs.yaml 
# Please read the wiki article first:
# https://github.com/nacos-group/nacos-k8s/wiki/%E4%BD%BF%E7%94%A8peerfinder%E6%89%A9%E5%AE%B9%E6%8F%92%E4%BB%B6
---
apiVersion: v1
kind: Service
metadata:
  name: nacos-headless
  namespace: devops
  labels:
    app: nacos
spec:
  publishNotReadyAddresses: true 
  ports:
    - port: 8848
      name: server
      targetPort: 8848
    - port: 9848
      name: client-rpc
      targetPort: 9848
    - port: 9849
      name: raft-rpc
      targetPort: 9849
    ## election port kept for compatibility with Nacos 1.4.x
    - port: 7848
      name: old-raft-rpc
      targetPort: 7848
  clusterIP: None
  selector:
    app: nacos
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nacos-cm
  namespace: devops
data:
  mysql.host: "mysql"
  mysql.db.name: "nacos_devtest"
  mysql.port: "3306"
  mysql.user: "nacos"
  mysql.password: "nacos"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nacos
  namespace: devops
spec:
  podManagementPolicy: Parallel
  serviceName: nacos-headless
  replicas: 3
  template:
    metadata:
      labels:
        app: nacos
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - nacos
              topologyKey: "kubernetes.io/hostname"
      serviceAccountName: nfs-client-provisioner
      initContainers:
        - name: peer-finder-plugin-install
          image: nacos/nacos-peer-finder-plugin:1.1
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /home/nacos/plugins/peer-finder
              name: data
              subPath: peer-finder
      containers:
        - name: nacos
          imagePullPolicy: IfNotPresent
          image: nacos/nacos-server:v2.2.0
          resources:
            requests:
              memory: "2Gi"
              cpu: "500m"
          ports:
            - containerPort: 8848
              name: client-port
            - containerPort: 9848
              name: client-rpc
            - containerPort: 9849
              name: raft-rpc
            - containerPort: 7848
              name: old-raft-rpc
          env:
            - name: NACOS_REPLICAS
              value: "3"
            - name: SERVICE_NAME
              value: "nacos-headless"
            - name: DOMAIN_NAME
              value: "cluster.local"
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: MYSQL_SERVICE_HOST
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.host
            - name: MYSQL_SERVICE_DB_NAME
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.db.name
            - name: MYSQL_SERVICE_PORT
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.port
            - name: MYSQL_SERVICE_USER
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.user
            - name: MYSQL_SERVICE_PASSWORD
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.password
            - name: SPRING_DATASOURCE_PLATFORM
              value: "mysql"
            - name: NACOS_SERVER_PORT
              value: "8848"
            - name: NACOS_APPLICATION_PORT
              value: "8848"
            - name: PREFER_HOST_MODE
              value: "hostname"
          volumeMounts:
            - name: data
              mountPath: /home/nacos/plugins/peer-finder
              subPath: peer-finder
            - name: data
              mountPath: /home/nacos/data
              subPath: data
            - name: data
              mountPath: /home/nacos/logs
              subPath: logs
  volumeClaimTemplates:
    - metadata:
        name: data
        annotations:
          volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
      spec:
        accessModes: [ "ReadWriteMany" ]
        resources:
          requests:
            storage: 1Gi
  selector:
    matchLabels:
      app: nacos
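
Apply the manifest (the author saved it as nacos-nfs.yaml, matching the cat command above):

kubectl apply -f nacos-k8s/deploy/nacos/nacos-nfs.yaml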

Once everything has been applied and started, check the result:

[root@master1 ~]# kubectl get pod,svc -n devops
NAME                                          READY   STATUS    RESTARTS   AGE
pod/mysql-bthlg                               1/1     Running   0          79m
pod/nacos-0                                   1/1     Running   0          30m
pod/nacos-1                                   1/1     Running   0          30m
pod/nacos-2                                   1/1     Running   0          30m
pod/nfs-client-provisioner-5698b6944c-r4fqt   1/1     Running   0          31m

NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                               AGE
service/mysql            NodePort    10.103.249.249   <none>        3306:30001/TCP                        79m
service/nacos-headless   ClusterIP   None             <none>        8848/TCP,9848/TCP,9849/TCP,7848/TCP   30m
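
The volumeClaimTemplates should also have produced one bound PVC per Nacos replica, which you can confirm with:

kubectl get pvc -n devops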

Check the Nacos logs:

[root@master1 ~]# kubectl logs -f nacos-0 -n devops
+ export CUSTOM_SEARCH_NAMES=application
+ CUSTOM_SEARCH_NAMES=application
+ export CUSTOM_SEARCH_LOCATIONS=file:/home/nacos/conf/
+ CUSTOM_SEARCH_LOCATIONS=file:/home/nacos/conf/
+ export MEMBER_LIST=
+ MEMBER_LIST=
+ PLUGINS_DIR=/home/nacos/plugins/peer-finder
+ [[ cluster == \s\t\a\n\d\a\l\o\n\e ]]
+ [[ '' == \e\m\b\e\d\d\e\d ]]
+ JAVA_OPT=' -server -Xms1g -Xmx1g -Xmn512m -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=320m'
+ [[ n == \y ]]
+ JAVA_OPT=' -server -Xms1g -Xmx1g -Xmn512m -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=320m -XX:-OmitStackTraceInFastThrow -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/nacos/logs/java_heapdump.hprof'
+ JAVA_OPT=' -server -Xms1g -Xmx1g -Xmn512m -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=320m -XX:-OmitStackTraceInFastThrow -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/nacos/logs/java_heapdump.hprof -XX:-UseLargePages'
+ print_servers
+ [[ ! -d /home/nacos/plugins/peer-finder ]]
+ bash /home/nacos/plugins/peer-finder/plugin.sh
+ sleep 30
2023/04/12 15:21:58 Peer list updated
was []
now [nacos-0.nacos-headless.devops.svc.cluster.local nacos-1.nacos-headless.devops.svc.cluster.local nacos-2.nacos-headless.devops.svc.cluster.local]
2023/04/12 15:21:58 execing: /home/nacos/plugins/peer-finder/on-start.sh with stdin: nacos-0.nacos-headless.devops.svc.cluster.local
nacos-1.nacos-headless.devops.svc.cluster.local
nacos-2.nacos-headless.devops.svc.cluster.local
2023/04/12 15:21:59 
+ [[ all == \c\o\n\f\i\g ]]
+ [[ all == \n\a\m\i\n\g ]]
+ [[ ! -z '' ]]
+ [[ ! -z '' ]]
+ [[ ! -z '' ]]
+ [[ ! -z '' ]]
+ [[ ! -z '' ]]
+ [[ hostname == \h\o\s\t\n\a\m\e ]]
+ JAVA_OPT=' -server -Xms1g -Xmx1g -Xmn512m -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=320m -XX:-OmitStackTraceInFastThrow -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/nacos/logs/java_heapdump.hprof -XX:-UseLargePages -Dnacos.preferHostnameOverIp=true'
+ JAVA_OPT=' -server -Xms1g -Xmx1g -Xmn512m -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=320m -XX:-OmitStackTraceInFastThrow -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/nacos/logs/java_heapdump.hprof -XX:-UseLargePages -Dnacos.preferHostnameOverIp=true -Dnacos.member.list='
++ sed -E -n 's/.* version "([0-9]*).*$/\1/p'
++ /usr/lib/jvm/java-1.8.0-openjdk/bin/java -version
+ JAVA_MAJOR_VERSION=1
+ [[ 1 -ge 9 ]]
+ JAVA_OPT=' -server -Xms1g -Xmx1g -Xmn512m -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=320m -XX:-OmitStackTraceInFastThrow -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/nacos/logs/java_heapdump.hprof -XX:-UseLargePages -Dnacos.preferHostnameOverIp=true -Dnacos.member.list= -Djava.ext.dirs=/usr/lib/jvm/java-1.8.0-openjdk/jre/lib/ext:/usr/lib/jvm/java-1.8.0-openjdk/lib/ext:/home/nacos/plugins/health:/home/nacos/plugins/cmdb:/home/nacos/plugins/mysql'
+ JAVA_OPT=' -server -Xms1g -Xmx1g -Xmn512m -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=320m -XX:-OmitStackTraceInFastThrow -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/nacos/logs/java_heapdump.hprof -XX:-UseLargePages -Dnacos.preferHostnameOverIp=true -Dnacos.member.list= -Djava.ext.dirs=/usr/lib/jvm/java-1.8.0-openjdk/jre/lib/ext:/usr/lib/jvm/java-1.8.0-openjdk/lib/ext:/home/nacos/plugins/health:/home/nacos/plugins/cmdb:/home/nacos/plugins/mysql -Xloggc:/home/nacos/logs/nacos_gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M'
+ JAVA_OPT=' -server -Xms1g -Xmx1g -Xmn512m -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=320m -XX:-OmitStackTraceInFastThrow -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/nacos/logs/java_heapdump.hprof -XX:-UseLargePages -Dnacos.preferHostnameOverIp=true -Dnacos.member.list= -Djava.ext.dirs=/usr/lib/jvm/java-1.8.0-openjdk/jre/lib/ext:/usr/lib/jvm/java-1.8.0-openjdk/lib/ext:/home/nacos/plugins/health:/home/nacos/plugins/cmdb:/home/nacos/plugins/mysql -Xloggc:/home/nacos/logs/nacos_gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M -Dnacos.home=/home/nacos'
+ JAVA_OPT=' -server -Xms1g -Xmx1g -Xmn512m -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=320m -XX:-OmitStackTraceInFastThrow -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/nacos/logs/java_heapdump.hprof -XX:-UseLargePages -Dnacos.preferHostnameOverIp=true -Dnacos.member.list= -Djava.ext.dirs=/usr/lib/jvm/java-1.8.0-openjdk/jre/lib/ext:/usr/lib/jvm/java-1.8.0-openjdk/lib/ext:/home/nacos/plugins/health:/home/nacos/plugins/cmdb:/home/nacos/plugins/mysql -Xloggc:/home/nacos/logs/nacos_gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M -Dnacos.home=/home/nacos -jar /home/nacos/target/nacos-server.jar'
+ JAVA_OPT=' -server -Xms1g -Xmx1g -Xmn512m -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=320m -XX:-OmitStackTraceInFastThrow -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/nacos/logs/java_heapdump.hprof -XX:-UseLargePages -Dnacos.preferHostnameOverIp=true -Dnacos.member.list= -Djava.ext.dirs=/usr/lib/jvm/java-1.8.0-openjdk/jre/lib/ext:/usr/lib/jvm/java-1.8.0-openjdk/lib/ext:/home/nacos/plugins/health:/home/nacos/plugins/cmdb:/home/nacos/plugins/mysql -Xloggc:/home/nacos/logs/nacos_gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M -Dnacos.home=/home/nacos -jar /home/nacos/target/nacos-server.jar '
+ JAVA_OPT=' -server -Xms1g -Xmx1g -Xmn512m -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=320m -XX:-OmitStackTraceInFastThrow -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/nacos/logs/java_heapdump.hprof -XX:-UseLargePages -Dnacos.preferHostnameOverIp=true -Dnacos.member.list= -Djava.ext.dirs=/usr/lib/jvm/java-1.8.0-openjdk/jre/lib/ext:/usr/lib/jvm/java-1.8.0-openjdk/lib/ext:/home/nacos/plugins/health:/home/nacos/plugins/cmdb:/home/nacos/plugins/mysql -Xloggc:/home/nacos/logs/nacos_gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M -Dnacos.home=/home/nacos -jar /home/nacos/target/nacos-server.jar  --spring.config.additional-location=file:/home/nacos/conf/'
+ JAVA_OPT=' -server -Xms1g -Xmx1g -Xmn512m -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=320m -XX:-OmitStackTraceInFastThrow -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/nacos/logs/java_heapdump.hprof -XX:-UseLargePages -Dnacos.preferHostnameOverIp=true -Dnacos.member.list= -Djava.ext.dirs=/usr/lib/jvm/java-1.8.0-openjdk/jre/lib/ext:/usr/lib/jvm/java-1.8.0-openjdk/lib/ext:/home/nacos/plugins/health:/home/nacos/plugins/cmdb:/home/nacos/plugins/mysql -Xloggc:/home/nacos/logs/nacos_gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M -Dnacos.home=/home/nacos -jar /home/nacos/target/nacos-server.jar  --spring.config.additional-location=file:/home/nacos/conf/ --spring.config.name=application'
+ JAVA_OPT=' -server -Xms1g -Xmx1g -Xmn512m -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=320m -XX:-OmitStackTraceInFastThrow -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/nacos/logs/java_heapdump.hprof -XX:-UseLargePages -Dnacos.preferHostnameOverIp=true -Dnacos.member.list= -Djava.ext.dirs=/usr/lib/jvm/java-1.8.0-openjdk/jre/lib/ext:/usr/lib/jvm/java-1.8.0-openjdk/lib/ext:/home/nacos/plugins/health:/home/nacos/plugins/cmdb:/home/nacos/plugins/mysql -Xloggc:/home/nacos/logs/nacos_gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M -Dnacos.home=/home/nacos -jar /home/nacos/target/nacos-server.jar  --spring.config.additional-location=file:/home/nacos/conf/ --spring.config.name=application --logging.config=/home/nacos/conf/nacos-logback.xml'
+ JAVA_OPT=' -server -Xms1g -Xmx1g -Xmn512m -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=320m -XX:-OmitStackTraceInFastThrow -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/nacos/logs/java_heapdump.hprof -XX:-UseLargePages -Dnacos.preferHostnameOverIp=true -Dnacos.member.list= -Djava.ext.dirs=/usr/lib/jvm/java-1.8.0-openjdk/jre/lib/ext:/usr/lib/jvm/java-1.8.0-openjdk/lib/ext:/home/nacos/plugins/health:/home/nacos/plugins/cmdb:/home/nacos/plugins/mysql -Xloggc:/home/nacos/logs/nacos_gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M -Dnacos.home=/home/nacos -jar /home/nacos/target/nacos-server.jar  --spring.config.additional-location=file:/home/nacos/conf/ --spring.config.name=application --logging.config=/home/nacos/conf/nacos-logback.xml --server.max-http-header-size=524288'
+ echo 'Nacos is starting, you can docker logs your container'
+ exec /usr/lib/jvm/java-1.8.0-openjdk/bin/java -server -Xms1g -Xmx1g -Xmn512m -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=320m -XX:-OmitStackTraceInFastThrow -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/nacos/logs/java_heapdump.hprof -XX:-UseLargePages -Dnacos.preferHostnameOverIp=true -Dnacos.member.list= -Djava.ext.dirs=/usr/lib/jvm/java-1.8.0-openjdk/jre/lib/ext:/usr/lib/jvm/java-1.8.0-openjdk/lib/ext:/home/nacos/plugins/health:/home/nacos/plugins/cmdb:/home/nacos/plugins/mysql -Xloggc:/home/nacos/logs/nacos_gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M -Dnacos.home=/home/nacos -jar /home/nacos/target/nacos-server.jar --spring.config.additional-location=file:/home/nacos/conf/ --spring.config.name=application --logging.config=/home/nacos/conf/nacos-logback.xml --server.max-http-header-size=524288
Nacos is starting, you can docker logs your container

         ,--.
       ,--.'|
   ,--,:  : |                                           Nacos 2.2.0
,`--.'`|  ' :                       ,---.               Running in cluster mode, All function modules
|   :  :  | |                      '   ,'\   .--.--.    Port: 8848
:   |   \ | :  ,--.--.     ,---.  /   /   | /  /    '   Pid: 1
|   : '  '; | /       \   /     \.   ; ,. :|  :  /`./   Console: http://nacos-0.nacos-headless.devops.svc.cluster.local:8848/nacos/index.html
'   ' ;.    ;.--.  .-. | /    / ''   | |: :|  :  ;_
|   | | \   | \__\/: . ..    ' / '   | .; : \  \    `.      https://nacos.io
'   : |  ; .' ," .--.; |'   ; :__|   :    |  `----.   \
|   | '`--'  /  /  ,.  |'   | '.'|\   \  /  /  /`--'  /
'   : |     ;  :   .'   \   :    : `----'  '--'.     /
;   |.'     |  ,     .-./\   \  /            `--'---'
'---'        `--`---'     `----'

2023-04-12 15:22:47,327 INFO The server IP list of Nacos is [nacos-0.nacos-headless.devops.svc.cluster.local:8848, nacos-1.nacos-headless.devops.svc.cluster.local:8848, nacos-2.nacos-headless.devops.svc.cluster.local:8848]

2023-04-12 15:22:48,966 INFO Nacos is starting...

2023-04-12 15:22:51,275 INFO Nacos is starting...

2023-04-12 15:22:52,431 INFO Nacos is starting...

2023-04-12 15:22:53,841 INFO Nacos is starting...

2023-04-12 15:22:55,126 INFO Nacos is starting...

2023-04-12 15:22:56,432 INFO Nacos is starting...

2023-04-12 15:22:57,614 INFO Nacos is starting...

2023-04-12 15:22:59,760 INFO Nacos is starting...

2023-04-12 15:23:02,355 INFO Nacos is starting...

2023-04-12 15:23:03,414 INFO Nacos is starting...

2023-04-12 15:23:04,512 INFO Nacos is starting...

2023-04-12 15:23:05,565 INFO Nacos is starting...

2023-04-12 15:23:06,566 INFO Nacos is starting...

2023-04-12 15:23:07,567 INFO Nacos is starting...

2023-04-12 15:23:09,218 INFO Nacos is starting...

2023-04-12 15:23:10,395 INFO Nacos is starting...

2023-04-12 15:23:11,413 INFO Nacos is starting...

2023-04-12 15:23:12,416 INFO Nacos is starting...

2023-04-12 15:23:13,417 INFO Nacos is starting...

2023-04-12 15:23:14,431 INFO Nacos is starting...

2023-04-12 15:23:16,033 INFO Nacos is starting...

2023-04-12 15:23:18,371 INFO Nacos is starting...

2023-04-12 15:23:19,642 INFO Nacos is starting...

2023-04-12 15:23:20,762 INFO Nacos is starting...

2023-04-12 15:23:21,870 INFO Nacos is starting...

2023-04-12 15:23:22,883 INFO Nacos is starting...

2023-04-12 15:23:23,917 INFO Nacos is starting...

2023-04-12 15:23:24,983 INFO Nacos is starting...

2023-04-12 15:23:25,171 INFO Nacos started successfully in cluster mode. use external storage

The logs show that nacos-0, nacos-1, and nacos-2 are all running normally in cluster mode. However, the Nacos Service is currently a headless ClusterIP service, so it cannot yet be reached from a browser; for that we need ingress-nginx.
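
Before the Ingress is in place, a quick way to sanity-check the console is kubectl port-forward (a temporary debugging tunnel, not a production access path):

kubectl port-forward pod/nacos-0 8848:8848 -n devops

Then open http://localhost:8848/nacos on the machine running kubectl.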

6. Create the Ingress

Fetch and install ingress-nginx; this walkthrough uses version 0.30:

[root@master1 ~]# mkdir -p nacos-k8s/deploy/ingress
cd nacos-k8s/deploy/ingress
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/provider/baremetal/service-nodeport.yaml
kubectl apply -f ./

If the two files above are difficult to download, you can copy them from below.
The contents of mandatory.yaml are as follows:

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 101
            runAsUser: 101
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown

---

apiVersion: v1
kind: LimitRange
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  limits:
  - min:
      memory: 90Mi
      cpu: 100m
    type: Container

The contents of service-nodeport.yaml are as follows:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

Check the ingress-nginx Service that was created; its HTTP NodePort here is 30565:

[root@master1 ~]# kubectl get svc -n ingress-nginx
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   NodePort   10.110.95.210   <none>        80:30565/TCP,443:30228/TCP   16m
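
You can also confirm the controller pod itself is running:

kubectl get pods -n ingress-nginx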

Create the Ingress for Nacos:

[root@master1 nacos]# cat nacos-ingress.yaml 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nacos-ingress
  namespace: devops
spec:
  rules:
  - host: www.mynacos.com 
    http:
      paths:
      - path: /nacos
        backend:
          serviceName: nacos-headless 
          servicePort: 8848
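
Apply it:

kubectl apply -f nacos-ingress.yaml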

After creating it, wait a moment for the Ingress to obtain an address, then test access:

[root@master1 ~]# kubectl get ingress -n devops
NAME            CLASS    HOSTS             ADDRESS   PORTS   AGE
nacos-ingress   <none>   www.mynacos.com             80      10s

[root@master1 ~]# kubectl get ingress -n devops
NAME            CLASS    HOSTS             ADDRESS         PORTS   AGE
nacos-ingress   <none>   www.mynacos.com   10.110.95.210   80      35m

Add the following entry to the Windows hosts file:

11.0.1.3 www.mynacos.com
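
Alternatively, you can test the routing without editing the hosts file by passing the Host header directly with curl:

curl -H "Host: www.mynacos.com" http://11.0.1.3:30565/nacos/index.html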

Access www.mynacos.com:30565/nacos in a browser:
[Screenshot: Nacos console login page]
Log in with username and password both set to nacos:
[Screenshot: Nacos cluster node list]
You can see that the three nodes in the current cluster are exactly the ones we created.
