
Deploying a K8s Cluster on Huawei's openEuler Operating System

This article walks through deploying a Kubernetes (K8s) cluster on openEuler. If you spot an error or an overlooked case, corrections are welcome.

Three openEuler virtual machines must be prepared in advance; this article creates them from a template VM.

1. Host Hardware Requirements

1.1 Host Operating System


No.  OS and version            Notes
1    openEuler-22.03-LTS-SP1   Download: https://repo.openeuler.org/openEuler-22.03-LTS-SP1/ISO/x86_64/openEuler-22.03-LTS-SP1-x86_64-dvd.iso

1.2 Host Hardware Configuration

CPU  Memory  Disk  Role           Hostname
4C   4G      1TB   master         k8s-master01
4C   4G      1TB   worker (node)  k8s-worker01
4C   4G      1TB   worker (node)  k8s-worker02

2. Host Preparation

2.1 Hostname Configuration

This deployment uses three hosts: one master node named k8s-master01 and two worker nodes named k8s-worker01 and k8s-worker02.

On the master node:
# hostnamectl set-hostname k8s-master01
On the worker01 node:
# hostnamectl set-hostname k8s-worker01
On the worker02 node:
# hostnamectl set-hostname k8s-worker02

2.2 Host IP Address Configuration

The k8s-master01 node uses IP address 192.168.10.160/24:
# vim /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
IPADDR="192.168.10.160"
PREFIX="24"
GATEWAY="192.168.10.2"
DNS1="119.29.29.29"
The k8s-worker01 node uses IP address 192.168.10.161/24:
# vim /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
IPADDR="192.168.10.161"
PREFIX="24"
GATEWAY="192.168.10.2"
DNS1="119.29.29.29"
The k8s-worker02 node uses IP address 192.168.10.162/24:
# vim /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
IPADDR="192.168.10.162"
PREFIX="24"
GATEWAY="192.168.10.2"
DNS1="119.29.29.29"
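After editing an ifcfg file, the connection profile has to be reloaded and re-activated before the new address takes effect. A sketch, assuming the interface is managed by NetworkManager (openEuler's default) and is named ens33 as above:

```shell
# Re-read connection profiles from disk, then bring the connection up
# with the new settings (run on each node after editing its ifcfg file).
nmcli connection reload
nmcli connection up ens33
```

A reboot achieves the same result, but reloading avoids interrupting the other nodes' setup.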

2.3 Hostname and IP Address Resolution

This must be configured on all cluster hosts.

# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.160 k8s-master01
192.168.10.161 k8s-worker01
192.168.10.162 k8s-worker02

2.4 Firewall Configuration

Required on all hosts.

Disable and stop the existing firewalld firewall:
# systemctl disable firewalld
# systemctl stop firewalld
# firewall-cmd --state
not running

2.5 SELinux Configuration

Required on all hosts. The SELinux change takes effect after the operating system is rebooted.

# sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

2.6 Time Synchronization

Required on all hosts. On a minimal installation, the ntpdate package must be installed first.

# crontab -l
0 */1 * * * /usr/sbin/ntpdate time1.aliyun.com
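Section 2.6 requires installing ntpdate and registering the cron job shown above; a sketch doing both non-interactively, assuming ntpdate is available from the configured openEuler repositories:

```shell
# Install ntpdate, then append the hourly sync job to root's crontab,
# preserving any existing entries.
yum -y install ntpdate
(crontab -l 2>/dev/null; echo '0 */1 * * * /usr/sbin/ntpdate time1.aliyun.com') | crontab -
```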

2.7 Kernel Forwarding and Bridge Filtering

Required on all hosts.

Enable kernel IP forwarding:
# vim /etc/sysctl.conf
# cat /etc/sysctl.conf
......
net.ipv4.ip_forward=1
......
Add a configuration file for bridge filtering and kernel forwarding:
# cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
Load the br_netfilter module:
# modprobe br_netfilter
Verify that it is loaded:
# lsmod | grep br_netfilter
br_netfilter           22256  0
bridge                151336  1 br_netfilter
Apply the default configuration file:
# sysctl -p
Apply the newly added configuration file:
# sysctl -p /etc/sysctl.d/k8s.conf

2.8 Install ipset and ipvsadm

Required on all hosts.

Install ipset and ipvsadm:
# yum -y install ipset ipvsadm
Configure how the ipvs modules are loaded by listing the modules to load:
# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
Make the script executable, run it, and check that the modules are loaded:
# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
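On systemd-based distributions such as openEuler, the same modules can alternatively be made persistent across reboots via modules-load.d instead of the legacy /etc/sysconfig/modules script above; a sketch:

```shell
# systemd-modules-load reads this directory at boot and loads
# each module listed, one per line.
cat > /etc/modules-load.d/ipvs.conf <<'EOF'
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
```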

2.9 Disable the Swap Partition

The /etc/fstab change takes effect after a reboot; to avoid rebooting immediately, swap can also be disabled for the current session with swapoff -a.

Temporarily disable swap:
# swapoff -a
Permanently disable the swap partition (requires a reboot):
# cat /etc/fstab
......

# /dev/mapper/openeuler-swap none                    swap    defaults        0 0

Add a # at the beginning of the swap line, as shown above.
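The fstab edit can also be scripted with sed instead of a manual vim session. The sketch below demonstrates the expression on a scratch file; on a real node, run the same sed (as root) against /etc/fstab:

```shell
# Demonstrate on a temporary file so this is safe to run anywhere.
fstab=$(mktemp)
printf '/dev/mapper/openeuler-swap none swap defaults 0 0\n' > "$fstab"
# Comment out every non-commented line whose filesystem type is "swap".
sed -ri 's@^([^#].*[[:space:]]swap[[:space:]].*)@#\1@' "$fstab"
cat "$fstab"
```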

3. Container Runtime Installation

Check which docker packages are available:
# yum list | grep docker
pcp-pmda-docker.x86_64            5.3.7-2.oe2203sp1        @anaconda
docker-client-java.noarch         8.11.7-2.oe2203sp1       everything
docker-client-java.src            8.11.7-2.oe2203sp1       source
docker-compose.noarch             1.22.0-4.oe2203sp1       everything
docker-compose.src                1.22.0-4.oe2203sp1       source
docker-engine.src                 2:18.09.0-316.oe2203sp1  source
docker-engine.x86_64              2:18.09.0-316.oe2203sp1  OS
docker-engine.x86_64              2:18.09.0-316.oe2203sp1  everything
docker-engine-debuginfo.x86_64    2:18.09.0-316.oe2203sp1  debuginfo
docker-engine-debugsource.x86_64  2:18.09.0-316.oe2203sp1  debuginfo
docker-runc.src                   1.1.3-9.oe2203sp1        update-source
docker-runc.x86_64                1.1.3-9.oe2203sp1        update
podman-docker.noarch              1:0.10.1-12.oe2203sp1    everything
python-docker.src                 5.0.3-1.oe2203sp1        source
python-docker-help.noarch         5.0.3-1.oe2203sp1        everything
python-docker-pycreds.src         0.4.0-2.oe2203sp1        source
python-dockerpty.src              0.4.1-3.oe2203sp1        source
python-dockerpty-help.noarch      0.4.1-3.oe2203sp1        everything
python3-docker.noarch             5.0.3-1.oe2203sp1        everything
python3-docker-pycreds.noarch     0.4.0-2.oe2203sp1        everything
python3-dockerpty.noarch          0.4.1-3.oe2203sp1        everything
Install docker:
# dnf install docker

Last metadata expiration check: 0:53:18 ago on Fri 03 Feb 2023 11:30:19.
Dependencies resolved.
===========================================================================================================================================================
 Package                                Architecture                    Version                                          Repository                   Size
===========================================================================================================================================================
Installing:
 docker-engine                          x86_64                          2:18.09.0-316.oe2203sp1                          OS                           38 M
Installing dependencies:
 libcgroup                              x86_64                          0.42.2-3.oe2203sp1                               OS                           96 k

Transaction Summary
===========================================================================================================================================================
Install  2 Packages

Total download size: 39 M
Installed size: 160 M
Is this ok [y/N]: y
Downloading Packages:
(1/2): libcgroup-0.42.2-3.oe2203sp1.x86_64.rpm                                                                             396 kB/s |  96 kB     00:00
(2/2): docker-engine-18.09.0-316.oe2203sp1.x86_64.rpm                                                                       10 MB/s |  38 MB     00:03
-----------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                       10 MB/s |  39 MB     00:03
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                   1/1
  Running scriptlet: libcgroup-0.42.2-3.oe2203sp1.x86_64                                                                                               1/2
  Installing       : libcgroup-0.42.2-3.oe2203sp1.x86_64                                                                                               1/2
  Running scriptlet: libcgroup-0.42.2-3.oe2203sp1.x86_64                                                                                               1/2
  Installing       : docker-engine-2:18.09.0-316.oe2203sp1.x86_64                                                                                      2/2
  Running scriptlet: docker-engine-2:18.09.0-316.oe2203sp1.x86_64                                                                                      2/2
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.

  Verifying        : docker-engine-2:18.09.0-316.oe2203sp1.x86_64                                                                                      1/2
  Verifying        : libcgroup-0.42.2-3.oe2203sp1.x86_64                                                                                               2/2

Installed:
  docker-engine-2:18.09.0-316.oe2203sp1.x86_64                                     libcgroup-0.42.2-3.oe2203sp1.x86_64

Complete!
Enable docker to start at boot and start it now:
# systemctl enable --now docker
Check the docker version:
# docker version
Client:
 Version:           18.09.0
 EulerVersion:      18.09.0.316
 API version:       1.39
 Go version:        go1.17.3
 Git commit:        9b9af2f
 Built:             Tue Dec 27 14:25:30 2022
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.09.0
  EulerVersion:     18.09.0.316
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.17.3
  Git commit:       9b9af2f
  Built:            Tue Dec 27 14:24:56 2022
  OS/Arch:          linux/amd64
  Experimental:     false
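The kubeadm preflight check in section 5 warns that Docker's default "cgroupfs" cgroup driver differs from the recommended "systemd" driver. A sketch of switching drivers before initializing the cluster, assuming this Docker build reads /etc/docker/daemon.json as usual (run as root on every node):

```shell
# Configure Docker to use the systemd cgroup driver, matching the kubelet.
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
```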

4. Kubernetes Software Installation

Install conntrack (connection tracking), a Kubernetes dependency:
# dnf install conntrack
On the k8s master node:
# dnf install -y kubernetes-kubeadm kubernetes-kubelet kubernetes-master
On the k8s worker nodes:
# dnf install -y kubernetes-kubeadm kubernetes-kubelet kubernetes-node
Enable the kubelet service:
# systemctl enable kubelet

5. Cluster Initialization

[root@k8s-master01 ~]# kubeadm init --apiserver-advertise-address=192.168.10.160 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.20.2 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16
Output:
[init] Using Kubernetes version: v1.20.2
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING FileExisting-socat]: socat not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.10.160]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.10.160 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.10.160 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.502722 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: jvx2bb.pfd31288qyqcfsn7
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.160:6443 --token jvx2bb.pfd31288qyqcfsn7 \
    --discovery-token-ca-cert-hash sha256:740fa71f6c5acf156195ce6989cb49b7a64fd061b8bf56e4b1b684cbedafbd40
[root@k8s-master01 ~]# mkdir -p $HOME/.kube
[root@k8s-master01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
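The join command printed above embeds a bootstrap token, which by default expires after 24 hours. If a worker needs to be added later, a fresh join command can be generated on the master:

```shell
# Print a ready-to-run `kubeadm join ...` line with a new bootstrap token.
kubeadm token create --print-join-command
```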

6. Joining Worker Nodes to the Cluster

[root@k8s-worker01 ~]# kubeadm join 192.168.10.160:6443 --token jvx2bb.pfd31288qyqcfsn7 \
    --discovery-token-ca-cert-hash sha256:740fa71f6c5acf156195ce6989cb49b7a64fd061b8bf56e4b1b684cbedafbd40
[root@k8s-worker02 ~]# kubeadm join 192.168.10.160:6443 --token jvx2bb.pfd31288qyqcfsn7 \
    --discovery-token-ca-cert-hash sha256:740fa71f6c5acf156195ce6989cb49b7a64fd061b8bf56e4b1b684cbedafbd40
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES                  AGE     VERSION
k8s-master01   NotReady   control-plane,master   3m59s   v1.20.2
k8s-worker01   NotReady   <none>                 18s     v1.20.2
k8s-worker02   NotReady   <none>                 10s     v1.20.2

All nodes report NotReady at this point because no pod network plugin has been installed yet; the next section takes care of that.

7. Installing the Cluster Network Plugin (Calico)

[root@k8s-master01 ~]# wget https://docs.projectcalico.org/v3.19/manifests/calico.yaml
[root@k8s-master01 ~]# vim calico.yaml
The following two lines are commented out by default in calico.yaml; uncomment them and change the CALICO_IPV4POOL_CIDR value to the pod network specified during kubeadm init (10.244.0.0/16 here).
3680             # The default IPv4 pool to create on startup if none exists. Pod IPs will be
3681             # chosen from this range. Changing this value after installation will have
3682             # no effect. This should fall within `--cluster-cidr`.
3683             - name: CALICO_IPV4POOL_CIDR
3684               value: "10.244.0.0/16"
3685             # Disable file logging so `kubectl logs` works.
3686             - name: CALICO_DISABLE_FILE_LOGGING
3687               value: "true"
[root@k8s-master01 ~]# kubectl create -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created
[root@k8s-master01 calicodir]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-848c5d445f-rq4h2   1/1     Running   0          10m
calico-node-kjrcb                          1/1     Running   0          10m
calico-node-ssx5m                          1/1     Running   0          10m
calico-node-v9fgt                          1/1     Running   0          10m
coredns-7f89b7bc75-9j4rw                   1/1     Running   0          166m
coredns-7f89b7bc75-srhxf                   1/1     Running   0          166m
etcd-k8s-master01                          1/1     Running   0          166m
kube-apiserver-k8s-master01                1/1     Running   0          166m
kube-controller-manager-k8s-master01       1/1     Running   0          166m
kube-proxy-4xhms                           1/1     Running   0          163m
kube-proxy-njg9s                           1/1     Running   0          166m
kube-proxy-xfb97                           1/1     Running   0          163m
kube-scheduler-k8s-master01                1/1     Running   0          166m

8. Application Deployment and Access Verification

cat >  nginx.yaml  << "EOF"
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-web
spec:
  replicas: 2
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.19.6
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-nodeport
spec:
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30001
      protocol: TCP
  type: NodePort
  selector:
    name: nginx
EOF
# kubectl create -f nginx.yaml
replicationcontroller/nginx-web created
service/nginx-service-nodeport created
# kubectl get pods
NAME              READY   STATUS    RESTARTS   AGE
nginx-web-7lkfz   1/1     Running   0          31m
nginx-web-n4tj5   1/1     Running   0          31m
# kubectl get svc
NAME                     TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes               ClusterIP   10.1.0.1      <none>        443/TCP        30m
nginx-service-nodeport   NodePort    10.1.236.15   <none>        80:30001/TCP   10s
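As a final check, the NodePort service should answer on port 30001 of any node's IP (the address below follows this article's addressing plan):

```shell
# Fetch the nginx welcome page through the NodePort service.
curl -s http://192.168.10.160:30001 | head -n 5
```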
