
Deploying a Kubernetes Cluster with KubeKey


Overview: KubeKey is an efficient cluster deployment tool open-sourced by the KubeSphere community. It uses Docker as the container runtime by default, but can also work with CRI runtimes such as containerd, CRI-O, and iSula. Its etcd cluster runs independently and can be deployed separately from Kubernetes, which makes environment deployment more flexible.

1. Prepare a KubeKey virtual machine

    CPU: 4 cores, memory: 8 GB, system disk: 100 GB
	Three k8s hosts:
	192.168.5.240 master
    192.168.5.227 node1
    192.168.5.126 node2

2. Preliminary preparation

    1. Disable SELinux and firewalld
	2. Install socat and conntrack
	3. Set the environment variable: export KKZONE=cn
	4. Download the kubekey release: https://github.com/kubesphere/kubekey/releases/download/v3.0.2/kubekey-v3.0.2-linux-amd64.tar.gz
	5. Add /etc/hosts entries and set up passwordless SSH login to the machine itself
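The steps above can be sketched as a script (a minimal sketch, assuming a CentOS/RHEL-family host with yum, run as root on a live machine; adjust the package manager for other distributions):

```shell
# 1. Disable SELinux (permanent after reboot) and firewalld
setenforce 0 || true
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
systemctl disable --now firewalld

# 2. Dependencies required by KubeKey
yum install -y socat conntrack

# 3. Use the CN mirror for KubeKey downloads
export KKZONE=cn

# 4. Download and unpack the kk binary
curl -LO https://github.com/kubesphere/kubekey/releases/download/v3.0.2/kubekey-v3.0.2-linux-amd64.tar.gz
tar -zxvf kubekey-v3.0.2-linux-amd64.tar.gz

# 5. Hosts entries plus passwordless SSH to this machine itself
cat >> /etc/hosts <<'EOF'
192.168.5.240 master
192.168.5.227 node1
192.168.5.126 node2
EOF
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id root@localhost
```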

3. Deploy with KubeKey

Unpack kubekey:
tar -zxvf kubekey-v3.0.2-linux-amd64.tar.gz
  • Check the kubekey version
[root@kubekey ~]# ./kk version
kk version: &version.Info{Major:"3", Minor:"0", GitVersion:"v3.0.2", GitCommit:"1c395d22e75528d0a7d07c40e1af4830de265a23", GitTreeState:"clean", BuildDate:"2022-11-22T02:04:26Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}
  • List the supported k8s versions
[root@kubekey ~]# ./kk version --show-supported-k8s
v1.19.0
v1.19.8
v1.19.9
v1.19.15
v1.20.4
v1.20.6
v1.20.10
v1.21.0
v1.21.1
v1.21.2
v1.21.3
v1.21.4
v1.21.5
v1.21.6
v1.21.7
v1.21.8
v1.21.9
v1.21.10
v1.21.11
v1.21.12
v1.21.13
v1.21.14
v1.22.0
v1.22.1
v1.22.2
v1.22.3
v1.22.4
v1.22.5
v1.22.6
v1.22.7
v1.22.8
v1.22.9
v1.22.10
v1.22.11
v1.22.12
v1.22.13
v1.22.14
v1.22.15
v1.22.16
v1.23.0
v1.23.1
v1.23.2
v1.23.3
v1.23.4
v1.23.5
v1.23.6
v1.23.7
v1.23.8
v1.23.9
v1.23.10
v1.23.11
v1.23.12
v1.23.13
v1.23.14
v1.24.0
v1.24.1
v1.24.2
v1.24.3
v1.24.4
v1.24.5
v1.24.6
v1.24.7
v1.24.8
v1.25.0
v1.25.1
v1.25.2
v1.25.3
v1.25.4
  • Create the cluster
We deploy v1.25.3 for this cluster, because starting with v1.24, Kubernetes no longer supports Docker (dockershim) by default.
[root@kubekey ~]# ./kk create cluster --with-kubernetes v1.25.3 --container-manager containerd   ## specify the runtime; the default is docker
Wait for cluster creation to complete.
 _   __      _          _   __           
| | / /     | |        | | / /           
| |/ / _   _| |__   ___| |/ /  ___ _   _ 
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/

17:36:51 CST [GreetingsModule] Greetings
17:36:51 CST message: [kubekey]
Greetings, KubeKey!
17:36:51 CST success: [kubekey]
17:36:51 CST [NodePreCheckModule] A pre-check on nodes
17:36:52 CST success: [kubekey]
17:36:52 CST [ConfirmModule] Display confirmation form
+---------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| name    | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker | containerd | nfs client | ceph client | glusterfs client | time         |
+---------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| kubekey | y    | y    | y       | y        | y     |       |         | y         | y      |        | 1.6.10     | y          |             |                  | CST 17:36:52 |
+---------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
17:36:54 CST success: [LocalHost]
17:36:54 CST [NodeBinariesModule] Download installation binaries
17:36:54 CST message: [localhost]
downloading amd64 kubeadm v1.25.3 ...
17:36:54 CST message: [localhost]
kubeadm is existed
17:36:54 CST message: [localhost]
downloading amd64 kubelet v1.25.3 ...
17:36:55 CST message: [localhost]
kubelet is existed
17:36:55 CST message: [localhost]
downloading amd64 kubectl v1.25.3 ...
17:36:55 CST message: [localhost]
kubectl is existed
17:36:55 CST message: [localhost]
downloading amd64 helm v3.9.0 ...
17:36:56 CST message: [localhost]
helm is existed
17:36:56 CST message: [localhost]
downloading amd64 kubecni v0.9.1 ...
17:36:56 CST message: [localhost]
kubecni is existed
17:36:56 CST message: [localhost]
downloading amd64 crictl v1.24.0 ...
17:36:57 CST message: [localhost]
crictl is existed
17:36:57 CST message: [localhost]
downloading amd64 etcd v3.4.13 ...
17:36:57 CST message: [localhost]
etcd is existed
17:36:57 CST message: [localhost]
downloading amd64 containerd 1.6.4 ...
17:36:58 CST message: [localhost]
containerd is existed
17:36:58 CST message: [localhost]
downloading amd64 runc v1.1.1 ...
17:36:58 CST message: [localhost]
runc is existed
17:36:58 CST success: [LocalHost]
17:36:58 CST [ConfigureOSModule] Get OS release
17:36:58 CST success: [kubekey]
17:36:58 CST [ConfigureOSModule] Prepare to init OS
17:37:00 CST success: [kubekey]
17:37:00 CST [ConfigureOSModule] Generate init os script
17:37:00 CST success: [kubekey]
17:37:00 CST [ConfigureOSModule] Exec init os script
17:37:01 CST stdout: [kubekey]
setenforce: SELinux is disabled
Disabled
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
17:37:01 CST success: [kubekey]
17:37:01 CST [ConfigureOSModule] configure the ntp server for each node
17:37:01 CST skipped: [kubekey]
17:37:01 CST [KubernetesStatusModule] Get kubernetes cluster status
17:37:02 CST success: [kubekey]
17:37:02 CST [InstallContainerModule] Sync containerd binaries
17:37:02 CST skipped: [kubekey]
17:37:02 CST [InstallContainerModule] Sync crictl binaries
17:37:02 CST skipped: [kubekey]
17:37:02 CST [InstallContainerModule] Generate containerd service
17:37:02 CST skipped: [kubekey]
17:37:02 CST [InstallContainerModule] Generate containerd config
17:37:02 CST skipped: [kubekey]
17:37:02 CST [InstallContainerModule] Generate crictl config
17:37:02 CST skipped: [kubekey]
17:37:02 CST [InstallContainerModule] Enable containerd
17:37:02 CST skipped: [kubekey]
17:37:02 CST [PullModule] Start to pull images on all nodes
17:37:02 CST message: [kubekey]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.8
17:37:04 CST message: [kubekey]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.25.3
17:37:05 CST message: [kubekey]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.25.3
17:37:06 CST message: [kubekey]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.25.3
17:37:07 CST message: [kubekey]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.25.3
17:37:07 CST message: [kubekey]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.9.3
17:37:08 CST message: [kubekey]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
17:37:08 CST message: [kubekey]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.23.2
17:37:09 CST message: [kubekey]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.23.2
17:37:09 CST message: [kubekey]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.23.2
17:37:10 CST message: [kubekey]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.23.2
17:37:11 CST success: [kubekey]
17:37:11 CST [ETCDPreCheckModule] Get etcd status
17:37:11 CST stdout: [kubekey]
ETCD_NAME=etcd-kubekey
17:37:11 CST success: [kubekey]
17:37:11 CST [CertsModule] Fetch etcd certs
17:37:11 CST success: [kubekey]
17:37:11 CST [CertsModule] Generate etcd Certs
[certs] Using existing ca certificate authority
[certs] Using existing admin-kubekey certificate and key on disk
[certs] Using existing member-kubekey certificate and key on disk
[certs] Using existing node-kubekey certificate and key on disk
17:37:12 CST success: [LocalHost]
17:37:12 CST [CertsModule] Synchronize certs file
17:37:15 CST success: [kubekey]
17:37:15 CST [CertsModule] Synchronize certs file to master
17:37:15 CST skipped: [kubekey]
17:37:15 CST [InstallETCDBinaryModule] Install etcd using binary
17:37:17 CST success: [kubekey]
17:37:17 CST [InstallETCDBinaryModule] Generate etcd service
17:37:17 CST success: [kubekey]
17:37:17 CST [InstallETCDBinaryModule] Generate access address
17:37:17 CST success: [kubekey]
17:37:17 CST [ETCDConfigureModule] Health check on exist etcd
17:37:17 CST success: [kubekey]
17:37:17 CST [ETCDConfigureModule] Generate etcd.env config on new etcd
17:37:17 CST skipped: [kubekey]
17:37:17 CST [ETCDConfigureModule] Join etcd member
17:37:17 CST skipped: [kubekey]
17:37:17 CST [ETCDConfigureModule] Restart etcd
17:37:17 CST skipped: [kubekey]
17:37:17 CST [ETCDConfigureModule] Health check on new etcd
17:37:17 CST skipped: [kubekey]
17:37:17 CST [ETCDConfigureModule] Check etcd member
17:37:17 CST skipped: [kubekey]
17:37:17 CST [ETCDConfigureModule] Refresh etcd.env config on all etcd
17:37:18 CST success: [kubekey]
17:37:18 CST [ETCDConfigureModule] Health check on all etcd
17:37:18 CST success: [kubekey]
17:37:18 CST [ETCDBackupModule] Backup etcd data regularly
17:37:18 CST success: [kubekey]
17:37:18 CST [ETCDBackupModule] Generate backup ETCD service
17:37:18 CST success: [kubekey]
17:37:18 CST [ETCDBackupModule] Generate backup ETCD timer
17:37:19 CST success: [kubekey]
17:37:19 CST [ETCDBackupModule] Enable backup etcd service
17:37:20 CST success: [kubekey]
17:37:20 CST [InstallKubeBinariesModule] Synchronize kubernetes binaries
17:37:32 CST success: [kubekey]
17:37:32 CST [InstallKubeBinariesModule] Synchronize kubelet
17:37:32 CST success: [kubekey]
17:37:32 CST [InstallKubeBinariesModule] Generate kubelet service
17:37:32 CST success: [kubekey]
17:37:32 CST [InstallKubeBinariesModule] Enable kubelet service
17:37:33 CST success: [kubekey]
17:37:33 CST [InstallKubeBinariesModule] Generate kubelet env
17:37:33 CST success: [kubekey]
17:37:33 CST [InitKubernetesModule] Generate kubeadm config
17:37:34 CST success: [kubekey]
17:37:34 CST [InitKubernetesModule] Init cluster using kubeadm
17:37:51 CST stdout: [kubekey]
W1202 17:37:34.445440   18353 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1202 17:37:34.446807   18353 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1202 17:37:34.451446   18353 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.25.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubekey kubekey.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost] and IPs [10.233.0.1 192.168.5.30 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 11.504382 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kubekey as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node kubekey as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 90t423.8ehhtkv8domjs7no
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join lb.kubesphere.local:6443 --token 90t423.8ehhtkv8domjs7no \
	--discovery-token-ca-cert-hash sha256:58329bea84ba4ee8b87682da9fb5b1b8a8ce87ae8a5fdee702315a5fd6f52006 \
	--control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join lb.kubesphere.local:6443 --token 90t423.8ehhtkv8domjs7no \
	--discovery-token-ca-cert-hash sha256:58329bea84ba4ee8b87682da9fb5b1b8a8ce87ae8a5fdee702315a5fd6f52006
17:37:51 CST success: [kubekey]
17:37:51 CST [InitKubernetesModule] Copy admin.conf to ~/.kube/config
17:37:51 CST success: [kubekey]
17:37:51 CST [InitKubernetesModule] Remove master taint
17:37:51 CST stdout: [kubekey]
error: taint "node-role.kubernetes.io/master:NoSchedule" not found
17:37:51 CST [WARN] Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl taint nodes kubekey node-role.kubernetes.io/master=:NoSchedule-" 
error: taint "node-role.kubernetes.io/master:NoSchedule" not found: Process exited with status 1
17:37:51 CST stdout: [kubekey]
node/kubekey untainted
17:37:51 CST success: [kubekey]
17:37:51 CST [InitKubernetesModule] Add worker label
17:37:52 CST stdout: [kubekey]
node/kubekey labeled
17:37:52 CST success: [kubekey]
17:37:52 CST [ClusterDNSModule] Generate coredns service
17:37:52 CST success: [kubekey]
17:37:52 CST [ClusterDNSModule] Override coredns service
17:37:52 CST stdout: [kubekey]
service "kube-dns" deleted
17:37:54 CST stdout: [kubekey]
service/coredns created
Warning: resource clusterroles/system:coredns is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrole.rbac.authorization.k8s.io/system:coredns configured
17:37:54 CST success: [kubekey]
17:37:54 CST [ClusterDNSModule] Generate nodelocaldns
17:37:54 CST success: [kubekey]
17:37:54 CST [ClusterDNSModule] Deploy nodelocaldns
17:37:54 CST stdout: [kubekey]
serviceaccount/nodelocaldns created
daemonset.apps/nodelocaldns created
17:37:54 CST success: [kubekey]
17:37:54 CST [ClusterDNSModule] Generate nodelocaldns configmap
17:37:55 CST success: [kubekey]
17:37:55 CST [ClusterDNSModule] Apply nodelocaldns configmap
17:37:56 CST stdout: [kubekey]
configmap/nodelocaldns created
17:37:56 CST success: [kubekey]
17:37:56 CST [KubernetesStatusModule] Get kubernetes cluster status
17:37:56 CST stdout: [kubekey]
v1.25.3
17:37:56 CST stdout: [kubekey]
kubekey   v1.25.3   [map[address:192.168.5.30 type:InternalIP] map[address:kubekey type:Hostname]]
17:37:57 CST stdout: [kubekey]
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
8fa0cf5c3cd6b435d507323cc3639c33f9e4931de745018fcd043c0210e2d1d8
17:37:57 CST stdout: [kubekey]
secret/kubeadm-certs patched
17:37:57 CST stdout: [kubekey]
secret/kubeadm-certs patched
17:37:58 CST stdout: [kubekey]
secret/kubeadm-certs patched
17:37:58 CST stdout: [kubekey]
uumdk0.eruq90yy8li1v12m
17:37:58 CST success: [kubekey]
17:37:58 CST [JoinNodesModule] Generate kubeadm config
17:37:58 CST skipped: [kubekey]
17:37:58 CST [JoinNodesModule] Join control-plane node
17:37:58 CST skipped: [kubekey]
17:37:58 CST [JoinNodesModule] Join worker node
17:37:58 CST skipped: [kubekey]
17:37:58 CST [JoinNodesModule] Copy admin.conf to ~/.kube/config
17:37:58 CST skipped: [kubekey]
17:37:58 CST [JoinNodesModule] Remove master taint
17:37:58 CST skipped: [kubekey]
17:37:58 CST [JoinNodesModule] Add worker label to master
17:37:58 CST skipped: [kubekey]
17:37:58 CST [JoinNodesModule] Synchronize kube config to worker
17:37:58 CST skipped: [kubekey]
17:37:58 CST [JoinNodesModule] Add worker label to worker
17:37:58 CST skipped: [kubekey]
17:37:58 CST [DeployNetworkPluginModule] Generate calico
17:37:58 CST success: [kubekey]
17:37:58 CST [DeployNetworkPluginModule] Deploy calico
17:37:59 CST stdout: [kubekey]
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created
17:37:59 CST success: [kubekey]
17:37:59 CST [ConfigureKubernetesModule] Configure kubernetes
17:37:59 CST success: [kubekey]
17:37:59 CST [ChownModule] Chown user $HOME/.kube dir
17:38:00 CST success: [kubekey]
17:38:00 CST [AutoRenewCertsModule] Generate k8s certs renew script
17:38:00 CST success: [kubekey]
17:38:00 CST [AutoRenewCertsModule] Generate k8s certs renew service
17:38:00 CST success: [kubekey]
17:38:00 CST [AutoRenewCertsModule] Generate k8s certs renew timer
17:38:01 CST success: [kubekey]
17:38:01 CST [AutoRenewCertsModule] Enable k8s certs renew service
17:38:01 CST success: [kubekey]
17:38:01 CST [SaveKubeConfigModule] Save kube config as a configmap
17:38:01 CST success: [LocalHost]
17:38:01 CST [AddonsModule] Install addons
17:38:01 CST success: [LocalHost]
17:38:01 CST Pipeline[CreateClusterPipeline] execute successfully
Installation is complete.

Please check the result using the command:
		
	kubectl get pod -A
  • Inspect the environment
[root@kubekey ~]# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-75c594996d-wtrk9   1/1     Running   0          32s
kube-system   calico-node-48vks                          1/1     Running   0          32s
kube-system   coredns-67ddbf998c-k2fwf                   1/1     Running   0          32s
kube-system   coredns-67ddbf998c-kk6sp                   1/1     Running   0          32s
kube-system   kube-apiserver-kubekey                     1/1     Running   0          49s
kube-system   kube-controller-manager-kubekey            1/1     Running   0          46s
kube-system   kube-proxy-hjn6j                           1/1     Running   0          32s
kube-system   kube-scheduler-kubekey                     1/1     Running   0          46s
kube-system   nodelocaldns-zcrvr                         1/1     Running   0          32s

View the running containers:

[root@kubekey ~]# crictl  ps 
I1202 17:46:07.309812   28107 util_unix.go:103] "Using this endpoint is deprecated, please consider using full URL format" endpoint="/run/containerd/containerd.sock" URL="unix:///run/containerd/containerd.sock"
I1202 17:46:07.311257   28107 util_unix.go:103] "Using this endpoint is deprecated, please consider using full URL format" endpoint="/run/containerd/containerd.sock" URL="unix:///run/containerd/containerd.sock"
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
6bfd3d45bd034       ec95788d0f725       7 minutes ago       Running             calico-kube-controllers   0                   0e3d60df6f9e7
146ef8a1fa933       5185b96f0becf       7 minutes ago       Running             coredns                   0                   13edf51da4822
7a34634370e96       5185b96f0becf       7 minutes ago       Running             coredns                   0                   85830b1a14ad7
1013cf001bd21       a3447b26d32c7       7 minutes ago       Running             calico-node               0                   cbaf5547f890d
92db9c525c99a       beaaf00edd38a       8 minutes ago       Running             kube-proxy                0                   30190f94c0c22
ec3526860d5f0       5340ba194ec91       8 minutes ago       Running             node-cache                0                   3cca48e41ad39
7a4316c0488e3       6d23ec0e8b87e       8 minutes ago       Running             kube-scheduler            0                   e8b22ab1ad2a4
258487c11704d       6039992312758       8 minutes ago       Running             kube-controller-manager   0                   edfc8079077bb
5f20bcf73c4e4       0346dbd74bcb9       8 minutes ago       Running             kube-apiserver            0                   2974695f93619

4. Deploy the k8s cluster

  • Edit the config.yaml file
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: test.com
spec:
  hosts:
  - {name: master, address: 192.168.5.240, internalAddress: 192.168.5.240, privateKeyPath: ~/.ssh/id_rsa}
  - {name: node1, address: 192.168.5.227, internalAddress: 192.168.5.227, privateKeyPath: ~/.ssh/id_rsa}
  - {name: node2, address: 192.168.5.126, internalAddress: 192.168.5.126, privateKeyPath: ~/.ssh/id_rsa}
  roleGroups:
    etcd:
    - master
    control-plane:
    - master
    worker:
    - node1
    - node2
  controlPlaneEndpoint:
    domain: lb.test.com
    address: ""
    port: 6443
  kubernetes:
    version: v1.25.3
    clusterName: test.com
    containerManager: containerd
    dnsDomain: test.com
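Rather than writing this file from scratch, kk can generate a template for the target version that you then edit to match your hosts and roleGroups (assuming kk v3.x; `-f` names the output file):

```shell
# Generate a sample cluster config, then adjust hosts/roleGroups/controlPlaneEndpoint
./kk create config --with-kubernetes v1.25.3 -f config.yaml
```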
  • Edit /etc/hosts and add name resolution for the nodes:
	192.168.5.240 master
    192.168.5.227 node1
    192.168.5.126 node2
  • Set up passwordless SSH to the hosts
ssh-copy-id 192.168.5.240 (repeat for all three hosts)
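Distributing the key to all three hosts can be done in one loop (a sketch; each iteration prompts interactively for that host's root password):

```shell
for host in 192.168.5.240 192.168.5.227 192.168.5.126; do
  ssh-copy-id "root@${host}"   # installs ~/.ssh/id_rsa.pub on the remote host
done
```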
  • Create the cluster
./kk create cluster -f config.yaml
  • Inspect the cluster
[root@master ~]# kubectl get node 
NAME     STATUS   ROLES           AGE     VERSION
master   Ready    control-plane   2d19h   v1.25.3
node1    Ready    worker          2d19h   v1.25.3
node2    Ready    worker          2d19h   v1.25.3
[root@master ~]# 
[root@master ~]# kubectl get po -A 
NAMESPACE     NAME                                     READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-57bcc88b-d29ds   1/1     Running   0          2d19h
kube-system   calico-node-4dt5p                        1/1     Running   0          2d19h
kube-system   calico-node-kgmr2                        1/1     Running   0          2d19h
kube-system   calico-node-pdh8x                        1/1     Running   0          2d19h
kube-system   coredns-6d69f479b-2dc2f                  1/1     Running   0          2d19h
kube-system   coredns-6d69f479b-m8qbn                  1/1     Running   0          2d19h
kube-system   kube-apiserver-master                    1/1     Running   0          2d19h
kube-system   kube-controller-manager-master           1/1     Running   0          2d19h
kube-system   kube-proxy-ccsdp                         1/1     Running   0          2d19h
kube-system   kube-proxy-tvzbc                         1/1     Running   0          2d19h
kube-system   kube-proxy-x5fx5                         1/1     Running   0          2d19h
kube-system   kube-scheduler-master                    1/1     Running   0          2d19h
kube-system   nodelocaldns-7p9jb                       1/1     Running   0          2d19h
kube-system   nodelocaldns-qzn8f                       1/1     Running   0          2d19h
kube-system   nodelocaldns-tnwt6                       1/1     Running   0          2d19h
[root@master ~]# 

5. Deploy metrics-server (on the k8s master node)

Edit metrics-server.yaml:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:aggregated-metrics-reader
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods", "nodes"]
  verbs: ["get", "list", "watch"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  verbs:
  - get
  - list
  - watch

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
  
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
    port: 443
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      hostNetwork: true
      containers:
      - name: metrics-server
        image: eipwork/metrics-server:v0.3.7
        # command:
        # - /metrics-server
        # - --kubelet-insecure-tls
        # - --kubelet-preferred-address-types=InternalIP 
        args:
          - --cert-dir=/tmp
          - --secure-port=4443
          - --kubelet-insecure-tls=true
          - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS
        ports:
        - name: main-port
          containerPort: 4443
          protocol: TCP
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        imagePullPolicy: Always
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
      nodeSelector:
        beta.kubernetes.io/os: linux

---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/name: "Metrics-server"
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: metrics-server
  ports:
  - port: 443
    protocol: TCP
    targetPort: 4443

  • Deploy the manifest
[root@master ~]# kubectl apply -f metrics-server.yaml
  • Verify: check node resource usage
[root@master ~]# kubectl top node
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
master   121m         3%     2691Mi          18%       
node1    84m          2%     1515Mi          10%       
node2    125m         3%     1512Mi          10% 
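The `kubectl top node` output above can also feed simple scripted checks. Below is a minimal sketch (the threshold value and the captured sample are illustrative assumptions, not part of the article) that parses the same columns and flags nodes whose CPU% exceeds a threshold:

```shell
# Illustrative sketch: parse `kubectl top node`-style output and flag nodes
# whose CPU% exceeds a threshold. On a live cluster you would pipe
# `kubectl top node` directly instead of using this captured sample.
top_output='NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master   121m         3%     2691Mi          18%
node1    84m          2%     1515Mi          10%
node2    125m         3%     1512Mi          10%'

threshold=2   # CPU% threshold (hypothetical value for the demo)
flagged=$(printf '%s\n' "$top_output" | awk -v t="$threshold" '
  NR > 1 { cpu = $3; gsub(/%/, "", cpu); if (cpu + 0 > t) print $1 }')
printf '%s\n' "$flagged"
```

The same pipeline applies unchanged against a real cluster: `kubectl top node | awk 'NR > 1 { ... }'`.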

This concludes the walkthrough of deploying a Kubernetes cluster with KubeKey.

