Contents
1. Create three virtual machines
2. Install Docker on each virtual machine
3. Install kubelet
3.1 Installation requirements
3.2 Prerequisite setup on every server
3.3 Install kubelet, kubeadm, and kubectl on every server
4. Bootstrap the cluster with kubeadm
4.1 The master server
4.2 The node1 and node2 servers
4.3 Initialize the control-plane node
4.4 Join the worker nodes to the cluster
5. What if the token expires?
6. Install the dashboard web UI
6.1 Install
6.2 Expose the port
6.3 Access the web UI
6.4 Create an access account
6.5 Generate a token
6.6 Log in
7. Closing notes
1. Create three virtual machines
For the detailed steps, see the earlier tutorials below. The recommended approach is to install one VM first and then clone it; that is much faster.
Note: when cloning, remember to change the MAC address, IP address, UUID, and hostname. (And don't forget to save a snapshot at the end!)
安裝VMware虛擬機、Linux系統(tǒng)(CentOS7)_何蘇三月的博客-CSDN博客
克隆Linux系統(tǒng)(centos)_linux克隆_何蘇三月的博客-CSDN博客
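When you fix up a clone, the changes boil down to editing the NIC config file and resetting the hostname. Below is a sketch of that edit, run against a scratch copy in /tmp rather than the real file so it can execute anywhere; the interface name ens33, the file contents, and the addresses are all assumptions you should adapt to your own machines:

```shell
# Sketch: the edits a cloned CentOS 7 VM needs, demonstrated on a scratch
# copy. On a real clone you would edit
# /etc/sysconfig/network-scripts/ifcfg-ens33 in place.
cfg=/tmp/ifcfg-ens33
cat > "$cfg" <<'EOF'
BOOTPROTO=static
UUID=11111111-2222-3333-4444-555555555555
IPADDR=192.168.37.110
EOF

new_uuid=$(cat /proc/sys/kernel/random/uuid)         # fresh UUID for the clone
sed -i "s/^UUID=.*/UUID=$new_uuid/" "$cfg"           # replace the copied UUID
sed -i "s/^IPADDR=.*/IPADDR=192.168.37.111/" "$cfg"  # give the clone its own IP
cat "$cfg"
```

On the real machine you would also run `hostnamectl set-hostname <new-name>` and restart the network service afterwards.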
2. Install Docker on each virtual machine
See: Docker安裝、常見命令、安裝常見容器(Mysql、Redis等)_docker redis 容器_何蘇三月的博客-CSDN博客
The install command used in that tutorial was:
yum install docker-ce docker-ce-cli containerd.io
That installs the latest version by default. Here we pin the versions instead, so that the k8s installation later goes smoothly:
yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6
All other steps are unchanged.
3. Install kubelet
3.1 Installation requirements
- A compatible Linux host. The Kubernetes project provides generic instructions for Debian-based and Red Hat-based distributions, plus distributions without a package manager.
- 2 GB or more of RAM per machine (any less leaves little memory for your applications).
- 2 or more CPU cores.
- Full network connectivity between all machines in the cluster (a public or a private network both work).
  - Set firewall allow rules.
- No duplicate hostnames, MAC addresses, or product_uuids among the nodes (see the Kubernetes docs for details).
  - Give each machine a distinct hostname.
- Certain ports open on the machines (see the Kubernetes docs for details).
  - Allow traffic freely within the private network.
- Swap disabled. You MUST disable swap for the kubelet to work properly.
  - Disable it permanently.
3.2 Prerequisite setup on every server
#Set each machine's own hostname
hostnamectl set-hostname xxxx
# Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
#Turn off swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
#Let iptables see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
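The `sed -ri 's/.*swap.*/#&/' /etc/fstab` line above is what makes the swap change survive reboots: it comments out every fstab entry that mentions swap (the `&` re-inserts the matched line after the `#`). You can preview the effect on a scratch copy first; a small sketch with made-up fstab contents:

```shell
# Dry-run of the swap-disabling edit against a scratch copy of fstab.
fstab=/tmp/fstab.test
cat > "$fstab" <<'EOF'
/dev/mapper/centos-root /     xfs   defaults 0 0
/dev/mapper/centos-swap swap  swap  defaults 0 0
EOF

sed -ri 's/.*swap.*/#&/' "$fstab"  # same edit the tutorial runs on /etc/fstab
cat "$fstab"                       # root line untouched, swap line commented
```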
3.3 Install kubelet, kubeadm, and kubectl on every server
kubelet - the "plant manager", the agent that runs on every node
kubectl - the command-line window you type your commands into
kubeadm - the tool that bootstraps the cluster
# 1. First, tell yum where to download the K8S packages from
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
# 2. Install
sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes
# 3. Start kubelet
sudo systemctl enable --now kubelet
systemctl status kubelet
If you check its status, you'll see kubelet restarting every few seconds: it is in a crash loop, waiting for instructions from kubeadm. This is expected; leave it alone!
4. Bootstrap the cluster with kubeadm
4.1 The master server
Download the images each machine needs. The following only has to be run on the master:
# 1. Write a small script with a for loop that pulls everything we need
sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
kube-apiserver:v1.20.9
kube-proxy:v1.20.9
kube-controller-manager:v1.20.9
kube-scheduler:v1.20.9
coredns:1.7.0
etcd:3.4.13-0
pause:3.2
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF
# 2. Make the script executable and run it
chmod +x ./images.sh && ./images.sh
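To see what the loop actually pulls, you can strip out the `docker pull` and just print the expanded references; this runs without Docker:

```shell
# Print the full image references images.sh pulls (no docker required).
images=(
  kube-apiserver:v1.20.9
  kube-proxy:v1.20.9
  kube-controller-manager:v1.20.9
  kube-scheduler:v1.20.9
  coredns:1.7.0
  etcd:3.4.13-0
  pause:3.2
)
for imageName in "${images[@]}"; do
  echo "registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName"
done
```

After the real script runs, `docker images` should list all seven images under that registry prefix.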
4.2 The node1 and node2 servers
Worker nodes also run kube-proxy, so they need that image too. You could pull just kube-proxy, but to avoid surprises you can simply pull all of the images.
The method is exactly the same as in 4.1.
4.3 Initialize the control-plane node
1. First, on every server, add a hostname mapping for the k8s110 server (the master):
#Add the master hostname mapping on all machines; change the IP below to your own internal IP
echo "192.168.37.110 cluster-endpoint" >> /etc/hosts
2. Then, on the k8s110 server only, run the control-plane initialization:
#Initialize the control-plane node
kubeadm init \
--apiserver-advertise-address=192.168.37.110 \
--control-plane-endpoint=cluster-endpoint \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.168.50.0/24
#All the network ranges must be disjoint: --pod-network-cidr, --service-cidr, and the --apiserver-advertise-address network must not overlap with each other
If initialization fails (a common cause is IP forwarding being disabled), enable it:
sysctl -w net.ipv4.ip_forward=1
and then run the init command again.
Once you see output like the following, initialization succeeded!
This part is important: it tells you how to start using the cluster, how to join more nodes, and so on, so here is the full text:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join cluster-endpoint:6443 --token 3e54se.alzs9d1mkf30f25w \
--discovery-token-ca-cert-hash sha256:689c076e294bdbb588103a51aaa7248b8a0df34bde634a6189d311ad46a02856 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join cluster-endpoint:6443 --token 3e54se.alzs9d1mkf30f25w \
--discovery-token-ca-cert-hash sha256:689c076e294bdbb588103a51aaa7248b8a0df34bde634a6189d311ad46a02856
Now let's follow its instructions step by step!
3. Create the directory, copy the kubeconfig, and fix its ownership as instructed:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Then list all the nodes in the cluster:
#List all cluster nodes
kubectl get nodes
You'll see that k8s110 is now the master node, but its status is NotReady.
That's fine; the instructions say the next step is to install a network plugin.
4. Install a network plugin
There are several options; we'll use Calico:
curl https://docs.projectcalico.org/v3.20/manifests/calico.yaml -O
Once the download finishes, you have the calico.yaml manifest.
Important note: Calico assumes a pod CIDR of 192.168.0.0/16 by default. If you initialized the master with a different --pod-network-cidr (we used 192.168.50.0/24 above), open the manifest and change the pod CIDR setting (CALICO_IPV4POOL_CIDR) to the value you actually used.
With the manifest in place, create the resources:
#Create resources in the cluster from a manifest (from now on, this is how you create any k8s resource, not just Calico)
kubectl apply -f calico.yaml
If apply fails with a parsing error, the downloaded yaml may be corrupted (wrong line endings and the like); just download it again.
我們?nèi)绾尾榭醇翰渴鹆四男?yīng)用呢?
# 查看集群部署了哪些應(yīng)用
docker ps
# 等價于
kubectl get pods -A
# 運行中的應(yīng)用在docker里面叫容器,在k8s里面叫Pod,至于為什么,后續(xù)再講
以上,master節(jié)點就準備就緒了!
4.4 Join the worker nodes to the cluster
The success output from initializing the master included the join step:
kubeadm join cluster-endpoint:6443 --token 3e54se.alzs9d1mkf30f25w \
--discovery-token-ca-cert-hash sha256:689c076e294bdbb588103a51aaa7248b8a0df34bde634a6189d311ad46a02856
Run it as root on each of the other two servers.
If the join fails, check whether the firewall is really off; make sure it is, then run:
sysctl -w net.ipv4.ip_forward=1
and try again. Afterwards, go back to the master and check the node list.
You can also watch the state refresh once a second with watch -n 1 kubectl get pods -A.
Once every pod is Running, the K8S cluster is up!
5. What if the token expires?
The join token expires after 24 hours. If you haven't joined your workers yet, or want to add new ones later, run the following on the master to generate a fresh join command:
kubeadm token create --print-join-command
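For reference, the `--discovery-token-ca-cert-hash` part of the join command is nothing magical: it is the SHA-256 digest of the cluster CA's public key, and the kubeadm documentation shows how to recompute it with openssl. The sketch below runs that same pipeline against a throwaway self-signed certificate so it works anywhere; on the master you would point it at the real /etc/kubernetes/pki/ca.crt instead:

```shell
# Recompute a --discovery-token-ca-cert-hash style value with openssl.
# A throwaway cert stands in for /etc/kubernetes/pki/ca.crt here.
ca=/tmp/demo-ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out "$ca" -days 1 -subj "/CN=demo" 2>/dev/null

hash=$(openssl x509 -pubkey -in "$ca" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 \
  | awk '{print $NF}')
echo "sha256:$hash"
```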
6. Install the dashboard web UI
6.1 Install
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
If your network can't reach that URL, copy the manifest below into a yaml file and apply that file instead:
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.3.1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
6.2 Expose the port
kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
Change type: ClusterIP to type: NodePort.
This is the k8s equivalent of Docker mapping a container port to a port on the Linux host.
Find the port that was allocated:
kubectl get svc -A |grep kubernetes-dashboard
## On a cloud server, also open this port in the security group
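As an alternative to the interactive `kubectl edit`, the same type change can be scripted with a patch. This is a non-interactive variant I'm suggesting, not part of the original write-up; it needs the live cluster, so treat it as an ops fragment rather than something runnable here:

```shell
# Non-interactive equivalent of editing the Service by hand:
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard \
  -p '{"spec":{"type":"NodePort"}}'

# Then read back the NodePort that was allocated:
kubectl -n kubernetes-dashboard get svc kubernetes-dashboard
```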
6.3 Access the web UI
Visit https://<any-cluster-node-IP>:<port>, e.g. https://192.168.37.110:31820/
On my VM-based deployment the page wouldn't open at first, while cloud servers had no such problem.
I tried Chrome and Edge without luck, but the Huawei browser and Firefox got through (most likely the browser rejecting the dashboard's self-signed certificate).
I'll come back to a proper fix for this when I have time~
6.4 Create an access account
Create a manifest file dash-usr.yaml:
#Create the access account; put the following into a yaml file: vi dash-usr.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
Then apply it:
kubectl apply -f dash-usr.yaml
6.5 Generate a token
The dashboard logs you in with a bearer token, which you can generate with the following command:
#Get the access token
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
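The `base64decode` in that go-template matters because Kubernetes stores Secret data base64-encoded; without it you would get an unusable encoded blob. The round trip can be seen locally with a made-up token value (the token string below is purely illustrative):

```shell
# Why the template needs base64decode: Secret data is stored encoded.
token='eyJhbGciOiJSUzI1NiJ9.demo-payload.sig'    # hypothetical token text
encoded=$(printf '%s' "$token" | base64 -w0)     # how it sits in the Secret
decoded=$(printf '%s' "$encoded" | base64 -d)    # what base64decode gives you
echo "$decoded"
```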
6.6 Log in
Paste the token from the previous step into the dashboard's login page and sign in.
7. Closing notes
Note that all of these kubectl commands must be run on the master node; don't mix that up!
Also, after a reboot the k8s cluster should normally come back up on its own; some applications just need a while to start, so wait until everything is Running.
If it doesn't come back, check a few things: is the swap partition still disabled? Did the firewall come back on? Is Docker running on every node?
Don't panic when something breaks; rule things out one by one, because that is what troubleshooting is. I hit plenty of pitfalls while building this myself, which is normal, but if you follow the steps above exactly, you should be fine.
See you in the next installment!