1 k8s component overview
1.1 kube-apiserver:
The Kubernetes API server validates and configures data for API objects, including pods, services, replication controllers, and other API objects. The API server provides REST operations and serves as the front end to the cluster's shared state; all other Kubernetes components interact through this front end.
https://kubernetes.io/zh/docs/reference/command-line-tools-reference/kube-apiserver/
1.2 kube-scheduler
The Kubernetes pod scheduler, responsible for assigning Pods to eligible nodes. For each Pod in the scheduling queue, kube-scheduler determines the nodes on which it may legally be placed, based on constraints and available resources. kube-scheduler is a component with rich policies, topology awareness, and workload-specific scheduling features; it takes into account individual and collective resource requirements, quality-of-service requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, and more.
https://kubernetes.io/zh/docs/reference/command-line-tools-reference/kube-scheduler/
1.3 kube-controller-manager
The Controller Manager is the management and control center inside the cluster. It manages Nodes, Pod replicas, service endpoints (Endpoints), namespaces (Namespaces), service accounts (ServiceAccounts), and resource quotas (ResourceQuotas). When a Node unexpectedly goes down, the Controller Manager detects it promptly and runs an automated repair flow, ensuring that the Pod replicas in the cluster always stay in the desired working state.
https://kubernetes.io/zh/docs/reference/command-line-tools-reference/kube-controller-manager/
1.4 kube-proxy
The Kubernetes network proxy runs on each node. It reflects the services defined in the Kubernetes API on the node, and can do simple TCP, UDP, and SCTP stream forwarding, or round-robin TCP, UDP, and SCTP forwarding, across a set of backends. The user must create a service via the apiserver API to configure the proxy. In essence, kube-proxy implements access to Kubernetes services by maintaining network rules on the host and performing connection forwarding.
https://kubernetes.io/zh/docs/reference/command-line-tools-reference/kube-proxy/
1.5 kubelet
The agent component that runs on every worker node; it watches the pods that have been assigned to its node. Its main functions are:
report the node's status information to the master
receive instructions and create the docker containers in a Pod
prepare the data volumes a Pod needs
return the running status of pods
run container health checks on the node
https://kubernetes.io/zh/docs/reference/command-line-tools-reference/kubelet/
1.6 etcd:
etcd, developed by CoreOS, is currently the default key-value data store used by Kubernetes. It holds all cluster data and supports clustered, distributed operation; in production, a regular backup mechanism should be provided for the etcd data.
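As an illustration of such a backup mechanism, a point-in-time snapshot can be taken with etcdctl roughly as follows (a sketch: the endpoint and the certificate paths assume the default kubeadm layout, and the target path is my own choice):

```shell
# Save a snapshot of etcd using the v3 API; cert paths are the kubeadm defaults
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /var/backups/etcd-snapshot-$(date +%F).db
```

Running this from cron or a systemd timer on an etcd member gives the periodic backups mentioned above.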
#Core components:
apiserver: the single entry point for resource operations; provides authentication, authorization, access control, API registration and discovery
controller manager: maintains the state of the cluster, e.g. fault detection, auto scaling, rolling updates
scheduler: schedules resources, placing Pods onto the appropriate machines according to the configured scheduling policies
kubelet: maintains the container lifecycle, and also manages volumes (CVI) and networking (CNI);
Container runtime: image management and the actual running of Pods and containers (CRI);
kube-proxy: provides in-cluster service discovery and load balancing for Services;
etcd: stores the state of the entire cluster
#Optional components:
kube-dns: provides DNS service for the whole cluster
Ingress Controller: provides external access for services
Heapster: provides resource monitoring
Dashboard: provides a GUI
Federation: provides clusters spanning availability zones
Fluentd-elasticsearch: provides cluster log collection, storage, and querying
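On a kubeadm-built cluster, most of the core components above run as pods in the kube-system namespace, so after deployment they can be inspected directly with kubectl (a sketch; requires a working cluster and a configured kubeconfig):

```shell
# List the control-plane and add-on pods (apiserver, scheduler, controller-manager, etcd, kube-proxy, coredns, ...)
kubectl get pods -n kube-system -o wide
# Check node status as reported by the kubelets
kubectl get nodes
```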
2 k8s installation and deployment:
2.1: Installation method:
2.1.1: kubeadm:
Use kubeadm, the deployment tool provided by the Kubernetes project, for automated installation. Install docker and the other components on the master and node hosts, then run the initialization; the control-plane services on the management side and the services on the nodes all run as pods.
2.1.2: Installation notes:
Notes:
disable swap
disable selinux
disable iptables
tune kernel parameters and resource-limit parameters
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
#packets forwarded by a layer-2 bridge will be matched by the host's iptables FORWARD rules
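The two bridge parameters above require the br_netfilter kernel module and can be persisted with sysctl. A minimal sketch (the file names under /etc/sysctl.d/ and /etc/modules-load.d/ are my own choice):

```shell
# Load br_netfilter now and on every boot
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
# Persist the bridge/netfilter sysctl settings and apply them
cat > /etc/sysctl.d/99-kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
```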
2.2: Deployment process:
2.2.1: Steps:
1. Prepare the base environment
2. Deploy harbor and a highly available haproxy reverse proxy, making the API access entry point of the control nodes highly available
3. Install the specified versions of kubeadm, kubelet, kubectl, and docker on all master nodes
4. Install the specified versions of kubeadm, kubelet, and docker on all node hosts; kubectl is optional on nodes, depending on whether you need to run kubectl commands on them for cluster and pod management
5. Run the kubeadm init initialization command on a master node
6. Verify the master node status
7. On each node, use the kubeadm command to join it to the k8s master (requires the token authentication generated on the master)
8. Verify the node status
9. Create pods and test network connectivity
10. Deploy the Dashboard web service
2.2.2 Base environment preparation:
Role | Hostname | IP address |
---|---|---|
k8s-master1 | k8s-master1 | 192.168.100.31 |
k8s-master2 | k8s-master2 | 192.168.100.32 |
k8s-master3 | k8s-master3 | 192.168.100.33 |
k8s-haproxy-1 | k8s-ha1 | 192.168.100.34 |
k8s-haproxy-2 | k8s-ha2 | 192.168.100.35 |
k8s-harbor | k8s-harbor | 192.168.100.36 |
k8s-node1 | k8s-node1 | 192.168.100.37 |
k8s-node2 | k8s-node2 | 192.168.100.38 |
k8s-node3 | k8s-node3 | 192.168.100.39 |
2.3: Highly available reverse proxy:
Build a highly available reverse-proxy environment based on keepalived and HAProxy, providing an HA reverse proxy for the k8s apiserver.
2.3.1: keepalived installation and configuration:
Install and configure keepalived, and test failover of the VIP.
Install and configure keepalived on node 1:
root@k8s-ha1:~# apt install keepalived
root@k8s-ha1:~# vim /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    garp_master_delay 10
    smtp_alert
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.100.188 dev eth0 label eth0:1
    }
}
root@k8s-ha1:~# systemctl restart keepalived
root@k8s-ha1:~# systemctl enable keepalived
Install and configure keepalived on node 2:
root@k8s-ha2:~# apt install keepalived
root@k8s-ha2:~# cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/keepalived.conf
root@k8s-ha2:~# vim /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    garp_master_delay 10
    smtp_alert
    virtual_router_id 51
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.100.188 dev eth0 label eth0:1
    }
}
root@k8s-ha2:~# systemctl restart keepalived
root@k8s-ha2:~# systemctl enable keepalived
root@k8s-ha2:~# ifconfig eth0:1
eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.100.188 netmask 255.255.255.255 broadcast 0.0.0.0
ether 00:0c:29:51:ba:18 txqueuelen 1000 (Ethernet)
2.3.2: haproxy installation and configuration:
Compile and install with a script; the install script and packages used here are linked at the end of this article:
Install script:
#!/bin/bash
FILE_DIR=`pwd`
LUA_PKG="lua-5.3.5.tar.gz"
LUA_DIR="lua-5.3.5"
HAPROXY_PKG="haproxy-2.0.15.tar.gz"
HAPROXY_DIR="haproxy-2.0.15"
HAPROXY_VER="2.0.15"

function install_system_package(){
  grep "Ubuntu" /etc/issue &> /dev/null
  if [ $? -eq 0 ];then
    apt update
    apt install iproute2 ntpdate make tcpdump telnet traceroute nfs-kernel-server nfs-common lrzsz tree openssl libssl-dev libpcre3 libpcre3-dev zlib1g-dev ntpdate tcpdump telnet traceroute gcc openssh-server lrzsz tree openssl libssl-dev libpcre3 libpcre3-dev zlib1g-dev ntpdate tcpdump telnet traceroute iotop unzip zip libreadline-dev libsystemd-dev -y
  fi
  grep "Kernel" /etc/issue &> /dev/null
  if [ $? -eq 0 ];then
    yum install vim iotop bc gcc gcc-c++ glibc glibc-devel pcre pcre-devel openssl openssl-devel zip unzip zlib-devel net-tools lrzsz tree ntpdate telnet lsof tcpdump wget libevent libevent-devel bc systemd-devel bash-completion traceroute psmisc -y
  fi
}

function install_lua(){
  cd ${FILE_DIR} && tar xvf ${LUA_PKG} && cd ${LUA_DIR} && make linux test
}

function install_haproxy(){
  if [ -d /etc/haproxy ];then
    echo "HAProxy is already installed, exiting the install process!"
  else
    mkdir -p /var/lib/haproxy /etc/haproxy
    cd ${FILE_DIR} && tar xvf ${HAPROXY_PKG} && cd ${HAPROXY_DIR} && make ARCH=x86_64 TARGET=linux-glibc USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 USE_CPU_AFFINITY=1 USE_LUA=1 LUA_INC=/usr/local/src/lua-5.3.5/src/ LUA_LIB=/usr/local/src/lua-5.3.5/src/ PREFIX=/apps/haproxy && make install PREFIX=/apps/haproxy && cp haproxy /usr/sbin/
    \cp ${FILE_DIR}/haproxy.cfg /etc/haproxy/haproxy.cfg
    \cp ${FILE_DIR}/haproxy.service /lib/systemd/system/haproxy.service
    systemctl daemon-reload && systemctl restart haproxy && systemctl enable haproxy
    killall -0 haproxy
    if [ $? -eq 0 ];then
      echo "HAProxy ${HAPROXY_VER} installed successfully!" && echo "Exiting the install process!" && sleep 1
    else
      echo "HAProxy ${HAPROXY_VER} install failed!" && echo "Exiting the install process!" && sleep 1
    fi
  fi
}

main(){
  install_system_package
  install_lua
  install_haproxy
}
main
Install and configure haproxy on node 1:
root@k8s-ha1:~# vim /etc/haproxy/haproxy.cfg
timeout client 300000ms
timeout server 300000ms
listen stats
    mode http
    bind 0.0.0.0:9999
    stats enable
    log global
    stats uri /haproxy-status
    stats auth haadmin:123456
listen k8s-6443
    bind 192.168.100.188:6443
    mode tcp
    balance roundrobin
    log global
    server 192.168.100.31 192.168.100.31:6443 check inter 3000 fall 2 rise 5
    server 192.168.100.32 192.168.100.32:6443 check inter 3000 fall 2 rise 5
    server 192.168.100.33 192.168.100.33:6443 check inter 3000 fall 2 rise 5
root@k8s-ha1:~# systemctl enable haproxy
root@k8s-ha1:~# systemctl restart haproxy
Node 2 is installed and configured the same way as node 1; the configuration is identical.
2.4: harbor
Installing harbor is optional; decide for yourself. During use you can push images to harbor and have the other nodes pull from harbor, which saves time. A detailed installation guide will be in the next blog post!
2.5 Install kubeadm and related components:
Install kubeadm, kubelet, kubectl, docker, and the other components on the master and node hosts; the load-balancer servers do not need them.
2.5.1 Install docker with a one-step script
The version installed is 19.03.15; the package and script files are linked at the end of this article!
#!/bin/bash
DOCKER_FILE="docker-19.03.15.tgz"
DOCKER_DIR="/data/docker"
#wget https://download.docker.com/linux/static/stable/x86_64/docker-19.03.15.tgz
install_docker(){
mkdir -p /data/docker
mkdir -p /etc/docker
tar xvf $DOCKER_FILE -C $DOCKER_DIR
cd $DOCKER_DIR
cp docker/* /usr/bin/
#Create the containerd service file and start the service
cat >/etc/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target
[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=1048576
TasksMax=infinity
OOMScoreAdjust=-999
[Install]
WantedBy=multi-user.target
EOF
systemctl enable --now containerd.service
#Prepare the docker service file
cat > /etc/systemd/system/docker.service <<EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket containerd.service
[Service]
Type=notify
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP \$MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
EOF
#Prepare the docker socket file
cat > /etc/systemd/system/docker.socket <<EOF
[Unit]
Description=Docker Socket for the API
[Socket]
ListenStream=/var/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
[Install]
WantedBy=sockets.target
EOF
groupadd docker
cat >/etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"registry-mirrors": [
"https://fe4wv34b.mirror.aliyuncs.com",
"https://docker.mirrors.ustc.edu.cn",
"http://hub-mirror.c.163.com"
],
"max-concurrent-downloads": 10,
"log-driver": "json-file",
"log-level": "warn",
"log-opts": {
"max-size": "10m",
"max-file": "3"
},
"data-root": "/var/lib/docker"
}
EOF
#Start docker
systemctl enable --now docker.socket && systemctl enable --now docker.service
}
install_docker
2.5.2: Install kubelet, kubeadm, and kubectl on all nodes
Configure the Alibaba Cloud repository address on all hosts and install the components; kubectl is optional on the node hosts.
Configure the Alibaba Cloud mirror of the kubernetes apt source (used to install the kubelet, kubeadm, and kubectl commands)
https://developer.aliyun.com/mirror/kubernetes?spm=a2c6h.13651102.0.0.3e221b11Otippu
apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
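Note that the command above installs the latest packaged release. Since the steps below assume v1.20.5, you may want to pin the versions explicitly; a sketch (the package version strings follow the repository's VERSION-00 convention):

```shell
# Check which package versions the repository offers
apt-cache madison kubeadm
# Install a pinned version and hold it so routine apt upgrades don't move it
apt-get install -y kubeadm=1.20.5-00 kubelet=1.20.5-00 kubectl=1.20.5-00
apt-mark hold kubeadm kubelet kubectl
```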
Verify the installation:
Then run the kubeadm init initialization command on a master node.
2.6: Run the kubeadm init initialization command on the master node
2.6.1: Using the kubeadm command:
https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/ #command options and help
Available Commands:
alpha #kubeadm commands that are still in testing
completion #bash command completion; requires bash-completion to be installed
# mkdir /data/scripts -p
# kubeadm completion bash > /data/scripts/kubeadm_completion.sh
# source /data/scripts/kubeadm_completion.sh
# vim /etc/profile
source /data/scripts/kubeadm_completion.sh
config #manage the kubeadm cluster configuration, which is kept in a ConfigMap in the cluster
#kubeadm config print init-defaults
help Help about any command
init #initialize a Kubernetes control plane
join #join a node to an existing k8s master
reset #revert the changes made to the system by kubeadm init or kubeadm join
token #manage tokens
upgrade #upgrade the k8s version
version #show version information
2.6.2: Overview of the kubeadm init command:
https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/ #command usage
https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/ #cluster initialization:
root@docker-node1:~# kubeadm init --help
## --apiserver-advertise-address string #the local IP address the K8S API Server will listen on
## --apiserver-bind-port int32 #the port the API Server binds to, default 6443
--apiserver-cert-extra-sans stringSlice #optional extra Subject Alternative Names for the API Server serving certificate; can be IP addresses or DNS names
--cert-dir string #path where certificates are stored, default /etc/kubernetes/pki
--certificate-key string #a key used to encrypt the control-plane certificates in the kubeadm-certs Secret
--config string #path to a kubeadm configuration file
## --control-plane-endpoint string #a stable IP address or DNS name for the control plane, i.e. a long-lived, highly available VIP or domain name; k8s multi-master HA is built on this option
--cri-socket string #path of the CRI (Container Runtime Interface) socket to connect to; if empty, kubeadm tries to auto-detect it; "use this option only if you have more than one CRI installed or a non-standard CRI socket"
--dry-run #don't apply any changes, just print what would be done; in effect a test run
--experimental-kustomize string #path to the kustomize patches for the static pod manifests
--feature-gates string #a set of key=value pairs describing feature gates; options are: IPv6DualStack=true|false (ALPHA - default=false)
## --ignore-preflight-errors strings #errors that may be ignored during the preflight checks, e.g. swap; 'all' ignores everything
## --image-repository string #set an image repository, default k8s.gcr.io
## --kubernetes-version string #specify the k8s version to install, default stable-1
--node-name string #specify the node name
## --pod-network-cidr #set the pod IP address range
## --service-cidr #set the service network address range
## --service-dns-domain string #set the k8s internal domain name, default cluster.local; the corresponding DNS service (kube-dns/coredns) resolves records generated in this domain
--skip-certificate-key-print #don't print the key used for certificate encryption
--skip-phases strings #phases to skip
--skip-token-print #skip printing the token
--token #specify the token
--token-ttl #token expiry time, default 24 hours; 0 means never expire
--upload-certs #upload the control-plane certificates
#Global options:
--add-dir-header #if true, add the log directory to the log header
--log-file string #if non-empty, use this log file
--log-file-max-size uint #maximum size of the log file in megabytes, default 1800; 0 means no limit
--rootfs #the host's root path, i.e. an absolute path
--skip-headers #if true, don't show header prefixes in the log lines
--skip-log-headers #if true, don't show headers in the log lines
2.6.3: Prepare the images:
[k8s-master1 root ~]# kubeadm config images list --kubernetes-version v1.20.5
k8s.gcr.io/kube-apiserver:v1.20.5
k8s.gcr.io/kube-controller-manager:v1.20.5
k8s.gcr.io/kube-scheduler:v1.20.5
k8s.gcr.io/kube-proxy:v1.20.5
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
2.6.4: Download the images on the master nodes:
It is recommended to download the images on the master nodes in advance to shorten the install wait. The images come from Google's registry by default, which cannot be reached directly from mainland China, but they can be pulled in advance from the Alibaba Cloud registry, which avoids k8s deployment failures caused by image-download problems later on.
[k8s-master1 root ~]# cat images-download.sh
#!/bin/bash
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
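If the images also need to carry their original k8s.gcr.io names (e.g. so kubeadm finds them without the --image-repository option), the mirrored names can be mapped back with a small retag loop. A sketch that only prints the docker tag commands as a dry run; drop the echo to actually run them:

```shell
#!/bin/bash
# Map each mirrored image name back to its k8s.gcr.io name (dry run: commands are echoed)
images="
kube-apiserver:v1.20.5
kube-controller-manager:v1.20.5
kube-scheduler:v1.20.5
kube-proxy:v1.20.5
pause:3.2
etcd:3.4.13-0
coredns:1.7.0
"
for img in $images; do
  echo docker tag "registry.cn-hangzhou.aliyuncs.com/google_containers/${img}" "k8s.gcr.io/${img}"
done
```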
2.6.5: Verify the current images:
2.7 Highly available master initialization:
A highly available VIP is provided by keepalived, with haproxy reverse-proxying kube-apiserver behind it; management requests to kube-apiserver are then forwarded to multiple k8s masters, making the management side highly available.
2.7.1: Initializing the HA master from the command line:
The initialization command; adjust it to your own cluster design, as it differs slightly between environments:
kubeadm init --apiserver-advertise-address=192.168.100.31 --control-plane-endpoint=192.168.100.188 --apiserver-bind-port=6443 --kubernetes-version=v1.20.5 --pod-network-cidr=10.100.0.0/16 --service-cidr=10.200.0.0/16 --service-dns-domain=test.local --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --ignore-preflight-errors=swap
2.7.2: Initializing the HA master from a file:
# kubeadm config print init-defaults #print the default initialization configuration
# kubeadm config print init-defaults > kubeadm-init.yaml #write the default configuration to a file
# cat kubeadm-init.yaml #the initialization file after editing
root@k8s-master1:~# cat kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.100.31
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master1.example.local
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.100.188:6443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.5
networking:
  dnsDomain: jiege.local
  podSubnet: 10.100.0.0/16
  serviceSubnet: 10.200.0.0/16
scheduler: {}
root@k8s-master1:~# kubeadm init --config kubeadm-init.yaml #initialize the k8s master from the file
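Before the real initialization, the edited file can be exercised with the --dry-run option described in 2.6.2, which validates the configuration and prints the planned actions without changing the system:

```shell
# Test run: validate kubeadm-init.yaml and print what would be done
kubeadm init --config kubeadm-init.yaml --dry-run
```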
2.8: Configure the kube-config file and the network component:
2.8.1: kube-config file:
The kube-config file contains the kube-apiserver address and the related authentication information.
[k8s-master1 root ~]# mkdir -p $HOME/.kube
[k8s-master1 root ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[k8s-master1 root ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[k8s-master1 root ~]# kubectl get node
Deploy the flannel network component; an internet connection is needed for the download. The kube-flannel.yml file is linked at the end of this article, and I have also exported the images flannel uses (also linked at the end), so you can import them into docker yourself:
[k8s-master1 root ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[k8s-master1 root ~]# vim kube-flannel.yml
Change Network to 10.100.0.0/16; it must match the value used at initialization.
[k8s-master1 root ~]# kubectl apply -f kube-flannel.yml
Verify the master node status; this takes a minute or two:
2.8.2: Generate certificates on the current master for adding new control-plane nodes:
[k8s-master1 root ~]# kubeadm init phase upload-certs --upload-certs
2.9: Add nodes to the k8s cluster:
Add the remaining master nodes and the node hosts to the k8s cluster.
2.9.1: Master node 2:
Use the join command printed by the initialization in 2.7.1, together with the certificate key generated in 2.8.2:
kubeadm join 192.168.100.188:6443 --token g0v6kt.h3tdcd4uzbarpngk \
--discovery-token-ca-cert-hash sha256:676f6d36823d8c872d6f4831326c3b695999de602c41b319ef744c7ebf201a07 \
--control-plane --certificate-key dcb213158115e9f9fc97c03655527ff8b56e6e628c603bc616940320fb83243b
2.9.2: Master node 3:
Use the join command printed by the initialization in 2.7.1, together with the certificate key generated in 2.8.2:
kubeadm join 192.168.100.188:6443 --token g0v6kt.h3tdcd4uzbarpngk \
--discovery-token-ca-cert-hash sha256:676f6d36823d8c872d6f4831326c3b695999de602c41b319ef744c7ebf201a07 \
--control-plane --certificate-key dcb213158115e9f9fc97c03655527ff8b56e6e628c603bc616940320fb83243b
2.9.3: Add the node hosts:
Every node that is to join the k8s master cluster needs docker, kubeadm, and kubelet installed, so run the docker/kubeadm/kubelet installation steps on each of them.
kubeadm join 192.168.100.188:6443 --token g0v6kt.h3tdcd4uzbarpngk \
--discovery-token-ca-cert-hash sha256:676f6d36823d8c872d6f4831326c3b695999de602c41b319ef744c7ebf201a07
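The join token expires after 24 hours by default (see --token-ttl above). If it has expired by the time a node is added, a fresh join command can be generated on a master; a sketch:

```shell
# Print a complete kubeadm join command using a newly created token
kubeadm token create --print-join-command
# For joining an additional control-plane node, also regenerate the certificate key
kubeadm init phase upload-certs --upload-certs
```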
2.9.4: Verify the current node status:
Each node joins the master automatically, downloads the images, and starts flannel, until the node is finally shown as Ready on the master.
2.9.5: Create a container on k8s and test the internal network:
2.9.6: Verify external network access
3 Deploy the dashboard:
https://github.com/kubernetes/dashboard/releases/tag/v2.7.0
3.1: Deploy dashboard v2.7.0:
[k8s-master1 root ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
[k8s-master1 root ~]# mv recommended.yaml dashboard-2.7.0.yaml
[k8s-master1 root ~]# vim dashboard-2.7.0.yaml
In dashboard-2.7.0.yaml, edit the kubernetes-dashboard Service so it is exposed on a fixed port: set the Service type to NodePort and add nodePort: 30002 to its port definition.
[k8s-master1 root ~]# vim admin-user.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
[k8s-master1 root ~]# kubectl apply -f dashboard-2.7.0.yaml -f admin-user.yaml
Verify that port 30002 is open:
3.2: Access the dashboard:
Open https://192.168.100.31:30002
3.3: Get the login token:
[k8s-master1 root ~]# kubectl get secret -A | grep admin
kubernetes-dashboard admin-user-token-m74vn kubernetes.io/service-account-token 3 4m41s
[k8s-master1 root ~]# kubectl describe secret admin-user-token-m74vn -n kubernetes-dashboard
Name: admin-user-token-m74vn
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
kubernetes.io/service-account.uid: d9834b25-0d89-4675-9fe7-091da4b3da44
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1066 bytes
namespace: 20 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6ImtCdVI2LTN3Rm51N3ZCcmV0Z29UdHR5eTFZZDlNc1FqMUR4OWxkN1JiTzgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLW03NHZuIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJkOTgzNGIyNS0wZDg5LTQ2NzUtOWZlNy0wOTFkYTRiM2RhNDQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.iHaYFnhfB_NJLxq8Cke_eCJkkYiaozCAkSJq3knlV3MS6sqB5NKA77DBWgEWiKzsr12K4JB4lfTx_mtJOY1959z7iPY3_xKaNsSy90sXV4N-8_w0R8WsS_u9rmdJpgculFrw9bEp7QZPTj39Mmqx9yjYsmeLPbInVnRi585sxl9fwi2sxJ4K5PSsEqyundoP_2lAonB5BQ_zARfE8MvK13M4C69hNEfENtpOvyIjEKO4UsaT2g4Tl8Cn0XoVeL6-MTU7_AUmqGC_SlW4ssguGI7jS_GgOwBzVmFjz7y_j-8VMWat-5UL7kyvpH9BAsfJkALzaQn1DHOT4xZtVOUKfw
3.4: The dashboard UI:
Key point: the files and installation packages used during this build:
Link: https://pan.baidu.com/s/1vlyPA-VFMERlcXvZhX_GLQ?pwd=0525 extraction code: 0525
If you run into problems during the build, you can contact me on qq: 2573522468