0. Introduction
Starting with version 1.24, Kubernetes no longer supports Docker directly (the dockershim was removed), though with extra configuration (e.g. cri-dockerd) a post-1.24 cluster can still drive Docker. Docker itself calls containerd under the hood anyway, so rather than going Kubernetes → Docker → containerd, it makes more sense for Kubernetes to call containerd directly and avoid the extra overhead.
Besides containerd, another popular container runtime is podman, but podman's official install docs either install online via a package manager or download a pile of dependencies and compile from source, which is awkward in an offline intranet. containerd's release package is a static binary tarball that works right after extraction, which is much more convenient offline.
This article deploys a three-node cluster plus a Harbor image registry from binaries in an offline intranet environment. Three apiservers are deployed behind an nginx reverse proxy to improve master availability (add keepalived on top if you need stronger high availability).
Software versions:
Name | Version | Notes |
---|---|---|
containerd | cri-containerd-cni-1.7.2-linux-amd64 | Container runtime |
harbor | 2.8.2 | Container image registry |
etcd | 3.4.24 | Key-value store |
kubernetes | 1.26.6 | Container orchestration system |
nginx | 1.25.1 | Load balancer / reverse proxy for the apiservers |
Server inventory:
IP | OS | Hardware | Hostname | Role |
---|---|---|---|---|
192.168.3.31 | Debian 11.6 amd64 | 4C4G | k8s31 | nginx+etcd+master+node |
192.168.3.32 | Debian 11.6 amd64 | 4C4G | k8s32 | etcd+master+node |
192.168.3.33 | Debian 11.6 amd64 | 4C4G | k8s33 | etcd+master+node |
192.168.3.43 | Debian 11.6 amd64 | 4C4G | (none) | harbor, internal domain registry.atlas.cn |
1. System initialization
Run the initialization steps below on all three Kubernetes nodes, adjusting parameters to your environment.
- Set the hostnames
# on 192.168.3.31
hostnamectl set-hostname k8s31
# on 192.168.3.32
hostnamectl set-hostname k8s32
# on 192.168.3.33
hostnamectl set-hostname k8s33
- Edit /etc/hosts and add the following entries.
192.168.3.31 k8s31
192.168.3.32 k8s32
192.168.3.33 k8s33
- Configure time synchronization
# 1. install chrony
apt install -y chrony
# 2. add the internal NTP server. On the public internet you can use Alibaba Cloud's NTP server instead: ntp.aliyun.com
echo 'server 192.168.3.41 iburst' > /etc/chrony/sources.d/custom-ntp-server.sources
# 3. start the service
systemctl start chrony
# if chrony is already running, hot-reload the sources instead: chronyc reload sources
- Disable swap. If a swap partition was created when installing the OS, it must be turned off; see the fstab note after the command.
swapoff -a
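Note that swapoff -a only lasts until the next reboot. To keep swap off permanently, also comment the swap entry out of /etc/fstab; a minimal sketch, assuming the swap line is not already commented:
# comment out the swap entry so it is not re-enabled at boot
sed -ri 's/^([^#].*\sswap\s)/#\1/' /etc/fstab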
- Load kernel modules
# add the module list
cat << EOF > /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
# load them immediately
modprobe overlay
modprobe br_netfilter
# verify; no output means the module is not loaded
lsmod | grep br_netfilter
- Configure kernel parameters
# 1. add the config file
cat << EOF > /etc/sysctl.d/k8s-sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
user.max_user_namespaces = 28633
vm.swappiness = 0
EOF
# 2. apply the settings
sysctl -p /etc/sysctl.d/k8s-sysctl.conf
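You can spot-check that the values took effect (both should print 1):
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables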
- Enable IPVS. Create a systemd unit so the required modules are loaded automatically at boot.
# 1. install dependencies
apt install -y ipset ipvsadm
# 2. create a script listing the modules (any path works)
mkdir -p /root/scripts
tee /root/scripts/k8s.sh <<EOF
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
# 3. create a systemd unit that runs the script at boot
cat << EOF > /etc/systemd/system/myscripts.service
[Unit]
Description=Run a Custom Script at Startup
After=default.target
[Service]
ExecStart=/bin/sh /root/scripts/k8s.sh
[Install]
WantedBy=default.target
EOF
# 4. reload systemd and enable the unit
systemctl daemon-reload
systemctl enable myscripts
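To confirm the setup works without waiting for a reboot, run the script once by hand and check that the modules are present:
sh /root/scripts/k8s.sh
lsmod | grep -e ip_vs -e nf_conntrack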
2. Deploy the Harbor image registry
Deploying the cluster requires pulling some images first, so Harbor is installed on a separate server outside the cluster. The official install script uses Docker, so install Docker on the Harbor node first; see 博客園 - linux離線安裝docker與compose for the steps.
The Harbor installation is covered in 博客園 - centos離線安裝harbor; the outline below covers the essentials.
The Harbor GitHub releases page (GitHub - goharbor/harbor/releases) provides an offline installer package; download and extract it in advance.
- Create a TLS certificate
mkdir certs
# create the server key harbor.key
openssl genrsa -des3 -out harbor.key 2048
# enter and confirm a passphrase; anything works, but remember it for the next steps
# create the certificate signing request harbor.csr
openssl req -new -key harbor.key -out harbor.csr
# enter the key passphrase, then press Enter through the remaining prompts
# back up the server key
cp harbor.key harbor.key.org
# strip the passphrase from the key
openssl rsa -in harbor.key.org -out harbor.key
# enter the key passphrase
# create a certificate valid for ten years from today: harbor.crt
openssl x509 -req -days 3650 -in harbor.csr -signkey harbor.key -out harbor.crt
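This certificate carries no Subject Alternative Name, which is why the containerd configuration later sets insecure_skip_verify = true for this registry. If you would rather have clients verify the certificate, here is a sketch that replaces the steps above with a single SAN-enabled self-signed certificate (requires OpenSSL 1.1.1+; the names assume the registry is reached as registry.atlas.cn / 192.168.3.43):
openssl req -x509 -newkey rsa:2048 -nodes -keyout harbor.key -out harbor.crt \
  -days 3650 -subj "/CN=registry.atlas.cn" \
  -addext "subjectAltName=DNS:registry.atlas.cn,IP:192.168.3.43"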
- Edit harbor.yml; only the modified entries are listed here. The data and log directories are customized.
hostname: 192.168.3.43
certificate: /home/atlas/apps/harbor/certs/harbor.crt
private_key: /home/atlas/apps/harbor/certs/harbor.key
# admin login password
harbor_admin_password: Harbor2023
# data volume directory
data_volume: /home/atlas/apps/harbor/data
# log directory (nested under log.local in harbor.yml)
location: /home/atlas/apps/harbor/logs/
- Run the install script
./install.sh
- Open https://192.168.3.43 in a browser and check that you can log in to Harbor.
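Since Docker is already installed on this node, a command-line smoke test is also possible. A sketch, assuming the certificate created above in certs/ (alternatively, add the address to insecure-registries in /etc/docker/daemon.json):
# trust the self-signed certificate for this registry
mkdir -p /etc/docker/certs.d/192.168.3.43
cp certs/harbor.crt /etc/docker/certs.d/192.168.3.43/ca.crt
# log in with the admin password from harbor.yml
docker login 192.168.3.43 -u admin -p Harbor2023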
3. Install containerd
- Download the binary package from GitHub: https://github.com/containerd/containerd/releases
- Extract the archive
tar xf cri-containerd-cni-1.7.2-linux-amd64.tar.gz -C /
- Generate the containerd config file
mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml
- Edit /etc/containerd/config.toml and change the following:
# change the data root directory
root = "/home/apps/containerd"
# for distros that use systemd as the init system, upstream recommends systemd as the container cgroup driver
# change false to true
SystemdCgroup = true
# change the pause image address; an internal domain is used here
sandbox_image = "registry.atlas.cn/public/pause:3.9"
# connection settings for the private harbor
[plugins."io.containerd.grpc.v1.cri".registry.configs."registry.atlas.cn"]
[plugins."io.containerd.grpc.v1.cri".registry.configs."registry.atlas.cn".tls]
insecure_skip_verify = true
[plugins."io.containerd.grpc.v1.cri".registry.configs."registry.atlas.cn".auth]
username = "admin"
password = "Harbor2023"
- Reload systemd and start containerd
systemctl daemon-reload
systemctl start containerd
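To verify that containerd is healthy and can reach Harbor, you can use crictl, which ships in the cri-containerd-cni bundle. A sketch, assuming the default socket path:
cat << EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
EOF
crictl info
# pull a test image through the registry credentials configured above
crictl pull registry.atlas.cn/public/pause:3.9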
4. Generate a CA certificate
Both the Kubernetes and etcd clusters below use this CA. If your organization runs a central certificate authority, simply use certificates issued by it; otherwise a self-signed CA is enough, which is what we create here.
# generate the private key ca.key
openssl genrsa -out ca.key 2048
# generate the root certificate ca.crt from the key
# /CN is the master's hostname or IP address
# days is the certificate validity period
openssl req -x509 -new -nodes -key ca.key -subj "/CN=192.168.3.31" -days 36500 -out ca.crt
# copy the CA files to /etc/kubernetes/pki
mkdir -p /etc/kubernetes/pki
cp ca.crt ca.key /etc/kubernetes/pki/
5. Deploy the etcd cluster
Deploy a three-node etcd cluster with HTTPS-encrypted communication between members. Download the etcd release tarball from the official site, extract it, and copy the etcd and etcdctl binaries into /usr/local/bin.
- Create etcd_ssl.cnf. The IP addresses are the etcd nodes.
[ req ]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[ req_distinguished_name ]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[ alt_names ]
IP.1 = 192.168.3.31
IP.2 = 192.168.3.32
IP.3 = 192.168.3.33
- Create the etcd server certificate
openssl genrsa -out etcd_server.key 2048
openssl req -new -key etcd_server.key -config etcd_ssl.cnf -subj "/CN=etcd-server" -out etcd_server.csr
openssl x509 -req -in etcd_server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 3650 -extensions v3_req -extfile etcd_ssl.cnf -out etcd_server.crt
- Create the etcd client certificate
openssl genrsa -out etcd_client.key 2048
openssl req -new -key etcd_client.key -config etcd_ssl.cnf -subj "/CN=etcd-client" -out etcd_client.csr
openssl x509 -req -in etcd_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 3650 -extensions v3_req -extfile etcd_ssl.cnf -out etcd_client.crt
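You can confirm the SAN entries made it into the server certificate before distributing it:
openssl x509 -in etcd_server.crt -noout -text | grep -A1 'Subject Alternative Name'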
- Edit the etcd configuration. Note that ETCD_NAME and the listen addresses differ on each node; adjust IPs and certificate paths to your environment. The example below is the config for 192.168.3.31.
ETCD_NAME=etcd1
ETCD_DATA_DIR=/home/atlas/apps/etcd/data
ETCD_CERT_FILE=/home/atlas/apps/etcd/certs/etcd_server.crt
ETCD_KEY_FILE=/home/atlas/apps/etcd/certs/etcd_server.key
ETCD_TRUSTED_CA_FILE=/home/atlas/apps/kubernetes/certs/ca.crt
ETCD_CLIENT_CERT_AUTH=true
ETCD_LISTEN_CLIENT_URLS=https://192.168.3.31:2379
ETCD_ADVERTISE_CLIENT_URLS=https://192.168.3.31:2379
ETCD_PEER_CERT_FILE=/home/atlas/apps/etcd/certs/etcd_server.crt
ETCD_PEER_KEY_FILE=/home/atlas/apps/etcd/certs/etcd_server.key
ETCD_PEER_TRUSTED_CA_FILE=/home/atlas/apps/kubernetes/certs/ca.crt
ETCD_LISTEN_PEER_URLS=https://192.168.3.31:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://192.168.3.31:2380
ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.3.31:2380,etcd2=https://192.168.3.32:2380,etcd3=https://192.168.3.33:2380"
ETCD_INITIAL_CLUSTER_STATE=new
- Create /etc/systemd/system/etcd.service, adjusting the paths of the config file and the etcd binary as needed
[Unit]
Description=etcd key-value store
Documentation=https://github.com/etcd-io/etcd
After=network.target
[Service]
EnvironmentFile=/home/atlas/apps/etcd/conf/etcd.conf
ExecStart=/usr/local/bin/etcd
Restart=always
[Install]
WantedBy=multi-user.target
- Reload systemd and start etcd
systemctl daemon-reload
systemctl start etcd
- Verify the cluster. Adjust the certificate paths and the etcd endpoints to your environment.
etcdctl --cacert=/etc/kubernetes/pki/ca.crt --cert=/home/atlas/apps/etcd/certs/etcd_client.crt --key=/home/atlas/apps/etcd/certs/etcd_client.key --endpoints=https://192.168.3.31:2379,https://192.168.3.32:2379,https://192.168.3.33:2379 endpoint health
If the cluster is healthy, the output looks similar to this:
https://192.168.3.33:2379 is healthy: successfully committed proposal: took = 27.841376ms
https://192.168.3.32:2379 is healthy: successfully committed proposal: took = 29.489289ms
https://192.168.3.31:2379 is healthy: successfully committed proposal: took = 35.703538ms
6. Deploy Kubernetes
The Kubernetes binaries can be downloaded from GitHub: https://github.com/kubernetes/kubernetes/releases
Find the download links in the CHANGELOG and fetch the server binaries, which include both the master and node components.
After extracting, move the binaries into /usr/local/bin.
6.1 Install kube-apiserver
- Create master_ssl.cnf. DNS.5 ~ DNS.7 are the hostnames of the three servers (also add them to /etc/hosts). IP.1 is the Cluster IP of the kubernetes Service; IP.2 ~ IP.4 are the apiserver host IPs.
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = k8s31
DNS.6 = k8s32
DNS.7 = k8s33
IP.1 = 169.169.0.1
IP.2 = 192.168.3.31
IP.3 = 192.168.3.32
IP.4 = 192.168.3.33
- Generate the certificate files
openssl genrsa -out apiserver.key 2048
openssl req -new -key apiserver.key -config master_ssl.cnf -subj "/CN=192.168.3.31" -out apiserver.csr
# ca.crt and ca.key are the CA files generated in section 4
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile master_ssl.cnf -out apiserver.crt
- Create sa.pub and sa-key.pem with cfssl. cfssl and cfssljson can be downloaded from GitHub.
cat<<EOF > sa-csr.json
{
"CN":"sa",
"key":{
"algo":"rsa",
"size":2048
},
"names":[
{
"C":"CN",
"L":"BeiJing",
"ST":"BeiJing",
"O":"k8s",
"OU":"System"
}
]
}
EOF
# cfssl and cfssljson can be found on GitHub
cfssl gencert -initca sa-csr.json | cfssljson -bare sa -
openssl x509 -in sa.pem -pubkey -noout > sa.pub
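The service-account key pair is just an RSA key, and the certificate cfssl produces is not otherwise used. If you would rather avoid cfssl, an equivalent sketch with openssl alone, producing the same sa-key.pem/sa.pub pair referenced below:
openssl genrsa -out sa-key.pem 2048
openssl rsa -in sa-key.pem -pubout -out sa.pub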
- Edit the kube-apiserver config file; adjust file paths and etcd addresses to your environment
KUBE_API_ARGS="--secure-port=6443 \
--tls-cert-file=/home/atlas/apps/kubernetes/apiserver/certs/apiserver.crt \
--tls-private-key-file=/home/atlas/apps/kubernetes/apiserver/certs/apiserver.key \
--client-ca-file=/home/atlas/apps/kubernetes/certs/ca.crt \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--service-account-key-file=/home/atlas/apps/kubernetes/certs/sa.pub \
--service-account-signing-key-file=/home/atlas/apps/kubernetes/certs/sa-key.pem \
--apiserver-count=3 --endpoint-reconciler-type=master-count \
--etcd-servers=https://192.168.3.31:2379,https://192.168.3.32:2379,https://192.168.3.33:2379 \
--etcd-cafile=/home/atlas/apps/kubernetes/certs/ca.crt \
--etcd-certfile=/home/atlas/apps/etcd/certs/etcd_client.crt \
--etcd-keyfile=/home/atlas/apps/etcd/certs/etcd_client.key \
--service-cluster-ip-range=169.169.0.0/16 \
--service-node-port-range=30000-32767 \
--allow-privileged=true \
--audit-log-maxsize=100 \
--audit-log-maxage=15 \
--audit-log-path=/home/atlas/apps/kubernetes/apiserver/logs/apiserver.log --v=2"
- Create the service file /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/home/atlas/apps/kubernetes/apiserver/conf/apiserver
ExecStart=/usr/local/bin/kube-apiserver $KUBE_API_ARGS
Restart=always
[Install]
WantedBy=multi-user.target
- Reload systemd and start kube-apiserver
systemctl daemon-reload
systemctl start kube-apiserver
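If the apiserver started correctly, its health endpoint should answer. Anonymous access to /healthz is permitted by the default system:public-info-viewer binding, so no client certificate is needed for this check:
curl -k https://192.168.3.31:6443/healthz
# expected output: ok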
- Generate a client certificate
openssl genrsa -out client.key 2048
# the /CN value identifies the client user name when connecting to the apiserver
openssl req -new -key client.key -subj "/CN=admin" -out client.csr
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 36500
- Create the kubeconfig that clients use to reach the apiserver. server is the nginx listen address; adjust to your environment.
apiVersion: v1
kind: Config
clusters:
- name: default
cluster:
server: https://192.168.3.31:9443
certificate-authority: /home/atlas/apps/kubernetes/certs/ca.crt
users:
- name: admin
user:
client-certificate: /home/atlas/apps/kubernetes/apiserver/certs/client.crt
client-key: /home/atlas/apps/kubernetes/apiserver/certs/client.key
contexts:
- context:
cluster: default
user: admin
name: default
current-context: default
6.2 Install kube-controller-manager
- Edit the config file /home/atlas/apps/kubernetes/controller-manager/conf/env
KUBE_CONTROLLER_MANAGER_ARGS="--kubeconfig=/home/atlas/apps/kubernetes/apiserver/conf/kubeconfig \
--leader-elect=true \
--service-cluster-ip-range=169.169.0.0/16 \
--service-account-private-key-file=/home/atlas/apps/kubernetes/apiserver/certs/apiserver.key \
--root-ca-file=/home/atlas/apps/kubernetes/certs/ca.crt \
--v=0"
- Create the service file /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/home/atlas/apps/kubernetes/controller-manager/conf/env
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=always
[Install]
WantedBy=multi-user.target
- Reload systemd and start it
systemctl daemon-reload
systemctl start kube-controller-manager
6.3 Install kube-scheduler
- Edit the config file
KUBE_SCHEDULER_ARGS="--kubeconfig=/home/atlas/apps/kubernetes/apiserver/conf/kubeconfig \
--leader-elect=true \
--v=0"
- Create the service file /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/home/atlas/apps/kubernetes/scheduler/conf/env
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=always
[Install]
WantedBy=multi-user.target
- Start it
systemctl daemon-reload
systemctl start kube-scheduler
6.4 Install nginx
nginx provides a TCP reverse proxy in front of the apiservers; haproxy would also work. For building nginx from source see 博客園 - linux編譯安裝nginx; running nginx in docker is even simpler. Those steps are omitted here. Example configuration:
worker_processes auto;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
events {
worker_connections 65536;
}
stream{
log_format json2 '$remote_addr [$time_local] '
'$protocol $status $bytes_sent $bytes_received '
'$session_time "$upstream_addr" '
'"$upstream_bytes_sent" "$upstream_bytes_received" "$upstream_connect_time"';
access_log logs/stream.log json2;
upstream apiservers {
server 192.168.3.31:6443;
server 192.168.3.32:6443;
server 192.168.3.33:6443;
}
server {
listen 9443;
proxy_pass apiservers;
}
}
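Once nginx is up, the whole chain can be verified with the kubeconfig from 6.1, whose server address points at nginx on port 9443. Listing the leases also confirms that controller-manager and scheduler won their leader elections:
kubectl --kubeconfig=/home/atlas/apps/kubernetes/apiserver/conf/kubeconfig get lease -n kube-system
# expect leases named kube-controller-manager and kube-scheduler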
6.5 Install kubelet
- Edit /home/atlas/apps/kubernetes/kubelet/conf/env. Change the IP in hostname-override to each node's own IP. If you changed the containerd socket path, update it here as well.
KUBELET_ARGS="--kubeconfig=/home/atlas/apps/kubernetes/apiserver/conf/kubeconfig \
--config=/home/atlas/apps/kubernetes/kubelet/conf/kubelet.config \
--hostname-override=192.168.3.31 \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--v=0"
Key parameters:
Parameter | Description |
---|---|
--kubeconfig | Config for connecting to the apiserver; can be the same kubeconfig used by controller-manager. On new nodes, remember to copy the client certificate files (ca.crt, client.key, client.crt) |
--config | kubelet configuration file holding parameters that multiple nodes can share |
--hostname-override | This node's name in the cluster; defaults to the hostname |
--container-runtime-endpoint | Socket address of the CRI runtime, containerd here (the old --network-plugin flag was removed in 1.24+) |
- Edit /home/atlas/apps/kubernetes/kubelet/conf/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
cgroupDriver: systemd
clusterDNS: ["169.169.0.100"]
clusterDomain: cluster.local
authentication:
anonymous:
enabled: true
Key parameters:
Parameter | Description |
---|---|
address | IP address the service listens on |
port | Port the service listens on; default 10250 |
cgroupDriver | cgroup driver; default cgroupfs, systemd optional |
clusterDNS | IP address of the cluster DNS service |
clusterDomain | DNS domain suffix for services |
authentication | Whether anonymous access is allowed and whether webhook authentication is used |
- Create the service file /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
[Service]
EnvironmentFile=/home/atlas/apps/kubernetes/kubelet/conf/env
ExecStart=/usr/local/bin/kubelet $KUBELET_ARGS
Restart=always
[Install]
WantedBy=multi-user.target
- Reload systemd and start kubelet
systemctl daemon-reload && systemctl start kubelet
6.6 Install kube-proxy
- Edit /home/atlas/apps/kubernetes/proxy/conf/env. Change the IP in hostname-override to each node's own IP.
KUBE_PROXY_ARGS="--kubeconfig=/home/atlas/apps/kubernetes/apiserver/conf/kubeconfig \
--hostname-override=192.168.3.31 \
--proxy-mode=ipvs \
--v=0"
- Create the service file /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
EnvironmentFile=/home/atlas/apps/kubernetes/proxy/conf/env
ExecStart=/usr/local/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=always
[Install]
WantedBy=multi-user.target
- Reload systemd and start it
systemctl daemon-reload && systemctl start kube-proxy
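Because kube-proxy runs in ipvs mode, the Service virtual servers should appear in the kernel IPVS table once it has synced (ipvsadm was installed during system initialization):
ipvsadm -Ln
# expect a virtual server for the kubernetes Service, e.g. TCP 169.169.0.1:443 with the apiserver IPs as real servers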
6.7 Install calico
- On a master node, use kubectl to list the nodes that registered with the cluster automatically. Because the apiserver enforces TLS client authentication, kubectl also needs the client CA certificate to connect; the kubeconfig from section 6.1 can be reused directly.
kubectl --kubeconfig=/home/atlas/apps/kubernetes/apiserver/conf/kubeconfig get nodes
To avoid specifying the kubeconfig on every command, add the following line to ~/.bashrc and then run source ~/.bashrc:
alias kubectl='/usr/local/bin/kubectl --kubeconfig=/home/atlas/apps/kubernetes/apiserver/conf/kubeconfig'
If you followed the steps above, the command should produce output like this:
NAME STATUS ROLES AGE VERSION
192.168.3.31 Ready <none> 18m v1.26.6
192.168.3.32 Ready <none> 16m v1.26.6
192.168.3.33 Ready <none> 16m v1.26.6
The cri-containerd-cni bundle already includes CNI plugins, which is why the nodes report Ready. In my testing, cross-node pod communication was still unreliable, so I switched to calico, which I am more familiar with.
- Download the calico manifest.
wget https://docs.projectcalico.org/manifests/calico.yaml
- Edit calico.yaml. For the offline deployment, the calico images were pulled from the public internet in advance and pushed to the internal Harbor, so the image references are changed to the internal registry:
image: registry.atlas.cn/calico/cni:v3.26.1
image: registry.atlas.cn/calico/node:v3.26.1
image: registry.atlas.cn/calico/kube-controllers:v3.26.1
- Deploy calico. (The kubectl commands below rely on the alias defined above.)
kubectl apply -f calico.yaml
- Check whether the calico pods are running. They should all reach Running status; if not, describe the pods to see what went wrong.
kubectl get pods -A
6.8 Install CoreDNS in the cluster
- Create the manifest coredns.yaml. The forward address inside the Corefile is the internal DNS server; if there is no internal DNS, you can forward to /etc/resolv.conf instead. The coredns image was likewise pushed to Harbor in advance. The Deployment also references an imagePullSecret; see the note after the manifest.
---
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
labels:
addonmanager.kubernetes.io/mode: EnsureExists
data:
Corefile: |
cluster.local {
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local 169.169.0.0/16 {
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
forward . 192.168.3.41
cache 30
loop
reload
loadbalance
}
. {
cache 30
loadbalance
forward . 192.168.3.41
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: coredns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/name: "CoreDNS"
spec:
replicas: 1
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
k8s-app: kube-dns
template:
metadata:
labels:
k8s-app: kube-dns
spec:
priorityClassName: system-cluster-critical
tolerations:
- key: "CriticalAddonsOnly"
operator: "Exists"
nodeSelector:
kubernetes.io/os: linux
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: k8s-app
operator: In
values: ["kube-dns"]
topologyKey: kubernetes.io/hostname
imagePullSecrets:
- name: registry-harbor
containers:
- name: coredns
image: registry.atlas.cn/public/coredns:1.11.1
imagePullPolicy: IfNotPresent
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
args: [ "-conf", "/etc/coredns/Corefile" ]
volumeMounts:
- name: config-volume
mountPath: /etc/coredns
readOnly: true
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- containerPort: 9153
name: metrics
protocol: TCP
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_BIND_SERVICE
drop:
- all
readOnlyRootFilesystem: true
livenessProbe:
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /ready
port: 8181
scheme: HTTP
dnsPolicy: Default
volumes:
- name: config-volume
configMap:
name: coredns
items:
- key: Corefile
path: Corefile
---
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
annotations:
prometheus.io/port: "9153"
prometheus.io/scrape: "true"
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: 169.169.0.100
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP
- name: metrics
port: 9153
protocol: TCP
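The Deployment above pulls its image through an imagePullSecret named registry-harbor, which must exist in kube-system first. A sketch of creating it with the Harbor credentials from section 2, then applying the manifest:
kubectl create secret docker-registry registry-harbor \
  --docker-server=registry.atlas.cn \
  --docker-username=admin \
  --docker-password=Harbor2023 \
  -n kube-system
kubectl apply -f coredns.yaml
# the coredns pod should reach Running
kubectl get pods -n kube-system -l k8s-app=kube-dns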
- Deploy an nginx workload for testing. Adjust the image reference to your environment
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-nginx
spec:
replicas: 2
selector:
matchLabels:
app: nginx
env: dev
template:
metadata:
labels:
app: nginx
env: dev
spec:
containers:
- name: nginx
image: registry.atlas.cn/public/nginx:1.25.1
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: svc-nginx
spec:
ports:
- protocol: TCP
port: 80
targetPort: 80
selector:
app: nginx
env: dev
Deploy the nginx service
kubectl create -f nginx.yaml
- Run an ubuntu pod. The image is based on the stock ubuntu:22.04 with dnsutils preinstalled, then pushed to the internal Harbor. The Dockerfile:
FROM ubuntu:22.04
RUN apt update -y && apt install -y dnsutils iputils-ping curl
RUN apt clean && rm -rf /var/lib/apt/lists/*
Declare the pod in a manifest. Adjust the image reference to your environment
apiVersion: v1
kind: Pod
metadata:
name: ubuntu
namespace: default
spec:
containers:
- name: ubuntu
image: registry.atlas.cn/public/ubuntu:22.04.1
command:
- tail
- -f
- /dev/null
Create the pod
kubectl create -f ubuntu.yaml
Exec into the pod
kubectl exec -it ubuntu -- bash
Inside the pod, test connectivity to nginx. If everything responds normally, the cluster is basically up and running.
# test that the name svc-nginx resolves
nslookup svc-nginx
# test that svc-nginx answers on port 80
curl http://svc-nginx
Additional notes
The steps above only stand up a basic cluster capable of serving workloads. Production environments also need to address storage, networking, security, and more; that material is extensive and beyond the scope of this article, so consult other references.
Troubleshooting
sysctl reports errors when loading the configuration:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
Fix: load the br_netfilter kernel module
modprobe br_netfilter
References
- CSDN - Debian11之 Containerd1.7.x 安裝及配置
- 博客園 - k8s-1.26.0 + Containerd安裝過程
- 博客園 - 配置Containerd在harbor私有倉庫
- 博客園 - linux離線安裝docker與compose
- 博客園 - centos離線安裝harbor
- 博客園 - [kubernetes]二進制部署k8s集群