In the previous article (deploying CentOS in VMware and configuring the network) we prepared the base servers. Next we need three servers:
master node: master   192.168.171.7
node: node1   192.168.171.6
node: node2   192.168.171.4
For this step, start all three virtual machines and connect to them with Xshell.
Use Xshell's send-to-all-sessions feature to run the same command on every server at once.
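If you prefer not to rely on Xshell's send-to-all-sessions feature, a minimal loop over ssh achieves the same thing. This is only a sketch: it assumes key-based SSH login to all three machines is already set up, and the IPs must match your own.
# broadcast one command to all three servers (sketch; adjust IPs and user)
for h in 192.168.171.7 192.168.171.6 192.168.171.4; do
  ssh root@"$h" 'date'   # replace 'date' with the command you want to run
done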
Part 1: Pre-deployment preparation (run on all three servers)
Check the OS version
# This installation method requires CentOS 7.5 or later
cat /etc/redhat-release
# CentOS Linux release 7.9.2009 (Core)
Hostname resolution
To make it easy for the cluster nodes to reach each other by name, configure hostname resolution here. In production an internal DNS server is recommended. Adjust the IP addresses below to match your own machines. Edit /etc/hosts on all three servers and add the following:
192.168.171.7 master
192.168.171.6 node1
192.168.171.4 node2
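If you want to script this instead of editing the file by hand, a small heredoc append works; this is just a convenience sketch, using the same IPs as above:
# append the three entries to /etc/hosts (run once per server)
cat >> /etc/hosts <<EOF
192.168.171.7 master
192.168.171.6 node1
192.168.171.4 node2
EOF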
To verify, run the following on each of the three servers:
ping master
ping node1
ping node2
Time synchronization
Kubernetes requires the clocks of all cluster nodes to be in sync. Here we simply use the chronyd service to sync time from the network.
In production an internal time server is recommended.
# Start the chronyd service
systemctl start chronyd
# Enable chronyd at boot
systemctl enable chronyd
# A few seconds after chronyd starts, verify the time with date
date
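To confirm that chronyd is actually syncing against a source rather than just running, chronyc can be queried; the output should list at least one reachable server, with the currently selected one marked ^*:
# show the time sources chronyd is using
chronyc sources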
Disable the iptables and firewalld services
Kubernetes and Docker generate a large number of iptables rules at runtime. To keep those from getting tangled up with the system's own rules, disable the system firewall services outright.
# 1. Stop and disable firewalld
systemctl stop firewalld
systemctl disable firewalld
# 2. Stop and disable iptables
systemctl stop iptables
systemctl disable iptables
Disable SELinux
SELinux is a security service in Linux; if it is left enabled, all sorts of odd problems crop up during cluster installation.
Edit /etc/selinux/config and change the value of SELINUX to disabled:
# Note: a reboot is required after this change
SELINUX=disabled
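The edit can also be done non-interactively. A sketch, assuming the file currently says SELINUX=enforcing; setenforce 0 additionally switches SELinux to permissive for the running session so you can carry on without rebooting immediately:
# switch SELinux off permanently (takes effect after reboot)
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
# relax it for the current session as well
setenforce 0
# verify: prints Permissive now, Disabled after the reboot
getenforce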
Disable the swap partition
The swap partition is virtual memory: once physical memory is used up, disk space is treated as memory.
An enabled swap device can hurt system performance badly, so kubernetes requires swap to be disabled on every node.
If for some reason swap really cannot be disabled, this has to be declared explicitly via parameters during cluster installation.
# Edit the partition config file /etc/fstab and comment out the swap line
# Note: a reboot is required after this change
UUID=455cc753-7a60-4c17-a424-7741728c44a1 /boot xfs defaults 0 0
# /dev/mapper/centos-swap swap swap defaults 0 0
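Swap can also be turned off immediately for the running session, which saves waiting for the reboot; free then shows the swap line at zero:
# disable all swap devices right now
swapoff -a
# verify: the Swap row should read 0
free -m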
Adjust the Linux kernel parameters
# Enable bridge filtering and IP forwarding
# Edit /etc/sysctl.d/kubernetes.conf and add the following:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
# Load the bridge-filtering module first, so that the net.bridge.* keys exist
modprobe br_netfilter
# Reload the configuration (plain `sysctl -p` only reads /etc/sysctl.conf, so point it at the new file)
sysctl -p /etc/sysctl.d/kubernetes.conf
# Check that the bridge-filtering module loaded successfully
lsmod | grep br_netfilter
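One caveat worth noting: modprobe does not survive a reboot, and this tutorial reboots at the end of Part 1. The sysctl settings reapply automatically from /etc/sysctl.d, but only if the module is present. A common way to reload br_netfilter at boot on CentOS 7 (via systemd-modules-load) is sketched below; treat it as an optional safeguard:
# load br_netfilter automatically on every boot
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf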
Enable IPVS support
# 1. Install ipset and ipvsadm (note the package name is ipvsadm)
yum install ipset ipvsadm -y
# 2. Write the modules to load into a script file
cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#! /bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
# 3. Make the script executable
chmod +x /etc/sysconfig/modules/ipvs.modules
# 4. Run the script
/bin/bash /etc/sysconfig/modules/ipvs.modules
# 5. Check that the modules loaded successfully
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
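Once the cluster is up and kube-proxy is running in ipvs mode, ipvsadm can be used to confirm that virtual-server entries are actually being created; right now, before installation, the table will simply be empty:
# list the IPVS virtual server table
ipvsadm -Ln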
Reboot the servers
After all the steps above are done, reboot the Linux systems.
reboot
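After the machines come back up, it is worth a quick check that the settings survived the reboot; a short verification sketch:
# SELinux should report Disabled
getenforce
# the Swap row should be all zeros
free -m
# the ipvs modules should be loaded; if empty, re-run /etc/sysconfig/modules/ipvs.modules
lsmod | grep ip_vs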
Part 2: Software prerequisites for k8s (run on all three servers)
1: Install Docker
The mirror https://o36meer8.mirror.aliyuncs.com below can be replaced with your own Aliyun registry mirror address.
# 1. Switch the package repo to a mirror
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
# 2. List the docker versions available from the current mirror
yum list docker-ce --showduplicates
# 3. Install a specific version of docker-ce
# --setopt=obsoletes=0 must be given, otherwise yum automatically installs a newer version
yum install --setopt=obsoletes=0 docker-ce-18.06.3.ce-3.el7 -y
# 4. Add a configuration file
# Docker uses cgroupfs as its Cgroup Driver by default, while kubernetes recommends systemd instead
mkdir /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"],
"registry-mirrors": ["https://o36meer8.mirror.aliyuncs.com"]
}
EOF
# 5. Start docker and enable it at boot
systemctl restart docker
systemctl enable docker
# 6. Check the docker status and version
docker version
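At this point it is worth double-checking that docker actually picked up the systemd cgroup driver from daemon.json, since a cgroupfs/systemd mismatch with the kubelet is a classic source of init failures:
# should print: Cgroup Driver: systemd
docker info | grep -i "cgroup driver"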
2: Install the kubernetes components
# The kubernetes package repo is hosted abroad and slow, so switch to a domestic mirror
# Edit /etc/yum.repos.d/kubernetes.repo and add the following configuration:
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
# Install kubeadm, kubelet and kubectl
yum install --setopt=obsoletes=0 kubeadm-1.17.4-0 kubelet-1.17.4-0 kubectl-1.17.4-0 -y
# Configure the kubelet cgroup driver
# Edit /etc/sysconfig/kubelet and add the following:
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
# Enable kubelet at boot (no need to start it now; kubeadm starts it during init/join)
systemctl enable kubelet
3: Prepare the cluster images
The images=( ... ) array below is typed directly at the shell prompt (press Enter after each line), and the for loop is entered the same way.
# The images the cluster needs must be prepared before installing kubernetes; list them with the command below
kubeadm config images list
# Pull the images
# These images live in the kubernetes registry, which is unreachable from many networks; the commands below pull them from an Aliyun mirror and retag them instead
images=(
kube-apiserver:v1.17.4
kube-controller-manager:v1.17.4
kube-scheduler:v1.17.4
kube-proxy:v1.17.4
pause:3.1
etcd:3.4.3-0
coredns:1.6.5
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
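After the loop finishes, the local image list should show all seven images retagged under k8s.gcr.io, which is where kubeadm expects to find them:
# verify the retagged images are present
docker images | grep k8s.gcr.io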
Part 3: Cluster initialization
Now initialize the cluster and join the node machines to it.
The following steps are run on the master node only.
apiserver-advertise-address is the master node's IP address; change it to match your own master.
# Create the cluster
kubeadm init --kubernetes-version=v1.17.4 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --apiserver-advertise-address=192.168.171.7
# Set up the files kubectl needs
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
At this point, run:
kubectl get nodes
Only the master node shows up for now; node1 and node2 still have to be added.
The kubeadm init output included a kubeadm join command; copy it.
Run that command on the two node machines (the exact command comes from your own machine's init output).
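For orientation, the join command printed by kubeadm init has roughly the following shape; the token and hash here are placeholders, not values you can use:
kubeadm join 192.168.171.7:6443 --token <your-token> \
    --discovery-token-ca-cert-hash sha256:<your-ca-cert-hash>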
Then run kubectl get nodes on the master again and all three nodes are listed.
Their status is NotReady, however, because no network plugin has been installed yet.
Part 4: Install a network plugin
kubernetes supports several network plugins (flannel, calico, canal, and so on); pick any one of them. Here we use flannel.
The following is again run only on the master node. The plugin is deployed as a DaemonSet, so it runs on every node.
# Fetch the flannel manifest
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Apply the manifest to start flannel
kubectl apply -f kube-flannel.yml
# Wait a moment, then check the node status again
kubectl get nodes
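The nodes flip to Ready only once the flannel pods are running, so if NotReady persists, watch the pods in kube-system; each node should end up with one kube-flannel pod in Running state:
# watch the flannel pods come up
kubectl get pods -n kube-system -o wide | grep flannel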
With that, the kubernetes cluster environment is complete.
Note: if this kube-flannel.yml no longer works, look online for a suitable kube-flannel.yml file.
The contents of the kube-flannel.yml used here are as follows:
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
    - min: 0
      max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
  - kind: ServiceAccount
    name: flannel
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: registry.cn-zhangjiakou.aliyuncs.com/test-lab/coreos-flannel:amd64
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: registry.cn-zhangjiakou.aliyuncs.com/test-lab/coreos-flannel:amd64
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: registry.cn-zhangjiakou.aliyuncs.com/test-lab/coreos-flannel:arm64
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: registry.cn-zhangjiakou.aliyuncs.com/test-lab/coreos-flannel:arm64
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - arm
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: registry.cn-zhangjiakou.aliyuncs.com/test-lab/coreos-flannel:arm
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: registry.cn-zhangjiakou.aliyuncs.com/test-lab/coreos-flannel:arm
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - ppc64le
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: registry.cn-zhangjiakou.aliyuncs.com/test-lab/coreos-flannel:ppc64le
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: registry.cn-zhangjiakou.aliyuncs.com/test-lab/coreos-flannel:ppc64le
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - s390x
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: registry.cn-zhangjiakou.aliyuncs.com/test-lab/coreos-flannel:s390x
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: registry.cn-zhangjiakou.aliyuncs.com/test-lab/coreos-flannel:s390x
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
Part 5: Common problems and fixes
1: I ran kubeadm init earlier and lost the original "kubeadm join" command. What now?
Run this on the master node:
kubeadm token create --print-join-command
2: service docker start / systemctl start docker fails with "systemctl status docker.service" and "journalctl -xe" for details
The error message may also look like this:
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
Fix (this case came up after downloading the latest rpm from the docker website and installing it manually with yum install **/**/**.rpm):
Apr 3 15:31:11 Docker kernel: bio: create slab <bio-2> at 2
Apr 3 15:31:11 Docker dockerd: time="2018-04-03T15:31:11.835565555+08:00" level=info msg="devmapper: Creating filesystem xfs on device docker-253:1-34265854-base, mkfs args: [-m crc=0,finobt=0 /dev/mapper/docker-253:1-34265854-base]"
Apr 3 15:31:11 Docker dockerd: time="2018-04-03T15:31:11.836336636+08:00" level=info msg="devmapper: Error while creating filesystem xfs on device docker-253:1-34265854-base: exit status 1"
Apr 3 15:31:11 Docker dockerd: time="2018-04-03T15:31:11.836350296+08:00" level=error msg="[graphdriver] prior storage driver devicemapper failed: exit status 1"
Apr 3 15:31:11 Docker dockerd: Error starting daemon: error initializing graphdriver: exit status 1
Apr 3 15:31:11 Docker systemd: docker.service: main process exited, code=exited, status=1/FAILURE
Apr 3 15:31:11 Docker systemd: Failed to start Docker Application Container Engine.
The logs make it obvious: the mkfs.xfs version is too old, so update it:
yum update xfsprogs
Restart the docker service and it works.
3: Running kubeadm join on the node1 machine fails with: error execution phase preflight: [preflight] Some fatal errors occurred:
W0821 17:29:08.265525 7407 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[ERROR FileAvailable--etc-kubernetes-bootstrap-kubelet.conf]: /etc/kubernetes/bootstrap-kubelet.conf already exists
[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
Cause: some configuration files and state from a previous run already exist.
Fix:
# Reset kubeadm
kubeadm reset
Then re-run the kubeadm join command.
4: After installing the network plugin, a node reports the error: Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1"
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned kube-system/kube-flannel-ds-amd64-rkb6q to localhost.localdomain
Warning FailedCreatePodSandBox 4m1s (x68 over 39m) kubelet, localhost.localdomain Failed to create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: TLS handshake timeout
The fix is as follows:
# Inspect the kubelet config
$ systemctl status -l kubelet
$ cd /var/lib/kubelet/
$ cp kubeadm-flags.env kubeadm-flags.env.ori
# In kubeadm-flags.env, change k8s.gcr.io/pause:3.1 to registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
$ cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1"
# Restart the kubelet service
$ systemctl daemon-reload
$ systemctl restart kubelet
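The edit itself can be scripted. A sketch, assuming the file contains the k8s.gcr.io/pause:3.1 reference shown in the error above (adjust the tags if your kubelet references different ones):
# swap the pause image reference in place
sed -i 's#k8s.gcr.io/pause:3.1#registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1#' /var/lib/kubelet/kubeadm-flags.env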
5: The server's hostname is still localhost. How do I change it?
Temporary: sudo hostname new_hostname
Permanent: hostnamectl set-hostname new_hostname (or edit /etc/hostname), then update the corresponding entry in /etc/hosts as well.
6: The k8s install fails with Error: docker-ce-cli conflicts with 2:docker-1.13.1-103.git7f2769b.el7.centos.x86_64
The cause is a version conflict with a docker that was installed earlier. Fix as follows:
List the installed docker packages: yum list installed | grep docker
containerd.io.x86_64 1.2.13-3.1.el7 @docker-ce-stable
docker-ce.x86_64 3:19.03.8-3.el7 @docker-ce-stable
docker-ce-cli.x86_64 1:19.03.7-3.el7 @docker-ce-stable
Remove the corresponding packages:
yum -y remove containerd.io.x86_64 docker-ce.x86_64 docker-ce-cli.x86_64
Reference: https://www.cnblogs.com/liulj0713/p/12501957.html
7: [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
Cause: this error appears while initializing the cluster when /var/lib/etcd is not empty, which happens when an etcd instance previously ran in that directory. Either empty the directory (back it up first if needed) and retry, or pass a flag to ignore the error and continue. For a more detailed stack trace, add --v=5 or higher to the command.
Fix: append --ignore-preflight-errors=DirAvailable--var-lib-etcd to the command,
e.g. kubeadm init --config /root/kubeadm-config.yaml --upload-certs --ignore-preflight-errors=DirAvailable--var-lib-etcd
Reference: https://www.cnblogs.com/yuwen01/p/17438591.html
Other errors: see "K8S 故障處理經(jīng)驗積累(網(wǎng)絡(luò))" (牛牛Blog, CSDN).
References: "K8s集群搭建教程" (ikemorebi, CSDN); "虛擬機(jī)部署Kubernetes(K8S)" (生骨大頭菜, CSDN); "K8S集群安裝與部署(Linux系統(tǒng))" (Chensay., CSDN).