
Learning and installing Liqo: Kubernetes (k8s) multi-cluster interconnection


First, follow the official tutorial and install it on virtual machines to learn the basics.

Before starting the tutorial below, make sure the following software is installed on your system (a quick verification sketch follows the list):

  • Docker, the container runtime.
  • kubectl, the command-line tool for Kubernetes.
 curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
 sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
 kubectl version --client
  • Helm, the package manager for Kubernetes.
 curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
 helm version
  • curl, to interact with the tutorial applications over HTTP/HTTPS.
  • Kind, Kubernetes in Docker (Kubernetes running inside Docker containers, mainly used for testing).
#For AMD64 / x86_64
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.22.0/kind-linux-amd64
#For ARM64
[ $(uname -m) = aarch64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.22.0/kind-linux-arm64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
kind version
  • liqoctl, the command-line tool to interact with Liqo.
 #AMD64:
curl --fail -LS "https://github.com/liqotech/liqo/releases/download/v0.10.2/liqoctl-linux-amd64.tar.gz" | tar -xz
sudo install -o root -g root -m 0755 liqoctl /usr/local/bin/liqoctl
#ARM64:
curl --fail -LS "https://github.com/liqotech/liqo/releases/download/v0.10.2/liqoctl-linux-arm64.tar.gz" | tar -xz
sudo install -o root -g root -m 0755 liqoctl /usr/local/bin/liqoctl
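
After installing everything above, a quick sanity check confirms that each tool is available on the PATH (exact version numbers will differ on your machine):

docker version
kubectl version --client
helm version
kind version
liqoctl version    # may also try to read the server version if a cluster is already configured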

Create the virtual clusters

Then open a terminal on your machine and launch the following script, which uses Kind to create a pair of clusters. Each cluster is composed of two nodes (one for the control plane and one acting as a simple worker):

git clone https://github.com/liqotech/liqo.git
cd liqo
git checkout v0.10.2
cd examples/quick-start
./setup.sh

If this step fails with an error like: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged rome-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1
Command Output: I0419 00:55:49.069409 44 initconfiguration.go:254] loading configuration from “/kind/kubeadm.conf”

the HTTP call equal to ‘curl -sSL http://localhost:10248/healthz’ failed with error: Get “http://localhost:10248/healthz”: dial tcp [::1]:10248: connect: connection refused.
Troubleshooting:
Running kind create cluster --name rome directly succeeds, and the output shows Ensuring node image: kindest/node:v1.29.2,
while ./setup.sh uses v1.25.0.
Fix:
Edit /opt/liqo/examples/quick-start/manifests/cluster.yaml and change the kindest/node image to v1.29.2 (a one-line alternative is sketched after the snippet):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  serviceSubnet: "10.90.0.0/12"
  podSubnet: "10.200.0.0/16"
nodes:
  - role: control-plane
    image: kindest/node:v1.25.0  # change this to v1.29.2
  - role: worker
    image: kindest/node:v1.25.0  # change this to v1.29.2
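
If you prefer not to edit the file by hand, the same change can be made with a one-liner run from the examples/quick-start directory (path assumed from the checkout above):

sed -i 's|kindest/node:v1.25.0|kindest/node:v1.29.2|g' manifests/cluster.yaml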


Then run it again:

root@main:~/liqo/examples/quick-start# ./setup.sh 
SUCCESS	No cluster "rome" is running.
SUCCESS	No cluster "milan" is running.
SUCCESS	Cluster "rome" has been created.
SUCCESS	Cluster "milan" has been created.
root@main:~/liqo/examples/quick-start# 

Test the clusters

You can check the deployed clusters by typing:

root@main:~/liqo/examples/quick-start# kind get clusters
milan
rome
root@main:~/liqo/examples/quick-start# 

This means that two Kind clusters are deployed and running on your host.
By default, the kubeconfig files of the two clusters are stored in the current directory (./liqo_kubeconf_rome, ./liqo_kubeconf_milan).

root@main:~/liqo/examples/quick-start# pwd
/root/liqo/examples/quick-start
root@main:~/liqo/examples/quick-start# ls
liqo_kubeconf_milan  liqo_kubeconf_rome  manifests  setup.sh
root@main:~/liqo/examples/quick-start# 

You can export the environment variables used throughout the rest of this tutorial (i.e., KUBECONFIG and KUBECONFIG_MILAN), pointing to their locations:

export KUBECONFIG="$PWD/liqo_kubeconf_rome"
export KUBECONFIG_MILAN="$PWD/liqo_kubeconf_milan"

To make the configuration permanent, add the exports to ~/.bashrc:

sudo vim ~/.bashrc
export KUBECONFIG="$PWD/liqo_kubeconf_rome"
export KUBECONFIG_MILAN="$PWD/liqo_kubeconf_milan"
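
Note that $PWD is expanded when each new shell starts (usually in your home directory), not in the quick-start directory, so in ~/.bashrc it is safer to use the absolute paths, here taken from the pwd output above:

export KUBECONFIG="/root/liqo/examples/quick-start/liqo_kubeconf_rome"
export KUBECONFIG_MILAN="/root/liqo/examples/quick-start/liqo_kubeconf_milan"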

It is suggested to export the kubeconfig of the first cluster as the default one (i.e., KUBECONFIG), since it will be the entry point of the virtual cluster and you will mainly interact with it.

On the first cluster, you can get the available pods by simply typing:

root@liqo:~/liqo/examples/quick-start# kubectl get pods -A
NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE
kube-system          coredns-76f75df574-m5ppl                     1/1     Running   0          110s
kube-system          coredns-76f75df574-mljgc                     1/1     Running   0          110s
kube-system          etcd-rome-control-plane                      1/1     Running   0          2m6s
kube-system          kindnet-p9cjf                                1/1     Running   0          107s
kube-system          kindnet-pg7bt                                1/1     Running   0          110s
kube-system          kube-apiserver-rome-control-plane            1/1     Running   0          2m4s
kube-system          kube-controller-manager-rome-control-plane   1/1     Running   0          2m4s
kube-system          kube-proxy-gh5kr                             1/1     Running   0          110s
kube-system          kube-proxy-l67g5                             1/1     Running   0          107s
kube-system          kube-scheduler-rome-control-plane            1/1     Running   0          2m4s
local-path-storage   local-path-provisioner-7577fdbbfb-ct9rz      1/1     Running   0          110s

Similarly, on the second cluster you can observe the pods in execution:

kubectl get pods -A --kubeconfig "$KUBECONFIG_MILAN"

Install Liqo

You will now install Liqo on both clusters, using the following names:

  • rome: the local cluster, where you will deploy and control the applications.

  • milan: the remote cluster, to which part of the workloads will be offloaded.

You can install Liqo on the rome cluster by launching:

liqoctl install kind --cluster-name rome

Here the installation kept failing with the following error:

 ERRO  Error installing or upgrading Liqo: release liqo failed, and has been uninstalled due to atomic being set: timed out waiting for the condition                                                                                                                                    
 INFO  Likely causes for the installation/upgrade timeout could include:
 INFO  * One or more pods failed to start (e.g., they are in the ImagePullBackOff status)
 INFO  * A service of type LoadBalancer has been configured, but no provider is available
 INFO  You can add the --verbose flag for debug information concerning the failing resources
 INFO  Additionally, if necessary, you can increase the timeout value with the --timeout flag

Following the hint, run liqoctl install kind --cluster-name rome --verbose to show the details:

 Starting delete for "liqo-webhook-certificate-patch" ServiceAccount                                                                                                                                                                                                               
 INFO  beginning wait for 1 resources to be deleted with timeout of 10m0s                                                                                                                                                                                                                
 INFO  creating 1 resource(s)                                                                                                                                                                                                                                                            
 INFO  Starting delete for "liqo-webhook-certificate-patch" Role                                                                                                                                                                                                                         
 INFO  beginning wait for 1 resources to be deleted with timeout of 10m0s                                                                                                                                                                                                                
 INFO  creating 1 resource(s)                                                                                                                                                                                                                                                            
 INFO  Starting delete for "liqo-webhook-certificate-patch" RoleBinding                                                                                                                                                                                                                  
 INFO  beginning wait for 1 resources to be deleted with timeout of 10m0s                                                                                                                                                                                                                
 INFO  creating 1 resource(s)                                                                                                                                                                                                                                                            
 INFO  Starting delete for "liqo-webhook-certificate-patch-pre" Job                                                                                                                                                                                                                      
 INFO  beginning wait for 1 resources to be deleted with timeout of 10m0s                                                                                                                                                                                                                
 INFO  creating 1 resource(s)                                                                                                                                                                                                                                                            
 INFO  Watching for changes to Job liqo-webhook-certificate-patch-pre with timeout of 10m0s                                                                                                                                                                                              
 INFO  Add/Modify event for liqo-webhook-certificate-patch-pre: ADDED                                                                                                                                                                                                                    
 INFO  liqo-webhook-certificate-patch-pre: Jobs active: 0, jobs failed: 0, jobs succeeded: 0                                                                                                                                                                                             
 INFO  Add/Modify event for liqo-webhook-certificate-patch-pre: MODIFIED                                                                                                                                                                                                                 
 INFO  liqo-webhook-certificate-patch-pre: Jobs active: 1, jobs failed: 0, jobs succeeded: 0                                                                                                                                                                                             
 INFO  Install failed and atomic is set, uninstalling release       
 .....
 Ignoring delete failure for "liqo-telemetry" /v1, Kind=ServiceAccount: serviceaccounts "liqo-telemetry" not found                                                                                                                                                                 
 INFO  Ignoring delete failure for "liqo-crd-replicator" /v1, Kind=ServiceAccount: serviceaccounts "liqo-crd-replicator" not found                                                                                                                                                       
 INFO  Ignoring delete failure for "liqo-gateway" /v1, Kind=ServiceAccount: serviceaccounts "liqo-gateway" not found                                                                                                                                                                     
 INFO  Ignoring delete failure for "liqo-auth" /v1, Kind=ServiceAccount: serviceaccounts "liqo-auth" not found                                                                                                                                                                           
 INFO  Ignoring delete failure for "liqo-network-manager" /v1, Kind=ServiceAccount: serviceaccounts "liqo-network-manager" not found                                                                                                                                                     
 INFO  Ignoring delete failure for "liqo-route" /v1, Kind=ServiceAccount: serviceaccounts "liqo-route" not found                                                                                                                                                                         
 INFO  Ignoring delete failure for "liqo-controller-manager" /v1, Kind=ServiceAccount: serviceaccounts "liqo-controller-manager" not found                                                                                                                                               
 INFO  Ignoring delete failure for "liqo-metric-agent" /v1, Kind=ServiceAccount: serviceaccounts "liqo-metric-agent" not found                                                                                                                                                           
 INFO  Starting delete for "liqo-webhook" MutatingWebhookConfiguration                                                                                                                                                                                                                   
 INFO  Ignoring delete failure for "liqo-webhook" admissionregistration.k8s.io/v1, Kind=MutatingWebhookConfiguration: mutatingwebhookconfigurations.admissionregistration.k8s.io "liqo-webhook" not found                                                                                
 INFO  Starting delete for "liqo-webhook" ValidatingWebhookConfiguration                                                                                                                                                                                                                 
 INFO  Ignoring delete failure for "liqo-webhook" admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration: validatingwebhookconfigurations.admissionregistration.k8s.io "liqo-webhook" not found                                                                            
 INFO  purge requested for liqo                                                                                                                                                                                                                                                          
 ERRO  Error installing or upgrading Liqo: release liqo failed, and has been uninstalled due to atomic being set: timed out waiting for the condition                                                                                                                                    
 INFO  Likely causes for the installation/upgrade timeout could include:
 INFO  * One or more pods failed to start (e.g., they are in the ImagePullBackOff status)
 INFO  * A service of type LoadBalancer has been configured, but no provider is available
 INFO  You can add the --verbose flag for debug information concerning the failing resources
 INFO  Additionally, if necessary, you can increase the timeout value with the --timeout flag
                                                                           

At this point it was not clear what to do. It looked like something related to liqo-webhook was failing, and the hint said "One or more pods failed to start (e.g., they are in the ImagePullBackOff status)".
Running kubectl get pods -A then showed that the image for liqo-webhook-certificate-patch-pre-dxqj9 could not be pulled:

root@liqo:~/liqo/examples/quick-start# kubectl get pods -A
NAMESPACE            NAME                                         READY   STATUS             RESTARTS   AGE
kube-system          coredns-76f75df574-prq94                     1/1     Running            0          63m
kube-system          coredns-76f75df574-xljfr                     1/1     Running            0          63m
kube-system          etcd-rome-control-plane                      1/1     Running            0          63m
kube-system          kindnet-5g85h                                1/1     Running            0          63m
kube-system          kindnet-74l4j                                1/1     Running            0          63m
kube-system          kube-apiserver-rome-control-plane            1/1     Running            0          63m
kube-system          kube-controller-manager-rome-control-plane   1/1     Running            0          63m
kube-system          kube-proxy-86v24                             1/1     Running            0          63m
kube-system          kube-proxy-9c9df                             1/1     Running            0          63m
kube-system          kube-scheduler-rome-control-plane            1/1     Running            0          63m
liqo                 liqo-webhook-certificate-patch-pre-dxqj9     0/1     ImagePullBackOff   0          25m
local-path-storage   local-path-provisioner-7577fdbbfb-7vxcq      1/1     Running            0          63m

root@liqo:~/liqo/examples/quick-start# kubectl logs -f -n liqo liqo-webhook-certificate-patch-pre-dxqj9 
Error from server (BadRequest): container "create" in pod "liqo-webhook-certificate-patch-pre-dxqj9" is waiting to start: image can't be pulled
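
A quick way to see exactly which image the Job is trying to pull (as an alternative to the full describe below) is to read it straight from the Job spec:

kubectl get job liqo-webhook-certificate-patch-pre -n liqo \
  -o jsonpath='{.spec.template.spec.containers[0].image}'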

Then I searched the official repository for webhook-certificate and found the corresponding Job.
Run kubectl describe job liqo-webhook-certificate-patch --namespace=liqo:

root@liqo:~/liqo/examples/quick-start# kubectl describe job liqo-webhook-certificate-patch --namespace=liqo
Name:                        liqo-webhook-certificate-patch-pre
Namespace:                   liqo
Selector:                    batch.kubernetes.io/controller-uid=54b6ee31-c866-487f-a874-4c71ec3a872c
Labels:                      app.kubernetes.io/component=webhook-certificate-patch
                             app.kubernetes.io/instance=liqo-webhook-certificate-patch-pre
                             app.kubernetes.io/managed-by=Helm
                             app.kubernetes.io/name=webhook-certificate-patch-pre
                             app.kubernetes.io/part-of=liqo
                             app.kubernetes.io/version=v0.10.2
                             helm.sh/chart=liqo-v0.10.2
Annotations:                 helm.sh/hook: pre-install,pre-upgrade
                             helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
Parallelism:                 1
Completions:                 1
Completion Mode:             NonIndexed
Suspend:                     false
Backoff Limit:               6
TTL Seconds After Finished:  150
Start Time:                  Fri, 19 Apr 2024 07:04:18 +0000
Pods Statuses:               1 Active (0 Ready) / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           app.kubernetes.io/component=webhook-certificate-patch
                    app.kubernetes.io/instance=liqo-webhook-certificate-patch-pre
                    app.kubernetes.io/managed-by=Helm
                    app.kubernetes.io/name=webhook-certificate-patch-pre
                    app.kubernetes.io/part-of=liqo
                    app.kubernetes.io/version=v0.10.2
                    batch.kubernetes.io/controller-uid=54b6ee31-c866-487f-a874-4c71ec3a872c
                    batch.kubernetes.io/job-name=liqo-webhook-certificate-patch-pre
                    controller-uid=54b6ee31-c866-487f-a874-4c71ec3a872c
                    helm.sh/chart=liqo-v0.10.2
                    job-name=liqo-webhook-certificate-patch-pre
  Service Account:  liqo-webhook-certificate-patch
  Containers:
   create:
    Image:      k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
    Port:       <none>
    Host Port:  <none>
    Args:
      create
      --host=liqo-controller-manager,liqo-controller-manager.liqo,liqo-controller-manager.liqo.svc,liqo-controller-manager.liqo.svc.cluster.local
      --namespace=liqo
      --secret-name=liqo-webhook-certs
      --cert-name=tls.crt
      --key-name=tls.key
    Environment:   <none>
    Mounts:        <none>
  Volumes:         <none>
  Node-Selectors:  <none>
  Tolerations:     <none>
Events:
  Type    Reason            Age   From            Message
  ----    ------            ----  ----            -------
  Normal  SuccessfulCreate  29m   job-controller  Created pod: liqo-webhook-certificate-patch-pre-dxqj9

The image is k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1.
Anyone who has installed Kubernetes knows this problem: k8s.gcr.io is not reachable from here.
I was stuck at this point for a long time; kubeadm lets you specify --image-repository, but liqoctl does not support that flag:

liqoctl install kind --cluster-name rome --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers  --verbose 
Error: unknown flag: --image-repository
root@liqo:~/liqo/examples/quick-start# kubectl get pods -A

Then I pulled kube-webhook-certgen from the Aliyun registry, tagged it as the k8s.gcr.io image, and deleted the pod so it would be recreated, but that did not work either:

root@liqo:~/liqo/examples/quick-start# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1
v1.1.1: Pulling from google_containers/kube-webhook-certgen
ec52731e9273: Pull complete 
b90aa28117d4: Pull complete 
Digest: sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1
root@liqo:~/liqo/examples/quick-start#  docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1 k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
root@liqo:~/liqo/examples/quick-start# docker images
REPOSITORY                                                                 TAG       IMAGE ID       CREATED         SIZE
kindest/node                                                               v1.29.2   09c50567d34e   2 months ago    956MB
kindest/node                                                               v1.25.0   d3da246e125a   19 months ago   870MB
k8s.gcr.io/ingress-nginx/kube-webhook-certgen                              v1.1.1    c41e9fcadf5a   2 years ago     47.7MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen   v1.1.1    c41e9fcadf5a   2 years ago     47.7MB
root@liqo:~/liqo/examples/quick-start# kubectl delete pod liqo-webhook-certificate-patch-pre-fj72j -n liqo
pod "liqo-webhook-certificate-patch-pre-fj72j" deleted
root@liqo:~/liqo/examples/quick-start# kubectl get pods -A
NAMESPACE            NAME                                         READY   STATUS              RESTARTS   AGE
kube-system          coredns-76f75df574-prq94                     1/1     Running             0          159m
kube-system          coredns-76f75df574-xljfr                     1/1     Running             0          159m
kube-system          etcd-rome-control-plane                      1/1     Running             0          159m
kube-system          kindnet-5g85h                                1/1     Running             0          159m
kube-system          kindnet-74l4j                                1/1     Running             0          159m
kube-system          kube-apiserver-rome-control-plane            1/1     Running             0          159m
kube-system          kube-controller-manager-rome-control-plane   1/1     Running             0          159m
kube-system          kube-proxy-86v24                             1/1     Running             0          159m
kube-system          kube-proxy-9c9df                             1/1     Running             0          159m
kube-system          kube-scheduler-rome-control-plane            1/1     Running             0          159m
liqo                 liqo-webhook-certificate-patch-pre-jk979     0/1     ContainerCreating   0          3s
local-path-storage   local-path-provisioner-7577fdbbfb-7vxcq      1/1     Running             0          159m

In the end the pod still failed to start with ImagePullBackOff. I did not really understand why: even after tagging, the cluster did not use the image I had pulled and still tried to pull it from the registry. When I install Kubernetes I simply specify --image-repository, so I never had to pull and re-tag images; the tutorials online that suggest this are probably outdated, since Kubernetes now uses containerd directly (for the installation itself, see the separate detailed guide on installing Kubernetes on Ubuntu).
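
For what it's worth, the most likely explanation is that each kind node is itself a container with its own containerd image store: images pulled into the host Docker daemon are not visible inside the cluster, so re-tagging them on the host has no effect. With kind, a locally pulled image has to be loaded into the node containers explicitly, for example (not re-tested in this setup; cluster names taken from setup.sh):

# Copy the re-tagged image from the host Docker daemon into the kind nodes
kind load docker-image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1 --name rome
kind load docker-image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1 --name milan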

Next I searched for how to change a pod's image and tried all sorts of things without success; the pod is owned by a Job, and Kubernetes has so many concepts that this one was still unclear to me.
Stuck again, I went back to the official docs and found that the webhook-certificate image can be specified at install time. First, uninstall Liqo:

liqoctl uninstall --purge --kubeconfig="$KUBECONFIG"
liqoctl uninstall --purge --kubeconfig="$KUBECONFIG_MILAN"
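
Before reinstalling, you can confirm that the cleanup has actually finished by checking that the liqo namespace is gone on both clusters (both commands should eventually report NotFound):

kubectl get namespace liqo
kubectl get namespace liqo --kubeconfig "$KUBECONFIG_MILAN"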

Once the uninstall has completed, run the installation again:

liqoctl install kind --cluster-name rome --verbose --set webhook.patch.image=registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1

This time it succeeds:

 INFO  Installer initialized                                                                                                                                                                                                                                                             
 INFO  Cluster configuration correctly retrieved                                                                                                                                                                                                                                         
 INFO  Installation parameters correctly generated                                                                                                                                                                                                                                       
 INFO  All Set! You can now proceed establishing a peering (liqoctl peer --help for more information)                                                                                                                                                                                    
 INFO  Make sure to use the same version of Liqo on all remote clusters

Install Liqo on the milan cluster as well:

root@liqo:~/liqo/examples/quick-start# liqoctl install kind --cluster-name milan --kubeconfig "$KUBECONFIG_MILAN" --set webhook.patch.image=registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1 --verbose

The Liqo pods are up and running:

root@liqo:~/liqo/examples/quick-start# kubectl get pods -n liqo
NAME                                      READY   STATUS    RESTARTS   AGE
liqo-auth-6bf849d75-ll25k                 1/1     Running   0          53m
liqo-controller-manager-7c586975f-d2k6c   1/1     Running   0          53m
liqo-crd-replicator-679cfc85cd-g85nb      1/1     Running   0          53m
liqo-gateway-759f8b7d4d-lsblm             1/1     Running   0          53m
liqo-metric-agent-77d765d9df-v4tp2        1/1     Running   0          53m
liqo-network-manager-dd886d8bc-9st79      1/1     Running   0          53m
liqo-proxy-6d49f7789b-klcp4               1/1     Running   0          53m
liqo-route-6nglb                          1/1     Running   0          53m
liqo-route-7z4vc                          1/1     Running   0          53m
root@liqo:~/liqo/examples/quick-start# 

In addition, you can check the installation status and the main Liqo configuration parameters with the following command:

root@liqo:~/liqo/examples/quick-start# liqoctl status
┌─ Namespace existence check ──────────────────────────────────────────────────────┐
|  INFO  ? liqo control plane namespace liqo exists                                |
└──────────────────────────────────────────────────────────────────────────────────┘

┌─ Control plane check ────────────────────────────────────────────────────────────┐
|  Deployment                                                                      |
|      liqo-controller-manager: Desired: 1, Ready: 1/1, Available: 1/1             |
|      liqo-crd-replicator:     Desired: 1, Ready: 1/1, Available: 1/1             |
|      liqo-metric-agent:       Desired: 1, Ready: 1/1, Available: 1/1             |
|      liqo-auth:               Desired: 1, Ready: 1/1, Available: 1/1             |
|      liqo-proxy:              Desired: 1, Ready: 1/1, Available: 1/1             |
|      liqo-network-manager:    Desired: 1, Ready: 1/1, Available: 1/1             |
|      liqo-gateway:            Desired: 1, Ready: 1/1, Available: 1/1             |
|  DaemonSet                                                                       |
|      liqo-route:              Desired: 2, Ready: 2/2, Available: 2/2             |
└──────────────────────────────────────────────────────────────────────────────────┘

┌─ Local cluster information ──────────────────────────────────────────────────────┐
|  Cluster identity                                                                |
|      Cluster ID:   06b2ab0f-5dd0-42cb-aaca-73f92741b740                          |
|      Cluster name: rome                                                          |
|      Cluster labels                                                              |
|          liqo.io/provider: kind                                                  |
|  Configuration                                                                   |
|      Version: v0.10.2                                                            |
|  Network                                                                         |
|      Pod CIDR:      10.200.0.0/16                                                |
|      Service CIDR:  10.80.0.0/12                                                 |
|      External CIDR: 10.201.0.0/16                                                |
|  Endpoints                                                                       |
|      Network gateway:       udp://172.18.0.2:30620                               |
|      Authentication:        https://172.18.0.3:32395                             |
|      Kubernetes API server: https://172.18.0.3:6443                              |
└──────────────────────────────────────────────────────────────────────────────────┘

Peer the two clusters

This means establishing the interconnection between the two clusters. In this example, since the two API servers are mutually reachable (i.e., the two Kubernetes clusters can access each other), you will use the out-of-band peering approach.

First, get the peering command from the milan cluster:

root@liqo:~/liqo/examples/quick-start# liqoctl generate peer-command --kubeconfig "$KUBECONFIG_MILAN"
 INFO  Peering information correctly retrieved                                                                                                                                                                                                                                           

Execute this command on a *different* cluster to enable an outgoing peering with the current cluster:

liqoctl peer out-of-band milan --auth-url https://172.18.0.4:31720 --cluster-id 0422b752-25e5-42d0-acbf-1d584b09d1a6 --auth-token dfd35fcb10d65c142738261c17d94724bd3bf2dd54e14ac344e22a0cee27b58a084f452ead3df1857e2bb9dd35d0d6ba5b03b7e507f220eff4c71785a42e7cae
root@liqo:~/liqo/examples/quick-start# 

rome: the local cluster, where you will deploy and control the applications.
milan: the remote cluster, to which part of the workloads will be offloaded.
So does this mean that pod workloads created on the rome cluster may end up running on milan?

Then copy and paste the command into the rome cluster:

root@liqo:~/liqo/examples/quick-start# liqoctl peer out-of-band milan --auth-url https://172.18.0.4:31720 --cluster-id 0422b752-25e5-42d0-acbf-1d584b09d1a6 --auth-token dfd35fcb10d65c142738261c17d94724bd3bf2dd54e14ac344e22a0cee27b58a084f452ead3df1857e2bb9dd35d0d6ba5b03b7e507f220eff4c71785a42e7cae
 INFO  Peering enabled                                                                                                                                                                                                                                                                   
 INFO  Authenticated to cluster "milan"                                                                                                                                                                                                                                                  
 INFO  Outgoing peering activated to the remote cluster "milan"                                                                                                                                                                                                                          
 INFO  Network established to the remote cluster "milan"                                                                                                                                                                                                                                 
 INFO  Node created for remote cluster "milan"                                                                                                                                                                                                                                           
 INFO  Peering successfully established
root@liqo:~/liqo/examples/quick-start# 

Now the Liqo control plane of the rome cluster will contact the provided authentication endpoint, presenting the token to the milan cluster to obtain its Kubernetes identity.

You can check the peering status by running:

root@liqo:~/liqo/examples/quick-start# kubectl get foreignclusters
NAME    TYPE        OUTGOING PEERING   INCOMING PEERING   NETWORKING    AUTHENTICATION   AGE
milan   OutOfBand   Established        None               Established   Established      85s

The output indicates that the cross-cluster network tunnel has been established and that the outgoing peering is currently active (i.e., the rome cluster can offload workloads to the milan cluster, but not vice versa).
At the same time, in addition to the physical nodes, you should also see a virtual node (liqo-milan):

root@liqo:~/liqo/examples/quick-start# kubectl get nodes
NAME                 STATUS   ROLES           AGE     VERSION
liqo-milan           Ready    agent           2m46s   v1.29.2
rome-control-plane   Ready    control-plane   71m     v1.29.2
rome-worker          Ready    <none>          71m     v1.29.2
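
If you are curious, the virtual node can be inspected like any other node; its labels and taints show that it is provided by Liqo rather than by a physical machine (output omitted here):

kubectl get node liqo-milan --show-labels
kubectl describe node liqo-milan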

In addition, you can check the peering status and retrieve more advanced information with:

liqoctl status peer milan

Deploy a service

Now you can deploy a standard Kubernetes application in the multi-cluster environment exactly as you would in a single-cluster scenario (i.e., no modifications are required).
The actual deployment is covered in the next article.
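
As a preview of what the official quick-start does next, the idea is to offload a namespace so that its pods can also be scheduled onto the virtual node. A minimal sketch, with the namespace name and manifest path taken from the quick-start example and not re-run here:

# Create a namespace and let Liqo offload it to the peered cluster
kubectl create namespace liqo-demo
liqoctl offload namespace liqo-demo

# Deploy a sample application; its pods may land on the rome nodes or on liqo-milan
kubectl apply -f ./manifests/hello-world.yaml -n liqo-demo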

This concludes the walkthrough of learning and installing Liqo for Kubernetes multi-cluster interconnection.
