
Running a Java Monolithic Service on k8s: Jenkins

This article walks through a practical case of running a Java monolithic service, Jenkins, on Kubernetes. Corrections and suggestions are welcome.

1. Jenkins Architecture

(Figure: Jenkins-on-k8s architecture diagram)
Jenkins is started with the java command, running a war or jar package; this example uses the jenkins.war deployment, with the requirement that Jenkins data be kept on external storage (NFS or a PVC). For other Java applications, whether data needs to be persisted to external storage depends on actual requirements.

As the architecture diagram shows, Jenkins connects to external storage through a PV/PVC on k8s and exposes itself through a Service (svc). Inside the cluster, Jenkins is reached by accessing the svc directly; clients outside the cluster reach Jenkins through an external load balancer.
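The in-cluster access path can be sketched with a throwaway curl pod (the Service name, namespace, and port below come from the manifests later in this article; the curlimages/curl image and pod name are arbitrary choices for illustration):

```shell
# One-off pod that curls the Jenkins Service by its cluster DNS name
# (<service>.<namespace>.svc.cluster.local:<port>).
kubectl -n magedu run curl-test --rm -it --restart=Never \
    --image=curlimages/curl --command -- \
    curl -sI http://magedu-jenkins-service.magedu.svc.cluster.local:80
```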

2. Preparing the Image

2.1 Jenkins image directory layout

root@k8s-master01:~/k8s-data/dockerfile/web/magedu/jenkins# tree
.
├── Dockerfile
├── build-command.sh
├── jenkins-2.319.2.war
└── run_jenkins.sh

0 directories, 4 files
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/jenkins# 

2.2 Building the Jenkins image

2.2.1 Dockerfile for the Jenkins image

root@k8s-master01:~/k8s-data/dockerfile/web/magedu/jenkins# cat Dockerfile
#Jenkins Version 2.319.2
FROM harbor.ik8s.cc/pub-images/jdk-base:v8.212

ADD jenkins-2.319.2.war /apps/jenkins/jenkins.war
ADD run_jenkins.sh /usr/bin/

EXPOSE 8080 

CMD ["/usr/bin/run_jenkins.sh"]
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/jenkins# 

The Dockerfile above builds on a jdk-base image that already provides the Java environment; on top of it, the jenkins war package and the startup script are added, port 8080 is exposed, and finally the CMD that launches Jenkins is set.

2.2.2 Jenkins startup script

root@k8s-master01:~/k8s-data/dockerfile/web/magedu/jenkins# cat run_jenkins.sh 
#!/bin/bash
cd /apps/jenkins && java -server -Xms1024m -Xmx1024m -Xss512k -jar jenkins.war --webroot=/apps/jenkins/jenkins-data --httpPort=8080
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/jenkins# 
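A common variation on this script, sketched below: read the JVM flags from an environment variable so heap sizes can be tuned per environment from the pod spec. JENKINS_JAVA_OPTS is our own hypothetical variable, not something the original image defines; the defaults match the original script, and the file-existence guard only makes the sketch safe to dry-run outside the image.

```shell
#!/bin/bash
# Same launch as run_jenkins.sh, but the JVM flags can be overridden via
# JENKINS_JAVA_OPTS (hypothetical variable; defaults match the original).
JAVA_OPTS="${JENKINS_JAVA_OPTS:--server -Xms1024m -Xmx1024m -Xss512k}"
if [ -f /apps/jenkins/jenkins.war ]; then
    # exec replaces the shell so Jenkins runs as PID 1 and receives signals.
    cd /apps/jenkins && exec java ${JAVA_OPTS} -jar jenkins.war \
        --webroot=/apps/jenkins/jenkins-data --httpPort=8080
else
    echo "jenkins.war not found; would run: java ${JAVA_OPTS} -jar jenkins.war"
fi
```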

2.2.3 Image build script

root@k8s-master01:~/k8s-data/dockerfile/web/magedu/jenkins# cat build-command.sh 
#!/bin/bash
#docker build -t  harbor.ik8s.cc/magedu/jenkins:v2.319.2 .
#echo "Image build complete, pushing to the Harbor server"
#sleep 1
#docker push harbor.ik8s.cc/magedu/jenkins:v2.319.2
#echo "Image push complete"

echo "Starting image build, please wait!" && echo 3 && sleep 1 && echo 2 && sleep 1 && echo 1
nerdctl build -t  harbor.ik8s.cc/magedu/jenkins:v2.319.2 .
if [ $? -eq 0 ];then
  echo "Starting image push, please wait!" && echo 3 && sleep 1 && echo 2 && sleep 1 && echo 1
  nerdctl push harbor.ik8s.cc/magedu/jenkins:v2.319.2 
  if [ $? -eq 0 ];then
    echo "Image pushed successfully!"
  else
    echo "Image push failed"
  fi
else
  echo "Image build failed; check the build output!"
fi
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/jenkins# 

Run the script to build and push the image:
(Screenshot: image build and push output)

2.3 Verifying the Jenkins image

2.3.1 Check in Harbor that the jenkins image was uploaded

(Screenshot: jenkins repository in Harbor)

2.3.2 Test that the Jenkins image runs correctly

(Screenshot: test run of the jenkins image)
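A local smoke test along these lines can be used here (assuming nerdctl on the build host and the image tag from the build script; the container name and host port are arbitrary):

```shell
nerdctl run -d --name jenkins-smoke -p 8080:8080 \
    harbor.ik8s.cc/magedu/jenkins:v2.319.2
# Give Jenkins a moment to start, then check that the web port answers:
curl -I http://127.0.0.1:8080/
nerdctl rm -f jenkins-smoke
```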

2.3.3 Verify Jenkins is accessible via the web

(Screenshot: Jenkins setup page in the browser)

Seeing the page above confirms that the Jenkins image was built correctly.

3. Preparing the PV/PVC

3.1 Create the Jenkins data directories on the NFS server

root@harbor:~# mkdir -p /data/k8sdata/magedu/{jenkins-data,jenkins-root-data}
root@harbor:~# ll /data/k8sdata/magedu/{jenkins-data,jenkins-root-data}
/data/k8sdata/magedu/jenkins-data:
total 8
drwxr-xr-x  2 root root 4096 Aug  6 03:35 ./
drwxr-xr-x 21 root root 4096 Aug  6 03:35 ../

/data/k8sdata/magedu/jenkins-root-data:
total 8
drwxr-xr-x  2 root root 4096 Aug  6 03:35 ./
drwxr-xr-x 21 root root 4096 Aug  6 03:35 ../
root@harbor:~# tail  /etc/exports

/data/k8sdata/magedu/mysql-datadir-1 *(rw,no_root_squash)
/data/k8sdata/magedu/mysql-datadir-2 *(rw,no_root_squash)
/data/k8sdata/magedu/mysql-datadir-3 *(rw,no_root_squash)
/data/k8sdata/magedu/mysql-datadir-4 *(rw,no_root_squash)
/data/k8sdata/magedu/mysql-datadir-5 *(rw,no_root_squash)


/data/k8sdata/magedu/jenkins-data *(rw,no_root_squash)
/data/k8sdata/magedu/jenkins-root-data *(rw,no_root_squash)
root@harbor:~# exportfs -av
exportfs: /etc/exports [1]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/kuboard".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x

...(the same three-line warning repeats for each of the remaining exports, including the two new jenkins directories)...

exporting *:/data/k8sdata/magedu/jenkins-root-data
exporting *:/data/k8sdata/magedu/jenkins-data
exporting *:/data/k8sdata/magedu/mysql-datadir-5
exporting *:/data/k8sdata/magedu/mysql-datadir-4
exporting *:/data/k8sdata/magedu/mysql-datadir-3
exporting *:/data/k8sdata/magedu/mysql-datadir-2
exporting *:/data/k8sdata/magedu/mysql-datadir-1
exporting *:/data/k8sdata/magedu/redis5
exporting *:/data/k8sdata/magedu/redis4
exporting *:/data/k8sdata/magedu/redis3
exporting *:/data/k8sdata/magedu/redis2
exporting *:/data/k8sdata/magedu/redis1
exporting *:/data/k8sdata/magedu/redis0
exporting *:/data/k8sdata/magedu/redis-datadir-1
exporting *:/data/k8sdata/magedu/zookeeper-datadir-3
exporting *:/data/k8sdata/magedu/zookeeper-datadir-2
exporting *:/data/k8sdata/magedu/zookeeper-datadir-1
exporting *:/data/k8sdata/magedu/static
exporting *:/data/k8sdata/magedu/images
exporting *:/data/k8sdata/mysite
exporting *:/data/k8sdata/myserver
exporting *:/pod-vol
exporting *:/data/volumes
exporting *:/data/k8sdata/kuboard
root@harbor:~# 
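Before wiring these exports into PVs, it may be worth confirming from a k8s node that they are visible (192.168.0.42 is the NFS server address used in the PV manifests below; /mnt is an arbitrary mount point and the mount test needs root):

```shell
showmount -e 192.168.0.42 | grep jenkins
# Optional mount test from a worker node:
mount -t nfs 192.168.0.42:/data/k8sdata/magedu/jenkins-data /mnt && umount /mnt
```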

3.2 Create the PVs on k8s

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-datadir-pv
  namespace: magedu
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/jenkins-data 

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-root-datadir-pv
  namespace: magedu
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/jenkins-root-data
root@k8s-master01:~/k8s-data/yaml/magedu/jenkins/pv# kubectl apply  -f jenkins-persistentvolume.yaml 
persistentvolume/jenkins-datadir-pv created
persistentvolume/jenkins-root-datadir-pv created
root@k8s-master01:~/k8s-data/yaml/magedu/jenkins/pv# 

3.3 Verify the PVs

(Screenshot: kubectl get pv output)
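This verification amounts to (PersistentVolumes are cluster-scoped, so no namespace flag is needed):

```shell
kubectl get pv | grep jenkins
```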

3.4 Create the PVCs on k8s

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-datadir-pvc
  namespace: magedu
spec:
  volumeName: jenkins-datadir-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 80Gi

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-root-data-pvc
  namespace: magedu
spec:
  volumeName: jenkins-root-datadir-pv 
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 80Gi
root@k8s-master01:~/k8s-data/yaml/magedu/jenkins/pv# kubectl apply -f jenkins-persistentvolumeclaim.yaml
persistentvolumeclaim/jenkins-datadir-pvc created
persistentvolumeclaim/jenkins-root-data-pvc created
root@k8s-master01:~/k8s-data/yaml/magedu/jenkins/pv# 

3.5 Verify the PVCs

(Screenshot: kubectl get pvc output)
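Likewise for the claims; both PVCs should show STATUS Bound against the PVs created above:

```shell
kubectl get pvc -n magedu
```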

4、準(zhǔn)備在k8s上運(yùn)行jenkins的yaml文件

kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: magedu-jenkins
  name: magedu-jenkins-deployment
  namespace: magedu
spec:
  replicas: 1
  selector:
    matchLabels:
      app: magedu-jenkins
  template:
    metadata:
      labels:
        app: magedu-jenkins
    spec:
      containers:
      - name: magedu-jenkins-container
        image: harbor.ik8s.cc/magedu/jenkins:v2.319.2 
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        volumeMounts:
        - mountPath: "/apps/jenkins/jenkins-data/"
          name: jenkins-datadir-magedu
        - mountPath: "/root/.jenkins"
          name: jenkins-root-datadir
      volumes:
        - name: jenkins-datadir-magedu
          persistentVolumeClaim:
            claimName: jenkins-datadir-pvc
        - name: jenkins-root-datadir
          persistentVolumeClaim:
            claimName: jenkins-root-data-pvc

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: magedu-jenkins
  name: magedu-jenkins-service
  namespace: magedu
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 38080
  selector:
    app: magedu-jenkins
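One thing to note about the Service above: nodePort 38080 lies outside the Kubernetes default NodePort range of 30000-32767, so this manifest only applies cleanly on a cluster whose kube-apiserver was started with an extended range, for example (a cluster-setup assumption, not shown in this article):

```shell
# kube-apiserver flag extending the allowed NodePort range
--service-node-port-range=30000-65535
```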

5. Apply the manifests to run Jenkins

root@k8s-master01:~/k8s-data/yaml/magedu/jenkins# kubectl apply -f jenkins.yaml
deployment.apps/magedu-jenkins-deployment created
service/magedu-jenkins-service created
root@k8s-master01:~/k8s-data/yaml/magedu/jenkins# 

6. Verification

6.1 Verify the Jenkins pod is running

(Screenshot: Jenkins pod status)

6.2 Verify Jenkins is accessible via the web

(Screenshot: Jenkins unlock page)
Retrieve the initial Jenkins admin password:

root@k8s-master01:~/k8s-data/yaml/magedu/jenkins# kubectl get pods -n magedu 
NAME                                             READY   STATUS      RESTARTS        AGE
magedu-jenkins-deployment-5f6899db-zn4xg         1/1     Running     0               11m
magedu-nginx-deployment-5589bbf4bc-6gd2w         1/1     Running     12 (107m ago)   62d
magedu-tomcat-app1-deployment-7754c8549c-c7rtb   1/1     Running     6 (108m ago)    62d
magedu-tomcat-app1-deployment-7754c8549c-prglk   1/1     Running     6 (108m ago)    62d
mysql-0                                          2/2     Running     4 (108m ago)    51d
mysql-1                                          2/2     Running     4 (108m ago)    51d
mysql-2                                          2/2     Running     4 (108m ago)    51d
redis-0                                          1/1     Running     4 (108m ago)    60d
redis-1                                          1/1     Running     4 (108m ago)    60d
redis-2                                          1/1     Running     4 (108m ago)    60d
redis-3                                          1/1     Running     4 (108m ago)    60d
redis-4                                          1/1     Running     4 (108m ago)    60d
redis-5                                          1/1     Running     4 (108m ago)    60d
ubuntu1804                                       0/1     Completed   0               60d
zookeeper1-675c5477cb-vmwwq                      1/1     Running     6 (108m ago)    62d
zookeeper2-759fb6c6f-7jktr                       1/1     Running     6 (108m ago)    62d
zookeeper3-5c78bb5974-vxpbh                      1/1     Running     6 (108m ago)    62d
root@k8s-master01:~/k8s-data/yaml/magedu/jenkins# kubectl exec -it magedu-jenkins-deployment-5f6899db-zn4xg -n magedu  cat /root/.jenkins/secrets/initialAdminPassword
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
8c4a17a8ecfe4fb88ed8701cb18340df
root@k8s-master01:~/k8s-data/yaml/magedu/jenkins# 
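As the warning in the output notes, the argument form of kubectl exec is deprecated; the current syntax separates the in-container command with --:

```shell
kubectl -n magedu exec -it magedu-jenkins-deployment-5f6899db-zn4xg -- \
    cat /root/.jenkins/secrets/initialAdminPassword
```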

(Screenshots: unlocking Jenkins and the Jenkins dashboard)

Jenkins is now reachable from the web page, so the service is running properly on k8s. Next, it can be published to clients outside the cluster through the external load balancer.

7. Publishing Jenkins on the external load balancer

ha01

root@k8s-ha01:~# cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
  
global_defs {
   notification_email {
     acassen
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}
  
vrrp_instance VI_1 {
    state MASTER
    interface ens160
    garp_master_delay 10
    smtp_alert
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.0.111 dev ens160 label ens160:0
        192.168.0.112 dev ens160 label ens160:1
    }
}
root@k8s-ha01:~# cat /etc/haproxy/haproxy.cfg 
global
        log /dev/log    local0
        log /dev/log    local1 notice
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
        stats timeout 30s
        user haproxy
        group haproxy
        daemon

        # Default SSL material locations
        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private

        # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
        ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
        ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
        ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000
        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http

listen k8s_apiserver_6443
bind 192.168.0.111:6443
mode tcp
#balance leastconn
server k8s-master01 192.168.0.31:6443 check inter 2000 fall 3 rise 5
server k8s-master02 192.168.0.32:6443 check inter 2000 fall 3 rise 5
server k8s-master03 192.168.0.33:6443 check inter 2000 fall 3 rise 5

listen jenkins_80
bind 192.168.0.112:80
mode tcp
server k8s-node01 192.168.0.34:38080 check inter 2000 fall 3 rise 5
server k8s-node02 192.168.0.35:38080 check inter 2000 fall 3 rise 5
server k8s-node03 192.168.0.36:38080 check inter 2000 fall 3 rise 5
root@k8s-ha01:~# systemctl restart keepalived haproxy
root@k8s-ha01:~# ss -tnl
State            Recv-Q            Send-Q                       Local Address:Port                       Peer Address:Port           Process           
LISTEN           0                 4096                         192.168.0.111:6443                            0.0.0.0:*                                
LISTEN           0                 4096                         192.168.0.112:80                              0.0.0.0:*                                
LISTEN           0                 4096                         127.0.0.53%lo:53                              0.0.0.0:*                                
LISTEN           0                 128                                0.0.0.0:22                              0.0.0.0:*                                
root@k8s-ha01:~# 

ha02

root@k8s-ha02:~# cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
  
global_defs {
   notification_email {
     acassen
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}
  
vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    garp_master_delay 10
    smtp_alert
    virtual_router_id 51
    priority 70
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.0.111 dev ens160 label ens160:0
        192.168.0.112 dev ens160 label ens160:1
    }
}
root@k8s-ha02:~# cat /etc/haproxy/haproxy.cfg 
global
        log /dev/log    local0
        log /dev/log    local1 notice
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
        stats timeout 30s
        user haproxy
        group haproxy
        daemon

        # Default SSL material locations
        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private

        # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
        ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
        ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
        ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000
        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http

listen k8s_apiserver_6443
bind 192.168.0.111:6443
mode tcp
#balance leastconn
server k8s-master01 192.168.0.31:6443 check inter 2000 fall 3 rise 5
server k8s-master02 192.168.0.32:6443 check inter 2000 fall 3 rise 5
server k8s-master03 192.168.0.33:6443 check inter 2000 fall 3 rise 5


listen jenkins_80
bind 192.168.0.112:80
mode tcp 
server k8s-node01 192.168.0.34:38080 check inter 2000 fall 3 rise 5
server k8s-node02 192.168.0.35:38080 check inter 2000 fall 3 rise 5
server k8s-node03 192.168.0.36:38080 check inter 2000 fall 3 rise 5
root@k8s-ha02:~# systemctl restart keepalived haproxy 
root@k8s-ha02:~# 
root@k8s-ha02:~# ss -tnl
State            Recv-Q            Send-Q                       Local Address:Port                       Peer Address:Port           Process           
LISTEN           0                 4096                         192.168.0.111:6443                            0.0.0.0:*                                
LISTEN           0                 4096                         192.168.0.112:80                              0.0.0.0:*                                
LISTEN           0                 4096                         127.0.0.53%lo:53                              0.0.0.0:*                                
LISTEN           0                 128                                0.0.0.0:22                              0.0.0.0:*                                
root@k8s-ha02:~# 

7.1 Access the load balancer VIP and confirm Jenkins is reachable

(Screenshot: Jenkins reached via the VIP)
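From any machine that can reach the VIP (192.168.0.112 per the haproxy config), the check is simply (any HTTP response from Jenkins, e.g. a 403 or a redirect to the login page, confirms the proxy path works):

```shell
curl -I http://192.168.0.112/
```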

Jenkins can be reached through the load balancer's VIP, which confirms that the load balancer is successfully reverse-proxying the Jenkins service.
