Keepalived High Availability Clusters
Introduction to high availability clusters
What is a high availability cluster?
A high availability cluster (High Availability Cluster, HA Cluster for short) is a server clustering technology aimed at minimizing service downtime. By protecting the uninterrupted service that user applications provide to the outside world, it reduces the business impact of failures caused by software, hardware, or human error to a minimum.
Automatic switchover / failover (FailOver)
In the automatic switchover phase, once a host confirms that its peer has failed, the healthy host not only continues its original tasks but also takes over the pre-configured standby jobs according to the fault-tolerance mode in use, and runs the follow-up processes and services.
Put simply: when A can no longer serve its clients, the system switches over automatically so that B steps in and keeps serving them, and the clients never notice that the party serving them has changed.
After a node has been judged faulty as above, the cluster resources (such as the VIP, httpd, etc.) are moved from the node that no longer holds the quorum votes to the failover domain (Failover Domain: the nodes eligible to receive the failed-over resources).
Automatic detection / split-brain
In the automatic detection phase, software on each host probes the peer's state over redundant detection links, using a monitoring program plus logical checks.
The usual method: the cluster nodes exchange heartbeat messages and use them to decide whether a node has failed.
Split-brain: in an HA system, when the heartbeat link connecting the two nodes breaks, what used to be a single, coordinated whole splits into two independent parts. Having lost contact, each side assumes the other has failed. The HA software on the two nodes then acts like a split brain: both grab the shared resources and both try to bring up the application services, with serious consequences. Either the shared resources get carved up and the service comes up on neither side, or the service comes up on both sides and both write to the shared storage at the same time, corrupting data (a classic example is a database's online redo logs getting corrupted).
Split-brain countermeasures: 1. add redundant heartbeat links 2. use disk locks 3. set up a quorum/arbitration mechanism 4. monitor and alert on split-brain
Other HA solutions: heartbeat, pacemaker, piranha (web UI)
Keepalived
What is keepalived?
keepalived is a piece of service software that keeps a cluster highly available; it is used to prevent single points of failure.
How keepalived works
keepalived is built on the VRRP protocol; VRRP stands for Virtual Router Redundancy Protocol.
N servers providing the same function are grouped into a server group with one master and one or more backups. The master holds the VIP that serves clients (the other machines on the LAN use this VIP as their default route). The master sends VRRP advertisements by multicast; when the backups stop receiving VRRP packets, they conclude the master is down, and a new master is then elected from the backups according to VRRP priority.
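To see this mechanism in action, you can watch the advertisements on the wire; a quick check (assuming the interface is ens33, as in the examples below):
tcpdump -i ens33 -nn vrrp //the current master should emit one advertisement per advert_int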
keepalived has three main modules,
namely core, check, and vrrp.
The core module is the heart of keepalived; it starts and maintains the main process and loads and parses the global configuration file. check does the health checking, covering the common check types. The vrrp module implements the VRRP protocol.
Hands-on case 1: keepalived + nginx
Preparation: on server1 and server2, stop the firewall and disable SELinux.
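For example (this stops firewalld and switches SELinux off only until the next reboot; edit /etc/selinux/config to make it permanent):
systemctl stop firewalld && systemctl disable firewalld
setenforce 0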
server1:
yum install -y keepalived
cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf-backup //back up the original file
vim /etc/keepalived/keepalived.conf //wipe the existing contents (ggdG in vim), then configure as follows
! Configuration File for keepalived
global_defs {
    router_id 1
}
#vrrp_script chk_nginx {
#    script "/etc/keepalived/ck_ng.sh"
#    interval 2
#    weight -5
#    fall 3
#}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    mcast_src_ip 192.168.70.130
    virtual_router_id 55
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.70.140
    }
    #track_script {
    #    chk_nginx
    #}
}
yum install -y nginx
systemctl enable nginx
systemctl start nginx
vim /usr/share/nginx/html/index.html //edit the page yourself so it can be told apart from server2's nginx
curl -i 192.168.70.130
systemctl start keepalived
systemctl enable keepalived
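Once keepalived is running, the VIP should show up as a secondary address on the master's interface; a quick sanity check:
ip addr show ens33 //192.168.70.140 should be listed here on the master, and absent on the backup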
server2:
yum install -y keepalived
cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf-backup //back up the original file
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id 2
}
#vrrp_script chk_nginx {
#    script "/etc/keepalived/ck_ng.sh"
#    interval 2
#    weight -5
#    fall 3
#}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 192.168.70.132
    virtual_router_id 55
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.70.140
    }
    #track_script {
    #    chk_nginx
    #}
}
yum install -y nginx
systemctl enable nginx
systemctl start nginx
curl -i localhost
systemctl start keepalived
systemctl enable keepalived
Test:
[root@localhost local]# curl -i 192.168.70.140 //this should return server1's nginx page
Now try cutting server1 off the network (in the VMware settings, untick the network connection) and access the VIP again; this time the response should come from server2's nginx.
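To confirm the failover at the VRRP level rather than only through HTTP, check where the VIP now lives:
ip addr show ens33 //run on server2 after cutting server1 off: 192.168.70.140 should have moved here
tail /var/log/messages //server2 should log something like "VRRP_Instance(VI_1) Entering MASTER STATE"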
On keepalived not knowing nginx's state
Restore the previous setup: start keepalived and nginx on both hosts and make sure the page is reachable. Then stop nginx on the master (systemctl stop nginx) and keep accessing the VIP. Will the page switch over to the backup? It will not: keepalived pays no attention to nginx's state, because what keepalived monitors is the interface IP state; it cannot monitor the nginx service itself. The solution:
1. A monitoring script
Add the nginx monitoring script on both server1 and server2:
vim /etc/keepalived/ck_ng.sh
#!/bin/bash
# check whether any nginx process exists
counter=$(ps -C nginx --no-headers | wc -l)
if [ "${counter}" -eq 0 ]; then
    # nginx is gone: try to restart it once
    systemctl restart nginx
    sleep 5
    counter2=$(ps -C nginx --no-headers | wc -l)
    if [ "${counter2}" -eq 0 ]; then
        # the restart failed: stop keepalived so the VIP fails over to the backup
        systemctl stop keepalived
    fi
fi
chmod +x /etc/keepalived/ck_ng.sh
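Before wiring the script into keepalived, it is worth running it by hand once to confirm it behaves:
systemctl stop nginx
/etc/keepalived/ck_ng.sh //the script should bring nginx back up
ps -C nginx //nginx should be running again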
Edit keepalived.conf and remove the comment markers added above; uncomment on both server1 and server2, leaving everything else unchanged.
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id 1
}
vrrp_script chk_nginx {
    script "/etc/keepalived/ck_ng.sh"
    interval 2
    weight -5
    fall 3
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 192.168.70.130
    virtual_router_id 55
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.70.140
    }
    track_script {
        chk_nginx
    }
}
systemctl restart keepalived
Test:
systemctl stop nginx
systemctl status nginx
If the test does not behave as expected:
add debug inside the vrrp_script chk_nginx {} block
tail -f /var/log/messages //watch the log
If something like this shows up: Aug 27 20:59:44 localhost Keepalived_vrrp[51703]: /etc/keepalived/ck_ng.sh exited due to signal 15
it means the timing is too tight: keepalived killed the script (SIGTERM, signal 15) before it finished, because the script can run for over 5 seconds (the sleep 5) while interval is only 2. Try increasing advert_int by 5 seconds and setting interval to 6 seconds (interval must be longer than the script's run time); make the change on both servers!
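On systemd-based systems the same messages can also be followed per unit, which filters out unrelated noise:
journalctl -u keepalived -f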
Hands-on case 2: keepalived + LVS cluster
1. Install keepalived and ipvsadm on the master
yum install keepalived ipvsadm -y
2. Edit the configuration file on the master
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id Director1
}
#Keepalived
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.70.140/24 dev ens33
    }
}
#LVS
virtual_server 192.168.70.140 80 {
    delay_loop 3    # interval in seconds between health-check runs
    lb_algo rr      # round-robin scheduling
    lb_kind DR      # direct-routing mode
    protocol TCP
    real_server 192.168.70.133 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 5
        }
    }
    real_server 192.168.70.134 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
}
3. Install keepalived and ipvsadm on the backup
yum install keepalived ipvsadm -y
4. Edit the configuration file on the backup
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id Director2
}
#Keepalived
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.70.140/24 dev ens33
    }
}
#LVS
virtual_server 192.168.70.140 80 {
    delay_loop 3    # interval in seconds between health-check runs
    lb_algo rr      # round-robin scheduling
    lb_kind DR      # direct-routing mode
    protocol TCP
    real_server 192.168.70.133 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 5
        }
    }
    real_server 192.168.70.134 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
}
5. Start keepalived on both machines
systemctl start keepalived
systemctl enable keepalived
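With keepalived running on the director, the LVS virtual server table it generated can be inspected (this is what ipvsadm was installed for):
ipvsadm -Ln //should list 192.168.70.140:80 with realservers 192.168.70.133 and 192.168.70.134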
6. Install and start httpd on the two realservers
yum install -y httpd
systemctl start httpd
systemctl enable httpd
7. Create the lo:0 file (a loopback alias for the VIP)
vim /etc/sysconfig/network-scripts/ifcfg-lo:0 //configure as follows
DEVICE=lo:0
IPADDR=192.168.70.140
NETMASK=255.255.255.255
ONBOOT=yes
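Creating the file alone does not activate the alias; bring it up (assuming the classic network-scripts setup of CentOS 7):
ifup lo:0
ip addr show lo //192.168.70.140 with a /32 mask should now be listed on lo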
8. Configure a route so the loopback alias is set up on every boot
Whoever accesses 140, let the loopback interface handle it:
vim /etc/rc.local //add the following
/sbin/route add -host 192.168.70.140 dev lo:0
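Note that on CentOS 7, rc.local only runs at boot if it is executable, which it is not by default:
chmod +x /etc/rc.d/rc.local //(/etc/rc.local is a symlink to this file)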
9. Configure the sysctl.conf file
vim /etc/sysctl.conf
net.ipv4.conf.all.arp_ignore = 1        # answer ARP only for addresses configured on the incoming interface
net.ipv4.conf.all.arp_announce = 2      # use the best local source address in ARP announcements, never the VIP on lo
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
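Apply the settings immediately, without a reboot:
sysctl -p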
10. Copy the lo:0 file and sysctl.conf to the other realserver
scp /etc/sysconfig/network-scripts/ifcfg-lo:0 192.168.70.134:/etc/sysconfig/network-scripts/ifcfg-lo:0
scp /etc/sysctl.conf 192.168.70.134:/etc/sysctl.conf
11. Configure the rc.local file on the other machine the same way
vim /etc/rc.local //add
/sbin/route add -host 192.168.70.140 dev lo:0
12. Configure sysctl.conf the same way
vim /etc/sysctl.conf
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
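As on the first realserver, load the settings right away:
sysctl -p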
13. Test
Open 192.168.70.140 in a browser, then shut down the master's network and try again; if the page is still reachable, the experiment succeeded.
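Since lb_algo is rr, repeated requests should alternate between the two realservers. One way to watch this: run curl against the VIP a few times from a client machine (not from the director itself, which holds the VIP locally), and check the counters on the director:
ipvsadm -Ln --stats //the Conns counters of .133 and .134 should grow in turn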
Common LVS + Keepalived interview questions
1. What is a cluster? What types of clusters are there? Name representative products.
2. What load-balancing cluster software is there? How do the options differ?
3. How do LVS-DR and LVS-NAT work?
4. How does keepalived work?
5. What HA cluster products are there, and how do they differ?
6. What load-balancing scheduling policies are there? Can you give examples?