HA (High Availability) Cluster Deployment

High-Availability ZooKeeper Cluster Deployment

ZooKeeper installation and deployment

Note: the JDK is required, but it was already installed in Chapter 4, so ZooKeeper is installed directly here.

# Extract and install ZooKeeper
[root@master ~]# ls
anaconda-ks.cfg
apache-hive-2.0.0-bin.tar.gz
hadoop-2.7.1.tar.gz
jdk-8u152-linux-x64.tar.gz
mysql-community-client-5.7.18-1.el7.x86_64.rpm
mysql-community-common-5.7.18-1.el7.x86_64.rpm
mysql-community-devel-5.7.18-1.el7.x86_64.rpm
mysql-community-libs-5.7.18-1.el7.x86_64.rpm
mysql-community-server-5.7.18-1.el7.x86_64.rpm
mysql-connector-java-5.1.46.jar
sqoop-1.4.7.bin__hadoop-2.6.0.tar.gz
zookeeper-3.4.8.tar.gz
[root@master ~]# tar xf zookeeper-3.4.8.tar.gz -C /usr/local/src/
[root@master ~]# cd /usr/local/src/
[root@master src]# ls
hadoop  hive  jdk  sqoop  zookeeper-3.4.8
[root@master src]# mv zookeeper-3.4.8 zookeeper
[root@master src]# ls
hadoop  hive  jdk  sqoop  zookeeper

Create the ZooKeeper data and log directories

[root@master src]# mkdir /usr/local/src/zookeeper/data
[root@master src]# mkdir /usr/local/src/zookeeper/logs

Configure environment variables

[root@master src]# vi /etc/profile.d/zookeeper.sh
export ZK_HOME=/usr/local/src/zookeeper
export PATH=$PATH:$ZK_HOME/bin
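To make the new variables available in the current shell without logging out again, you can source the file and spot-check it; a quick optional check:

[root@master src]# source /etc/profile.d/zookeeper.sh
[root@master src]# echo $ZK_HOME
/usr/local/src/zookeeper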

Modify the zoo.cfg configuration file

[root@master src]# cd /usr/local/src/zookeeper/conf/
[root@master conf]# ls
configuration.xsl  log4j.properties  zoo_sample.cfg
[root@master conf]# cp zoo_sample.cfg zoo.cfg 
[root@master conf]# vi zoo.cfg 
# Modify
dataDir=/usr/local/src/zookeeper/data
# Add
dataLogDir=/usr/local/src/zookeeper/logs
server.1=master:2888:3888
server.2=slave1:2888:3888
server.3=slave2:2888:3888
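For reference, in each server.N line port 2888 carries follower-to-leader traffic and port 3888 is used for leader election, while clients connect on clientPort. A minimal resulting zoo.cfg, assuming the zoo_sample.cfg defaults are otherwise kept, looks like this:

tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/usr/local/src/zookeeper/data
dataLogDir=/usr/local/src/zookeeper/logs
server.1=master:2888:3888
server.2=slave1:2888:3888
server.3=slave2:2888:3888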

Create the myid file

[root@master conf]# cd ..
[root@master zookeeper]# cd data/
[root@master data]# echo "1" > myid

Distribute the ZooKeeper files across the cluster

# Send the environment variable file to slave1 and slave2
[root@master data]# scp -r /etc/profile.d/zookeeper.sh slave1:/etc/profile.d/
[root@master data]# scp -r /etc/profile.d/zookeeper.sh slave2:/etc/profile.d/

# Send the ZooKeeper directory to slave1 and slave2
[root@master ~]# scp -r /usr/local/src/zookeeper/ slave1:/usr/local/src/
[root@master ~]# scp -r /usr/local/src/zookeeper/ slave2:/usr/local/src/

Modify the myid values

#slave1
[root@slave1 ~]# echo "2" >  /usr/local/src/zookeeper/data/myid 


#slave2
[root@slave2 ~]# echo "3" >  /usr/local/src/zookeeper/data/myid 

# Check all three nodes
[root@master ~]# cat /usr/local/src/zookeeper/data/myid 
1
[root@slave1 ~]# cat /usr/local/src/zookeeper/data/myid 
2
[root@slave2 ~]# cat /usr/local/src/zookeeper/data/myid 
3

Change file ownership

[root@master ~]# chown -R hadoop.hadoop /usr/local/src/
[root@slave1 ~]# chown -R hadoop.hadoop /usr/local/src/
[root@slave2 ~]# chown -R hadoop.hadoop /usr/local/src/

Check the firewall and SELinux, and disable them if they are still enabled

# Using master as an example; do the same on slave1 and slave2

[root@master ~]# systemctl disable --now firewalld
[root@master ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; ve>
   Active: inactive (dead)
     Docs: man:firewalld(1)
[root@master ~]# vi /etc/selinux/config 
SELINUX=disabled
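Editing /etc/selinux/config only takes effect after a reboot. If SELinux is currently enforcing, you can also turn it off for the running session:

[root@master ~]# setenforce 0
[root@master ~]# getenforce
Permissive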

Switch to the hadoop user and start ZooKeeper

[root@master ~]# su - hadoop
[root@slave1 ~]# su - hadoop
[root@slave2 ~]# su - hadoop

# Start ZooKeeper
[hadoop@master ~]$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/src/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@master ~]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/src/zookeeper/bin/../conf/zoo.cfg
Mode: follower

[hadoop@slave1 ~]$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/src/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@slave1 ~]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/src/zookeeper/bin/../conf/zoo.cfg
Mode: leader

[hadoop@slave2 ~]$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/src/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@slave2 ~]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/src/zookeeper/bin/../conf/zoo.cfg
Mode: follower

Check the cluster processes

[hadoop@master ~]$ jps
1522 QuorumPeerMain
1579 Jps

[hadoop@slave1 ~]$ jps
1368 Jps
1309 QuorumPeerMain

[hadoop@slave2 ~]$ jps
1330 QuorumPeerMain
1387 Jps

Hadoop HA Cluster Deployment

Note: passwordless SSH login was already configured in Chapter 4, so the HA setup is configured directly here.

A few additional key-distribution steps (a quick connectivity check follows the list):

  • Send the authorized_keys file created on master to slave1

    [hadoop@master ~]$ scp ~/.ssh/authorized_keys root@slave1:~/.ssh/
    root@slave1's password: 
    authorized_keys                                                        100%  567   672.2KB/s   00:00  
    
  • Append slave1's own public key to authorized_keys

    [hadoop@slave1 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    
  • Copy slave1's public key to slave2 and master

    [hadoop@slave1 ~]$ ssh-copy-id slave2
    [hadoop@slave1 ~]$ ssh-copy-id master
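As a quick connectivity check, run a remote command from each node as the user that will start the cluster; no password prompt should appear (hostnames as used throughout this guide):

[hadoop@master ~]$ ssh slave1 hostname
slave1
[hadoop@slave1 ~]$ ssh master hostname
master
[hadoop@slave1 ~]$ ssh slave2 hostname
slave2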
    

Remove the Hadoop installed in Chapter 4

# Remove the environment variable file; do this on all three nodes
[root@master ~]# rm -rf /etc/profile.d/hadoop.sh
[root@slave1 ~]# rm -rf /etc/profile.d/hadoop.sh
[root@slave2 ~]# rm -rf /etc/profile.d/hadoop.sh

# Remove the Hadoop directory
[root@master ~]# rm -rf /usr/local/src/hadoop/
[root@slave1 ~]# rm -rf /usr/local/src/hadoop/
[root@slave2 ~]# rm -rf /usr/local/src/hadoop/

Configure Hadoop environment variables

[root@master ~]# vi /etc/profile.d/hadoop.sh
export HADOOP_HOME=/usr/local/src/hadoop
export HADOOP_PREFIX=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib:$HADOOP_COMMON_LIB_NATIVE_DIR"
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export JAVA_HOME=/usr/local/src/jdk
export PATH=$PATH:$JAVA_HOME/bin
export ZK_HOME=/usr/local/src/zookeeper
export PATH=$PATH:$ZK_HOME/bin

Configure hadoop-env.sh

[root@master ~]# tar -xf hadoop-2.7.1.tar.gz -C /usr/local/src/
[root@master ~]# mv /usr/local/src/hadoop-2.7.1/ /usr/local/src/hadoop
[root@master ~]# cd /usr/local/src/hadoop/etc/hadoop/
[root@master hadoop]# vi hadoop-env.sh 
# Add the following at the end:
export JAVA_HOME=/usr/local/src/jdk

Configure core-site.xml

[root@master hadoop]# vi core-site.xml
<configuration>
        <property>
                 <name>fs.defaultFS</name>
                 <value>hdfs://mycluster</value>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>file:/usr/local/src/hadoop/tmp</value>
        </property>
        <property>
                <name>ha.zookeeper.quorum</name>
                <value>master:2181,slave1:2181,slave2:2181</value>
        </property>
        <property>
                <name>ha.zookeeper.session-timeout.ms</name>
                <value>30000</value>
                <description>ms</description>
        </property>
        <property>
                <name>fs.trash.interval</name>
                <value>1440</value>
        </property>
</configuration>
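Here fs.defaultFS points at the logical nameservice mycluster rather than a single NameNode host, and ha.zookeeper.quorum tells the ZKFailoverControllers which ZooKeeper ensemble to use. As an optional sanity check once the configuration has been distributed and the environment variables are loaded, Hadoop should resolve the nameservice:

[hadoop@master ~]$ hdfs getconf -confKey fs.defaultFS
hdfs://mycluster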

Configure hdfs-site.xml

[root@master hadoop]# vi hdfs-site.xml 
<configuration>
        <property>
                <name>dfs.qjournal.start-segment.timeout.ms</name>
                <value>60000</value>
        </property>
        <property>
                <name>dfs.nameservices</name>
                <value>mycluster</value>
        </property>
        <property>
                <name>dfs.ha.namenodes.mycluster</name>
                <value>master,slave1</value>
        </property>
        <property>
                <name>dfs.namenode.rpc-address.mycluster.master</name>
                <value>master:8020</value>
        </property>
        <property>
                <name>dfs.namenode.rpc-address.mycluster.slave1</name>
                <value>slave1:8020</value>
        </property>
        <property>
                <name>dfs.namenode.http-address.mycluster.master</name>
                <value>master:50070</value>
        </property>
        <property>
                <name>dfs.namenode.http-address.mycluster.slave1</name>
                <value>slave1:50070</value>
        </property>
        <property>
                <name>dfs.namenode.shared.edits.dir</name>
                <value>qjournal://master:8485;slave1:8485;slave2:8485/mycluster</value>
        </property>
        <property>
                <name>dfs.client.failover.proxy.provider.mycluster</name>
                <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
        </property>
        <property>
                <name>dfs.ha.fencing.methods</name>
                <value>
                sshfence
                shell(/bin/true)
                </value>
        </property>
        <property>
                <name>dfs.permissions.enabled</name>
                <value>false</value>
        </property>
        <property>
                <name>dfs.support.append</name>
                <value>true</value>
        </property>
        <property>
                <name>dfs.ha.fencing.ssh.private-key-files</name>
                <value>/root/.ssh/id_rsa</value>
        </property>
        <property>
                <name>dfs.replication</name>
                <value>2</value>
        </property>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>/usr/local/src/hadoop/tmp/hdfs/nn</value>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>/usr/local/src/hadoop/tmp/hdfs/dn</value>
        </property>
        <property>
                <name>dfs.journalnode.edits.dir</name>
                <value>/usr/local/src/hadoop/tmp/hdfs/jn</value>
        </property>
        <property>
                <name>dfs.ha.automatic-failover.enabled</name>
                <value>true</value>
        </property>
        <property>
                <name>dfs.webhdfs.enabled</name>
                <value>true</value>
        </property>
        <property>
                <name>dfs.ha.fencing.ssh.connect-timeout</name>
                <value>30000</value>
        </property>
        <property>
                <name>ha.failover-controller.cli-check.rpc-timeout.ms</name>
                <value>60000</value>
        </property>

</configuration>
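The two fencing methods are tried in order: sshfence logs in over SSH with the key named in dfs.ha.fencing.ssh.private-key-files and kills the stale active NameNode, while shell(/bin/true) is a fallback that lets failover proceed even if SSH is unreachable. As an optional check after the configuration is distributed, getconf should report both NameNode hosts of the nameservice:

[hadoop@master ~]$ hdfs getconf -namenodes
master slave1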

Configure mapred-site.xml

[root@master ~]# cd /usr/local/src/hadoop/etc/hadoop/
[root@master hadoop]# cp mapred-site.xml.template mapred-site.xml
[root@master hadoop]# vi mapred-site.xml
<configuration>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.address</name>
                <value>master:10020</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.webapp.address</name>
                <value>master:19888</value>
        </property>

</configuration>

Configure yarn-site.xml

[root@master hadoop]# vi yarn-site.xml
<configuration>

<!-- Site specific YARN configuration properties -->

        <property>
                <name>yarn.resourcemanager.ha.enabled</name>
                <value>true</value>
        </property>
        <property>
                <name>yarn.resourcemanager.cluster-id</name>
                <value>yrc</value>
        </property>
        <property>
                <name>yarn.resourcemanager.ha.rm-ids</name>
                <value>rm1,rm2</value>
        </property>
        <property>
                <name>yarn.resourcemanager.hostname.rm1</name>
                <value>master</value>
        </property>
        <property>
                <name>yarn.resourcemanager.hostname.rm2</name>
                <value>slave1</value>
        </property>
        <property>
                <name>yarn.resourcemanager.zk-address</name>
                <value>master:2181,slave1:2181,slave2:2181</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
        <property>
                <name>yarn.log-aggregation-enable</name>
                <value>true</value>
        </property>
        <property>
                <name>yarn.log-aggregation.retain-seconds</name>
                <value>86400</value>
        </property>
        <property>
                <name>yarn.resourcemanager.recovery.enabled</name>
                <value>true</value>
        </property>
        <property>
                <name>yarn.resourcemanager.store.class</name>
                <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
        </property>
</configuration>
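This enables ResourceManager HA: rm1 runs on master and rm2 on slave1, with leader election and recovery state kept in the same ZooKeeper ensemble. Once the cluster is running (later in this guide), you can check which ResourceManager is currently active; shown here for reference only:

[hadoop@master ~]$ yarn rmadmin -getServiceState rm1   # prints active or standby
[hadoop@master ~]$ yarn rmadmin -getServiceState rm2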

Configure the slaves file

[root@master hadoop]# vi slaves
# Delete localhost and add the following
master
slave1
slave2

Create the data directories

# The common parent directory for namenode, datanode and journalnode data is /usr/local/src/hadoop/tmp
[root@master hadoop]# mkdir -p /usr/local/src/hadoop/tmp/hdfs/{nn,dn,jn}
[root@master hadoop]# mkdir -p /usr/local/src/hadoop/tmp/logs

Distribute files to the other nodes

# Distribute the environment variable file
[root@master hadoop]# scp -r /etc/profile.d/hadoop.sh slave1:/etc/profile.d/
hadoop.sh                                                              100%  601   496.6KB/s   00:00    
[root@master hadoop]# scp -r /etc/profile.d/hadoop.sh slave2:/etc/profile.d/
hadoop.sh                                                              100%  601   314.7KB/s   00:00  

# Distribute the Hadoop directory
[root@master hadoop]# scp -r /usr/local/src/hadoop/ slave1:/usr/local/src/
[root@master hadoop]# scp -r /usr/local/src/hadoop/ slave2:/usr/local/src/

Change the directory owner and group

[root@master ~]# chown -R hadoop.hadoop /usr/local/src/
[root@slave1 ~]# chown -R hadoop.hadoop /usr/local/src/
[root@slave2 ~]# chown -R hadoop.hadoop /usr/local/src/

Apply the environment variables

# These are loaded automatically when switching to the hadoop user, but source them manually just in case
[root@master ~]# source /etc/profile.d/hadoop.sh 
[root@slave1 ~]# source /etc/profile.d/hadoop.sh 
[root@slave2 ~]# source /etc/profile.d/hadoop.sh 

HA Cluster Startup

Starting HA

Start the JournalNode daemons

# Switch to the hadoop user first
[hadoop@master ~]$ hadoop-daemons.sh start journalnode
master: starting journalnode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-journalnode-master.out
slave1: starting journalnode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-journalnode-slave1.out
slave2: starting journalnode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-journalnode-slave2.out
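Before formatting the NameNode, it is worth confirming that a JournalNode process is actually running on all three nodes, since the format step writes the shared edit log to this quorum; a quick check on each node:

[hadoop@master ~]$ jps | grep JournalNode
[hadoop@slave1 ~]$ jps | grep JournalNode
[hadoop@slave2 ~]$ jps | grep JournalNode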

Initialize (format) the NameNode

[hadoop@master ~]$ hdfs namenode -format
............
23/05/28 13:58:27 INFO namenode.FSImage: Allocated new BlockPoolId: BP-793703415-192.168.88.10-1685253507647
23/05/28 13:58:27 INFO common.Storage: Storage directory /usr/local/src/hadoop/tmp/hdfs/nn has been successfully formatted.
23/05/28 13:58:28 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
23/05/28 13:58:28 INFO util.ExitUtil: Exiting with status 0
23/05/28 13:58:28 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.88.10
************************************************************/

Register the ZNode

# Start ZooKeeper first, otherwise this will fail
[hadoop@master ~]$ zkServer.sh start
[hadoop@slave1 ~]$ zkServer.sh start
[hadoop@slave2 ~]$ zkServer.sh start

[hadoop@master ~]$ hdfs zkfc -formatZK
......
23/05/28 14:01:08 INFO zookeeper.ClientCnxn: Opening socket connection to server slave2/192.168.88.30:2181. Will not attempt to authenticate using SASL (unknown error)
23/05/28 14:01:08 INFO zookeeper.ClientCnxn: Socket connection established to slave2/192.168.88.30:2181, initiating session
23/05/28 14:01:08 INFO zookeeper.ClientCnxn: Session establishment complete on server slave2/192.168.88.30:2181, sessionid = 0x38860f220b90000, negotiated timeout = 30000
23/05/28 14:01:08 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/mycluster in ZK.
23/05/28 14:01:08 INFO ha.ActiveStandbyElector: Session connected.
23/05/28 14:01:08 INFO zookeeper.ZooKeeper: Session: 0x38860f220b90000 closed
23/05/28 14:01:08 INFO zookeeper.ClientCnxn: EventThread shut down
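Optionally, you can confirm the znode with the ZooKeeper CLI; /hadoop-ha should now contain an entry for the nameservice:

[hadoop@master ~]$ zkCli.sh -server master:2181
[zk: master:2181(CONNECTED) 0] ls /hadoop-ha
[mycluster]
[zk: master:2181(CONNECTED) 1] quit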

Start HDFS and YARN

[hadoop@master ~]$ start-all.sh 

Synchronize the NameNode metadata from master

# Copy the NameNode metadata to the other nodes (run on master)
[hadoop@master ~]$ scp -r /usr/local/src/hadoop/tmp/hdfs/nn/* slave1:/usr/local/src/hadoop/tmp/hdfs/nn/
VERSION                                                                100%  204   189.8KB/s   00:00    
seen_txid                                                              100%    2     1.3KB/s   00:00    
fsimage_0000000000000000000.md5                                        100%   62    38.1KB/s   00:00    
fsimage_0000000000000000000                                            100%  353   378.0KB/s   00:00    
edits_inprogress_0000000000000000001                                   100% 1024KB   5.0MB/s   00:00    
in_use.lock                                                            100%   11     6.4KB/s   00:00    

[hadoop@master ~]$ scp -r /usr/local/src/hadoop/tmp/hdfs/nn/* slave2:/usr/local/src/hadoop/tmp/hdfs/nn/
VERSION                                                                100%  204   294.1KB/s   00:00    
seen_txid                                                              100%    2     2.2KB/s   00:00    
fsimage_0000000000000000000.md5                                        100%   62    65.8KB/s   00:00    
fsimage_0000000000000000000                                            100%  353   554.6KB/s   00:00    
edits_inprogress_0000000000000000001                                   100% 1024KB   6.7MB/s   00:00    
in_use.lock                                                            100%   11     8.9KB/s   00:00   
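Copying the nn directory by hand works; an equivalent way to initialize the standby NameNode, mentioned here as an alternative rather than an extra step, is to run the bootstrap command on slave1 while the active NameNode on master is up:

[hadoop@slave1 ~]$ hdfs namenode -bootstrapStandby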

Start the ResourceManager and NameNode processes on slave1

[hadoop@slave1 ~]$ yarn-daemons.sh start resourcemanager
[hadoop@slave1 ~]$ hadoop-daemon.sh start namenode
[hadoop@slave1 ~]$ jps
1489 JournalNode
1841 DFSZKFailoverController
1922 NodeManager
2658 NameNode
2738 Jps
1702 DataNode
2441 ResourceManager
1551 QuorumPeerMain

Start the proxy server and the MapReduce job history server

[hadoop@master ~]$ yarn-daemon.sh start proxyserver
starting proxyserver, logging to /usr/local/src/hadoop/logs/yarn-hadoop-proxyserver-master.out
[hadoop@master ~]$ mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /usr/local/src/hadoop/logs/mapred-hadoop-historyserver-master.out

Check the processes on each node

[hadoop@master ~]$ jps
3297 JobHistoryServer
2260 DataNode
2564 DFSZKFailoverController
2788 NodeManager
2678 ResourceManager
2122 NameNode
3371 Jps
1727 JournalNode
1919 QuorumPeerMain


[hadoop@slave1 ~]$ jps
1489 JournalNode
1841 DFSZKFailoverController
1922 NodeManager
2658 NameNode
2738 Jps
1702 DataNode
2441 ResourceManager
1551 QuorumPeerMain


[hadoop@slave2 ~]$ jps
1792 NodeManager
1577 QuorumPeerMain
2282 Jps
1515 JournalNode
1647 DataNode

Check the web UIs

  • master:50070

    (screenshot: NameNode web UI)

  • slave1:50070

    (screenshot: NameNode web UI)

  • master:8088

    (screenshot: YARN ResourceManager web UI)

Testing HA

Create a test file

[hadoop@master ~]$ vi rainmom.txt
Hello World
Hello Hadoop

Create a directory in HDFS

[hadoop@master ~]$ hadoop fs -mkdir /input

Upload rainmom.txt to /input

[hadoop@master ~]$ hadoop fs -put ~/rainmom.txt /input

Change to the directory containing the example jar and run a MapReduce test

[hadoop@master ~]$ cd /usr/local/src/hadoop/share/hadoop/mapreduce/
[hadoop@master mapreduce]$ hadoop jar hadoop-mapreduce-examples-2.7.1.jar wordcount /input/rainmom.txt /output
.....
23/05/28 14:35:37 INFO mapreduce.Job: Running job: job_1685253795384_0001
23/05/28 14:35:48 INFO mapreduce.Job: Job job_1685253795384_0001 running in uber mode : false
23/05/28 14:35:48 INFO mapreduce.Job:  map 0% reduce 0%
23/05/28 14:35:57 INFO mapreduce.Job:  map 100% reduce 0%
23/05/28 14:36:09 INFO mapreduce.Job:  map 100% reduce 100%
23/05/28 14:36:10 INFO mapreduce.Job: Job job_1685253795384_0001 completed successfully
....

Check the output in HDFS

[hadoop@master ~]$ hadoop fs -ls -R /output
-rw-r--r--   2 hadoop supergroup          0 2023-05-28 14:36 /output/_SUCCESS
-rw-r--r--   2 hadoop supergroup         25 2023-05-28 14:36 /output/part-r-00000

View the word-count result

[hadoop@master ~]$ hadoop fs -cat /output/part-r-00000
Hadoop	1
Hello	2
World	1

High-Availability Verification

Manually switching the service state (haadmin failover)

# Syntax: hdfs haadmin -failover --forcefence --forceactive <from> <to>
[hadoop@master ~]$ hdfs haadmin -failover --forcefence --forceactive slave1 master

# Note: with automatic failover enabled, this command prints "forcefence and forceactive flags
# not supported with auto-failover enabled." and the failover fails. Once automatic failover
# (dfs.ha.automatic-failover.enabled=true) is configured, manual failover is no longer allowed;
# to run the command anyway, set the parameter to false (no process restart needed) and try again.

# The dfs.ha.automatic-failover.enabled parameter is changed in hdfs-site.xml (or core-site.xml)
[hadoop@master ~]$ vi /usr/local/src/hadoop/etc/hadoop/hdfs-site.xml 

        <property>
                <name>dfs.ha.automatic-failover.enabled</name>
                <value>true</value>
        </property>
        

# Check the state
[hadoop@master ~]$ hdfs haadmin -getServiceState slave1
standby
[hadoop@master ~]$ hdfs haadmin -getServiceState master
active
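If you genuinely need to force a state change while automatic failover stays enabled, haadmin also accepts --forcemanual on its transition subcommands; this bypasses the ZKFC coordination and fencing, so treat it strictly as a testing aid. For example, to push the active role from master to slave1:

[hadoop@master ~]$ hdfs haadmin -transitionToStandby --forcemanual master
[hadoop@master ~]$ hdfs haadmin -transitionToActive --forcemanual slave1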

Automatic failover when the active NameNode stops

# Stop (and later restart) the NameNode on master
[hadoop@master ~]$ hadoop-daemon.sh stop namenode
stopping namenode

# Check the state
[hadoop@master ~]$ hdfs haadmin -getServiceState master
23/05/28 14:53:55 INFO ipc.Client: Retrying connect to server: master/192.168.88.10:8020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000 MILLISECONDS)
Operation failed: Call From master/192.168.88.10 to master:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
[hadoop@master ~]$ hdfs haadmin -getServiceState slave1
active

# Start it again
[hadoop@master ~]$ hadoop-daemon.sh start namenode
starting namenode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-namenode-master.out

# Check the state again
[hadoop@master ~]$ hdfs haadmin -getServiceState slave1
active
[hadoop@master ~]$ hdfs haadmin -getServiceState master
standby

Check the web UIs

  • master:50070

    (screenshot: NameNode web UI)

  • slave1:50070

    (screenshot: NameNode web UI)
