
Hadoop HA Mode (Hadoop's HA mode is built on top of a fully distributed Hadoop cluster, using ZooKeeper and other coordination tools to configure a highly available Hadoop cluster)

This article introduces Hadoop HA mode: a highly available Hadoop cluster built on top of a fully distributed deployment and coordinated with ZooKeeper. I hope it is helpful. If anything is wrong or not fully considered, corrections are very welcome.

Table of Contents

1. Preparation

1.1. The three packages hadoop-3.1.3.tar.gz, jdk-8u212-linux-x64.tar.gz and apache-zookeeper-3.5.7-bin.tar.gz (extraction code: k5y6)

2. Extract the packages and configure the environment variables

3. Name the three nodes master, slave1 and slave2, and set up passwordless SSH

Passwordless SSH was covered in the earlier fully distributed Hadoop setup and is not repeated here

4. Set up the ZooKeeper cluster

Create the zkdata and zkdatalog directories at the configured paths. Then, in the zkdata directory, create a file named myid (with touch, or write it directly with echo) containing 1; slave1 and slave2 use 2 and 3 respectively.

5. Distribute the extracted JDK, /etc/profile and ZooKeeper, and change myid to 2 and 3

6. Start ZooKeeper

Check the status

vim core-site.xml

vim hdfs-site.xml

vim yarn-site.xml

The remaining configuration files are the same as in the fully distributed Hadoop setup

7. Distribute Hadoop

8. Starting HDFS in HA mode for the first time, with the following steps

8.1. Start the ZooKeeper cluster from the master VM

8.2. Format the HA state in ZooKeeper on the master VM

8.3. Start the JournalNode process on master, slave1 and slave2

8.4. Then format the NameNode

8.5. Start the cluster

start-all.sh reports an error

hadoop-daemon.sh start namenode: start the NameNode on master by itself

hdfs namenode -bootstrapStandby: then synchronize the NameNode on the other VM you want to bring up

Finally, start-all.sh

9. On the master node, check the state of services nn2 and rm2

hdfs haadmin -getServiceState nn2

yarn rmadmin -getServiceState rm2


Hadoop HA deployment plan

Hostname | IP address    | Processes
master   | use your own  | NameNode, DataNode, DFSZKFailoverController, QuorumPeerMain, JournalNode, ResourceManager, NodeManager
slave1   | use your own  | NameNode, DataNode, DFSZKFailoverController, QuorumPeerMain, JournalNode, ResourceManager, NodeManager
slave2   | use your own  | DataNode, NodeManager, QuorumPeerMain, JournalNode

1. Preparation


1.1. The three packages hadoop-3.1.3.tar.gz, jdk-8u212-linux-x64.tar.gz and apache-zookeeper-3.5.7-bin.tar.gz (extraction code: k5y6)

2. Extract the packages and configure the environment variables

tar -zxf <tarball> -C <target directory>

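For example, assuming the three tarballs were uploaded to /opt/software (an assumed upload directory) and are extracted into /opt/module, the directory used throughout the rest of this article:

tar -zxf /opt/software/jdk-8u212-linux-x64.tar.gz -C /opt/module/
tar -zxf /opt/software/hadoop-3.1.3.tar.gz -C /opt/module/
tar -zxf /opt/software/apache-zookeeper-3.5.7-bin.tar.gz -C /opt/module/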

After extraction:


The name apache-zookeeper-3.5.7-bin is rather long and awkward to type; you can rename it with mv,


or create a symbolic link with ln -s instead.
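For example (the /opt/module paths below are chosen to match the /opt/module/zookeeper path used later in this article):

mv /opt/module/apache-zookeeper-3.5.7-bin /opt/module/zookeeper
# or keep the original directory name and point a symlink at it instead:
# ln -s /opt/module/apache-zookeeper-3.5.7-bin /opt/module/zookeeper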

Run vim /etc/profile to configure the environment variables, then source /etc/profile to make them take effect.

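The /etc/profile screenshot is not preserved here; a minimal sketch of the kind of entries it would contain, assuming the paths above (the variable names follow the usual JAVA_HOME/HADOOP_HOME convention and are not taken from the original screenshot):

export JAVA_HOME=/opt/module/jdk1.8.0_212
export HADOOP_HOME=/opt/module/hadoop-3.1.3
export ZOOKEEPER_HOME=/opt/module/zookeeper
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin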

Verify:

hadoop version

java -version


3. Name the three nodes master, slave1 and slave2, and set up passwordless SSH

Change the hostname, then disconnect and reconnect:

hostnamectl set-hostname <hostname>

Passwordless SSH was covered in the earlier fully distributed Hadoop setup and is not repeated here.

4. Set up the ZooKeeper cluster

cd /opt/module/zookeeper/conf

cp zoo_sample.cfg zoo.cfg

Edit zoo.cfg and add the following configuration:

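The original configuration screenshot is missing; for a three-node cluster laid out as in the plan above, the additions to zoo.cfg typically look like the following sketch (the dataDir/dataLogDir values are assumptions that simply need to match the zkdata/zkdatalog directories created in the next step):

dataDir=/opt/module/zookeeper/zkdata
dataLogDir=/opt/module/zookeeper/zkdatalog
server.1=master:2888:3888
server.2=slave1:2888:3888
server.3=slave2:2888:3888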

Create the zkdata and zkdatalog directories at the paths configured above. Then, in the zkdata directory, create a file named myid (with touch, or write it directly with echo) containing 1; slave1 and slave2 will use 2 and 3 respectively.
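For example, on master (assuming the paths sketched above):

mkdir -p /opt/module/zookeeper/zkdata /opt/module/zookeeper/zkdatalog
echo 1 > /opt/module/zookeeper/zkdata/myid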

5. Distribute the extracted JDK, /etc/profile and ZooKeeper, and change myid to 2 and 3

scp -r /opt/module/jdk1.8.0_212/ slave1:/opt/module/

scp -r /opt/module/jdk1.8.0_212/ slave2:/opt/module/

scp /etc/profile slave1:/etc/profile
scp /etc/profile slave2:/etc/profile   (do not forget to source it on each node)

scp -r /opt/module/zookeeper/ slave1:/opt/module/

scp -r /opt/module/zookeeper/ slave2:/opt/module/
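After copying, the myid on the two slaves still has to be changed, and /etc/profile has to be sourced on each of them in a login shell. One way to update myid from master (a sketch, assuming the same paths as above):

ssh slave1 "echo 2 > /opt/module/zookeeper/zkdata/myid"
ssh slave2 "echo 3 > /opt/module/zookeeper/zkdata/myid"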

6. Start ZooKeeper (on each node)

zkServer.sh start

Check the status:

zkServer.sh status


cd /opt/module/hadoop-3.1.3/etc/hadoop

vim core-site.xml

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://cluster</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/opt/module/hadoop-3.1.3/tmpdir</value>
  <description>A base for other temporary directories.</description>
</property>
<property>
  <name>ha.zookeeper.quorum</name>
<value>master:2181,slave1:2181,slave2:2181</value>
  <description>
    A list of ZooKeeper server addresses, separated by commas, that are
    to be used by the ZKFailoverController in automatic failover.
  </description>
</property>

vim hdfs-site.xml

<property>
  <name>dfs.replication</name>
  <value>3</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>
<property>
  <name>dfs.nameservices</name>
  <value>cluster</value>
  <description>
    Comma-separated list of nameservices.
  </description>
</property>
<property>
  <name>dfs.ha.namenodes.cluster</name>
  <value>nn1,nn2</value>
  <description>
The prefix for a given nameservice, contains a comma-separated
    list of namenodes for a given nameservice (eg EXAMPLENAMESERVICE).

    Unique identifiers for each NameNode in the nameservice, delimited by
    commas. This will be used by DataNodes to determine all the NameNodes
    in the cluster. For example, if you used "mycluster" as the nameservice
    ID previously, and you wanted to use "nn1" and "nn2" as the individual
    IDs of the NameNodes, you would configure a property
    dfs.ha.namenodes.mycluster, and its value "nn1,nn2".
  </description>
</property>
  <property>
    <name>dfs.namenode.rpc-address.cluster.nn1</name>
    <value>master:8020</value>
    <description>
      The RPC address (host:port) that NameNode nn1 of the "cluster"
      nameservice listens on for client requests.
    </description>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.cluster.nn2</name>
    <value>slave1:8020</value>
    <description>
      The RPC address (host:port) that NameNode nn2 of the "cluster"
      nameservice listens on for client requests.
    </description>
  </property>
<property>
  <name>dfs.namenode.http-address.cluster.nn1</name>
  <value>master:9870</value>
  <description>
    The address and the base port where the dfs namenode web ui will listen on.
  </description>
</property>
<property>
  <name>dfs.namenode.http-address.cluster.nn2</name>
  <value>slave1:9870</value>
  <description>
    The address and the base port where the dfs namenode web ui will listen on.
  </description>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://master:8485;slave1:8485;slave2:8485/cluster</value>
  <description>A directory on shared storage between the multiple namenodes
  in an HA cluster. This directory will be written by the active and read
  by the standby in order to keep the namespaces synchronized. This directory
  does not need to be listed in dfs.namenode.edits.dir above. It should be
  left empty in a non-HA cluster.
  </description>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.cluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  <description>
    The prefix (plus a required nameservice ID) for the class name of the
    configured Failover proxy provider for the host.  For more detailed
    information, please consult the "Configuration Details" section of
    the HDFS High Availability documentation.
  </description>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
  <description>
    Whether automatic failover is enabled. See the HDFS High
    Availability documentation for details on automatic HA
    configuration.
  </description>
</property>
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>shell(/bin/true)</value>
  <description>
    A list of scripts or Java classes which will be used to fence
    the Active NameNode during a failover.  See the HDFS High
    Availability documentation for details on automatic HA
    configuration.
  </description>
</property>



vim yarn-site.xml

 <property>
    <description>A comma separated list of services where service name should only
      contain a-zA-Z0-9_ and can not start with numbers</description>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
<property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <description>Name of the cluster. In a HA setting,
      this is used to ensure the RM participates in leader
      election for this cluster and ensures it does not affect
      other clusters</description>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yarn-cluster</value>
  </property>
  <property>
    <description>The list of RM nodes in the cluster when HA is
      enabled. See description of yarn.resourcemanager.ha
      .enabled for full details on how this is used.</description>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <description>The hostname of the RM.</description>
    <name>yarn.resourcemanager.hostname.rm1</name>
  <value>master</value>
  </property>
  <property>
    <description>The hostname of the RM.</description>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>slave1</value>
  </property>
  <property>
    <description>
      The http address of the RM web application.
      If only a host is provided as the value,
      the webapp will be served on a random port.
    </description>
    <name>yarn.resourcemanager.webapp.address.rm1</name>
    <value>master:8088</value>
  </property>
  <property>
    <description>
      The http address of the RM web application.
      If only a host is provided as the value,
      the webapp will be served on a random port.
    </description>
    <name>yarn.resourcemanager.webapp.address.rm2</name>
    <value>slave1:8088</value>
  </property>
<property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>master:2181,slave1:2181,slave2:2181</value>
  </property>

The remaining configuration files are the same as in the earlier fully distributed Hadoop setup.
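As a reminder (a sketch, not copied from the original article), "the remaining configuration" in the fully distributed setup usually means at least the workers file, which lists every DataNode host, plus JAVA_HOME in hadoop-env.sh; mapred-site.xml also stays the same as before.

etc/hadoop/workers (one DataNode host per line):
master
slave1
slave2

etc/hadoop/hadoop-env.sh:
export JAVA_HOME=/opt/module/jdk1.8.0_212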

7. Distribute Hadoop
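The original shows no command for this step; following the same pattern as the earlier scp distribution, something like:

scp -r /opt/module/hadoop-3.1.3/ slave1:/opt/module/
scp -r /opt/module/hadoop-3.1.3/ slave2:/opt/module/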

8. Starting HDFS in HA mode for the first time; the steps are as follows

8.1. Start the ZooKeeper cluster from the master VM
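zkServer.sh start has to run on every ZooKeeper node; from master this can be done in one pass, for example (the full script path is used because a non-interactive ssh shell may not have /etc/profile loaded):

for host in master slave1 slave2; do
  ssh "$host" "/opt/module/zookeeper/bin/zkServer.sh start"
done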

8.2. Format the HA state in ZooKeeper on the master VM

hdfs zkfc -formatZK

8.3. Start the JournalNode process on master, slave1 and slave2

hadoop-daemon.sh start journalnode   (in Hadoop 3.x this script is deprecated; hdfs --daemon start journalnode is the equivalent)


8.4. Then format the NameNode

hdfs namenode -format


8.5. Start the cluster

start-all.sh reports an error.


Add the missing variables to the environment:

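The error screenshot is not preserved. If it is the common Hadoop 3.x message "Attempting to operate on hdfs namenode as root ... there is no HDFS_NAMENODE_USER defined" (an assumption here), the usual fix is to declare the daemon users, e.g. in /etc/profile or hadoop-env.sh when running everything as root:

export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_JOURNALNODE_USER=root
export HDFS_ZKFC_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root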

hadoop-daemon.sh start namenode: start the NameNode on master by itself.

hdfs namenode -bootstrapStandby: then, on the other VM that will run a NameNode (slave1 here), synchronize the metadata from the first NameNode.


Finally, run start-all.sh

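If everything came up, jps on each node should roughly match the planning table at the top of this article; on master, for example:

jps
# expected per the plan: NameNode, DataNode, DFSZKFailoverController,
# QuorumPeerMain, JournalNode, ResourceManager, NodeManager (plus Jps itself)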

9. On the master node, check the state of services nn2 and rm2

hdfs haadmin -getServiceState nn2

yarn rmadmin -getServiceState rm2


An error came up:


Check whether something was mistyped in hdfs-site.xml; sure enough:


namenode had been typed as namenodes; after fixing it and restarting, everything worked.

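nn1 and rm1 can be checked the same way; with automatic failover healthy, one NameNode (and one ResourceManager) reports active and the other standby:

hdfs haadmin -getServiceState nn1
yarn rmadmin -getServiceState rm1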

Article source: http://www.zghlxwxcb.cn/news/detail-764780.html
