Prerequisite Configuration
VM setup
Create the virtual machines (hadoop1, hadoop2, hadoop3)
During installation, setting the root password to 1234 is recommended to make the later steps easier.
Linux prerequisite configuration (apply to all three machines)
1. Set the hostname
Using hadoop3 as an example:
hostnamectl set-hostname hadoop3
2. Set a static IP
vim /etc/sysconfig/network-scripts/ifcfg-ens33
hadoop1 192.168.88.201
hadoop2 192.168.88.202
hadoop3 192.168.88.203
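A minimal ifcfg-ens33 sketch for hadoop1 (the GATEWAY and DNS1 values are assumptions for a typical VMware NAT network on 192.168.88.0/24; change IPADDR per host):
TYPE=Ethernet
BOOTPROTO=static        # static instead of dhcp
NAME=ens33
DEVICE=ens33
ONBOOT=yes              # bring the interface up at boot
IPADDR=192.168.88.201
NETMASK=255.255.255.0
GATEWAY=192.168.88.2
DNS1=192.168.88.2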
Finally, restart the network service so the new address takes effect:
service network restart
3. Connect with an SSH terminal tool (all three machines)
4. Host name mapping
Windows:
C:\Windows\System32\drivers\etc
Edit the hosts file in this directory. Opening it with an editor run as administrator (VS Code is recommended) lets the change save successfully.
Linux (all three machines):
vim /etc/hosts
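Append the same three mappings on Windows and on each Linux machine, using the IPs assigned above:
192.168.88.201 hadoop1
192.168.88.202 hadoop2
192.168.88.203 hadoop3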
5. Configure passwordless SSH login (all three machines)
Passwordless login for root
1. On every machine run ssh-keygen -t rsa -b 4096 and press Enter through every prompt.
2. On every machine run:
ssh-copy-id hadoop1
ssh-copy-id hadoop2
ssh-copy-id hadoop3
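You can then verify from any machine, for example:
ssh hadoop2
It should log in without prompting for a password; type exit to return.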
Passwordless login for hadoop
Create the hadoop user and configure passwordless login for it.
1. On every machine run useradd hadoop to create the hadoop user.
2. On every machine run passwd hadoop and set the hadoop user's password to 1234.
3. On every machine switch to the hadoop user with su - hadoop, then run ssh-keygen -t rsa -b 4096 to create its SSH key pair.
4. On every machine run:
ssh-copy-id hadoop1
ssh-copy-id hadoop2
ssh-copy-id hadoop3
6. Disable the firewall and SELinux (all three machines)
1. Stop and disable firewalld:
systemctl stop firewalld
systemctl disable firewalld
2. Disable SELinux:
vim /etc/sysconfig/selinux
Change the SELINUX line to SELINUX=disabled, save, then run init 6 to reboot.
7. Time synchronization (run on all three machines)
- Install the ntp package
yum install -y ntp
- Update the time zone
rm -f /etc/localtime;sudo ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
- Sync the time once
ntpdate -u ntp.aliyun.com
- Start the ntp service and enable it at boot
systemctl start ntpd
systemctl enable ntpd
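To confirm the time zone and sync took effect, check the clock on each machine:
date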
Take snapshot 1 of all three VMs.
Environment Setup
1. jdk1.8 — Java Downloads | Oracle
2. hadoop-3.3.6 — Apache Hadoop
3. hbase-2.5.5-hadoop3 — Index of /dist/hbase/2.5.5 (apache.org)
4. zookeeper-3.9.1 — Apache ZooKeeper
Important: everything below is configured as the root user; the hadoop user is granted the required permissions afterward.
It's recommended to finish all the configuration in one pass, and only then grant permissions, reload the profile, and do the final distribution.
jdk
Create the deployment directory; the JDK, Hadoop, ZooKeeper, and HBase will all be installed under /export/server:
mkdir -p /export/server
Unpack the JDK archive:
tar -zxvf jdk-8u321-linux-x64.tar.gz -C /export/server
Create the JDK softlink:
ln -s /export/server/jdk1.8.0_321 /export/server/jdk
Configure the JAVA_HOME environment variable and add $JAVA_HOME/bin to PATH:
vim /etc/profile
export JAVA_HOME=/export/server/jdk
export PATH=$PATH:$JAVA_HOME/bin
Apply the environment variables:
source /etc/profile
Remove the system's bundled java binary:
rm -f /usr/bin/java
Symlink our own java in its place:
ln -s /export/server/jdk/bin/java /usr/bin/java
Verify the installation:
java -version
Distribution
On hadoop2 and hadoop3, first create the target directory: mkdir -p /export/server
Distribute the JDK:
cd /export/server/
scp -r jdk1.8.0_321/ hadoop2:`pwd`
scp -r jdk1.8.0_321/ hadoop3:`pwd`
cd /etc
scp -r profile hadoop2:`pwd`
scp -r profile hadoop3:`pwd`
On hadoop2 and hadoop3, create the jdk softlink, reload the profile, and repoint java:
ln -s /export/server/jdk1.8.0_321 /export/server/jdk
source /etc/profile
rm -f /usr/bin/java
ln -s /export/server/jdk/bin/java /usr/bin/java
hadoop
Upload and unpack
cd /export/server
tar -zxvf hadoop-3.3.6.tar.gz
ln -s hadoop-3.3.6 hadoop
hadoop configuration (the files below live in /export/server/hadoop/etc/hadoop)
workers
hadoop1
hadoop2
hadoop3
hdfs-site.xml
<property>
<name>dfs.namenode.http-address</name>
<value>0.0.0.0:9870</value>
<description> The address and the base port where the dfs namenode web ui will listen on.
</description>
</property>
<property>
<name>dfs.datanode.data.dir.perm</name>
<value>700</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/data/nn</value>
</property>
<property>
<name>dfs.namenode.hosts</name>
<value>hadoop1,hadoop2,hadoop3</value>
</property>
<property>
<name>dfs.blocksize</name>
<value>268435456</value>
</property>
<property>
<name>dfs.namenode.handler.count</name>
<value>100</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/data/dn</value>
</property>
core-site.xml
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop1:8020</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
hadoop-env.sh
export JAVA_HOME=/export/server/jdk
export HADOOP_HOME=/export/server/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_LOG_DIR=$HADOOP_HOME/logs
yarn-site.xml
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.log.server.url</name>
<value>http://hadoop1:19888/jobhistory/logs</value>
<description></description>
</property>
<property>
<name>yarn.web-proxy.address</name>
<value>hadoop1:8089</value>
<description>proxy server hostname and port</description>
</property>
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
<description>Configuration to enable or disable log aggregation</description>
</property>
<property>
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>/tmp/logs</value>
<description>Where to aggregate logs to.</description>
</property>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop1</value>
<description></description>
</property>
<property>
<name>yarn.resourcemanager.scheduler.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
<description></description>
</property>
<property>
<name>yarn.nodemanager.local-dirs</name>
<value>/data/nm-local</value>
<description>Comma-separated list of paths on the local filesystem where intermediate data is written.</description>
</property>
<property>
<name>yarn.nodemanager.log-dirs</name>
<value>/data/nm-log</value>
<description>Comma-separated list of paths on the local filesystem where logs are written.</description>
</property>
<property>
<name>yarn.nodemanager.log.retain-seconds</name>
<value>10800</value>
<description>Default time (in seconds) to retain log files on the NodeManager Only applicable if log-aggregation is disabled.</description>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
<description>Shuffle service that needs to be set for Map Reduce applications.</description>
</property>
<!-- Whether to enable the Timeline service -->
<property>
<name>yarn.timeline-service.enabled</name>
<value>true</value>
</property>
<!-- Host of the Timeline web service, accessed on port 8188 -->
<property>
<name>yarn.timeline-service.hostname</name>
<value>hadoop1</value>
</property>
<!-- Whether the ResourceManager publishes metrics to the Timeline service -->
<property>
<name>yarn.system-metrics-publisher.enabled</name>
<value>false</value>
</property>
mapred-site.xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
<description></description>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>hadoop1:10020</value>
<description></description>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>hadoop1:19888</value>
<description></description>
</property>
<property>
<name>mapreduce.jobhistory.intermediate-done-dir</name>
<value>/data/mr-history/tmp</value>
<description></description>
</property>
<property>
<name>mapreduce.jobhistory.done-dir</name>
<value>/data/mr-history/done</value>
<description></description>
</property>
<property>
<name>yarn.app.mapreduce.am.env</name>
<value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
</property>
<property>
<name>mapreduce.map.env</name>
<value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
</property>
<property>
<name>mapreduce.reduce.env</name>
<value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
</property>
Environment variable configuration
vim /etc/profile
export HADOOP_HOME=/export/server/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Distribute hadoop to hadoop2 and hadoop3
Send hadoop:
cd /export/server/
scp -r hadoop-3.3.6/ hadoop2:`pwd`
scp -r hadoop-3.3.6/ hadoop3:`pwd`
Send the environment variables:
cd /etc
scp -r profile hadoop2:`pwd`
scp -r profile hadoop3:`pwd`
Other setup
On hadoop2 and hadoop3, create the softlink:
cd /export/server/
ln -s hadoop-3.3.6/ hadoop
Reload the environment variables and verify:
source /etc/profile
hadoop version
hadoop permission setup
On all three hosts, run the following as root to give the hadoop user ownership of the relevant directories:
mkdir -p /data/nn
mkdir -p /data/dn
chown -R hadoop:hadoop /data
chown -R hadoop:hadoop /export
Take snapshot 2.
Format and start
1. Switch to the hadoop user:
su - hadoop
2. Format the NameNode (run on hadoop1 only):
hdfs namenode -format
3. Start the cluster (on hadoop1)!
One-command start:
start-all.sh
Or start HDFS and YARN separately:
start-dfs.sh
start-yarn.sh
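A quick sanity check (a sketch; the exact process list depends on the configuration): run jps on each node. hadoop1 should show NameNode, DataNode, ResourceManager, and NodeManager (plus SecondaryNameNode, which start-dfs.sh launches on the node it runs from by default); hadoop2 and hadoop3 should show DataNode and NodeManager. Note that start-all.sh does not start the JobHistory Server or the YARN web proxy configured above; if you need them, run on hadoop1:
mapred --daemon start historyserver
yarn --daemon start proxyserver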
Check the web UIs: the HDFS NameNode at http://hadoop1:9870 (set in hdfs-site.xml above) and the YARN ResourceManager at http://hadoop1:8088 (the default port).
zookeeper
Upload and unpack
cd /export/server/
tar -zxvf apache-zookeeper-3.9.1-bin.tar.gz
ln -s apache-zookeeper-3.9.1-bin zookeeper
rm -rf apache-zookeeper-3.9.1-bin.tar.gz
Configuration
cd /export/server/zookeeper/conf/
cp zoo_sample.cfg zoo.cfg
# In zoo.cfg, change the dataDir line to point at the data directory below
vim zoo.cfg
dataDir=/export/server/zookeeper/zkData
server.2=hadoop1:2888:3888
server.1=hadoop2:2888:3888
server.3=hadoop3:2888:3888
cd ..
mkdir zkData
vim myid
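On hadoop1 the file holds just the digit that matches this host's server.N line in zoo.cfg (server.2=hadoop1), so equivalently:
echo 2 > /export/server/zookeeper/zkData/myid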
Distribution and environment variables
Environment variables
vim /etc/profile
export ZOOKEEPER_HOME=/export/server/zookeeper
export PATH=$PATH:$ZOOKEEPER_HOME/bin
Distribution
cd /etc
scp -r profile hadoop2:`pwd`
scp -r profile hadoop3:`pwd`
cd /export/server/
scp -r apache-zookeeper-3.9.1-bin/ hadoop2:`pwd`
scp -r apache-zookeeper-3.9.1-bin/ hadoop3:`pwd`
On hadoop2 and hadoop3, create the softlink:
ln -s apache-zookeeper-3.9.1-bin/ zookeeper
Make sure each machine's myid matches its server.N line in zoo.cfg:
cd /export/server/zookeeper/zkData/
hadoop1: myid stays 2
hadoop2: change myid to 1
hadoop3: change myid to 3
Reload the profile:
source /etc/profile
Re-grant ownership:
chown -R hadoop:hadoop /export
Start ZooKeeper (run on all three machines; zkServer.sh is on PATH via ZOOKEEPER_HOME):
su - hadoop
zkServer.sh start
Check the status; one node should report Mode: leader and the other two Mode: follower:
zkServer.sh status
hbase
Upload and unpack (under /export/server):
cd /export/server
tar -zxvf hbase-2.5.5-hadoop3-bin.tar.gz
ln -s hbase-2.5.5-hadoop3 hbase
rm -rf hbase-2.5.5-hadoop3-bin.tar.gz
Configuration
cd /export/server/hbase/conf/
mkdir -p /data/hbase/logs
hbase-env.sh
export JAVA_HOME=/export/server/jdk
export HBASE_MANAGES_ZK=false
regionservers
backup-masters
vim backup-masters
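The contents of these two files aren't shown in the original; a reasonable sketch, assuming all three nodes run RegionServers and hadoop2 doubles as the backup HMaster (the backup host choice is an assumption):
regionservers:
hadoop1
hadoop2
hadoop3
backup-masters:
hadoop2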
hbase-site.xml
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>hadoop1,hadoop2,hadoop3</value>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://hadoop1:8020/hbase</value>
</property>
<property>
<name>hbase.wal.provider</name>
<value>filesystem</value>
</property>
Distribution, permissions, and environment variables
Environment variables
vim /etc/profile
export HBASE_HOME=/export/server/hbase
export PATH=$PATH:$HBASE_HOME/bin
Distribution
cd /export/server
scp -r hbase-2.5.5-hadoop3/ hadoop2:`pwd`
scp -r hbase-2.5.5-hadoop3/ hadoop3:`pwd`
On hadoop2 and hadoop3, create the softlink:
ln -s hbase-2.5.5-hadoop3/ hbase
cd /etc
scp -r profile hadoop2:`pwd`
scp -r profile hadoop3:`pwd`
source /etc/profile
Permissions (run on all machines):
chown -R hadoop:hadoop /export
chown -R hadoop:hadoop /data
Start
su - hadoop
start-hbase.sh
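To verify (a sketch; 16010 is HBase's default Master web UI port): jps on hadoop1 should show HMaster and HRegionServer, the other nodes HRegionServer (plus an extra HMaster on whichever host backup-masters names), and the web UI should be reachable at http://hadoop1:16010.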