Configuration background
I am using the root user (too lazy to prepend sudo everywhere).
All installed software lives in /opt/module.
All installation packages go in /opt/software.
All scripts go in /root/bin.
Three VMs: hadoop102, hadoop103, hadoop104.
Distribution script fenfa, placed under ~/bin; grant permission with chmod 777 fenfa.
#!/bin/bash
# 1. Check the argument count
if [ $# -lt 1 ]
then
    echo "XXXXXXXXX No Argument XXXXXXXXX"
    exit
fi
# 2. Loop over every other machine in the cluster
for host in hadoop103 hadoop104
do
    echo ==================== $host ====================
    # 3. Loop over all files/directories and send them one by one
    for file in "$@"
    do
        # 4. Check that the file exists
        if [ -e "$file" ]
        then
            # 5. Get the parent directory (resolving symlinks)
            pdir=$(cd -P "$(dirname "$file")"; pwd)
            # 6. Get the file name
            fname=$(basename "$file")
            ssh "$host" "mkdir -p $pdir"
            rsync -av "$pdir/$fname" "$host:$pdir"
        else
            echo "$file does not exist!"
        fi
    done
done
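For example (assuming passwordless SSH to hadoop103/104 is already set up):
fenfa /etc/profile.d/my_env.sh    # send a single file
fenfa /opt/module/hadoop-334      # send a whole directory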
----- Data Collection -----
Hadoop 3.3.4
Cluster plan
Note: do not put the NameNode and the SecondaryNameNode on the same server.
Note: the ResourceManager is also memory-hungry; do not co-locate it with the NameNode or the SecondaryNameNode.

      | hadoop102          | hadoop103                    | hadoop104
HDFS  | NameNode, DataNode | DataNode                     | SecondaryNameNode, DataNode
YARN  | NodeManager        | ResourceManager, NodeManager | NodeManager
Cluster installation steps
Download https://archive.apache.org/dist/hadoop/common/hadoop-3.3.4/hadoop-3.3.4.tar.gz
Upload the package to /opt/software with Xftp.
Extract the package:
cd /opt/software/
tar -zxvf hadoop-3.3.4.tar.gz -C /opt/module/
Rename and create a symlink (for convenience later):
cd /opt/module
mv hadoop-3.3.4 hadoop-334
ln -s hadoop-334 hadoop
Environment variables
vim /etc/profile.d/my_env.sh
#HADOOP_HOME
export HADOOP_HOME=/opt/module/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
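To load the variables into the current shell and sanity-check the install:
source /etc/profile.d/my_env.sh
hadoop version    # should report Hadoop 3.3.4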
Distribute hadoop and the environment variables:
fenfa /opt/module/hadoop-334
fenfa /opt/module/hadoop
fenfa /etc/profile.d/my_env.sh
Configuration files
Configure core-site.xml:
cd $HADOOP_HOME/etc/hadoop
<configuration>
<!-- NameNode address -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop102:8020</value>
</property>
<!-- Hadoop data storage directory -->
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/module/hadoop/data</value>
</property>
<!-- Static user for HDFS web UI login: root -->
<property>
<name>hadoop.http.staticuser.user</name>
<value>root</value>
</property>
<!-- Optional: proxy-user settings for the atguigu superuser, left disabled here.
     The three properties set the hosts, groups, and users that atguigu may proxy.
     XML comments cannot nest, so the whole block is wrapped in one comment. -->
<!--
<property>
<name>hadoop.proxyuser.atguigu.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.atguigu.groups</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.atguigu.users</name>
<value>*</value>
</property>
-->
</configuration>
Configure hdfs-site.xml:
<configuration>
<!-- NameNode web UI address -->
<property>
<name>dfs.namenode.http-address</name>
<value>hadoop102:9870</value>
</property>
<!-- SecondaryNameNode web UI address -->
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop104:9868</value>
</property>
<!-- HDFS replication factor of 1 for the test environment -->
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
Configure yarn-site.xml:
<configuration>
<!-- Use the MapReduce shuffle auxiliary service -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<!-- ResourceManager host -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop103</value>
</property>
<!-- Environment variable inheritance -->
<property>
<name>yarn.nodemanager.env-whitelist</name>
<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
</property>
<!-- Minimum and maximum memory a single YARN container may be allocated -->
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>512</value>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>4096</value>
</property>
<!-- Total physical memory YARN may manage on each NodeManager -->
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>4096</value>
</property>
<!-- Keep the physical-memory check enabled; disable only the virtual-memory check -->
<property>
<name>yarn.nodemanager.pmem-check-enabled</name>
<value>true</value>
</property>
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
</property>
</configuration>
Configure mapred-site.xml:
<configuration>
<!-- Run MapReduce jobs on YARN -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
Configure workers:
hadoop102
hadoop103
hadoop104
Configure the job history server, also in mapred-site.xml:
<!-- History server RPC address -->
<property>
<name>mapreduce.jobhistory.address</name>
<value>hadoop102:10020</value>
</property>
<!-- History server web UI address -->
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>hadoop102:19888</value>
</property>
Enable log aggregation: after an application finishes, its run logs are uploaded to HDFS. In yarn-site.xml:
<!-- Enable log aggregation -->
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<!-- Log server URL -->
<property>
<name>yarn.log.server.url</name>
<value>http://hadoop102:19888/jobhistory/logs</value>
</property>
<!-- Retain aggregated logs for 7 days -->
<property>
<name>yarn.log-aggregation.retain-seconds</name>
<value>604800</value>
</property>
Distribute the config directory: fenfa $HADOOP_HOME/etc/hadoop
Startup
If this is the cluster's first start, format the NameNode on hadoop102 (before formatting, be sure to stop any namenode/datanode processes left from a previous start, then delete the data and logs directories):
hdfs namenode -format
start-dfs.sh
start-yarn.sh    (run this on hadoop103, where the ResourceManager lives)
Check the HDFS web UI at http://hadoop102:9870/
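As a sanity check, jps on each node should roughly match the cluster plan above:
hadoop102: NameNode, DataNode, NodeManager
hadoop103: ResourceManager, NodeManager, DataNode
hadoop104: SecondaryNameNode, DataNode, NodeManager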
Start/stop script (save under ~/bin, e.g. as hdp.sh; the name is up to you):
#!/bin/bash
if [ $# -lt 1 ]
then
    echo "No Args Input..."
    exit
fi
case $1 in
"start")
    echo " =================== Starting the Hadoop cluster ==================="
    echo " --------------- starting hdfs ---------------"
    ssh hadoop102 "/opt/module/hadoop/sbin/start-dfs.sh"
    echo " --------------- starting yarn ---------------"
    ssh hadoop103 "/opt/module/hadoop/sbin/start-yarn.sh"
    echo " --------------- starting historyserver ---------------"
    ssh hadoop102 "/opt/module/hadoop/bin/mapred --daemon start historyserver"
;;
"stop")
    echo " =================== Stopping the Hadoop cluster ==================="
    echo " --------------- stopping historyserver ---------------"
    ssh hadoop102 "/opt/module/hadoop/bin/mapred --daemon stop historyserver"
    echo " --------------- stopping yarn ---------------"
    ssh hadoop103 "/opt/module/hadoop/sbin/stop-yarn.sh"
    echo " --------------- stopping hdfs ---------------"
    ssh hadoop102 "/opt/module/hadoop/sbin/stop-dfs.sh"
;;
*)
    echo "Input Args Error..."
;;
esac
Grant execute permission!!!! (chmod 777 ~/bin/hdp.sh, using the hypothetical name above)
Zookeeper
Steps
Download apache-zookeeper-3.7.1-bin.tar.gz from the Apache archive (https://archive.apache.org/dist/zookeeper/) to /opt/software, then:
tar -zxvf apache-zookeeper-3.7.1-bin.tar.gz -C /opt/module/
mv apache-zookeeper-3.7.1-bin/ zookeeper
Create a zkData directory under /opt/module/zookeeper/.
Create a file named myid under /opt/module/zookeeper/zkData.
Put the number matching the server.X entries below into the file: 2 on hadoop102, 3 on hadoop103, 4 on hadoop104. On hadoop102, for example, the file contains just:
2
Configure zoo.cfg:
Rename zoo_sample.cfg in /opt/module/zookeeper/conf to zoo.cfg.
Change the data directory setting and append the cluster entries:
dataDir=/opt/module/zookeeper/zkData
#######################cluster##########################
server.2=hadoop102:2888:3888
server.3=hadoop103:2888:3888
server.4=hadoop104:2888:3888
fenfa the entire zookeeper directory.
Remember to fix the myid file on each host afterwards, as sketched below.
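A minimal way to patch the ids after distribution (assumes passwordless SSH as root):
ssh hadoop103 "echo 3 > /opt/module/zookeeper/zkData/myid"
ssh hadoop104 "echo 4 > /opt/module/zookeeper/zkData/myid"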
Startup script (again under ~/bin, e.g. zk.sh, with execute permission):
#!/bin/bash
case $1 in
"start"){
    for i in hadoop102 hadoop103 hadoop104
    do
        echo ---------- zookeeper $i start ------------
        ssh $i "/opt/module/zookeeper/bin/zkServer.sh start"
    done
};;
"stop"){
    for i in hadoop102 hadoop103 hadoop104
    do
        echo ---------- zookeeper $i stop ------------
        ssh $i "/opt/module/zookeeper/bin/zkServer.sh stop"
    done
};;
"status"){
    for i in hadoop102 hadoop103 hadoop104
    do
        echo ---------- zookeeper $i status ------------
        ssh $i "/opt/module/zookeeper/bin/zkServer.sh status"
    done
};;
esac
Kafka
Steps
Download kafka_2.12-3.3.1.tgz from the Apache Kafka downloads page (https://kafka.apache.org/downloads) to /opt/software, then:
tar -zxvf kafka_2.12-3.3.1.tgz -C /opt/module/
mv kafka_2.12-3.3.1/ kafka
Go into /opt/module/kafka:
vim config/server.properties
# Globally unique broker id; a number, must not repeat across brokers
broker.id=0
# Host and port the broker advertises externally (configure per node)
advertised.listeners=PLAINTEXT://hadoop102:9092
# Where Kafka stores its log (data) files; created automatically; multiple disk paths may be listed, separated by commas
log.dirs=/opt/module/kafka/datas
# Zookeeper connection string (use the /kafka chroot so everything stays under one znode)
zookeeper.connect=hadoop102:2181,hadoop103:2181,hadoop104:2181/kafka
fenfa the entire kafka directory.
Then edit /opt/module/kafka/config/server.properties on hadoop103 and hadoop104: broker.id must stay unique (with broker.id=0 on hadoop102 as above, use 1 on hadoop103 and 2 on hadoop104), and advertised.listeners must name the local host. A sketch follows.
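One way to patch both followers in place (a sketch; sed -i edits the file directly):
ssh hadoop103 "sed -i 's/^broker.id=.*/broker.id=1/; s/hadoop102:9092/hadoop103:9092/' /opt/module/kafka/config/server.properties"
ssh hadoop104 "sed -i 's/^broker.id=.*/broker.id=2/; s/hadoop102:9092/hadoop104:9092/' /opt/module/kafka/config/server.properties"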
Add the Kafka environment variables to /etc/profile.d/my_env.sh:
vim /etc/profile.d/my_env.sh
#KAFKA_HOME
export KAFKA_HOME=/opt/module/kafka
export PATH=$PATH:$KAFKA_HOME/bin
fenfa the environment variable file again.
Startup script (e.g. kf.sh under ~/bin):
#!/bin/bash
case $1 in
"start"){
    for i in hadoop102 hadoop103 hadoop104
    do
        echo " -------- starting Kafka on $i --------"
        ssh $i "/opt/module/kafka/bin/kafka-server-start.sh -daemon /opt/module/kafka/config/server.properties"
    done
};;
"stop"){
    for i in hadoop102 hadoop103 hadoop104
    do
        echo " -------- stopping Kafka on $i --------"
        ssh $i "/opt/module/kafka/bin/kafka-server-stop.sh"
    done
};;
esac
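A quick smoke test once the brokers are up (the topic name test is just an example):
kafka-topics.sh --bootstrap-server hadoop102:9092 --create --topic test --partitions 1 --replication-factor 3
kafka-topics.sh --bootstrap-server hadoop102:9092 --list
One operational note: always stop Kafka before stopping Zookeeper, otherwise the brokers cannot shut down cleanly.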
Flume
Steps
Download apache-flume-1.10.1-bin.tar.gz from the Apache archive (https://archive.apache.org/dist/flume/).
(1) Upload apache-flume-1.10.1-bin.tar.gz to /opt/software.
(2) Extract it into /opt/module/ and rename:
tar -zxvf /opt/software/apache-flume-1.10.1-bin.tar.gz -C /opt/module/
mv /opt/module/apache-flume-1.10.1-bin /opt/module/flume
Edit conf/log4j2.xml: point LOG_DIR at the Flume install, and add console output so logs are easy to watch while learning:
<Properties>
  <Property name="LOG_DIR">/opt/module/flume/log</Property>
</Properties>
...
<Root level="INFO">
  <AppenderRef ref="LogFile" />
  <AppenderRef ref="Console" />  <!-- add this line -->
</Root>
No need to distribute Flume.
Collection config
Create the Flume configuration file: on hadoop102, create file_to_kafka.conf under Flume's job directory.
# Define the components
a1.sources = r1
a1.channels = c1
# Configure the source
a1.sources.r1.type = TAILDIR
a1.sources.r1.filegroups = f1
a1.sources.r1.filegroups.f1 = /opt/module/applog/log/app.*
a1.sources.r1.positionFile = /opt/module/flume/taildir_position.json
This one is a really nasty pitfall; no idea how the 尚硅谷 course got it to run as-is.
In my experience the agent fails to start when the parent directory of taildir_position.json already exists, so you have to point positionFile at a path containing one extra, not-yet-existing directory.
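For example (the taildir subdirectory is a made-up name; per the note above, any directory that does not exist yet will do):
a1.sources.r1.positionFile = /opt/module/flume/taildir/taildir_position.json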
# Configure the channel
a1.channels.c1.type = org.apache.flume.channel.kafka.KafkaChannel
a1.channels.c1.kafka.bootstrap.servers = hadoop102:9092,hadoop103:9092
a1.channels.c1.kafka.topic = topic_log
a1.channels.c1.parseAsFlumeEvent = false
# Wire them together
a1.sources.r1.channels = c1
Test it with a start/stop script (e.g. f1.sh under ~/bin; the name is made up):
#!/bin/bash
case $1 in
"start"){
    echo " -------- starting collection flume on hadoop102 --------"
    ssh hadoop102 "nohup /opt/module/flume/bin/flume-ng agent -n a1 -c /opt/module/flume/conf/ -f /opt/module/flume/job/file_to_kafka.conf >/dev/null 2>&1 &"
};;
"stop"){
    echo " -------- stopping collection flume on hadoop102 --------"
    ssh hadoop102 "ps -ef | grep file_to_kafka | grep -v grep | awk '{print \$2}' | xargs -n1 kill -9"
};;
esac
esac
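To verify the pipeline end to end, append lines to a file matching /opt/module/applog/log/app.* and watch them arrive in Kafka:
kafka-console-consumer.sh --bootstrap-server hadoop102:9092 --topic topic_log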
----- Data Warehouse -----
Hive
Hive installation
Hive on Spark: Hive still stores the metadata and parses/optimizes the SQL (HQL syntax), but the execution engine becomes Spark, which runs the plan as RDDs.
Note: Hive 3.1.3 and Spark 3.3.1 as downloaded from the official sites are not compatible, because Hive 3.1.3 was built against Spark 2.3.0; Hive 3.1.3 therefore has to be recompiled.
Compilation steps: download the Hive 3.1.3 source from the official site and change the Spark version referenced in the pom file to 3.3.1. If it compiles, package it and take the jars; if it errors, fix the offending methods as the messages indicate until it builds, then package. (Here I simply use 尚硅谷's prebuilt package.)
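A sketch of that recompile, assuming the stock Hive source layout (spark.version is the property Hive's pom references; build flags may vary by environment):
# in the unpacked apache-hive-3.1.3-src directory, set <spark.version>3.3.1</spark.version> in pom.xml, then:
mvn clean package -Pdist -DskipTests -Dmaven.javadoc.skip=true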
Extract, rename, set environment variables (same pattern as above).
Resolve the logging jar conflict; in /opt/module/hive/lib:
mv log4j-slf4j-impl-2.17.1.jar log4j-slf4j-impl-2.17.1.jar.bak
Store Hive's metadata in MySQL. Copy the MySQL JDBC driver into Hive's lib directory:
cp /opt/software/mysql/mysql-connector-j-8.0.31.jar /opt/module/hive/lib/
Configure the metastore connection to MySQL: create hive-site.xml under $HIVE_HOME/conf:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<!-- MySQL URL where Hive stores its metadata -->
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://hadoop102:3306/metastore?useSSL=false&amp;useUnicode=true&amp;characterEncoding=UTF-8&amp;allowPublicKeyRetrieval=true</value>
</property>
<!-- Fully qualified class name of the MySQL driver -->
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.cj.jdbc.Driver</value>
</property>
<!-- MySQL user name -->
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
</property>
<!-- MySQL password -->
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>000000</value>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
</property>
<property>
<name>hive.metastore.schema.verification</name>
<value>false</value>
</property>
<property>
<name>hive.server2.thrift.port</name>
<value>10000</value>
</property>
<property>
<name>hive.server2.thrift.bind.host</name>
<value>hadoop102</value>
</property>
<property>
<name>hive.metastore.event.db.notification.api.auth</name>
<value>false</value>
</property>
<property>
<name>hive.cli.print.header</name>
<value>true</value>
</property>
<property>
<name>hive.cli.print.current.db</name>
<value>true</value>
</property>
</configuration>
Start Hive and initialize the metastore database.
Create the metastore database in MySQL first:
mysql -u root -p
create database metastore;
quit;
Then initialize the schema:
schematool -initSchema -dbType mysql -verbose
Change the metastore character set (so non-ASCII table/column comments are stored correctly). In MySQL:
use metastore;
alter table COLUMNS_V2 modify column COMMENT varchar(256) character set utf8;
alter table TABLE_PARAMS modify column PARAM_VALUE mediumtext character set utf8;
quit;
Start the Hive client:
hive
show databases;
OK
database_name
default
Time taken: 0.955 seconds, Fetched: 1 row(s)
Deploy the without-hadoop ("pure") Spark build on the Hive node.
Upload and extract spark-3.3.1-bin-without-hadoop.tgz, then set the environment variables.
Edit the spark-env.sh config file:
mv /opt/module/spark/conf/spark-env.sh.template /opt/module/spark/conf/spark-env.sh
vim /opt/module/spark/conf/spark-env.sh
Append one line at the end; it lets this Hadoop-free Spark build pick up the cluster's Hadoop jars:
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
Create a Spark config file in Hive's conf directory:
vim /opt/module/hive/conf/spark-defaults.conf
spark.master yarn
spark.eventLog.enabled true
spark.eventLog.dir hdfs://hadoop102:8020/spark-history
spark.executor.memory 1g
spark.driver.memory 1g
Create the following HDFS path for the history logs:
hadoop fs -mkdir /spark-history
Upload the pure Spark jars to HDFS, so every node in the cluster can fetch the Spark dependencies:
hadoop fs -mkdir /spark-jars
hadoop fs -put /opt/module/spark/jars/* /spark-jars
Edit hive-site.xml again:
vim /opt/module/hive/conf/hive-site.xml
<!-- Location of the Spark jars (note: port 8020 must match the NameNode port) -->
<property>
<name>spark.yarn.jars</name>
<value>hdfs://hadoop102:8020/spark-jars/*</value>
</property>
<!-- Hive execution engine -->
<property>
<name>hive.execution.engine</name>
<value>spark</value>
</property>
Test:
hive
create table student(id int, name string);
insert into table student values(1,'abc');
The job output shows the Spark engine is in use, but how the hell did it take 74s???? (The first query has to launch a Spark application on YARN, which dominates the time; later queries in the same session reuse it and run much faster.)
YARN environment tuning
Increase the ApplicationMaster resource share
The capacity scheduler limits the resources that concurrently running ApplicationMasters may occupy in each queue, via the yarn.scheduler.capacity.maximum-am-resource-percent parameter. Its default is 0.1, meaning the ApplicationMasters in a queue may use at most 10% of that queue's resources; the cap keeps AMs from hogging resources and starving the Map/Reduce tasks.
So here the value can be raised somewhat:
vim /opt/module/hadoop/etc/hadoop/capacity-scheduler.xml
<property>
<name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
<value>0.8</value>
</property>
Distribute the file and restart YARN.