Introduction to Hive Execution Engines
Hive supports three execution engines: the default MR, Tez, and Spark.
MR (MapReduce) is the lowest-level engine; it ships with Hive and needs no extra configuration.
Hive on Spark: Hive both stores the metadata and handles SQL parsing and optimization; queries use HQL syntax, but the execution engine is Spark, which runs the plan as RDD operations.
Spark on Hive: Hive only stores the metadata; Spark handles SQL parsing and optimization using Spark SQL syntax, and likewise runs the plan as RDD operations.
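Which engine a session uses is controlled by the hive.execution.engine property (configured later in this guide). A quick sanity check from the Hive CLI, shown as a sketch:
hive (default)> set hive.execution.engine;          -- print the engine currently in use
hive (default)> set hive.execution.engine=mr;       -- fall back to MapReduce for this session
hive (default)> set hive.execution.engine=spark;    -- switch this session to Spark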
Environment setup (passwordless SSH is assumed to be configured already)
- Java 1.8.0+
- Hadoop 2.7.0
- MySQL
- Hive 3.1.2
- Spark 2.3.0
For simplicity everything here runs on a single VM; a multi-node cluster follows the same steps, you just distribute the files and configuration to the other nodes.
Preparing the JDK
1) Remove any existing JDK
sudo rpm -qa | grep -i java | xargs -n1 sudo rpm -e --nodeps
2) Extract the JDK into /opt/module
tar -zxvf jdk-8u212-linux-x64.tar.gz -C /opt/module/
3) Configure the JDK environment variables
(1) Open /etc/profile
Add the following, then save and quit (:wq)
#JAVA_HOME
export JAVA_HOME=/opt/module/jdk1.8.0_212
export PATH=$PATH:$JAVA_HOME/bin
(2) Apply the environment variables
source /etc/profile
(3) Verify that the JDK installed correctly
java -version
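If the installation succeeded, the output should look roughly like this (the exact build string depends on the package you extracted; the lines below assume 8u212):
java version "1.8.0_212"
Java(TM) SE Runtime Environment (build 1.8.0_212-b10)
Java HotSpot(TM) 64-Bit Server VM (build 25.212-b10, mixed mode)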
Preparing Hadoop
Deployment
1) Go to the directory containing the Hadoop package
cd /opt/software/
2) Download and extract the archive into /opt/module
Release page: https://hadoop.apache.org/release/2.7.0.html
wget https://archive.apache.org/dist/hadoop/common/hadoop-2.7.0/hadoop-2.7.0.tar.gz
tar -zxvf hadoop-2.7.0.tar.gz -C /opt/module/
3) Add Hadoop to the environment variables
(1) Note the Hadoop installation path
/opt/module/hadoop-2.7.0
(2) Open the /etc/profile file
sudo vim /etc/profile
Append the Hadoop paths at the end of the profile file (Shift+G jumps to the end in vim):
#HADOOP_HOME
export HADOOP_HOME=/opt/module/hadoop-2.7.0
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
(3) Apply the environment variables
source /etc/profile
Configuring the cluster
1) Core configuration file
Edit core-site.xml (hadoop-2.7.0/etc/hadoop/core-site.xml)
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<!-- NameNode address -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://192.168.1.250:8020</value>
</property>
<!-- Hadoop data storage directory -->
<property>
<name>hadoop.tmp.dir</name>
<value>/mnt/data_online/hadoop-data</value>
</property>
<property>
<name>hadoop.http.staticuser.user</name>
<value>atomecho</value>
</property>
<!-- Hosts from which the proxy user atomecho (superuser) may connect -->
<property>
<name>hadoop.proxyuser.atomecho.hosts</name>
<value>*</value>
</property>
<!-- Groups whose members atomecho may impersonate -->
<property>
<name>hadoop.proxyuser.atomecho.groups</name>
<value>*</value>
</property>
<!-- Users that atomecho may impersonate -->
<property>
<name>hadoop.proxyuser.atomecho.users</name>
<value>*</value>
</property>
<property>
<name>io.compression.codecs</name>
<value>
org.apache.hadoop.io.compress.GzipCodec,
org.apache.hadoop.io.compress.DefaultCodec,
org.apache.hadoop.io.compress.BZip2Codec,
org.apache.hadoop.io.compress.SnappyCodec,
com.hadoop.compression.lzo.LzoCodec,
com.hadoop.compression.lzo.LzopCodec
</value>
</property>
<property>
<name>io.compression.codec.lzo.class</name>
<value>com.hadoop.compression.lzo.LzoCodec</value>
</property>
</configuration>
2) HDFS configuration file
Edit hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<!-- NameNode web UI address (set explicitly to 9870; the Hadoop 2.x default is 50070) -->
<property>
<name>dfs.namenode.http-address</name>
<value>192.168.1.250:9870</value>
</property>
<!-- SecondaryNameNode web UI address -->
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>192.168.1.250:9868</value>
</property>
<!-- Use a replication factor of 1 in this test environment -->
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
3) YARN configuration file
Edit yarn-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<!-- Use mapreduce_shuffle as the NodeManager auxiliary service -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<!-- ResourceManager address -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>192.168.1.250</value>
</property>
<!-- Environment variables inherited by containers -->
<property>
<name>yarn.nodemanager.env-whitelist</name>
<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
</property>
<!-- Minimum and maximum memory a YARN container may request -->
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>512</value>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>12288</value>
</property>
<!-- Physical memory available to the NodeManager -->
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>12288</value>
</property>
<!-- Disable YARN's physical and virtual memory limit checks -->
<property>
<name>yarn.nodemanager.pmem-check-enabled</name>
<value>false</value>
</property>
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
</property>
<!-- Enable log aggregation -->
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<!-- Log server URL -->
<property>
<name>yarn.log.server.url</name>
<value>http://192.168.1.250:19888/jobhistory/logs</value>
</property>
<!-- Retain aggregated logs for 7 days -->
<property>
<name>yarn.log-aggregation.retain-seconds</name>
<value>604800</value>
</property>
</configuration>
4) MapReduce configuration file
Edit mapred-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<!-- Run MapReduce jobs on YARN -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<!-- JobHistory server address -->
<property>
<name>mapreduce.jobhistory.address</name>
<value>192.168.1.250:10020</value>
</property>
<!-- JobHistory server web UI address -->
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>192.168.1.250:19888</value>
</property>
</configuration>
5) Configure the worker list (on Hadoop 2.x this is the etc/hadoop/slaves file; Hadoop 3.x renamed it to workers)
192.168.1.250
6) Configure hadoop-env.sh
export JAVA_HOME=/opt/module/jdk1.8.0_212
Starting the cluster
(1) If the cluster is being started for the first time, format the NameNode on the 192.168.1.250 node (before formatting, be sure to stop any previously started NameNode and DataNode processes, then delete the data and log directories)
bin/hdfs namenode -format
(2) Start HDFS
sbin/start-dfs.sh
(3) Start YARN on the node where the ResourceManager is configured
sbin/start-yarn.sh
(4) HDFS web UI: http://192.168.1.250:9870
(5) SecondaryNameNode web UI: http://192.168.1.250:9868/status.html (shows nothing in single-node mode)
(6) ResourceManager web UI: http://192.168.1.250:8088/cluster
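(7) After both start scripts finish, jps should list all five Hadoop daemons on this single node; something like the following (PIDs will differ):
$ jps
11298 NameNode
11466 DataNode
11723 SecondaryNameNode
12022 ResourceManager
12180 NodeManager
12533 Jps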
LZO compression setup
1) Build hadoop-lzo
wget https://www.oberhumer.com/opensource/lzo/download/lzo-2.10.tar.gz
tar -zxvf lzo-2.10.tar.gz
cd lzo-2.10
./configure --enable-shared --prefix /usr/local/lzo-2.10
make && sudo make install
# build hadoop-lzo (the sources come from the Twitter hadoop-lzo repository)
git clone https://github.com/twitter/hadoop-lzo.git
cd hadoop-lzo
C_INCLUDE_PATH=/usr/local/lzo-2.10/include \
LIBRARY_PATH=/usr/local/lzo-2.10/lib \
mvn clean package
2) Copy the built hadoop-lzo-0.4.20.jar into /opt/module/hadoop-2.7.0/share/hadoop/common/
$ pwd
/opt/module/hadoop-2.7.0/share/hadoop/common/
$ ls
hadoop-lzo-0.4.20.jar
3) Add LZO support to core-site.xml (these properties are already present in the core-site.xml shown earlier; they are repeated here for reference)
<configuration>
<property>
<name>io.compression.codecs</name>
<value>
org.apache.hadoop.io.compress.GzipCodec,
org.apache.hadoop.io.compress.DefaultCodec,
org.apache.hadoop.io.compress.BZip2Codec,
org.apache.hadoop.io.compress.SnappyCodec,
com.hadoop.compression.lzo.LzoCodec,
com.hadoop.compression.lzo.LzopCodec
</value>
</property>
<property>
<name>io.compression.codec.lzo.class</name>
<value>com.hadoop.compression.lzo.LzoCodec</value>
</property>
</configuration>
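To exercise the codec end to end, hadoop-lzo ships a DistributedLzoIndexer tool that builds the index an .lzo file needs in order to be splittable; a sketch, where the input path is hypothetical:
hadoop jar /opt/module/hadoop-2.7.0/share/hadoop/common/hadoop-lzo-0.4.20.jar \
com.hadoop.compression.lzo.DistributedLzoIndexer /input/bigtable.lzo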
Summary of Hadoop ports
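The ports below are the ones this guide configures; where a value differs from the Hadoop 2.x default, the default is noted:
- NameNode RPC (fs.defaultFS): 8020
- NameNode web UI (dfs.namenode.http-address): 9870 (Hadoop 2.x default: 50070)
- SecondaryNameNode web UI: 9868 (Hadoop 2.x default: 50090)
- ResourceManager web UI: 8088
- JobHistory server RPC: 10020
- JobHistory server web UI: 19888
- HiveServer2 Thrift: 10000
- Spark history server web UI: 18080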
Preparing MySQL
Install MySQL (the installation itself is not covered here; the steps below open remote access for root)
1) Switch to the mysql database
mysql> use mysql
2) Query the user table
mysql> select user, host from user;
3) Update the user table, setting the host column for root to %
mysql> update user set host="%" where user="root";
4) Flush privileges
mysql> flush privileges;
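To confirm that remote access works, log in from another host (192.168.1.249 is the MySQL host that hive-site.xml points at later):
mysql -h 192.168.1.249 -uroot -p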
Preparing Hive
Download Hive: https://dlcdn.apache.org/hive/
1) Upload apache-hive-3.1.2-bin.tar.gz to /opt/software on the Linux host
2) Extract apache-hive-3.1.2-bin.tar.gz into /opt/module
tar -zxvf /opt/software/apache-hive-3.1.2-bin.tar.gz -C /opt/module/
3) Rename the extracted apache-hive-3.1.2-bin directory to hive
mv /opt/module/apache-hive-3.1.2-bin/ /opt/module/hive
4) Edit /etc/profile to add the environment variables
sudo vim /etc/profile
Add the following
#HIVE_HOME
export HIVE_HOME=/opt/module/hive
export PATH=$PATH:$HIVE_HOME/bin
Source the /etc/profile file to apply the environment variables
source /etc/profile
Configuring the Hive metastore in MySQL
- Copy the driver
Download the MySQL JDBC driver (Connector/J 8.0.33, matching the jar copied below): https://repo1.maven.org/maven2/com/mysql/mysql-connector-j/8.0.33/
Copy the MySQL JDBC driver into Hive's lib directory
cp /opt/software/mysql-connector-j-8.0.33.jar /opt/module/hive/lib/
- Configure the metastore to use MySQL
Create a new hive-site.xml file in the $HIVE_HOME/conf directory
vim hive-site.xml
Add the following
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://192.168.1.249:3306/metastore?useSSL=false</value>
</property>
<!-- Connector/J 8.x driver class (the legacy com.mysql.jdbc.Driver name is deprecated in 8.x) -->
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.cj.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>Lettcue2kg</value>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
</property>
<property>
<name>hive.metastore.schema.verification</name>
<value>false</value>
</property>
<property>
<name>hive.server2.thrift.port</name>
<value>10000</value>
</property>
<!-- HiveServer2 bind address: the Hive host itself (192.168.1.250), not the MySQL host -->
<property>
<name>hive.server2.thrift.bind.host</name>
<value>192.168.1.250</value>
</property>
<property>
<name>hive.metastore.event.db.notification.api.auth</name>
<value>false</value>
</property>
<property>
<name>hive.cli.print.header</name>
<value>true</value>
</property>
<property>
<name>hive.cli.print.current.db</name>
<value>true</value>
</property>
<!-- Location of the Spark dependency jars (note: port 8020 must match the NameNode port) -->
<property>
<name>spark.yarn.jars</name>
<value>hdfs://192.168.1.250:8020/spark-jars/*</value>
</property>
<!-- Hive execution engine -->
<property>
<name>hive.execution.engine</name>
<value>spark</value>
</property>
<!-- Hive-to-Spark connection timeout -->
<property>
<name>hive.spark.client.connect.timeout</name>
<value>10000ms</value>
</property>
</configuration>
- Start Hive
Initialize the metastore database
1) Log in to MySQL
mysql -uroot -p
2) Create the Hive metastore database
mysql> create database metastore;
mysql> quit;
3) Initialize the Hive metastore schema
schematool -initSchema -dbType mysql -verbose
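If the initialization succeeds, the metastore database is populated with Hive's several dozen schema tables; a quick check from the MySQL client:
mysql> use metastore;
mysql> show tables;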
- Start the Hive client
1) Start the Hive client
bin/hive
2) List the databases
hive (default)> show databases;
OK
database_name
default
Preparing Spark
(1) Spark download page:
http://spark.apache.org/downloads.html
(2) Upload and extract spark-2.3.0-bin-hadoop2.7.tgz
wget https://archive.apache.org/dist/spark/spark-2.3.0/spark-2.3.0-bin-hadoop2.7.tgz
tar -zxvf spark-2.3.0-bin-hadoop2.7.tgz -C /opt/module/
mv /opt/module/spark-2.3.0-bin-hadoop2.7 /opt/module/spark
(3) Configure the SPARK_HOME environment variable
sudo vim /etc/profile
Add the following
# SPARK_HOME
export SPARK_HOME=/opt/module/spark
export PATH=$PATH:$SPARK_HOME/bin
Source the file to apply the change
source /etc/profile
(4) Create a Spark configuration file under Hive's conf directory
vim /opt/module/hive/conf/spark-defaults.conf
Add the following (these parameters take effect when jobs are executed)
spark.master yarn
spark.eventLog.enabled true
spark.eventLog.dir hdfs://192.168.1.250:8020/spark-history
spark.executor.memory 2g
spark.driver.memory 1g
Create the following HDFS directory for storing history logs
hadoop fs -mkdir /spark-history
(5) Upload the Spark "pure" (without-Hadoop) jars to HDFS
Upload and extract spark-2.3.0-bin-without-hadoop.tgz (the without-Hadoop build is used so these jars do not duplicate the Hadoop classes already present on the cluster)
tar -zxvf /opt/software/spark-2.3.0-bin-without-hadoop.tgz
(6) Put the Spark jars into HDFS
hadoop fs -mkdir /spark-jars
hadoop fs -put spark-2.3.0-bin-without-hadoop/jars/* /spark-jars
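A quick check that the jars landed where spark.yarn.jars in hive-site.xml expects them:
hadoop fs -ls /spark-jars | head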
Hive on Spark configuration
Edit the hive-site.xml file
vim /opt/module/hive/conf/hive-site.xml
Add the following (these properties are already included in the hive-site.xml shown earlier; they are repeated here for completeness)
<!-- Location of the Spark dependency jars (note: port 8020 must match the NameNode port) -->
<property>
<name>spark.yarn.jars</name>
<value>hdfs://192.168.1.250:8020/spark-jars/*</value>
</property>
<!-- Hive execution engine -->
<property>
<name>hive.execution.engine</name>
<value>spark</value>
</property>
<!-- Hive-to-Spark connection timeout -->
<property>
<name>hive.spark.client.connect.timeout</name>
<value>10000ms</value>
</property>
1) Compatibility notes
Note: the stock Hive 3.1.2 and Spark 3.0.0 downloads are not compatible with each other, because Hive 3.1.2 was built against Spark 2.x (its pom references Spark 2.3.0, which is why this guide pairs it with Spark 2.3.0 and needs no recompilation). To run Hive 3.1.2 on Spark 3.0.0 you must recompile Hive.
Build steps: download the Hive 3.1.2 sources from the official site, change the Spark version referenced in the pom to 3.0.0, and build. If the build passes, package it and take the jars; if it fails, fix the methods the errors point to until it compiles, then package.
Testing Hive on Spark
(1) Start the Hive client
bin/hive
(2) Create a test table
hive (default)> create table huanhuan(id int, name string);
hive (default)> show tables;
OK
tab_name
huanhuan
Time taken: 0.117 seconds, Fetched: 1 row(s)
(3) Test with an insert
hive (default)> insert into huanhuan values(1,'haoge');
Query ID = root_20230604114221_a1118af6-6182-455b-80fa-308382ddbee0
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Running with YARN Application = application_1685849514092_0001
Kill Command = /opt/module/hadoop-2.7.0/bin/yarn application -kill application_1685849514092_0001
Hive on Spark Session Web UI URL: http://192.168.1.250:43725
Query Hive on Spark job[0] stages: [0, 1]
Spark job[0] status = RUNNING
--------------------------------------------------------------------------------------
STAGES ATTEMPT STATUS TOTAL COMPLETED RUNNING PENDING FAILED
--------------------------------------------------------------------------------------
Stage-0 ........ 0 FINISHED 1 1 0 0 0
Stage-1 ........ 0 FINISHED 1 1 0 0 0
--------------------------------------------------------------------------------------
STAGES: 02/02 [==========================>>] 100% ELAPSED TIME: 23.28 s
--------------------------------------------------------------------------------------
Spark job[0] finished successfully in 23.28 second(s)
Loading data to table default.huanhuan
OK
col1 col2
Time taken: 98.25 seconds
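(4) To confirm the row landed, query the table; a simple full-table select is answered by a fetch task without launching a Spark job, and with hive.cli.print.header enabled the output should look roughly like this:
hive (default)> select * from huanhuan;
OK
huanhuan.id	huanhuan.name
1	haoge
Time taken: 0.2 seconds, Fetched: 1 row(s)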
Spark on YARN & Spark on Hive configuration
- Edit $SPARK_HOME/conf/spark-defaults.conf
spark.master yarn
spark.driver.memory 512m
spark.yarn.am.memory 512m
spark.executor.memory 512m
# Spark event log and history server settings
spark.eventLog.enabled true
spark.eventLog.dir hdfs://192.168.1.250:8020/spark-logs
spark.history.provider org.apache.spark.deploy.history.FsHistoryProvider
spark.history.fs.logDirectory hdfs://192.168.1.250:8020/spark-logs
spark.history.fs.update.interval 10s
spark.history.ui.port 18080
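The event log directory referenced above must exist before any job runs, and the history server has to be started explicitly; Spark ships the start script in its sbin directory:
hadoop fs -mkdir /spark-logs
$SPARK_HOME/sbin/start-history-server.sh
# the history UI is then served at http://192.168.1.250:18080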
- Edit /etc/profile
export HADOOP_HOME=/opt/module/hadoop-2.7.0
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native:$LD_LIBRARY_PATH
- Source /etc/profile
source /etc/profile
- Copy the Hadoop and Hive configuration into Spark (concrete commands follow below)
From the Hadoop configuration, copy
core-site.xml
hdfs-site.xml
into $SPARK_HOME/conf/
From the Hive configuration, copy
hive-site.xml
into $SPARK_HOME/conf/
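The commands, assuming the installation paths used earlier in this guide:
cp $HADOOP_HOME/etc/hadoop/core-site.xml $SPARK_HOME/conf/
cp $HADOOP_HOME/etc/hadoop/hdfs-site.xml $SPARK_HOME/conf/
cp $HIVE_HOME/conf/hive-site.xml $SPARK_HOME/conf/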
- Copy the MySQL driver (reusing the Connector/J jar placed in Hive's lib directory earlier)
cp /opt/module/hive/lib/mysql-connector-j-8.0.33.jar $SPARK_HOME/jars/
Example
main.py
from pyspark.sql import SparkSession

# Create (or reuse) a session with Hive support so spark.sql() can reach the Hive metastore
spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Run a query against the metastore and print the result
df = spark.sql("show databases")
df.show()

# Writing query results into Hive tables is covered at:
# https://www.projectpro.io/recipes/write-csv-data-table-hive-pyspark
Submission script
submit.sh
#!/bin/bash
SPARK_PATH=/opt/module/spark
YARN_QUEUE=default
# DEPLOY_MODE=cluster
DEPLOY_MODE=client
${SPARK_PATH}/bin/spark-submit \
--master yarn \
--name "spark_demo_lr" \
--queue ${YARN_QUEUE} \
--deploy-mode ${DEPLOY_MODE} \
--driver-memory 4g \
--driver-cores 2 \
--executor-memory 4g \
--executor-cores 2 \
--num-executors 2 \
--conf spark.default.parallelism=10 \
--conf spark.executor.memoryOverhead=2g \
--conf spark.driver.memoryOverhead=1g \
--conf spark.yarn.maxAppAttempts=1 \
--conf spark.yarn.submit.waitAppCompletion=true \
./main.py