
[Cloud Computing] Hadoop 2.x Fully Distributed Cluster (Getting Started)

This article walks through setting up a fully distributed Hadoop 2.x cluster, step by step. If anything is wrong or incomplete, corrections and suggestions are welcome.


Tools, software, and packages involved

[Hypervisor] VMware Workstation 16 Pro
[OS image] CentOS-7-x86_64-DVD-1804.iso
[Java] jdk-8u281-linux-x64.rpm
[Hadoop] hadoop-2.7.1.tar.gz
[SSH client] SecureCRTPortable.exe
[File transfer] SecureFXPortable.exe

Network: IP and DNS

Configure the ens33 interface

[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="fc9e207a-c97c-4d14-b36b-969ba295ffa9"
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.196.71
GATEWAY=192.168.196.2
DNS1=114.114.114.114
DNS2=8.8.8.8

Restart the network

[root@localhost ~]# systemctl restart network
[root@localhost ~]# ping www.baidu.com
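A quick sanity check of the ifcfg file can catch a missing key before the restart. `check_ifcfg` below is an illustrative helper, not a CentOS tool; the key names match the ifcfg-ens33 file above:

```shell
# check_ifcfg FILE — verify a static-IP ifcfg file contains the keys this
# guide sets. Illustrative helper; key names match ifcfg-ens33 above.
check_ifcfg() {
  local f="$1" key missing=0
  for key in BOOTPROTO ONBOOT IPADDR GATEWAY DNS1; do
    grep -q "^${key}=" "$f" || { echo "missing: $key"; missing=1; }
  done
  # A fixed IP needs BOOTPROTO=static (quotes are optional in ifcfg files)
  grep -q '^BOOTPROTO="\?static"\?' "$f" || { echo "BOOTPROTO is not static"; missing=1; }
  return $missing
}

# Example:
# check_ifcfg /etc/sysconfig/network-scripts/ifcfg-ens33 && echo "looks good"
```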

Java: install and configure

Install via rpm

[root@localhost Desktop]# ls
jdk-8u281-linux-x64.rpm  
[root@localhost Desktop]# rpm -ivh jdk-8u281-linux-x64.rpm
warning: jdk-8u281-linux-x64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:jdk1.8-2000:1.8.0_281-fcs        ################################# [100%]
Unpacking JAR files...
        tools.jar...
        plugin.jar...
        javaws.jar...
        deploy.jar...
        rt.jar...
        jsse.jar...
        charsets.jar...
        localedata.jar...


After the rpm install, the default installation path is /usr/java/jdk1.8.0_281-amd64/


[root@localhost /]# vi /etc/profile
Append:
export JAVA_HOME=/usr/java/jdk1.8.0_281-amd64
export CLASSPATH=$JAVA_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$PATH

Reload:
[root@localhost /]# source /etc/profile

Check the Java version:
[root@localhost /]# java -version
openjdk version "1.8.0_161"
OpenJDK Runtime Environment (build 1.8.0_161-b14)
OpenJDK 64-Bit Server VM (build 25.161-b14, mixed mode)

Note: the output above still reports the pre-installed OpenJDK 1.8.0_161 rather than the Oracle JDK 1.8.0_281 just installed. If you see this, make sure $JAVA_HOME/bin precedes the old java on PATH (or update the alternatives entry) before continuing.

Preparation for passwordless login

Private key and public key

Private key: stays on this machine
Public key: handed to the other machines

Location: the hidden .ssh directory in the user's home directory
[root@localhost ~]# cd .ssh/
[root@localhost .ssh]# ls
id_rsa  id_rsa.pub
[root@localhost .ssh]# pwd
/root/.ssh

Clone, rename, change IPs

Clone three machines:
hadoop01 192.168.196.71
hadoop02 192.168.196.72
hadoop03 192.168.196.73

Do the following on all three machines:

Change the IP:
[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33

Rename:
[root@localhost ~]# hostnamectl set-hostname hadoop01
[root@localhost ~]# hostnamectl set-hostname hadoop02
[root@localhost ~]# hostnamectl set-hostname hadoop03

Reboot:
[root@localhost ~]# reboot

Passwordless SSH login

Generate the key pair

[root@hadoop01 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:pW1bexRKYdsv8tNjg5DBk7iYoTaOiX/9TksBqN5YIlo root@hadoop01
The key's randomart image is:
+---[RSA 2048]----+
|            o    |
|      .   o..+   |
|     . o ..=o o  |
|    . . =+..+. o |
| .Eo = oSooo+ o .|
|..+ X .  ..o.=.o |
|.. = o.  o. ..++.|
|  .  . .o .  ...o|
|   ..   o+       |
+----[SHA256]-----+
[root@hadoop01 ~]# ssh-copy-id localhost
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'localhost (::1)' can't be established.
ECDSA key fingerprint is SHA256:sd4vCsEraYT7vqvsW4egMtl9ctOCt9SkHQQWe0jK6BM.
ECDSA key fingerprint is MD5:04:f4:c4:4b:8a:ae:a5:cf:de:52:16:40:db:b4:17:fc.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@localhost's password: 
Permission denied, please try again.
root@localhost's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'localhost'"
and check to make sure that only the key(s) you wanted were added.

Distribute the key


[root@hadoop01 ~]# ssh-copy-id 192.168.196.72
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.196.72 (192.168.196.72)' can't be established.
ECDSA key fingerprint is SHA256:sd4vCsEraYT7vqvsW4egMtl9ctOCt9SkHQQWe0jK6BM.
ECDSA key fingerprint is MD5:04:f4:c4:4b:8a:ae:a5:cf:de:52:16:40:db:b4:17:fc.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.196.72's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.196.72'"
and check to make sure that only the key(s) you wanted were added.

[root@hadoop01 ~]# ssh-copy-id 192.168.196.73
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.196.73 (192.168.196.73)' can't be established.
ECDSA key fingerprint is SHA256:sd4vCsEraYT7vqvsW4egMtl9ctOCt9SkHQQWe0jK6BM.
ECDSA key fingerprint is MD5:04:f4:c4:4b:8a:ae:a5:cf:de:52:16:40:db:b4:17:fc.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.196.73's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.196.73'"
and check to make sure that only the key(s) you wanted were added.

Login test

[root@hadoop01 ~]# ssh 192.168.196.72
Last login: Wed Mar 27 11:53:21 2024 from 192.168.196.1
[root@hadoop02 ~]# 
As shown, ssh into hadoop02 now works without a password.

But switching back to hadoop01 still asks for a password:
[root@hadoop02 ~]# ssh 192.168.196.71
The authenticity of host '192.168.196.71 (192.168.196.71)' can't be established.
ECDSA key fingerprint is SHA256:sd4vCsEraYT7vqvsW4egMtl9ctOCt9SkHQQWe0jK6BM.
ECDSA key fingerprint is MD5:04:f4:c4:4b:8a:ae:a5:cf:de:52:16:40:db:b4:17:fc.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.196.71' (ECDSA) to the list of known hosts.
root@192.168.196.71's password: 
Last failed login: Wed Mar 27 11:57:40 PDT 2024 from localhost on ssh:notty
There was 1 failed login attempt since the last successful login.
Last login: Wed Mar 27 11:52:21 2024 from 192.168.196.1
[root@hadoop01 ~]# 

Generate key pairs on hadoop02 and hadoop03 in the same way, send them to the other two machines, then run the passwordless-login test. (Omitted here.)
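The generate-and-distribute routine above can be scripted rather than typed host by host. A hedged sketch — `push_keys` and the `RUN` dry-run switch are illustrative names, and the host list is assumed to match this cluster:

```shell
# push_keys "host1 host2 ..." — copy the local public key to each listed host.
# Set RUN=echo to print the commands instead of running them (dry run).
push_keys() {
  local run="${RUN:-}" h
  # Generate a key pair once if none exists (no passphrase, as in the guide)
  [ -f "$HOME/.ssh/id_rsa" ] || $run ssh-keygen -t rsa -N "" -f "$HOME/.ssh/id_rsa"
  for h in $1; do
    $run ssh-copy-id "root@$h"
  done
}

# Example (dry run):
# RUN=echo push_keys "192.168.196.71 192.168.196.72 192.168.196.73"
```

Each node still has to run this once itself, since every machine needs its own key pair on the other two.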

Hadoop installation and configuration

Create a directory to hold Hadoop:
[root@hadoop01 ~]# mkdir -p /export/servers

進(jìn)入壓縮包的存放目錄
[root@hadoop01 ~]# cd /home/cps/Desktop/
[root@hadoop01 Desktop]# ls
hadoop-2.7.1.tar.gz

Extract:
[root@hadoop01 Desktop]# tar -zxvf  hadoop-2.7.1.tar.gz -C /export/servers/

Confirm the extracted location:
[root@hadoop01 hadoop-2.7.1]# pwd
/export/servers/hadoop-2.7.1

Configure the environment:
[root@hadoop01 hadoop-2.7.1]# vi /etc/profile
Append:
export HADOOP_HOME=/export/servers/hadoop-2.7.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Reload:
[root@hadoop01 hadoop-2.7.1]# source /etc/profile

Check the Hadoop version:
[root@hadoop01 hadoop-2.7.1]# hadoop version
Hadoop 2.7.1
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a
Compiled by jenkins on 2015-06-29T06:04Z
Compiled with protoc 2.5.0
From source with checksum fc0a1a23fc1868e4d5ee7fa2b28a58a
This command was run using /export/servers/hadoop-2.7.1/share/hadoop/common/hadoop-common-2.7.1.jar
[root@hadoop01 hadoop-2.7.1]# 

Install and configure the other two machines the same way. (Omitted here.)

0. 進(jìn)入主節(jié)點(diǎn)配置目錄/etc/hadoop/

[root@hadoop01 ~]# cd /export/servers/hadoop-2.7.1/etc/hadoop/
[root@hadoop01 hadoop]# ll
total 156
-rw-r--r--. 1 10021 10021  4436 Jun 28  2015 capacity-scheduler.xml
-rw-r--r--. 1 10021 10021  1335 Jun 28  2015 configuration.xsl
-rw-r--r--. 1 10021 10021   318 Jun 28  2015 container-executor.cfg
-rw-r--r--. 1 10021 10021  1123 Mar 29 01:12 core-site.xml
-rw-r--r--. 1 10021 10021  3670 Jun 28  2015 hadoop-env.cmd
-rw-r--r--. 1 10021 10021  4240 Mar 29 01:20 hadoop-env.sh
-rw-r--r--. 1 10021 10021  2598 Jun 28  2015 hadoop-metrics2.properties
-rw-r--r--. 1 10021 10021  2490 Jun 28  2015 hadoop-metrics.properties
-rw-r--r--. 1 10021 10021  9683 Jun 28  2015 hadoop-policy.xml
-rw-r--r--. 1 10021 10021  1132 Mar 29 01:21 hdfs-site.xml
-rw-r--r--. 1 10021 10021  1449 Jun 28  2015 httpfs-env.sh
-rw-r--r--. 1 10021 10021  1657 Jun 28  2015 httpfs-log4j.properties
-rw-r--r--. 1 10021 10021    21 Jun 28  2015 httpfs-signature.secret
-rw-r--r--. 1 10021 10021   620 Jun 28  2015 httpfs-site.xml
-rw-r--r--. 1 10021 10021  3518 Jun 28  2015 kms-acls.xml
-rw-r--r--. 1 10021 10021  1527 Jun 28  2015 kms-env.sh
-rw-r--r--. 1 10021 10021  1631 Jun 28  2015 kms-log4j.properties
-rw-r--r--. 1 10021 10021  5511 Jun 28  2015 kms-site.xml
-rw-r--r--. 1 10021 10021 11237 Jun 28  2015 log4j.properties
-rw-r--r--. 1 10021 10021   951 Jun 28  2015 mapred-env.cmd
-rw-r--r--. 1 10021 10021  1431 Mar 29 01:28 mapred-env.sh
-rw-r--r--. 1 10021 10021  4113 Jun 28  2015 mapred-queues.xml.template
-rw-r--r--. 1 root  root    950 Mar 29 01:29 mapred-site.xml
-rw-r--r--. 1 10021 10021   758 Jun 28  2015 mapred-site.xml.template
-rw-r--r--. 1 10021 10021    10 Jun 28  2015 slaves
-rw-r--r--. 1 10021 10021  2316 Jun 28  2015 ssl-client.xml.example
-rw-r--r--. 1 10021 10021  2268 Jun 28  2015 ssl-server.xml.example
-rw-r--r--. 1 10021 10021  2250 Jun 28  2015 yarn-env.cmd
-rw-r--r--. 1 10021 10021  4585 Mar 29 01:23 yarn-env.sh
-rw-r--r--. 1 10021 10021   933 Mar 29 01:25 yarn-site.xml

1. Core configuration: core-site.xml

[root@hadoop01 hadoop]# vi core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <!-- Specify the HDFS NameNode -->
        <value>hdfs://192.168.196.71:9000</value>
    </property>
    <!-- Hadoop temporary directory; default is /tmp/hadoop-${user.name} -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/export/servers/hadoop-2.7.1/tmp</value>
    </property>
</configuration>

2. HDFS configuration: hadoop-env.sh

[root@hadoop01 hadoop]# vi hadoop-env.sh 

Set JAVA_HOME:

# The java implementation to use.
export JAVA_HOME=/usr/java/jdk1.8.0_281-amd64

3. HDFS configuration: hdfs-site.xml


[root@hadoop01 hadoop]# vi hdfs-site.xml

<configuration>

    <!-- Number of HDFS replicas -->
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    
    <!-- IP and port of the host running the secondary namenode -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>192.168.196.73:50090</value>
    </property>

</configuration>

4. YARN configuration: yarn-env.sh

Set JAVA_HOME (same as in hadoop-env.sh).

5. YARN configuration: yarn-site.xml


[root@hadoop01 hadoop]# vi yarn-site.xml

<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>192.168.196.72</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

6. MapReduce configuration: mapred-env.sh

Set JAVA_HOME (same as in hadoop-env.sh).

7. MapReduce configuration: mapred-site.xml

Copy the template:
[root@hadoop01 hadoop]# cp mapred-site.xml.template mapred-site.xml
[root@hadoop01 hadoop]# vi mapred-site.xml

<configuration>
    <!-- Framework MapReduce runs on; YARN here (default is local) -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
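With all four *-site.xml files edited, a quick text-level spot check that each property landed where expected can save a restart cycle. `get_prop` below is a rough sketch (a line-oriented scan, not a real XML parser), assuming the simple one-tag-per-line layout used above:

```shell
# get_prop FILE NAME — print the <value> that follows <name>NAME</name>.
# Works for the hand-written one-tag-per-line layout of *-site.xml files.
get_prop() {
  awk -v name="$2" '
    $0 ~ "<name>" name "</name>" { want = 1; next }
    want && match($0, /<value>[^<]*<\/value>/) {
      # strip the 7-char <value> prefix and 8-char </value> suffix
      print substr($0, RSTART + 7, RLENGTH - 15)
      exit
    }' "$1"
}

# Example:
# get_prop core-site.xml fs.defaultFS                 # expect hdfs://192.168.196.71:9000
# get_prop mapred-site.xml mapreduce.framework.name   # expect yarn
```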

Distribute the configuration files across the cluster

1. The xsync distribution script


[root@hadoop01 bin]# pwd
/bin
[root@hadoop01 bin]# vi xsync

#!/bin/bash
# Exit immediately if no arguments were given
pcount=$#
if ((pcount == 0)); then
  echo "no args"
  exit 1
fi

# Get the file name
p1=$1
fname=$(basename "$p1")
echo "fname=$fname"

# Resolve the parent directory to an absolute path
pdir=$(cd -P "$(dirname "$p1")"; pwd)
echo "pdir=$pdir"

# Current user
user=$(whoami)
# Loop over the other nodes
for ((host=72; host<74; host++)); do
  echo "====== rsync -rvl $pdir/$fname $user@192.168.196.$host:$pdir ======"
  rsync -rvl "$pdir/$fname" "$user@192.168.196.$host:$pdir"
done
[root@hadoop01 bin]# chmod 777 xsync

2. Distribute the configuration

[root@hadoop01 bin]# cd /export/servers/hadoop-2.7.1/etc/

[root@hadoop01 etc]# xsync hadoop/
fname=hadoop
pdir=/export/servers/hadoop-2.7.1/etc
====== rsync -rvl /export/servers/hadoop-2.7.1/etc/hadoop root@192.168.196.72:/export/servers/hadoop-2.7.1/etc ======
sending incremental file list
hadoop/.mapred-env.sh.swp
hadoop/core-site.xml
hadoop/hadoop-env.sh
hadoop/hdfs-site.xml
hadoop/mapred-env.sh
hadoop/mapred-site.xml
hadoop/yarn-env.sh
hadoop/yarn-site.xml

sent 19,167 bytes  received 295 bytes  3,538.55 bytes/sec
total size is 91,049  speedup is 4.68
====== rsync -rvl /export/servers/hadoop-2.7.1/etc/hadoop root@192.168.196.73:/export/servers/hadoop-2.7.1/etc ======
sending incremental file list
hadoop/.mapred-env.sh.swp
hadoop/core-site.xml
hadoop/hadoop-env.sh
hadoop/hdfs-site.xml
hadoop/mapred-env.sh
hadoop/mapred-site.xml
hadoop/yarn-env.sh
hadoop/yarn-site.xml

sent 19,167 bytes  received 295 bytes  2,994.15 bytes/sec
total size is 91,049  speedup is 4.68
[root@hadoop01 etc]# 

3. Check the distribution

Check the distribution on hadoop03 the same way. (Omitted here.)

[root@hadoop02 ~]#  source /etc/profile
[root@hadoop03 ~]#  source /etc/profile

Cluster: starting daemons on single nodes

hadoop01

Format

First check whether the logs/ and tmp/ directories exist under /export/servers/hadoop-2.7.1.
If they do, delete them first with rm -rf tmp/ logs/ — check all three machines!
Only then is the directory clean:

[root@hadoop01 etc]# cd ..
[root@hadoop01 hadoop-2.7.1]# ll
total 28
drwxr-xr-x. 2 10021 10021   194 Jun 28  2015 bin
drwxr-xr-x. 3 10021 10021    20 Jun 28  2015 etc
drwxr-xr-x. 2 10021 10021   106 Jun 28  2015 include
drwxr-xr-x. 3 10021 10021    20 Jun 28  2015 lib
drwxr-xr-x. 2 10021 10021   239 Jun 28  2015 libexec
-rw-r--r--. 1 10021 10021 15429 Jun 28  2015 LICENSE.txt
-rw-r--r--. 1 10021 10021   101 Jun 28  2015 NOTICE.txt
-rw-r--r--. 1 10021 10021  1366 Jun 28  2015 README.txt
drwxr-xr-x. 2 10021 10021  4096 Jun 28  2015 sbin
drwxr-xr-x. 4 10021 10021    31 Jun 28  2015 share
[root@hadoop01 hadoop-2.7.1]# 

The NameNode must be formatted before the first startup:

[root@hadoop01 hadoop-2.7.1]# bin/hdfs namenode -format
(output truncated)
24/03/29 09:34:10 INFO util.GSet: capacity      = 2^15 = 32768 entries
24/03/29 09:34:26 INFO namenode.FSImage: Allocated new BlockPoolId: BP-241537842-192.168.196.71-1711730066285
24/03/29 09:34:26 INFO common.Storage: Storage directory /export/servers/hadoop-2.7.1/tmp/dfs/name has been successfully formatted.
24/03/29 09:34:26 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
24/03/29 09:34:26 INFO util.ExitUtil: Exiting with status 0
24/03/29 09:34:26 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop01/192.168.196.71
************************************************************/


發(fā)現(xiàn)問(wèn)題?。?/h5>
[root@hadoop01 hadoop-2.7.1]# jps
bash: jps: command not found...

Link jps into /usr/bin:

[root@hadoop01 bin]# pwd
/usr/bin
[root@hadoop01 bin]# ln -s -f /usr/java/jdk1.8.0_281-amd64/bin/jps jps
[root@hadoop01 bin]# jps
46708 Jps
啟動(dòng) NameNode、DataNode
[root@hadoop01 bin]# cd /export/servers/hadoop-2.7.1
[root@hadoop01 hadoop-2.7.1]# jps
46629 Jps

[root@hadoop01 hadoop-2.7.1]# sbin/hadoop-daemon.sh start namenode
starting namenode, logging to /export/servers/hadoop-2.7.1/logs/hadoop-root-namenode-hadoop01.out
[root@hadoop01 hadoop-2.7.1]# jps
49991 Jps
49944 NameNode

[root@hadoop01 hadoop-2.7.1]# sbin/hadoop-daemon.sh start datanode
starting datanode, logging to /export/servers/hadoop-2.7.1/logs/hadoop-root-datanode-hadoop01.out
[root@hadoop01 hadoop-2.7.1]# jps
51479 DataNode
51527 Jps
49944 NameNode

hadoop02

啟動(dòng)DataNode
Last login: Fri Mar 29 09:33:25 2024 from 192.168.196.1
[root@hadoop02 ~]# cd /export/servers/hadoop-2.7.1
[root@hadoop02 hadoop-2.7.1]# sbin/hadoop-daemon.sh start datanode
starting datanode, logging to /export/servers/hadoop-2.7.1/logs/hadoop-root-datanode-hadoop02.out
[root@hadoop02 hadoop-2.7.1]# jps
bash: jps: command not found...
[root@hadoop02 hadoop-2.7.1]# cd /usr/bin/
[root@hadoop02 bin]# pwd
/usr/bin
[root@hadoop02 bin]# ln -s -f /usr/java/jdk1.8.0_281-amd64/bin/jps jps
[root@hadoop02 bin]# cd /export/servers/hadoop-2.7.1
[root@hadoop02 hadoop-2.7.1]# jps
34802 Jps
34505 DataNode

hadoop03

啟動(dòng)DataNode
Last login: Fri Mar 29 16:43:14 2024 from 192.168.196.1
[root@hadoop03 ~]# cd /usr/bin/
[root@hadoop03 bin]# ln -s -f /usr/java/jdk1.8.0_281-amd64/bin/jps jps
[root@hadoop03 bin]#  cd /export/servers/hadoop-2.7.1
[root@hadoop03 hadoop-2.7.1]# sbin/hadoop-daemon.sh start datanode
starting datanode, logging to /export/servers/hadoop-2.7.1/logs/hadoop-root-datanode-hadoop03.out
[root@hadoop03 hadoop-2.7.1]# jps
41073 DataNode
41125 Jps
[root@hadoop03 hadoop-2.7.1]# 

Web test (single-daemon startup)

關(guān)閉防火墻,三臺(tái)。

[root@hadoop01 ~]# systemctl stop firewalld 

Browse to http://192.168.196.71:50070


Cluster-wide startup and configuration

Configure slaves

[root@hadoop01 bin]# cd /export/servers/hadoop-2.7.1
[root@hadoop01 hadoop-2.7.1]# cd etc/
[root@hadoop01 etc]# cd hadoop/
[root@hadoop01 hadoop]# vi slaves 
[root@hadoop01 hadoop]# cat slaves 
192.168.196.71
192.168.196.72
192.198.196.73
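A small sanity check on slaves can catch subnet typos — note the 192.198.196.73 entry above, which should read 192.168.196.73 — before they surface later as connection errors. `check_slaves` is an illustrative helper, not a Hadoop tool:

```shell
# check_slaves FILE PREFIX — flag slaves entries outside the cluster subnet.
# Illustrative helper; for this guide PREFIX would be 192.168.196.
check_slaves() {
  local file="$1" prefix="$2" line bad=0
  while IFS= read -r line; do
    [ -z "$line" ] && continue
    case "$line" in
      "$prefix"*) ;;                              # expected subnet, OK
      *) echo "suspicious entry: $line"; bad=1 ;;
    esac
  done < "$file"
  return $bad
}

# Example:
# check_slaves /export/servers/hadoop-2.7.1/etc/hadoop/slaves 192.168.196.
```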

Sync with the xsync script

[root@hadoop01 hadoop]# xsync slaves 
fname=slaves
pdir=/export/servers/hadoop-2.7.1/etc/hadoop
====== rsync -rvl /export/servers/hadoop-2.7.1/etc/hadoop/slaves root@192.168.196.72:/export/servers/hadoop-2.7.1/etc/hadoop ======
sending incremental file list
slaves

sent 137 bytes  received 41 bytes  27.38 bytes/sec
total size is 47  speedup is 0.26
====== rsync -rvl /export/servers/hadoop-2.7.1/etc/hadoop/slaves root@192.168.196.73:/export/servers/hadoop-2.7.1/etc/hadoop ======
sending incremental file list
slaves

sent 137 bytes  received 41 bytes  32.36 bytes/sec
total size is 47  speedup is 0.26

Check the distribution

[root@hadoop02 hadoop-2.7.1]# cd etc/
[root@hadoop02 etc]# cd hadoop/
[root@hadoop02 hadoop]# cat slaves 
192.168.196.71
192.168.196.72
192.198.196.73

hadoop03 (omitted here)

Stop the single daemons and verify with jps

[root@hadoop01 hadoop]# jps
51479 DataNode
49944 NameNode
90493 Jps
[root@hadoop01 hadoop]# cd ..
[root@hadoop01 etc]# cd ..
[root@hadoop01 hadoop-2.7.1]# sbin/hadoop-daemon.sh stop datanode
stopping datanode
[root@hadoop01 hadoop-2.7.1]# sbin/hadoop-daemon.sh stop namenode
stopping namenode
[root@hadoop01 hadoop-2.7.1]# jps
90981 Jps
[root@hadoop01 hadoop-2.7.1]# 
[root@hadoop02 hadoop-2.7.1]# jps
34505 DataNode
73214 Jps
[root@hadoop02 hadoop-2.7.1]# sbin/hadoop-daemon.sh stop datanode
stopping datanode
[root@hadoop02 hadoop-2.7.1]# jps
73382 Jps
[root@hadoop02 hadoop-2.7.1]# 
[root@hadoop03 hadoop]# cd ..
[root@hadoop03 etc]# cd ..
[root@hadoop03 hadoop-2.7.1]# jps
41073 DataNode
75572 Jps
[root@hadoop03 hadoop-2.7.1]# sbin/hadoop-daemon.sh stop datanode
stopping datanode
[root@hadoop03 hadoop-2.7.1]# jps
75990 Jps

hadoop01: start HDFS (1st attempt, fails)

[root@hadoop01 hadoop-2.7.1]# sbin/start-dfs.sh 
Starting namenodes on [hadoop01]
The authenticity of host 'hadoop01 (fe80::466a:550e:e194:cd87%ens33)' can't be established.
ECDSA key fingerprint is SHA256:sd4vCsEraYT7vqvsW4egMtl9ctOCt9SkHQQWe0jK6BM.
ECDSA key fingerprint is MD5:04:f4:c4:4b:8a:ae:a5:cf:de:52:16:40:db:b4:17:fc.
Are you sure you want to continue connecting (yes/no)? yes
hadoop01: Warning: Permanently added 'hadoop01,fe80::466a:550e:e194:cd87%ens33' (ECDSA) to the list of known hosts.
hadoop01: starting namenode, logging to /export/servers/hadoop-2.7.1/logs/hadoop-root-namenode-hadoop01.out
The authenticity of host '192.168.196.71 (192.168.196.71)' can't be established.
ECDSA key fingerprint is SHA256:sd4vCsEraYT7vqvsW4egMtl9ctOCt9SkHQQWe0jK6BM.
ECDSA key fingerprint is MD5:04:f4:c4:4b:8a:ae:a5:cf:de:52:16:40:db:b4:17:fc.
Are you sure you want to continue connecting (yes/no)? 192.168.196.72: starting datanode, logging to /export/servers/hadoop-2.7.1/logs/hadoop-root-datanode-hadoop02.out
192.198.196.73: ssh: connect to host 192.198.196.73 port 22: Connection refused
yes
192.168.196.71: Warning: Permanently added '192.168.196.71' (ECDSA) to the list of known hosts.
192.168.196.71: starting datanode, logging to /export/servers/hadoop-2.7.1/logs/hadoop-root-datanode-hadoop01.out
Starting secondary namenodes [192.168.196.73]
192.168.196.73: starting secondarynamenode, logging to /export/servers/hadoop-2.7.1/logs/hadoop-root-secondarynamenode-hadoop03.out
發(fā)現(xiàn)問(wèn)題?。?!
hadoop01: starting namenode, 正常
192.168.196.72: starting datanode, 正常
192.198.196.73: ssh: connect to host 192.198.196.73 port 22: Connection refused ==> 異常
192.168.196.71: starting datanode, 正常
192.168.196.73: starting secondarynamenode, 正常

Stop everything with stop-dfs.sh, then check each node with jps to make sure the DataNode, NameNode, and SecondaryNameNode processes are all gone.

Check whether logs/ and tmp/ exist under /export/servers/hadoop-2.7.1 and delete them with rm -rf tmp/ logs/ — check all three machines!

主節(jié)點(diǎn)使用scp同步到另外兩臺(tái)機(jī)

scp拷貝同步
[root@hadoop01 ~]# scp /etc/profile 192.168.196.72:/etc/profile
profile                                                                          100% 2049   664.2KB/s   00:00    
[root@hadoop01 ~]# scp /etc/profile 192.168.196.73:/etc/profile
profile                                                                          100% 2049   535.7KB/s   00:00    
[root@hadoop01 ~]# scp -r /export/ 192.168.196.72:/
[root@hadoop01 ~]# scp -r /export/ 192.168.196.73:/

hadoop01: start HDFS (2nd attempt, succeeds)

[root@hadoop01 etc]# start-dfs.sh
Starting namenodes on [hadoop01]
hadoop01: starting namenode, logging to /export/servers/hadoop-2.7.1/logs/hadoop-root-namenode-hadoop01.out
192.168.196.73: starting datanode, logging to /export/servers/hadoop-2.7.1/logs/hadoop-root-datanode-hadoop03.out
192.168.196.72: starting datanode, logging to /export/servers/hadoop-2.7.1/logs/hadoop-root-datanode-hadoop02.out
192.168.196.71: starting datanode, logging to /export/servers/hadoop-2.7.1/logs/hadoop-root-datanode-hadoop01.out
Starting secondary namenodes [192.168.196.73]
192.168.196.73: starting secondarynamenode, logging to /export/servers/hadoop-2.7.1/logs/hadoop-root-secondarynamenode-hadoop03.out
Check each node's processes with jps:
[root@hadoop01 hadoop-2.7.1]# jps
6033 Jps
5718 NameNode
5833 DataNode
[root@hadoop01 hadoop-2.7.1]# 
[root@hadoop02 hadoop-2.7.1]# jps
5336 DataNode
5390 Jps
[root@hadoop02 hadoop-2.7.1]# 
[root@hadoop03 hadoop-2.7.1]# jps
5472 SecondaryNameNode
5523 Jps
5387 DataNode
[root@hadoop03 hadoop-2.7.1]# 

hadoop02: start YARN (1st attempt, fails)

[root@hadoop02 hadoop-2.7.1]# start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /export/servers/hadoop-2.7.1/logs/yarn-root-resourcemanager-hadoop02.out
192.168.196.71: starting nodemanager, logging to /export/servers/hadoop-2.7.1/logs/yarn-root-nodemanager-hadoop01.out
192.168.196.72: starting nodemanager, logging to /export/servers/hadoop-2.7.1/logs/yarn-root-nodemanager-hadoop02.out
192.168.196.73: starting nodemanager, logging to /export/servers/hadoop-2.7.1/logs/yarn-root-nodemanager-hadoop03.out
[root@hadoop02 hadoop-2.7.1]# jps
12035 NodeManager
11911 ResourceManager
12409 Jps
[root@hadoop03 hadoop-2.7.1]# jps
11890 Jps
11638 SecondaryNameNode
11852 NodeManager
[root@hadoop01 current]# jps
13206 NodeManager
12791 NameNode
13375 Jps
發(fā)現(xiàn)問(wèn)題?。?!
NodeManager和DataNode進(jìn)程同時(shí)只能工作一個(gè)。

原因: -format 在格式化后會(huì)生成一個(gè)clusterld(集群id),后續(xù)格式化新生成的clusterld會(huì)與未刪除的產(chǎn)生沖突。

解決:
在格式化之前,先刪除之前格式化 -format產(chǎn)生的信息(默認(rèn)在/tmp,如果配置了該目錄,那就去你配置的目錄,具體查xml文件)rm -rf tmp/ logs/

[root@hadoop01 hadoop-2.7.1]# rm -rf tmp/ logs/
[root@hadoop01 hadoop-2.7.1]# ll
total 28
drwxr-xr-x. 2 10021 10021   194 Jun 28  2015 bin
drwxr-xr-x. 3 10021 10021    20 Jun 28  2015 etc
drwxr-xr-x. 2 10021 10021   106 Jun 28  2015 include
drwxr-xr-x. 3 10021 10021    20 Jun 28  2015 lib
drwxr-xr-x. 2 10021 10021   239 Jun 28  2015 libexec
-rw-r--r--. 1 10021 10021 15429 Jun 28  2015 LICENSE.txt
-rw-r--r--. 1 10021 10021   101 Jun 28  2015 NOTICE.txt
-rw-r--r--. 1 10021 10021  1366 Jun 28  2015 README.txt
drwxr-xr-x. 2 10021 10021  4096 Jun 28  2015 sbin
drwxr-xr-x. 4 10021 10021    31 Jun 28  2015 share

(Omitted) Do the same on the other two machines: rm -rf tmp/ logs/
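This per-machine cleanup can be scripted from the master. A sketch — `clean_hadoop_dirs` and the `RUN` dry-run switch are illustrative; the paths follow this guide's layout:

```shell
# clean_hadoop_dirs "host1 host2 ..." — remove tmp/ and logs/ on each node
# before re-running hdfs namenode -format, so no stale clusterID survives.
# Set RUN=echo to print the commands instead of running them (dry run).
clean_hadoop_dirs() {
  local run="${RUN:-}" h dir=/export/servers/hadoop-2.7.1
  for h in $1; do
    $run ssh "root@$h" "rm -rf $dir/tmp $dir/logs"
  done
}

# Example (dry run):
# RUN=echo clean_hadoop_dirs "192.168.196.71 192.168.196.72 192.168.196.73"
```

The dry run prints each ssh command so you can confirm the target paths before deleting anything for real.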

hadoop02: start YARN (2nd attempt, succeeds — full HDFS + YARN startup)

1. Format the NameNode

[root@hadoop01 hadoop-2.7.1]# hdfs namenode -format
(output truncated)
24/03/31 03:04:20 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1182120143-192.168.196.71-1711879460448
24/03/31 03:04:20 INFO common.Storage: Storage directory /export/servers/hadoop-2.7.1/tmp/dfs/name has been successfully formatted.
24/03/31 03:04:20 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
24/03/31 03:04:20 INFO util.ExitUtil: Exiting with status 0
24/03/31 03:04:20 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop01/192.168.196.71
************************************************************/



2. Check processes

[root@hadoop01 hadoop-2.7.1]# jps
17391 Jps
[root@hadoop02 hadoop-2.7.1]# jps
16333 Jps
[root@hadoop03 hadoop-2.7.1]# jps
16181 Jps



3. Start HDFS from hadoop01

[root@hadoop01 hadoop-2.7.1]# start-dfs.sh
Starting namenodes on [hadoop01]
hadoop01: starting namenode, logging to /export/servers/hadoop-2.7.1/logs/hadoop-root-namenode-hadoop01.out
192.168.196.73: starting datanode, logging to /export/servers/hadoop-2.7.1/logs/hadoop-root-datanode-hadoop03.out
192.168.196.72: starting datanode, logging to /export/servers/hadoop-2.7.1/logs/hadoop-root-datanode-hadoop02.out
192.168.196.71: starting datanode, logging to /export/servers/hadoop-2.7.1/logs/hadoop-root-datanode-hadoop01.out
Starting secondary namenodes [192.168.196.73]
192.168.196.73: starting secondarynamenode, logging to /export/servers/hadoop-2.7.1/logs/hadoop-root-secondarynamenode-hadoop03.out
[root@hadoop01 hadoop-2.7.1]# 



4. Check processes

[root@hadoop01 hadoop-2.7.1]# jps
18183 Jps
17833 DataNode
17727 NameNode

[root@hadoop02 hadoop-2.7.1]# jps
16705 Jps
16493 DataNode

[root@hadoop03 hadoop-2.7.1]# jps
16438 SecondaryNameNode
16360 DataNode
16638 Jps



5. Start YARN from hadoop02

[root@hadoop02 hadoop-2.7.1]# start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /export/servers/hadoop-2.7.1/logs/yarn-root-resourcemanager-hadoop02.out
192.168.196.71: starting nodemanager, logging to /export/servers/hadoop-2.7.1/logs/yarn-root-nodemanager-hadoop01.out
192.168.196.73: starting nodemanager, logging to /export/servers/hadoop-2.7.1/logs/yarn-root-nodemanager-hadoop03.out
192.168.196.72: starting nodemanager, logging to /export/servers/hadoop-2.7.1/logs/yarn-root-nodemanager-hadoop02.out


6. Check processes

[root@hadoop01 hadoop-2.7.1]# jps
18305 NodeManager
18373 Jps
17833 DataNode
17727 NameNode

[root@hadoop02 hadoop-2.7.1]# jps
16995 Jps
16917 NodeManager
16809 ResourceManager
16493 DataNode

[root@hadoop03 hadoop-2.7.1]# jps
16755 NodeManager
16438 SecondaryNameNode
16360 DataNode
18028 Jps

Web test

HDFS: NameNode, DataNode

[root@hadoop01 ~]# systemctl stop firewalld  

http://192.168.196.71:50070/

[root@hadoop03 hadoop-2.7.1]# systemctl stop firewalld  

http://192.168.196.73:50090/

YARN: ResourceManager

[root@hadoop02 ~]# systemctl stop firewalld  

http://192.168.196.72:8088

Pitfalls you may hit (summary)

  1. On startup the cluster generates an ID, and related state lives under the tmp/ and logs/ directories. If the experiment has to be redone, avoid a clusterID conflict between the new format and the old data. Before running -format:
    first confirm with jps that every process is stopped, then delete both directories with rm -rf tmp/ logs/ (tmp/ holds the HDFS data, logs/ the daemon logs).

  2. web測(cè)試無(wú)法打開(kāi)時(shí),查訪(fǎng)問(wèn)鏈接對(duì)應(yīng)的主機(jī)的防火墻是否關(guān)閉systemctl stop firewalld

  3. If jps shows no processes but a restart claims they are already running, stale process files may remain under /tmp on that machine; delete the cluster-related ones, then start the cluster again.

  4. If jps does not work:
    First check JAVA_HOME and verify with java -version.
    If the environment is fine, go to /usr/bin and link it: ln -s -f /usr/java/jdk1.8.0_281-amd64/bin/jps jps
    (this fixes "jps: command not found")

  5. If YARN and HDFS are not configured on the same machine, mind which IP each web UI uses; the relevant settings are:
    HDFS -> core-site.xml
    YARN -> yarn-site.xml

  6. A MapReduce word-count example was planned as well, but it hit too many snags; it may be added later.
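For pitfall 1, a clusterID mismatch can be confirmed directly by comparing the VERSION files under the data directory (tmp/dfs/name/current/VERSION on the NameNode side, tmp/dfs/data/current/VERSION on a DataNode). `same_cluster_id` below is an illustrative helper:

```shell
# same_cluster_id FILE_A FILE_B — succeed only if both VERSION files carry
# the same clusterID line. Illustrative helper for diagnosing pitfall 1.
same_cluster_id() {
  local a b
  a=$(grep '^clusterID=' "$1")
  b=$(grep '^clusterID=' "$2")
  [ -n "$a" ] && [ "$a" = "$b" ]
}

# Example:
# cd /export/servers/hadoop-2.7.1
# same_cluster_id tmp/dfs/name/current/VERSION tmp/dfs/data/current/VERSION \
#   && echo "clusterIDs match" \
#   || echo "clusterID conflict: delete tmp/ and logs/, then reformat"
```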
