1. Change the hostname (run as root)
hostnamectl set-hostname <new-hostname>
Alternatively, edit the config file: vim /etc/hostname
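For example, to use hadoopmaster, the hostname assumed throughout this guide:
hostnamectl set-hostname hadoopmaster   # takes effect immediately; verify with: hostnamectl status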
2. If the host does not have a static IP, assign one (a minimal sketch follows below; adapt it to your network).
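As a minimal sketch, a static IP can be set on CentOS 7 with nmcli. The connection name (ens33) and the addresses below are placeholders, not values from this guide; substitute your own:
nmcli con mod ens33 ipv4.method manual ipv4.addresses 192.168.1.100/24 ipv4.gateway 192.168.1.1 ipv4.dns 192.168.1.1
nmcli con up ens33   # re-activate the connection to apply the change
echo '192.168.1.100 hadoopmaster' >> /etc/hosts   # map the hostname used later to this IP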
3. Turn off the firewall
systemctl start firewalld.service    # start the firewall
systemctl restart firewalld.service  # restart the firewall
systemctl stop firewalld.service     # stop the firewall (the one needed here)
systemctl status firewalld.service   # check the firewall status
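Note that stopping firewalld is not persistent; since the host is rebooted in step 6, also disable it so it stays off:
systemctl disable firewalld.service   # keep the firewall off across reboots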
4. Disable SELinux
To disable the SELinux security policy permanently, edit /etc/selinux/config and change SELINUX=enforcing to SELINUX=disabled:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
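The change in /etc/selinux/config only takes effect after a reboot. To turn enforcement off immediately for the current session as well:
setenforce 0   # permissive until the next reboot
getenforce     # verify the current mode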
5. Set up passwordless SSH login
Go to the key directory /root/.ssh and check with ls -l whether old keys exist:
cd /root/.ssh   # enter the key directory
rm -rf *        # remove old keys
Generate a key pair with ssh-keygen -t dsa, pressing Enter at each prompt to accept the defaults (note: many newer OpenSSH builds disable DSA by default; ssh-keygen -t rsa works the same way if DSA is rejected):
[root@localhost .ssh]# ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/root/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
SHA256:QNHQYbzj9rWNmAItP5x root@hadoopmaster
The key's randomart image is:
(randomart output omitted)
Append the generated public key id_dsa.pub to authorized_keys, the file SSH checks for allowed public keys:
cat id_dsa.pub >> authorized_keys
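If ssh still prompts for a password afterwards, sshd is usually rejecting the key because of loose permissions; tighten them:
chmod 700 /root/.ssh
chmod 600 /root/.ssh/authorized_keys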
Test that passwordless login works:
[root@localhost .ssh]# ssh hadoopmaster
The authenticity of host 'hadoopmaster (fe80::7468:4a91:e381:bd03%eth0)' can't be established.
ECDSA key fingerprint is SHA256:SOi/rsJBsRn/zcHQ/gtT0Bg.
ECDSA key fingerprint is MD5:6a:0:88:38:fc:e0:bf:4b6:bf:59:b0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoopmaster,f8a91:e3' (ECDSA) to the list of known hosts.
Last login: Fri Feb 2 10:17:45 2024 from 192.168
6. Reboot
After changing the hostname and related settings, the host must be rebooted:
[root@hadoopmaster ~]# reboot
7. Install the JDK
Upload jdk-8u341-linux-x64.rpm to /usr/local and run:
rpm -ivh jdk-8u341-linux-x64.rpm
This installs the JDK to /usr/java/jdk1.8.0_341-amd64.
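Verify the installation:
java -version   # should report java version "1.8.0_341"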
8. Install Hadoop (run as the hadoop user)
Upload hadoop-3.3.6.tar.gz to /home/hadoop, extract it with tar -xvf hadoop-3.3.6.tar.gz, and rename the directory with mv hadoop-3.3.6 hadoop (the exact commands are shown below).
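The commands, run as the hadoop user in /home/hadoop:
tar -xvf hadoop-3.3.6.tar.gz   # extract the archive
mv hadoop-3.3.6 hadoop         # rename the directory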
9. Configure Hadoop environment variables (run as root)
vim /etc/profile
export HADOOP_HOME=/home/hadoop/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/lib
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export HDFS_JOURNALNODE_USER=root
export HDFS_ZKFC_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_LIBEXEC_DIR=$HADOOP_HOME/libexec
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
After editing, run source /etc/profile to apply the changes:
[root@hadoopmaster local]# source /etc/profile
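Verify that the variables took effect:
hadoop version      # should print Hadoop 3.3.6 (if it complains that JAVA_HOME is not set, this resolves once hadoop-env.sh is configured in step 11)
echo $HADOOP_HOME   # should print /home/hadoop/hadoop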
10. Create the data directory
In the Hadoop directory (/home/hadoop/hadoop), create a data directory:
mkdir ./data
11. Edit the configuration files
Go to /home/hadoop/hadoop/etc/hadoop and configure the following files:
(1) Configure core-site.xml
vim ./core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoopmaster:9000</value>
    <description>NameNode URI</description>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hadoop/data</value>
  </property>
  <property>
    <name>hadoop.http.staticuser.user</name>
    <value>root</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
  </property>
</configuration>
(2) Configure hdfs-site.xml
vim ./hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/hadoop/data/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/hadoop/data/dfs/data</value>
  </property>
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>
</configuration>
(3) Configure mapred-site.xml
vim ./mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoopmaster:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoopmaster:19888</value>
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>mapreduce.application.classpath</name>
    <value>/home/hadoop/hadoop/etc/hadoop:/home/hadoop/hadoop/share/hadoop/common/*:/home/hadoop/hadoop/share/hadoop/common/lib/*:/home/hadoop/hadoop/share/hadoop/hdfs/*:/home/hadoop/hadoop/share/hadoop/hdfs/lib/*:/home/hadoop/hadoop/share/hadoop/mapreduce/*:/home/hadoop/hadoop/share/hadoop/mapreduce/lib/*:/home/hadoop/hadoop/share/hadoop/yarn/*:/home/hadoop/hadoop/share/hadoop/yarn/lib/*</value>
  </property>
</configuration>
(4) Configure yarn-site.xml
vim ./yarn-site.xml
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.resourcemanager.connect.retry-interval.ms</name>
    <value>20000</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
  </property>
  <property>
    <name>yarn.nodemanager.localizer.address</name>
    <value>hadoopmaster:8040</value>
  </property>
  <property>
    <name>yarn.nodemanager.address</name>
    <value>hadoopmaster:8050</value>
  </property>
  <property>
    <name>yarn.nodemanager.webapp.address</name>
    <value>hadoopmaster:8042</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/home/hadoop/hadoop/yarndata/yarn</value>
  </property>
  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/home/hadoop/hadoop/yarndata/log</value>
  </property>
  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property>
</configuration>
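yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs point under /home/hadoop/hadoop/yarndata; the NodeManager will normally create these itself, but creating them up front (as the hadoop user) avoids permission surprises:
mkdir -p /home/hadoop/hadoop/yarndata/yarn /home/hadoop/hadoop/yarndata/log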
(5) Configure hadoop-env.sh
vim ./hadoop-env.sh
Edit line 54 (the JAVA_HOME line):
export JAVA_HOME=/usr/java/jdk1.8.0_341-amd64
(6) Configure workers
Edit ./workers so that it contains only the hostname of this node:
[hadoop@hadoopmaster hadoop]$ vim ./workers
[hadoop@hadoopmaster hadoop]$ cat ./workers
hadoopmaster
12. Initialize Hadoop
Go to /home/hadoop/hadoop/bin and format the NameNode (in Hadoop 3, hdfs namenode -format is the preferred form; hadoop namenode -format still works but prints a deprecation warning):
hdfs namenode -format
13. Hadoop 3 verification notes
Hadoop must be formatted before first use, with the command shown in step 12:
hdfs namenode -format
If errors occur while using Hadoop, or Hadoop fails to start, it may need to be reformatted. The steps are (see the command sequence below):
Stop Hadoop
Delete the data and logs directories under the Hadoop directory
Reformat
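As a command sequence, using the paths configured above:
stop-all.sh
rm -rf /home/hadoop/hadoop/data/* /home/hadoop/hadoop/logs/*
hdfs namenode -format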
14. Start Hadoop (in Hadoop 3, start-all.sh works but is deprecated; start-dfs.sh followed by start-yarn.sh is the equivalent)
start-all.sh
Check the running processes:
jps
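For this single-node setup, jps should show roughly the following processes (the PIDs are illustrative):
[root@hadoopmaster ~]# jps
11234 NameNode
11389 DataNode
11573 SecondaryNameNode
11821 ResourceManager
11960 NodeManager
12233 Jps
The NameNode web UI is then reachable at http://hadoopmaster:9870 and the YARN UI at http://hadoopmaster:8088 (Hadoop 3 defaults).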
15. Stop Hadoop
stop-all.sh