
Atguigu Big Data Hadoop Tutorial – Notes 02 [Hadoop Basics]

This article introduces notes 02 [Hadoop Basics] of the Atguigu Big Data Hadoop tutorial. I hope it is useful; if anything is wrong or incomplete, corrections and suggestions are welcome.

Video: Atguigu Big Data Hadoop Tutorial (Hadoop 3.x, from installation to cluster tuning)

  1. Atguigu Big Data Hadoop Tutorial – Notes 01 [Big Data Overview]
  2. Atguigu Big Data Hadoop Tutorial – Notes 02 [Hadoop Basics]
  3. Atguigu Big Data Hadoop Tutorial – Notes 03 [Hadoop HDFS]
  4. Atguigu Big Data Hadoop Tutorial – Notes 04 [Hadoop MapReduce]
  5. Atguigu Big Data Hadoop Tutorial – Notes 05 [Hadoop YARN]
  6. Atguigu Big Data Hadoop Tutorial – Notes 06 [Hadoop Production Tuning Guide]
  7. Atguigu Big Data Hadoop Tutorial – Notes 07 [Hadoop Source Code Analysis]

Contents

02 Atguigu Big Data: Hadoop (Basics) V3.3

P007 [Course Introduction] 07:29

P008 [What Is Hadoop] 03:00

P009 [Hadoop History] 05:52

P010 [The Three Major Hadoop Distributions] 05:59

P011 [Hadoop's Strengths] 03:52

P012 [Differences Between Hadoop 1.x, 2.x, and 3.x] 03:00

P013 [HDFS Overview] 06:26

P014 [YARN Overview] 06:35

P015 [MapReduce Overview] 01:55

P016 [How HDFS, YARN, and MapReduce Relate] 03:22

P017 [The Big Data Technology Ecosystem] 09:17

P018 [Installing VMware] 04:41

P019 [Installing CentOS 7.5 (Software and Hardware Setup)] 15:56

P020 [Configuring IP and Hostname] 10:50

P021 [Xshell Remote Access Tool] 09:05

P022 [Finishing the Template VM] 12:25

P023 [Cloning Three VMs] 15:01

P024 [Installing the JDK] 07:02

P025 [Installing Hadoop] 07:20

P026 [Local (Standalone) Mode] 11:56

P027 [The scp and rsync Commands] 15:01

P028 [The xsync Distribution Script] 18:14

P029 [Passwordless SSH Login] 11:25

P030 [Cluster Configuration] 13:24

P031 [Starting and Testing the Cluster] 16:52

P032 [Recovering from a Cluster Crash] 08:10

P033 [Configuring the History Server] 05:26

P034 [Configuring Log Aggregation] 05:42

P035 [Two Handy Scripts] 09:18

P036 [Two Interview Questions] 04:15

P037 [Cluster Time Synchronization] 11:27

P038 [Common Problems Summary] 10:57


02 Atguigu Big Data: Hadoop (Basics) V3.3

P007 [Course Introduction] 07:29

P008 [What Is Hadoop] 03:00

P009 [Hadoop History] 05:52

P010 [The Three Major Hadoop Distributions] 05:59

The three major Hadoop distributions: Apache, Cloudera, and Hortonworks.

1. Apache Hadoop

Official site: http://hadoop.apache.org

Downloads: https://hadoop.apache.org/releases.html

2. Cloudera Hadoop

Official site: https://www.cloudera.com/downloads/cdh

Downloads: https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_cdh_6_download.html

(1) Founded in 2008, Cloudera was the first company to commercialize Hadoop, providing partners with commercial Hadoop solutions, mainly support, consulting, and training.

(2) In 2009 Doug Cutting, Hadoop's creator, joined Cloudera. Cloudera's main products are CDH, Cloudera Manager, and Cloudera Support.

(3) CDH is Cloudera's Hadoop distribution. It is fully open source and improves on Apache Hadoop in compatibility, security, and stability. Cloudera's list price is USD 10,000 per node per year.

(4) Cloudera Manager is a platform for distributing, managing, and monitoring cluster software; it can deploy a Hadoop cluster within a few hours and monitor the cluster's nodes and services in real time.

3. Hortonworks Hadoop

Official site: https://hortonworks.com/products/data-center/hdp/

Downloads: https://hortonworks.com/downloads/#data-platform

(1) Founded in 2011, Hortonworks was a joint venture between Yahoo and the Silicon Valley venture firm Benchmark Capital.

(2) At its founding the company absorbed roughly 25 to 30 Yahoo engineers dedicated to Hadoop; those engineers had been helping Yahoo develop Hadoop since 2005 and had contributed about 80% of Hadoop's code.

(3) Hortonworks' flagship product is the Hortonworks Data Platform (HDP), also 100% open source. Besides the usual projects, HDP includes Ambari, an open-source installation and management system.

(4) In 2018 Hortonworks was acquired by Cloudera.

P011 [Hadoop's Strengths] 03:52

Hadoop's strengths (the "four highs"):

  1. High reliability
  2. High scalability
  3. High efficiency
  4. High fault tolerance

P012 [Differences Between Hadoop 1.x, 2.x, and 3.x] 03:00

P013 [HDFS Overview] 06:26

The Hadoop Distributed File System (HDFS) is a distributed file system.

  • 1) NameNode (nn): stores the file system metadata, such as file names, the directory structure, and file attributes (creation time, replica count, permissions), plus each file's block list and the DataNodes on which those blocks reside.
  • 2) DataNode (dn): stores the file block data, and the checksums of that data, on the local file system.
  • 3) Secondary NameNode (2nn): periodically backs up the NameNode's metadata.

P014 [YARN Overview] 06:35

YARN (Yet Another Resource Negotiator) is Hadoop's resource manager.

P015 [MapReduce Overview] 01:55

MapReduce splits a computation into two phases: Map and Reduce.

  • 1) The Map phase processes the input data in parallel.
  • 2) The Reduce phase aggregates the Map outputs.
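The two phases can be mimicked on a single machine with a plain shell pipeline (a rough analogy, not actual MapReduce): splitting lines into words plays the Map role, sort plays the shuffle, and uniq -c plays the Reduce role, just like the wordcount example run later in these notes.

```shell
# Word count as a shell pipeline (illustration only; hadoop-mapreduce-examples
# does the same thing in a distributed way):
#   "map"     - emit one word per line
#   "shuffle" - sort brings identical words together
#   "reduce"  - uniq -c counts each group
printf 'hadoop yarn\nhadoop mapreduce\n' \
  | tr -s ' ' '\n' \
  | sort \
  | uniq -c
```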


P016 [How HDFS, YARN, and MapReduce Relate] 03:22

  1. HDFS
    1. NameNode: manages the file system metadata; it records which blocks make up each file and which DataNodes hold them.
    2. DataNode: stores the actual block data on its local file system.
    3. SecondaryNameNode: a "secretary" that periodically backs up NameNode metadata and can take over part of the NameNode's recovery work.
  2. YARN: resource management for the whole cluster.
    1. ResourceManager: manages and schedules resources cluster-wide.
    2. NodeManager: manages the resources of a single node.
  3. MapReduce: the computation framework that runs on the resources YARN provides, reading from and writing to HDFS.

P017 [The Big Data Technology Ecosystem] 09:17

The big data technology ecosystem (diagram)

Recommendation system project architecture (diagram)

P018 [Installing VMware] 04:41

P019 [Installing CentOS 7.5 (Software and Hardware Setup)] 15:56

P020 [Configuring IP and Hostname] 10:50

[root@hadoop100 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
[root@hadoop100 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.88.133  netmask 255.255.255.0  broadcast 192.168.88.255
        inet6 fe80::363b:8659:c323:345d  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:0f:0a:6d  txqueuelen 1000  (Ethernet)
        RX packets 684561  bytes 1003221355 (956.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 53538  bytes 3445292 (3.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 84  bytes 9492 (9.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 84  bytes 9492 (9.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:1c:3c:a9  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@hadoop100 ~]# systemctl restart network
[root@hadoop100 ~]# cat /etc/host
cat: /etc/host: No such file or directory
[root@hadoop100 ~]# cat /etc/hostname
hadoop100
[root@hadoop100 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
[root@hadoop100 ~]# vim /etc/hosts
[root@hadoop100 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.88.100  netmask 255.255.255.0  broadcast 192.168.88.255
        inet6 fe80::363b:8659:c323:345d  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:0f:0a:6d  txqueuelen 1000  (Ethernet)
        RX packets 684830  bytes 1003244575 (956.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 53597  bytes 3452600 (3.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 132  bytes 14436 (14.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 132  bytes 14436 (14.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:1c:3c:a9  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@hadoop100 ~]# ll
total 40
-rw-------. 1 root root 1973 Mar 14 10:19 anaconda-ks.cfg
-rw-r--r--. 1 root root 2021 Mar 14 10:26 initial-setup-ks.cfg
drwxr-xr-x. 2 root root 4096 Mar 14 10:27 公共
drwxr-xr-x. 2 root root 4096 Mar 14 10:27 模板
drwxr-xr-x. 2 root root 4096 Mar 14 10:27 視頻
drwxr-xr-x. 2 root root 4096 Mar 14 10:27 圖片
drwxr-xr-x. 2 root root 4096 Mar 14 10:27 文檔
drwxr-xr-x. 2 root root 4096 Mar 14 10:27 下載
drwxr-xr-x. 2 root root 4096 Mar 14 10:27 音樂(lè)
drwxr-xr-x. 2 root root 4096 Mar 14 10:27 桌面
[root@hadoop100 ~]# 

vim /etc/sysconfig/network-scripts/ifcfg-ens33

TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="3241b48d-3234-4c23-8a03-b9b393a99a65"
DEVICE="ens33"
ONBOOT="yes"

IPADDR=192.168.88.100
GATEWAY=192.168.88.2
DNS1=192.168.88.2

vim /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.88.100 hadoop100
192.168.88.101 hadoop101
192.168.88.102 hadoop102
192.168.88.103 hadoop103
192.168.88.104 hadoop104
192.168.88.105 hadoop105
192.168.88.106 hadoop106
192.168.88.107 hadoop107
192.168.88.108 hadoop108

192.168.88.151 node1 node1.itcast.cn
192.168.88.152 node2 node2.itcast.cn
192.168.88.153 node3 node3.itcast.cn
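As a quick sanity check that a hosts file in this format resolves the way you expect, you can pull a host's IP out of it with awk. This is purely illustrative and uses a local copy so it is self-contained; on the actual VM you would simply run `ping hadoop102` or `getent hosts hadoop102` against the real /etc/hosts.

```shell
# Look up the IP recorded for a hostname in a hosts-format file.
cat > /tmp/hosts.demo <<'EOF'
192.168.88.100 hadoop100
192.168.88.102 hadoop102
EOF
# Column 1 is the IP, column 2 the hostname:
awk '$2 == "hadoop102" { print $1 }' /tmp/hosts.demo   # prints 192.168.88.102
```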

P021 [Xshell Remote Access Tool] 09:05

P022 [Finishing the Template VM] 12:25

yum install -y epel-release

systemctl stop firewalld

systemctl disable firewalld.service


P023 [Cloning Three VMs] 15:01

vim /etc/sysconfig/network-scripts/ifcfg-ens33

vim /etc/hostname

reboot
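On each clone, the only things that must change are the IP address and the hostname. A sketch of the per-clone edits, with example values following the numbering plan above (adjust to your own cluster):

```shell
# Run on the clone that should become hadoop102 (example values):
vim /etc/sysconfig/network-scripts/ifcfg-ens33   # change IPADDR to 192.168.88.102
hostnamectl set-hostname hadoop102               # equivalent to editing /etc/hostname
reboot                                           # apply both changes
```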


P024 [Installing the JDK] 07:02

Install the JDK on hadoop102, then copy the installation to hadoop103 and hadoop104.

P025 [Installing Hadoop] 07:20

Same diagram as P024.

P026 [Local (Standalone) Mode] 11:56

Apache Hadoop

http://node1:9870/explorer.html#/

[root@node1 ~]# cd /export/server/hadoop-3.3.0/share/hadoop/mapreduce/
[root@node1 mapreduce]# hadoop jar hadoop-mapreduce-examples-3.3.0.jar wordcount /wordcount/input /wordcount/output
2023-03-20 14:43:07,516 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at node1/192.168.88.151:8032
2023-03-20 14:43:09,291 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/root/.staging/job_1679293699463_0001
2023-03-20 14:43:11,916 INFO input.FileInputFormat: Total input files to process : 1
2023-03-20 14:43:12,313 INFO mapreduce.JobSubmitter: number of splits:1
2023-03-20 14:43:13,173 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1679293699463_0001
2023-03-20 14:43:13,173 INFO mapreduce.JobSubmitter: Executing with tokens: []
2023-03-20 14:43:14,684 INFO conf.Configuration: resource-types.xml not found
2023-03-20 14:43:14,684 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2023-03-20 14:43:17,054 INFO impl.YarnClientImpl: Submitted application application_1679293699463_0001
2023-03-20 14:43:17,123 INFO mapreduce.Job: The url to track the job: http://node1:8088/proxy/application_1679293699463_0001/
2023-03-20 14:43:17,124 INFO mapreduce.Job: Running job: job_1679293699463_0001
2023-03-20 14:43:52,340 INFO mapreduce.Job: Job job_1679293699463_0001 running in uber mode : false
2023-03-20 14:43:52,360 INFO mapreduce.Job:  map 0% reduce 0%
2023-03-20 14:44:08,011 INFO mapreduce.Job:  map 100% reduce 0%
2023-03-20 14:44:16,986 INFO mapreduce.Job:  map 100% reduce 100%
2023-03-20 14:44:18,020 INFO mapreduce.Job: Job job_1679293699463_0001 completed successfully
2023-03-20 14:44:18,579 INFO mapreduce.Job: Counters: 54
        File System Counters
                FILE: Number of bytes read=31
                FILE: Number of bytes written=529345
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=142
                HDFS: Number of bytes written=17
                HDFS: Number of read operations=8
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
                HDFS: Number of bytes read erasure-coded=0
        Job Counters 
                Launched map tasks=1
                Launched reduce tasks=1
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=11303
                Total time spent by all reduces in occupied slots (ms)=6220
                Total time spent by all map tasks (ms)=11303
                Total time spent by all reduce tasks (ms)=6220
                Total vcore-milliseconds taken by all map tasks=11303
                Total vcore-milliseconds taken by all reduce tasks=6220
                Total megabyte-milliseconds taken by all map tasks=11574272
                Total megabyte-milliseconds taken by all reduce tasks=6369280
        Map-Reduce Framework
                Map input records=2
                Map output records=5
                Map output bytes=53
                Map output materialized bytes=31
                Input split bytes=108
                Combine input records=5
                Combine output records=2
                Reduce input groups=2
                Reduce shuffle bytes=31
                Reduce input records=2
                Reduce output records=2
                Spilled Records=4
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=546
                CPU time spent (ms)=3680
                Physical memory (bytes) snapshot=499236864
                Virtual memory (bytes) snapshot=5568684032
                Total committed heap usage (bytes)=365953024
                Peak Map Physical memory (bytes)=301096960
                Peak Map Virtual memory (bytes)=2779201536
                Peak Reduce Physical memory (bytes)=198139904
                Peak Reduce Virtual memory (bytes)=2789482496
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters 
                Bytes Read=34
        File Output Format Counters 
                Bytes Written=17
[root@node1 mapreduce]#

[root@node1 mapreduce]# hadoop jar hadoop-mapreduce-examples-3.3.0.jar wordcount /wc_input /wc_output
2023-03-20 15:01:48,007 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at node1/192.168.88.151:8032
2023-03-20 15:01:49,475 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/root/.staging/job_1679293699463_0002
2023-03-20 15:01:50,522 INFO input.FileInputFormat: Total input files to process : 1
2023-03-20 15:01:51,010 INFO mapreduce.JobSubmitter: number of splits:1
2023-03-20 15:01:51,894 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1679293699463_0002
2023-03-20 15:01:51,894 INFO mapreduce.JobSubmitter: Executing with tokens: []
2023-03-20 15:01:52,684 INFO conf.Configuration: resource-types.xml not found
2023-03-20 15:01:52,687 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2023-03-20 15:01:53,237 INFO impl.YarnClientImpl: Submitted application application_1679293699463_0002
2023-03-20 15:01:53,487 INFO mapreduce.Job: The url to track the job: http://node1:8088/proxy/application_1679293699463_0002/
2023-03-20 15:01:53,492 INFO mapreduce.Job: Running job: job_1679293699463_0002
2023-03-20 15:02:15,329 INFO mapreduce.Job: Job job_1679293699463_0002 running in uber mode : false
2023-03-20 15:02:15,342 INFO mapreduce.Job:  map 0% reduce 0%
2023-03-20 15:02:26,652 INFO mapreduce.Job:  map 100% reduce 0%
2023-03-20 15:02:40,297 INFO mapreduce.Job:  map 100% reduce 100%
2023-03-20 15:02:41,350 INFO mapreduce.Job: Job job_1679293699463_0002 completed successfully
2023-03-20 15:02:41,557 INFO mapreduce.Job: Counters: 54
        File System Counters
                FILE: Number of bytes read=60
                FILE: Number of bytes written=529375
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=149
                HDFS: Number of bytes written=38
                HDFS: Number of read operations=8
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
                HDFS: Number of bytes read erasure-coded=0
        Job Counters 
                Launched map tasks=1
                Launched reduce tasks=1
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=8398
                Total time spent by all reduces in occupied slots (ms)=9720
                Total time spent by all map tasks (ms)=8398
                Total time spent by all reduce tasks (ms)=9720
                Total vcore-milliseconds taken by all map tasks=8398
                Total vcore-milliseconds taken by all reduce tasks=9720
                Total megabyte-milliseconds taken by all map tasks=8599552
                Total megabyte-milliseconds taken by all reduce tasks=9953280
        Map-Reduce Framework
                Map input records=4
                Map output records=6
                Map output bytes=69
                Map output materialized bytes=60
                Input split bytes=100
                Combine input records=6
                Combine output records=4
                Reduce input groups=4
                Reduce shuffle bytes=60
                Reduce input records=4
                Reduce output records=4
                Spilled Records=8
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=1000
                CPU time spent (ms)=3880
                Physical memory (bytes) snapshot=503771136
                Virtual memory (bytes) snapshot=5568987136
                Total committed heap usage (bytes)=428343296
                Peak Map Physical memory (bytes)=303013888
                Peak Map Virtual memory (bytes)=2782048256
                Peak Reduce Physical memory (bytes)=200757248
                Peak Reduce Virtual memory (bytes)=2786938880
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters 
                Bytes Read=49
        File Output Format Counters 
                Bytes Written=38
[root@node1 mapreduce]# pwd
/export/server/hadoop-3.3.0/share/hadoop/mapreduce
[root@node1 mapreduce]# 

P027 [The scp and rsync Commands] 15:01

Use scp for the first full copy; use rsync for subsequent syncs.

rsync is mainly used for backup and mirroring; it is fast, skips content that has not changed, and supports symbolic links.

Difference between rsync and scp: rsync is faster because it only transfers files that differ, while scp copies every file regardless.

P028 [The xsync Distribution Script] 18:14

Copy and sync commands:

  1. scp (secure copy): secure file copy
  2. rsync: remote synchronization tool
  3. xsync: cluster distribution script

The dirname command strips the file name from a path and prints only the directory portion.

[root@node1 ~]# dirname /home/atguigu/a.txt
/home/atguigu
[root@node1 ~]#

The basename command prints the file name portion of a path.

[root@node1 atguigu]# basename /home/atguigu/a.txt
a.txt
[root@node1 atguigu]#
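The xsync script below combines the two commands to split each argument into an absolute parent directory and a bare file name; `cd -P` makes this work even for relative paths and symlinked directories. A minimal standalone sketch of that trick:

```shell
# Split a path into (absolute parent dir, file name) the way xsync does.
file=/tmp/xsync_demo/a.txt
mkdir -p "$(dirname "$file")" && touch "$file"

pdir=$(cd -P "$(dirname "$file")"; pwd)   # absolute, symlink-resolved parent dir
fname=$(basename "$file")                 # bare file name

echo "$pdir"    # on Linux: /tmp/xsync_demo
echo "$fname"   # a.txt
```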

#!/bin/bash

#1. Check that at least one argument was given
if [ $# -lt 1 ]
then
    echo Not Enough Arguments!
    exit
fi

#2. Loop over every machine in the cluster
for host in hadoop102 hadoop103 hadoop104
do
    echo ====================  $host  ====================
    #3. Loop over all files/directories and send each one

    for file in "$@"
    do
        #4. Check that the file exists
        if [ -e "$file" ]
            then
                #5. Get the absolute parent directory (resolving symlinks)
                pdir=$(cd -P "$(dirname "$file")"; pwd)

                #6. Get the file name
                fname=$(basename "$file")
                ssh $host "mkdir -p $pdir"
                rsync -av "$pdir/$fname" "$host:$pdir"
            else
                echo "$file does not exist!"
        fi
    done
done
[root@node1 bin]# chmod 777 xsync 
[root@node1 bin]# ll
total 4
-rwxrwxrwx 1 atguigu atguigu 727 Mar 20 16:00 xsync
[root@node1 bin]# cd ..
[root@node1 atguigu]# xsync bin/
==================== node1 ====================
sending incremental file list

sent 94 bytes  received 17 bytes  222.00 bytes/sec
total size is 727  speedup is 6.55
==================== node2 ====================
sending incremental file list
bin/
bin/xsync

sent 871 bytes  received 39 bytes  606.67 bytes/sec
total size is 727  speedup is 0.80
==================== node3 ====================
sending incremental file list
bin/
bin/xsync

sent 871 bytes  received 39 bytes  1,820.00 bytes/sec
total size is 727  speedup is 0.80
[root@node1 atguigu]# pwd
/home/atguigu
[root@node1 atguigu]# ls -al
total 20
drwx------  6 atguigu atguigu  168 Mar 20 15:56 .
drwxr-xr-x. 6 root    root      56 Mar 20 10:08 ..
-rw-r--r--  1 root    root       0 Mar 20 15:44 a.txt
-rw-------  1 atguigu atguigu   21 Mar 20 11:48 .bash_history
-rw-r--r--  1 atguigu atguigu   18 Aug  8  2019 .bash_logout
-rw-r--r--  1 atguigu atguigu  193 Aug  8  2019 .bash_profile
-rw-r--r--  1 atguigu atguigu  231 Aug  8  2019 .bashrc
drwxrwxr-x  2 atguigu atguigu   19 Mar 20 15:56 bin
drwxrwxr-x  3 atguigu atguigu   18 Mar 20 10:17 .cache
drwxrwxr-x  3 atguigu atguigu   18 Mar 20 10:17 .config
drwxr-xr-x  4 atguigu atguigu   39 Mar 10 20:04 .mozilla
-rw-------  1 atguigu atguigu 1261 Mar 20 15:56 .viminfo
[root@node1 atguigu]# 
Connected successfully
Last login: Mon Mar 20 16:01:40 2023
[root@node1 ~]# su atguigu
[atguigu@node1 root]$ cd /home/atguigu/
[atguigu@node1 ~]$ pwd
/home/atguigu
[atguigu@node1 ~]$ xsync bin/
==================== node1 ====================
The authenticity of host 'node1 (192.168.88.151)' can't be established.
ECDSA key fingerprint is SHA256:+eLT3FrOEuEsxBxjOd89raPi/ChJz26WGAfqBpz/KEk.
ECDSA key fingerprint is MD5:18:42:ad:0f:2b:97:d8:b5:68:14:6a:98:e9:72:db:bb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1,192.168.88.151' (ECDSA) to the list of known hosts.
atguigu@node1's password: 
atguigu@node1's password: 
sending incremental file list

sent 98 bytes  received 17 bytes  17.69 bytes/sec
total size is 727  speedup is 6.32
==================== node2 ====================
The authenticity of host 'node2 (192.168.88.152)' can't be established.
ECDSA key fingerprint is SHA256:+eLT3FrOEuEsxBxjOd89raPi/ChJz26WGAfqBpz/KEk.
ECDSA key fingerprint is MD5:18:42:ad:0f:2b:97:d8:b5:68:14:6a:98:e9:72:db:bb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2,192.168.88.152' (ECDSA) to the list of known hosts.
atguigu@node2's password: 
atguigu@node2's password: 
sending incremental file list

sent 94 bytes  received 17 bytes  44.40 bytes/sec
total size is 727  speedup is 6.55
==================== node3 ====================
The authenticity of host 'node3 (192.168.88.153)' can't be established.
ECDSA key fingerprint is SHA256:+eLT3FrOEuEsxBxjOd89raPi/ChJz26WGAfqBpz/KEk.
ECDSA key fingerprint is MD5:18:42:ad:0f:2b:97:d8:b5:68:14:6a:98:e9:72:db:bb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node3,192.168.88.153' (ECDSA) to the list of known hosts.
atguigu@node3's password: 
atguigu@node3's password: 
sending incremental file list

sent 94 bytes  received 17 bytes  44.40 bytes/sec
total size is 727  speedup is 6.55
[atguigu@node1 ~]$ 
----------------------------------------------------------------------------------------
Connected successfully
Last login: Mon Mar 20 17:22:20 2023 from 192.168.88.151
[root@node2 ~]# su atguigu
[atguigu@node2 root]$ vim /etc/sudoers
You have new mail in /var/spool/mail/root
[atguigu@node2 root]$ su root
Password:
[root@node2 ~]# vim /etc/sudoers
[root@node2 ~]# cd /opt/
[root@node2 opt]# ll
total 0
drwxr-xr-x  4 atguigu atguigu 46 Mar 20 11:32 module
drwxr-xr-x. 2 root    root     6 Oct 31  2018 rh
drwxr-xr-x  2 atguigu atguigu 67 Mar 20 10:47 software
[root@node2 opt]# su atguigu
[atguigu@node2 opt]$ cd /home/atguigu/
[atguigu@node2 ~]$ llk
bash: llk: command not found
[atguigu@node2 ~]$ ll
total 0
drwxrwxr-x 2 atguigu atguigu 19 Mar 20 15:56 bin
[atguigu@node2 ~]$ cd ~
You have new mail in /var/spool/mail/root
[atguigu@node2 ~]$ ll
total 0
drwxrwxr-x 2 atguigu atguigu 19 Mar 20 15:56 bin
[atguigu@node2 ~]$ ll
total 0
drwxrwxr-x 2 atguigu atguigu 19 Mar 20 15:56 bin
You have new mail in /var/spool/mail/root
[atguigu@node2 ~]$ cd bin
[atguigu@node2 bin]$ ll
total 4
-rwxrwxrwx 1 atguigu atguigu 727 Mar 20 16:00 xsync
[atguigu@node2 bin]$ 
----------------------------------------------------------------------------------------
Connected successfully
Last login: Mon Mar 20 17:22:26 2023 from 192.168.88.152
[root@node3 ~]# vim /etc/sudoers
You have new mail in /var/spool/mail/root
[root@node3 ~]# cd /opt/
[root@node3 opt]# ll
total 0
drwxr-xr-x  4 atguigu atguigu 46 Mar 20 11:32 module
drwxr-xr-x. 2 root    root     6 Oct 31  2018 rh
drwxr-xr-x  2 atguigu atguigu 67 Mar 20 10:47 software
[root@node3 opt]# cd ~
You have new mail in /var/spool/mail/root
[root@node3 ~]# ll
total 4
-rw-------. 1 root root 1340 Sep 11  2020 anaconda-ks.cfg
-rw-------  1 root root    0 Feb 23 16:20 nohup.out
[root@node3 ~]# ll
total 4
-rw-------. 1 root root 1340 Sep 11  2020 anaconda-ks.cfg
-rw-------  1 root root    0 Feb 23 16:20 nohup.out
You have new mail in /var/spool/mail/root
[root@node3 ~]# cd ~
[root@node3 ~]# ll
total 4
-rw-------. 1 root root 1340 Sep 11  2020 anaconda-ks.cfg
-rw-------  1 root root    0 Feb 23 16:20 nohup.out
[root@node3 ~]# su atguigu
[atguigu@node3 root]$ cd ~
[atguigu@node3 ~]$ ls
bin
[atguigu@node3 ~]$ ll
total 0
drwxrwxr-x 2 atguigu atguigu 19 Mar 20 15:56 bin
[atguigu@node3 ~]$ cd bin
[atguigu@node3 bin]$ ll
total 4
-rwxrwxrwx 1 atguigu atguigu 727 Mar 20 16:00 xsync
[atguigu@node3 bin]$ 
----------------------------------------------------------------------------------------
[atguigu@node1 ~]$ xsync /etc/profile.d/my_env.sh
==================== node1 ====================
atguigu@node1's password: 
atguigu@node1's password: 
.sending incremental file list

sent 48 bytes  received 12 bytes  13.33 bytes/sec
total size is 223  speedup is 3.72
==================== node2 ====================
atguigu@node2's password: 
atguigu@node2's password: 
sending incremental file list
my_env.sh
rsync: mkstemp "/etc/profile.d/.my_env.sh.guTzvB" failed: Permission denied (13)

sent 95 bytes  received 126 bytes  88.40 bytes/sec
total size is 223  speedup is 1.01
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1178) [sender=3.1.2]
==================== node3 ====================
atguigu@node3's password: 
atguigu@node3's password: 
sending incremental file list
my_env.sh
rsync: mkstemp "/etc/profile.d/.my_env.sh.evDUZa" failed: Permission denied (13)

sent 95 bytes  received 126 bytes  88.40 bytes/sec
total size is 223  speedup is 1.01
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1178) [sender=3.1.2]
[atguigu@node1 ~]$ sudo ./bin/xsync /etc/profile.d/my_env.sh
==================== node1 ====================
sending incremental file list

sent 48 bytes  received 12 bytes  120.00 bytes/sec
total size is 223  speedup is 3.72
==================== node2 ====================
sending incremental file list
my_env.sh

sent 95 bytes  received 41 bytes  272.00 bytes/sec
total size is 223  speedup is 1.64
==================== node3 ====================
sending incremental file list
my_env.sh

sent 95 bytes  received 41 bytes  272.00 bytes/sec
total size is 223  speedup is 1.64
[atguigu@node1 ~]$ 
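The transcript above exercises the custom `xsync` script written earlier in the course. As a reminder of its shape, here is a minimal dry-run sketch (hypothetical: it only *prints* the `rsync` commands it would run for hosts node1–node3, and fixes the "does not exists" wording of the original script):

```shell
#!/bin/bash
# Minimal dry-run sketch of an xsync-style distribution script.
# Assumption: host names node1..node3 as in this course. Instead of
# executing rsync, it echoes the command it would run for each host.
xsync_dryrun() {
  local hosts="node1 node2 node3"
  local file dir name host
  for file in "$@"; do
    if [ ! -e "$file" ]; then
      echo "$file does not exist!"
      continue
    fi
    dir=$(cd -P "$(dirname "$file")" && pwd)   # resolve absolute parent dir
    name=$(basename "$file")
    for host in $hosts; do
      echo "==================== $host ===================="
      echo "rsync -av $dir/$name $host:$dir"   # the real script executes this
    done
  done
}

# Example: show what would be synced for a scratch file
touch /tmp/xsync_demo.txt
xsync_dryrun /tmp/xsync_demo.txt
```

The real course script additionally uses `ssh` to pre-create the target directory on each host; passwordless SSH (next section) is what makes that step non-interactive.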

P029【029_尚硅谷_Hadoop_入門_ssh免密登錄】11:25


Connection established
Last login: Mon Mar 20 19:14:44 2023 from 192.168.88.1
[root@node1 ~]# su atguigu
[atguigu@node1 root]$ pwd
/root
[atguigu@node1 root]$ cd ~
[atguigu@node1 ~]$ pwd
/home/atguigu
[atguigu@node1 ~]$ ls -al
總用量 20
drwx------  7 atguigu atguigu  180 3月  20 19:22 .
drwxr-xr-x. 6 root    root      56 3月  20 10:08 ..
-rw-r--r--  1 root    root       0 3月  20 15:44 a.txt
-rw-------  1 atguigu atguigu  391 3月  20 19:36 .bash_history
-rw-r--r--  1 atguigu atguigu   18 8月   8 2019 .bash_logout
-rw-r--r--  1 atguigu atguigu  193 8月   8 2019 .bash_profile
-rw-r--r--  1 atguigu atguigu  231 8月   8 2019 .bashrc
drwxrwxr-x  2 atguigu atguigu   19 3月  20 15:56 bin
drwxrwxr-x  3 atguigu atguigu   18 3月  20 10:17 .cache
drwxrwxr-x  3 atguigu atguigu   18 3月  20 10:17 .config
drwxr-xr-x  4 atguigu atguigu   39 3月  10 20:04 .mozilla
drwx------  2 atguigu atguigu   25 3月  20 19:22 .ssh
-rw-------  1 atguigu atguigu 1261 3月  20 15:56 .viminfo
[atguigu@node1 ~]$ cd .ssh
[atguigu@node1 .ssh]$ ll
總用量 4
-rw-r--r-- 1 atguigu atguigu 546 3月  20 19:23 known_hosts
[atguigu@node1 .ssh]$ cat known_hosts 
node1,192.168.88.151 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBH5t/7/J0WwO0GTeNpg3EjfM5PjoppHMfq+wCWp46lhQ/B6O6kTOdx+2mEZu9QkAJk9oM4RGqiZKA5vmifHkQQ=
node2,192.168.88.152 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBH5t/7/J0WwO0GTeNpg3EjfM5PjoppHMfq+wCWp46lhQ/B6O6kTOdx+2mEZu9QkAJk9oM4RGqiZKA5vmifHkQQ=
node3,192.168.88.153 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBH5t/7/J0WwO0GTeNpg3EjfM5PjoppHMfq+wCWp46lhQ/B6O6kTOdx+2mEZu9QkAJk9oM4RGqiZKA5vmifHkQQ=
[atguigu@node1 .ssh]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/atguigu/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/atguigu/.ssh/id_rsa.
Your public key has been saved in /home/atguigu/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:CBFD39JBRh/GgmTTIsC+gJVjWDQWFyE5riX287PXfzc atguigu@node1.itcast.cn
The key's randomart image is:
+---[RSA 2048]----+
| =O*O=+==.o      |
|..O+.=o=o+..     |
|.= o..o.o..      |
|+.+  . o         |
|.=..  . S        |
|. .o             |
|    o   .        |
|     o . .   . E |
|     .+   ... . .|
+----[SHA256]-----+
[atguigu@node1 .ssh]$ 
[atguigu@node1 .ssh]$ ll
總用量 12
-rw------- 1 atguigu atguigu 1679 3月  20 19:40 id_rsa
-rw-r--r-- 1 atguigu atguigu  405 3月  20 19:40 id_rsa.pub
-rw-r--r-- 1 atguigu atguigu  546 3月  20 19:23 known_hosts
[atguigu@node1 .ssh]$ cat id_rsa
-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEA0a3S5+QcIvjewfOYIHTlnlzbQgV7Y92JC6ZMt1XGklva53rw
CEhf9yvfcM1VTBDNYHyvl6+rTKKU8EHTfpEoYa5blB299ZfKkM4OxPkcE9nz7uTN
TyYF84wAR1IIEtT1bLVJPyh/Hvh8ye6UMj1PhZIGflNjbGwYkJoDK3wXxwaD4xey
Y0zVCgL7QDqND0Iw8XQrCSQ8cQVgbBxprUYu97+n/7GOs0WASC6gtrW9IksxHtSB
pI6ieKVzv9fuWcbhb8C5w7BqdVU6jrKqo7FnQaDKBNdC4weOku4TA+beHpc4W3p8
f8b+b3U+A0qOj+uRVX7uDoxuunH4xAjqn8TmPQIDAQABAoIBAQCFl0UHn6tZkMyE
MApdq3zcj/bWMp3x+6SkOnkoWcshVsq6rvYdoNcbqOU8fmZ5Bz+C2Q4bC76NHgzc
omP4gM2Eps0MKoLr5aEW72Izly+Pak7jhv1UDzq9eBZ5WkdwkCQp9brMNaYAensv
QQVEmRGAXZArjj+LRbfE8YtReke/8jxyJlRxmVrq+A0a6VAAdOSL/71EJZ9+zJy/
SpN3UlZj27LndYIaOIsQ/vnhTrtb75l4VH24UNhHzJQv1PcBSUrSVOEWrIq/sOzU
b4RW3Fuo51ZLB9ysvxZd5KnwC+yX63XKf8IJqfpWt1KrJ3IV6acvs1UEU+DELfUY
b7v0GkhhAoGBAOuswY5qI0zUiBSEGxysDml5ZG9n4i2JnzmKmnVMAGwQq7ZzUv0o
VwObDmFgp+A8NDAstxR6My5kKky2MOSv/ckJdAEzY9iVI3vXtkT54HYhHstIzNYg
ube1MylcLUttaR/OpbJpyN8BavTQEtydJP7Xchorw6DaZOGLhWjX8EjpAoGBAOPD
IVSfi+51s9h5cIDvvm6IiKDn05kf8D/VrD3awm/hrQrRwF3ouD6JBr/T9OfWqh1W
v9xjn5uurTflO8CZOU91VB/nihXxN0pT6CREi8/I9QSAZbrCkCIWZ6ku7seyEZg6
fp756zCyVeKNSZPpDbKH5LCSyafkroZBxcZKFp41AoGAXff0+SbiyliXpa6C7OzB
llabsDv4l/Wesh/MtGZIaM5A2S+kcGJsR3jExBj49tSqbmb13MlYrO+tWgbu+dAe
XdFSGsR11D6q9k8tUtVbJV7RW3a8jchgpJowOxaQzNlkKBWKRdgeCqUTE2f/jU1v
Gdmnmj3G89UAklnCKOqo2TkCgYEAuGBVEgkaIQ7daQdd4LKzaQ1T9VXWAGZPeY2C
oov9zM5W46RK4nqq88y/TvjJkAhBrAB2znVDVqcACHikd1RShZVIZY9tRDgB90SX
bwyiVbGrT1qVf6tTPJUAk3+vwq7O+XmY2R8dmk0zo3OWtYr7EKRbp+kcH7LK6VpD
PTLqvmUCgYEAt8rZWnAjGiipc/lLHMkoeKMK+JvA42HETVxQkdG17hTRzrotMMaF
CajslMcQ9m+ALHko2uyvsHVOdm66tQO65IKr5iavpcq8ZHKh51jJPdJpQwAJE9vr
d4ASXHEESfNK5/YPzMAIy019lgJal4bsy8tE8i6LIv6/PHVhNDs3Rsg=
-----END RSA PRIVATE KEY-----
[atguigu@node1 .ssh]$ cat id_rsa.pub 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDRrdLn5Bwi+N7B85ggdOWeXNtCBXtj3YkLpky3VcaSW9rnevAISF/3K99wzVVMEM1gfK+Xr6tMopTwQdN+kShhrluUHb31l8qQzg7E+RwT2fPu5M1PJgXzjABHUggS1PVstUk/KH8e+HzJ7pQyPU+FkgZ+U2NsbBiQmgMrfBfHBoPjF7JjTNUKAvtAOo0PQjDxdCsJJDxxBWBsHGmtRi73v6f/sY6zRYBILqC2tb0iSzEe1IGkjqJ4pXO/1+5ZxuFvwLnDsGp1VTqOsqqjsWdBoMoE10LjB46S7hMD5t4elzhbenx/xv5vdT4DSo6P65FVfu4OjG66cfjECOqfxOY9 atguigu@node1.itcast.cn
[atguigu@node1 .ssh]$ ssh-copy-id node2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/atguigu/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
atguigu@node2's password: 
Permission denied, please try again.
atguigu@node2's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node2'"
and check to make sure that only the key(s) you wanted were added.

[atguigu@node1 .ssh]$ ssh node2
Last login: Mon Mar 20 19:37:14 2023
[atguigu@node2 ~]$ hostname
node2.itcast.cn
[atguigu@node2 ~]$ exit
登出
Connection to node2 closed.
[atguigu@node1 .ssh]$ ssh-copy-id node3
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/atguigu/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
atguigu@node3's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node3'"
and check to make sure that only the key(s) you wanted were added.

[atguigu@node1 .ssh]$ ssh node3
Last login: Mon Mar 20 19:37:33 2023
[atguigu@node3 ~]$ hostname
node3.itcast.cn
[atguigu@node3 ~]$ exit
登出
Connection to node3 closed.
[atguigu@node1 .ssh]$ ssh node1
atguigu@node1's password: 
Last login: Mon Mar 20 19:36:46 2023
[atguigu@node1 ~]$ exit
登出
Connection to node1 closed.
[atguigu@node1 .ssh]$ ssh-copy-id node1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/atguigu/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
atguigu@node1's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node1'"
and check to make sure that only the key(s) you wanted were added.

[atguigu@node1 .ssh]$ ll
總用量 16
-rw------- 1 atguigu atguigu  405 3月  20 19:45 authorized_keys
-rw------- 1 atguigu atguigu 1679 3月  20 19:40 id_rsa
-rw-r--r-- 1 atguigu atguigu  405 3月  20 19:40 id_rsa.pub
-rw-r--r-- 1 atguigu atguigu  546 3月  20 19:23 known_hosts
[atguigu@node1 .ssh]$ cat authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDRrdLn5Bwi+N7B85ggdOWeXNtCBXtj3YkLpky3VcaSW9rnevAISF/3K99wzVVMEM1gfK+Xr6tMopTwQdN+kShhrluUHb31l8qQzg7E+RwT2fPu5M1PJgXzjABHUggS1PVstUk/KH8e+HzJ7pQyPU+FkgZ+U2NsbBiQmgMrfBfHBoPjF7JjTNUKAvtAOo0PQjDxdCsJJDxxBWBsHGmtRi73v6f/sY6zRYBILqC2tb0iSzEe1IGkjqJ4pXO/1+5ZxuFvwLnDsGp1VTqOsqqjsWdBoMoE10LjB46S7hMD5t4elzhbenx/xv5vdT4DSo6P65FVfu4OjG66cfjECOqfxOY9 atguigu@node1.itcast.cn
[atguigu@node1 .ssh]$ pwd
/home/atguigu/.ssh
[atguigu@node1 .ssh]$ su root
密碼:
[root@node1 .ssh]# ll
總用量 16
-rw------- 1 atguigu atguigu  810 3月  20 19:51 authorized_keys
-rw------- 1 atguigu atguigu 1679 3月  20 19:40 id_rsa
-rw-r--r-- 1 atguigu atguigu  405 3月  20 19:40 id_rsa.pub
-rw-r--r-- 1 atguigu atguigu  546 3月  20 19:23 known_hosts
[root@node1 .ssh]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
/root/.ssh/id_rsa already exists.
Overwrite (y/n)? 
[root@node1 .ssh]# ssh-copy-id node1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.
                (if you think this is a mistake, you may want to use -f option)

[root@node1 .ssh]# ssh-copy-id node2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.
                (if you think this is a mistake, you may want to use -f option)

[root@node1 .ssh]# ssh-copy-id node3
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.
                (if you think this is a mistake, you may want to use -f option)

[root@node1 .ssh]# su atguigu
[atguigu@node1 .ssh]$ cd ~
[atguigu@node1 ~]$ xsync hello.txt
==================== node1 ====================
hello.txt does not exists!
==================== node2 ====================
hello.txt does not exists!
==================== node3 ====================
hello.txt does not exists!
[atguigu@node1 ~]$ pwd
/home/atguigu
[atguigu@node1 ~]$ cd /home/atguigu/
[atguigu@node1 ~]$ xsync hello.txt
==================== node1 ====================
hello.txt does not exists!
==================== node2 ====================
hello.txt does not exists!
==================== node3 ====================
hello.txt does not exists!
[atguigu@node1 ~]$ xsync a.txt
==================== node1 ====================
sending incremental file list

sent 43 bytes  received 12 bytes  110.00 bytes/sec
total size is 3  speedup is 0.05
==================== node2 ====================
sending incremental file list
a.txt

sent 93 bytes  received 35 bytes  256.00 bytes/sec
total size is 3  speedup is 0.02
==================== node3 ====================
sending incremental file list
a.txt

sent 93 bytes  received 35 bytes  256.00 bytes/sec
total size is 3  speedup is 0.02
[atguigu@node1 ~]$ 
----------------------------------------------------------------------------------------
Connection established
Last login: Mon Mar 20 19:17:38 2023
[root@node2 ~]# su atguigu
[atguigu@node2 root]$ cd ~
[atguigu@node2 ~]$ pwd
/home/atguigu
[atguigu@node2 ~]$ ls -al
總用量 20
drwx------  5 atguigu atguigu 139 3月  20 19:17 .
drwxr-xr-x. 3 root    root     21 3月  20 10:08 ..
-rw-------  1 atguigu atguigu 108 3月  20 19:36 .bash_history
-rw-r--r--  1 atguigu atguigu  18 8月   8 2019 .bash_logout
-rw-r--r--  1 atguigu atguigu 193 8月   8 2019 .bash_profile
-rw-r--r--  1 atguigu atguigu 231 8月   8 2019 .bashrc
drwxrwxr-x  2 atguigu atguigu  19 3月  20 15:56 bin
drwxrwxr-x  3 atguigu atguigu  18 3月  20 10:17 .cache
drwxrwxr-x  3 atguigu atguigu  18 3月  20 10:17 .config
-rw-------  1 atguigu atguigu 557 3月  20 19:17 .viminfo
[atguigu@node2 ~]$ 
Connection closed
Connection established
Last login: Mon Mar 20 19:36:35 2023 from 192.168.88.1
[root@node2 ~]# cd /home/atguigu/.ssh/
您在 /var/spool/mail/root 中有新郵件
[root@node2 .ssh]# ll
總用量 4
-rw------- 1 atguigu atguigu 405 3月  20 19:43 authorized_keys
[root@node2 .ssh]# cat authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDRrdLn5Bwi+N7B85ggdOWeXNtCBXtj3YkLpky3VcaSW9rnevAISF/3K99wzVVMEM1gfK+Xr6tMopTwQdN+kShhrluUHb31l8qQzg7E+RwT2fPu5M1PJgXzjABHUggS1PVstUk/KH8e+HzJ7pQyPU+FkgZ+U2NsbBiQmgMrfBfHBoPjF7JjTNUKAvtAOo0PQjDxdCsJJDxxBWBsHGmtRi73v6f/sY6zRYBILqC2tb0iSzEe1IGkjqJ4pXO/1+5ZxuFvwLnDsGp1VTqOsqqjsWdBoMoE10LjB46S7hMD5t4elzhbenx/xv5vdT4DSo6P65FVfu4OjG66cfjECOqfxOY9 atguigu@node1.itcast.cn
[root@node2 .ssh]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
/root/.ssh/id_rsa already exists.
Overwrite (y/n)? y
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:rKXFOBLTEYhuY0iBovwDyguTlvqAZozIMIiAHWhaWyI root@node2.itcast.cn
The key's randomart image is:
+---[RSA 2048]----+
|.oo. .o.         |
|E++.o. .         |
|X=.+o .          |
|Bo*  o +         |
|B++.. o S        |
|%= o . *         |
|O*. . o          |
|+o               |
| ..              |
+----[SHA256]-----+
您在 /var/spool/mail/root 中有新郵件
[root@node2 .ssh]# ssh-copy-id node1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node1's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node1'"
and check to make sure that only the key(s) you wanted were added.

[root@node2 .ssh]# ll
總用量 4
-rw------- 1 atguigu atguigu 405 3月  20 19:43 authorized_keys
[root@node2 .ssh]# ssh-copy-id node3
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node3's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node3'"
and check to make sure that only the key(s) you wanted were added.

[root@node2 .ssh]# ll
總用量 4
-rw------- 1 atguigu atguigu 810 3月  20 19:51 authorized_keys
您在 /var/spool/mail/root 中有新郵件
[root@node2 .ssh]# ssh node2
root@node2's password: 
Last login: Mon Mar 20 19:38:58 2023 from 192.168.88.1
[root@node2 ~]# ls -al
總用量 56
dr-xr-x---.  7 root root 4096 3月  20 19:18 .
dr-xr-xr-x. 18 root root  258 10月 26 2021 ..
-rw-r--r--   1 root root    4 2月  22 11:10 111.txt
-rw-r--r--   1 root root    2 2月  22 11:08 1.txt
-rw-r--r--   1 root root    2 2月  22 11:09 2.txt
-rw-r--r--   1 root root    2 2月  22 11:09 3.txt
-rw-------.  1 root root 1340 9月  11 2020 anaconda-ks.cfg
-rw-------.  1 root root 3555 3月  20 19:38 .bash_history
-rw-r--r--.  1 root root   18 12月 29 2013 .bash_logout
-rw-r--r--.  1 root root  176 12月 29 2013 .bash_profile
-rw-r--r--.  1 root root  176 12月 29 2013 .bashrc
drwxr-xr-x.  3 root root   18 9月  11 2020 .cache
drwxr-xr-x.  3 root root   18 9月  11 2020 .config
-rw-r--r--.  1 root root  100 12月 29 2013 .cshrc
drwxr-xr-x.  2 root root   40 9月  11 2020 .oracle_jre_usage
drwxr-----   3 root root   19 3月  20 10:05 .pki
drwx------.  2 root root   80 3月  20 19:49 .ssh
-rw-r--r--.  1 root root  129 12月 29 2013 .tcshrc
-rw-r--r--   1 root root    0 3月  13 19:40 test.txt
-rw-------   1 root root 4620 3月  20 19:18 .viminfo
[root@node2 ~]# pwd
/root
[root@node2 ~]# cd .ssh
您在 /var/spool/mail/root 中有新郵件
[root@node2 .ssh]# ll
總用量 16
-rw-------. 1 root root  402 9月  11 2020 authorized_keys
-rw-------. 1 root root 1679 3月  20 19:48 id_rsa
-rw-r--r--. 1 root root  402 3月  20 19:48 id_rsa.pub
-rw-r--r--. 1 root root 1254 3月  20 09:25 known_hosts
[root@node2 .ssh]# su atguigu
[atguigu@node2 .ssh]$ cd ~
[atguigu@node2 ~]$ ll
總用量 0
drwxrwxr-x 2 atguigu atguigu 19 3月  20 15:56 bin
[atguigu@node2 ~]$ ll
總用量 4
-rw-r--r-- 1 atguigu atguigu  3 3月  20 19:59 a.txt
drwxrwxr-x 2 atguigu atguigu 19 3月  20 15:56 bin
您在 /var/spool/mail/root 中有新郵件
[atguigu@node2 ~]$ 
----------------------------------------------------------------------------------------
Connection established
Last login: Mon Mar 20 19:14:48 2023 from 192.168.88.1
[root@node3 ~]# su atguigu
[atguigu@node3 root]$ cd ~
[atguigu@node3 ~]$ pwd
/home/atguigu
[atguigu@node3 ~]$ ls -al
總用量 16
drwx------  5 atguigu atguigu 123 3月  20 17:25 .
drwxr-xr-x. 3 root    root     21 3月  20 10:08 ..
-rw-------  1 atguigu atguigu 163 3月  20 19:36 .bash_history
-rw-r--r--  1 atguigu atguigu  18 8月   8 2019 .bash_logout
-rw-r--r--  1 atguigu atguigu 193 8月   8 2019 .bash_profile
-rw-r--r--  1 atguigu atguigu 231 8月   8 2019 .bashrc
drwxrwxr-x  2 atguigu atguigu  19 3月  20 15:56 bin
drwxrwxr-x  3 atguigu atguigu  18 3月  20 10:18 .cache
drwxrwxr-x  3 atguigu atguigu  18 3月  20 10:18 .config
[atguigu@node3 ~]$ cd /home/atguigu/.ssh/
您在 /var/spool/mail/root 中有新郵件
[atguigu@node3 .ssh]$ ll
總用量 4
-rw------- 1 atguigu atguigu 405 3月  20 19:44 authorized_keys
[atguigu@node3 .ssh]$ cat authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDRrdLn5Bwi+N7B85ggdOWeXNtCBXtj3YkLpky3VcaSW9rnevAISF/3K99wzVVMEM1gfK+Xr6tMopTwQdN+kShhrluUHb31l8qQzg7E+RwT2fPu5M1PJgXzjABHUggS1PVstUk/KH8e+HzJ7pQyPU+FkgZ+U2NsbBiQmgMrfBfHBoPjF7JjTNUKAvtAOo0PQjDxdCsJJDxxBWBsHGmtRi73v6f/sY6zRYBILqC2tb0iSzEe1IGkjqJ4pXO/1+5ZxuFvwLnDsGp1VTqOsqqjsWdBoMoE10LjB46S7hMD5t4elzhbenx/xv5vdT4DSo6P65FVfu4OjG66cfjECOqfxOY9 atguigu@node1.itcast.cn
[atguigu@node3 .ssh]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/atguigu/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/atguigu/.ssh/id_rsa.
Your public key has been saved in /home/atguigu/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:UXniCTC0jqCGYUsYfBRoUBrlaei8V6dWx7lAvRypEko atguigu@node3.itcast.cn
The key's randomart image is:
+---[RSA 2048]----+
|*o=o..+.  ..     |
|.X o   oo.+ .    |
|*o*E ....* +     |
|*+o..oo +.*      |
|o= ..o.=S*       |
|. . . = o .      |
| . . o   .       |
|  . .            |
|                 |
+----[SHA256]-----+
您在 /var/spool/mail/root 中有新郵件
[atguigu@node3 .ssh]$ ssh-copy-id node1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/atguigu/.ssh/id_rsa.pub"
The authenticity of host 'node1 (192.168.88.151)' can't be established.
ECDSA key fingerprint is SHA256:+eLT3FrOEuEsxBxjOd89raPi/ChJz26WGAfqBpz/KEk.
ECDSA key fingerprint is MD5:18:42:ad:0f:2b:97:d8:b5:68:14:6a:98:e9:72:db:bb.
Are you sure you want to continue connecting (yes/no)? 
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
The authenticity of host 'node1 (192.168.88.151)' can't be established.
ECDSA key fingerprint is SHA256:+eLT3FrOEuEsxBxjOd89raPi/ChJz26WGAfqBpz/KEk.
ECDSA key fingerprint is MD5:18:42:ad:0f:2b:97:d8:b5:68:14:6a:98:e9:72:db:bb.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
atguigu@node1's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node1'"
and check to make sure that only the key(s) you wanted were added.

[atguigu@node3 .ssh]$ ssh-copy-id node2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/atguigu/.ssh/id_rsa.pub"
The authenticity of host 'node2 (192.168.88.152)' can't be established.
ECDSA key fingerprint is SHA256:+eLT3FrOEuEsxBxjOd89raPi/ChJz26WGAfqBpz/KEk.
ECDSA key fingerprint is MD5:18:42:ad:0f:2b:97:d8:b5:68:14:6a:98:e9:72:db:bb.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
atguigu@node2's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node2'"
and check to make sure that only the key(s) you wanted were added.

[atguigu@node3 .ssh]$ ll
總用量 16
-rw------- 1 atguigu atguigu  405 3月  20 19:44 authorized_keys
-rw------- 1 atguigu atguigu 1675 3月  20 19:50 id_rsa
-rw-r--r-- 1 atguigu atguigu  405 3月  20 19:50 id_rsa.pub
-rw-r--r-- 1 atguigu atguigu  364 3月  20 19:51 known_hosts
您在 /var/spool/mail/root 中有新郵件
[atguigu@node3 .ssh]$ cd ~
您在 /var/spool/mail/root 中有新郵件
[atguigu@node3 ~]$ ll
總用量 0
drwxrwxr-x 2 atguigu atguigu 19 3月  20 15:56 bin
[atguigu@node3 ~]$ ll
總用量 4
-rw-r--r-- 1 atguigu atguigu  3 3月  20 19:59 a.txt
drwxrwxr-x 2 atguigu atguigu 19 3月  20 15:56 bin
您在 /var/spool/mail/root 中有新郵件
[atguigu@node3 ~]$ cd /home/atguigu
您在 /var/spool/mail/root 中有新郵件
[atguigu@node3 ~]$ ll
總用量 4
-rw-r--r-- 1 atguigu atguigu  3 3月  20 19:59 a.txt
drwxrwxr-x 2 atguigu atguigu 19 3月  20 15:56 bin
[atguigu@node3 ~]$ 
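The flow above boils down to: generate a key pair with `ssh-keygen`, then append the public key to `~/.ssh/authorized_keys` on every target host (which is essentially all `ssh-copy-id` does), keeping strict permissions on `.ssh` (700) and `authorized_keys` (600) so that sshd will accept them. A local sketch against a scratch directory, so it never touches your real `~/.ssh`:

```shell
#!/bin/bash
# Sketch of what ssh-copy-id does under the hood, simulated against a
# scratch directory instead of a real remote host's ~/.ssh.
demo=$(mktemp -d)

# 1. Generate an RSA key pair with an empty passphrase (non-interactive).
ssh-keygen -t rsa -N '' -q -f "$demo/id_rsa"

# 2. "Install" the public key: append it to the (simulated) remote
#    authorized_keys file.
mkdir -p "$demo/remote_ssh"
cat "$demo/id_rsa.pub" >> "$demo/remote_ssh/authorized_keys"

# 3. sshd refuses keys behind loose permissions, so tighten them.
chmod 700 "$demo/remote_ssh"
chmod 600 "$demo/remote_ssh/authorized_keys"

grep -c 'ssh-rsa' "$demo/remote_ssh/authorized_keys"   # prints 1: one installed key
```

Note that, as the transcript shows, even `ssh node1` from node1 itself prompts for a password until the node's own key is installed in its own `authorized_keys` — and root and atguigu each need their own key pair, since keys live under each user's home directory.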

P030【030_尚硅谷_Hadoop_入門_集群配置】13:24

Note:

  • Do not deploy the NameNode and the SecondaryNameNode on the same server.
  • The ResourceManager also consumes a lot of memory; do not put it on the same machine as the NameNode or the SecondaryNameNode.

(The course slides name the machines hadoop102–hadoop104, while the transcripts in these notes use node1–node3; the roles map one-to-one — node1 = hadoop102, node2 = hadoop103, node3 = hadoop104, as the `start-dfs.sh` output below confirms.)

        hadoop102             hadoop103                       hadoop104
HDFS    NameNode, DataNode    DataNode                        SecondaryNameNode, DataNode
YARN    NodeManager           ResourceManager, NodeManager    NodeManager

Default file          Location inside the Hadoop jar
core-default.xml      hadoop-common-3.1.3.jar/core-default.xml
hdfs-default.xml      hadoop-hdfs-3.1.3.jar/hdfs-default.xml
yarn-default.xml      hadoop-yarn-common-3.1.3.jar/yarn-default.xml
mapred-default.xml    hadoop-mapreduce-client-core-3.1.3.jar/mapred-default.xml
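These defaults are overridden by the corresponding `*-site.xml` files under `$HADOOP_HOME/etc/hadoop`. As a minimal illustration, a `core-site.xml` for this cluster might look like the following — the NameNode address matches the job logs below (`hdfs://node1:8020`), while the data directory value is an assumption for illustration:

```xml
<!-- core-site.xml: overrides values from core-default.xml -->
<configuration>
    <!-- NameNode address, as seen in the MapReduce logs below -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://node1:8020</value>
    </property>
    <!-- Base directory for Hadoop data files (assumed path) -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/module/hadoop-3.1.3/data</value>
    </property>
</configuration>
```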

P031【031_尚硅谷_Hadoop_入門_群起集群并測(cè)試】16:52

Common issue: the DataNode fails to start after the cluster is brought up.


[atguigu@hadoop102 hadoop-3.1.3]$ sbin/start-dfs.sh

[atguigu@hadoop103 hadoop-3.1.3]$ sbin/start-yarn.sh


YARN: resource scheduling.
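As a sanity check on what the wordcount job in the transcript below actually computes — split the input into words, group identical words, count each group — the same result can be sketched locally with coreutils (hypothetical input file; the real job reads `/wcinput` from HDFS):

```shell
#!/bin/bash
# Local sketch of the wordcount computation: the map step splits lines
# into words, the shuffle groups them, the reduce step counts each group.
printf 'hadoop yarn\nhadoop mapreduce\n' > /tmp/word_demo.txt

tr -s ' ' '\n' < /tmp/word_demo.txt | sort | uniq -c | awk '{print $2, $1}'
# prints:
#   hadoop 2
#   mapreduce 1
#   yarn 1
```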

連接成功
Last login: Wed Mar 22 09:16:44 2023
[atguigu@node1 ~]$ cd /opt/module/hadoop-3.1.3
[atguigu@node1 hadoop-3.1.3]$ sbin/start-dfs.sh
Starting namenodes on [node1]
Starting datanodes
Starting secondary namenodes [node3]
[atguigu@node1 hadoop-3.1.3]$ jps
5619 DataNode
5398 NameNode
6647 Jps
6457 NodeManager
[atguigu@node1 hadoop-3.1.3]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount /wcinput /wcoutput
2023-03-22 09:22:26,672 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2023-03-22 09:22:26,954 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2023-03-22 09:22:26,954 INFO impl.MetricsSystemImpl: JobTracker metrics system started
2023-03-22 09:22:28,713 INFO input.FileInputFormat: Total input files to process : 1
2023-03-22 09:22:28,764 INFO mapreduce.JobSubmitter: number of splits:1
2023-03-22 09:22:29,208 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local321834777_0001
2023-03-22 09:22:29,218 INFO mapreduce.JobSubmitter: Executing with tokens: []
2023-03-22 09:22:29,515 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
2023-03-22 09:22:29,520 INFO mapreduce.Job: Running job: job_local321834777_0001
2023-03-22 09:22:29,525 INFO mapred.LocalJobRunner: OutputCommitter set in config null
2023-03-22 09:22:29,551 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
2023-03-22 09:22:29,551 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2023-03-22 09:22:29,553 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2023-03-22 09:22:29,785 INFO mapred.LocalJobRunner: Waiting for map tasks
2023-03-22 09:22:29,791 INFO mapred.LocalJobRunner: Starting task: attempt_local321834777_0001_m_000000_0
2023-03-22 09:22:29,908 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
2023-03-22 09:22:29,910 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2023-03-22 09:22:30,037 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
2023-03-22 09:22:30,056 INFO mapred.MapTask: Processing split: hdfs://node1:8020/wcinput/word.txt:0+45
2023-03-22 09:22:30,532 INFO mapreduce.Job: Job job_local321834777_0001 running in uber mode : false
2023-03-22 09:22:30,547 INFO mapreduce.Job:  map 0% reduce 0%
2023-03-22 09:22:31,234 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
2023-03-22 09:22:31,235 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
2023-03-22 09:22:31,235 INFO mapred.MapTask: soft limit at 83886080
2023-03-22 09:22:31,235 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
2023-03-22 09:22:31,235 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
2023-03-22 09:22:31,277 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2023-03-22 09:22:31,542 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2023-03-22 09:22:36,432 INFO mapred.LocalJobRunner: 
2023-03-22 09:22:36,463 INFO mapred.MapTask: Starting flush of map output
2023-03-22 09:22:36,463 INFO mapred.MapTask: Spilling map output
2023-03-22 09:22:36,463 INFO mapred.MapTask: bufstart = 0; bufend = 69; bufvoid = 104857600
2023-03-22 09:22:36,463 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214376(104857504); length = 21/6553600
2023-03-22 09:22:36,615 INFO mapred.MapTask: Finished spill 0
2023-03-22 09:22:36,655 INFO mapred.Task: Task:attempt_local321834777_0001_m_000000_0 is done. And is in the process of committing
2023-03-22 09:22:36,701 INFO mapred.LocalJobRunner: map
2023-03-22 09:22:36,701 INFO mapred.Task: Task 'attempt_local321834777_0001_m_000000_0' done.
2023-03-22 09:22:36,738 INFO mapred.Task: Final Counters for attempt_local321834777_0001_m_000000_0: Counters: 23
        File System Counters
                FILE: Number of bytes read=316543
                FILE: Number of bytes written=822653
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=45
                HDFS: Number of bytes written=0
                HDFS: Number of read operations=5
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=1
        Map-Reduce Framework
                Map input records=4
                Map output records=6
                Map output bytes=69
                Map output materialized bytes=60
                Input split bytes=99
                Combine input records=6
                Combine output records=4
                Spilled Records=4
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=0
                Total committed heap usage (bytes)=271056896
        File Input Format Counters 
                Bytes Read=45
2023-03-22 09:22:36,739 INFO mapred.LocalJobRunner: Finishing task: attempt_local321834777_0001_m_000000_0
2023-03-22 09:22:36,810 INFO mapred.LocalJobRunner: map task executor complete.
2023-03-22 09:22:36,849 INFO mapred.LocalJobRunner: Waiting for reduce tasks
2023-03-22 09:22:36,876 INFO mapred.LocalJobRunner: Starting task: attempt_local321834777_0001_r_000000_0
2023-03-22 09:22:37,033 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
2023-03-22 09:22:37,033 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2023-03-22 09:22:37,035 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
2023-03-22 09:22:37,043 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@4db08ce7
2023-03-22 09:22:37,046 WARN impl.MetricsSystemImpl: JobTracker metrics system already initialized!
2023-03-22 09:22:37,178 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=642252800, maxSingleShuffleLimit=160563200, mergeThreshold=423886880, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2023-03-22 09:22:37,216 INFO reduce.EventFetcher: attempt_local321834777_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2023-03-22 09:22:37,376 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local321834777_0001_m_000000_0 decomp: 56 len: 60 to MEMORY
2023-03-22 09:22:37,409 INFO reduce.InMemoryMapOutput: Read 56 bytes from map-output for attempt_local321834777_0001_m_000000_0
2023-03-22 09:22:37,421 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 56, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->56
2023-03-22 09:22:37,457 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
2023-03-22 09:22:37,460 INFO mapred.LocalJobRunner: 1 / 1 copied.
2023-03-22 09:22:37,460 INFO reduce.MergeManagerImpl: finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
2023-03-22 09:22:37,504 INFO mapreduce.Job:  map 100% reduce 0%
2023-03-22 09:22:37,534 INFO mapred.Merger: Merging 1 sorted segments
2023-03-22 09:22:37,534 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 46 bytes
2023-03-22 09:22:37,536 INFO reduce.MergeManagerImpl: Merged 1 segments, 56 bytes to disk to satisfy reduce memory limit
2023-03-22 09:22:37,537 INFO reduce.MergeManagerImpl: Merging 1 files, 60 bytes from disk
2023-03-22 09:22:37,541 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
2023-03-22 09:22:37,542 INFO mapred.Merger: Merging 1 sorted segments
2023-03-22 09:22:37,547 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 46 bytes
2023-03-22 09:22:37,708 INFO mapred.LocalJobRunner: 1 / 1 copied.
2023-03-22 09:22:37,831 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
2023-03-22 09:22:38,001 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2023-03-22 09:22:39,978 INFO mapred.Task: Task:attempt_local321834777_0001_r_000000_0 is done. And is in the process of committing
2023-03-22 09:22:39,989 INFO mapred.LocalJobRunner: 1 / 1 copied.
2023-03-22 09:22:39,990 INFO mapred.Task: Task attempt_local321834777_0001_r_000000_0 is allowed to commit now
2023-03-22 09:22:40,106 INFO output.FileOutputCommitter: Saved output of task 'attempt_local321834777_0001_r_000000_0' to hdfs://node1:8020/wcoutput
2023-03-22 09:22:40,111 INFO mapred.LocalJobRunner: reduce > reduce
2023-03-22 09:22:40,111 INFO mapred.Task: Task 'attempt_local321834777_0001_r_000000_0' done.
2023-03-22 09:22:40,112 INFO mapred.Task: Final Counters for attempt_local321834777_0001_r_000000_0: Counters: 29
        File System Counters
                FILE: Number of bytes read=316695
                FILE: Number of bytes written=822713
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=45
                HDFS: Number of bytes written=38
                HDFS: Number of read operations=10
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=3
        Map-Reduce Framework
                Combine input records=0
                Combine output records=0
                Reduce input groups=4
                Reduce shuffle bytes=60
                Reduce input records=4
                Reduce output records=4
                Spilled Records=4
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=159
                Total committed heap usage (bytes)=272105472
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Output Format Counters 
                Bytes Written=38
2023-03-22 09:22:40,112 INFO mapred.LocalJobRunner: Finishing task: attempt_local321834777_0001_r_000000_0
2023-03-22 09:22:40,115 INFO mapred.LocalJobRunner: reduce task executor complete.
2023-03-22 09:22:40,507 INFO mapreduce.Job:  map 100% reduce 100%
2023-03-22 09:22:40,507 INFO mapreduce.Job: Job job_local321834777_0001 completed successfully
2023-03-22 09:22:40,529 INFO mapreduce.Job: Counters: 35
        File System Counters
                FILE: Number of bytes read=633238
                FILE: Number of bytes written=1645366
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=90
                HDFS: Number of bytes written=38
                HDFS: Number of read operations=15
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=4
        Map-Reduce Framework
                Map input records=4
                Map output records=6
                Map output bytes=69
                Map output materialized bytes=60
                Input split bytes=99
                Combine input records=6
                Combine output records=4
                Reduce input groups=4
                Reduce shuffle bytes=60
                Reduce input records=4
                Reduce output records=4
                Spilled Records=8
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=159
                Total committed heap usage (bytes)=543162368
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters 
                Bytes Read=45
        File Output Format Counters 
                Bytes Written=38
[atguigu@node1 hadoop-3.1.3]$ 

P032【032_尚硅谷_Hadoop_入門_集群崩潰處理辦法】08:10

First stop HDFS and YARN (sbin/stop-dfs.sh, sbin/stop-yarn.sh), then delete the data/ and logs/ directories on every node, and finally re-format the NameNode with hdfs namenode -format.

[atguigu@node1 hadoop-3.1.3]$ jps
5619 DataNode
5398 NameNode
18967 Jps
6457 NodeManager
[atguigu@node1 hadoop-3.1.3]$ kill -9 5619
[atguigu@node1 hadoop-3.1.3]$ jps
20036 Jps
5398 NameNode
6457 NodeManager
[atguigu@node1 hadoop-3.1.3]$ sbin/stop-dfs.sh
Stopping namenodes on [node1]
Stopping datanodes
Stopping secondary namenodes [node3]
[atguigu@node1 hadoop-3.1.3]$ jps
32126 Jps
[atguigu@node1 hadoop-3.1.3]$ 
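The recovery steps above can be sketched as a dry-run script. HADOOP_HOME, the install path /opt/module/hadoop-3.1.3 and the node names are assumptions taken from this tutorial's layout; the script echoes each command instead of running it, so the sequence can be reviewed before executing it for real:

```shell
#!/bin/bash
# Dry-run sketch of the crash-recovery procedure above.
# HADOOP_HOME and the node names follow this tutorial's layout (assumptions);
# every step is echoed rather than executed, so it can be reviewed first.
HADOOP_HOME=${HADOOP_HOME:-/opt/module/hadoop-3.1.3}
NODES="node1 node2 node3"

reset_cluster() {
    # 1. Stop YARN and HDFS -- never re-format while daemons are running.
    echo "$HADOOP_HOME/sbin/stop-yarn.sh"
    echo "$HADOOP_HOME/sbin/stop-dfs.sh"
    # 2. Remove data/ and logs/ on every node so stale cluster IDs are gone.
    for host in $NODES; do
        echo "ssh $host rm -rf $HADOOP_HOME/data $HADOOP_HOME/logs"
    done
    # 3. Re-format the NameNode (generates a fresh cluster ID), then restart.
    echo "hdfs namenode -format"
    echo "$HADOOP_HOME/sbin/start-dfs.sh"
    echo "$HADOOP_HOME/sbin/start-yarn.sh"
}

reset_cluster
```

Replacing each echo with the real command (and running hdfs namenode -format on node1 only) performs the reset; the key point is the ordering: stop everything, wipe data/ and logs/ on every node, format, then start.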

P033【033_尚硅谷_Hadoop_入門_歷史服務(wù)器配置】05:26

node1:mapred --daemon start historyserver
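The history server picks up its RPC and web addresses from mapred-site.xml. A minimal fragment for this cluster (node1 hosts the daemon; the two property names are standard Hadoop configuration, and the fragment goes inside the file's <configuration> element -- distribute it with xsync before starting the daemon):

```xml
<!-- mapred-site.xml -->
<!-- history server RPC endpoint -->
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>node1:10020</value>
</property>
<!-- history server web UI -->
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>node1:19888</value>
</property>
```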

[atguigu@node1 hadoop-3.1.3]$ mapred --daemon start historyserver
[atguigu@node1 hadoop-3.1.3]$ jps
27061 DataNode
37557 NodeManager
42666 JobHistoryServer
26879 NameNode
42815 Jps
[atguigu@node1 hadoop-3.1.3]$ hadoop fs -put wcinput/word.txt /input
2023-03-22 09:58:16,749 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
[atguigu@node1 hadoop-3.1.3]$ hadoop jar share/hadoop/
client/    common/    hdfs/      mapreduce/ tools/     yarn/      
[atguigu@node1 hadoop-3.1.3]$ hadoop jar share/hadoop/
client/    common/    hdfs/      mapreduce/ tools/     yarn/      
[atguigu@node1 hadoop-3.1.3]$ hadoop jar share/hadoop/mapreduce/
hadoop-mapreduce-client-app-3.1.3.jar              hadoop-mapreduce-client-jobclient-3.1.3.jar        hadoop-mapreduce-examples-3.1.3.jar
hadoop-mapreduce-client-common-3.1.3.jar           hadoop-mapreduce-client-jobclient-3.1.3-tests.jar  jdiff/
hadoop-mapreduce-client-core-3.1.3.jar             hadoop-mapreduce-client-nativetask-3.1.3.jar       lib/
hadoop-mapreduce-client-hs-3.1.3.jar               hadoop-mapreduce-client-shuffle-3.1.3.jar          lib-examples/
hadoop-mapreduce-client-hs-plugins-3.1.3.jar       hadoop-mapreduce-client-uploader-3.1.3.jar         sources/
[atguigu@node1 hadoop-3.1.3]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount /input /output
2023-03-22 09:59:43,045 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2023-03-22 09:59:43,486 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2023-03-22 09:59:43,486 INFO impl.MetricsSystemImpl: JobTracker metrics system started
2023-03-22 09:59:45,880 INFO input.FileInputFormat: Total input files to process : 1
2023-03-22 09:59:45,985 INFO mapreduce.JobSubmitter: number of splits:1
2023-03-22 09:59:46,637 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local146698941_0001
2023-03-22 09:59:46,642 INFO mapreduce.JobSubmitter: Executing with tokens: []
2023-03-22 09:59:46,972 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
2023-03-22 09:59:46,974 INFO mapreduce.Job: Running job: job_local146698941_0001
2023-03-22 09:59:47,033 INFO mapred.LocalJobRunner: OutputCommitter set in config null
2023-03-22 09:59:47,054 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
2023-03-22 09:59:47,055 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2023-03-22 09:59:47,058 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2023-03-22 09:59:47,181 INFO mapred.LocalJobRunner: Waiting for map tasks
2023-03-22 09:59:47,182 INFO mapred.LocalJobRunner: Starting task: attempt_local146698941_0001_m_000000_0
2023-03-22 09:59:47,251 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
2023-03-22 09:59:47,255 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2023-03-22 09:59:47,376 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
2023-03-22 09:59:47,390 INFO mapred.MapTask: Processing split: hdfs://node1:8020/input/word.txt:0+45
2023-03-22 09:59:48,125 INFO mapreduce.Job: Job job_local146698941_0001 running in uber mode : false
2023-03-22 09:59:48,150 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
2023-03-22 09:59:48,150 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
2023-03-22 09:59:48,150 INFO mapred.MapTask: soft limit at 83886080
2023-03-22 09:59:48,150 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
2023-03-22 09:59:48,150 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
2023-03-22 09:59:48,186 INFO mapreduce.Job:  map 0% reduce 0%
2023-03-22 09:59:48,202 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2023-03-22 09:59:49,223 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2023-03-22 09:59:50,371 INFO mapred.LocalJobRunner: 
2023-03-22 09:59:50,416 INFO mapred.MapTask: Starting flush of map output
2023-03-22 09:59:50,416 INFO mapred.MapTask: Spilling map output
2023-03-22 09:59:50,416 INFO mapred.MapTask: bufstart = 0; bufend = 69; bufvoid = 104857600
2023-03-22 09:59:50,416 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214376(104857504); length = 21/6553600
2023-03-22 09:59:50,543 INFO mapred.MapTask: Finished spill 0
2023-03-22 09:59:50,733 INFO mapred.Task: Task:attempt_local146698941_0001_m_000000_0 is done. And is in the process of committing
2023-03-22 09:59:50,764 INFO mapred.LocalJobRunner: map
2023-03-22 09:59:50,764 INFO mapred.Task: Task 'attempt_local146698941_0001_m_000000_0' done.
2023-03-22 09:59:50,847 INFO mapred.Task: Final Counters for attempt_local146698941_0001_m_000000_0: Counters: 23
        File System Counters
                FILE: Number of bytes read=316541
                FILE: Number of bytes written=822643
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=45
                HDFS: Number of bytes written=0
                HDFS: Number of read operations=5
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=1
        Map-Reduce Framework
                Map input records=4
                Map output records=6
                Map output bytes=69
                Map output materialized bytes=60
                Input split bytes=97
                Combine input records=6
                Combine output records=4
                Spilled Records=4
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=0
                Total committed heap usage (bytes)=267386880
        File Input Format Counters 
                Bytes Read=45
2023-03-22 09:59:50,848 INFO mapred.LocalJobRunner: Finishing task: attempt_local146698941_0001_m_000000_0
2023-03-22 09:59:50,946 INFO mapred.LocalJobRunner: map task executor complete.
2023-03-22 09:59:51,007 INFO mapred.LocalJobRunner: Waiting for reduce tasks
2023-03-22 09:59:51,025 INFO mapred.LocalJobRunner: Starting task: attempt_local146698941_0001_r_000000_0
2023-03-22 09:59:51,156 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
2023-03-22 09:59:51,157 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2023-03-22 09:59:51,158 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
2023-03-22 09:59:51,213 INFO mapreduce.Job:  map 100% reduce 0%
2023-03-22 09:59:51,226 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@1f0445e7
2023-03-22 09:59:51,238 WARN impl.MetricsSystemImpl: JobTracker metrics system already initialized!
2023-03-22 09:59:51,338 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=642252800, maxSingleShuffleLimit=160563200, mergeThreshold=423886880, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2023-03-22 09:59:51,355 INFO reduce.EventFetcher: attempt_local146698941_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2023-03-22 09:59:51,632 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local146698941_0001_m_000000_0 decomp: 56 len: 60 to MEMORY
2023-03-22 09:59:51,665 INFO reduce.InMemoryMapOutput: Read 56 bytes from map-output for attempt_local146698941_0001_m_000000_0
2023-03-22 09:59:51,675 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 56, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->56
2023-03-22 09:59:51,683 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
2023-03-22 09:59:51,689 INFO mapred.LocalJobRunner: 1 / 1 copied.
2023-03-22 09:59:51,693 INFO reduce.MergeManagerImpl: finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
2023-03-22 09:59:51,715 INFO mapred.Merger: Merging 1 sorted segments
2023-03-22 09:59:51,716 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 46 bytes
2023-03-22 09:59:51,719 INFO reduce.MergeManagerImpl: Merged 1 segments, 56 bytes to disk to satisfy reduce memory limit
2023-03-22 09:59:51,720 INFO reduce.MergeManagerImpl: Merging 1 files, 60 bytes from disk
2023-03-22 09:59:51,725 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
2023-03-22 09:59:51,725 INFO mapred.Merger: Merging 1 sorted segments
2023-03-22 09:59:51,728 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 46 bytes
2023-03-22 09:59:51,729 INFO mapred.LocalJobRunner: 1 / 1 copied.
2023-03-22 09:59:51,867 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
2023-03-22 09:59:52,038 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2023-03-22 09:59:52,284 INFO mapred.Task: Task:attempt_local146698941_0001_r_000000_0 is done. And is in the process of committing
2023-03-22 09:59:52,302 INFO mapred.LocalJobRunner: 1 / 1 copied.
2023-03-22 09:59:52,302 INFO mapred.Task: Task attempt_local146698941_0001_r_000000_0 is allowed to commit now
2023-03-22 09:59:52,339 INFO output.FileOutputCommitter: Saved output of task 'attempt_local146698941_0001_r_000000_0' to hdfs://node1:8020/output
2023-03-22 09:59:52,343 INFO mapred.LocalJobRunner: reduce > reduce
2023-03-22 09:59:52,343 INFO mapred.Task: Task 'attempt_local146698941_0001_r_000000_0' done.
2023-03-22 09:59:52,344 INFO mapred.Task: Final Counters for attempt_local146698941_0001_r_000000_0: Counters: 29
        File System Counters
                FILE: Number of bytes read=316693
                FILE: Number of bytes written=822703
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=45
                HDFS: Number of bytes written=38
                HDFS: Number of read operations=10
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=3
        Map-Reduce Framework
                Combine input records=0
                Combine output records=0
                Reduce input groups=4
                Reduce shuffle bytes=60
                Reduce input records=4
                Reduce output records=4
                Spilled Records=4
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=101
                Total committed heap usage (bytes)=267386880
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Output Format Counters 
                Bytes Written=38
2023-03-22 09:59:52,344 INFO mapred.LocalJobRunner: Finishing task: attempt_local146698941_0001_r_000000_0
2023-03-22 09:59:52,344 INFO mapred.LocalJobRunner: reduce task executor complete.
2023-03-22 09:59:53,216 INFO mapreduce.Job:  map 100% reduce 100%
2023-03-22 09:59:53,219 INFO mapreduce.Job: Job job_local146698941_0001 completed successfully
2023-03-22 09:59:53,267 INFO mapreduce.Job: Counters: 35
        File System Counters
                FILE: Number of bytes read=633234
                FILE: Number of bytes written=1645346
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=90
                HDFS: Number of bytes written=38
                HDFS: Number of read operations=15
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=4
        Map-Reduce Framework
                Map input records=4
                Map output records=6
                Map output bytes=69
                Map output materialized bytes=60
                Input split bytes=97
                Combine input records=6
                Combine output records=4
                Reduce input groups=4
                Reduce shuffle bytes=60
                Reduce input records=4
                Reduce output records=4
                Spilled Records=8
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=101
                Total committed heap usage (bytes)=534773760
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters 
                Bytes Read=45
        File Output Format Counters 
                Bytes Written=38
[atguigu@node1 hadoop-3.1.3]$ 

P034【034_尚硅谷_Hadoop_入門_日志聚集功能配置】05:42
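Log aggregation is enabled in yarn-site.xml. A minimal fragment (inside <configuration>): the property names are standard YARN configuration, yarn.log.server.url points at the history server set up in the previous video, and 604800 seconds keeps aggregated logs for 7 days. After distributing the file with xsync, YARN and the history server must be restarted, as the shell history below shows:

```xml
<!-- yarn-site.xml -->
<property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
</property>
<property>
    <name>yarn.log.server.url</name>
    <value>http://node1:19888/jobhistory/logs</value>
</property>
<!-- keep aggregated logs for 7 days -->
<property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>604800</value>
</property>
```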

  145  jps
  146  mapred --daemon start historyserver
  147  jps
  148  xsync $HADOOP_HOME/etc/hadoop/yarn-site.xml
  149  jps
  150  mapred --daemon stop historyserver
  151  mapred --daemon start historyserver
  152  jps
  153  hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount /input /output2
  154  history 

   58  jps
   61  sbin/start-yarn.sh
   62  sbin/stop-yarn.sh
   63  sbin/start-yarn.sh
   64  history 

P035【035_尚硅谷_Hadoop_入門_兩個(gè)常用腳本】09:18

  155  cd /home/atguigu/bin
  156  ll
  157  vim myhadoop.sh
  158  chmod +x myhadoop.sh
  159  chmod 777 myhadoop.sh 
  160  ll
  161  jps
  162  myhadoop.sh stop
  163  ./myhadoop.sh stop
  164  jps
  165  ./myhadoop.sh start
  166  jps
  167  vim jpsall
  168  chmod +x jpsall
  169  ll
  170  chmod 777 jpsall 
  171  ll
  172  ./jpsall 
  173  cd ~
  174  ll
  175  xsync /home/atguigu/bin/
  176  history 
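The two helper scripts created above, myhadoop.sh and jpsall, can be sketched roughly as follows. The hostnames and HADOOP_HOME are assumptions from this tutorial's layout (node1 runs the NameNode and history server, node2 the ResourceManager); adjust them for your cluster:

```shell
#!/bin/bash
# Sketches of the two helper scripts from this video, adapted to this
# cluster's hostnames. node1 = NameNode + history server, node2 =
# ResourceManager, and the install path are assumptions -- adjust as needed.
HADOOP_HOME=/opt/module/hadoop-3.1.3

# myhadoop: start or stop HDFS, YARN and the history server in one command.
myhadoop() {
    if [ $# -lt 1 ]; then
        echo "No Args Input..."
        return 1
    fi
    case $1 in
    start)
        echo "=========== starting hadoop cluster ==========="
        ssh node1 "$HADOOP_HOME/sbin/start-dfs.sh"
        ssh node2 "$HADOOP_HOME/sbin/start-yarn.sh"
        ssh node1 "$HADOOP_HOME/bin/mapred --daemon start historyserver"
        ;;
    stop)
        echo "=========== stopping hadoop cluster ==========="
        ssh node1 "$HADOOP_HOME/bin/mapred --daemon stop historyserver"
        ssh node2 "$HADOOP_HOME/sbin/stop-yarn.sh"
        ssh node1 "$HADOOP_HOME/sbin/stop-dfs.sh"
        ;;
    *)
        echo "Input Args Error..."
        return 1
        ;;
    esac
}

# jpsall: run jps on every node to see which daemons are up.
jpsall() {
    for host in node1 node2 node3; do
        echo "=========== $host ==========="
        ssh "$host" jps
    done
}
```

To use them as standalone scripts, save each function to its own file under /home/atguigu/bin with `myhadoop "$@"` (or the jpsall loop) as the entry point, chmod +x it, and xsync the bin directory as in the history above.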

P036【036_尚硅谷_Hadoop_入門_兩道面試題】04:15

Common port numbers:

Port                                  Hadoop 2.x     Hadoop 3.x
NameNode internal communication       8020 / 9000    8020 / 9000 / 9820
NameNode HTTP UI                      50070          9870
MapReduce job-tracking UI (YARN)      8088           8088
History server                        19888          19888
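Two of the Hadoop 3.x ports in the table come straight from this cluster's own configuration; a reminder of where they are set (hostname node1 matches this tutorial's setup, and both fragments sit inside the <configuration> element of the named file):

```xml
<!-- core-site.xml: NameNode internal (RPC) address -->
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://node1:8020</value>
</property>

<!-- hdfs-site.xml: NameNode web UI address -->
<property>
    <name>dfs.namenode.http-address</name>
    <value>node1:9870</value>
</property>
```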

P037【037_尚硅谷_Hadoop_入門_集群時(shí)間同步】11:27

Many companies develop on isolated networks with no access to the public Internet.

Do not type along with the commands in this video!!! Remember that!!!

P038【038_尚硅谷_Hadoop_入門_常見問(wèn)題總結(jié)】10:57

Starting the cluster sometimes as root and sometimes as atguigu leaves inconsistent file ownership; start the cluster only as atguigu.

The files are all binary, so they can be split arbitrarily.
