Integrating Kerberos with CDH 6.3.2

1. Reference docs

Enabling Kerberos on CDH: Kerberos Security Artifacts Overview | 6.3.x | Cloudera Documentation

Disabling Kerberos on CDH: https://www.sameerahmad.net/blog/disable-kerberos-on-CDH; https://community.cloudera.com/t5/Support-Questions/Disabling-Kerberos/td-p/19654

2. Integrating Kerberos with CDH

Open the **Administration -> Security** page in Cloudera Manager.
1) Select "Enable Kerberos".

2) Confirm the environment prerequisites (check all of the boxes).

3) Fill in the KDC configuration.

Note that the Kerberos Encryption Types entered here must match the encryption types the KDC actually supports (i.e., the values configured in kdc.conf).
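
For reference, the supported types live in the KDC's own kdc.conf. The snippet below is only an illustrative sketch: the realm matches this article, but the enctype list is an assumption that must be checked against your actual KDC.

# /var/kerberos/krb5kdc/kdc.conf (illustrative sketch)
[realms]
 XICHUAN.COM = {
  # whatever you enter in the CM wizard must appear in this list
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal
 }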

4) KRB5 information.

5) Enter the principal name and password.
Note: the principal and password entered here belong to the cloudera-scm/admin@XICHUAN.COM account created in the Kerberos basics article.
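
If that principal does not exist yet, it can be created on the KDC host beforehand; a minimal sketch using kadmin.local (run as root on the KDC node, node02 in this article's setup):

# create the admin principal used by Cloudera Manager; you will be prompted for its password
kadmin.local -q "addprinc cloudera-scm/admin@XICHUAN.COM"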

6) Wait for the KDC credential import to finish.

7) Continue.

8) Restart the cluster.

9) Done.

Cloudera Manager then restarts the cluster services automatically, and once they are up it reports that Kerberos is enabled. While enabling Kerberos, Cloudera Manager automatically does the following:

generates one principal per cluster node for each service account;
creates a keytab for each of those principals;
deploys the keytab files to the corresponding nodes;
adds the Kerberos-related settings to each service's configuration files.
Once Kerberos is enabled, every access to cluster resources must be made with an appropriate account, otherwise Kerberos authentication fails.
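
You can see the result of the keytab deployment on any node; a hedged example (the exact process directory name varies per role and per restart, so the wildcard below is an assumption):

# list the entries in the keytab deployed for the NameNode role
klist -kt /var/run/cloudera-scm-agent/process/*-hdfs-NAMENODE/hdfs.keytab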

3. Using Kerberos

3.1 Accessing HDFS

1. Before authentication (access fails):

[root@node01 ~]# hdfs dfs -ls /
22/10/26 15:55:50 WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
ls: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "node01/192.168.1.230"; destination host is: "node01":8020; 
[root@node01 ~]# 

2. Authenticating (password authentication is used here):

[root@node01 ~]# kinit xichuan/admin
Password for xichuan/admin@XICHUAN.COM: 
[root@node01 ~]#

3. After authentication (access works):

[root@node01 ~]# hdfs dfs -ls /
Found 2 items
drwxrwxrwt   - hdfs supergroup          0 2022-10-25 16:08 /tmp
drwxr-xr-x   - hdfs supergroup          0 2022-10-25 16:15 /user
[root@node01 ~]# 
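
For unattended jobs, the interactive password prompt can be replaced with keytab-based authentication. A sketch, assuming you export a keytab for xichuan/admin on the KDC host first (-norandkey keeps the existing password valid and requires kadmin.local):

kadmin.local -q "xst -norandkey -k /opt/xichuan.keytab xichuan/admin@XICHUAN.COM"
kinit -kt /opt/xichuan.keytab xichuan/admin@XICHUAN.COM
klist    # verify that the ticket was granted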

3.2 Accessing Hive

1. Before authentication (access fails):

[root@node01 ~]# hive
WARNING: Use "yarn jar" to launch YARN applications.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/jars/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/jars/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/jars/hive-common-2.1.1-cdh6.3.2.jar!/hive-log4j2.properties Async: false
Exception in thread "main" java.lang.RuntimeException: java.io.IOException: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "node01/192.168.1.230"; destination host is: "node01":8020; 
	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:604)
	at org.apache.hadoop.hive.ql.session.SessionState.beginStart(SessionState.java:545)
	at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:763)
	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:699)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:313)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:227)
Caused by: java.io.IOException: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "node01/192.168.1.230"; destination host is: "node01":8020; 
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:808)
	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1503)
	at org.apache.hadoop.ipc.Client.call(Client.java:1445)
	at org.apache.hadoop.ipc.Client.call(Client.java:1355)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
	at com.sun.proxy.$Proxy28.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:875)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
	at com.sun.proxy.$Proxy29.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1630)
	at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1496)
	at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1493)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1508)
	at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1617)
	at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:712)
	at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:650)
	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:580)
	... 9 more
Caused by: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
	at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:756)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
	at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:719)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:812)
	at org.apache.hadoop.ipc.Client$Connection.access$3600(Client.java:410)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1560)
	at org.apache.hadoop.ipc.Client.call(Client.java:1391)
	... 33 more
Caused by: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
	at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:173)
	at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:390)
	at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:614)
	at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:410)
	at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:799)
	at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:795)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:795)
	... 36 more

2. Authenticating (password authentication is used here):

[root@node01 ~]# kinit xichuan/admin
Password for xichuan/admin@XICHUAN.COM: 
[root@node01 ~]#

3. After authentication (access works):

[root@node01 ~]# hive
WARNING: Use "yarn jar" to launch YARN applications.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/jars/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/jars/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/jars/hive-common-2.1.1-cdh6.3.2.jar!/hive-log4j2.properties Async: false

WARNING: Hive CLI is deprecated and migration to Beeline is recommended.
hive> show databases;
OK
default
test_dws
Time taken: 2.369 seconds, Fetched: 4 row(s)
hive> select * from test_dws.test_table limit 1;
OK
NULL	1919	3	1	1	P	2630	480										HUAWEI_JSCC	ZTT	    JSCC		SA8200RFIV110		35042026		NULL	NULL	NULL	NULL	NULL	NULL	NULL	NULL	NULL	NULL	NULL	NULL	NULLNULL	NULL	NULL	NULL		NULL	NULL	NULL	NULL	NULL	NULL	NULL	NULL	NULL	NULL		NULL		NULL	NULL	NULLNULL	NULL	0	1641440662000	FT_ECID	1	NULL	FTTB3Q1G40000	LX-2021122
Time taken: 21.826 seconds, Fetched: 1 row(s)
hive> 
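
Since the Hive CLI is deprecated, the same Kerberos session also works through Beeline; a hedged sketch of the JDBC URL, assuming HiveServer2 runs on node01 with the default service principal hive/node01@XICHUAN.COM (check the actual value in your HiveServer2 configuration):

# after kinit, the HiveServer2 principal goes into the connection URL
beeline -u "jdbc:hive2://node01:10000/default;principal=hive/node01@XICHUAN.COM"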

3.3 Accessing impala-shell

1. Before authentication (cannot connect):

[root@node01 ~]# impala-shell
Starting Impala Shell without Kerberos authentication
Opened TCP connection to node01:21000
Error connecting: TTransportException, TSocket read 0 bytes
***********************************************************************************
Welcome to the Impala shell.
(Impala Shell v3.2.0-cdh6.3.2 (1bb9836) built on Fri Nov  8 07:22:06 PST 2019)

To see a summary of a query's progress that updates in real-time, run 'set
LIVE_PROGRESS=1;'.
***********************************************************************************
[Not connected] >

2. Authenticating (password authentication is used here):

[root@node01 ~]# kinit xichuan/admin
Password for xichuan/admin@XICHUAN.COM: 
[root@node01 ~]#

3. After authentication (access works):

[root@node01 ~]# impala-shell
Starting Impala Shell without Kerberos authentication
Opened TCP connection to node01:21000
Error connecting: TTransportException, TSocket read 0 bytes
Kerberos ticket found in the credentials cache, retrying the connection with a secure transport.
Opened TCP connection to node01:21000
Connected to node01:21000
Server version: impalad version 3.2.0-cdh6.3.2 RELEASE (build 1bb9836227301b839a32c6bc230e35439d5984ac)
***********************************************************************************
Welcome to the Impala shell.
(Impala Shell v3.2.0-cdh6.3.2 (1bb9836) built on Fri Nov  8 07:22:06 PST 2019)

To see live updates on a query's progress, run 'set LIVE_SUMMARY=1;'.
***********************************************************************************
[node01:21000] default> 
[node01:21000] default> select * from  kudu_test_dws.test_table2;
Query: select * from  kudu_test_dws.test_table2
Query submitted at: 2022-10-26 16:29:26 (Coordinator: http://node01:25000)
Query progress can be monitored at: http://node01:25000/query_plan?query_id=af489e22bc4c3f46:c0d08c6100000000
+----------------------------------+-------------+-------------+------------+----------------+------------+---------------+----------------+------------------+---------------+---------------+----------------+---------------+-----------------+-------------------+----------------+------+---------+---------+-----------+----------+------------+
| id                               | range_month | total_count | good_count | total_td_count | gross_time | net_test_time | good_test_time | good_td_sequence | start_time    | end_time      | lot_start_time | lot_end_time  | handler_id      | loadboard         | loadboard_type | year | month   | qualter | work_week | week_day | run_date   |
+----------------------------------+-------------+-------------+------------+----------------+------------+---------------+----------------+------------------+---------------+---------------+----------------+---------------+-----------------+-------------------+----------------+------+---------+---------+-----------+----------+------------+
| 55e9b624ab7ecc10b4595c791f49917a | 8           | 226         | 57         | 205            | 1838000    | 1191436       | 591308         | 52               | 1628291104000 | 1628292942000 | 1628221346000  | 1628292942000 | PTSK-O-011-UF30 | V93-G12BX2-PH04/2 |                | 2021 | 2021-08 | 2021-03 | 2021-31   | 6        | 2021-08-07 |
| d802906d9817897f54383a492e7464b6 | 8           | 250         | 76         | 216            | 1786000    | 1304322       | 706299         | 62               | 1628750593000 | 1628752379000 | 1628750593000  | 1628752379000 | PTSK-O-011-UF30 | V93-G12BX2-PH04/2 |                | 2021 | 2021-08 | 2021-03 | 2021-32   | 4        | 2021-08-12 |
| d9f613e8c6924c77493304d265ca9fa6 | 8           | 1344        | 1137       | 690            | 8890000    | 7728323       | 5845658        | 502              | 1628230300000 | 1628239190000 | 1628230300000  | 1628294556000 | PTSK-O-011-UF30 | V93-G12BX2-PH04/2 |                | 2021 | 2021-08 | 2021-03 | 2021-31   | 5        | 2021-08-06 |
+----------------------------------+-------------+-------------+------------+----------------+------------+---------------+----------------+------------------+---------------+---------------+----------------+---------------+-----------------+-------------------+----------------+------+---------+---------+-----------+----------+------------+
Fetched 3 row(s) in 0.39s
[node01:21000] default> 
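
Note that in the transcript above, impala-shell still starts without Kerberos and only retries with a secure transport after it finds the ticket cache. The handshake can be requested explicitly with the -k flag; a small sketch:

# -k enables Kerberos, -i names the impalad to connect to
impala-shell -k -i node01:21000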


3.4 Troubleshooting

3.4.1 HDFS commands fail with an error

Error message: AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]

[root@node01 ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: root/admin@XICHUAN.COM

Valid starting       Expires              Service principal
10/26/2022 15:19:15  10/27/2022 15:19:15  krbtgt/XICHUAN.COM@XICHUAN.COM
	renew until 11/02/2022 15:19:15
[root@node01 ~]#
[root@node01 ~]# hadoop fs -ls /
2021-03-28 20:23:27,667 WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
ls: DestHost:destPort master01:8020 , LocalHost:localPort master01/192.xx.xx:0. Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]

Solution:
Even though klist shows a valid ticket, Hadoop's Java client cannot read tickets from the kernel KEYRING credential cache. Edit the Kerberos configuration file /etc/krb5.conf and comment out the default_ccache_name property; then run kdestroy and kinit again.
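
A minimal sketch of the fix (the sed line merely comments the property out; editing the file by hand works just as well):

# /etc/krb5.conf: comment out the default_ccache_name line
sed -i 's/^\( *default_ccache_name\)/#\1/' /etc/krb5.conf
kdestroy                  # discard the old keyring-based cache
kinit xichuan/admin       # authenticate again; the ticket now lands in /tmp/krb5cc_0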
3.5 Accessing Impala in a Kerberos environment from DBeaver

3.5.1 Install the kfw (MIT Kerberos for Windows) client

Download: https://web.mit.edu/kerberos/dist/index.html

The installation itself is a standard click-through wizard; the only things to watch out for:

Do not reboot when prompted after the install (you could, but it is not necessary yet), and do not open the program yet!

3.5.2 Edit C:\ProgramData\MIT\Kerberos5\krb5.ini

On startup, kfw reads its configuration from C:\ProgramData\MIT\Kerberos5\krb5.ini, which must be made identical to the configuration used in the cluster.

1. Log in to the machine in your cluster that runs the krb5kdc and kadmin services, and copy the configuration from /etc/krb5.conf:

vim /etc/krb5.conf

[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 rdns = false
 pkinit_anchors = FILE:/etc/pki/tls/certs/ca-bundle.crt
 default_realm = XICHUAN.COM
 # default_ccache_name = KEYRING:persistent:%{uid}

[realms]
 XICHUAN.COM = {
  kdc = node02
  admin_server = node02
 }

[domain_realm]
 .example.com = XICHUAN.COM
 example.com = XICHUAN.COM
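
Because the [realms] section refers to the KDC by hostname, Windows must also be able to resolve node02; a sketch of the required hosts entry (the IP address here is an assumption, use your KDC's real address):

# C:\Windows\System32\drivers\etc\hosts
192.168.1.231  node02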

2. Apply the same settings to C:\ProgramData\MIT\Kerberos5\krb5.ini.

3.5.3 Set the environment variables

DBeaver reads the KRB5CCNAME environment variable to locate kfw's credential cache.

1. Create a temp folder on the C: drive.

2. Add the environment variables:

KRB5CCNAME=C:\temp\krb5cache

KRB5_CONFIG=C:\ProgramData\MIT\Kerberos5\krb5.ini
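
Equivalently, both variables can be set from a command prompt with the built-in setx command; note that setx only affects newly started processes, so restart DBeaver afterwards:

setx KRB5CCNAME C:\temp\krb5cache
setx KRB5_CONFIG C:\ProgramData\MIT\Kerberos5\krb5.ini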

3. Launch kfw and log in.

Once you have confirmed that login works, restart Windows.

After the reboot, a new file, the credential cache, appears in C:\temp.

4. Check the login status in kfw.

3.5.4 Edit the DBeaver configuration file and the connection settings

1. Locate the dbeaver.ini file in the DBeaver installation directory.

2. Append the following to the end of dbeaver.ini:

-Djavax.security.auth.useSubjectCredsOnly=false

-Djava.security.krb5.conf=C:\ProgramData\MIT\Kerberos5\krb5.ini

-Dsun.security.krb5.debug=true

Important:

-Djava.security.krb5.conf=C:\ProgramData\MIT\Kerberos5\krb5.ini

must not be wrapped in quotation marks!

3. Log in to Impala from DBeaver

Click "Edit Driver Settings".

Change the URL template to:

jdbc:impala://{host}:{port}/{database};AuthMech=1;KrbRealm=XICHUAN.COM;KrbHostFQDN={host};KrbServiceName=impala;KrbAuthType=2

Add the Impala JDBC driver jar.

Impala JDBC download: https://mvnrepository.com/artifact/com.cloudera/ImpalaJDBC41/2.6.3

Test the connection.

Done! You can now happily write SQL against Impala in DBeaver.

3.6 Accessing the HDFS/YARN/HiveServer2 web UIs from Windows

Reference: https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/cdh_sg_browser_access_kerberos_protected_url.html#topic_6_2__section_vj5_gwv_ls

3.6.1 Enable web UI authentication for HDFS/YARN/HiveServer2

1. Change the configuration in CDH:

HDFS: check "Enable Kerberos Authentication for HTTP Web-Consoles" so that it takes effect.

YARN: check "Enable Kerberos Authentication for HTTP Web-Consoles" so that it takes effect.

Hive: set hive.server2.webui.use.spnego=true.

2. Restart CDH.

3. Visit the corresponding pages.
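
Incidentally, from a Linux node that already holds a valid ticket, the SPNEGO setup can be checked from the command line; a sketch with curl (it assumes a curl build with GSSAPI support, and the default CDH6 web ports, which you should verify against your configuration):

curl --negotiate -u : http://node01:9870/    # HDFS NameNode UI
curl --negotiate -u : http://node01:8088/    # YARN ResourceManager UI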

Accessing the HDFS web UI

Accessing the YARN web UI

Accessing the Hive web UI

3.6.2 Access the pages again after configuring the browser and authenticating

Note: only Firefox is set up here; for other browsers, follow the reference link at the start of this section.

1. Change the Firefox configuration.

Because the Kerberos-related settings in Chrome are fairly involved, Firefox is recommended. Open Firefox, enter about:config in the address bar, then search for and set the following two parameters:
network.auth.use-sspi: change the value to false;
network.negotiate-auth.trusted-uris: set the value to the cluster nodes' IPs or hostnames.

2. Launch kfw and log in.

3. Visit the pages again.

Accessing the HDFS web UI

Accessing the YARN web UI

Accessing the Hive web UI

3.7 Authentication in code

3.7.1 Authentication in Spark programs

Running locally on Windows:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.security.UserGroupInformation

  val krb5ConfPath = "D:\\development\\license_dll\\krb5.conf"
  val keyTabPath = "D:\\development\\license_dll\\xichuan.keytab"
  val principle = "xichuan/admin@XICHUAN.COM"

  def kerberosAuth(krb5ConfPath: String, keytabPath: String, principle: String): Unit = {
    // point the JVM at the same krb5.conf that the cluster uses
    System.setProperty("java.security.krb5.conf", krb5ConfPath)

    val conf = new Configuration
    //conf.addResource(new Path("C:\\Users\\xichuan\\Desktop\\hive-conf\\hive-site.xml"))
    conf.set("hadoop.security.authentication", "Kerberos")
    conf.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem")
    UserGroupInformation.setConfiguration(conf)

    // log in from the keytab and check the result
    val loginInfo = UserGroupInformation.loginUserFromKeytabAndReturnUGI(principle, keytabPath)
    if (loginInfo.hasKerberosCredentials) {
      println("kerberos authentication success!")
      println("login user: " + loginInfo.getUserName())
    } else {
      println("kerberos authentication fail!")
    }
  }

Running on the cluster with spark-submit:

##1. Option one: kinit a user locally on the node (the ticket will eventually expire)

##2. Option two: add these flags to spark-submit
--principal xichuan/admin@XICHUAN.COM \
--keytab /opt/xichuan.keytab \

The submit command:

spark-submit \
--master yarn \
--deploy-mode cluster \
--conf spark.sql.shuffle.partitions=200 \
--principal xichuan/admin@XICHUAN.COM \
--keytab /opt/xichuan.keytab \
--num-executors 1 \
--executor-memory 2G  \
--executor-cores 1 \
--queue root.default \
--class com.xichuan.dev.TestSparkHive /opt/xichuan/spark-test.jar

3.7.2 Authentication in Flink programs

flink-session:

$ vim flink-conf.yaml
security.kerberos.login.use-ticket-cache: true
security.kerberos.login.keytab: /opt/xichuan.keytab
security.kerberos.login.principal: xichuan/admin@XICHUAN.COM
security.kerberos.login.contexts: Client

// Receive data from Kafka and write it to HDFS; the path is governed by Kerberos + Ranger permissions
import java.util.concurrent.TimeUnit
import org.apache.flink.api.common.serialization.SimpleStringEncoder
import org.apache.flink.core.fs.Path
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink
import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.DefaultRollingPolicy

val sink: StreamingFileSink[String] = StreamingFileSink
  .forRowFormat(new Path("hdfs://temp/xichuan/kerberos.test"), new SimpleStringEncoder[String]("UTF-8"))
  .withRollingPolicy(
    DefaultRollingPolicy.builder()
      .withRolloverInterval(TimeUnit.SECONDS.toMillis(15))  // roll every 15 s
      .withInactivityInterval(TimeUnit.SECONDS.toMillis(5)) // or after 5 s of inactivity
      .withMaxPartSize(1024 * 1024 * 1024)                  // or at 1 GiB per part file
      .build())
  .build()
sinkstream.addSink(sink)

3.7.3 Authentication for connecting to Impala from Java

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

import java.io.IOException;
import java.security.PrivilegedAction;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
/**
 * @Author Xichuan
 * @Date 2022/10/28 17:53
 * @Description
 */
public class TestKerberosImpala {
        public static final String KRB5_CONF = "D:\\development\\license_dll\\krb5.conf";
        public static final String PRINCIPAL = "xichuan/admin@XICHUAN.COM";
        public static final String KEYTAB = "D:\\development\\license_dll\\xichuan.keytab";
        public static String connectionUrl = "jdbc:impala://node01:21050/;AuthMech=1;KrbRealm=XICHUAN.COM;KrbHostFQDN=node01;KrbServiceName=impala";
        public static String jdbcDriverName = "com.cloudera.impala.jdbc41.Driver";

        public static void main(String[] args) throws Exception {
            UserGroupInformation loginUser = kerberosAuth(KRB5_CONF,KEYTAB,PRINCIPAL);

            int result = loginUser.doAs((PrivilegedAction<Integer>) () -> {
                int result1 = 0;
                try {
                    Class.forName(jdbcDriverName);
                } catch (ClassNotFoundException e) {
                    e.printStackTrace();
                }
                try (Connection con = DriverManager.getConnection(connectionUrl)) {
                    Statement stmt = con.createStatement();
                    ResultSet rs = stmt.executeQuery("SELECT count(1) FROM test_dws.dws_test_id");
                    while (rs.next()) {
                        result1 = rs.getInt(1);
                    }
                    stmt.close();
                    con.close();
                } catch (Exception e) {
                    e.printStackTrace();
                }
                return result1;
            });
            System.out.println("count: "+ result);
        }

    /**
     * kerberos authentication
     * @param krb5ConfPath
     * @param keyTabPath
     * @param principle
     * @return
     * @throws IOException
     */
        public static UserGroupInformation kerberosAuth(String krb5ConfPath, String keyTabPath, String principle) throws IOException {
            System.setProperty("java.security.krb5.conf", krb5ConfPath);
            Configuration conf = new Configuration();
            conf.set("hadoop.security.authentication", "Kerberos");
            UserGroupInformation.setConfiguration(conf);
            UserGroupInformation loginInfo = UserGroupInformation.loginUserFromKeytabAndReturnUGI(principle, keyTabPath);


            if (loginInfo.hasKerberosCredentials()) {
                System.out.println("kerberos authentication success!");
                System.out.println("login user: "+loginInfo.getUserName());
            } else {
                System.out.println("kerberos authentication fail!");
            }

            return loginInfo;
        }
}

3.7.4 Spring Boot: Kerberos-authenticated Impala access through a HikariCP connection pool

Project and documentation: https://github.com/Raray-chuan/springboot-kerberos-hikari-impala

4. Disabling Kerberos on CDH

There is no button for disabling Kerberos, so the configuration of every component that had Kerberos enabled must be changed back by hand. Proceed as follows:

1. Stop the cluster first (if YARN still has running jobs, wait for them to finish or stop them manually).

2. Change the ZooKeeper configuration:
Uncheck enableSecurity and Enable Kerberos Authentication, and add skipACL: yes to the zoo.cfg Server advanced configuration snippet (safety valve).

3. Change the HDFS configuration:
Set hadoop.security.authentication to simple, uncheck hadoop.security.authorization, change dfs.datanode.address from 1004 back to 50010, change dfs.datanode.http.address from 1006 back to 50075, and change dfs.datanode.data.dir.perm from 700 back to 755.

4. Change the Kudu configuration:
Uncheck enable_security.

5. Change the Kafka configuration:
Uncheck kerberos.auth.enable.

6. Change the HBase configuration (if this component is installed):
Set hbase.security.authentication to simple, uncheck hbase.security.authorization, and set hbase.thrift.security.qop to none.

7. Change the Solr configuration (if installed):
Set Solr Secure Authentication to simple.

8. Change the Hue configuration (if installed):
On the Hue instances page, stop the "Kerberos Ticket Renewer" role and then delete it from the Hue instance.

9. Apply whichever of the steps above match the components you actually run, and skip the rest. Make sure every Kerberos-related setting is removed, otherwise the Administration -> Security page in CM will still show Kerberos as enabled.

10. After the Kerberos settings have been changed, Cloudera Management Service will report stale configuration and ask for a restart. Restart Cloudera Management Service; the Administration -> Security page will then show Kerberos as disabled.

11. Start ZooKeeper, connect with the client, and delete the following paths:

rmr /rmstore/ZKRMStateRoot
rmr /hbase
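
A sketch of the full sequence (zookeeper-client is the client wrapper shipped with CDH; on newer ZooKeeper releases rmr is deprecated in favor of deleteall, so use whichever your version accepts):

zookeeper-client -server node01:2181
rmr /rmstore/ZKRMStateRoot
rmr /hbase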

Then remove the skipACL: yes entry from the zoo.cfg Server advanced configuration snippet (safety valve) and restart ZooKeeper.

12. Start all services. Disabling Kerberos is complete.
