
Permission issues with the DataGrip and Beeline clients in Hive


Hadoop and Hive were configured with Ranger and Kerberos, and today I wanted to test the resulting permissions.
Testing the xwq user:
1. First, create the xwq user. Commands:

# Create the OS user and add it to the hadoop group
useradd xwq -G hadoop
echo xwq | passwd --stdin xwq
# Grant passwordless sudo (a single NOPASSWD: tag is the correct sudoers syntax)
echo 'xwq ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers
# Create the Kerberos principal and export its keytab
kadmin -padmin/admin -wNTVfPQY9kNs6 -q"addprinc -randkey xwq"
kadmin -padmin/admin -wNTVfPQY9kNs6 -q"xst -k /etc/security/keytab/xwq.keytab xwq"
chown xwq:hadoop /etc/security/keytab/xwq.keytab
chmod 660 /etc/security/keytab/xwq.keytab
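
Since the principal was created with -randkey, authentication can also be done non-interactively with the exported keytab rather than a password. A minimal sketch, using the keytab path created above:

# keytab-based login, no password prompt
kinit -kt /etc/security/keytab/xwq.keytab xwq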

2. Authenticate:

[root@hadoop102 keytab]# kinit xwq
Password for xwq@EXAMPLE.COM: 
[root@hadoop102 keytab]# klist 
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: xwq@EXAMPLE.COM

Valid starting       Expires              Service principal
07/01/2023 10:09:21  07/02/2023 10:09:21  krbtgt/EXAMPLE.COM@EXAMPLE.COM
	renew until 07/08/2023 10:09:21

3. Connect with the Beeline client:

[root@hadoop102 ~]# beeline 
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/module/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/ha/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Beeline version 3.1.2 by Apache Hive
beeline> !connect jdbc:hive2://hadoop102:10000/;principal=hive/hadoop102@EXAMPLE.COM
Connecting to jdbc:hive2://hadoop102:10000/;principal=hive/hadoop102@EXAMPLE.COM
Connected to: Apache Hive (version 3.1.2)
Driver: Hive JDBC (version 3.1.2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://hadoop102:10000/> select current_user();
INFO  : Compiling command(queryId=hive_20230701095227_419c1fe7-2f6b-47af-828c-bcf67fd6043a): select current_user()
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Semantic Analysis Completed (retrial = false)
INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:_c0, type:string, comment:null)], properties:null)
INFO  : Completed compiling command(queryId=hive_20230701095227_419c1fe7-2f6b-47af-828c-bcf67fd6043a); Time taken: 0.212 seconds
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Executing command(queryId=hive_20230701095227_419c1fe7-2f6b-47af-828c-bcf67fd6043a): select current_user()
INFO  : Completed executing command(queryId=hive_20230701095227_419c1fe7-2f6b-47af-828c-bcf67fd6043a); Time taken: 0.0 seconds
INFO  : OK
INFO  : Concurrency mode is disabled, not creating a lock manager
+------+
| _c0  |
+------+
| xwq  |
+------+
1 row selected (0.301 seconds)
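
For the DataGrip client mentioned in the title, the same Hive JDBC URL applies; a hedged example, assuming Kerberos (krb5.conf and a valid ticket or keytab) is already configured on the DataGrip side:

jdbc:hive2://hadoop102:10000/;principal=hive/hadoop102@EXAMPLE.COM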

4. Run an INSERT statement:

0: jdbc:hive2://hadoop102:10000/> insert into student values(2,'1');
INFO  : Compiling command(queryId=hive_20230701095229_d7d5807d-ff37-4aef-81d5-bc10fd929ebf): insert into student values(2,'1')
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Semantic Analysis Completed (retrial = false)
INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:col1, type:int, comment:null), FieldSchema(name:col2, type:string, comment:null)], properties:null)
INFO  : Completed compiling command(queryId=hive_20230701095229_d7d5807d-ff37-4aef-81d5-bc10fd929ebf); Time taken: 0.318 seconds
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Executing command(queryId=hive_20230701095229_d7d5807d-ff37-4aef-81d5-bc10fd929ebf): insert into student values(2,'1')
INFO  : Query ID = hive_20230701095229_d7d5807d-ff37-4aef-81d5-bc10fd929ebf
INFO  : Total jobs = 1
INFO  : Launching Job 1 out of 1
INFO  : Starting task [Stage-1:MAPRED] in serial mode
ERROR : Job hasn't been submitted after 61s. Aborting it.
Possible reasons include network issues, errors in remote driver or the cluster has no available resources, etc.
Please check YARN or Spark driver's logs for further information.
The timeout is controlled by hive.spark.job.monitor.timeout.
ERROR : FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed during runtime. Please check stacktrace for the root cause.
INFO  : Completed executing command(queryId=hive_20230701095229_d7d5807d-ff37-4aef-81d5-bc10fd929ebf); Time taken: 216.921 seconds
INFO  : Concurrency mode is disabled, not creating a lock manager
Error: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed during runtime. Please check stacktrace for the root cause. (state=42000,code=2)
0: jdbc:hive2://hadoop102:10000/> insert into student values(2,'1');
INFO  : Compiling command(queryId=hive_20230701095708_a92293d5-eb6e-448c-b623-c5c49660ae66): insert into student values(2,'1')
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Semantic Analysis Completed (retrial = false)
INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:col1, type:int, comment:null), FieldSchema(name:col2, type:string, comment:null)], properties:null)
INFO  : Completed compiling command(queryId=hive_20230701095708_a92293d5-eb6e-448c-b623-c5c49660ae66); Time taken: 0.28 seconds
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Executing command(queryId=hive_20230701095708_a92293d5-eb6e-448c-b623-c5c49660ae66): insert into student values(2,'1')
INFO  : Query ID = hive_20230701095708_a92293d5-eb6e-448c-b623-c5c49660ae66
INFO  : Total jobs = 1
INFO  : Launching Job 1 out of 1
INFO  : Starting task [Stage-1:MAPRED] in serial mode
ERROR : Job hasn't been submitted after 61s. Aborting it.
Possible reasons include network issues, errors in remote driver or the cluster has no available resources, etc.
Please check YARN or Spark driver's logs for further information.
The timeout is controlled by hive.spark.job.monitor.timeout.
ERROR : FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed during runtime. Please check stacktrace for the root cause.
INFO  : Completed executing command(queryId=hive_20230701095708_a92293d5-eb6e-448c-b623-c5c49660ae66); Time taken: 181.098 seconds
INFO  : Concurrency mode is disabled, not creating a lock manager
Error: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed during runtime. Please check stacktrace for the root cause. (state=42000,code=2)
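
As the log notes, the 61-second monitor window is controlled by hive.spark.job.monitor.timeout. If a cluster is merely slow to hand out resources, that window can be widened per session; a sketch of the session setting (though here the root cause turned out to be elsewhere):

set hive.spark.job.monitor.timeout=180s;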

The job failed. Watching the YARN web UI afterwards, I could see the job was in fact submitted to YARN and moved from ACCEPTED to RUNNING, but it ultimately failed. The error in the HiveServer2 log was as follows:

2023-07-01T10:16:07,513  INFO [44a5e8c7-dc6f-43f7-8a98-1037e8deffa3 HiveServer2-Handler-Pool: Thread-85] session.SessionState: Resetting thread name to  HiveServer2-Handler-Pool: Thread-85
2023-07-01T10:16:30,717 ERROR [HiveServer2-Background-Pool: Thread-158] client.SparkClientImpl: Timed out waiting for client to connect.
Possible reasons include network issues, errors in remote driver or the cluster has no available resources, etc.
Please check YARN or Spark driver's logs for further information.
java.util.concurrent.ExecutionException: java.util.concurrent.TimeoutException: Timed out waiting for client connection.
	at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:41) ~[netty-all-4.1.17.Final.jar:4.1.17.Final]
	at org.apache.hive.spark.client.SparkClientImpl.<init>(SparkClientImpl.java:106) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hive.spark.client.SparkClientFactory.createClient(SparkClientFactory.java:88) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.createRemoteClient(RemoteHiveSparkClient.java:105) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.<init>(RemoteHiveSparkClient.java:101) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.createHiveSparkClient(HiveSparkClientFactory.java:76) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:87) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:115) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:136) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:115) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2664) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2335) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2011) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1709) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1703) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:224) ~[hive-service-3.1.2.jar:3.1.2]
	at org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87) ~[hive-service-3.1.2.jar:3.1.2]
	at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:316) ~[hive-service-3.1.2.jar:3.1.2]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_361]
	at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_361]
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) ~[hadoop-common-3.1.3.jar:?]
	at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:329) ~[hive-service-3.1.2.jar:3.1.2]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_361]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_361]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_361]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_361]
	at java.lang.Thread.run(Thread.java:750) [?:1.8.0_361]
Caused by: java.util.concurrent.TimeoutException: Timed out waiting for client connection.
	at org.apache.hive.spark.client.rpc.RpcServer$2.run(RpcServer.java:172) ~[hive-exec-3.1.2.jar:3.1.2]
	at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38) ~[netty-all-4.1.17.Final.jar:4.1.17.Final]
	at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:120) ~[netty-all-4.1.17.Final.jar:4.1.17.Final]
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163) ~[netty-all-4.1.17.Final.jar:4.1.17.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403) ~[netty-all-4.1.17.Final.jar:4.1.17.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463) ~[netty-all-4.1.17.Final.jar:4.1.17.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) ~[netty-all-4.1.17.Final.jar:4.1.17.Final]
	... 1 more
2023-07-01T10:16:30,741 ERROR [HiveServer2-Background-Pool: Thread-158] spark.SparkTask: Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark client for Spark session 2c551365-6d3c-458d-8d7c-3c8566d3c802)'
org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create Spark client for Spark session 2c551365-6d3c-458d-8d7c-3c8566d3c802
	at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.getHiveException(SparkSessionImpl.java:215)
	at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:92)
	at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:115)
	at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:136)
	at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:115)
	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205)
	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
	at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2664)
	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2335)
	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2011)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1709)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1703)
	at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157)
	at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:224)
	at org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87)
	at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:316)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
	at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:329)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)

I checked the resources allocated to Spark and the resources available in YARN; both were sufficient, so the cause was not obvious.

后面用sarah用戶(hù)進(jìn)行測(cè)試,發(fā)現(xiàn)job執(zhí)行成功,結(jié)果如下:

INFO  : Concurrency mode is disabled, not creating a lock manager
+--------+
|  _c0   |
+--------+
| sarah  |
+--------+
1 row selected (0.399 seconds)
0: jdbc:hive2://hadoop102:10000/> insert into student values(2,'1');
INFO  : Compiling command(queryId=hive_20230701095037_eb26098a-e4e9-438b-a33b-9bf8b6205d1f): insert into student values(2,'1')
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Semantic Analysis Completed (retrial = false)
INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:col1, type:int, comment:null), FieldSchema(name:col2, type:string, comment:null)], properties:null)
INFO  : Completed compiling command(queryId=hive_20230701095037_eb26098a-e4e9-438b-a33b-9bf8b6205d1f); Time taken: 0.281 seconds
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Executing command(queryId=hive_20230701095037_eb26098a-e4e9-438b-a33b-9bf8b6205d1f): insert into student values(2,'1')
INFO  : Query ID = hive_20230701095037_eb26098a-e4e9-438b-a33b-9bf8b6205d1f
INFO  : Total jobs = 1
INFO  : Launching Job 1 out of 1
INFO  : Starting task [Stage-1:MAPRED] in serial mode
INFO  : Running with YARN Application = application_1688108003994_0006
INFO  : Kill Command = /opt/ha/hadoop/bin/yarn application -kill application_1688108003994_0006
INFO  : Hive on Spark Session Web UI URL: http://hadoop102:32853
INFO  : 
Query Hive on Spark job[0] stages: [0, 1]
INFO  : Spark job[0] status = RUNNING
INFO  : Job Progress Format
CurrentTime StageId_StageAttemptId: SucceededTasksCount(+RunningTasksCount-FailedTasksCount)/TotalTasksCount
INFO  : 2023-07-01 09:51:30,101	Stage-0_0: 0(+1)/1	Stage-1_0: 0/1	
INFO  : 2023-07-01 09:51:33,126	Stage-0_0: 0(+1)/1	Stage-1_0: 0/1	
INFO  : 2023-07-01 09:51:34,131	Stage-0_0: 1/1 Finished	Stage-1_0: 0/1	
INFO  : 2023-07-01 09:51:35,139	Stage-0_0: 1/1 Finished	Stage-1_0: 1/1 Finished	
INFO  : Spark job[0] finished successfully in 8.11 second(s)
INFO  : Starting task [Stage-0:MOVE] in serial mode
INFO  : Loading data to table default.student from hdfs://mycluster/user/hive/warehouse/student/.hive-staging_hive_2023-07-01_09-50-37_765_7507751690563815963-7/-ext-10000
INFO  : Starting task [Stage-2:STATS] in serial mode
INFO  : Completed executing command(queryId=hive_20230701095037_eb26098a-e4e9-438b-a33b-9bf8b6205d1f); Time taken: 57.47 seconds
INFO  : OK
INFO  : Concurrency mode is disabled, not creating a lock manager
No rows affected (57.758 seconds)

不知道為什么sarah用戶(hù)可以成功,但是xwq用戶(hù)失敗了,后面有使用了hdfs進(jìn)行測(cè)試,和xwq用戶(hù)一樣失敗
如下結(jié)果:

INFO  : Concurrency mode is disabled, not creating a lock manager
+-------+
|  _c0  |
+-------+
| hdfs  |
+-------+
1 row selected (0.376 seconds)
0: jdbc:hive2://hadoop102:10000/> insert into student values(2,'1');
INFO  : Compiling command(queryId=hive_20230630172135_4dc05a0d-7783-48ec-a6d7-8b11f81f8f85): insert into student values(2,'1')
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Semantic Analysis Completed (retrial = false)
INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:col1, type:int, comment:null), FieldSchema(name:col2, type:string, comment:null)], properties:null)
INFO  : Completed compiling command(queryId=hive_20230630172135_4dc05a0d-7783-48ec-a6d7-8b11f81f8f85); Time taken: 0.841 seconds
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Executing command(queryId=hive_20230630172135_4dc05a0d-7783-48ec-a6d7-8b11f81f8f85): insert into student values(2,'1')
INFO  : Query ID = hive_20230630172135_4dc05a0d-7783-48ec-a6d7-8b11f81f8f85
INFO  : Total jobs = 1
INFO  : Launching Job 1 out of 1
INFO  : Starting task [Stage-1:MAPRED] in serial mode
ERROR : FAILED: Execution Error, return code 30041 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client for Spark session 98d08d85-f20b-4ce9-8e88-aed333485cb5
INFO  : Completed executing command(queryId=hive_20230630172135_4dc05a0d-7783-48ec-a6d7-8b11f81f8f85); Time taken: 300.171 seconds
INFO  : Concurrency mode is disabled, not creating a lock manager
Error: Error while processing statement: FAILED: Execution Error, return code 30041 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client for Spark session 98d
0: jdbc:hive2://hadoop102:10000/> !quit
Closing: 0: jdbc:hive2://hadoop102:10000/;principal=hive/hadoop102@EXAMPLE.COM

Analysis:
The logs alone did not reveal much, but while looking at the YARN web UI I noticed a fairly obvious error.

The full diagnostic message was:

Diagnostics:	
Application application_1688108003994_0009 failed 1 times (global limit =2; local limit is =1) due to AM Container for appattempt_1688108003994_0009_000001 exited with exitCode: -1000
Failing this attempt.Diagnostics: [2023-07-01 10:10:53.661]Application application_1688108003994_0009 initialization failed (exitCode=255) with output: main : command provided 0
main : run as user is xwq
main : requested yarn user is xwq
User xwq not found
For more detailed output, check the application tracking page: http://hadoop103:8088/cluster/app/application_1688108003994_0009 Then click on links to logs of each attempt.
. Failing the application.
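
The key line is "User xwq not found": YARN's Linux container executor launches containers as the submitting user, so that OS account must exist on every NodeManager host. A quick check, assuming passwordless SSH and this cluster's hostnames:

ssh hadoop103 id xwq
# before the fix this prints: id: xwq: no such user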

As soon as I saw this error I knew the cause; I had run into it before when an MR job failed the same way.

Solution
The xwq account had only been created on hadoop102. Run the following on each of the other nodes:

useradd xwq -G hadoop
echo xwq | passwd --stdin xwq
echo 'xwq ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers
# The principal already exists in the KDC, so this line is only needed once cluster-wide
kadmin -padmin/admin -wNTVfPQY9kNs6 -q"addprinc -randkey xwq"
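
Since only the local OS account is per-node (the Kerberos principal lives centrally in the KDC), the essential step can be pushed from one host; a sketch assuming passwordless root SSH to hadoop103 and hadoop104, the other workers in this cluster:

for host in hadoop103 hadoop104; do
  ssh "$host" "useradd xwq -G hadoop && echo xwq | passwd --stdin xwq"
done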

Then run the job again:

INFO  : Concurrency mode is disabled, not creating a lock manager
+------+
| _c0  |
+------+
| xwq  |
+------+
1 row selected (0.316 seconds)
0: jdbc:hive2://hadoop102:10000/> insert into student values(2,'1');
INFO  : Compiling command(queryId=hive_20230701104001_8b825171-b12d-416a-9044-14e40ce66b4e): insert into student values(2,'1')
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Semantic Analysis Completed (retrial = false)
INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:col1, type:int, comment:null), FieldSchema(name:col2, type:string, comment:null)], properties:null)
INFO  : Completed compiling command(queryId=hive_20230701104001_8b825171-b12d-416a-9044-14e40ce66b4e); Time taken: 0.274 seconds
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Executing command(queryId=hive_20230701104001_8b825171-b12d-416a-9044-14e40ce66b4e): insert into student values(2,'1')
INFO  : Query ID = hive_20230701104001_8b825171-b12d-416a-9044-14e40ce66b4e
INFO  : Total jobs = 1
INFO  : Launching Job 1 out of 1
INFO  : Starting task [Stage-1:MAPRED] in serial mode
INFO  : Running with YARN Application = application_1688108003994_0011
INFO  : Kill Command = /opt/ha/hadoop/bin/yarn application -kill application_1688108003994_0011
INFO  : Hive on Spark Session Web UI URL: http://hadoop104:38576
INFO  : 
Query Hive on Spark job[0] stages: [0, 1]
INFO  : Spark job[0] status = RUNNING
INFO  : Job Progress Format
CurrentTime StageId_StageAttemptId: SucceededTasksCount(+RunningTasksCount-FailedTasksCount)/TotalTasksCount
INFO  : 2023-07-01 10:40:47,412	Stage-0_0: 0(+1)/1	Stage-1_0: 0/1	
INFO  : 2023-07-01 10:40:50,428	Stage-0_0: 0(+1)/1	Stage-1_0: 0/1	
INFO  : 2023-07-01 10:40:51,432	Stage-0_0: 1/1 Finished	Stage-1_0: 1/1 Finished	
INFO  : Spark job[0] finished successfully in 7.05 second(s)
INFO  : Starting task [Stage-0:MOVE] in serial mode
INFO  : Loading data to table default.student from hdfs://mycluster/user/hive/warehouse/student/.hive-staging_hive_2023-07-01_10-40-01_182_6190336792685741127-8/-ext-10000
INFO  : Starting task [Stage-2:STATS] in serial mode
INFO  : Completed executing command(queryId=hive_20230701104001_8b825171-b12d-416a-9044-14e40ce66b4e); Time taken: 50.419 seconds
INFO  : OK
INFO  : Concurrency mode is disabled, not creating a lock manager
No rows affected (50.703 seconds)
0: jdbc:hive2://hadoop102:10000/> 

This time the job succeeds.

As for why the hdfs user failed: YARN forbids the hdfs user from submitting jobs, since service accounts like hdfs are banned from running containers by default.
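
That ban is configured in the Linux container executor's config file; a hedged sketch of the relevant entries (exact path and values vary by distribution):

# container-executor.cfg (location depends on the Hadoop install)
banned.users=hdfs,yarn,mapred,bin   # accounts that may never launch containers
min.user.id=1000                    # also rejects system accounts below this UID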

So when chasing Hive failures, don't rely on the logs alone; the Diagnostics panel on the YARN web UI is well worth checking too.
