
[flink1.14.4] Unable to create a source for reading table 'default_catalog.default_database.new_buyer_trade_order2'

This article documents the "Unable to create a source for reading table 'default_catalog.default_database.new_buyer_trade_order2'" error that appeared after upgrading to Flink 1.14.4, along with the fix. Corrections and suggestions are welcome.

After upgrading to Flink 1.14.4, the job fails to submit with the following error:

Caused by: org.apache.flink.table.api.ValidationException: Unable to create a source for reading table 'default_catalog.default_database.new_buyer_trade_order2'

2022-03-11 16:45:04,169 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: metrics.reporter.influxdb.class=org.apache.flink.metrics.influxdb.InfluxdbReporter
2022-03-11 16:45:04,170 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: state.backend.rocksdb.ttl.compaction.filter.enabled=true
2022-03-11 16:45:04,171 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: metrics.reporter.influxdb.db=flink
2022-03-11 16:45:04,171 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: metrics.reporter.influxdb.host=192.168.5.57
2022-03-11 16:45:04,171 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: web.timeout=120000
2022-03-11 16:45:04,171 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: akka.ask.timeout=120 s
2022-03-11 16:45:04,171 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: env.java.opts=-verbose:gc -XX:NewRatio=3 -XX:+UseConcMarkSweepGC -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:ParallelGCThreads=4 -Duser.timezone=Asia/Shanghai
2022-03-11 16:45:04,171 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: metrics.reporter.influxdb.port=8086
2022-03-11 16:45:04,171 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: akka.watch.heartbeat.interval=10 s
2022-03-11 16:45:04,172 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: slotmanager.taskmanager-timeout=600000
2022-03-11 16:45:04,172 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: containerized.heap-cutoff-min=100
CREATE TABLE statement parsed successfully! connector: upsert-kafka
Set Kafka timestamp: upsert-kafka
CREATE TABLE statement parsed successfully! connector: print
hive conf path:/etc/ecm/hive-conf
create iceberg catalog success!
start to run sql:CREATE TABLE new_buyer_trade_order2 (
  database VARCHAR,  `table`  VARCHAR,  type  VARCHAR,  ts   BIGINT,  xid   BIGINT,  xoffset  BIGINT,  data  VARCHAR,    `old`   VARCHAR      
) WITH (
'value.format' = 'json',
'key.format' = 'json',
'properties.bootstrap.servers' = '192.168.8.142:9092,192.168.8.141:9092,192.168.8.143:9092',
'connector' = 'upsert-kafka',
'topic' = 'new_buyer_trade_order2')
start to run sql:create table result_print(   database VARCHAR,  `table`  VARCHAR,  type  VARCHAR,  ts   BIGINT,  xid   BIGINT,  xoffset  BIGINT,  data  VARCHAR,    `old`   VARCHAR )with(     'connector' = 'print' )
start to run sql:insert into result_print select      database,  `table`,  type,  ts,  xid,  xoffset,  database,    `old` from new_buyer_trade_order2

------------------------------------------------------------
 The program finished with the following exception:

org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: Unable to create a source for reading table 'default_catalog.default_database.new_buyer_trade_order2'.

Table options are:

'connector'='upsert-kafka'
'key.format'='json'
'properties.bootstrap.servers'='192.168.8.142:9092,192.168.8.141:9092,192.168.8.143:9092'
'topic'='new_buyer_trade_order2'
'value.format'='json'
	at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:372)
	at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222)
	at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114)
	at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:812)
	at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:246)
	at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1054)
	at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1132)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
	at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
	at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1132)
Caused by: org.apache.flink.table.api.ValidationException: Unable to create a source for reading table 'default_catalog.default_database.new_buyer_trade_order2'.

Table options are:

'connector'='upsert-kafka'
'key.format'='json'
'properties.bootstrap.servers'='192.168.8.142:9092,192.168.8.141:9092,192.168.8.143:9092'
'topic'='new_buyer_trade_order2'
'value.format'='json'
	at org.apache.flink.table.factories.FactoryUtil.createTableSource(FactoryUtil.java:150)
	at org.apache.flink.table.planner.plan.schema.CatalogSourceTable.createDynamicTableSource(CatalogSourceTable.java:116)
	at org.apache.flink.table.planner.plan.schema.CatalogSourceTable.toRel(CatalogSourceTable.java:82)
	at org.apache.calcite.sql2rel.SqlToRelConverter.toRel(SqlToRelConverter.java:3585)
	at org.apache.calcite.sql2rel.SqlToRelConverter.convertIdentifier(SqlToRelConverter.java:2507)
	at org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2144)
	at org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2093)
	at org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2050)
	at org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:663)
	at org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:644)
	at org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3438)
	at org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:570)
	at org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:177)
	at org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:169)
	at org.apache.flink.table.planner.operations.SqlToOperationConverter.toQueryOperation(SqlToOperationConverter.java:1057)
	at org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlQuery(SqlToOperationConverter.java:1026)
	at org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:301)
	at org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlInsert(SqlToOperationConverter.java:639)
	at org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:290)
	at org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:101)
	at org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlUpdate(TableEnvironmentImpl.java:834)
	at com.gegejia.flink.FlinkJobBoot.main(FlinkJobBoot.java:95)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:355)
	... 11 more
Caused by: org.apache.flink.table.api.ValidationException: 'upsert-kafka' tables require to define a PRIMARY KEY constraint. The PRIMARY KEY specifies which columns should be read from or write to the Kafka message key. The PRIMARY KEY also defines records in the 'upsert-kafka' table should update or delete on which keys.
	at org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.validatePKConstraints(UpsertKafkaDynamicTableFactory.java:261)
	at org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.validateSource(UpsertKafkaDynamicTableFactory.java:219)
	at org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.createDynamicTableSource(UpsertKafkaDynamicTableFactory.java:119)
	at org.apache.flink.table.factories.FactoryUtil.createTableSource(FactoryUtil.java:147)
	... 37 more

The root cause: the source table's DDL was missing a PRIMARY KEY. After uncommenting the PRIMARY KEY definition, the job submitted successfully.
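For reference, a corrected DDL might look like the sketch below. An 'upsert-kafka' table requires a PRIMARY KEY ... NOT ENFORCED constraint, which tells the connector which columns form the Kafka message key; the choice of xid as the key column here is only an assumption for illustration, so use whichever column(s) actually make up the message key in your topic.

CREATE TABLE new_buyer_trade_order2 (
  database VARCHAR,
  `table`  VARCHAR,
  type     VARCHAR,
  ts       BIGINT,
  xid      BIGINT,
  xoffset  BIGINT,
  data     VARCHAR,
  `old`    VARCHAR,
  -- upsert-kafka requires a primary key; xid is an assumed key column, adjust to your data
  PRIMARY KEY (xid) NOT ENFORCED
) WITH (
  'connector' = 'upsert-kafka',
  'topic' = 'new_buyer_trade_order2',
  'properties.bootstrap.servers' = '192.168.8.142:9092,192.168.8.141:9092,192.168.8.143:9092',
  'key.format' = 'json',
  'value.format' = 'json'
);

With a primary key declared, the same INSERT INTO result_print statement is accepted and the job submits without the ValidationException.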
