
Kafka error while connecting Flink to HBase: java.lang.NoClassDefFoundError: org/apache/kafka/common/utils/ThreadUtils


Preface

This follows on from the previous post, [Flink real-time data warehouse] Requirement 1: user-attribute dimension table processing - Flink CDC from MySQL to HBase, experiment and error analysis: http://t.csdn.cn/bk96r
A day later I re-ran the job that writes to HBase and hit a Kafka error, even though Kafka is not used anywhere in this code. The cause was a Kafka dependency I had added to the project that day for unrelated work, which conflicted with the dependencies this job actually needs.
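A conflict like this usually means that two different copies of kafka-clients (or none at all) ended up on the runtime classpath, so the Debezium-based mysql-cdc connector can no longer resolve org.apache.kafka.common.utils.ThreadUtils. As a sketch only, assuming a Maven project: you can pin a single kafka-clients version in dependencyManagement so every connector agrees on one copy. The version number below is illustrative, not the one from my project; match it to your connector versions.

```xml
<!-- Hypothetical sketch: force one kafka-clients version for the whole build.
     Pick the version your flink-connector-mysql-cdc release was built against. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka-clients</artifactId>
      <version>2.7.0</version> <!-- example version only -->
    </dependency>
  </dependencies>
</dependencyManagement>
```

To see which versions are actually being pulled in, `mvn dependency:tree -Dincludes=org.apache.kafka` prints every org.apache.kafka artifact in the dependency tree.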

Full error output

+--------+
| result |
+--------+
|     OK |
+--------+
1 row in set
[WARN ] 2023-07-23 12:48:34,083(0) --> [main] org.apache.flink.runtime.webmonitor.WebMonitorUtils$LogFileLocation.find(WebMonitorUtils.java:82): Log file environment variable 'log.file' is not set.  
[WARN ] 2023-07-23 12:48:34,088(5) --> [main] org.apache.flink.runtime.webmonitor.WebMonitorUtils$LogFileLocation.find(WebMonitorUtils.java:88): JobManager log files are unavailable in the web dashboard. Log file location not found in environment variable 'log.file' or configuration key 'web.log.path'.  
[WARN ] 2023-07-23 12:48:35,781(1698) --> [Source: TableSourceScan(table=[[default_catalog, default_database, ums_member]], fields=[id, username, phone, status, create_time, gender, birthday, city, job, source_type]) -> NotNullEnforcer(fields=[id]) -> Sink: Collect table sink (1/1)#0] org.apache.flink.runtime.metrics.groups.TaskMetricGroup.getOrAddOperator(TaskMetricGroup.java:154): The operator name Source: TableSourceScan(table=[[default_catalog, default_database, ums_member]], fields=[id, username, phone, status, create_time, gender, birthday, city, job, source_type]) exceeded the 80 characters length limit and was truncated.  
[WARN ] 2023-07-23 12:48:36,481(2398) --> [Legacy Source Thread - Source: TableSourceScan(table=[[default_catalog, default_database, ums_member]], fields=[id, username, phone, status, create_time, gender, birthday, city, job, source_type]) -> NotNullEnforcer(fields=[id]) -> Sink: Collect table sink (1/1)#0] org.apache.kafka.connect.runtime.WorkerConfig.logPluginPathConfigProviderWarning(WorkerConfig.java:420): Variables cannot be used in the 'plugin.path' property, since the property is used by plugin scanning before the config providers that replace the variables are initialized. The raw value 'null' was used for plugin scanning, as opposed to the transformed value 'null', and this may cause unexpected results.  
[ERROR] 2023-07-23 12:48:36,487(2404) --> [debezium-engine] com.ververica.cdc.debezium.internal.Handover.reportError(Handover.java:147): Reporting error:  
java.lang.NoClassDefFoundError: org/apache/kafka/common/utils/ThreadUtils
	at com.ververica.cdc.debezium.internal.FlinkOffsetBackingStore.start(FlinkOffsetBackingStore.java:152)
	at com.ververica.cdc.debezium.internal.FlinkOffsetBackingStore.configure(FlinkOffsetBackingStore.java:71)
	at io.debezium.embedded.EmbeddedEngine.run(EmbeddedEngine.java:690)
	at io.debezium.embedded.ConvertingEngineBuilder$2.run(ConvertingEngineBuilder.java:188)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
Caused by: java.lang.ClassNotFoundException: org.apache.kafka.common.utils.ThreadUtils
	at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
	... 7 more
[WARN ] 2023-07-23 12:48:36,499(2416) --> [Source: TableSourceScan(table=[[default_catalog, default_database, ums_member]], fields=[id, username, phone, status, create_time, gender, birthday, city, job, source_type]) -> NotNullEnforcer(fields=[id]) -> Sink: Collect table sink (1/1)#0] org.apache.flink.runtime.taskmanager.Task.transitionState(Task.java:1074): Source: TableSourceScan(table=[[default_catalog, default_database, ums_member]], fields=[id, username, phone, status, create_time, gender, birthday, city, job, source_type]) -> NotNullEnforcer(fields=[id]) -> Sink: Collect table sink (1/1)#0 (472d9a4f02e261cfd2f115da78d97e03) switched from RUNNING to FAILED with failure cause: java.lang.NoClassDefFoundError: org/apache/kafka/common/utils/ThreadUtils
	at com.ververica.cdc.debezium.internal.FlinkOffsetBackingStore.start(FlinkOffsetBackingStore.java:152)
	at com.ververica.cdc.debezium.internal.FlinkOffsetBackingStore.configure(FlinkOffsetBackingStore.java:71)
	at io.debezium.embedded.EmbeddedEngine.run(EmbeddedEngine.java:690)
	at io.debezium.embedded.ConvertingEngineBuilder$2.run(ConvertingEngineBuilder.java:188)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
Caused by: java.lang.ClassNotFoundException: org.apache.kafka.common.utils.ThreadUtils
	at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
	... 7 more
  
[WARN ] 2023-07-23 12:48:36,581(2498) --> [main] org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.isJobTerminated(CollectResultFetcher.java:215): Failed to get job status so we assume that the job has terminated. Some data might be lost.  
java.lang.IllegalStateException: MiniCluster is not yet running or has already been shut down.
	at org.apache.flink.util.Preconditions.checkState(Preconditions.java:193)
	at org.apache.flink.runtime.minicluster.MiniCluster.getDispatcherGatewayFuture(MiniCluster.java:852)
	at org.apache.flink.runtime.minicluster.MiniCluster.runDispatcherCommand(MiniCluster.java:752)
	at org.apache.flink.runtime.minicluster.MiniCluster.getJobStatus(MiniCluster.java:705)
	at org.apache.flink.runtime.minicluster.MiniClusterJobClient.getJobStatus(MiniClusterJobClient.java:90)
	at org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.isJobTerminated(CollectResultFetcher.java:203)
	at org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:117)
	at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
	at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
	at org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370)
	at org.apache.flink.table.utils.PrintUtils.printAsTableauForm(PrintUtils.java:152)
	at org.apache.flink.table.api.internal.TableResultImpl.print(TableResultImpl.java:160)
	at demo.UserInfo2Hbase.main(UserInfo2Hbase.java:93)
[WARN ] 2023-07-23 12:48:36,582(2499) --> [main] org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.isJobTerminated(CollectResultFetcher.java:215): Failed to get job status so we assume that the job has terminated. Some data might be lost.  
java.lang.IllegalStateException: MiniCluster is not yet running or has already been shut down.
	at org.apache.flink.util.Preconditions.checkState(Preconditions.java:193)
	at org.apache.flink.runtime.minicluster.MiniCluster.getDispatcherGatewayFuture(MiniCluster.java:852)
	at org.apache.flink.runtime.minicluster.MiniCluster.runDispatcherCommand(MiniCluster.java:752)
	at org.apache.flink.runtime.minicluster.MiniCluster.getJobStatus(MiniCluster.java:705)
	at org.apache.flink.runtime.minicluster.MiniClusterJobClient.getJobStatus(MiniClusterJobClient.java:90)
	at org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.isJobTerminated(CollectResultFetcher.java:203)
	at org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.cancelJob(CollectResultFetcher.java:225)
	at org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.close(CollectResultFetcher.java:150)
	at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:108)
	at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
	at org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370)
	at org.apache.flink.table.utils.PrintUtils.printAsTableauForm(PrintUtils.java:152)
	at org.apache.flink.table.api.internal.TableResultImpl.print(TableResultImpl.java:160)
	at demo.UserInfo2Hbase.main(UserInfo2Hbase.java:93)
Exception in thread "main" java.lang.RuntimeException: Failed to fetch next result
	at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:109)
	at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
	at org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370)
	at org.apache.flink.table.utils.PrintUtils.printAsTableauForm(PrintUtils.java:152)
	at org.apache.flink.table.api.internal.TableResultImpl.print(TableResultImpl.java:160)
	at demo.UserInfo2Hbase.main(UserInfo2Hbase.java:93)
Caused by: java.io.IOException: Failed to fetch job execution result
	at org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.getAccumulatorResults(CollectResultFetcher.java:177)
	at org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:120)
	at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
	... 5 more
Caused by: java.util.concurrent.ExecutionException: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
	at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
	at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928)
	at org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.getAccumulatorResults(CollectResultFetcher.java:175)
	... 7 more
Caused by: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
	at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
	at org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:137)
	at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
	at java.util.concurrent.CompletableFuture.uniApplyStage(CompletableFuture.java:628)
	at java.util.concurrent.CompletableFuture.thenApply(CompletableFuture.java:1996)
	at org.apache.flink.runtime.minicluster.MiniClusterJobClient.getJobExecutionResult(MiniClusterJobClient.java:134)
	at org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.getAccumulatorResults(CollectResultFetcher.java:174)
	... 7 more
Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
	at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:138)
	at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:82)
	at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:216)
	at org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:206)
	at org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:197)
	at org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:682)
	at org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:79)
	at org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:435)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:305)
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:212)
	at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:77)
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:158)
	at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
	at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
	at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
	at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
	at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
	at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
	at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
	at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
	at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
	at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
	at akka.actor.ActorCell.invoke(ActorCell.scala:561)
	at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
	at akka.dispatch.Mailbox.run(Mailbox.scala:225)
	at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
	at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
	at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
	at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.NoClassDefFoundError: org/apache/kafka/common/utils/ThreadUtils
	at com.ververica.cdc.debezium.internal.FlinkOffsetBackingStore.start(FlinkOffsetBackingStore.java:152)
	at com.ververica.cdc.debezium.internal.FlinkOffsetBackingStore.configure(FlinkOffsetBackingStore.java:71)
	at io.debezium.embedded.EmbeddedEngine.run(EmbeddedEngine.java:690)
	at io.debezium.embedded.ConvertingEngineBuilder$2.run(ConvertingEngineBuilder.java:188)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
Caused by: java.lang.ClassNotFoundException: org.apache.kafka.common.utils.ThreadUtils
	at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
	... 7 more

Process finished with exit code 1

Flink test code

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tenv = StreamTableEnvironment.create(env);

tenv.executeSql("CREATE TABLE ums_member (\n" +
        "    id          BIGINT,\n" +
        "    username    STRING,\n" +
        "    phone       STRING,\n" +
        "    status      INT,\n" +
        "    create_time TIMESTAMP(3),\n" +
        "    gender      INT,\n" +
        "    birthday    DATE,\n" +
        "    city        STRING,\n" +
        "    job         STRING,\n" +
        "    source_type INT,\n" +
        "    PRIMARY KEY (id) NOT ENFORCED\n" +
        ") WITH (\n" +
        "    'connector' = 'mysql-cdc',\n" +
        "    'hostname' = 'hadoop10',\n" +
        "    'port' = '3306',\n" +
        "    'username' = 'root',\n" +
        "    'password' = '0000',\n" +
        "    'database-name' = 'db1',\n" +
        //"    'scan.startup.mode' = 'latest-offset',\n" +
        "    'scan.incremental.snapshot.enabled' = 'false',\n" +
        "    'table-name' = 'ums_member')").print();
tenv.executeSql("select * from ums_member").print();
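When chasing a NoClassDefFoundError like the one above, it can help to ask the JVM directly where (or whether) it can load the missing class. The small diagnostic below is my own sketch, not part of the original job; the class and method names are invented for illustration.

```java
// Diagnostic sketch: report where the JVM loads a given class from,
// so you can see which jar supplies it, or confirm it is missing.
public class ClasspathProbe {

    /** Returns the jar/location a class is loaded from, "JDK/bootstrap"
     *  for core classes with no CodeSource, or "NOT FOUND" if absent. */
    public static String whereIsClass(String className) {
        try {
            Class<?> clazz = Class.forName(className);
            java.security.CodeSource src =
                    clazz.getProtectionDomain().getCodeSource();
            // Bootstrap-loaded JDK classes have no CodeSource.
            return src == null ? "JDK/bootstrap" : src.getLocation().toString();
        } catch (ClassNotFoundException e) {
            return "NOT FOUND";
        }
    }

    public static void main(String[] args) {
        // The class the Flink CDC / Debezium stack failed to find:
        System.out.println(
                whereIsClass("org.apache.kafka.common.utils.ThreadUtils"));
    }
}
```

Running this inside the same project (or the same IDE run configuration) as the failing job shows whether kafka-clients is on the classpath at all, and from which jar it comes.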

Solution

I commented out the Kafka dependency and re-ran the job, but it still failed with the same error.

After scouring the web, I found a wide variety of proposed fixes. What finally worked for me was simply restarting IDEA 2020, presumably because it re-imported the project and rebuilt the runtime classpath. After the restart the job ran successfully and the data landed in HBase again.
