
【Flink-HDFS】Call From * to * failed on connection exception: java.net.ConnectException: Connection refused


Error running the WordCount example in Flink-Hadoop distributed cluster mode

A record of troubleshooting an error that occurred while running the WordCount example in Flink-Hadoop distributed cluster mode.

Command and error messages

Command

flink run /usr/local/flink/flink-1.7.2/examples/batch/WordCount.jar --input hdfs://192.168.87.135:8020/usr/d0228/1.input --output hdfs://192.168.87.135:8020/usr/d0228/1.output

This runs Flink, reading an input file from HDFS and writing the result back to HDFS.

Error messages

The key error lines are:

  • Could not retrieve the execution result.
  • Could not set up JobManager
  • Call From localhost-node1/192.168.87.133 to localhost-node2:8020 failed on connection exception: java.net.ConnectException: 拒絕連接; ("Connection refused", printed by a Chinese-locale JVM)

The third message means the node1 VM cannot connect to the Hadoop NameNode on that port.
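Before changing anything, it is worth confirming which address and port the NameNode actually listens on. The commands below are a sketch; they assume a live Hadoop cluster, and the IP and ports match my setup, so adjust them for yours:

```shell
# Ask Hadoop which NameNode RPC endpoint the client configuration points to
hdfs getconf -confKey fs.defaultFS

# On the NameNode host, check what is actually listening on the candidate ports
ss -tlnp | grep -E '8020|9000'

# From the Flink node, test raw TCP connectivity to the NameNode port
nc -zv 192.168.87.135 9000
```

If `nc` reports "Connection refused" for 8020 but succeeds for 9000, the URI in the `flink run` command is simply using the wrong port.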

Solution

Check the port number

Open the core-site.xml file under etc/hadoop in the Hadoop installation directory. (The original post showed a screenshot of the file here.) The NameNode port I had actually configured was 9000, not 8020, so changing 8020 to 9000 in the command fixed the problem.
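For reference, the relevant core-site.xml entry looks like the fragment below. The host and port mirror my cluster and are assumptions for anyone else's setup; Flink must use exactly this host:port in its hdfs:// URIs:

```xml
<configuration>
  <!-- NameNode RPC endpoint; hdfs:// paths passed to Flink must match this host:port -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.87.135:9000</value>
  </property>
</configuration>
```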

Corrected command:
flink run /usr/local/flink/flink-1.7.2/examples/batch/WordCount.jar --input hdfs://192.168.87.135:9000/usr/d0228/1.input --output hdfs://192.168.87.135:9000/usr/d0228/1.output

Disable HDFS permission checking

Open the hdfs-site.xml file under etc/hadoop in the Hadoop installation directory and add the property shown in the original post's screenshot; it appears to relate to HDFS file permissions. Remember to distribute the modified file to all VMs so the configuration stays consistent across the cluster.
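The screenshot with the exact property is not preserved here; the setting that disables HDFS permission checking is typically the one below (dfs.permissions.enabled in Hadoop 2.x/3.x; older releases call it dfs.permissions). This is acceptable for a local test cluster but not recommended in production:

```xml
<configuration>
  <!-- Turns off HDFS permission checks so any user can write job output.
       Fine for a throwaway test cluster; do not do this in production. -->
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>
</configuration>
```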

After these two changes, the WordCount example ran successfully and the output file appeared on HDFS.
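To confirm the result, the output can be listed and printed directly from HDFS. These commands assume the paths used in the command above and a running cluster:

```shell
# List the job directory and print the word-count output
hdfs dfs -ls /usr/d0228
hdfs dfs -cat /usr/d0228/1.output
```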

Full error output

Below is the full error output:

[root@localhost-node1 bin]# flink run /usr/local/flink/flink-1.7.2/examples/batch/WordCount.jar --input hdfs://192.168.87.135:8020/usr/d0228/1.input --output hdfs://192.168.87.135:8020/usr/d0228/1.output
Starting execution of program

------------------------------------------------------------
 The program finished with the following exception:

org.apache.flink.client.program.ProgramInvocationException: Could not retrieve the execution result. (JobID: 42c079d08bb4bce07e65f31c0b090baf)
	at org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:261)
	at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:487)
	at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:475)
	at org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:62)
	at org.apache.flink.examples.java.wordcount.WordCount.main(WordCount.java:85)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
	at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
	at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:427)
	at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:813)
	at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:287)
	at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
	at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1050)
	at org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1126)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
	at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
	at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1126)
Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
	at org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$8(RestClusterClient.java:380)
	at java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:884)
	at java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:866)
	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
	at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)
	at org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$5(FutureUtils.java:203)
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
	at java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:575)
	at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:943)
	at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.runtime.rest.util.RestClientException: [Internal server error., <Exception on server side:
org.apache.flink.runtime.client.JobSubmissionException: Failed to submit job.
	at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$submitJob$2(Dispatcher.java:267)
	at java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:884)
	at java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:866)
	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
	at java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:575)
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:753)
	at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:332)
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:158)
	at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:70)
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.onReceive(AkkaRpcActor.java:142)
	at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.onReceive(FencedAkkaRpcActor.java:40)
	at akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:165)
	at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
	at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
	at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
	at akka.actor.ActorCell.invoke(ActorCell.scala:495)
	at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
	at akka.dispatch.Mailbox.run(Mailbox.scala:224)
	at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
	at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
	at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
	at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.RuntimeException: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
	at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:36)
	at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
	at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:39)
	at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:415)
	... 4 more
Caused by: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
	at org.apache.flink.runtime.jobmaster.JobManagerRunner.<init>(JobManagerRunner.java:176)
	at org.apache.flink.runtime.dispatcher.Dispatcher$DefaultJobManagerRunnerFactory.createJobManagerRunner(Dispatcher.java:1058)
	at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$createJobManagerRunner$5(Dispatcher.java:308)
	at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:34)
	... 7 more
Caused by: org.apache.flink.runtime.client.JobExecutionException: Cannot initialize task 'DataSink (CsvOutputFormat (path: hdfs://192.168.87.135:8020/usr/d0228/1.output, delimiter:  ))': Call From localhost-node1/192.168.87.133 to localhost-node2:8020 failed on connection exception: java.net.ConnectException: 拒絕連接; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:220)
	at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:100)
	at org.apache.flink.runtime.jobmaster.JobMaster.createExecutionGraph(JobMaster.java:1173)
	at org.apache.flink.runtime.jobmaster.JobMaster.createAndRestoreExecutionGraph(JobMaster.java:1153)
	at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:296)
	at org.apache.flink.runtime.jobmaster.JobManagerRunner.<init>(JobManagerRunner.java:157)
	... 10 more
Caused by: java.net.ConnectException: Call From localhost-node1/192.168.87.133 to localhost-node2:8020 failed on connection exception: java.net.ConnectException: 拒絕連接; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
	at org.apache.hadoop.ipc.Client.call(Client.java:1480)
	at org.apache.hadoop.ipc.Client.call(Client.java:1413)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy34.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:776)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy35.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2117)
	at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
	at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
	at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1425)
	at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.exists(HadoopFileSystem.java:152)
	at org.apache.flink.core.fs.FileSystem.initOutPathDistFS(FileSystem.java:886)
	at org.apache.flink.api.common.io.FileOutputFormat.initializeGlobal(FileOutputFormat.java:286)
	at org.apache.flink.runtime.jobgraph.OutputFormatVertex.initializeOnMaster(OutputFormatVertex.java:89)
	at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:216)
	... 15 more
Caused by: java.net.ConnectException: 拒絕連接
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:715)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:615)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:713)
	at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
	at org.apache.hadoop.ipc.Client.call(Client.java:1452)
	... 37 more

End of exception on server side>]
	at org.apache.flink.runtime.rest.RestClient.parseResponse(RestClient.java:380)
	at org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$3(RestClient.java:364)
	at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:966)
	at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:940)
	... 4 more

