
Hadoop Learning Notes: HDFS


HDFS (Hadoop Distributed File System)

A distributed storage system.

HDFS supports the storage of massive data sets: hundreds or thousands of machines form a single storage cluster. It runs on low-cost hardware and provides high fault tolerance, high reliability, high scalability, and high throughput, which makes it well suited to applications over large data sets.

Advantages

  • High fault tolerance
  • Well suited to batch processing
  • Well suited to large data sets
  • Streaming file access
  • Can be built on inexpensive commodity machines

Disadvantages

  • Not suited to low-latency data access
  • Not suited to storing large numbers of small files
  • No support for concurrent writers or random file modification
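The small-file limitation comes from the NameNode keeping every file, directory, and block in memory. A commonly cited rule of thumb is roughly 150 bytes of NameNode heap per metadata object (an approximation, not an exact Hadoop constant), so the cost of many small files can be sketched with simple arithmetic:

```java
/**
 * Back-of-the-envelope NameNode heap estimate. The ~150 bytes per
 * file/directory/block figure is a widely quoted rule of thumb,
 * not an exact value.
 */
class NameNodeMemoryEstimate {
    static final long BYTES_PER_OBJECT = 150;

    /** Estimated NameNode heap, in bytes, for the given object counts. */
    static long estimateBytes(long files, long blocks) {
        return (files + blocks) * BYTES_PER_OBJECT;
    }

    public static void main(String[] args) {
        // 10 million one-block files: (10M files + 10M blocks) * 150 B ~= 3 GB of heap
        long bytes = estimateBytes(10_000_000L, 10_000_000L);
        System.out.println(bytes / (1024L * 1024L) + " MB of NameNode heap");
    }
}
```

Storing the same data as a few large files needs orders of magnitude fewer metadata objects, which is why HDFS favors large files.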

Working with HDFS

Operating HDFS from the command line

# List the files under the directory /
hdfs dfs -ls /
# Create a directory (absolute path)
hdfs dfs -mkdir /test
# Upload a file
hdfs dfs -put test.txt /test/
# Download a file
hdfs dfs -get /test/test.txt
# Print a file's contents
hdfs dfs -cat /test/test.txt

Operating HDFS from the web UI

Open http://192.168.9.200:9870/ to work with HDFS through the NameNode web interface.

[Screenshot: NameNode web UI]

Files are managed from this page:

[Screenshot: HDFS file browser]

The visual interface is somewhat simpler to work with:

[Screenshot: file operations in the web UI]

Operating HDFS from Java

  1. Add the Maven dependency:
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>3.3.2</version>
</dependency>
  2. Establish the connection (run before each test):
@Before
public void setUp() throws Exception {
    System.out.println("Opening connection to HDFS");
    configuration = new Configuration();
    fileSystem = FileSystem.get(new URI(HDFS_PATH), configuration, "hadoop");
}
  3. File operations:
    Configuration configuration = null;
    FileSystem fileSystem = null;
    public static final String HDFS_PATH = "hdfs://192.168.9.200:9000";

    /**
     * Create a directory in HDFS
     */
    @Test
    public void mkdir() throws Exception {
        fileSystem.mkdirs(new Path("/JavaDemo/test"));
    }

    /**
     * Create a file and write to it
     */
    @Test
    public void create() throws Exception {
        FSDataOutputStream outputStream = fileSystem.create(new Path("/JavaDemo/test/haha.txt"));
        outputStream.write("hello bigdata from javaDemo".getBytes());
        outputStream.flush();
        outputStream.close();
    }

    /**
     * Print a file's contents, like `hdfs dfs -cat file`
     */
    @Test
    public void cat() throws Exception {
        FSDataInputStream in = fileSystem.open(new Path("/JavaDemo/test/haha.txt"));
        IOUtils.copyBytes(in, System.out, 1024);
        in.close();
    }

    /**
     * Rename a file
     */
    @Test
    public void rename() throws Exception {
        Path oldPath = new Path("/JavaDemo/test/haha.txt");
        Path newPath = new Path("/JavaDemo/test/hehe.txt");
        fileSystem.rename(oldPath, newPath);
    }

    /**
     * Upload a local file to HDFS
     */
    @Test
    public void copyFromLocalFile() throws Exception {
        Path localPath = new Path("hello.txt");
        Path hdfsPath = new Path("/");
        fileSystem.copyFromLocalFile(localPath, hdfsPath);
    }

    /**
     * Upload a file to HDFS with progress reporting
     */
    @Test
    public void copyFromLocalFileWithProgress() throws Exception {
        InputStream in = new BufferedInputStream(Files.newInputStream(new File("hbase-2.2.7-bin.tar.gz").toPath()));
        FSDataOutputStream output = fileSystem.create(new Path("/JavaDemo/test/hbase-2.2.7-bin.tar.gz"),
                () -> System.out.print("."));
        // The final argument closes both streams when the copy finishes
        IOUtils.copyBytes(in, output, 4096, true);
    }

    /**
     * Download a file from HDFS to the local file system
     */
    @Test
    public void copyToLocalFile() throws Exception {
        Path hdfsPath = new Path("/JavaDemo/test/haha.txt");
        Path localPath = new Path("./haha.txt");
        // delSrc = false, useRawLocalFileSystem = true (skip writing a local .crc checksum file)
        fileSystem.copyToLocalFile(false, hdfsPath, localPath, true);
    }

    /**
     * List the files and directories directly under a path
     */
    @Test
    public void listFiles() throws Exception {
        FileStatus[] fileStatuses = fileSystem.listStatus(new Path("/"));
        for (FileStatus f : fileStatuses) {
            String isDir = f.isDirectory() ? "directory" : "file";
            short replication = f.getReplication();
            long len = f.getLen();
            String path = f.getPath().toString();
            System.out.println(isDir + "\t" + replication + "\t" + len + "\t" + path);
        }
    }

    /**
     * Delete a path (recursively)
     */
    @Test
    public void delete() throws IOException {
        fileSystem.delete(new Path("/JavaDemo/haha.txt"), true);
    }
  4. Close the connection (run after each test):
@After
public void tearDown() throws Exception {
    if (fileSystem != null) {
        fileSystem.close();
    }
    configuration = null;
    fileSystem = null;
    System.out.println("Closed connection to HDFS");
}
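The dots printed by copyFromLocalFileWithProgress come from a progress callback passed to fileSystem.create. As a rough illustration in plain java.io (no cluster needed; note the real HDFS client invokes its Progressable as data packets are acknowledged, not strictly once per buffer), the pattern looks like this:

```java
import java.io.*;

/**
 * Sketch of a progress-reporting stream copy, assuming a simplified
 * "call back after each chunk" model. The interface below is a
 * stand-in for org.apache.hadoop.util.Progressable.
 */
class ProgressCopy {
    interface Progress {
        void progress();
    }

    /** Copy in to out in bufferSize chunks, reporting after each chunk. */
    static long copy(InputStream in, OutputStream out, int bufferSize, Progress cb) throws IOException {
        byte[] buf = new byte[bufferSize];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
            total += n;
            cb.progress(); // e.g. print a dot, as in the upload test
        }
        out.flush();
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[10_000];
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // 10,000 bytes in 4,096-byte chunks: the callback fires three times
        long copied = copy(new ByteArrayInputStream(data), out, 4096, () -> System.out.print("."));
        System.out.println(" copied " + copied + " bytes");
    }
}
```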

The complete test class

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.*;
import org.apache.hadoop.io.IOUtils;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

import java.io.*;
import java.net.URI;
import java.nio.file.Files;

/**
 * Working with HDFS from Java
 *
 * @author Gettler
 */
public class HdfsDemo {
    Configuration configuration = null;
    FileSystem fileSystem = null;
    public static final String HDFS_PATH = "hdfs://192.168.9.200:9000";

    /**
     * Create a directory in HDFS
     */
    @Test
    public void mkdir() throws Exception {
        fileSystem.mkdirs(new Path("/JavaDemo/test"));
    }

    /**
     * Create a file and write to it
     */
    @Test
    public void create() throws Exception {
        FSDataOutputStream outputStream = fileSystem.create(new Path("/JavaDemo/test/haha.txt"));
        outputStream.write("hello bigdata from javaDemo".getBytes());
        outputStream.flush();
        outputStream.close();
    }

    /**
     * Print a file's contents, like `hdfs dfs -cat file`
     */
    @Test
    public void cat() throws Exception {
        FSDataInputStream in = fileSystem.open(new Path("/JavaDemo/test/haha.txt"));
        IOUtils.copyBytes(in, System.out, 1024);
        in.close();
    }

    /**
     * Rename a file
     */
    @Test
    public void rename() throws Exception {
        Path oldPath = new Path("/JavaDemo/test/haha.txt");
        Path newPath = new Path("/JavaDemo/test/hehe.txt");
        fileSystem.rename(oldPath, newPath);
    }

    /**
     * Upload a local file to HDFS
     */
    @Test
    public void copyFromLocalFile() throws Exception {
        Path localPath = new Path("hello.txt");
        Path hdfsPath = new Path("/");
        fileSystem.copyFromLocalFile(localPath, hdfsPath);
    }

    /**
     * Upload a file to HDFS with progress reporting
     */
    @Test
    public void copyFromLocalFileWithProgress() throws Exception {
        InputStream in = new BufferedInputStream(Files.newInputStream(new File("hbase-2.2.7-bin.tar.gz").toPath()));
        FSDataOutputStream output = fileSystem.create(new Path("/JavaDemo/test/hbase-2.2.7-bin.tar.gz"),
                () -> System.out.print("."));
        // The final argument closes both streams when the copy finishes
        IOUtils.copyBytes(in, output, 4096, true);
    }

    /**
     * Download a file from HDFS to the local file system
     */
    @Test
    public void copyToLocalFile() throws Exception {
        Path hdfsPath = new Path("/JavaDemo/test/haha.txt");
        Path localPath = new Path("./haha.txt");
        // delSrc = false, useRawLocalFileSystem = true (skip writing a local .crc checksum file)
        fileSystem.copyToLocalFile(false, hdfsPath, localPath, true);
    }

    /**
     * List the files and directories directly under a path
     */
    @Test
    public void listFiles() throws Exception {
        FileStatus[] fileStatuses = fileSystem.listStatus(new Path("/"));
        for (FileStatus f : fileStatuses) {
            String isDir = f.isDirectory() ? "directory" : "file";
            short replication = f.getReplication();
            long len = f.getLen();
            String path = f.getPath().toString();
            System.out.println(isDir + "\t" + replication + "\t" + len + "\t" + path);
        }
    }

    /**
     * Delete a path (recursively)
     */
    @Test
    public void delete() throws IOException {
        fileSystem.delete(new Path("/JavaDemo/haha.txt"), true);
    }

    // Runs before each test
    @Before
    public void setUp() throws Exception {
        System.out.println("Opening connection to HDFS");
        configuration = new Configuration();
        fileSystem = FileSystem.get(new URI(HDFS_PATH), configuration, "hadoop");
    }

    // Runs after each test
    @After
    public void tearDown() throws Exception {
        if (fileSystem != null) {
            fileSystem.close();
        }
        configuration = null;
        fileSystem = null;
        System.out.println("Closed connection to HDFS");
    }
}

