Contents
Preface
1. What is Apache Paimon
I. Quick start in a local environment
1. Local Flink standalone (pseudo) cluster
2. Running the Paimon demo in IDEA
2.1 Code
2.2 Successful run in IDEA
3. Streaming read/write in IDEA
3.1 Streaming write
3.2 Streaming read (toChangelogStream)
II. Advanced: local (IDEA) multi-stream join test
Problem to solve:
Note:
1. 'changelog-producer' = 'full-compaction'
(1) multiWrite code
(2) Read latency
2. 'changelog-producer' = 'lookup'
III. Possible problems
IV. Outlook
Preface
1. What is Apache Paimon
Apache Paimon (incubating) is a streaming data lake storage technology that provides high-throughput, low-latency data ingestion, streaming subscription, and real-time query capabilities.
Paimon adopts open data formats and an open design philosophy, integrating with mainstream compute engines such as Apache Flink / Spark / Trino to jointly advance the Streaming Lakehouse architecture.
Paimon manages metadata on distributed file systems in a lake-storage fashion and uses the open ORC, Parquet, and Avro file formats. It supports all major compute engines, including Flink, Spark, Hive, Trino, and Presto, with more engines such as Doris and StarRocks planned.
Official site: https://paimon.apache.org/
GitHub: https://github.com/apache/incubator-paimon
The following is a quick-start example for getting hands-on with Paimon:
I. Quick start in a local environment
This walkthrough is based on Paimon 0.4-SNAPSHOT on Flink 1.14.4. A Flink version that is too old is not supported: Paimon's 1.14 build is compiled against Flink 1.14.6, and in my tests Flink 1.14.0 does not work!
paimon-flink-1.14-0.4-20230504.002229-50.jar
1. Local Flink standalone (pseudo) cluster
0. First download the jar and add it to Flink's lib directory;
1. Following the official demo, start the Flink SQL client, create a catalog, create a table, create a data source (a view), and insert data into the table (see the SQL sketch after this list).
2. Check the Flink UI at localhost:8081.
3. Inspect the data and metadata files on the filesystem.
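For step 1, a minimal SQL-client session mirroring the official quickstart could look like this (the catalog/table names and the local warehouse path are illustrative):

CREATE CATALOG my_catalog WITH (
    'type' = 'paimon',
    'warehouse' = 'file:/tmp/paimon'
);

USE CATALOG my_catalog;

-- a primary-key table
CREATE TABLE word_count (
    word STRING PRIMARY KEY NOT ENFORCED,
    cnt BIGINT
);

-- an unbounded sample source
CREATE TEMPORARY TABLE word_table (
    word STRING
) WITH (
    'connector' = 'datagen',
    'fields.word.length' = '1'
);

SET 'execution.checkpointing.interval' = '10 s';

INSERT INTO word_count SELECT word, COUNT(*) FROM word_table GROUP BY word;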
2. Running the Paimon demo in IDEA
Maven dependency (pom):
<dependency>
<groupId>org.apache.paimon</groupId>
<artifactId>paimon-flink-1.14</artifactId>
<version>0.4-SNAPSHOT</version>
</dependency>
If it cannot be pulled from a repository, install it into your local Maven repository manually:
mvn install:install-file -DgroupId=org.apache.paimon -DartifactId=paimon-flink-1.14 -Dversion=0.4-SNAPSHOT -Dpackaging=jar -Dfile=D:\software\paimon-flink-1.14-0.4-20230504.002229-50.jar
2.1 Code
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
/**
* @Author: YK.Leo
* @Date: 2023-05-14 15:12
* @Version: 1.0
*/
// Succeeded locally!
public class OfficeDemoV1 {
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(1);
env.enableCheckpointing(10000L);
env.getCheckpointConfig().setCheckpointStorage("file:/D:/tmp/paimon/");
TableEnvironment tableEnv = StreamTableEnvironment.create(env);
// 0. Create a Catalog and a Table
tableEnv.executeSql("CREATE CATALOG my_catalog_api WITH (\n" +
" 'type'='paimon',\n" + // todo: !!!
" 'warehouse'='file:///D:/tmp/paimon'\n" +
")");
tableEnv.executeSql("USE CATALOG my_catalog_api");
tableEnv.executeSql("CREATE TABLE IF NOT EXISTS word_count_api (\n" +
" word STRING PRIMARY KEY NOT ENFORCED,\n" +
" cnt BIGINT\n" +
")");
// 1. Write Data
tableEnv.executeSql("CREATE TEMPORARY TABLE IF NOT EXISTS word_table_api (\n" +
" word STRING\n" +
") WITH (\n" +
" 'connector' = 'datagen',\n" +
" 'fields.word.length' = '1'\n" +
")");
// tableEnv.executeSql("SET 'execution.checkpointing.interval' = '10 s'");
tableEnv.executeSql("INSERT INTO word_count_api SELECT word, COUNT(*) FROM word_table_api GROUP BY word");
// env.execute(); // not needed: executeSql already submits the INSERT job (see problem 4 in section III)
}
}
2.2 Successful run in IDEA
3. Streaming read/write in IDEA
3.1 Streaming write
Code:
package com.study.flink.table.paimon.demo;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.StatementSet;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
/**
* @Author: YK.Leo
* @Date: 2023-05-17 11:11
* @Version: 1.0
*/
// Succeeded locally!
public class OfficeStreamsWriteV2 {
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(1);
env.enableCheckpointing(10000L);
env.getCheckpointConfig().setCheckpointStorage("file:/D:/tmp/paimon/");
TableEnvironment tableEnv = StreamTableEnvironment.create(env);
// 0. Create a Catalog and a Table
tableEnv.executeSql("CREATE CATALOG my_catalog_local WITH (\n" +
" 'type'='paimon',\n" + // todo: !!!
" 'warehouse'='file:///D:/tmp/paimon'\n" +
")");
tableEnv.executeSql("USE CATALOG my_catalog_local");
tableEnv.executeSql("CREATE DATABASE IF NOT EXISTS my_catalog_local.local_db");
tableEnv.executeSql("USE local_db");
// drop tbl
tableEnv.executeSql("DROP TABLE IF EXISTS paimon_tbl_streams");
tableEnv.executeSql("CREATE TABLE IF NOT EXISTS paimon_tbl_streams(\n"
+ " uuid bigint,\n"
+ " name VARCHAR(3),\n"
+ " age int,\n"
+ " ts TIMESTAMP(3),\n"
+ " dt VARCHAR(10), \n"
+ " PRIMARY KEY (dt, uuid) NOT ENFORCED \n"
+ ") PARTITIONED BY (dt) \n"
+ " WITH (\n" +
" 'merge-engine' = 'partial-update',\n" +
" 'changelog-producer' = 'full-compaction', \n" +
" 'file.format' = 'orc', \n" +
" 'scan.mode' = 'compacted-full', \n" +
" 'bucket' = '5', \n" +
" 'sink.parallelism' = '5', \n" +
" 'sequence.field' = 'ts' \n" + // todo, to check
")"
);
// datagen ====================================================================
tableEnv.executeSql("CREATE TEMPORARY TABLE IF NOT EXISTS source_A (\n" +
" uuid bigint PRIMARY KEY NOT ENFORCED,\n" +
" `name` VARCHAR(3)," +
" _ts1 TIMESTAMP(3)\n" +
") WITH (\n" +
" 'connector' = 'datagen', \n" +
" 'fields.uuid.kind'='sequence',\n" +
" 'fields.uuid.start'='0', \n" +
" 'fields.uuid.end'='1000000', \n" +
" 'rows-per-second' = '1' \n" +
")");
tableEnv.executeSql("CREATE TEMPORARY TABLE IF NOT EXISTS source_B (\n" +
" uuid bigint PRIMARY KEY NOT ENFORCED,\n" +
" `age` int," +
" _ts2 TIMESTAMP(3)\n" +
") WITH (\n" +
" 'connector' = 'datagen', \n" +
" 'fields.uuid.kind'='sequence',\n" +
" 'fields.uuid.start'='0', \n" +
" 'fields.uuid.end'='1000000', \n" +
" 'rows-per-second' = '1' \n" +
")");
// Two separate executeSql INSERTs would launch two independent pipelines (kept for reference):
// tableEnv.executeSql("insert into paimon_tbl_streams(uuid, name, ts, dt) select uuid, name, _ts1 as ts, date_format(_ts1,'yyyy-MM-dd') as dt from source_A");
// tableEnv.executeSql("insert into paimon_tbl_streams(uuid, age, ts, dt) select uuid, age, _ts2 as ts, date_format(_ts2,'yyyy-MM-dd') as dt from source_B");
StatementSet statementSet = tableEnv.createStatementSet();
statementSet
.addInsertSql("insert into paimon_tbl_streams(uuid, name, ts, dt) select uuid, name, _ts1 as ts, date_format(_ts1,'yyyy-MM-dd') as dt from source_A")
.addInsertSql("insert into paimon_tbl_streams(uuid, age, dt) select uuid, age, date_format(_ts2,'yyyy-MM-dd') as dt from source_B")
;
statementSet.execute();
// env.execute();
}
}
Result:
With only one stream, the code above works fine (for a pure write demo a single stream is enough); with two streams a "write conflict" occurs!
As follows:
I tried the official approach (a Dedicated Compaction Job, sketched below), but it did not seem to help here; for the working solution see "II. Advanced: local (IDEA) multi-stream join test" below.
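For reference, the official pattern pairs a write-only table with a separate compaction job. A sketch based on the Paimon 0.4 documentation (the jar path and the FlinkActions entry class follow the 0.4 docs; treat them as assumptions to verify against your version):

-- on the table, disable inline compaction for all writers
ALTER TABLE paimon_tbl_streams SET ('write-only' = 'true');

# then submit the dedicated compaction job from a shell
<FLINK_HOME>/bin/flink run \
    -c org.apache.paimon.flink.action.FlinkActions \
    /path/to/paimon-flink-1.14-0.4-SNAPSHOT.jar \
    compact \
    --warehouse file:///D:/tmp/paimon \
    --database local_db \
    --table paimon_tbl_streams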
3.2 Streaming read (toChangelogStream)
Code:
package com.study.flink.table.paimon.demo;
import org.apache.flink.api.common.functions.FilterFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Schema;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.table.connector.ChangelogMode;
import org.apache.flink.types.Row;
import org.apache.flink.types.RowKind;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
/**
* @Author: YK.Leo
* @Date: 2023-05-15 18:50
* @Version: 1.0
*/
// Streaming read of a single table works!
public class OfficeStreamReadV1 {
public static final Logger LOGGER = LogManager.getLogger(OfficeStreamReadV1.class);
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(1);
env.enableCheckpointing(10000L);
env.getCheckpointConfig().setCheckpointStorage("file:/D:/tmp/paimon/");
TableEnvironment tableEnv = StreamTableEnvironment.create(env);
// 0. Create a Catalog and a Table
tableEnv.executeSql("CREATE CATALOG my_catalog_local WITH (\n" +
" 'type'='paimon',\n" + // todo: !!!
" 'warehouse'='file:///D:/tmp/paimon'\n" +
")");
tableEnv.executeSql("USE CATALOG my_catalog_local");
tableEnv.executeSql("CREATE DATABASE IF NOT EXISTS my_catalog_local.local_db");
tableEnv.executeSql("USE local_db");
// no need to create the table again
// convert to DataStream
// Table table = tableEnv.sqlQuery("SELECT * FROM paimon_tbl_streams");
Table table = tableEnv.sqlQuery("SELECT * FROM paimon_tbl_streams WHERE name is not null and age is not null");
// DataStream<Row> dataStream = ((StreamTableEnvironment) tableEnv).toChangelogStream(table);
// note: toDataStream doesn't support consuming update and delete changes produced by the TableSourceScan node
// DataStream<Row> dataStream = ((StreamTableEnvironment) tableEnv).toDataStream(table);
// Drop -U rows (UPDATE_BEFORE): the pre-update image does not need to be re-emitted!
DataStream<Row> dataStream = ((StreamTableEnvironment) tableEnv)
.toChangelogStream(table, Schema.newBuilder().primaryKey("dt","uuid").build(), ChangelogMode.upsert())
.filter(new FilterFunction<Row>() {
@Override
public boolean filter(Row row) throws Exception {
boolean isNotUpdateBefore = !(row.getKind().equals(RowKind.UPDATE_BEFORE));
if (!isNotUpdateBefore) {
LOGGER.info("UPDATE_BEFORE: " + row.toString());
}
return isNotUpdateBefore;
}
})
;
// use this datastream
dataStream.executeAndCollect().forEachRemaining(System.out::println);
// env.execute(); // not needed: executeAndCollect() already runs the pipeline (see problem 4 in section III)
}
}
Result:
II. Advanced: local (IDEA) multi-stream join test
Problem to solve:
Multiple streams share the same primary key, and each stream updates a different subset of the non-key fields; the streams are stitched together (joined) on the primary key.
Note:
If two Flink jobs (or two pipelines) write the same Paimon table directly, a conflict arises immediately: one of the streams keeps throwing exceptions and restarting;
Instead, merge the streams into one with UNION ALL, so that a single Flink job writes the Paimon table;
Use a primary-key table with 'merge-engine' = 'partial-update';
1. 'changelog-producer' = 'full-compaction'
(1) multiWrite code
package com.study.flink.table.paimon.multi;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.StatementSet;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
/**
* @Author: YK.Leo
* @Date: 2023-05-18 10:17
* @Version: 1.0
*/
// Succeeded locally!
// No write conflicts: ran for 5 minutes locally without any exception (and for days in production)! The data can also be stream-read by another job.
public class MultiStreamsUnionWriteV1 {
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(1);
env.enableCheckpointing(10*1000L);
env.getCheckpointConfig().setCheckpointStorage("file:/D:/tmp/paimon/");
TableEnvironment tableEnv = StreamTableEnvironment.create(env);
// 0. Create a Catalog and a Table
tableEnv.executeSql("CREATE CATALOG my_catalog_local WITH (\n" +
" 'type'='paimon',\n" + // todo: !!!
" 'warehouse'='file:///D:/tmp/paimon'\n" +
")");
tableEnv.executeSql("USE CATALOG my_catalog_local");
tableEnv.executeSql("CREATE DATABASE IF NOT EXISTS my_catalog_local.local_db");
tableEnv.executeSql("USE local_db");
// drop & create tbl
tableEnv.executeSql("DROP TABLE IF EXISTS paimon_tbl_streams");
tableEnv.executeSql("CREATE TABLE IF NOT EXISTS paimon_tbl_streams(\n"
+ " uuid bigint,\n"
+ " name VARCHAR(3),\n"
+ " age int,\n"
+ " ts TIMESTAMP(3),\n"
+ " dt VARCHAR(10), \n"
+ " PRIMARY KEY (dt, uuid) NOT ENFORCED \n"
+ ") PARTITIONED BY (dt) \n"
+ " WITH (\n" +
" 'merge-engine' = 'partial-update',\n" +
" 'changelog-producer' = 'full-compaction', \n" +
" 'file.format' = 'orc', \n" +
" 'scan.mode' = 'compacted-full', \n" +
" 'bucket' = '5', \n" +
" 'sink.parallelism' = '5', \n" +
// " 'write_only' = 'true', \n" +
" 'sequence.field' = 'ts' \n" + // todo, to check
")"
);
// datagen ====================================================================
tableEnv.executeSql("CREATE TEMPORARY TABLE IF NOT EXISTS source_A (\n" +
" uuid bigint PRIMARY KEY NOT ENFORCED,\n" +
" `name` VARCHAR(3)," +
" _ts1 TIMESTAMP(3)\n" +
") WITH (\n" +
" 'connector' = 'datagen', \n" +
" 'fields.uuid.kind'='sequence',\n" +
" 'fields.uuid.start'='0', \n" +
" 'fields.uuid.end'='1000000', \n" +
" 'rows-per-second' = '1' \n" +
")");
tableEnv.executeSql("CREATE TEMPORARY TABLE IF NOT EXISTS source_B (\n" +
" uuid bigint PRIMARY KEY NOT ENFORCED,\n" +
" `age` int," +
" _ts2 TIMESTAMP(3)\n" +
") WITH (\n" +
" 'connector' = 'datagen', \n" +
" 'fields.uuid.kind'='sequence',\n" +
" 'fields.uuid.start'='0', \n" +
" 'fields.uuid.end'='1000000', \n" +
" 'rows-per-second' = '1' \n" +
")");
//
StatementSet statementSet = tableEnv.createStatementSet();
String sqlText = "INSERT INTO paimon_tbl_streams(uuid, name, age, ts, dt) \n" +
"select uuid, name, cast(null as int) as age, _ts1 as ts, date_format(_ts1,'yyyy-MM-dd') as dt from source_A \n" +
"UNION ALL \n" +
"select uuid, cast(null as string) as name, age, _ts2 as ts, date_format(_ts2,'yyyy-MM-dd') as dt from source_B"
;
statementSet.addInsertSql(sqlText);
statementSet.execute();
}
}
The read code is the same as above.
(2) Read latency
That is, the delay from a client record landing in Paimon, through the join with the server stream, to being picked up by the Flink-Paimon streaming read;
The latency is on the order of minutes! A tuning sketch follows.
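The freshness is governed by the checkpoint interval (each commit happens on a checkpoint) and by how often a full compaction runs, since full-compaction only emits the changelog when a compaction finishes. If lower latency is needed, Paimon exposes 'full-compaction.delta-commits' to force a full compaction every N commits; a sketch (the value 1 is illustrative and maximizes freshness at the cost of write amplification):

ALTER TABLE paimon_tbl_streams SET ('full-compaction.delta-commits' = '1');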
2. 'changelog-producer' = 'lookup'
Reads and writes are the same as above; only the table options change: set 'changelog-producer' = 'lookup' and, to match it, 'scan.mode' = 'latest';
Lookup may offer lower latency, but the resulting data quality remains to be verified.
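A minimal sketch of the changed options; all other DDL stays as in the full-compaction version above (columns elided):

CREATE TABLE IF NOT EXISTS paimon_tbl_streams (
    -- ... same columns and primary key as above ...
) PARTITIONED BY (dt) WITH (
    'merge-engine' = 'partial-update',
    'changelog-producer' = 'lookup',
    'scan.mode' = 'latest',
    'bucket' = '5',
    'sequence.field' = 'ts'
);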
Note:
In testing, the full-compaction mode has so far been fully stable in a production environment (two joined streams at roughly 3K QPS, 2-3 minutes of latency).
99.9% of the records arrive within 2-3 minutes;
(with the multiWrite job checkpointing every 60 s)
III. Possible problems
1. Caused by: java.lang.ClassCastException: org.codehaus.janino.CompilerFactory cannot be cast to org.codehaus.commons.compiler.ICompilerFactory
Cause: a conflict on the org.codehaus.janino dependency.
Fix: exclude it everywhere, e.g. in the maven-shade-plugin artifact set:
<exclude>org.codehaus.janino:*</exclude>
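For context, here is a sketch of where that exclude lives in a maven-shade-plugin configuration (the plugin version is illustrative):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>3.2.4</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <artifactSet>
                    <excludes>
                        <exclude>org.codehaus.janino:*</exclude>
                    </excludes>
                </artifactSet>
            </configuration>
        </execution>
    </executions>
</plugin>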
2. Caused by: java.lang.ClassNotFoundException: org.apache.flink.util.function.SerializableFunction
Cause: the Flink streaming and Flink table versions do not match, or a required dependency is missing (here, Paimon depends on Flink 1.14.6 as its minimum and is incompatible with Flink 1.14.0).
Fix: upgrade Flink to 1.14.4 or later.
Flink configuration reference: Configuration | Apache Flink
3. Caused by: java.util.ServiceConfigurationError: org.apache.flink.table.factories.Factory: Provider org.apache.flink.table.store.connector.TableStoreManagedFactory not found
Add a Factory service file under META-INF/services in your project (this is how Flink's factory discovery finds the CatalogFactory so the catalog can be created), as sketched below.
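A sketch of that service file (plain text, one factory class per line; the class below is the one named in the error):

# src/main/resources/META-INF/services/org.apache.flink.table.factories.Factory
org.apache.flink.table.store.connector.TableStoreManagedFactory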
4. Caused by: org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: No operators defined in streaming topology. Cannot execute.
If tableEnv.executeSql(...) or statementSet.execute() is already submitting the pipeline, do not call env.execute() as well!
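A minimal runnable sketch of the rule (the datagen/blackhole connectors are standard Flink connectors used purely for illustration):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class ExecuteRuleSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        TableEnvironment tableEnv = StreamTableEnvironment.create(env);
        tableEnv.executeSql("CREATE TEMPORARY TABLE src (x INT) WITH ('connector' = 'datagen')");
        tableEnv.executeSql("CREATE TEMPORARY TABLE dst (x INT) WITH ('connector' = 'blackhole')");
        // executeSql on an INSERT submits its own Flink job:
        tableEnv.executeSql("INSERT INTO dst SELECT x FROM src");
        // env.execute(); // would throw: "No operators defined in streaming topology"
    }
}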
5. Flink SQL cannot use a bare null AS directly; write cast(null as <data_type>) instead, e.g. cast(null as string);
6. When creating a partitioned Paimon table, the partition fields must be included in the primary key, otherwise table creation fails:
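A minimal illustration (names are made up):

-- OK: the partition field dt is part of the primary key
CREATE TABLE t_ok (
    uuid BIGINT,
    dt VARCHAR(10),
    PRIMARY KEY (dt, uuid) NOT ENFORCED
) PARTITIONED BY (dt);

-- Fails: dt is partitioned on but missing from the primary key
CREATE TABLE t_bad (
    uuid BIGINT,
    dt VARCHAR(10),
    PRIMARY KEY (uuid) NOT ENFORCED
) PARTITIONED BY (dt);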
IV. Outlook
Suppose we have data of the form:
pk      stream_client   stream_server   ts
1001    null            a               1
1001    A               null            2
1001    B               null            3
With Paimon's official implementation, multi-stream stitching via partial update on a primary-key table produces the following result:
1001    B    a    3
That is, records are deduplicated by primary key (only the latest record from each stream is kept). If you want to keep all of the stream_client records, the official implementation cannot do it; the source has to be modified.
We have already implemented this non-deduplicating behavior; a dedicated follow-up article will explain the approach.
To picture the scenario:
stream_client carries client-side data: after a single request to the server, the user can scroll up and down (or enter a page and come back), so one item produces multiple impressions without triggering new server requests. The client side then has several records while the server side has only one. Those repeated client-side impressions/clicks reflect how interested the user is in the item; they are meaningful data and should not be deduplicated away!
[To be continued...]