Paimon
Usage Notes
- When data is stored on HDFS, place core-site.xml, hdfs-site.xml, and hive-site.xml in the conf directories of both FE and BE. The Hadoop configuration files in the conf directory are read first; the configuration files under the HADOOP_CONF_DIR environment variable are read afterwards.
- The currently supported Paimon version is 0.6.0.
Creating a Catalog
Paimon catalogs currently support two metastore types for catalog creation:
- filesystem (default): stores both metadata and data on the filesystem.
- hive metastore: additionally stores metadata in Hive Metastore, so users can access these tables directly from Hive.
Creating a Catalog Based on FileSystem
For version 2.0.1 and earlier, please use "Creating a Catalog Based on Hive Metastore" below.
HDFS

```sql
CREATE CATALOG `paimon_hdfs` PROPERTIES (
    "type" = "paimon",
    "warehouse" = "hdfs://HDFS8000871/user/paimon",
    "dfs.nameservices" = "HDFS8000871",
    "dfs.ha.namenodes.HDFS8000871" = "nn1,nn2",
    "dfs.namenode.rpc-address.HDFS8000871.nn1" = "172.21.0.1:4007",
    "dfs.namenode.rpc-address.HDFS8000871.nn2" = "172.21.0.2:4007",
    "dfs.client.failover.proxy.provider.HDFS8000871" = "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
    "hadoop.username" = "hadoop"
);
```
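Once the catalog is created, it can be queried like any other Doris catalog. A minimal usage sketch, assuming a Paimon database `db1` containing a table `tbl1` (both names are hypothetical):

```sql
-- Switch to the newly created Paimon catalog
SWITCH paimon_hdfs;
-- List the Paimon databases visible through this catalog
SHOW DATABASES;
-- Query a Paimon table (db1 and tbl1 are hypothetical names)
SELECT * FROM db1.tbl1 LIMIT 10;
```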
MINIO

Note:
Users need to manually download paimon-s3-0.6.0-incubating.jar, place it in the ${DORIS_HOME}/be/lib/java_extensions/preload-extensions directory, and restart the BE.
Starting from version 2.0.2, this file can instead be placed in the BE's custom_lib/ directory (create it manually if it does not exist) to prevent it from being lost when the lib directory is replaced during a cluster upgrade.

```sql
CREATE CATALOG `paimon_s3` PROPERTIES (
    "type" = "paimon",
    "warehouse" = "s3://bucket_name/paimons3",
    "s3.endpoint" = "http://<ip>:<port>",
    "s3.access_key" = "ak",
    "s3.secret_key" = "sk"
);
```
OBS

Note:
Users need to manually download paimon-s3-0.6.0-incubating.jar, place it in the ${DORIS_HOME}/be/lib/java_extensions/preload-extensions directory, and restart the BE.
Starting from version 2.0.2, this file can instead be placed in the BE's custom_lib/ directory (create it manually if it does not exist) to prevent it from being lost when the lib directory is replaced during a cluster upgrade.

```sql
CREATE CATALOG `paimon_obs` PROPERTIES (
    "type" = "paimon",
    "warehouse" = "obs://bucket_name/paimon",
    "obs.endpoint" = "obs.cn-north-4.myhuaweicloud.com",
    "obs.access_key" = "ak",
    "obs.secret_key" = "sk"
);
```
COS

```sql
CREATE CATALOG `paimon_cos` PROPERTIES (
    "type" = "paimon",
    "warehouse" = "cosn://paimon-1308700295/paimoncos",
    "cos.endpoint" = "cos.ap-beijing.myqcloud.com",
    "cos.access_key" = "ak",
    "cos.secret_key" = "sk"
);
```
OSS

```sql
CREATE CATALOG `paimon_oss` PROPERTIES (
    "type" = "paimon",
    "warehouse" = "oss://paimon-zd/paimonoss",
    "oss.endpoint" = "oss-cn-beijing.aliyuncs.com",
    "oss.access_key" = "ak",
    "oss.secret_key" = "sk"
);
```
Creating a Catalog Based on Hive Metastore

```sql
CREATE CATALOG `paimon_hms` PROPERTIES (
    "type" = "paimon",
    "paimon.catalog.type" = "hms",
    "warehouse" = "hdfs://HDFS8000871/user/zhangdong/paimon2",
    "hive.metastore.uris" = "thrift://172.21.0.44:7004",
    "dfs.nameservices" = "HDFS8000871",
    "dfs.ha.namenodes.HDFS8000871" = "nn1,nn2",
    "dfs.namenode.rpc-address.HDFS8000871.nn1" = "172.21.0.1:4007",
    "dfs.namenode.rpc-address.HDFS8000871.nn2" = "172.21.0.2:4007",
    "dfs.client.failover.proxy.provider.HDFS8000871" = "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
    "hadoop.username" = "hadoop"
);
```
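Tables in a Paimon catalog can also be referenced with fully qualified three-part names without switching catalogs, and the cached metadata can be refreshed after the Paimon side changes. A brief sketch (`db1` and `tbl1` are hypothetical names):

```sql
-- Refresh cached metadata after tables change on the Paimon side
REFRESH CATALOG paimon_hms;
-- Fully qualified access without SWITCH (db1 and tbl1 are hypothetical)
SELECT * FROM paimon_hms.db1.tbl1 LIMIT 10;
```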
Column Type Mapping

| Paimon Data Type | Doris Data Type | Comment |
|---|---|---|
| BooleanType | Boolean | |
| TinyIntType | TinyInt | |
| SmallIntType | SmallInt | |
| IntType | Int | |
| FloatType | Float | |
| BigIntType | BigInt | |
| DoubleType | Double | |
| VarCharType | VarChar | |
| CharType | Char | |
| DecimalType(precision, scale) | Decimal(precision, scale) | |
| TimestampType, LocalZonedTimestampType | DateTime | |
| DateType | Date | |
| MapType | Map | Nested Maps are supported |
| ArrayType | Array | Nested Arrays are supported |
| VarBinaryType, BinaryType | Binary | |
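To inspect how a specific Paimon table's columns were mapped, the table can be described through the catalog. A minimal sketch, assuming a hypothetical database `db1` and table `tbl1`:

```sql
SWITCH paimon_hdfs;
USE db1;   -- hypothetical database
DESC tbl1; -- shows each column with its mapped Doris type
```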
Error "file system not supported" when accessing object storage (OSS, S3, etc.)
In versions up to and including 2.0.5, users need to manually download the following jar packages, place them in the ${DORIS_HOME}/be/lib/java_extensions/preload-extensions directory, and restart the BE:
- For OSS: paimon-oss-0.6.0-incubating.jar
- For other object storage: paimon-s3-0.6.0-incubating.jar