[MINOR] docs: Add paimon spark connector doc #6328

Merged 5 commits on Jan 20, 2025
New file: docs/spark-connector/spark-catalog-paimon.md (+89 lines)

---
title: "Spark connector Paimon catalog"
slug: /spark-connector/spark-catalog-paimon
keyword: spark connector paimon catalog
license: "This software is licensed under the Apache License version 2."
---

The Apache Gravitino Spark connector offers the capability to read and write Paimon tables, with the metadata managed by the Gravitino server. To use the Paimon catalog within the Spark connector, you must download the [Paimon Spark runtime jar](https://paimon.apache.org/docs/0.8/spark/quick-start/#preparation) and add it to the Spark classpath.

## Capabilities

### Paimon Catalog Backend Support
- Only the Paimon FilesystemCatalog on HDFS is currently supported.

### Supported DDL and DML operations
#### Namespace Support
- `CREATE NAMESPACE`
- `DROP NAMESPACE`
- `LIST NAMESPACE`
- `LOAD NAMESPACE`
  - Does not return any user-specified configs, because the Spark connector currently supports only the FilesystemCatalog backend. (A usage sketch follows this list.)
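
A minimal sketch of these operations, assuming `paimon_catalog` is the Paimon catalog name managed by Gravitino; in Spark SQL, `LIST NAMESPACE` and `LOAD NAMESPACE` surface as `SHOW NAMESPACES` and `DESCRIBE NAMESPACE`:

```sql
USE paimon_catalog;

-- sketch_db is a hypothetical namespace used only for illustration
CREATE NAMESPACE IF NOT EXISTS sketch_db;  -- CREATE NAMESPACE
SHOW NAMESPACES;                           -- LIST NAMESPACE
DESCRIBE NAMESPACE sketch_db;              -- LOAD NAMESPACE
DROP NAMESPACE sketch_db;                  -- DROP NAMESPACE
```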

#### Namespace Not Supported
- `ALTER NAMESPACE`
  - Paimon does not support altering namespaces.

#### Table DDL and DML Support
- `CREATE TABLE`
  - Does not support distribution and sort orders.
- `DROP TABLE`
- `ALTER TABLE`
- `LIST TABLE`
- `DESCRIBE TABLE`
- `SELECT`
- `INSERT INTO & OVERWRITE`
- `Schema Evolution` (see the sketch after this list)
- `PARTITION MANAGEMENT`, such as `LIST PARTITIONS` and `ALTER TABLE ... DROP PARTITION ...`
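
Schema evolution maps to standard Spark SQL `ALTER TABLE` statements; a minimal sketch, assuming a hypothetical table `t` already exists in the current namespace:

```sql
-- t is a hypothetical existing table; each statement evolves its schema in place
ALTER TABLE t ADD COLUMNS (age INT);       -- add a column
ALTER TABLE t RENAME COLUMN age TO years;  -- rename a column
ALTER TABLE t DROP COLUMN years;           -- drop a column
```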

#### Table DML Not Supported
- Row-level operations, such as `MERGE INTO`, `DELETE`, `UPDATE`, and `TRUNCATE`
- Metadata tables, such as `{paimon_catalog}.{paimon_database}.{paimon_table}$snapshots`
- Other Paimon extension SQL, such as `Tag`
- Call statements
- Views
- Time Travel
- Hive and JDBC catalog backends, and object storage for FilesystemCatalog

## SQL example

```sql
-- Suppose paimon_catalog is the Paimon catalog name managed by Gravitino
USE paimon_catalog;

CREATE DATABASE IF NOT EXISTS mydatabase;
USE mydatabase;

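-- Create a partitioned table; distribution and sort orders are not supported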
CREATE TABLE IF NOT EXISTS employee (
id bigint,
name string,
department string,
hire_date timestamp
) PARTITIONED BY (name);

SHOW TABLES;
DESC TABLE EXTENDED employee;

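-- Append rows, then read them back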
INSERT INTO employee
VALUES
(1, 'Alice', 'Engineering', TIMESTAMP '2021-01-01 09:00:00'),
(2, 'Bob', 'Marketing', TIMESTAMP '2021-02-01 10:30:00'),
(3, 'Charlie', 'Sales', TIMESTAMP '2021-03-01 08:45:00');

SELECT * FROM employee WHERE name = 'Alice';

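-- Partition management: list partitions, then drop the partition for name = 'Alice'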
SHOW PARTITIONS employee;
ALTER TABLE employee DROP PARTITION (`name`='Alice');
```

## Catalog properties

The Gravitino Spark connector transforms the following property names, defined in the catalog properties, into Spark Paimon connector configuration.

| Gravitino catalog property name | Spark Paimon connector configuration | Description | Since Version |
|---------------------------------|--------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------|
| `catalog-backend` | `metastore` | Catalog backend type | 0.6.0 |
| `uri` | `uri` | Catalog backend uri | 0.6.0 |
| `warehouse` | `warehouse` | Catalog backend warehouse | 0.6.0 |

Gravitino catalog property names with the prefix `spark.bypass.` are passed to the Spark Paimon connector with the prefix stripped. For example, `spark.bypass.client-pool-size` passes `client-pool-size` to the Spark Paimon connector.