doc(lookup): doc for lookup table (#1428)

Signed-off-by: Jiyong Huang <huangjy@emqx.io>

Signed-off-by: Jiyong Huang <huangjy@emqx.io>
ngjaying 2 years ago
parent
commit
09f1e25c65
41 changed files with 602 additions and 47 deletions
  1. docs/directory.json (+16 -8)
  2. docs/en_US/concepts/sources/table.md (+17 -6)
  3. docs/en_US/extension/native/develop/sink.md (+6 -0)
  4. docs/en_US/extension/native/develop/source.md (+45 -0)
  5. docs/en_US/rules/sinks/builtin/memory.md (+24 -1)
  6. docs/en_US/rules/sinks/plugin/redis.md (+19 -3)
  7. docs/en_US/rules/sinks/overview.md (+32 -4)
  8. docs/en_US/rules/sinks/plugin/sql.md (+22 -0)
  9. docs/en_US/rules/sources/builtin/edgex.md (+3 -0)
  10. docs/en_US/rules/sources/builtin/file.md (+2 -0)
  11. docs/en_US/rules/sources/builtin/http_pull.md (+3 -0)
  12. docs/en_US/rules/sources/builtin/http_push.md (+3 -0)
  13. docs/en_US/rules/sources/builtin/memory.md (+18 -2)
  14. docs/en_US/rules/sources/builtin/mqtt.md (+3 -0)
  15. docs/en_US/rules/sources/builtin/neuron.md (+3 -0)
  16. docs/en_US/rules/sources/builtin/redis.md (+25 -0)
  17. docs/en_US/rules/sources/overview.md (+2 -0)
  18. docs/en_US/rules/sources/plugin/random.md (+4 -1)
  19. docs/en_US/rules/sources/plugin/sql.md (+31 -0)
  20. docs/en_US/rules/sources/plugin/zmq.md (+3 -0)
  21. docs/en_US/sqls/tables.md (+29 -3)
  22. docs/zh_CN/concepts/sources/table.md (+16 -5)
  23. docs/zh_CN/extension/native/develop/sink.md (+6 -0)
  24. docs/zh_CN/extension/native/develop/source.md (+46 -1)
  25. docs/zh_CN/rules/sinks/builtin/memory.md (+24 -1)
  26. docs/zh_CN/rules/sinks/plugin/redis.md (+22 -0)
  27. docs/zh_CN/rules/sinks/overview.md (+32 -4)
  28. docs/zh_CN/rules/sinks/plugin/sql.md (+19 -0)
  29. docs/zh_CN/rules/sources/builtin/edgex.md (+4 -3)
  30. docs/zh_CN/rules/sources/builtin/file.md (+3 -1)
  31. docs/zh_CN/rules/sources/builtin/http_pull.md (+3 -0)
  32. docs/zh_CN/rules/sources/builtin/http_push.md (+3 -0)
  33. docs/zh_CN/rules/sources/builtin/memory.md (+18 -2)
  34. docs/zh_CN/rules/sources/builtin/mqtt.md (+3 -0)
  35. docs/zh_CN/rules/sources/builtin/neuron.md (+3 -0)
  36. docs/zh_CN/rules/sources/builtin/redis.md (+25 -0)
  37. docs/zh_CN/rules/sources/overview.md (+2 -0)
  38. docs/zh_CN/rules/sources/plugin/random.md (+3 -0)
  39. docs/zh_CN/rules/sources/plugin/sql.md (+29 -0)
  40. docs/zh_CN/rules/sources/plugin/zmq.md (+3 -0)
  41. docs/zh_CN/sqls/tables.md (+28 -2)

+ 16 - 8
docs/directory.json

@@ -207,6 +207,10 @@
 								{
 									"title": "文件源",
 									"path": "rules/sources/builtin/file"
+								},
+								{
+									"title": "Redis 源",
+									"path": "rules/sources/builtin/redis"
 								}
 							]
 						},
@@ -270,6 +274,10 @@
 								{
 									"title": "Nop 动作",
 									"path": "rules/sinks/builtin/nop"
+								},
+								{
+									"title": "Redis 动作",
+									"path": "rules/sinks/builtin/redis"
 								}
 							]
 						},
@@ -301,10 +309,6 @@
 									"path": "rules/sinks/plugin/tdengine"
 								},
 								{
-									"title": "Redis 动作",
-									"path": "rules/sinks/plugin/redis"
-								},
-								{
 									"title": "图像动作",
 									"path": "rules/sinks/plugin/image"
 								}
@@ -769,6 +773,10 @@
 								{
 									"title": "File source",
 									"path": "rules/sources/builtin/file"
+								},
+								{
+									"title": "Redis source",
+									"path": "rules/sources/builtin/redis"
 								}
 							]
 						},
@@ -832,6 +840,10 @@
 								{
 									"title": "Nop action",
 									"path": "rules/sinks/builtin/nop"
+								},
+								{
+									"title": "Redis sink",
+									"path": "rules/sinks/builtin/redis"
 								}
 							]
 						},
@@ -863,10 +875,6 @@
 									"path": "rules/sinks/plugin/tdengine"
 								},
 								{
-									"title": "Redis sink",
-									"path": "rules/sinks/plugin/redis"
-								},
-								{
 									"title": "Image sink",
 									"path": "rules/sinks/plugin/image"
 								}

+ 17 - 6
docs/en_US/concepts/sources/table.md

@@ -1,16 +1,27 @@
 # Table
 
-In eKuiper, a table is a snapshot of the source data. In contrast to the common static tables that represent batch data, eKuiper tables can change over time.
+A table is a snapshot of the source data. We support two kinds of tables: scan tables and lookup tables.
 
-The source for table can be either bounded or unbounded. For bounded source table, the content of the table is static. For unbounded table, the content of the table is dynamic.
+- Scan table: consumes the stream data as a changelog and updates the table continuously. In contrast to the common static tables that represent batch data, scan tables can change over time. All stream sources, such as MQTT and Neuron, can also be scan table sources. Scan tables have been supported since v1.2.0.
+- Lookup table: an external table whose content is usually never read entirely but queried for individual values when necessary. eKuiper can bind a physical table as a lookup table and generate lookup commands (e.g. a SQL query on a database) on demand. Notice that not every source type can be a lookup table source; only sources with external storage, such as the SQL source, can be lookup sources. Lookup tables have been supported since v1.7.0.
 
-## Table Updates
+## Scan table
 
-Currently, the table update in eKuiper is append-only. Users can specify the properties to limit the table size to avoid too much memory consumption.
+The source for a scan table can be either bounded or unbounded. For a bounded source, the content of the table is static; for an unbounded source, the content is dynamic. The content of a scan table is stored in memory.
 
-## Table Usages
+Currently, the scan table update in eKuiper is append-only. Users can specify properties to limit the table size to avoid too much memory consumption.
 
-Table cannot be used standalone in a rule. It is usually used to join with streams. It can be used to enrich stream data or as a switch for calculation.
+A scan table cannot be used standalone in a rule. It is usually joined with streams to enrich stream data or to act as a switch for calculation.
+
+## Lookup Table
+
+A lookup table does not store the table content in memory but refers to an external table. Naturally, only a few sources are suitable as lookup tables, because the source itself must be queryable. The supported sources are:
+
+- Memory source: when a memory source is used as a lookup table, the data is accumulated as a table in memory. It can serve as an intermediate to convert any stream into a lookup table.
+- Redis source: supports querying by redis key.
+- SQL source: this is the most typical lookup source. SQL can be used directly to query it.
+
+Unlike scan tables, a lookup table runs separately from rules. Thus, all rules that refer to a lookup table actually query the same table content.
 
 ## More Readings
 

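To make the two kinds of table concrete, here is a minimal sketch using the table syntax documented later in this commit (`docs/en_US/sqls/tables.md`); the stream and table names are illustrative only and not part of the commit:

```sql
-- Scan table: content is accumulated in memory (KIND defaults to scan)
CREATE TABLE historyTable () WITH (DATASOURCE="lookup.json", FORMAT="json", TYPE="file")

-- Lookup table: content stays in the external store and is queried on demand
CREATE TABLE deviceTable () WITH (DATASOURCE="0", TYPE="redis", KIND="lookup")

-- Either kind is typically joined with a stream to enrich it, roughly like:
SELECT demoStream.temperature, deviceTable.*
FROM demoStream INNER JOIN deviceTable ON demoStream.deviceId = deviceTable.id
```

For a lookup table, the join condition is what drives the on-demand query against the external store, so only the individual values that are needed are fetched.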
+ 6 - 0
docs/en_US/extension/native/develop/sink.md

@@ -53,6 +53,12 @@ func MySink() api.Sink {
 
 The [Memory Sink](https://github.com/lf-edge/ekuiper/blob/master/extensions/sinks/memory/memory.go) is a good example.
 
+#### Updatable Sink
+
+If your sink is updatable, you'll need to deal with the `rowkindField` property. Some sinks may also need a `keyField` property to specify which field is the primary key to update.
+
+So in the _Configure_ method, parse the `rowkindField` to know which field in the data carries the update action. Then in the _Collect_ method, retrieve the rowkind by the `rowkindField` and perform the proper action. The rowkind value can be `insert`, `update`, `upsert` or `delete`. For example, in the SQL sink, each rowkind value generates a different SQL statement to execute.
+
 #### Parse dynamic properties
 
 For customized sink plugins, users may still want to support [dynamic properties](../../../rules/overview.md#dynamic-properties) like the built-in ones.

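To illustrate the last added paragraph above, a rough sketch of the statements an updatable SQL sink might issue per rowkind follows; the table and column names are hypothetical, and the exact statements the SQL sink generates are not spelled out in this commit:

```sql
-- rowkind = insert
INSERT INTO alertTable (id, level) VALUES (1, 'high');
-- rowkind = update (upsert behaves like insert-or-update on the key)
UPDATE alertTable SET level = 'low' WHERE id = 1;
-- rowkind = delete
DELETE FROM alertTable WHERE id = 1;
```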
File diff suppressed because it is too large
+ 45 - 0
docs/en_US/extension/native/develop/source.md


File diff suppressed because it is too large
+ 24 - 1
docs/en_US/rules/sinks/builtin/memory.md


+ 19 - 3
docs/en_US/rules/sinks/plugin/redis.md

@@ -50,9 +50,25 @@ Below is a sample for selecting temperature great than 50 degree, and some profi
   ]
 }
 ```
-### /tmp/redis.txt
+
+### Updatable sample
+
+By specifying the `rowkindField` property, the sink can update according to the action specified in that field.
+
 ```json
 {
-   "file":"http://localhost:8080/redis.zip"
- }
+  "id": "ruleUpdateAlert",
+  "sql":"SELECT * FROM alertStream",
+  "actions":[
+    {
+      "redis": {
+        "addr": "127.0.0.1:6379",
+        "dataType": "string",
+        "field": "id",
+        "rowkindField": "action",
+        "sendSingle": true
+      }
+    }
+  ]
+}
 ```

File diff suppressed because it is too large
+ 32 - 4
docs/en_US/rules/sinks/overview.md


+ 22 - 0
docs/en_US/rules/sinks/plugin/sql.md

@@ -94,3 +94,25 @@ The following configuration will write telemetry field's values into database
 }
 ```
 
+### Update Sample
+
+By specifying the `rowkindField` and `keyField`, the sink can generate insert, update or delete statements against the primary key.
+
+```json
+{
+  "id": "ruleUpdateAlert",
+  "sql":"SELECT * FROM alertStream",
+  "actions":[
+    {
+      "sql": {
+        "url": "sqlite://test.db",
+        "keyField": "id",
+        "rowkindField": "action",
+        "table": "alertTable",
+        "sendSingle": true
+      }
+    }
+  ]
+}
+```
+

+ 3 - 0
docs/en_US/rules/sources/builtin/edgex.md

@@ -1,5 +1,8 @@
 ## EdgeX Source
 
+<span style="background:green;color:white;">stream source</span>
+<span style="background:green;color:white">scan table source</span>
+
 eKuiper provides built-in support for EdgeX source stream, which can subscribe the message from [EdgeX message bus](https://github.com/edgexfoundry/go-mod-messaging) and feed into the eKuiper streaming process pipeline.
 
 ### Stream definition for EdgeX

+ 2 - 0
docs/en_US/rules/sources/builtin/file.md

@@ -1,5 +1,7 @@
 ## File source
 
+<span style="background:green;color:white">scan table source</span>
+
 eKuiper provides built-in support for reading file content into the eKuiper processing pipeline. The file source is usually used as a [table](../../../sqls/tables.md) and it is the default type for create table statement.
 
 ```sql

+ 3 - 0
docs/en_US/rules/sources/builtin/http_pull.md

@@ -1,5 +1,8 @@
 # HTTP pull source 
 
+<span style="background:green;color:white;">stream source</span>
+<span style="background:green;color:white">scan table source</span>
+
 eKuiper provides built-in support for pulling HTTP source stream, which can pull the message from HTTP server broker and feed into the eKuiper processing pipeline.  The configuration file of HTTP pull source is at `etc/sources/httppull.yaml`. Below is the file format.
 
 ```yaml

+ 3 - 0
docs/en_US/rules/sources/builtin/http_push.md

@@ -1,5 +1,8 @@
 # HTTP push source 
 
+<span style="background:green;color:white;">stream source</span>
+<span style="background:green;color:white">scan table source</span>
+
 eKuiper provides built-in HTTP source stream, which serves as an HTTP server and can receive the message from HTTP client. There will be a single global HTTP server for all HTTP push sources. Each source can have its own endpoint so that multiple endpoints are supported.
 
 ## Configurations

File diff suppressed because it is too large
+ 18 - 2
docs/en_US/rules/sources/builtin/memory.md


+ 3 - 0
docs/en_US/rules/sources/builtin/mqtt.md

@@ -1,5 +1,8 @@
 # MQTT source 
 
+<span style="background:green;color:white;">stream source</span>
+<span style="background:green;color:white">scan table source</span>
+
 eKuiper provides built-in support for MQTT source stream, which can subscribe the message from MQTT broker and feed into the eKuiper processing pipeline.  The configuration file of MQTT source is at `$ekuiper/etc/mqtt_source.yaml`. Below is the file format.
 
 ```yaml

File diff suppressed because it is too large
+ 3 - 0
docs/en_US/rules/sources/builtin/neuron.md


+ 25 - 0
docs/en_US/rules/sources/builtin/redis.md

@@ -0,0 +1,25 @@
+## Redis source
+
+<span style="background:green;color:white">lookup table source</span>
+
+eKuiper provides built-in support for looking up data in redis. Notice that the redis source can only be used as a lookup table now; stream and scan table are not supported.
+
+```text
+create table table1 () WITH (DATASOURCE="0", FORMAT="json", TYPE="redis", KIND="lookup");
+```
+
+You can use [cli](../../../operation/cli/tables.md) or [rest api](../../../operation/restapi/tables.md) to manage the tables.
+
+The configuration file for the redis source is */etc/sources/redis.yaml*, in which properties such as the redis connection information can be specified.
+
+```yaml
+default:
+  # the redis host address
+  addr: "127.0.0.1:6379"
+  # currently supports string and list only
+  datatype: "string"
+#  username: ""
+#  password: ""
+```
+
+With this yaml file, the table will refer to database 0 of the redis instance at 127.0.0.1:6379. The value type is `string`.

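As a usage sketch for the lookup table defined above: a rule would typically join a stream against it so that each event triggers a lookup by key. The stream name and join field below are assumptions, not part of the diff:

```sql
CREATE TABLE redisTable () WITH (DATASOURCE="0", FORMAT="json", TYPE="redis", KIND="lookup");

-- Enrich the incoming stream with the value stored in redis under the matching key
SELECT demoStream.deviceId, redisTable.*
FROM demoStream INNER JOIN redisTable ON demoStream.deviceId = redisTable.id
```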
+ 2 - 0
docs/en_US/rules/sources/overview.md

@@ -12,6 +12,7 @@ Users can directly use the built-in sources in the standard eKuiper instance. Th
 - [Http pull source](./builtin/http_pull.md): source to pull data from http servers.
 - [Memory source](./builtin/memory.md): source to read from eKuiper memory topic to form rule pipelines.
 - [File source](./builtin/file.md): source to read from file, usually used as tables.
+- [Redis source](./builtin/redis.md): source to look up data in redis, used as a lookup table.
 
 ## Predefined Source Plugins
 
@@ -23,6 +24,7 @@ The list of predefined source plugins:
 
 - [Zero MQ source](./plugin/zmq.md): read data from zero mq.
 - [Random source](./plugin/random.md): a source to generate random data for testing.
+- [SQL source](./plugin/sql.md): a source to periodically fetch data from a SQL DB.
 
 ## Use of sources
 

+ 4 - 1
docs/en_US/rules/sources/plugin/random.md

@@ -1,6 +1,9 @@
 # Random Source
 
-he source will generate random inputs with a specified pattern.
+<span style="background:green;color:white;">stream source</span>
+<span style="background:green;color:white">scan table source</span>
+
+The source will generate random inputs with a specified pattern.
 
 ## Compile & deploy plugin
 

+ 31 - 0
docs/en_US/rules/sources/plugin/sql.md

@@ -1,5 +1,9 @@
 # Sql Source
 
+<span style="background:green;color:white;">stream source</span>
+<span style="background:green;color:white">scan table source</span>
+<span style="background:green;color:white">lookup table source</span>
+
 The source will query the database periodically to get data stream.
 
 ## Compile & deploy plugin
@@ -127,3 +131,30 @@ demo (
 ```
 
 The configuration keys "template_config" will be used.
+
+## Lookup Table
+
+The SQL source can serve as a lookup table. We can use a create table statement to create a SQL lookup table. It will bind to the physical SQL DB and query it on demand.
+
+```text
+CREATE TABLE alertTable() WITH (DATASOURCE="tableName", CONF_KEY="sqlite_config", TYPE="sql", KIND="lookup")
+```
+
+### Lookup cache
+
+Querying an external DB is slower than in-memory calculation. If the throughput is high, the lookup cache can be used to improve the performance.
+
+If the lookup cache is not enabled, all requests are sent to the external database. When the lookup cache is enabled, each lookup table instance will hold a cache. When querying, the cache is checked first before sending the request to the external database.
+
+The cache configuration lies in the `sql.yaml`.
+
+```yaml
+  lookup:
+    cache: true
+    cacheTtl: 600
+    cacheMissingKey: true
+```
+
+- cache: bool value to indicate whether to enable cache.
+- cacheTtl: the time to live of the cache in seconds.
+- cacheMissingKey: whether to cache nil value for a key.

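A hedged usage sketch for the SQL lookup table and cache described above; the table, stream and field names are illustrative, and `sqlite_config` is assumed to be a configured connection key:

```sql
CREATE TABLE deviceTable() WITH (DATASOURCE="devices", CONF_KEY="sqlite_config", TYPE="sql", KIND="lookup")

-- Each matching event triggers a lookup; with the cache enabled, repeated keys
-- within cacheTtl seconds are served from memory instead of hitting the database.
SELECT demoStream.deviceId, deviceTable.deviceName
FROM demoStream INNER JOIN deviceTable ON demoStream.deviceId = deviceTable.id
```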
+ 3 - 0
docs/en_US/rules/sources/plugin/zmq.md

@@ -1,5 +1,8 @@
 # Zmq Source
 
+<span style="background:green;color:white;">stream source</span>
+<span style="background:green;color:white">scan table source</span>
+
 The source will subscribe to a Zero Mq topic to import the messages into eKuiper
 
 ## Compile & deploy plugin

+ 29 - 3
docs/en_US/sqls/tables.md

@@ -2,7 +2,10 @@
 
 eKuiper streams is unbounded and immutable, any new data are appended in the current stream for processing.  **Table** is provided to represent the current state of the stream. It can be considered as a snapshot of the stream. Users can use table to retain a batch of data for processing.
 
-Table is not allowed to use alone in eKuiper. It is only recommended to join with streams. When joining with stream, table will be updated continuously when new event coming. However, only events arriving on the stream side trigger downstream updates and produce join output.
+There are two kinds of tables:
+
+- Scan table: accumulates the data in memory. It is suitable for a smaller dataset whose content does NOT need to be shared between rules.
+- Lookup table: refers to external table content. It is suitable for a bigger dataset whose content is shared across rules.
 
 ## Syntax
 
@@ -19,11 +22,34 @@ Table supports the same [data types](./streams.md#data-types) as stream.
 
 Table also supports all [the properties of the stream](./streams.md#language-definitions). Thus, all the source type are also supported in table. Many sources are not batched which have one event at any given time point, which means the table will always have only one event. An additional property `RETAIN_SIZE` to specify the size of the table snapshot so that the table can hold an arbitrary amount of history data.
 
+### Lookup Table Syntax
+
+The syntax is the same as creating a normal scan table; just specify the kind property to be `lookup`. Below is an example of creating a lookup table, which binds to redis database 0.
+
+```sql
+CREATE TABLE alertTable() WITH (DATASOURCE="0", TYPE="redis", KIND="lookup")
+```
+
+Currently, only the `memory`, `redis` and `sql` sources can be lookup tables.
+
+### Table properties
+
+| Property name | Optional | Description                                                                                                                                                 |
+|---------------|----------|-------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| DATASOURCE    | false    | The value is determined by the source type. It is the topic names list for an MQTT data source. Please refer to the related documents for other sources.   |
+| FORMAT        | true     | The data format, currently the value can be "JSON", "PROTOBUF" and "BINARY". The default is "JSON". Check [Binary Stream](#binary-stream) for more detail. |
+| SCHEMAID      | true     | The schema to be used when decoding the events. Currently, it is only used when the format is PROTOBUF.                                                    |
+| KEY           | true     | The primary key of the table. For example, for the SQL source it specifies the primary key in the SQL table. It is not supported by all source types.      |
+| TYPE          | true     | The source type. Each source type may support one or both kinds of tables. Please refer to the related documents.                                          |
+| CONF_KEY      | true     | If additional configuration items are required, specify the config key here. See [MQTT stream](../rules/sources/builtin/mqtt.md) for more info.            |
+| KIND          | true     | The table kind, could be `scan` or `lookup`. If not specified, the default value is `scan`.                                                                |
+
+
 ## Usage scenarios
 
-Typically, table will be joined with stream with or without a window. When joining with stream, table data won't affect the downstream updata, it is treated like a static referenced data although it may be updated internally.
+Typically, a table will be joined with a stream, with or without a window. When joining with a stream, table data won't affect the downstream data; it is treated like static reference data, although it may be updated internally.
 
-### Lookup table
+### Enrich data
 
 A typical usage for table is as a lookup table. Sample SQL will be like:
 

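To show the properties from the table above in context, a sketch combining several of them follows; the table names, datasources and key field are illustrative only:

```sql
-- Scan table with an explicit schema and format (KIND defaults to scan)
CREATE TABLE historyTable (id BIGINT, name STRING) WITH (DATASOURCE="lookup.json", FORMAT="json", TYPE="file")

-- Lookup table bound to an external store, with KEY naming the primary key
CREATE TABLE deviceTable () WITH (DATASOURCE="devices", TYPE="sql", CONF_KEY="sqlite_config", KEY="id", KIND="lookup")
```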
+ 16 - 5
docs/zh_CN/concepts/sources/table.md

@@ -1,16 +1,27 @@
 # 表
 
-在 eKuiper 中,表是源数据在当前时间的快照。相比于普通的数据库表,eKuiper 中的表会随着时间而变化。
+表是源数据在当前时间的快照。我们支持两种类型的表:扫描表(Scan Table)和查询表(Lookup Table)。
+
+- 扫描表:消费流数据作为变化日志,并持续更新表。与常见的代表批处理数据的静态表相比,扫描表可以随时间变化。所有的流数据源如 MQTT、Neuron 源等都可以是扫描表源。扫描表从 v1.2.0 版本开始支持。
+- 查询表:一个外部表,其内容通常不会被完全读取,仅在必要时进行查询。我们支持将实体表绑定为查询表,并根据需要生成查询命令(例如,数据库上的 SQL)。请注意,不是所有的源类型都可以成为查询表源,只有像 SQL 源这样的有外部存储的源可以成为查询源。我们从 v1.7.0 版本开始支持查询表。
+
+## 扫描表
 
 表的数据源既可以是无界的也可以是有界的。对于有界数据源来说,其表的内容为静态的。若表的数据源为无界数据流,则其内容会动态变化。
 
-## 表内容更新
+当前,扫描表的内容更新仅支持追加。用户创建表时,可以指定参数限制表的大小,防止占用过多内存。
+
+扫描表不能在规则中单独使用,必须与流搭配,通常用于与流进行连接。扫描表可用于补全流数据或作为计算的开关。
+
+## 查询表
 
-当前,表的内容更新仅支持追加。用户创建表时,可以指定参数限制表的大小,防止占用过多内存。
+查询表不在内存中存储表的内容,而是引用外部表。显然,只有少数源适合作为查询表,这需要源本身是可查询的。支持的源包括:
 
-## 表的使用
+- 内存源:如果内存源被用作表类型,我们需要在内存中把数据积累成表。它可以作为一个中间环节,将任何流转换为查询表。
+- Redis 源:支持按 Redis 键查询。
+- SQL 源:这是最典型的查询源。我们可以直接使用 SQL 来查询。
 
-表不能在规则中单独使用,必须与流搭配,通常用于与流进行连接。表可用于补全流数据或作为计算的开关。
+与扫描表不同,查询表将与规则分开运行。因此,所有使用查询表的规则实际上可以查询同一个表的内容。
 
 ## 更多信息
 

+ 6 - 0
docs/zh_CN/extension/native/develop/sink.md

@@ -56,6 +56,12 @@ func MySink() api.Sink {
 
 [Memory Sink](https://github.com/lf-edge/ekuiper/blob/master/extensions/sinks/memory/memory.go) 是一个很好的示例。
 
+#### 可更新的 Sink
+
+如果你的 Sink 是可更新的,你将需要处理 `rowkindField` 属性。有些 sink 可能还需要一个 `keyField` 属性来指定哪个字段是要更新的主键。
+
+因此,在 _Configure_ 方法中,需要解析 `rowkindField` 以知道数据中的哪个字段表示更新的动作。然后在 _Collect_ 方法中,通过该字段获取动作类型,并执行适当的操作。rowkind 的值可以是 `insert`、`update`、`upsert` 和 `delete`。例如,在 SQL sink 中,每种 rowkind 值将产生不同的 SQL 语句来执行。
+
 #### 解析动态属性
 
 在自定义的 sink 插件中,用户可能仍然想要像内置的 sink 一样支持[动态属性](../../../rules/overview.md#动态属性)。 我们在 context 对象中提供了 `ParseTemplate` 方法使得开发者可以方便地解析动态属性并应用于插件中。开发组应当根据业务逻辑,设计那些属性支持动态值。然后在代码编写时,使用此方法解析用户传入的属性值。

File diff suppressed because it is too large
+ 46 - 1
docs/zh_CN/extension/native/develop/source.md


+ 24 - 1
docs/zh_CN/rules/sinks/builtin/memory.md

@@ -1,5 +1,7 @@
 # 内存动作
 
+<span style="background:green;color:white">updatable</span>
+
 该动作用于将结果刷新到内存中的主题中,以便 [内存源](../../sources/builtin/memory.md) 可以使用它。 该主题类似于 pubsub 主题,例如 mqtt,因此可能有多个内存目标发布到同一主题,也可能有多个内存源订阅同一主题。 内存动作的典型用途是形成[规则管道](../../rule_pipeline.md)。
 
 | 属性名称  | 是否可选 | 描述                                  |
@@ -30,4 +32,25 @@
 
 ::: v-pre
 内存动作和内存源之间的数据传输采用内部格式,不经过编解码以提高效率。因此,内存动作的格式相关配置项,除了数据模板之外都会被忽略。内存动作可支持数据模板对结果格式进行变化,但是数据模板的结果必须为 JSON 字符串的 object 形式,例如 `"{\"key\":\"{{.key}}\"}"`。数组形式的 JSON 字符串或者非 JSON 字符串都不支持。
-:::
+:::
+
+## 更新
+
+内存动作支持[更新](../overview.md#更新)。可用于更新订阅了与 sink 相同的主题的查询表。一个典型的用法是创建一个规则,使用可更新的 sink 来累积更新内存表。在下面的例子中,来自流alertStream的数据将更新内存主题`alertVal`。更新动作是由流入的数据中的 `action` 字段指定的。
+
+```json
+{
+  "id": "ruleUpdateAlert",
+  "sql":"SELECT * FROM alertStream",
+  "actions":[
+    {
+      "memory": {
+        "keyField": "id",
+        "rowkindField": "action",
+        "topic": "alertVal",
+        "sendSingle": true
+      }
+    }
+  ]
+}
+```

+ 22 - 0
docs/zh_CN/rules/sinks/plugin/redis.md

@@ -56,4 +56,26 @@ redis 源代码在 extensions 目录中,但是需要在 eKuiper 根目录编
 {
   "file":"http://localhost:8080/redis.zip"
 }
+```
+
+### 更新示例
+
+通过指定 `rowkindField` 属性,sink 可以根据该字段中指定的动作进行更新。
+
+```json
+{
+  "id": "ruleUpdateAlert",
+  "sql":"SELECT * FROM alertStream",
+  "actions":[
+    {
+      "redis": {
+        "addr": "127.0.0.1:6379",
+        "dataType": "string",
+        "field": "id",
+        "rowkindField": "action",
+        "sendSingle": true
+      }
+    }
+  ]
+}
 ```

File diff suppressed because it is too large
+ 32 - 4
docs/zh_CN/rules/sinks/overview.md


+ 19 - 0
docs/zh_CN/rules/sinks/plugin/sql.md

@@ -90,5 +90,24 @@
 }
 ```
 
+### 更新示例
 
+通过指定 `rowkindField` 和 `keyField` 属性,sink 可以生成针对主键的插入、更新或删除语句。
 
+```json
+{
+  "id": "ruleUpdateAlert",
+  "sql":"SELECT * FROM alertStream",
+  "actions":[
+    {
+      "sql": {
+        "url": "sqlite://test.db",
+        "keyField": "id",
+        "rowkindField": "action",
+        "table": "alertTable",
+        "sendSingle": true
+      }
+    }
+  ]
+}
+```

+ 4 - 3
docs/zh_CN/rules/sources/builtin/edgex.md

@@ -1,7 +1,8 @@
-
-
 # EdgeX 源
 
+<span style="background:green;color:white;">stream source</span>
+<span style="background:green;color:white">scan table source</span>
+
 eKuiper 提供了内置的 EdgeX 源支持,它可以被用来订阅来自于[EdgeX 消息总线](https://github.com/edgexfoundry/go-mod-messaging)的数据,并且将数据放入 eKuiper 数据处理流水线中。
 
 ## EdgeX 流定义
@@ -55,7 +56,7 @@ EdgeX 中所有的 `INT8` , `INT16`, `INT32`,  `INT64` , `UINT8` , `UINT16` ,  `
 
 EdgeX 中所有的 `FLOAT32`, `FLOAT64`  数组类型会被转换为 `Float` 数组。 
 
-# 全局配置
+## 全局配置
 
 EdgeX 源配置文件为 `$ekuiper/etc/sources/edgex.yaml`,以下配置文件内容。
 

+ 3 - 1
docs/zh_CN/rules/sources/builtin/file.md

@@ -1,5 +1,7 @@
 ## 文件源
 
+<span style="background:green;color:white">scan table source</span>
+
 eKuiper 提供了内置支持,可将文件内容读入 eKuiper 处理管道。 文件源通常用作 [表格](../../../sqls/tables.md), 并且采用 create table 语句的默认类型。
 
 ```sql
@@ -10,7 +12,7 @@ CREATE TABLE table1 (
 ) WITH (DATASOURCE="lookup.json", FORMAT="json", TYPE="file");
 ```
 
-您可以使用 [cli](../../../operation/cli/tables.md) 或 [rest api](../../../operation/restapi/tables.md) 来管理表
+您可以使用 [cli](../../../operation/cli/tables.md) 或 [rest api](../../../operation/restapi/tables.md) 来管理表
 
 文件源的配置文件是 */etc/sources/file.yaml* ,可以在其中指定文件的路径。
 

+ 3 - 0
docs/zh_CN/rules/sources/builtin/http_pull.md

@@ -1,5 +1,8 @@
 # HTTP 提取源
 
+<span style="background:green;color:white;">stream source</span>
+<span style="background:green;color:white">scan table source</span>
+
 eKuiper 为提取 HTTP 源流提供了内置支持,该支持可从 HTTP 服务器代理提取消息并输入 eKuiper 处理管道。 HTTP提取源的配置文件位于 `etc/sources/httppull.yaml`中。 以下是文件格式。
 
 ```yaml

+ 3 - 0
docs/zh_CN/rules/sources/builtin/http_push.md

@@ -1,5 +1,8 @@
 # HTTP push 源
 
+<span style="background:green;color:white;">stream source</span>
+<span style="background:green;color:white">scan table source</span>
+
 eKuiper 提供了内置的 HTTP 源,它作为一个 HTTP 服务器,可以接收来自 HTTP 客户端的消息。所有的 HTTP 推送源共用单一的全局 HTTP 数据服务器。每个源可以有自己的 URL,这样就可以支持多个端点。
 
 ## 配置

+ 18 - 2
docs/zh_CN/rules/sources/builtin/memory.md

@@ -1,11 +1,15 @@
 # 内存源
 
+<span style="background:green;color:white;">stream source</span>
+<span style="background:green;color:white">scan table source</span>
+<span style="background:green;color:white">lookup table source</span>
+
 内存源通过主题消费由 [内存目标](../../sinks/builtin/memory.md) 生成的事件。该主题类似于 pubsub 主题,例如 mqtt,因此可能有多个内存目标发布到同一主题,也可能有多个内存源订阅同一主题。 内存动作的典型用途是形成[规则管道](../../rule_pipeline.md)。内存动作和内存源之间的数据传输采用内部格式,不经过编解码以提高效率。因此,内存源的`format`属性会被忽略。
 
 主题没有配置属性,由流数据源属性指定,如以下示例所示:
 
 ```text
-CREATE TABLE table1 (
+CREATE STREAM table1 (
     name STRING,
     size BIGINT,
     id BIGINT
@@ -21,4 +25,16 @@ CREATE TABLE table1 (
 
 示例:
 1. `home/device1/+/sensor1`
-2. `home/device1/#`
+2. `home/device1/#`
+
+## 查询表
+
+内存源支持查询表。下面是一个针对内存主题 "topicName" 创建查询表的例子。注意,作为查询表使用时,`KEY` 属性是必须的,它将作为虚拟表的主键来加速查询。
+
+```text
+CREATE TABLE alertTable() WITH (DATASOURCE="topicName", TYPE="memory", KIND="lookup", KEY="id")
+```
+
+在创建一个内存查询表后,它将开始积累由键字段索引的内存主题的数据。它将一直独立于规则运行。每个主题和键对将有一个虚拟表的内存拷贝。所有引用同一表或具有相同主题/键对的内存表的规则将共享同一数据副本。
+
+内存查询表可以像多个规则之间的管道一样使用,这与[规则管道](../../rule_pipeline.md)的概念类似。它可以在内存中存储任何流类型的历史,以便其他流可以与之合作。通过与[可更新的内存动作](../../sinks/builtin/memory.md#更新)一起工作,表格内容可以被更新。

+ 3 - 0
docs/zh_CN/rules/sources/builtin/mqtt.md

@@ -1,5 +1,8 @@
 # MQTT源
 
+<span style="background:green;color:white;">stream source</span>
+<span style="background:green;color:white">scan table source</span>
+
 eKuiper 为 MQTT 源流提供了内置支持,流可以订阅来自 MQTT 代理的消息并输入eKuiper 处理管道。 MQTT 源的配置文件位于 `$ekuiper/etc/mqtt_source.yaml`。 以下是文件格式。
 
 ```yaml

File diff suppressed because it is too large
+ 3 - 0
docs/zh_CN/rules/sources/builtin/neuron.md


+ 25 - 0
docs/zh_CN/rules/sources/builtin/redis.md

@@ -0,0 +1,25 @@
+## Redis 源
+
+<span style="background:green;color:white">lookup table source</span>
+
+eKuiper 提供了对 redis 中数据查询的内置支持。请注意,现在 redis 源只能作为一个查询表使用。不支持流和扫描表。
+
+```text
+create table table1 () WITH (DATASOURCE="0", FORMAT="json", TYPE="redis", KIND="lookup");
+```
+
+您可以使用 [cli](../../../operation/cli/tables.md) 或 [rest api](../../../operation/restapi/tables.md) 来管理表。
+
+Redis 源的配置文件是 */etc/sources/redis.yaml* ,可以在其中指定 redis 的连接信息等属性。
+
+```yaml
+default:
+  # the redis host address
+  addr: "127.0.0.1:6379"
+  # currently supports string and list only
+  datatype: "string"
+#  username: ""
+#  password: ""
+```
+
+在这个 yaml 文件的配置中,表将引用的 redis 实例地址是127.0.0.1:6379。值的类型是 "string"。

+ 2 - 0
docs/zh_CN/rules/sources/overview.md

@@ -12,6 +12,7 @@
 - [Http pull source](./builtin/http_pull.md):从http服务器中拉取数据。
 - [Memory source](./builtin/memory.md):从 eKuiper 内存主题读取数据以形成规则管道。
 - [File source](./builtin/file.md):从文件中读取数据,通常用作表格。
+- [Redis source](./builtin/redis.md): 从 Redis 中查询数据,用作查询表。
 
 ## 预定义的源插件
 
@@ -23,6 +24,7 @@
 
 - [Zero MQ source](./plugin/zmq.md):从Zero MQ读取数据。
 - [Random source](./plugin/random.md): 一个生成随机数据的源,用于测试。
+- [SQL source](./plugin/sql.md): 定期从关系数据库中拉取数据。
 
 ## 源的使用
 

+ 3 - 0
docs/zh_CN/rules/sources/plugin/random.md

@@ -1,5 +1,8 @@
 # 随机源
 
+<span style="background:green;color:white;">stream source</span>
+<span style="background:green;color:white">scan table source</span>
+
 随机源将生成具有指定样式的随机输入。
 
 ## 编译和部署插件

+ 29 - 0
docs/zh_CN/rules/sources/plugin/sql.md

@@ -1,5 +1,9 @@
 # Sql 源
 
+<span style="background:green;color:white;">stream source</span>
+<span style="background:green;color:white">scan table source</span>
+<span style="background:green;color:white">lookup table source</span>
+
 源将定期查询数据库以获取数据流。
 
 ## 编译和部署插件
@@ -125,3 +129,28 @@ demo (
 ```
 
 将使用配置键 `template_config`
+
+## 查询表
+
+SQL 源支持成为一个查询表。我们可以使用创建表语句来创建一个 SQL 查询表。它将与实体关系数据库绑定并按需查询。
+
+```text
+CREATE TABLE alertTable() WITH (DATASOURCE="tableName", CONF_KEY="sqlite_config", TYPE="sql", KIND="lookup")
+```
+
+### 查询缓存
+
+查询外部数据库比在内存中计算要慢。如果吞吐量很高,可以使用查找缓存来提高性能。如果不启用查找缓存,那么所有的请求都被发送到外部数据库。当启用查找缓存时,每个查找表实例将持有一个缓存。当查询时,我们将首先查询缓存,然后再发送到外部数据库。
+
+缓存的配置在`sql.yaml`中。
+
+```yaml
+  lookup:
+    cache: true
+    cacheTtl: 600
+    cacheMissingKey: true
+```
+
+- cache: bool值,表示是否启用缓存。
+- cacheTtl: 缓存的生存时间,单位是秒。
+- cacheMissingKey:是否对空值进行缓存。

+ 3 - 0
docs/zh_CN/rules/sources/plugin/zmq.md

@@ -1,5 +1,8 @@
 # Zmq 源
 
+<span style="background:green;color:white;">stream source</span>
+<span style="background:green;color:white">scan table source</span>
+
 源将订阅 Zero Mq 主题以将消息导入 eKuiper。
 
 ## 编译和部署插件

+ 28 - 2
docs/zh_CN/sqls/tables.md

@@ -2,7 +2,10 @@
 
 eKuiper 流是无界且不可变的,任何新数据都会附加到当前流中进行处理。 **Table** 用于表示流的当前状态。它可以被认为是流的快照。用户可以使用 table 来保留一批数据进行处理。
 
-在 eKuiper 中不允许单独使用表格。仅建议与流进行 join 操作。join 流时,表格将在新事件到来时不断更新。但是,只有到达流端的事件才会触发下游更新并产生连接输出。
+有两种类型的表:
+
+- 扫描表(Scan Table):在内存中积累数据。它适用于较小的数据集,表的内容不需要在规则之间共享。
+- 查询表(Lookup Table):绑定外部表并按需查询。它适用于更大的数据集,并且在规则之间共享表的内容。
 
 ## 语法定义
 
@@ -17,11 +20,34 @@ CREATE TABLE
 表支持与流相同的 [数据类型](./streams.md#数据类型)。
 表还支持所有[流的属性](./streams.md#语言定义)。因此,表中也支持所有源类型。许多源不是批处理的,它们在任何给定时间点都有一个事件,这意味着表将始终只有一个事件。一个附加属性 `RETAIN_SIZE` 来指定表快照的大小,以便表可以保存任意数量的历史数据。
 
+### 查询表的语法
+
+语法与创建普通扫描表相同,只需指定 `KIND` 属性为 `lookup`。下面是一个创建查询表的例子,它会与 redis 的数据库 0 绑定。
+
+```sql
+CREATE TABLE alertTable() WITH (DATASOURCE="0", TYPE="redis", KIND="lookup")
+```
+
+目前,只有 `memory`、`redis` 和 `sql` 源可以作为查询表。
+
+### 表的属性
+
+| 属性名称       | 可选 | 描述                                                                                                                                                                      |
+|------------|----|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| DATASOURCE | 否  | 取决于不同的源类型;如果是 MQTT 源,则为 MQTT 数据源主题名;其它源请参考相关的文档。                                                                                                                        |
+| FORMAT     | 是  | 传入的数据类型,支持 "JSON", "PROTOBUF" 和 "BINARY",默认为 "JSON" 。关于 "BINARY" 类型的更多信息,请参阅 [Binary Stream](#二进制流)。该属性是否生效取决于源的类型,某些源自身解析的是固定私有格式的数据,则该配置不起作用。可支持该属性的源包括 MQTT 和 ZMQ 等。 |
+| SCHEMAID   | 是  | 解码时使用的模式,目前仅在格式为 PROTOBUF 的情况下使用。                                                                                                                                       |
+| KEY        | 是  | 表的主键。例如,对于 SQL 源,用于指定 SQL 表中的主键。并非所有的源类型都支持该属性。                                                                                                                         |
+| TYPE       | 是  | 源类型。每个源类型可以支持一种或两种表。请参考相关文档。                                                                                                                                            |
+| CONF_KEY   | 是  | 如果需要配置其他配置项,请在此处指定 config 键。 有关更多信息,请参见 [MQTT stream](../rules/sources/builtin/mqtt.md) 。                                                                               |
+| KIND       | 是  | 表的种类,可以是 `scan` 或 `lookup`。如果没有指定,默认值是 `scan`。                                                                                                                           |
+
+
 ## 使用场景
 
 通常,表格将与带有或不带有窗口的流连接。与流连接时,表数据不会影响下游更新数据,它被视为静态引用数据,尽管它可能会在内部更新。
 
-### 查询表
+### 数据补全
 
 表的典型用法是作为查找表。示例 SQL 将类似于:
 ```sql