
docs: Modify the style of Markdown files (#2001)

* docs: Modify the style of Markdown files

Signed-off-by: t_max <1172915550@qq.com>

* ci: Modify the markdownlint check.

Signed-off-by: t_max <1172915550@qq.com>

---------

Signed-off-by: t_max <1172915550@qq.com>
Xuefeng Tan, 1 year ago
commit 82cf8dabf2
100 changed files with 1161 additions and 1000 deletions
  1. +10 -11 .github/workflows/markdown_config.json
  2. +8 -8 docs/en_US/CONTRIBUTING.md
  3. +1 -1 docs/en_US/README.md
  4. +4 -4 docs/en_US/api/cli/data.md
  5. +1 -2 docs/en_US/api/cli/overview.md
  6. +10 -5 docs/en_US/api/cli/plugins.md
  7. +6 -5 docs/en_US/api/cli/rules.md
  8. +1 -2 docs/en_US/api/cli/ruleset.md
  9. +1 -2 docs/en_US/api/cli/schemas.md
  10. +2 -5 docs/en_US/api/cli/services.md
  11. +10 -8 docs/en_US/api/cli/streams.md
  12. +7 -6 docs/en_US/api/cli/tables.md
  13. +6 -3 docs/en_US/api/restapi/authentication.md
  14. +21 -21 docs/en_US/api/restapi/configKey.md
  15. +1 -5 docs/en_US/api/restapi/data.md
  16. +1 -2 docs/en_US/api/restapi/overview.md
  17. +14 -6 docs/en_US/api/restapi/plugins.md
  18. +6 -6 docs/en_US/api/restapi/rules.md
  19. +2 -2 docs/en_US/api/restapi/ruleset.md
  20. +2 -3 docs/en_US/api/restapi/schemas.md
  21. +5 -2 docs/en_US/api/restapi/services.md
  22. +4 -4 docs/en_US/api/restapi/streams.md
  23. +1 -2 docs/en_US/api/restapi/tables.md
  24. +1 -1 docs/en_US/api/restapi/uploads.md
  25. +3 -3 docs/en_US/concepts/ekuiper.md
  26. +1 -1 docs/en_US/concepts/extensions.md
  27. +1 -1 docs/en_US/concepts/rules.md
  28. +1 -1 docs/en_US/concepts/sinks.md
  29. +1 -4 docs/en_US/concepts/sources/overview.md
  30. +0 -1 docs/en_US/concepts/sources/stream.md
  31. +2 -2 docs/en_US/concepts/sources/table.md
  32. +1 -1 docs/en_US/concepts/sql.md
  33. +1 -1 docs/en_US/concepts/streaming/join.md
  34. +1 -2 docs/en_US/concepts/streaming/overview.md
  35. +1 -1 docs/en_US/concepts/streaming/windowing.md
  36. +3 -2 docs/en_US/configuration/configuration.md
  37. +21 -7 docs/en_US/configuration/global_configurations.md
  38. +8 -8 docs/en_US/edgex/edgex_meta.md
  39. +46 -39 docs/en_US/edgex/edgex_rule_engine_command.md
  40. +21 -16 docs/en_US/edgex/edgex_rule_engine_tutorial.md
  41. +1 -1 docs/en_US/edgex/edgex_source_tutorial.md
  42. +6 -6 docs/en_US/example/data_merge/merge_single_stream.md
  43. +1 -1 docs/en_US/example/data_merge/overview.md
  44. +3 -2 docs/en_US/example/howto.md
  45. +12 -11 docs/en_US/extension/external/external_func.md
  46. +3 -0 docs/en_US/extension/native/develop/function.md
  47. +170 -174 docs/en_US/extension/native/develop/overview.md
  48. +137 -96 docs/en_US/extension/native/develop/plugins_tutorial.md
  49. +24 -16 docs/en_US/extension/native/develop/sink.md
  50. +20 -17 docs/en_US/extension/native/develop/source.md
  51. +13 -11 docs/en_US/extension/native/overview.md
  52. +4 -4 docs/en_US/extension/overview.md
  53. +41 -41 docs/en_US/extension/portable/go_sdk.md
  54. +8 -7 docs/en_US/extension/portable/overview.md
  55. +7 -1 docs/en_US/extension/portable/python_sdk.md
  56. +22 -11 docs/en_US/extension/wasm/overview.md
  57. +3 -3 docs/en_US/getting_started/debug_rules.md
  58. +14 -12 docs/en_US/getting_started/getting_started.md
  59. +5 -5 docs/en_US/getting_started/quick_start_docker.md
  60. +38 -33 docs/en_US/guide/ai/python_tensorflow_lite_tutorial.md
  61. +39 -46 docs/en_US/guide/ai/tensorflow_lite.md
  62. +9 -10 docs/en_US/guide/ai/tensorflow_lite_external_function_tutorial.md
  63. +52 -51 docs/en_US/guide/ai/tensorflow_lite_tutorial.md
  64. +6 -4 docs/en_US/guide/rules/graph_rule.md
  65. +2 -3 docs/en_US/guide/rules/overview.md
  66. +1 -3 docs/en_US/guide/rules/rule_pipeline.md
  67. +6 -5 docs/en_US/guide/rules/state_and_fault_tolerance.md
  68. +3 -2 docs/en_US/guide/serialization/protobuf_tutorial.md
  69. +19 -4 docs/en_US/guide/serialization/serialization.md
  70. +105 -94 docs/en_US/guide/sinks/builtin/edgex.md
  71. +1 -1 docs/en_US/guide/sinks/builtin/file.md
  72. +0 -1 docs/en_US/guide/sinks/builtin/log.md
  73. +1 -1 docs/en_US/guide/sinks/builtin/memory.md
  74. +4 -3 docs/en_US/guide/sinks/builtin/mqtt.md
  75. +1 -1 docs/en_US/guide/sinks/builtin/neuron.md
  76. +0 -2 docs/en_US/guide/sinks/builtin/nop.md
  77. +4 -2 docs/en_US/guide/sinks/builtin/redis.md
  78. +1 -0 docs/en_US/guide/sinks/builtin/rest.md
  79. +20 -17 docs/en_US/guide/sinks/data_template.md
  80. +3 -4 docs/en_US/guide/sinks/overview.md
  81. +1 -1 docs/en_US/guide/sinks/plugin/image.md
  82. +6 -1 docs/en_US/guide/sinks/plugin/influx.md
  83. +16 -5 docs/en_US/guide/sinks/plugin/influx2.md
  84. +14 -6 docs/en_US/guide/sinks/plugin/kafka.md
  85. +3 -4 docs/en_US/guide/sinks/plugin/sql.md
  86. +4 -4 docs/en_US/guide/sinks/plugin/tdengine.md
  87. +0 -1 docs/en_US/guide/sinks/plugin/zmq.md
  88. +12 -11 docs/en_US/guide/sources/builtin/edgex.md
  89. +1 -1 docs/en_US/guide/sources/builtin/file.md
  90. +7 -7 docs/en_US/guide/sources/builtin/http_pull.md
  91. +1 -1 docs/en_US/guide/sources/builtin/http_push.md
  92. +3 -2 docs/en_US/guide/sources/builtin/memory.md
  93. +31 -26 docs/en_US/guide/sources/builtin/mqtt.md
  94. +2 -2 docs/en_US/guide/sources/builtin/neuron.md
  95. +1 -1 docs/en_US/guide/sources/builtin/redis.md
  96. +1 -2 docs/en_US/guide/sources/overview.md
  97. +4 -4 docs/en_US/guide/sources/plugin/random.md
  98. +4 -4 docs/en_US/guide/sources/plugin/sql.md
  99. +4 -5 docs/en_US/guide/sources/plugin/video.md
  100. +0 -0 docs/en_US/guide/sources/plugin/zmq.md

+ 10 - 11
.github/workflows/markdown_config.json

@@ -1,13 +1,12 @@
 {
-    "default": false,
-    "MD001": true,
-    "MD003": {"style": "atx"},
-    "MD011": true,
-    "MD018": true,
-    "MD019": true,
-    "MD023": true,
-    "MD025": {"level": 1, "front_matter_title": ""},
-    "MD042": true,
-    "MD046": {"style": "fenced"},
-    "MD048": {"style": "backtick"}
+    "MD013": false,
+    "MD014": false,
+    "MD024": false,
+    "MD032": false,
+    "MD033": false,
+    "MD034": false,
+    "MD036": false,
+    "MD041": false,
+    "MD045": false,
+    "MD051": false
   }
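
With `"default": false` removed, markdownlint now starts from its full default rule set and only the ten listed rules (such as MD013 line length, MD033 inline HTML, and MD034 bare URLs) are switched off. A minimal sketch of how a CI step might feed this config to markdownlint-cli; the command and glob are assumptions, not lines taken from the modified workflow file:

```shell
# Hypothetical CI step: lint the English docs with the rule file above.
# markdownlint-cli reads a JSON rule config via -c/--config.
npx markdownlint-cli --config .github/workflows/markdown_config.json "docs/en_US/**/*.md"
```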

+ 8 - 8
docs/en_US/CONTRIBUTING.md

@@ -9,7 +9,7 @@ We're really glad you're reading this, because we need volunteer developers to h
 
 ## Code and doc contribution
 
-Welcome to contribute code to provide features or fix bugs. 
+Welcome to contribute code to provide features or fix bugs.
 
 ### One time setup
 
@@ -18,6 +18,7 @@ We use GitHub pull request to review proposed code changes. So you'll need to ob
 1. **Fork** eKuiper to your private repository. Click the `Fork` button in the top right corner of eKuiper repository.
 2. **Clone** the repository locally from your personal fork. `git clone https://github.com/<Github_user>/ekuiper.git`.
 3. Add eKuiper repo as additional Git remote so that you can sync between local repo and eKuiper.
+
   ```shell
   git remote add upstream https://github.com/lf-edge/ekuiper.git
   ```
@@ -66,7 +67,7 @@ Alternatively, if you use GoLand, you can check `Group` and `Group stdlib import
 
 ### Debug your code
 
-Take GoLand as an example, developers can debug the code: 
+Take GoLand as an example, developers can debug the code:
 
 1. Debug the whole program. Make sure all directories mentioned in [Makefile](https://github.com/lf-edge/ekuiper/blob/master/Makefile) build_prepare sections are created in your eKuiper root path. Add your breakpoints. Open `cmd/kuiperd/main.go`. In the main function, you'll find a green triangle in the ruler, click it and select debug. Then create your stream/rule that would run through your breakpoint, the debugger will pause there.
 2. To debug a small portion of code, we recommend writing a unit test and debug it. You can go to any test file and find the same green triangle to run in debug mode. For example, `pkg/cast/cast_test.go` TestMapConvert_Funcs can run as debug.
@@ -139,7 +140,6 @@ default:
 After changing this, redis will listen on the host 6379 port, developers can connect to the machine that edgex runs remotely by the server address.
 For example, the host ip address is 10.65.38.224 , users can connect to this machine by the ip address.
 
-
 #### enable eKuiper console log and set rest api port
 
 Change the config file in `etc/kuiper.yaml`, set the console log true and set eKuiper rest api port to 59720
@@ -179,8 +179,8 @@ basic:
 ```
 
 #### run locally
-  Use the [former method](./CONTRIBUTING.md#debug-your-code) to run the eKuiper
 
+  Use the [former method](./CONTRIBUTING.md#debug-your-code) to run the eKuiper
 
 ### Testing
 
@@ -234,7 +234,7 @@ You'll then push to your branch on your forked repo and then navigate to eKuiper
 
 Each commit message consists of a **header**, a **body** and a **footer**. The header has a special format that includes a **type**, a **scope** and a **subject**:
 
-```
+```text
 <type>(<scope>): <subject>
 <BLANK LINE>
 <body>
@@ -250,11 +250,11 @@ The footer should contain a [closing reference to an issue](https://help.github.
 
 Example 1:
 
-```
+```text
 feat: add Fuji release compose files
 ```
 
-```
+```text
 fix(script): correct run script to use the right ports
 
 Previously device services used wrong port numbers. This commit fixes the port numbers to use the latest port numbers.
@@ -317,4 +317,4 @@ The promotion contributions include but not limit to:
 - Write tutorials for how project can be used
 - Offer to mentor another contributor
 
-Thank you for taking the time to contribute!
+Thank you for taking the time to contribute!

+ 1 - 1
docs/en_US/README.md

@@ -15,7 +15,7 @@ SQL-based or graph based (similar to Node-RED) rules to create IoT edge analytic
 
 - Cross-platform
 
-  - CPU Arch:X86 AMD * 32/64; ARM * 32/64; PPC
+  - CPU Arch:X86 AMD 32/64; ARM 32/64; PPC
   - Popular Linux distributions, OpenWrt Linux, MacOS and Docker
   - Industrial PC, Raspberry Pi, industrial gateway, home gateway, MEC edge cloud server
 

+ 4 - 4
docs/en_US/api/cli/data.md

@@ -41,13 +41,13 @@ The file format for importing and exporting Data is JSON, which can contain : `s
 
 ## Reset And Import Data
 
-The API resets all existing data and then imports the new data into the system. 
+The API resets all existing data and then imports the new data into the system.
 
 ```shell
 # bin/kuiper import data -f myrules.json -s false
 ```
 
-## Import Data 
+## Import Data
 
 The API imports the data into the system(overwrite the tables/streams/rules/source config/sink config. install plugins/schema if not exist, else ignore them).
 
@@ -65,7 +65,7 @@ This API returns Data import errors. If all returns are empty, it means that the
 
 ## Data Export
 
-This command exports the Data to the specified file. 
+This command exports the Data to the specified file.
 
 ```shell
 # bin/kuiper export data myrules.json
@@ -75,4 +75,4 @@ This command exports the specific rules related Data to the specified file.
 
 ```shell
 # bin/kuiper export data myrules.json -r '["rules1", "rules2"]'
-```
+```

+ 1 - 2
docs/en_US/api/cli/overview.md

@@ -1,4 +1,4 @@
-The eKuiper CLI (command line interface) tools provides streams and rules management. 
+The eKuiper CLI (command line interface) tools provides streams and rules management.
 
 The eKuiper CLI acts as a client to the eKuiper server. The eKuiper server runs the engine that executes the stream or rule queries. This includes processing stream or rule definitions, manage rule status and io.
 
@@ -8,4 +8,3 @@ The eKuiper CLI acts as a client to the eKuiper server. The eKuiper server runs
 - [Streams](streams.md)
 - [Rules](rules.md)
 - [Plugins](plugins.md)
-

+ 10 - 5
docs/en_US/api/cli/plugins.md

@@ -1,6 +1,7 @@
 # Plugins management
 
 The eKuiper plugin command line tools allows you to manage plugins, such as create, show and drop plugins. Notice that, drop a plugin will need to restart eKuiper to take effect. To update a plugin, do the following:
+
 1. Drop the plugin.
 2. Restart eKuiper.
 3. Create the plugin with the new configuration.
@@ -13,7 +14,7 @@ The command is used for creating a plugin.  The plugin's definition is specified
 create plugin $plugin_type $plugin_name $plugin_json | create plugin $plugin_type $plugin_name -f $plugin_def_file
 ```
 
-The plugin can be created with two ways. 
+The plugin can be created with two ways.
 
 - Specify the plugin definition in command line.
 
@@ -23,7 +24,7 @@ Sample:
 # bin/kuiper create plugin source random {"file":"http://127.0.0.1/plugins/sources/random.zip"}
 ```
 
-The command create a source plugin named `random`. 
+The command create a source plugin named `random`.
 
 - Specify the plugin definition in a file. If the plugin is complex, or the plugin is already wrote in text files with well organized formats, you can just specify the plugin definition through `-f` option.
 
@@ -48,6 +49,7 @@ To create a function plugin with multiple exported functions, specify the export
 ```
 
 ### parameters
+
 1. plugin_type: the type of the plugin. Available values are `["source", "sink", "function", "portable"]`
 2. plugin_name: a unique name of the plugin. The name must be the same as the camel case version of the plugin with lowercase first letter. For example, if the exported plugin name is `Random`, then the name of this plugin is `random`.
 3. file: the url of the plugin files. It must be a zip file with: a compiled so file and the yaml file(only required for sources). The name of the files must match the name of the plugin. Please check [Extension](../../extension/overview.md) for the naming rule.
@@ -70,13 +72,14 @@ function2
 ```
 
 ## describe a plugin
+
 The command is used to print out the detailed definition of a plugin.
 
 ```shell
 describe plugin $plugin_type $plugin_name
 ```
 
-Sample: 
+Sample:
 
 ```shell
 # bin/kuiper describe plugin source plugin1
@@ -93,6 +96,7 @@ The command is used for drop the plugin.
 ```shell
 drop plugin $plugin_type $plugin_name -s $stop 
 ```
+
 In which, `-s $stop` is an optional boolean parameter. If it is set to true, the eKuiper server will be stopped for the delete to take effect. The user will need to restart it manually.
 Sample:
 
@@ -107,7 +111,7 @@ Unlike source and sink plugins, function plugin can export multiple functions at
 
 ### show udfs
 
-The command will list all user defined functions. 
+The command will list all user defined functions.
 
 ```shell
 show udfs
@@ -139,6 +143,7 @@ register plugin function $pluginName "{\"functions\":[\"$funcName\",\"$anotherFu
 ```
 
 Sample:
+
 ```shell
 # bin/kuiper register plugin function myPlugin "{\"functions\":[\"func1\",\"func2\",\"funcn\"]}"
-```
+```

+ 6 - 5
docs/en_US/api/cli/rules.md

@@ -1,6 +1,6 @@
 # Rules management
 
-The eKuiper rule command line tools allows you to manage rules, such as create, show, drop, describe, start, stop and restart rules. 
+The eKuiper rule command line tools allows you to manage rules, such as create, show, drop, describe, start, stop and restart rules.
 
 ## create a rule
 
@@ -10,7 +10,7 @@ The command is used for creating a rule.  The rule's definition is specified wit
 create rule $rule_name '$rule_json' | create rule $rule_name -f $rule_def_file
 ```
 
-The rule can be created with two ways. 
+The rule can be created with two ways.
 
 - Specify the rule definition in command line. Notice that, the json string must be quoted.
 
@@ -20,7 +20,7 @@ Sample:
 # bin/kuiper create rule rule1 '{"sql": "SELECT * from demo","actions": [{"log":  {}},{"mqtt":  {"server":"tcp://127.0.0.1:1883", "topic":"demoSink"}}]}'
 ```
 
-The command create a rule named `rule1`. 
+The command create a rule named `rule1`.
 
 - Specify the rule definition in file. If the rule is complex, or the rule is already wrote in text files with well organized formats, you can just specify the rule definition through `-f` option.
 
@@ -81,7 +81,7 @@ The command is used for print the detailed definition of rule.
 describe rule $rule_name
 ```
 
-Sample: 
+Sample:
 
 ```shell
 # bin/kuiper describe rule rule1
@@ -164,6 +164,7 @@ Rule rule1 was restarted.
 ## get the status of a rule
 
 The command is used to get the status of the rule. If the rule is running, the metrics will be retrieved realtime. The status can be
+
 - $metrics
 - stopped: $reason
 
@@ -220,4 +221,4 @@ Sample result:
     ]
   }
 }
-```
+```

+ 1 - 2
docs/en_US/api/cli/ruleset.md

@@ -23,7 +23,6 @@ The file format for importing and exporting ruleset is JSON, which can contain t
 
 This command accepts the ruleset and imports it into the system. If a stream or rule in the ruleset already exists, it is not created. The imported rules are started immediately. The command returns text about the number of streams and rules created
 
-
 ```shell
 # bin/kuiper import ruleset -f myrules.json
 ```
@@ -34,4 +33,4 @@ This command exports the ruleset to the specified file. The command returns text
 
 ```shell
 # bin/kuiper export ruleset myrules.json
-```
+```

+ 1 - 2
docs/en_US/api/cli/schemas.md

@@ -38,7 +38,6 @@ This command creates a schema named `schema1` whose content is provided by `file
 2. schema_name:The unique name of the schema which is also the name of the schema file.
 3. schema_json:The json to define the schema. It must contain name and file or content field.
 
-
 ## Show schemas
 
 The command is used for displaying all schemas defined in the server.
@@ -89,4 +88,4 @@ Example:
 ```shell
 # bin/kuiper drop schema protobuf schema1
 Schema schema1 is dropped.
-```
+```

+ 2 - 5
docs/en_US/api/cli/services.md

@@ -18,8 +18,7 @@ Example:
 # bin/kuiper create service sample '{"name": "sample","file": "file:///tmp/sample.zip"}'
 ```
 
-This command creates a service named sample whose content is provided by `file` field in the json. 
-
+This command creates a service named sample whose content is provided by `file` field in the json.
 
 ## Show services and service_funcs
 
@@ -81,7 +80,6 @@ Example:
 
 ```
 
-
 ## Describe a service function
 
 The command prints the detailed information of a service function.
@@ -101,7 +99,6 @@ Example:
 }
 ```
 
-
 ## Drop a service
 
 The command drops the service.
@@ -114,4 +111,4 @@ Example:
 
 ```shell
 # bin/kuiper drop service sample
-```
+```

+ 10 - 8
docs/en_US/api/cli/streams.md

@@ -19,7 +19,7 @@ Sample:
 stream my_stream created
 ```
 
-The command create a stream named `my_stream`. 
+The command create a stream named `my_stream`.
 
 - Specify the stream definition in file. If the stream is complex, or the stream is already wrote in text files with well organized formats, you can just specify the stream definition through `-f` option.
 
@@ -66,9 +66,9 @@ Sample:
 # bin/kuiper describe stream my_stream
 Fields
 --------------------------------------------------------------------------------
-id	bigint
-name	string
-score	float
+id  bigint
+name  string
+score  float
 
 FORMAT: json
 KEY: id
@@ -91,8 +91,10 @@ stream my_stream dropped
 ```
 
 ## query against streams
+
 The command is used for querying data from stream.  
-```
+
+```shell
 query
 ```
 
@@ -103,7 +105,7 @@ Sample:
 kuiper > 
 ```
 
-After typing `query` sub-command, it prompts `kuiper > `, then type SQLs (see [eKuiper SQL reference](../../sqls/overview.md) for how to use eKuiper SQL) in the command prompt and press enter. 
+After typing `query` sub-command, it prompts `kuiper >`, then type SQLs (see [eKuiper SQL reference](../../sqls/overview.md) for how to use eKuiper SQL) in the command prompt and press enter.
 
 The results will be print in the console.
 
@@ -112,7 +114,7 @@ kuiper > SELECT * FROM my_stream WHERE id > 10;
 [{"...":"..." ....}]
 ...
 ```
-- Press `CTRL + C` to stop the query; 
 
-- If no SQL are type, you can type `quit` or `exit` to quit the `kuiper` prompt console.
+- Press `CTRL + C` to stop the query;
 
+- If no SQL are type, you can type `quit` or `exit` to quit the `kuiper` prompt console.

+ 7 - 6
docs/en_US/api/cli/tables.md

@@ -19,7 +19,7 @@ Sample:
 table my_table created
 ```
 
-The command create a table named `my_table`. 
+The command create a table named `my_table`.
 
 - Specify the table definition in file. If the table is complex, or the table is already wrote in text files with well organized formats, you can just specify the table definition through `-f` option.
 
@@ -32,7 +32,7 @@ table my_table created
 
 Below is the contents of `my_table.txt`.
 
-```
+```text
 my_table(id bigint, name string, score float)
     WITH ( datasource = "lookup.json", FORMAT = "json", KEY = "id");
 ```
@@ -66,14 +66,15 @@ Sample:
 # bin/kuiper describe table my_table
 Fields
 --------------------------------------------------------------------------------
-id	bigint
-name	string
-score	float
+id  bigint
+name  string
+score  float
 
 FORMAT: json
 KEY: id
 DATASOURCE: lookup.json
 ```
+
  *Note*: eKuiper do not support query table data by cli. Users need join the table with a stream and check the result
 
 ## drop a table
@@ -89,4 +90,4 @@ Sample:
 ```shell
 # bin/kuiper drop table my_table
 table my_table dropped
-```
+```

+ 6 - 3
docs/en_US/api/restapi/authentication.md

@@ -2,11 +2,12 @@
 
 eKuiper support `JWT RSA256` authentication for the RESTful management APIs since `1.4.0` if enabled . Users need put their Public Key in `etc/mgmt` folder and use the corresponding Private key to sign the JWT Tokens.
 When user request the RESTful apis, put the `Token` in http request headers in the following format:
+
 ```go
 Authorization: XXXXXXXXXXXXXXX
 ```
-If the token is correct, eKuiper will respond the result; otherwise, it will return http `401`code.
 
+If the token is correct, eKuiper will respond the result; otherwise, it will return http `401`code.
 
 ### JWT Header
 
@@ -17,8 +18,8 @@ If the token is correct, eKuiper will respond the result; otherwise, it will ret
 }
 ```
 
-
 ### JWT payload
+
 The JWT Payload should use the following format
 
 | field | optional | meaning                                                               |
@@ -32,14 +33,16 @@ The JWT Payload should use the following format
 | sub   | true     | Subject                                                               |
 
 There is an example in json format
+
 ```json
 {
   "iss": "sample_key.pub",
   "adu": "eKuiper"
 }
 ```
+
 When use this format, user must make sure the correct Public key file `sample_key.pub` are under `etc/mgmt` .
 
 ### JWT Signature
 
-need use the Private key to sign the Tokens and put the corresponding Public Key in `etc/mgmt` .
+need use the Private key to sign the Tokens and put the corresponding Public Key in `etc/mgmt` .
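
Putting the pieces above together, a client signs a JWT with its private key and sends it on every REST call in the `Authorization` header. A minimal sketch of such a request, where the token value is a placeholder and the endpoint is just an example:

```shell
# Hypothetical request: the JWT must be signed with the private key that
# matches a public key stored under etc/mgmt (e.g. sample_key.pub).
curl -H "Authorization: <signed-jwt>" http://localhost:9081/rules
# If the token cannot be verified, eKuiper returns http 401.
```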

+ 21 - 21
docs/en_US/api/restapi/configKey.md

@@ -7,12 +7,12 @@ This API is used to get all Config Keys under a specific source name
 ```shell
 GET http://localhost:9081/metadata/sources/yaml/{name}
 ```
+
 ### Parameter
- 
+
  name:Source name, supports built-in sources and extended sources. The built-in sources include mqtt, redis, neuron, memory, httppull, httppush, file, edgex,
  Extended sources include random, sql, video, zmq and user-defined sources
 
-
 ### Example
 
 Example request to get all Config Keys from an MQTT source:
@@ -23,20 +23,20 @@ Example request to get all Config Keys from an MQTT source:
 
 ```json
 {
-	"amd_broker": {
-		"insecureSkipVerify": false,
-		"protocolVersion": "3.1.1",
-		"qos": 1,
-		"server": "tcp://122.9.166.75:1883"
-	},
-	"default": {
-		"qos": 2,
-		"server": "tcp://emqx:1883"
-	},
-	"demo_conf": {
-		"qos": 0,
-		"server": "tcp://10.211.55.6:1883"
-	}
+    "amd_broker": {
+        "insecureSkipVerify": false,
+        "protocolVersion": "3.1.1",
+        "qos": 1,
+        "server": "tcp://122.9.166.75:1883"
+    },
+    "default": {
+        "qos": 2,
+        "server": "tcp://emqx:1883"
+    },
+    "demo_conf": {
+        "qos": 0,
+        "server": "tcp://10.211.55.6:1883"
+    }
 }
 ```
 
@@ -47,13 +47,13 @@ This API is used to delete a Config Key configuration under a specific source na
 ```shell
 DELETE http://localhost:9081/metadata/sources/{name}/confKeys/{confKey}
 ```
+
 ### Parameter
 
 1. name:Source name, supports built-in sources and extended sources. The built-in sources include mqtt, redis, neuron, memory, httppull, httppush, file, edgex,
    Extended sources include random, sql, video, zmq and user-defined sources
 2. confKey: Config Key Name。Taking the above as an example, the Config Keys are amd_broker, default, demo_conf in sequence.
 
-
 ### Example
 
 Delete the Config Key named demo_conf under the MQTT source
@@ -69,13 +69,13 @@ This API is used to register a Config Key under a specific source name
 ```shell
 PUT http://localhost:9081/metadata/sources/{name}/confKeys/{confKey}
 ```
+
 ### Parameter
 
 1. name:Source name, supports built-in sources and extended sources. The built-in sources include mqtt, redis, neuron, memory, httppull, httppush, file, edgex,
    Extended sources include random, sql, video, zmq and user-defined sources
 2. confKey: Config Key name to register
 
-
 ### Example
 
 Register the Config Key named demo_conf under the MQTT source
@@ -84,8 +84,8 @@ Register the Config Key named demo_conf under the MQTT source
  curl -X PUT http://localhost:9081/metadata/sources/mqtt/confKeys/demo_conf
  {
    "demo_conf": {
-		"qos": 0,
-		"server": "tcp://10.211.55.6:1883"
-	}
+        "qos": 0,
+        "server": "tcp://10.211.55.6:1883"
+    }
  }
 ```

+ 1 - 5
docs/en_US/api/restapi/data.md

@@ -44,10 +44,8 @@ The file format for importing and exporting data is JSON, which can contain : `s
 The API resets all existing data and then imports the new data into the system by default. But user can specify ``partial=1`` parameter in HTTP URL to keep the existing data and apply the new data.
 The API supports specifying data by means of text content or file URIs.
 
-
 Example 1: Import by text content
 
-
 ```shell
 POST http://{{host}}/data/import
 Content-Type: application/json
@@ -79,7 +77,6 @@ Content-Type: application/json
 }
 ```
 
-
 Example 4: Keep the old data and import new data (overwrite the tables/streams/rules/source config/sink config. install plugins/schema if not exist, else ignore them)
 
 ```shell
@@ -93,7 +90,6 @@ Content-Type: application/json
 
 ## Import data status
 
-
 This API returns data import errors. If all returns are empty, it means that the import is completely successful.
 
 ```shell
@@ -156,4 +152,4 @@ Example 2: export specific rules related data
 
 ```shell
 POST -d '["rule1","rule2"]' http://{{host}}/data/export
-```
+```
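
As a concrete sketch, the second export example above maps to a curl call like the following, using the default REST port 9081; the host is illustrative, not a value from this page:

```shell
# Hypothetical: export only the data referenced by rule1 and rule2.
curl -X POST -d '["rule1","rule2"]' http://localhost:9081/data/export
```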

+ 1 - 2
docs/en_US/api/restapi/overview.md

@@ -1,4 +1,4 @@
-eKuiper provides a set of REST API for streams and rules management in addition to CLI. 
+eKuiper provides a set of REST API for streams and rules management in addition to CLI.
 
 By default, the REST API are running in port 9081. You can change the port in `/etc/kuiper.yaml` for the `restPort` property.
 
@@ -27,4 +27,3 @@ GET http://localhost:9081/ping
 - [Streams](streams.md)
 - [Rules](rules.md)
 - [Plugins](plugins.md)
-

File diff suppressed because it is too large
+ 14 - 6
docs/en_US/api/restapi/plugins.md


+ 6 - 6
docs/en_US/api/restapi/rules.md

@@ -1,13 +1,15 @@
 # Rules management
 
-The eKuiper REST api for rules allows you to manage rules, such as create, show, drop, describe, start, stop and restart rules. 
+The eKuiper REST api for rules allows you to manage rules, such as create, show, drop, describe, start, stop and restart rules.
 
 ## create a rule
 
 The API accepts a JSON content and create and start a rule.
+
 ```shell
 POST http://localhost:9081/rules
 ```
+
 Request Sample
 
 ```json
@@ -20,7 +22,6 @@ Request Sample
 }
 ```
 
-
 ## show rules
 
 The API is used for displaying all of rules defined in the server with a brief status.
@@ -54,7 +55,7 @@ GET http://localhost:9081/rules/{id}
 
 Path parameter `id` is the id or name of the rule.
 
-Response Sample: 
+Response Sample:
 
 ```json
 {
@@ -103,7 +104,6 @@ The API is used for drop the rule.
 DELETE http://localhost:9081/rules/{id}
 ```
 
-
 ## start a rule
 
 The API is used to start running the rule.
@@ -112,7 +112,6 @@ The API is used to start running the rule.
 POST http://localhost:9081/rules/{id}/start
 ```
 
-
 ## stop a rule
 
 The API is used to stop running the rule.
@@ -132,6 +131,7 @@ POST http://localhost:9081/rules/{id}/restart
 ## get the status of a rule
 
 The command is used to get the status of the rule. If the rule is running, the metrics will be retrieved realtime. The status can be
+
 - $metrics
 - stopped: $reason
 
@@ -187,4 +187,4 @@ Response Sample:
     ]
   }
 }
-```
+```

+ 2 - 2
docs/en_US/api/restapi/ruleset.md

@@ -47,8 +47,8 @@ Content-Type: application/json
 
 ## Export Ruleset
 
-The export API returns a file to download. 
+The export API returns a file to download.
 
 ```shell
 POST http://{{host}}/ruleset/export
-```
+```

+ 2 - 3
docs/en_US/api/restapi/schemas.md

@@ -8,7 +8,7 @@ The API accepts a JSON content and create a schema. Each schema type has a stand
 POST http://localhost:9081/schemas/protobuf
 ```
 
-Schema content inside request body: 
+Schema content inside request body:
 
 ```json
 {
@@ -36,7 +36,6 @@ Schema with static plugin:
 }
 ```
 
-
 ### Parameters
 
 1. name:the unique name of the schema.
@@ -99,4 +98,4 @@ PUT http://localhost:9081/schemas/protobuf/{name}
   "name": "schema2",
   "file": "http://ahot.com/test2.proto"
 }
-```
+```

+ 5 - 2
docs/en_US/api/restapi/services.md

@@ -7,6 +7,7 @@ This API accepts JSON content to create new external services.
 ```shell
 POST http://localhost:9081/services
 ```
+
 An example of a request for a file on an HTTP server:
 
 ```json
@@ -17,6 +18,7 @@ An example of a request for a file on an HTTP server:
 ```
 
 An example of a request for a file on the eKuiper server:
+
 ```json
 {
   "name":"random",
@@ -30,11 +32,12 @@ An example of a request for a file on the eKuiper server:
 2. file: URL of external service file. URL supports http, https and file modes. When using the file mode, the file must be on the machine where the eKuiper server is located. It must be a zip file, which contains the service description json file with the same name as the service and any other auxiliary files. The schema file must be in the schema folder.
 
 ### Service file format
+
 A sample zip file of the source named sample.zip
+
 1. sample.json
 2. Schema directory: it contains one or more schema files used by the service. For example, sample.proto.
 
-
 ## Display external services
 
 This API is used to display all external services defined in the server.
@@ -109,4 +112,4 @@ Result example:
   "name": "funcName",
   "serviceName": "serviceName"
 }
-```
+```

+ 4 - 4
docs/en_US/api/restapi/streams.md

@@ -9,6 +9,7 @@ The API is used for creating a stream. For more detailed information of stream d
 ```shell
 POST http://localhost:9081/streams
 ```
+
 Request sample, the request is a json string with `sql` field.
 
 ```json
@@ -79,13 +80,13 @@ The format is like Json schema:
 {
     "id": {
         "type": "bigint"
-	},
+  },
     "name": {
         "type": "string"
-	},
+  },
     "age": {
         "type": "bigint"
-	},
+  },
     "hobbies": {
         "type": "struct",
         "properties": {
@@ -129,4 +130,3 @@ The API is used for drop the stream definition.
 ```shell
 DELETE http://localhost:9081/streams/{id}
 ```
-

+ 1 - 2
docs/en_US/api/restapi/tables.md

@@ -9,6 +9,7 @@ The API is used for creating a table. For more detailed information of table def
 ```shell
 POST http://localhost:9081/tables
 ```
+
 Request sample, the request is a json string with `sql` field.
 
 ```json
@@ -79,7 +80,6 @@ The API is used to get the table schema. The schema is inferred from the physica
 GET http://localhost:9081/tables/{id}/schema
 ```
 
-
 ## update a table
 
 The API is used for update the table definition.
@@ -103,4 +103,3 @@ The API is used for drop the table definition.
 ```shell
 DELETE http://localhost:9081/tables/{id}
 ```
-

+ 1 - 1
docs/en_US/api/restapi/uploads.md

@@ -66,4 +66,4 @@ The API is used for deleting a file in the `${dataPath}/uploads` path.
 
 ```shell
 DELETE http://localhost:9081/config/uploads/{fileName}
-```
+```

+ 3 - 3
docs/en_US/concepts/ekuiper.md

@@ -1,6 +1,6 @@
 # Architecture Design
 
-LF Edge eKuiper is an edge lightweight IoT data analytics and stream processing engine. It is a universal edge computing service or middleware designed for resource constrained edge gateways or devices. 
+LF Edge eKuiper is an edge lightweight IoT data analytics and stream processing engine. It is a universal edge computing service or middleware designed for resource constrained edge gateways or devices.
 
 eKuiper is written by Go. The architecture of eKuiper is as below:
 
@@ -19,7 +19,7 @@ These helps eKuiper to achieve low latency and high throughput data processing.
 
 ## Computing components
 
-In eKuiper, a computing job is presented as a rule. The rule defines the streaming data sources as the input, the computing logic by SQL and the sinks/actions as the output. 
+In eKuiper, a computing job is presented as a rule. The rule defines the streaming data sources as the input, the computing logic by SQL and the sinks/actions as the output.
 
 Once a rule is defined, it will run continuously. It will keep fetching data from the source, calculate according to the SQL logic and trigger the actions with the result.
 
@@ -27,4 +27,4 @@ Read further about the components' concepts:
 
 - [rule](./rules.md)
 - [source](./sources/overview.md)
-- [sink](./sinks.md)
+- [sink](./sinks.md)

+ 1 - 1
docs/en_US/concepts/extensions.md

@@ -20,4 +20,4 @@ We support 3 kinds of extension:
 
 ## More Readings
 
-- [Extension Reference](../extension/overview.md)
+- [Extension Reference](../extension/overview.md)

+ 1 - 1
docs/en_US/concepts/rules.md

@@ -16,4 +16,4 @@ Multiple rules can form a processing pipeline by specifying a joint point in sin
 
 ## More Readings
 
-- [Rule Reference](../guide/rules/overview.md)
+- [Rule Reference](../guide/rules/overview.md)

+ 1 - 1
docs/en_US/concepts/sinks.md

@@ -10,4 +10,4 @@ The sink result is a string as always. It will be encoded into json string by de
 
 ## More Readings
 
-- [Sink Reference](../guide/sinks/overview.md)
+- [Sink Reference](../guide/sinks/overview.md)

+ 1 - 4
docs/en_US/concepts/sources/overview.md

@@ -12,7 +12,7 @@ By default, if multiple rules refer to the same source, each rule will have its
 
 ## Decode
 
-Users can define the format to decode by setting `format` property. Currently, `json`,  `binary`, `protobuf`, and `delimited` formats are supported. And you can also use your own decoding methods by setting it to `custom`. 
+Users can define the format to decode by setting `format` property. Currently, `json`,  `binary`, `protobuf`, and `delimited` formats are supported. And you can also use your own decoding methods by setting it to `custom`.
 
 ## Schema
 
@@ -27,6 +27,3 @@ The source defines the external system connection. When using the source with a
 ## More Readings
 
 - [Source Reference](../../guide/sources/overview.md)
-
-
-

+ 0 - 1
docs/en_US/concepts/sources/stream.md

@@ -7,4 +7,3 @@ When using as a stream, the source must be unbounded. The stream acts like a tri
 ## More Readings
 
 - [Stream Reference](../../sqls/streams.md)
-

+ 2 - 2
docs/en_US/concepts/sources/table.md

@@ -19,10 +19,10 @@ Lookup table do not store the table content in memory but refer to the external
 
 - Memory source: if a memory source is used as table type, we need to accumulate the data as a table in memory. It can serve as a intermediate to convert any stream into a lookup table.
 - Redis source: Support to query by redis key.
-- SQL source: This is the most typical lookup source. We can use SQL directly to query. 
+- SQL source: This is the most typical lookup source. We can use SQL directly to query.
 
 Unlike scan tables, lookup table will run separately from rules. Thus, all rules that refer to a lookup table can actually query the same table content.
 
 ## More Readings
 
-- [Table Reference](../../sqls/tables.md)
+- [Table Reference](../../sqls/tables.md)

+ 1 - 1
docs/en_US/concepts/sql.md

@@ -8,4 +8,4 @@ When create and manage stream or table source, SQL DDL and DML are used as the c
 
 ## SQL queries in rules
 
-In rules, SQL queries are used to define the business logic. Please check [sql reference](../sqls/overview.md) for detail.
+In rules, SQL queries are used to define the business logic. Please check [sql reference](../sqls/overview.md) for detail.

+ 1 - 1
docs/en_US/concepts/streaming/join.md

@@ -7,4 +7,4 @@ The supported joins in eKuiper include:
 - Join of streams: must do in a window.
 - Join of stream and table: the stream will be the trigger of join operation.
 
-The supported join type includes LEFT, RIGHT, FULL & CROSS in eKuiper.
+The supported join type includes LEFT, RIGHT, FULL & CROSS in eKuiper.

+ 1 - 2
docs/en_US/concepts/streaming/overview.md

@@ -6,7 +6,7 @@ Streaming data is a sequence of data elements made available over time. Stream p
 
 Stream processing has the below characteristics:
 
-- Unbounded data: streaming data is a type of ever-growing, essentially infinite data set which cannot be operated as a whole. 
+- Unbounded data: streaming data is a type of ever-growing, essentially infinite data set which cannot be operated as a whole.
 - Unbounded data processing: As applying to unbounded data, the stream processing itself is also unbounded. The workload can distribute evenly across time compared to batch processing.
 - Low-latency, near real-time: stream processing can process data once it is produced to get the result in a very low latency.
 
@@ -28,4 +28,3 @@ The state information can be found or managed by:
 
 - [Windows](./windowing.md)
 - [State API](../../extension/native/overview.md#state-storage)
-

+ 1 - 1
docs/en_US/concepts/streaming/windowing.md

@@ -9,4 +9,4 @@ In eKuiper, the built-in windowing supports:
 
 In time window, both processing time and event time are supported.
 
-For all the supported window type, please check [window functions](../../sqls/windows.md).
+For all the supported window type, please check [window functions](../../sqls/windows.md).

+ 3 - 2
docs/en_US/configuration/configuration.md

@@ -5,6 +5,7 @@ eKuiper configuration is based on yaml file and allow to configure by updating t
 ## Configuration Scope
 
 eKuiper configurations include
+
 1. `etc/kuiper.yaml`: global configuration file. Make change to it need to restart the eKuiper instance. Please refer to [basic configuration file](./global_configurations.md) for detail.
 2. `etc/sources/${source_name}.yaml`: the configuration file for each source to define the default properties (except MQTT source, whose configuration file is `etc/mqtt_source.yaml`). Please refer to the doc for each source for detail. For example, [MQTT source](../guide/sources/builtin/mqtt.md) and [Neuron source](../guide/sources/builtin/neuron.md) covers the configuration items.
 3. `etc/connections/connection.yaml`: shared connection configuration file.
@@ -25,11 +26,11 @@ When deploying in docker or k8s, it is not easy enough to manipulate files, smal
 
 There is a mapping from environment variable to the configuration yaml file. When modifying configuration through environment variables, the environment variables need to be set according to the prescribed format, for example:
 
-```
+```text
 KUIPER__BASIC__DEBUG => basic.debug in etc/kuiper.yaml
 MQTT_SOURCE__DEMO_CONF__QOS => demo_conf.qos in etc/mqtt_source.yaml
 EDGEX__DEFAULT__PORT => default.port in etc/sources/edgex.yaml
 CONNECTION__EDGEX__REDISMSGBUS__PORT => edgex.redismsgbus.port int etc/connections/connection.yaml
 ```
 
-The environment variables are separated by "__", the content of the first part after the separation matches the file name of the configuration file, and the remaining content matches the different levels of the configuration items. The file name could be `KUIPER` and `MQTT_SOURCE` in the `etc` folder; or  `CONNECTION` in `etc/connection` folder. Otherwise, the file should in `etc/sources` folder.
+The environment variables are separated by "__", the content of the first part after the separation matches the file name of the configuration file, and the remaining content matches the different levels of the configuration items. The file name could be `KUIPER` and `MQTT_SOURCE` in the `etc` folder; or  `CONNECTION` in `etc/connection` folder. Otherwise, the file should in `etc/sources` folder.
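
For instance, a shell sketch that applies the mapping above before starting eKuiper; the values are illustrative:

```shell
# Split on "__": the first segment picks the yaml file, the rest walk its keys.
export KUIPER__BASIC__DEBUG=true      # -> basic.debug in etc/kuiper.yaml
export MQTT_SOURCE__DEMO_CONF__QOS=1  # -> demo_conf.qos in etc/mqtt_source.yaml
```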

File diff suppressed because it is too large
+ 21 - 7
docs/en_US/configuration/global_configurations.md


+ 8 - 8
docs/en_US/edgex/edgex_meta.md

@@ -6,7 +6,7 @@ When data are published into EdgeX message bus, besides the actual device value,
 
 The data structure received from EdgeX message bus is list as in below. An `Event` structure encapsulates related metadata (ID, DeviceName, ProfileName, SourceName, Origin, Tags), along with the actual data (in `Readings` field) collected from device service.  
 
-Similar to `Event`, `Reading` also has some metadata (ID, DeviceName... etc). 
+Similar to `Event`, `Reading` also has some metadata (ID, DeviceName... etc).
 
 - Event
   - ID
@@ -33,7 +33,7 @@ Similar to `Event`, `Reading` also has some metadata (ID, DeviceName... etc).
 
 If upgrading from eKuiper versions v1.2.0 and before which integrates with EdgeX v1, there will be some breaking changes of the meta datas due to the refactor of EdgeX v2.
 
-1. The metadata `Pushed`, `Created` and `Modified` for both events and readings are removed. 
+1. The metadata `Pushed`, `Created` and `Modified` for both events and readings are removed.
 2. The metadata `Device` for both events and readings are renamed to `DeviceName`.
 3. The metadata `Name` of readings is renamed to `ResourceName`.
 
@@ -45,10 +45,10 @@ As in below - firstly, user creates an EdgeX stream named `events` with yellow c
 
 <img src="./create_stream.png" style="zoom:50%;" />
 
-Secondly, one message is published to message bus as in below. 
+Secondly, one message is published to message bus as in below.
 
 - The device name is `demo` with green color
-- Reading name `temperature` & `Humidity` with red color. 
+- Reading name `temperature` & `Humidity` with red color.
 - It has some `metadata` that is not necessary to "visible", but it probably will be used during data analysis, such as `DeviceName` field in `Event` structure. eKuiper saves these values into message tuple named metadata, and user can get these values during analysis. **Notice that, metadata name `DeviceName` was renamed from `Device` in EdgeX v2.**
 
 <img src="./bus_data.png" style="zoom:50%;" />
@@ -63,15 +63,15 @@ Thirdly, a SQL is provided for data analysis. Please notice that,
 
 Below are some other samples that extract other metadata through `meta` function.
 
-1. `meta(origin)`: 000 
+1. `meta(origin)`: 000
 
    Get 'Origin' metadata from Event structure
 
-2. `meta(temperature -> origin)`: 123 
+2. `meta(temperature -> origin)`: 123
 
    Get 'origin' metadata from reading[0], key with 'temperature'
 
-3. `meta(humidity -> origin)`: 456 
+3. `meta(humidity -> origin)`: 456
 
    Get 'origin' metadata from reading[1], key with 'humidity'
 
@@ -90,4 +90,4 @@ SELECT temperature,humidity, meta(id) AS eid,meta(origin) AS eo, meta(temperatur
 `meta` function can be used in eKuiper to access metadata values. Below lists all available keys for `Event` and `Reading`.
 
 - Event: id, deviceName, profileName, sourceName, origin, tags, correlationid
-- Reading: id, deviceName, profileName, origin, valueType
+- Reading: id, deviceName, profileName, origin, valueType

+ 46 - 39
docs/en_US/edgex/edgex_rule_engine_command.md

@@ -161,7 +161,7 @@ actuate `Random-Boolean-Device`, it is time to build the eKuiper rules.
 Again, the 1st rule is to monitor for events coming from the `Random-UnsignedInteger-Device` device (one of the default
 virtual device managed devices), and if a `uint8` reading value is found larger than `20` in the event, then send the
 command to `Random-Boolean-Device` device to start generating random numbers (specifically - set random generation bool
-to true). 
+to true).
 
 #### Option 1: Use Rest API
 
@@ -195,49 +195,54 @@ curl -X POST \
 See [core-command](https://docs.edgexfoundry.org/3.0/microservices/core/command/Ch-Command/#commands-via-messaging) for details. Take the first rule as an example to describe how to configure it:
 
 1. Set the MESSAGEQUEUE_EXTERNAL_ENABLED environment variable to true to enable the external messagebus of core-command.
-Set the MESSAGEQUEUE_EXTERNAL_URL environment variable to the address and port number of the external messagebus.
+   Set the MESSAGEQUEUE_EXTERNAL_URL environment variable to the address and port number of the external messagebus.
 2. Create the rule using the following configuration:
-```shell
-{
-  "sql": "SELECT uint8 FROM demo WHERE uint8 > 20",
-  "actions": [
-    {
-      "mqtt": {
-        "server": "tcp://mqtt-server:1883",
-        "topic": "edgex/command/request/Random-Boolean-Device/WriteBoolValue/set",
-        "dataTemplate": "{\"ApiVersion\": \"v2\", \"contentType\": \"application/json\", \"CorrelationID\": \"14a42ea6-c394-41c3-8bcd-a29b9f5e6840\", \"RequestId\": \"e6e8a2f4-eb14-4649-9e2b-175247911380\", \"Payload\": \"eyJCb29sIjogInRydWUiLCAiRW5hYmxlUmFuZG9taXphdGlvbl9Cb29sIjogInRydWUifQ==\"}"
-      }
-    },
-    {
-      "log":{}
-    }
-  ]
-}
-```
-The payload is the base64-encoding json struct:
-```shell
-{"Bool":"true", "EnableRandomization_Bool": "true"}
-```
+
+   ```shell
+   {
+     "sql": "SELECT uint8 FROM demo WHERE uint8 > 20",
+     "actions": [
+       {
+         "mqtt": {
+           "server": "tcp://mqtt-server:1883",
+           "topic": "edgex/command/request/Random-Boolean-Device/WriteBoolValue/set",
+           "dataTemplate": "{\"ApiVersion\": \"v2\", \"contentType\": \"application/json\", \"CorrelationID\": \"14a42ea6-c394-41c3-8bcd-a29b9f5e6840\", \"RequestId\": \"e6e8a2f4-eb14-4649-9e2b-175247911380\", \"Payload\": \"eyJCb29sIjogInRydWUiLCAiRW5hYmxlUmFuZG9taXphdGlvbl9Cb29sIjogInRydWUifQ==\"}"
+         }
+       },
+       {
+         "log":{}
+       }
+     ]
+   }
+   ```
+
+   The payload is the base64-encoding json struct:
+
+   ```shell
+   {"Bool":"true", "EnableRandomization_Bool": "true"}
+   ```
+
 3. Receive command response message from external MQTT broker on topic ```edgex/command/response/#```
-```shell
-{
-  "ReceivedTopic": "edgex/device/command/response/device-virtual/Random-Boolean-Device/WriteBoolValue/set",
-  "CorrelationID": "14a42ea6-c394-41c3-8bcd-a29b9f5e6840",
-  "ApiVersion": "v2",
-  "RequestID": "e6e8a2f4-eb14-4649-9e2b-175247911380",
-  "ErrorCode": 0,
-  "Payload": null,
-  "ContentType": "application/json",
-  "QueryParams": {}
-}
-```
+
+   ```shell
+   {
+     "ReceivedTopic": "edgex/device/command/response/device-virtual/Random-Boolean-Device/WriteBoolValue/set",
+     "CorrelationID": "14a42ea6-c394-41c3-8bcd-a29b9f5e6840",
+     "ApiVersion": "v2",
+     "RequestID": "e6e8a2f4-eb14-4649-9e2b-175247911380",
+     "ErrorCode": 0,
+     "Payload": null,
+     "ContentType": "application/json",
+     "QueryParams": {}
+   }
+   ```
 
 ### The second rule
 
 The 2nd rule is to monitor for events coming from the `Random-Integer-Device` device (another of the default virtual
 device managed devices), and if the average for `int8` reading values (within 20 seconds) is larger than 0, then send a
 command to `Random-Boolean-Device` device to stop generating random numbers (specifically - set random generation bool
-to false). 
+to false).
 
 #### Option 1: Use Rest API
 
@@ -268,7 +273,9 @@ curl -X POST \
 ```
 
 #### Option 2: Use Messaging
+
 The procedure is the same as the previous step. Use the following configuration to create a rule:
+
 ```shell
 {
   "sql": "SELECT avg(int8) AS avg_int8 FROM demo WHERE int8 != nil GROUP BY  TUMBLINGWINDOW(ss, 20) HAVING avg(int8) > 0",
@@ -312,7 +319,7 @@ The output of the SQL should look similar to the results below.
 [{"int8":-75, "randomization":"true"}]
 ```
 
-Let's suppose a service need following data format, while `value` field is read from field `int8`, and `EnableRandomization_Bool` is read from field `randomization`. 
+Let's suppose a service need following data format, while `value` field is read from field `int8`, and `EnableRandomization_Bool` is read from field `randomization`.
 
 ```shell
 curl -X PUT \
@@ -323,7 +330,7 @@ curl -X PUT \
 
 eKuiper uses [Go template](https://golang.org/pkg/text/template/) to extract data from analysis result, and the `dataTemplate` should be similar as following.
 
-```
+```text
 "dataTemplate": "{\"value\": {{.int8}}, \"EnableRandomization_Bool\": \"{{.randomization}}\"}"
 ```
 
@@ -334,4 +341,4 @@ In some cases, you probably need to iterate over returned array values, or set d
  If you want to explore more features of eKuiper, please refer to below resources.
 
 - [eKuiper Github code repository](https://github.com/lf-edge/ekuiper/)
-- [eKuiper reference guide](https://github.com/lf-edge/ekuiper/blob/edgex/docs/en_US/reference.md)
+- [eKuiper reference guide](https://github.com/lf-edge/ekuiper/blob/edgex/docs/en_US/reference.md)
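
Applied to the sample result `[{"int8":-75, "randomization":"true"}]` shown earlier, the `dataTemplate` above would render each record roughly as follows; this is a worked example, not output captured from the tutorial:

```text
{"value": -75, "EnableRandomization_Bool": "true"}
```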

+ 21 - 16
docs/en_US/edgex/edgex_rule_engine_tutorial.md

@@ -5,7 +5,7 @@
 In EdgeX Geneva, [LF Edge eKuiper - an SQL based rule engine](https://github.com/lf-edge/ekuiper) is integrated with EdgeX. Before diving into this tutorial, let's spend a little time on learning basic knowledge of eKuiper. eKuiper is an edge lightweight IoT data analytics / streaming software implemented by Golang, and it can be run at all kinds of resource constrained edge devices. eKuiper rules are based on `Source`, `SQL` and `Sink`.
 
 - Source: The data source of streaming data, such as data from MQTT broker. In EdgeX scenario, the data source is EdgeX message bus, which could be ZeroMQ or MQTT broker.
-- SQL: SQL is where you specify the business logic of streaming data processing. eKuiper provides SQL-like statements to allow you to extract, filter & transform data. 
+- SQL: SQL is where you specify the business logic of streaming data processing. eKuiper provides SQL-like statements to allow you to extract, filter & transform data.
 - Sink: Sink is used for sending analysis result to a specified target. For example, send analysis result to another MQTT broker, or an HTTP rest address.
 
 ![](../resources/arch.png)
@@ -22,7 +22,7 @@ The tutorial demonstrates how to use eKuiper to process the data from EdgeX mess
 
 ## eKuiper EdgeX integration
 
-EdgeX uses [message bus](https://github.com/edgexfoundry/go-mod-messaging) to exchange information between different micro services. It contains the abstract message bus interface and implementations for ZeroMQ & MQTT. The integration work for eKuiper & EdgeX includes following 3 parts. 
+EdgeX uses [message bus](https://github.com/edgexfoundry/go-mod-messaging) to exchange information between different micro services. It contains the abstract message bus interface and implementations for ZeroMQ & MQTT. The integration work for eKuiper & EdgeX includes following 3 parts.
 
 - An EdgeX message bus source is extended to support consuming data from EdgeX message bus.  
 
@@ -34,7 +34,7 @@ EdgeX uses [message bus](https://github.com/edgexfoundry/go-mod-messaging) to ex
 
   However, data type definitions are already specified in the EdgeX events/readings and to improve the using experience, user are NOT necessary to specify data types when creating stream. For any data sending from message bus, it will be converted into [corresponding data types](../guide/sources/builtin/edgex.md).
 
-- An EdgeX message bus sink is extended to support send analysis result back to EdgeX Message Bus. User can also choose to send analysis result to RestAPI, eKuiper already supported it. 
+- An EdgeX message bus sink is extended to support send analysis result back to EdgeX Message Bus. User can also choose to send analysis result to RestAPI, eKuiper already supported it.
 
 ![](./arch_light.png)
 
@@ -51,7 +51,7 @@ In out tutorial, we will use [Random Integer Device Service](https://github.com/
 
 ### Run EdgeX Docker instances
 
-Go to [EdgeX-compose project](https://github.com/edgexfoundry/edgex-compose), and download related Docker compose file for Ireland release,  then bring up EdgeX Docker instances. 
+Go to [EdgeX-compose project](https://github.com/edgexfoundry/edgex-compose), and download related Docker compose file for Ireland release,  then bring up EdgeX Docker instances.
 
 ```shell
 $ docker-compose -f ./docker-compose-no-secty.yml up -d --build
@@ -81,9 +81,11 @@ d4b236a7b561   redis:6.2.4-alpine                                              "
 
 When eKuiper gets data from messageBus and send back the processed result, user needs to specify the connection info separately when creating the source and sink.
 Since `eKuiper 1.4.0` and `EdgeX Jakarta`, there is a new feature that user can specify the connection info in a fixed place and then source and sink can make a reference to it.
-* `redis` messageBus: this is especially useful when EdgeX use `secure` mode, in which case the client credentials will be injected into that share place automatically when services bootstrap.
+
+- `redis` messageBus: this is especially useful when EdgeX use `secure` mode, in which case the client credentials will be injected into that share place automatically when services bootstrap.
 In order to use this feature, users need do some modifications on the target `docker-compose` file's `rulesengine` service part
-add these in `environment` part and make sure the image is `1.4.0` or later. 
+add these in `environment` part and make sure the image is `1.4.0` or later.
+
   ```yaml
   environment:
       CONNECTION__EDGEX__REDISMSGBUS__PORT: 6379
@@ -93,8 +95,9 @@ add these in `environment` part and make sure the image is `1.4.0` or later.
       EDGEX__DEFAULT__CONNECTIONSELECTOR: edgex.redisMsgBus
   ```
   
-* `mqtt/zeromq` messageBus: adjust the parameters accordingly and specify the client credentials if have.
-  There is a `mqtt` message bus example, make sure the connection info exists in `etc/connections/connection.yaml`, for [more info](../guide/sources/builtin/edgex.md#connectionselector) please check this. 
+- `mqtt/zeromq` messageBus: adjust the parameters accordingly and specify the client credentials if have.
+  There is a `mqtt` message bus example, make sure the connection info exists in `etc/connections/connection.yaml`, for [more info](../guide/sources/builtin/edgex.md#connectionselector) please check this.
+
   ```yaml
   environment:
       CONNECTION__EDGEX__MQTTMSGBUS__PORT: 1883
@@ -105,12 +108,14 @@ add these in `environment` part and make sure the image is `1.4.0` or later.
       CONNECTION__EDGEX__MQTTMSGBUS__OPTIONAL__PASSWORD: password
       EDGEX__DEFAULT__CONNECTIONSELECTOR: edgex.mqttMsgBus
   ```
+
 After these modifications and eKuiper starts up, please read [this](../guide/sinks/builtin/edgex.md#connection-reuse-publish-example) to learn how to refer to the connection info
 
 #### Use Redis as KV storage
 
 Since `1.4.0`, eKuiper supports redis to store the KV metadata, user can make some modifications on the target `docker-compose` file's `rulesengine` service part to apply this change.
 Users can add these in `environment` part and make sure the image is `1.4.0` or later.
+
   ```yaml
   environment:
     KUIPER__STORE__TYPE: redis
@@ -118,6 +123,7 @@ Users can add these in `environment` part and make sure the image is `1.4.0` or
     KUIPER__STORE__REDIS__PORT: 6379
     KUIPER__STORE__REDIS__PASSWORD: ""
   ```
+
 *Note*: This feature only works when redis in `no-secty` mode
 
 #### Run with native
@@ -187,11 +193,11 @@ For more detailed information of configuration file, please refer to [this doc](
 
 ### Create a rule
 
-Let's create a rule that send result data to an MQTT broker, for detailed information of MQTT sink, please refer to [this link](../guide/sinks/builtin/mqtt.md).  Similar to create a stream, you can also choose REST or CLI to manage rules. 
+Let's create a rule that send result data to an MQTT broker, for detailed information of MQTT sink, please refer to [this link](../guide/sinks/builtin/mqtt.md).  Similar to create a stream, you can also choose REST or CLI to manage rules.
 
-So the below rule will get all of values from `event` topic. The sink result will 
+The rule below will get all of the values from the `event` topic. The sink result will
 
-- Published to topic `result` of public MQTT broker `broker.emqx.io`. 
+- Be published to topic `result` of the public MQTT broker `broker.emqx.io`.
 - Be printed to the log file.
 
 #### Option 1: Use Rest API
@@ -254,10 +260,10 @@ Rule rule1 was created successfully, please use 'cli getstatus rule rule1' comma
 If you want to send the analysis result to another sink, please refer to [other sinks](../guide/sinks/overview.md)
 supported in eKuiper.
 
-Now you can also take a look at the log file under `log/stream.log`, or through command `docker logs edgex-kuiper `
+Now you can also take a look at the log file under `log/stream.log`, or use the command `docker logs edgex-kuiper`
 to see detailed info about the rule.
 
-```
+```text
 time="2021-07-08 01:03:08" level=info msg="Serving kuiper (version - 1.2.1) on port 20498, and restful api on http://0.0.0.0:59720. \n" file="server/server.go:144"
 Serving kuiper (version - 1.2.1) on port 20498, and restful api on http://0.0.0.0:59720. 
 time="2021-07-08 01:08:14" level=info msg="Successfully subscribed to edgex messagebus topic rules-events." file="extensions/edgex_source.go:111" rule=rule1
@@ -333,13 +339,13 @@ Connecting to 127.0.0.1:20498...
 
 In this tutorial, we introduce a very simple use of the EdgeX eKuiper rule engine. If you have any issues regarding the use of the eKuiper rule engine, you can open an issue in the EdgeX or eKuiper GitHub repository.
 
-### More Excecise 
+### More Exercises
 
 The current rule does not filter any data sent to eKuiper, so how can you filter the data? Please [drop the rule](../api/cli/rules.md) and change the SQL in the previous rule accordingly. After updating the rule file, deploy the rule again. Please monitor the `result` topic of the MQTT broker and verify whether the rule works.
 
 #### Extended Reading
 
-- Starting from eKuiper 0.9.1 version, [a visualized web UI](../operation/manager-ui/overview.md) is released with a separated Docker image. You can manage the streams, rules and plugins through web page. 
+- Starting from eKuiper 0.9.1, [a visualized web UI](../operation/manager-ui/overview.md) is released as a separate Docker image. You can manage the streams, rules and plugins through the web page.
 - Read [EdgeX source](../guide/sources/builtin/edgex.md) for more detailed information of configurations and data type conversion.
 - [How to use meta function to extract additional data from EdgeX message bus?](edgex_meta.md) Some other information, such as the event created time and event id, is sent along with the device service data. If you want to use such metadata information in your SQL statements, please refer to this doc.
 - [Use Golang template to customize analysis result in eKuiper](../guide/sinks/data_template.md) Before the analysis result is sent to different sinks, the data template can be used to apply further processing. You can refer to this doc for more scenarios of using data templates.
@@ -350,4 +356,3 @@ Current rule does not filter any data that are sent to eKuiper, so how to filter
 
 - [eKuiper Github code repository](https://github.com/lf-edge/ekuiper/)
 - [eKuiper reference guide](../guide/streams/overview.md)
-

+ 1 - 1
docs/en_US/edgex/edgex_source_tutorial.md

@@ -124,4 +124,4 @@ CREATE STREAM edgexAll() WITH (FORMAT="JSON", TYPE="edgex", SHARED="true")
 
 ## Summary
 
-In the previous tutorial, we usually create an overall stream for edgeX, and it is not obvious to know how to configure and filter the edgeX events. In this tutorial, we learn the configuration in both edgeX and eKuiper together to filter the events into multiple streams and let the rules only process events of interests. Finally, we discuss how to use shared instance of source for performance and consistency.
+In the previous tutorial, we usually create an overall stream for EdgeX, and it is not obvious how to configure and filter the EdgeX events. In this tutorial, we learn the configuration in both EdgeX and eKuiper together to filter the events into multiple streams and let the rules only process events of interest. Finally, we discuss how to use a shared source instance for performance and consistency.

+ 6 - 6
docs/en_US/example/data_merge/merge_single_stream.md

@@ -10,7 +10,7 @@ In IoT scenarios, devices such as sensors are often numerous, and usually the ac
 
 The temperature and humidity sensor data is mixed in the data stream, and none of the data is complete.
 
-```
+```text
 {"device_id":"B","humidity":79.66,"ts":1681786070367}
 {"device_id":"A","temperature":27.23,"ts":1681786070368}
 {"device_id":"B","humidity":83.86,"ts":1681786070477}
@@ -63,7 +63,7 @@ As shown in the above SQL, latest(temperature, 0) will get the latest temperatur
 
 With this rule, from the sample input sequence we can get the following output:
 
-```
+```text
 {"humidity":79.66,"temperature":0,"ts":1681786070367}
 {"humidity":79.66,"temperature":27.23,"ts":1681786070368}
 {"humidity":83.86,"temperature":27.23,"ts":1681786070477}
@@ -100,7 +100,7 @@ As shown in the above SQL, `WHERE isNull(temperature) = false` will filter out e
 
 With this rule, from the sample input sequence we can get the following output:
 
-```
+```text
 {"humidity":79.66,"temperature":27.23,"ts":1681786070368}
 {"humidity":83.86,"temperature":27.68,"ts":1681786070479}
 {"humidity":83.86,"temperature":27.28,"ts":1681786070588}
@@ -125,7 +125,7 @@ As shown in the above SQL, `WHERE ts - lag(ts) < 10` will filter out events with
 
 With this rule, from the sample input sequence we can get the following output:
 
-```
+```text
 {"humidity":79.66,"temperature":27.23,"ts":1681786070368}
 {"humidity":83.86,"temperature":27.68,"ts":1681786070479}
 {"humidity":75.79,"temperature":27.28,"ts":1681786070590}
@@ -150,7 +150,7 @@ As shown in the above SQL, `GROUP BY TUMBLINGWINDOW(ms, 500)` will merge each 50
 
 With this rule, from the sample input sequence we can get the following output:
 
-```
+```text
 {"humidity":81.75999999999999,"temperature":27.455,"ts":1681786070500}
 {"humidity":77.5625,"temperature":27.332500000000003,"ts":1681786071000}
 ```
@@ -159,4 +159,4 @@ Because the time window is aligned to natural time, the 500-millisecond window w
 
 ### More merge algorithms
 
-The above are some of the most common merge algorithms. If you have better merge algorithms and unique merge scenarios, please discuss in [GitHub Discussions](https://github.com/lf-edge/ekuiper/discussions/categories/use-case).
+The above are some of the most common merge algorithms. If you have better merge algorithms and unique merge scenarios, please discuss in [GitHub Discussions](https://github.com/lf-edge/ekuiper/discussions/categories/use-case).

+ 1 - 1
docs/en_US/example/data_merge/overview.md

@@ -2,4 +2,4 @@
 
 In IoT scenarios, devices such as sensors are often numerous. Data from multiple devices is correlated, and applications often need to acquire data from multiple devices in order to perform effective calculations. However, the data generated by each device is often reported independently. The huge amount of data makes data merging a challenge. This chapter introduces some typical data merging scenarios and explains how to use eKuiper for data merging.
 
-- [Merge multiple devices' data in single stream](./merge_single_stream.md)
+- [Merge multiple devices' data in single stream](./merge_single_stream.md)

+ 3 - 2
docs/en_US/example/howto.md

@@ -1,6 +1,6 @@
 # Run Examples
 
-This page explains how to use eKuiper to run the examples in this document. Before running the examples, you need to [install eKuiper](../installation.md). 
+This page explains how to use eKuiper to run the examples in this document. Before running the examples, you need to [install eKuiper](../installation.md).
 
 You can use SQL to define rules in the example documents. You can run rules using the eKuiper manager management console UI, eKuiper’s REST API, or the command-line tool.
 
@@ -25,6 +25,7 @@ eKuiper provides many kinds of data source access methods and the File sink outp
 To run an example, we recommend that you use the File source as input for ease of debugging. After the example runs successfully, you can replace the input data source with your own data source according to your needs.
 
 Most examples follow the conventions below, which will not be repeated in each example.
+
 - We will create a data stream named `demoStream` whose data source is the File source. Through the file source, we can conveniently **"replay"** the data prepared in the first step.
 - The output action of the rule is the MQTT sink, which outputs the data to the `result/{{ruleId}}` topic.
 
@@ -39,4 +40,4 @@ Next, we will introduce how to use the eKuiper manager UI to run an example. Whe
 
 ## Summary
 
-This document introduces how to use eKuiper manager to run the examples in this document. In actual use, users can use eKuiper’s REST API or eKuiper CLI to process data according to their needs. After the case runs successfully, users can also modify the SQL statements in the case to play around and achieve their own needs.
+This document introduces how to use eKuiper manager to run the examples in this document. In actual use, users can use eKuiper’s REST API or the eKuiper CLI to process data according to their needs. After a case runs successfully, users can also modify the SQL statements in the case to play around and achieve their own needs.

+ 12 - 11
docs/en_US/extension/external/external_func.md

@@ -15,14 +15,14 @@ The json configuration file includes the following two parts:
 
 - about: Used to describe the Meta-information of service, including author, detailed description, help document url, etc. For detailed usage, please refer to the example below.
 - interfaces: Used to define a set of service interfaces. Services provided by the same server often have the same service address and can be used as a service interface. Each service interface contains the following attributes:
-    - protocol: The protocol used by the service. "grpc", "rest" are supported currently. The "msgpack-rpc" is not built by default, you need to build it with build tag "msgpack" by yourself. Please refer to [feature compilation](../../operation/compile/features.md#usage) for detail.
-    - address: Service address, which must be url. For example, typical rpc service address: "tcp://localhost:50000" or http service address "https://localhost:8000".
-    - schemaType: The type of service description file. Only "protobuf" is supported currently .
-    - schemaFile: service description file, currently only proto file is supported. The rest and msgpack services also need to be described in proto.
-    - functions: function mapping array, used to map the services defined in the schema to SQL functions. It is mainly used to provide function aliases. For example,`{"name":"helloFromMsgpack","serviceName":"SayHello"}` can map the SayHello service in the service definition to the SQL function helloFromMsgpack. For unmapped functions, the defined service uses the original name as the SQL function name.
-    - options: Service interface options. Different service types have different options. Among them, the configurable options of rest service include:
-      - headers: configure HTTP headers
-      - insecureSkipVerify: whether to skip the HTTPS security check
+  - protocol: The protocol used by the service. "grpc" and "rest" are supported currently. The "msgpack-rpc" protocol is not built by default; you need to build it with the build tag "msgpack" yourself. Please refer to [feature compilation](../../operation/compile/features.md#usage) for detail.
+  - address: Service address, which must be a URL. For example, a typical rpc service address is "tcp://localhost:50000" and an http service address is "https://localhost:8000".
+  - schemaType: The type of the service description file. Only "protobuf" is supported currently.
+  - schemaFile: The service description file; currently only proto files are supported. The rest and msgpack services also need to be described in proto.
+  - functions: Function mapping array, used to map the services defined in the schema to SQL functions. It is mainly used to provide function aliases. For example, `{"name":"helloFromMsgpack","serviceName":"SayHello"}` can map the SayHello service in the service definition to the SQL function helloFromMsgpack. For unmapped functions, the defined service uses the original name as the SQL function name.
+  - options: Service interface options. Different service types have different options. Among them, the configurable options of the rest service include:
+    - headers: configure HTTP headers
+    - insecureSkipVerify: whether to skip the HTTPS security check
 
 Assuming we have a service named 'sample', we can define a service definition file named sample.json as follows:
 
@@ -153,7 +153,7 @@ service TSRest {
 }
 ```
 
-Another typical scenario is the REST services to search a list. The search parameters are usually appended to the url as the query parameters. 
+Another typical scenario is a REST service that searches a list. The search parameters are usually appended to the url as query parameters.
 
 ```protobuf
 service TSRest {
@@ -213,6 +213,7 @@ The REST service is **POST** by default currently, and the transmission format i
 - The marshalled json for int64 type will be string
 
 The msgpack-rpc service has the following limitation:
+
 - Input can not be empty
 
 ## Registration and Management
@@ -228,7 +229,7 @@ When eKuiper is started, it will read and register the external service configur
 
 2. The Schema file used must be placed in the schemas folder. The directory structure is similar to:
 
-   ```
+   ```text
    etc
      services
        schemas
@@ -239,6 +240,7 @@ When eKuiper is started, it will read and register the external service configur
        other.json
        ...
    ```
+
    Note: After eKuiper is started, modifications to the configuration file **cannot** be loaded automatically. If you need to update dynamically, please use the REST service.
 
 For dynamic registration and management of services, please refer to [External Service Management API](../../api/restapi/services.md).
@@ -270,4 +272,3 @@ message ObjectDetectionRequest {
 ```
 
 In eKuiper, users can pass in the entire struct as a parameter, or pass in two string parameters as cmd and base64_img respectively.
-

+ 3 - 0
docs/en_US/extension/native/develop/function.md

@@ -25,6 +25,7 @@ object.
 //The argument is a list of xsql.Expr
 Validate(args []interface{}) error
 ```
+
 There are 2 types of functions: aggregate functions and common functions. For an aggregate function, if the argument is a column, the received value will always be a slice of the column values in a group. The extended function must declare its function type by implementing the _IsAggregate_ method.
 
 ```go
@@ -90,10 +91,12 @@ them available. There are two ways to register the functions.
 The customized function can be directly used in the SQL of a rule if it follows the below convention.
 
 If you have developed a function implementation MyFunction, you should have:
+
 1. In the plugin file, symbol MyFunction is exported.
 2. The compiled MyFunction.so file is located inside _plugins/functions_
 
 To use it, just call it in the SQL inside a rule definition:
+
 ```json
 {
   "id": "rule1",

+ 170 - 174
docs/en_US/extension/native/develop/overview.md

@@ -32,30 +32,30 @@ This part of the content defines which library dependencies are used by the plug
 
 #### about
 
-* trial: indicates whether the plugin is under beta test stage 
+- trial: indicates whether the plugin is under beta test stage
 
-* author
+- author
 
   This part contains the author information of the plugin. The plugin developer can provide this information as appropriate. The information of this part will be displayed in the plugin information list of the management console.
 
-  * name
-  * email
-  * company
-  * website
+  - name
+  - email
+  - company
+  - website
 
-* helpUrl
+- helpUrl
 
   The help file address of the plug-in. The console will link to the corresponding help file according to the language support.
 
-  * en_US: English document help address
-  * zh_CN: Chinese document help address
+  - en_US: English document help address
+  - zh_CN: Chinese document help address
 
-* description
+- description
 
   A simple description of the plugin. The console supports multiple languages.
 
-  * en_US: English description
-  * zh_CN: Chinese description
+  - en_US: English description
+  - zh_CN: Chinese description
 
 #### properties
 
@@ -81,11 +81,11 @@ The list of attributes supported by the plugin and the configuration related to
   - zh_CN
 - type: field type; **This field must be provided;**
 
-  * string
-  * float
-  * int
-  * list_object: list, element is structure
-  * list_string: list, elements is string
+  - string
+  - float
+  - int
+  - list_object: list, element is structure
+  - list_string: list, elements is string
 - values: If the control type is `list-box` or `radio`, **this field must be provided;**
 - Array: The data type can be number, character, boolean, etc.
 
@@ -95,74 +95,72 @@ The following is a sample of metadata file.
 
 ```json
 {
-	"libs": [""],
-	"about": {
-		"trial": false,
-		"author": {
-			"name": "",
-			"email": "",
-			"company": "",
-			"website": ""
-		},
-		"helpUrl": {
-			"en_US": "",
-			"zh_CN": ""
-		},
-		"description": {
-			"en_US": "",
-			"zh_CN": ""
-		}
-	},
-	"properties": {
-		"default": [{
-			"name": "",
-			"default": "",
-			"optional": false,
-			"control": "",
-			"type": "",
-			"hint": {
-				"en_US": "",
-				"zh_CN": ""
-			},
-			"label": {
-				"en_US": "",
-				"zh_CN": ""
-			}
-		}, {
-			"name": "",
-			"default": [{
-				"name": "",
-				"default": "",
-				"optional": false,
-				"control": "",
-				"type": "",
-				"hint": {
-					"en_US": "",
-					"zh_CN": ""
-				},
-				"label": {
-					"en_US": "",
-					"zh_CN": ""
-				}
-			}],
-			"optional": false,
-			"control": "",
-			"type": "",
-			"hint": {
-				"en_US": "",
-				"zh_CN": ""
-			},
-			"label": {
-				"en_US": "",
-				"zh_CN": ""
-			}
-		}]
-	}
+    "libs": [""],
+    "about": {
+        "trial": false,
+        "author": {
+            "name": "",
+            "email": "",
+            "company": "",
+            "website": ""
+        },
+        "helpUrl": {
+            "en_US": "",
+            "zh_CN": ""
+        },
+        "description": {
+            "en_US": "",
+            "zh_CN": ""
+        }
+    },
+    "properties": {
+        "default": [{
+            "name": "",
+            "default": "",
+            "optional": false,
+            "control": "",
+            "type": "",
+            "hint": {
+                "en_US": "",
+                "zh_CN": ""
+            },
+            "label": {
+                "en_US": "",
+                "zh_CN": ""
+            }
+        }, {
+            "name": "",
+            "default": [{
+                "name": "",
+                "default": "",
+                "optional": false,
+                "control": "",
+                "type": "",
+                "hint": {
+                    "en_US": "",
+                    "zh_CN": ""
+                },
+                "label": {
+                    "en_US": "",
+                    "zh_CN": ""
+                }
+            }],
+            "optional": false,
+            "control": "",
+            "type": "",
+            "hint": {
+                "en_US": "",
+                "zh_CN": ""
+            },
+            "label": {
+                "en_US": "",
+                "zh_CN": ""
+            }
+        }]
+    }
 }
 ```
 
-
-
 ## Sinks/Actions
 
 | Name                                                | Description                                                      | Remarks                                                   |
@@ -181,30 +179,30 @@ The content of this part defines which library dependencies are used by the plug
 
 #### about
 
-* trial: indicates whether the plugin is under beta test stage
+- trial: indicates whether the plugin is under beta test stage
 
-* author
+- author
 
   This part contains the author information of the plugin. The plugin developer can provide this information as appropriate. The information of this part will be displayed in the plugin information list of the management console.
 
-  * name
-     * email
-     * company
-     * website
+  - name
+    - email
+    - company
+    - website
 
-* helpUrl
+- helpUrl
 
   The help file address of the plugin. The console will link to the corresponding help file according to the language support.
 
-     * en_US: English document help address
-  * zh_CN: Chinese document help address
+  - en_US: English document help address
+  - zh_CN: Chinese document help address
 
-* description
+- description
 
   A simple description of the plugin. The console supports multiple languages.
 
-  * en_US: English description
-  * zh_CN: Chinese description
+  - en_US: English description
+  - zh_CN: Chinese description
 
 #### properties
 
@@ -230,13 +228,13 @@ The list of attributes supported by the plugin and the configuration related to
   - zh_CN
 - type: field type; **This field must be provided;**
 
-  * string
-  * float
-  * int
-  * list_object: list, element is structure
-  * list_string: list, elements is string
-  * list_float: list, elements is float
-   * list_int: list, elements is int
+  - string
+  - float
+  - int
+  - list_object: list, element is structure
+  - list_string: list, elements is string
+  - list_float: list, elements is float
+  - list_int: list, elements is int
 - values: If the control type is `list-box` or `radio`, **this field must be provided;**
 - Array: The data type can be number, character, boolean, etc.
 
@@ -246,39 +244,39 @@ The following is a sample of metadata file.
 
 ```json
 {
-	"about": {
-		"trial": false,
-		"author": {
-			"name": "",
-			"email": "",
-			"company": "",
-			"website": ""
-		},
-		"helpUrl": {
-			"en_US": "",
-			"zh_CN": ""
-		},
-		"description": {
-			"en_US": "",
-			"zh_CN": ""
-		}
-	},
-	"libs": [""],
-	"properties": [{
-		"name": "",
-		"default": "",
-		"optional": false,
-		"control": "",
-		"type": "",
-		"hint": {
-			"en_US": "",
-			"zh_CN": ""
-		},
-		"label": {
-			"en_US": "",
-			"zh_CN": ""
-		}
-	}]
+    "about": {
+        "trial": false,
+        "author": {
+            "name": "",
+            "email": "",
+            "company": "",
+            "website": ""
+        },
+        "helpUrl": {
+            "en_US": "",
+            "zh_CN": ""
+        },
+        "description": {
+            "en_US": "",
+            "zh_CN": ""
+        }
+    },
+    "libs": [""],
+    "properties": [{
+        "name": "",
+        "default": "",
+        "optional": false,
+        "control": "",
+        "type": "",
+        "hint": {
+            "en_US": "",
+            "zh_CN": ""
+        },
+        "label": {
+            "en_US": "",
+            "zh_CN": ""
+        }
+    }]
 }
 ```
 
@@ -300,37 +298,37 @@ The metadata file format is JSON and is mainly divided into the following parts:
 
 #### about
 
-* trial: indicates whether the plugin is under beta test stage
+- trial: indicates whether the plugin is under beta test stage
 
-* author
+- author
 
   This part contains the author information of the plugin. The plugin developer can provide this information as appropriate. The information of this part will be displayed in the plugin information list of the management console.
 
-  * name
-     * email
-     * company
-     * website
+  - name
+    - email
+    - company
+    - website
 
-* helpUrl
+- helpUrl
 
   The help file address of the plugin. The console will link to the corresponding help file according to the language support.
 
-     * en_US: English document help address
-  * zh_CN: Chinese document help address
+  - en_US: English document help address
+  - zh_CN: Chinese document help address
 
-* description
+- description
 
   A simple description of the plugin. The console supports multiple languages.
 
-  * en_US: English description
-  * zh_CN: Chinese description
+  - en_US: English description
+  - zh_CN: Chinese description
 
 #### functions
 
 - name: attribute name; **This field must be provided;**
 - example
 - hint: hint information of the function; this field is optional;
-- - en_US
+  - en_US
   - zh_CN
 
 #### Sample file
@@ -339,32 +337,30 @@ The following is a sample of metadata file.
 
 ```json
 {
-	"about": {
-		"trial":false,
-		"author": {
-			"name": "",
-			"email": "",
-			"company": "",
-			"website": ""
-		},
-		"helpUrl": {
-			"en_US": "",
-			"zh_CN": ""
-		},
-		"description": {
-			"en_US": "",
-			"zh_CN": ""
-		}
-	},
-	"functions": [{
-		"name": "",
-		"example": "",
-		"hint": {
-			"en_US": "",
-			"zh_CN": ""
-		}
-	}]
+    "about": {
+        "trial":false,
+        "author": {
+            "name": "",
+            "email": "",
+            "company": "",
+            "website": ""
+        },
+        "helpUrl": {
+            "en_US": "",
+            "zh_CN": ""
+        },
+        "description": {
+            "en_US": "",
+            "zh_CN": ""
+        }
+    },
+    "functions": [{
+        "name": "",
+        "example": "",
+        "hint": {
+            "en_US": "",
+            "zh_CN": ""
+        }
+    }]
 }
 ```
-
-

File diff suppressed because it is too large
+ 137 - 96
docs/en_US/extension/native/develop/plugins_tutorial.md


File diff suppressed because it is too large
+ 24 - 16
docs/en_US/extension/native/develop/sink.md


File diff suppressed because it is too large
+ 20 - 17
docs/en_US/extension/native/develop/source.md


+ 13 - 11
docs/en_US/extension/native/overview.md

@@ -1,6 +1,6 @@
 # Native Plugin
 
-eKuiper allows user to customize the different kinds of extensions by the native golang plugin system. 
+eKuiper allows users to customize different kinds of extensions via the native Golang plugin system.
 
 - The source extension is used for extending different stream sources, such as consuming data from other message
   brokers. eKuiper has built-in source support for [MQTT broker](../../guide/sources/builtin/mqtt.md).
@@ -40,9 +40,11 @@ If multiple versions of plugins with the same name in place, only the latest ver
 It is required to build the plugin with exactly the same versions of dependencies, and the plugin must implement the interfaces exported by Kuiper, so the Kuiper project must be in the gopath.
 
 A typical environment for developing plugins is to put the plugin and Kuiper in the same project. To set it up:
+
 1. Clone the Kuiper project.
 2. Create the plugin implementation file inside plugins/sources, plugins/sinks or plugins/functions according to the extension type being developed.
 3. Build the file as a plugin into the same folder. The build command is typically like:
+
 ```bash
 go build -trimpath --buildmode=plugin -o plugins/sources/MySource.so plugins/sources/my_source.go
 ```
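
 For orientation, below is a minimal source sketch that would match the build command above. It is illustrative only: the exported `MySource` constructor follows the plugin naming convention described earlier, and `api.NewDefaultSourceTuple` is assumed to be available in the `pkg/api` package of the Kuiper version you build against, so verify both before relying on them.

 ```go
 package main

 import (
     "time"

     "github.com/lf-edge/ekuiper/pkg/api"
 )

 // mySource emits an incrementing counter once per second.
 type mySource struct {
     topic string
 }

 func (s *mySource) Configure(datasource string, _ map[string]interface{}) error {
     s.topic = datasource // the DATASOURCE property of the stream definition
     return nil
 }

 func (s *mySource) Open(ctx api.StreamContext, consumer chan<- api.SourceTuple, errCh chan<- error) {
     ticker := time.NewTicker(time.Second)
     defer ticker.Stop()
     count := 0
     for {
         select {
         case <-ticker.C:
             count++
             consumer <- api.NewDefaultSourceTuple(
                 map[string]interface{}{"count": count},
                 map[string]interface{}{"topic": s.topic},
             )
         case <-ctx.Done(): // assumes StreamContext embeds context.Context
             return
         }
     }
 }

 func (s *mySource) Close(_ api.StreamContext) error { return nil }

 // MySource is the exported symbol looked up by the plugin runtime
 // (assumption: the runtime accepts a constructor function for sources).
 func MySource() api.Source {
     return &mySource{}
 }
 ```
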
@@ -79,15 +81,15 @@ Below is an example of a function extension to access states. This function will
 ```go
 func (f *accumulateWordCountFunc) Exec(args []interface{}, ctx api.FunctionContext) (interface{}, bool) {
     logger := ctx.GetLogger()    
-	err := ctx.IncrCounter("allwordcount", len(strings.Split(args[0], args[1])))
-	if err != nil {
-		return err, false
-	}
-	if c, err := ctx.GetCounter("allwordcount"); err != nil   {
-		return err, false
-	} else {
-		return c, true
-	}
+    err := ctx.IncrCounter("allwordcount", len(strings.Split(args[0], args[1])))
+    if err != nil {
+        return err, false
+    }
+    if c, err := ctx.GetCounter("allwordcount"); err != nil   {
+        return err, false
+    } else {
+        return c, true
+    }
 }
 ```
 
@@ -103,4 +105,4 @@ the context:
 
 ```go
 ctx.GetRootPath()
-```
+```

File diff suppressed because it is too large
+ 4 - 4
docs/en_US/extension/overview.md


+ 41 - 41
docs/en_US/extension/portable/go_sdk.md

@@ -14,11 +14,11 @@ For source, implement the source interface as below as the same as described in
 
 ```go
 type Source interface {
-	// Open Should be sync function for normal case. The container will run it in go func
-	Open(ctx StreamContext, consumer chan<- SourceTuple, errCh chan<- error)
-	// Configure Called during initialization. Configure the source with the data source(e.g. topic for mqtt) and the properties read from the yaml
-	Configure(datasource string, props map[string]interface{}) error
-	Closable
+    // Open Should be sync function for normal case. The container will run it in go func
+    Open(ctx StreamContext, consumer chan<- SourceTuple, errCh chan<- error)
+    // Configure Called during initialization. Configure the source with the data source(e.g. topic for mqtt) and the properties read from the yaml
+    Configure(datasource string, props map[string]interface{}) error
+    Closable
 }
 ```
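
 As a concrete reference, here is a minimal, hypothetical source that emits a random value periodically. It is a sketch under assumptions: the `tuple` type below implements `SourceTuple` by exposing only `Message()` and `Meta()`, the `interval` property is invented for this example, and `StreamContext` is assumed to embed `context.Context`; see the SDK example linked later for the canonical pattern.

 ```go
 package main

 import (
     "math/rand"
     "time"

     "github.com/lf-edge/ekuiper/sdk/go/api"
 )

 // tuple is a trivial SourceTuple implementation (assumes the interface
 // only requires Message() and Meta()).
 type tuple struct {
     message map[string]interface{}
 }

 func (t *tuple) Message() map[string]interface{} { return t.message }
 func (t *tuple) Meta() map[string]interface{}    { return nil }

 type randomSource struct {
     interval time.Duration
 }

 func (s *randomSource) Configure(_ string, props map[string]interface{}) error {
     s.interval = time.Second
     // "interval" is a made-up property; JSON numbers typically decode to float64.
     if v, ok := props["interval"].(float64); ok {
         s.interval = time.Duration(v) * time.Millisecond
     }
     return nil
 }

 func (s *randomSource) Open(ctx api.StreamContext, consumer chan<- api.SourceTuple, errCh chan<- error) {
     ticker := time.NewTicker(s.interval)
     defer ticker.Stop()
     for {
         select {
         case <-ticker.C:
             consumer <- &tuple{message: map[string]interface{}{"value": rand.Float64()}}
         case <-ctx.Done():
             return
         }
     }
 }

 func (s *randomSource) Close(_ api.StreamContext) error { return nil }
 ```
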
 
@@ -26,13 +26,13 @@ For sink, implement the sink interface as below as the same as described in [nat
 
 ```go
 type Sink interface {
-	//Should be sync function for normal case. The container will run it in go func
-	Open(ctx StreamContext) error
-	//Called during initialization. Configure the sink with the properties from rule action definition
-	Configure(props map[string]interface{}) error
-	//Called when each row of data has transferred to this sink
-	Collect(ctx StreamContext, data interface{}) error
-	Closable
+    //Should be sync function for normal case. The container will run it in go func
+    Open(ctx StreamContext) error
+    //Called during initialization. Configure the sink with the properties from rule action definition
+    Configure(props map[string]interface{}) error
+    //Called when each row of data has transferred to this sink
+    Collect(ctx StreamContext, data interface{}) error
+    Closable
 }
 ```
 
@@ -40,13 +40,13 @@ For function, implement the function interface as below as the same as described
 
 ```go
 type Function interface {
-	//The argument is a list of xsql.Expr
-	Validate(args []interface{}) error
-	//Execute the function, return the result and if execution is successful.
-	//If execution fails, return the error and false.
-	Exec(args []interface{}, ctx FunctionContext) (interface{}, bool)
-	//If this function is an aggregate function. Each parameter of an aggregate function will be a slice
-	IsAggregate() bool
+    //The argument is a list of xsql.Expr
+    Validate(args []interface{}) error
+    //Execute the function, return the result and if execution is successful.
+    //If execution fails, return the error and false.
+    Exec(args []interface{}, ctx FunctionContext) (interface{}, bool)
+    //If this function is an aggregate function. Each parameter of an aggregate function will be a slice
+    IsAggregate() bool
 }
 ```
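
 For example, the `echo` symbol registered in the main program below could be implemented with a few lines. This is a hedged sketch rather than the SDK's official sample:

 ```go
 package main

 import (
     "fmt"

     "github.com/lf-edge/ekuiper/sdk/go/api"
 )

 // echo returns its single argument unchanged.
 type echo struct{}

 func (e *echo) Validate(args []interface{}) error {
     if len(args) != 1 {
         return fmt.Errorf("echo expects exactly 1 argument but got %d", len(args))
     }
     return nil
 }

 func (e *echo) Exec(args []interface{}, _ api.FunctionContext) (interface{}, bool) {
     return args[0], true
 }

 // echo is a common (non-aggregate) function.
 func (e *echo) IsAggregate() bool { return false }
 ```
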
 
@@ -58,30 +58,30 @@ As the portable plugin is a standalone program, it needs a main program to be ab
 package main
 
 import (
-	"github.com/lf-edge/ekuiper/sdk/go/api"
-	sdk "github.com/lf-edge/ekuiper/sdk/go/runtime"
-	"os"
+    "github.com/lf-edge/ekuiper/sdk/go/api"
+    sdk "github.com/lf-edge/ekuiper/sdk/go/runtime"
+    "os"
 )
 
 func main() {
-	sdk.Start(os.Args, &sdk.PluginConfig{
-		Name: "mirror",
-		Sources: map[string]sdk.NewSourceFunc{
-			"random": func() api.Source {
-				return &randomSource{}
-			},
-		},
-		Functions: map[string]sdk.NewFunctionFunc{
-			"echo": func() api.Function {
-				return &echo{}
-			},
-		},
-		Sinks: map[string]sdk.NewSinkFunc{
-			"file": func() api.Sink {
-				return &fileSink{}
-			},
-		},
-	})
+    sdk.Start(os.Args, &sdk.PluginConfig{
+        Name: "mirror",
+        Sources: map[string]sdk.NewSourceFunc{
+            "random": func() api.Source {
+                return &randomSource{}
+            },
+        },
+        Functions: map[string]sdk.NewFunctionFunc{
+            "echo": func() api.Function {
+                return &echo{}
+            },
+        },
+        Sinks: map[string]sdk.NewSinkFunc{
+            "file": func() api.Sink {
+                return &fileSink{}
+            },
+        },
+    })
 }
 ```
 
@@ -91,4 +91,4 @@ For the full examples, please check the sdk [example](https://github.com/lf-edge
 
 ## Package
 
-We need to prepare the executable file and the json file and then package them. For GO SDK, we need to build the main program into an executable by merely using `go build` like a normal program (it is actually a normal program). Due to go binary file may have different binary name in different os, make sure the file name is correct in the json file. For detail, please check [packaing](./overview.md#package).
+We need to prepare the executable file and the json file and then package them. For the Go SDK, we need to build the main program into an executable by merely using `go build` like a normal program (it is actually a normal program). Because the Go binary may have a different name on different OSes, make sure the file name is correct in the json file. For detail, please check [packaging](./overview.md#package).

+ 8 - 7
docs/en_US/extension/portable/overview.md

@@ -14,7 +14,7 @@ We aim to provide SDK for all mainstream language. Currently, [go SDK](go_sdk.md
 
 ## Development
 
-Unlike the native plugin, a portable plugin can bundle multiple *symbols*. Each symbol represents an extension of source, sink or function. The implementation of a symbol is to implement the interface of source, sink or function similar to the native plugin. In portable plugin mode, it is to implement the interface with the selected language. 
+Unlike the native plugin, a portable plugin can bundle multiple *symbols*. Each symbol represents an extension of source, sink or function. The implementation of a symbol is to implement the interface of source, sink or function similar to the native plugin. In portable plugin mode, it is to implement the interface with the selected language.
 
 Then, the user needs to create a main program to define and serve all the symbols. The main program will be run when starting the plugin. The development varies between languages; please check [go SDK](go_sdk.md) and [python SDK](python_sdk.md) for the details.
 
@@ -23,11 +23,13 @@ Then, the user need to create a main program to define and serve all the symbols
 We provide a portable plugin test server to simulate the eKuiper main program side so that developers can start the plugin side manually for debugging.
 
 You can find the tool in `tools/plugin_test_server`. It only supports testing a single plugin. The testing process is as follows:
-0. Edit the testingPlugin variable to match your plugin meta.
-1. Start this server, and wait for handshake.
-2. Start or debug your plugin. Make sure the handshake completed.
-3. Issue startSymbol/stopSymbol REST API  to debug your plugin symbol. The REST API is like:
-   ```
+
+1. Edit the testingPlugin variable to match your plugin meta.
+2. Start this server, and wait for handshake.
+3. Start or debug your plugin. Make sure the handshake completed.
+4. Issue the startSymbol/stopSymbol REST API to debug your plugin symbol. The REST API is like:
+
+   ```shell
    POST http://localhost:33333/symbol/start
    Content-Type: application/json
    
@@ -101,4 +103,3 @@ Currently, there are two limitations compared to native plugins:
 
 1. Fewer context methods are supported. For example, [State](../native/overview.md#state-storage) and the Connection API are not supported; dynamic properties are required to be parsed by developers. However, state is planned to be supported in the future.
 2. In the function interface, the arguments cannot be transferred with the AST, which means the user cannot validate the argument types. The only validation supported may be the argument count. In the sink interface, the collect function parameter data will always be a json encoded `[]byte`; developers need to decode it by themselves, as the sketch below shows.
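
 For instance, a sink's `Collect` would typically start by decoding the payload. The following is a minimal sketch: the `logSink` type is hypothetical, and it assumes the SDK context provides a `GetLogger()` similar to the native API.

 ```go
 package main

 import (
     "encoding/json"
     "fmt"

     "github.com/lf-edge/ekuiper/sdk/go/api"
 )

 type logSink struct{}

 func (s *logSink) Configure(_ map[string]interface{}) error { return nil }
 func (s *logSink) Open(_ api.StreamContext) error           { return nil }
 func (s *logSink) Close(_ api.StreamContext) error          { return nil }

 func (s *logSink) Collect(ctx api.StreamContext, data interface{}) error {
     raw, ok := data.([]byte)
     if !ok {
         return fmt.Errorf("unexpected data type %T, expect []byte", data)
     }
     // Each row arrives as a JSON-encoded object, so decode it first.
     var row map[string]interface{}
     if err := json.Unmarshal(raw, &row); err != nil {
         return err
     }
     ctx.GetLogger().Infof("received row: %v", row)
     return nil
 }
 ```
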
-

+ 7 - 1
docs/en_US/extension/portable/python_sdk.md

@@ -3,6 +3,7 @@
 By using the Python SDK for portable plugins, users can develop portable plugins in the Python language. The Python SDK provides APIs for the source, sink and function interfaces. Additionally, it provides a plugin start function as the execution entry point to define the plugin and its symbols.
 
 To run a python plugin, there are two prerequisites in the runtime environment:
+
 1. Install Python 3.x environment.
+2. Install the nng and ekuiper packages by `pip install nng ekuiper`.
 
@@ -13,6 +14,7 @@ By default, the eKuiper portable plugin runtime will run python script with `pyt
 The process is the same: develop the symbols and then develop the main program. The Python SDK provides similar source, sink and function interfaces in the Python language.
 
 Source interface:
+
 ```python
   class Source(object):
     """abstract class for eKuiper source plugin"""
@@ -34,6 +36,7 @@ Source interface:
 ```
 
 Sink interface:
+
 ```python
 class Sink(object):
     """abstract class for eKuiper sink plugin"""
@@ -60,6 +63,7 @@ class Sink(object):
 ```
 
 Function interface:
+
 ```python
 class Function(object):
     """abstract class for eKuiper function plugin"""
@@ -116,6 +120,7 @@ To use conda environment, the common steps are:
 1. Create and set up the conda environment.
 2. When packaging the plugin, make sure `virtualEnvType` is set to `conda` and `env` is set to the created virtual
    environment. Below is an example.
+
     ```json
     {
       "version": "v1.0.0",
@@ -134,4 +139,5 @@ To use conda environment, the common steps are:
       ]
     }
     ```
-3. If the plugin has installation script, make sure the script install the dependencies to the correct environment.
+
+3. If the plugin has an installation script, make sure the script installs the dependencies into the correct environment.

+ 22 - 11
docs/en_US/extension/wasm/overview.md

@@ -20,9 +20,11 @@ go version
 ```
 
 To check if tinygo is installed, run the following command.
+
 ```shell
 tinygo version
 ```
+
 tinygo download address: https://github.com/tinygo-org/tinygo/releases
 
 To check whether wasmedge is installed, please run the following command.
@@ -30,6 +32,7 @@ To check whether wasmedge is installed, please run the following command.
 ```shell
 wasmedge -v
 ```
+
 wasmedge download location: https://wasmedge.org/book/en/quick_start/install.html
 
 Download command:
@@ -51,6 +54,7 @@ Official tutorial (https://wasmedge.org/book/en/write_wasm/go.html)
 Develop the fibonacci plugin:
 
 fibonacci.go
+
 ```go
 package main
 
@@ -59,16 +63,16 @@ func main() {
 
 //export fib
 func fibArray(n int32) int32 {
-	arr := make([]int32, n)
-	for i := int32(0); i < n; i++ {
-		switch {
-		case i < 2:
-			arr[i] = i
-		default:
-			arr[i] = arr[i-1] + arr[i-2]
-		}
-	}
-	return arr[n-1]
+  arr := make([]int32, n)
+  for i := int32(0); i < n; i++ {
+    switch {
+    case i < 2:
+      arr[i] = i
+    default:
+      arr[i] = arr[i-1] + arr[i-2]
+    }
+  }
+  return arr[n-1]
 }
 ```
 
@@ -95,6 +99,7 @@ After development is complete, we need to package the results into a zip for ins
 In the json file, we need to describe the metadata of this plugin. This information must match the definition in the main plugin program. The following is an example.
 
 fibonacci.json
+
 ```json
 {
   "version": "v1.0.0",
@@ -104,6 +109,7 @@ fibonacci.json
   "wasmEngine": "wasmedge"
 }
 ```
+
 ## Build eKuiper
 
 The officially released eKuiper does not have wasm support; users need to build eKuiper by themselves.
@@ -123,26 +129,31 @@ Check plugin installation.
 ```shell
 bin/kuiper describe plugin wasm fibonacci
 ```
+
 ## Run
 
 1. Create a stream
+
     ```shell
     bin/kuiper create stream demo_fib '(num float) WITH (FORMAT="JSON", DATASOURCE="demo_fib")'
     bin/kuiper query
     select fib(num) from demo_fib
     ```
+
 2. Install EMQX to send data.
+
     ```shell
     docker pull emqx/emqx:v4.0.0
     docker run -d --name emqx -p 1883:1883 -p 8081:8081 -p 8083:8083 -p 8883:8883 -p 8084:8084 -p 18083:18083 emqx/emqx:v4.0.0
     ```
+
 3. Send data by EMQX
 
 Log in to http://127.0.0.1:18083/ with admin/public.
 
 Use TOOLS/Websocket to send data:
 
-Tpoic    : demo_fib 
+Topic    : demo_fib
 
 Messages : {"num" : 25}
 

+ 3 - 3
docs/en_US/getting_started/debug_rules.md

@@ -61,7 +61,7 @@ invalid SQL statement, you will get an error message like this:
   "code": 400,
   "message": "invalid sql: near \"SELEC\": syntax error"
 }
-```    
+```
 
 #### Check the logs
 
 is not receiving any data. You need to check the source side: whether the data source
 configuration is correct. For example, if your MQTT source topic is configured to `topic1`, but you send data
 to `topic2`, then the source will not receive any data, which can be observed by the source metrics.
 
-```
+```text
 "source_demo_0_records_in_total": 0,
 "source_demo_0_records_out_total": 0,
 ```
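
 If you prefer to script this check, the same metrics are returned by the rule status REST endpoint (`GET /rules/{id}/status`). A minimal sketch, assuming eKuiper's REST API listens on the default port 9081 and the rule is named `demo`:

 ```go
 package main

 import (
     "fmt"
     "io"
     "net/http"
 )

 func main() {
     // The status endpoint returns the rule metrics as a JSON document.
     resp, err := http.Get("http://127.0.0.1:9081/rules/demo/status")
     if err != nil {
         panic(err)
     }
     defer resp.Body.Close()
     body, err := io.ReadAll(resp.Body)
     if err != nil {
         panic(err)
     }
     // Look for counters such as source_demo_0_records_in_total in the output.
     fmt.Println(string(body))
 }
 ```
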
@@ -619,4 +619,4 @@ Finally, we'll receive the data on the `result` topic when condition met:
 ## Summary
 
 In this tutorial, we learned how to diagnose a rule from the metrics and logs, and how to debug rules. We also have a step-by-step
-guide to create a rule and debug it. Hope this tutorial can help you to diagnose your rules.
+guide to create a rule and debug it. Hope this tutorial can help you diagnose your rules.

+ 14 - 12
docs/en_US/getting_started/getting_started.md

@@ -4,13 +4,13 @@ Starting from download and installation, this document will guide you to start e
 
 ## Install eKuiper
 
-eKuiper provides docker image, binary package and helm chart to install. 
+eKuiper provides a docker image, binary packages and a helm chart for installation.
 
 In this tutorial, we provide both web UI and CLI to create and manage the rules. If you want to run the eKuiper manager, which is the web management console for eKuiper, please refer to [running eKuiper with management console](../installation.md#running-ekuiper-with-management-console).
 
 ### Running in docker
 
-Docker deployment is the fastest way to start experimenting with eKuiper. 
+Docker deployment is the fastest way to start experimenting with eKuiper.
 
 ```shell
 docker run -p 9081:9081 -d --name kuiper -e MQTT_SOURCE__DEFAULT__SERVER="tcp://broker.emqx.io:1883" lfedge/ekuiper:$tag
@@ -130,7 +130,7 @@ query is submit successfully.
 
 Now if any data is published to the MQTT server available at `tcp://127.0.0.1:1883`, it prints messages as follows.
 
-```
+```shell
 kuiper > [{"avg_hum":41,"count":4,"max_hum":91}]
 [{"avg_hum":62,"count":5,"max_hum":96}]
 [{"avg_hum":36,"count":3,"max_hum":63}]
@@ -144,7 +144,7 @@ kuiper > [{"avg_hum":41,"count":4,"max_hum":91}]
 
 You can press `ctrl + c` to stop the query, and the server will terminate streaming if it detects that the client has disconnected from the query. Below is the log printed at the server.
 
-```
+```text
 ...
 time="2019-09-09T21:46:54+08:00" level=info msg="The client seems no longer fetch the query result, stop the query now."
 time="2019-09-09T21:46:54+08:00" level=info msg="stop the query."
@@ -154,6 +154,7 @@ time="2019-09-09T21:46:54+08:00" level=info msg="stop the query."
 ### Writing the rule
 
 As part of the rule, we need to specify the following:
+
 * rule id: the id of the rule. It must be unique
 * rule name: the description of the rule
 * sql: the query to run for the rule
@@ -179,11 +180,12 @@ The content of `myRule` file as below. It publishes the result to the mqtt topic
     }]
 }
 ```
+
 You should see a successful message `rule myRule created` in the stream log, and the rule is now set up and running.
 
 ### Managing the rules
 
-You can use command line tool to stop the rule for a while and restart it and other management work. The rule name is the identifier of a rule. 
+You can use the command line tool to stop the rule for a while, restart it, and perform other management work. The rule name is the identifier of a rule.
 
 ```sh
 $ bin/kuiper stop rule myRule
@@ -207,10 +209,10 @@ Below is an example data and the output in MQTT X.
 
 Refer to the following topics for guidance on using eKuiper.
 
-- [Installation](../installation.md)
-- [Rules](../guide/rules/overview.md)
-- [SQL reference](../sqls/overview.md)
-- [Stream](../guide/streams/overview.md)
-- [Sink](../guide/sinks/overview.md)
-- [Command line interface tools - CLI](../api/cli/overview.md)
-- [Management Console](../guide/rules/overview.md)
+* [Installation](../installation.md)
+* [Rules](../guide/rules/overview.md)
+* [SQL reference](../sqls/overview.md)
+* [Stream](../guide/streams/overview.md)
+* [Sink](../guide/sinks/overview.md)
+* [Command line interface tools - CLI](../api/cli/overview.md)
+* [Management Console](../guide/rules/overview.md)

+ 5 - 5
docs/en_US/getting_started/quick_start_docker.md

@@ -1,6 +1,6 @@
 ## 5 minutes quick start
 
-1. Pull a eKuiper Docker image from `https://hub.docker.com/r/lfedge/ekuiper/tags`. It's recommended to use `alpine` image in this tutorial (refer to [eKuiper Docker](https://hub.docker.com/r/lfedge/ekuiper) for the difference of eKuiper Docker image variants). 
+1. Pull an eKuiper Docker image from `https://hub.docker.com/r/lfedge/ekuiper/tags`. It's recommended to use the `alpine` image in this tutorial (refer to [eKuiper Docker](https://hub.docker.com/r/lfedge/ekuiper) for the differences between eKuiper Docker image variants).
 
 2. Set the eKuiper source to an MQTT server. This sample uses the server located at `tcp://broker.emqx.io:1883`. `broker.emqx.io` is a public MQTT test server hosted by [EMQ](https://www.emqx.io).
 
@@ -32,9 +32,9 @@
    # mqttx pub -h broker.emqx.io -m '{"temperature": 40, "humidity" : 20}' -t devices/device_001/messages
    ```
 
-5. If everything goes well,  you can see the message is print on docker `bin/kuiper query` window. Please try to publish another message with `temperature` less than 30, and it will be filtered by WHERE condition of the SQL. 
+5. If everything goes well, you can see the message printed in the docker `bin/kuiper query` window. Please try to publish another message with `temperature` less than 30, and it will be filtered out by the WHERE condition of the SQL.
 
-   ```
+   ```shell
    kuiper > select * from demo WHERE temperature > 30;
    [{"temperature": 40, "humidity" : 20}]
    ```
@@ -47,5 +47,5 @@ You can also refer to [eKuiper dashboard documentation](../operation/manager-ui/
 
 Ready to explore more powerful features of eKuiper? Refer to the resources below for how to apply LF Edge eKuiper at the edge and integrate with AWS / Azure IoT cloud.
 
-   - [Lightweight edge computing eKuiper and Azure IoT Hub integration solution](https://www.emqx.com/en/blog/lightweight-edge-computing-emqx-kuiper-and-azure-iot-hub-integration-solution) 
-   - [Lightweight edge computing eKuiper and AWS IoT Hub integration solution](https://www.emqx.com/en/blog/lightweight-edge-computing-emqx-kuiper-and-aws-iot-hub-integration-solution)
+- [Lightweight edge computing eKuiper and Azure IoT Hub integration solution](https://www.emqx.com/en/blog/lightweight-edge-computing-emqx-kuiper-and-azure-iot-hub-integration-solution)
+- [Lightweight edge computing eKuiper and AWS IoT Hub integration solution](https://www.emqx.com/en/blog/lightweight-edge-computing-emqx-kuiper-and-aws-iot-hub-integration-solution)

File diff suppressed because it is too large
+ 38 - 33
docs/en_US/guide/ai/python_tensorflow_lite_tutorial.md


+ 39 - 46
docs/en_US/guide/ai/tensorflow_lite.md

@@ -7,10 +7,8 @@ software which can be run at all kinds of resource constrained IoT devices.
 mobile, embedded, and IoT devices. It enables on-device machine learning inference with low latency and a small binary
 size.
 
-
 By integrating eKuiper and TensorFlow Lite, users only need to upload a pre-built TensorFlow model, which can be used in rules to analyze data in the flow. In this tutorial, we will demonstrate how to quickly call a pre-trained TensorFlow model through eKuiper.
 
-
 ## Prerequisite
 
 ### Model Download
@@ -24,7 +22,6 @@ This tutorial uses the eKuiper Docker image `lfedge/ekuiper:1.8.0-slim` and the
 
 ### TensorFlow Lite Plugin Download
 
-
 TensorFlow Lite is provided as a precompiled plug-in, and users need to download and install it themselves.
 
 ![download plugin](../../resources/tflite_install.png)
@@ -45,7 +42,6 @@ Note that the model input data format must be a byte array, and json does not su
 Users can upload model files to eKuiper through eKuiper manager, as shown below.
 ![model upload](../../resources/sin_upload.png)
 
-
 ### Call Model in TensorFlow Lite
 
 After users install the TensorFlow Lite plugin, they can call the model in SQL as normal built-in functions. The first parameter is the model name, and the second parameter is the data to be processed.
@@ -56,7 +52,6 @@ After users install the TensorFlow Lite plugin, they can call the model in SQL a
 The result is shown in the figure below: when the input is 1.57, the inference result is about 1.
 ![result check](../../resources/mqttx_sin.png)
 
-
 ## MobileNet V1 Model Set up
 
 Please download the [MobileNet V1 model](https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_1.0_224/1/default/1), which takes image input of 224 * 224 pixels and returns a float array of size 1001.
@@ -78,13 +73,11 @@ Since the precompiled model requires 224 * 224 pixel image data, another precomp
 ![image install](../../resources/image_install.png)
 ![resize register](../../resources/image_register.png)
 
-
 ### Model Upload
 
 Users can upload model files to eKuiper through eKuiper manager, as shown below.
 ![model upload](../../resources/mobilenet_upload.png)
 
-
 ### Call Model in TensorFlow Lite
 
 After users install the TensorFlow Lite plugin, they can call the model in SQL as normal built-in functions. The first parameter is the model name, and the second parameter is the return result of calling the resize function, where `self` is the key corresponding to the binary data.
@@ -106,57 +99,57 @@ Users can write code to filter out the item tags with the highest matching degre
 package demo
 
 import (
-	"bufio"
-	"os"
-	"sort"
+    "bufio"
+    "os"
+    "sort"
 )
 
 func loadLabels() ([]string, error) {
-	labels := []string{}
-	f, err := os.Open("./labels.txt")
-	if err != nil {
-		return nil, err
-	}
-	defer f.Close()
-	scanner := bufio.NewScanner(f)
-	for scanner.Scan() {
-		labels = append(labels, scanner.Text())
-	}
-	return labels, nil
+    labels := []string{}
+    f, err := os.Open("./labels.txt")
+    if err != nil {
+        return nil, err
+    }
+    defer f.Close()
+    scanner := bufio.NewScanner(f)
+    for scanner.Scan() {
+        labels = append(labels, scanner.Text())
+    }
+    return labels, nil
 }
 
 type result struct {
-	score float64
-	index int
+    score float64
+    index int
 }
 
 func bestMatchLabel(keyValue map[string]interface{}) (string, bool) {
-	labels, _ := loadLabels()
-	resultArray := keyValue["tfLite"].([]interface{})
-	outputArray := resultArray[0].([]byte)
-	outputSize := len(outputArray)
-	
-	var results []result
-	for i := 0; i < outputSize; i++ {
-		score := float64(outputArray[i]) / 255.0
-		if score < 0.2 {
-			continue
-		}
-		results = append(results, result{score: score, index: i})
-	}
-	sort.Slice(results, func(i, j int) bool {
-		return results[i].score > results[j].score
-	})
-	// output is the biggest score labelImage
-	if len(results) > 0 {
-		return labels[results[0].index], true
-	} else {
-		return "", true
-	}
+    labels, _ := loadLabels()
+    resultArray := keyValue["tfLite"].([]interface{})
+    outputArray := resultArray[0].([]byte)
+    outputSize := len(outputArray)
+    
+    var results []result
+    for i := 0; i < outputSize; i++ {
+        score := float64(outputArray[i]) / 255.0
+        if score < 0.2 {
+            continue
+        }
+        results = append(results, result{score: score, index: i})
+    }
+    sort.Slice(results, func(i, j int) bool {
+        return results[i].score > results[j].score
+    })
+    // output the label with the biggest score
+    if len(results) > 0 {
+        return labels[results[0].index], true
+    } else {
+        return "", true
+    }
 
 }
 ```
 
 ## Conclusion
 
-In this tutorial, we use the pre-compiled TensorFlow Lite plugin to directly call the pre-trained TensorFlow Lite model in ekuiper, which avoids writing code and simplifies the inference steps.
+In this tutorial, we use the pre-compiled TensorFlow Lite plugin to directly call the pre-trained TensorFlow Lite model in eKuiper, which avoids writing code and simplifies the inference steps.

+ 9 - 10
docs/en_US/guide/ai/tensorflow_lite_external_function_tutorial.md

@@ -10,22 +10,22 @@ size.
 By integrating eKuiper and TensorFlow Lite, users can analyze the data in streams with AI using prebuilt TensorFlow models.
 In this tutorial, we will walk you through building an eKuiper external function plugin to label pictures produced by an edge
 device in the stream with a pre-trained image recognition TensorFlow model. By using external functions, eKuiper and the external functions
-can run in totally different processes or host machines, which means eKuiper and external functions can have different lifecycles, what's more, external functions 
+can run in totally different processes or host machines, which means eKuiper and the external functions can have different lifecycles; what's more, the external functions
 can provide services to other consumers besides eKuiper.
 
 ## Prerequisite
 
 The external function plugin will be a gRPC server, so users should have knowledge of gRPC. This tutorial gives example code to set up the gRPC server.
-Users can download the example code [here](https://github.com/lf-edge/ekuiper/blob/master/docs/resources/pythonGRPC.zip). 
+Users can download the example code [here](https://github.com/lf-edge/ekuiper/blob/master/docs/resources/pythonGRPC.zip).
 
-Users also need have basic knowledge of Docker. 
+Users also need to have basic knowledge of Docker.
 
 ## Develop the external function
 
 In the example code, the gRPC server provides the ``label`` method, and users just need to write an interface description file and register it into eKuiper. Then eKuiper can call the RPC method
 just like built-in functions. The ``label`` method is powered by ``tflite_runtime`` image classification; for more detail, please check the `label.py` file in the example code.
 
-This is the proto file for the external functions plugins that provide services. The parameter of ``label`` method should be base64 encoded image. 
+This is the proto file for the external function plugin that provides the services. The parameter of the ``label`` method should be a base64 encoded image.
 
 ```proto
 syntax = "proto3";
@@ -67,8 +67,7 @@ And then set up the service by following command
  docker run -d  -p 50051:50051 --name rpc-test test:1.1.1
 ```
 
-Now, the gRPC server are providing services on 50051 port. 
-
+Now, the gRPC server is providing services on port 50051.
 
 ## Package and register the external function
 
@@ -83,7 +82,6 @@ For more detail about the file format and content, please refer to [this](../../
 
 You can get the example zip file from the ``ekuiper_package`` folder in the [example code](https://github.com/lf-edge/ekuiper/blob/master/docs/resources/pythonGRPC.zip).
 
-
 ### Register the external function
 
 Put the sample.zip file in the /tmp directory on the same machine as eKuiper and register it by CLI:
@@ -92,7 +90,7 @@ put the sample.zip file in /tmp directory in the same machine with eKuiper and r
 # bin/kuiper create service sample '{"name": "sample","file": "file:///tmp/sample.zip"}'
 ```
 
-## Run the external function 
+## Run the external function
 
 Once the external function is registered, we can use it in our rule. We will create a rule to receive base64 encoded image data from an MQTT topic and label the image by the tflite model.
 
@@ -119,6 +117,7 @@ kuiper >  select label(image) from demo
 ### Feed the data
 
 Users need to send the data in json format like this:
+
 ```json
 {"image": "base64 encoded data"}
 ```
 Users can get the real data from the ``images/example.json`` file in the example code.
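
 If you want to script the publishing step instead of using a GUI client, the sketch below base64-encodes a local image and publishes it. It assumes a broker at `tcp://127.0.0.1:1883`, that the stream's topic is `demo`, and a hypothetical image path; adjust them to your setup.

 ```go
 package main

 import (
     "encoding/base64"
     "encoding/json"
     "fmt"
     "os"

     mqtt "github.com/eclipse/paho.mqtt.golang"
 )

 func main() {
     img, err := os.ReadFile("images/example.jpg") // hypothetical image path
     if err != nil {
         panic(err)
     }
     payload, err := json.Marshal(map[string]string{
         "image": base64.StdEncoding.EncodeToString(img),
     })
     if err != nil {
         panic(err)
     }
     opts := mqtt.NewClientOptions().AddBroker("tcp://127.0.0.1:1883")
     client := mqtt.NewClient(opts)
     if token := client.Connect(); token.Wait() && token.Error() != nil {
         panic(token.Error())
     }
     // "demo" is assumed to match the DATASOURCE of the stream used by the rule.
     if token := client.Publish("demo", 0, false, payload); token.Wait() && token.Error() != nil {
         panic(token.Error())
     }
     client.Disconnect(250)
     fmt.Println("image published")
 }
 ```
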
 
 ### Check the result
 
-You can get the result after you publish the base64 encoded image. 
+You can get the result after you publish the base64 encoded image.
 
 ```shell
 kuiper > [{"label":{"results":[{"confidence":0.5789139866828918,"label":"tailed frog"},{"confidence":0.3095814287662506,"label":"bullfrog"},{"confidence":0.040725912898778915,"label":"whiptail"},{"confidence":0.03226377069950104,"label":"frilled lizard"},{"confidence":0.01566782221198082,"label":"agama"}]}}]
@@ -135,4 +134,4 @@ kuiper > [{"label":{"results":[{"confidence":0.5789139866828918,"label":"tailed
 
 ## Conclusion
 
-In this tutorial, we walk you through building external function to leverage a pre-trained TensorFlowLite model. If you need to use other gRPC services, just follow the steps to create customized function. Enjoy the AI in edge device.
+In this tutorial, we walked you through building an external function to leverage a pre-trained TensorFlow Lite model. If you need to use other gRPC services, just follow these steps to create customized functions. Enjoy AI on edge devices.

+ 52 - 51
docs/en_US/guide/ai/tensorflow_lite_tutorial.md

@@ -22,28 +22,28 @@ To integrate eKuiper with TensorFlow lite, we will develop a customized eKuiper
 To develop the function plugin, we need to:
 
 1. Create the plugin go file.  For example, in eKuiper source code, create *plugins/functions/labelImage/labelImage.go* file.
-2. Create a struct that implements [api.Function interface](https://github.com/lf-edge/ekuiper/blob/master/pkg/api/stream.go). 
+2. Create a struct that implements [api.Function interface](https://github.com/lf-edge/ekuiper/blob/master/pkg/api/stream.go).
 3. Export the struct.
 
 The key part of the implementation is the *Exec* function. The pseudo code is as follows:
 
 ```go
 func (f *labelImage) Exec(args []interface{}, ctx api.FunctionContext) (interface{}, bool) {
-	
+    
     //... do some initialization and validation
     
     // decode the input image
-	img, _, err := image.Decode(bytes.NewReader(arg[0]))
-	if err != nil {
-		return err, false
-	}
-	var outerErr error
-	f.once.Do(func() {		
-		// Load labels, tflite model and initialize the tflite interpreter
-	})
-
-	// Run the interpreter against the input image
-	
+    img, _, err := image.Decode(bytes.NewReader(args[0].([]byte)))
+    if err != nil {
+        return err, false
+    }
+    var outerErr error
+    f.once.Do(func() {        
+        // Load labels, tflite model and initialize the tflite interpreter
+    })
+
+    // Run the interpreter against the input image
+    
     // Return the label with the highest possibility
     return result, true
 }
@@ -53,8 +53,8 @@ Another thing to notice is the export of plugin. The function is stateless, so w
 
 ```go
 var LabelImage = labelImage{
-	modelPath: "labelImage/mobilenet_quant_v1_224.tflite",
-	labelPath: "labelImage/labels.txt",
+    modelPath: "labelImage/mobilenet_quant_v1_224.tflite",
+    labelPath: "labelImage/labels.txt",
 }
 ```
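+
+Besides *Exec* and the exported symbol, the [api.Function interface](https://github.com/lf-edge/ekuiper/blob/master/pkg/api/stream.go) also declares *Validate* and *IsAggregate*. A minimal sketch of the remaining methods in the same plugin file is shown below; the bodies are illustrative assumptions, not the plugin's actual code.
+
+```go
+// Validate is called when a rule is created; check the argument count here (sketch).
+func (f *labelImage) Validate(args []interface{}) error {
+    if len(args) != 1 {
+        return fmt.Errorf("labelImage function only supports 1 parameter but got %d", len(args))
+    }
+    return nil
+}
+
+// IsAggregate returns false because the function processes one row at a time.
+func (f *labelImage) IsAggregate() bool {
+    return false
+}
+```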
 
@@ -105,6 +105,7 @@ There is a very simple [instruction](https://github.com/tensorflow/tensorflow/tr
    $ cp bazel-bin/tensorflow/lite/libtensorflowlite.so lib
    $ cp bazel-bin/tensorflow/lite/c/libtensorflowlite_c.so lib
    ```
+
 6. Install the so files.
    1. Update ldconfig file. `sudo vi /etc/ld.so.conf.d/tflite.conf`.
    2. Add the path `{{tensorflowPath}}/lib` to tflite.conf then save and exit.
@@ -131,11 +132,11 @@ By these commands, the plugin is built into plugins/functions/LabelImage.so and
 Package all files and directories inside *plugins/functions/labelImage* into a zip file along with the built LabelImage.so. The file structure inside the zip file should be like:
 
 - etc
-    - labels.txt
-    - mobilenet_quant_v1_224.tflite
+  - labels.txt
+  - mobilenet_quant_v1_224.tflite
 - lib
-    - libtensorflowlite.so
-    - libtensorflowlite_c.so
+  - libtensorflowlite.so
+  - libtensorflowlite_c.so
 - install.sh
 - LabelImage.so
 - tflite.conf
@@ -184,41 +185,41 @@ Here we create a go program to send image data to the tfdemo topic to be process
 package main
 
 import (
-	"fmt"
-	"os"
-	"time"
+    "fmt"
+    "os"
+    "time"
 
-	mqtt "github.com/eclipse/paho.mqtt.golang"
+    mqtt "github.com/eclipse/paho.mqtt.golang"
 )
 
 func main() {
-	const TOPIC = "tfdemo"
-
-	images := []string{
-		"peacock.png",
-		"frog.jpg",
-		// other images you want
-	}
-	opts := mqtt.NewClientOptions().AddBroker("tcp://yourownhost:1883")
-	client := mqtt.NewClient(opts)
-	if token := client.Connect(); token.Wait() && token.Error() != nil {
-		panic(token.Error())
-	}
-	for _, image := range images {
-		fmt.Println("Publishing " + image)
-		payload, err := os.ReadFile(image)
-		if err != nil {
-			fmt.Println(err)
-			continue
-		}
-		if token := client.Publish(TOPIC, 0, false, payload); token.Wait() && token.Error() != nil {
-			fmt.Println(token.Error())
-		} else {
-			fmt.Println("Published " + image)
-		}
-		time.Sleep(1 * time.Second)
-	}
-	client.Disconnect(0)
+    const TOPIC = "tfdemo"
+
+    images := []string{
+        "peacock.png",
+        "frog.jpg",
+        // other images you want
+    }
+    opts := mqtt.NewClientOptions().AddBroker("tcp://yourownhost:1883")
+    client := mqtt.NewClient(opts)
+    if token := client.Connect(); token.Wait() && token.Error() != nil {
+        panic(token.Error())
+    }
+    for _, image := range images {
+        fmt.Println("Publishing " + image)
+        payload, err := os.ReadFile(image)
+        if err != nil {
+            fmt.Println(err)
+            continue
+        }
+        if token := client.Publish(TOPIC, 0, false, payload); token.Wait() && token.Error() != nil {
+            fmt.Println(token.Error())
+        } else {
+            fmt.Println("Published " + image)
+        }
+        time.Sleep(1 * time.Second)
+    }
+    client.Disconnect(0)
 }
 
 ```
@@ -238,4 +239,4 @@ The images are labeled correctly.
 
 ## Conclusion
 
-In this tutorial, we walk you through building a customized eKuiper plugin to leverage a pre-trained TensorFlowLite model. If you need to use other models, just follow the steps to create another function. Notice that, the built TensorFlow C API can be shared among all functions if running in the same environment. Enjoy the AI in edge device.
+In this tutorial, we walked you through building a customized eKuiper plugin to leverage a pre-trained TensorFlow Lite model. If you need to use other models, just follow the steps to create another function. Note that the built TensorFlow C API can be shared among all functions running in the same environment. Enjoy AI on edge devices.

+ 6 - 4
docs/en_US/guide/rules/graph_rule.md

@@ -49,7 +49,7 @@ The `graph` property is a JSON presentation of the DAG. It is consisted by `node
 Each node in the graph JSON has at least 3 fields:
 
 - type: the type of the node, which could be `source`, `operator` or `sink`.
-- nodeType: the node type which defines the business logic of a node. There are various node types including built-in types and extended types defined by the plugins. 
+- nodeType: the node type which defines the business logic of a node. There are various node types including built-in types and extended types defined by the plugins.
 - props: the properties for the node. It is different for each nodeType.
 
 ### Node Type
@@ -69,7 +69,6 @@ For source node, the nodeType is the type of the source like `mqtt` and `edgex`.
 
 For sink node, the nodeType is the type of the sink like `mqtt` and `edgex`. Please refer to [sink](../sinks/overview.md) for all supported types. All sink nodes share some common properties, but each type will have some properties of its own.
 
-
 For operator nodes, the nodeTypes are newly defined. Each nodeType will have different properties.
 
 ### Source Node
@@ -373,12 +372,13 @@ are met.
 
 This node allows JavaScript code to be run against the messages that are passed through it.
 
-- script: The inline javascript code to be run. 
+- script: The inline javascript code to be run.
 - isAgg: Whether the node is for aggregated data.
 
 There must be a function named `exec` defined in the script. If isAgg is false, the script node accepts a single message and must return a processed message. If isAgg is true, it receives a message array (e.g. when connected to a window) and must return an array.
 
 1. Example to deal with single message.
+
    ```json
    {
      "type": "operator",
@@ -388,7 +388,9 @@ There must be a function named `exec` defined in the script. If isAgg is false,
       }
    }
    ```
+
 2. Example to deal with window aggregated messages.
+
    ```json
    {
       "type": "operator",
@@ -398,4 +400,4 @@ There must be a function named `exec` defined in the script. If isAgg is false,
         "isAgg": true
       }
    }
-   ```
+   ```

+ 2 - 3
docs/en_US/guide/rules/overview.md

@@ -39,7 +39,7 @@ There are two ways to define the flow aka. business logic of a rule. Either usin
 
 ### SQL Query
 
-By specifying the `sql` and `actions` property, we can define the business logic of a rule in a declarative way. Among these, `sql` defines the SQL query to run against a predefined stream which will transform the data. The output data can then route to multiple locations by `actions`. 
+By specifying the `sql` and `actions` properties, we can define the business logic of a rule in a declarative way. Among these, `sql` defines the SQL query to run against a predefined stream, which will transform the data. The output data can then be routed to multiple locations by `actions`.
 
 #### SQL
 
@@ -164,7 +164,7 @@ The restart strategy options include:
 | multiplier   | float: 2             | The exponential to increase the interval.                                                                                             |
 | jitterFactor | float: 0.1           | How large a random value will be added to or subtracted from the delay to prevent restarting multiple rules at the same time.          |
 
-The default values can be changed by editing the `etc/kuiper.yaml` file. 
+The default values can be changed by editing the `etc/kuiper.yaml` file.
 
 ### Scheduled Rule
 
@@ -244,5 +244,4 @@ When we try to send a record to the stream, the status of the rule is obtained a
 }
 ```
 
-
 It can be seen that the `records_in_total` and `records_out_total` of each operator have changed from 0 to 1, which means that each operator has received one record and passed one record to the next operator; finally, the record was sent to the `sink`, and the `sink` wrote 1 record.

+ 1 - 3
docs/en_US/guide/rules/rule_pipeline.md

@@ -51,8 +51,6 @@ Rule pipeline will be implicit. Each rule can use an memory sink / source. This
 ```
 
 By using the memory topic as the bridge, we now form a rule pipeline:
-`rule1->{rule2-1, rule2-2}`. The pipeline can be multiple to multiple and very flexible. 
+`rule1->{rule2-1, rule2-2}`. The pipeline can be many to many and very flexible.
 
 Notice that the memory sink can be used together with other sinks to create multiple actions for a rule. And the memory source topic can use wildcards to subscribe to a filtered topic list.
-
-     

+ 6 - 5
docs/en_US/guide/rules/state_and_fault_tolerance.md

@@ -1,6 +1,7 @@
 ## State
 
 eKuiper supports stateful rule stream. There are two kinds of states in eKuiper:
+
 1. Internal state for window operation and rewindable source
 2. User state exposed to extensions with stream context, check [state storage](../../extension/native/overview.md#state-storage).
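+
+As a hedged illustration of the second kind, an extension might use the state accessors on the function context roughly as follows. This is a sketch assuming the `GetState`/`PutState` methods described in the extension documentation; the counter logic and the `myCounter` type are made up for illustration.
+
+```go
+// Sketch: keep a message count in user state from a custom function.
+func (f *myCounter) Exec(args []interface{}, ctx api.FunctionContext) (interface{}, bool) {
+    v, err := ctx.GetState("count")
+    if err != nil {
+        return err, false
+    }
+    count := 0
+    if v != nil {
+        count = v.(int)
+    }
+    count++
+    if err := ctx.PutState("count", count); err != nil {
+        return err, false
+    }
+    return count, true
+}
+```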
 
@@ -18,7 +19,7 @@ When things go wrong in a stream processing application, it is possible to have
 
 1. At-most-once(0): eKuiper makes no effort to recover from failures
 2. At-least-once(1): Nothing is lost, but you may experience duplicated results
-3. Exactly-once(2): Nothing is lost or duplicated 
+3. Exactly-once(2): Nothing is lost or duplicated
 
 Given that eKuiper recovers from faults by rewinding and replaying the source data streams, describing the ideal situation as exactly once does not mean that every event will be processed exactly once. Instead, it means that every event will affect the state being managed by eKuiper exactly once.
 
@@ -34,13 +35,13 @@ For extended source, the user must implement the api.Rewindable interface as wel
 
 ```go
 type Rewindable interface {
-	GetOffset() (interface{}, error)
-	Rewind(offset interface{}) error
+    GetOffset() (interface{}, error)
+    Rewind(offset interface{}) error
 }
 ```
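+
+A hedged sketch of what an implementation might look like for a source that tracks a numeric read position is shown below; the struct name, the offset field and its type are assumptions for illustration.
+
+```go
+// Sketch: the source records how far it has read so that eKuiper can
+// checkpoint the offset and rewind to it after a recovery.
+type myRewindableSource struct {
+    offset int64
+}
+
+func (s *myRewindableSource) GetOffset() (interface{}, error) {
+    return s.offset, nil
+}
+
+func (s *myRewindableSource) Rewind(offset interface{}) error {
+    o, ok := offset.(int64)
+    if !ok {
+        return fmt.Errorf("unexpected offset type %T", offset)
+    }
+    s.offset = o
+    return nil
+}
+```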
 
 #### Sink consideration
 
-We cannot guarantee the sink to receive a data exactly once. If failures happen during the period of checkpointing, some states which have sent to the sink may not be checkpointed. And those states will be replayed as they are not restored because of not being checkpointed. In this case, the sink may receive them more than once. 
+We cannot guarantee that the sink receives the data exactly once. If failures happen during checkpointing, some states that have been sent to the sink may not be checkpointed. Those states will be replayed because, not having been checkpointed, they are not restored. In this case, the sink may receive them more than once.
 
-To implement exactly-once, the user will have to implement deduplication tailored to fit the various sinking system.
+To implement exactly-once, the user will have to implement deduplication tailored to the target sink system.
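+
+For example, a custom sink could drop replayed records by remembering a message id, as in the sketch below. The `id` field, the in-memory set and the `writeToTarget` helper are assumptions for illustration; a real implementation would persist the ids or rely on an idempotent write in the target system.
+
+```go
+// Sketch: an idempotent collect that skips records whose id was already written.
+type dedupSink struct {
+    seen map[string]bool
+}
+
+func (s *dedupSink) Collect(ctx api.StreamContext, item interface{}) error {
+    m, ok := item.(map[string]interface{})
+    if !ok {
+        return fmt.Errorf("unexpected data type %T", item)
+    }
+    id, _ := m["id"].(string)
+    if s.seen[id] {
+        return nil // a duplicate from a replay, already written
+    }
+    if err := s.writeToTarget(m); err != nil {
+        return err
+    }
+    s.seen[id] = true
+    return nil
+}
+
+func (s *dedupSink) writeToTarget(m map[string]interface{}) error {
+    // The actual write to the external system would go here.
+    return nil
+}
+```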

File diff suppressed because it is too large
+ 3 - 2
docs/en_US/guide/serialization/protobuf_tutorial.md


+ 19 - 4
docs/en_US/guide/serialization/serialization.md

@@ -37,6 +37,7 @@ All currently supported formats, their supported codec methods and modes are sho
 When using the `custom` format or the `protobuf` format, the user can customize the codec and schema in the form of a Go language plugin. Among them, `protobuf` only supports custom codecs, and the schema needs to be defined by a `*.proto` file. The steps for customizing the format are as follows:
 
 1. Implement codec-related interfaces. The Encode function encodes the incoming data (currently always `map[string]interface{}`) into a byte array. The Decode function, on the other hand, decodes the byte array into `map[string]interface{}`. The decode function is called in source, while the encode function will be called in sink.
+
     ```go
     // Converter converts bytes & map or []map according to the schema
     type Converter interface {
@@ -44,17 +45,23 @@ When using `custom` format or `protobuf` format, the user can customize the code
         Decode(b []byte) (interface{}, error)
     }
     ```
+
 2. Implement the schema description interface. If the custom format is strongly typed, then this interface can be implemented. The interface returns a JSON schema-like string for use by the source. The returned data structure will be used as a physical schema to help eKuiper implement capabilities such as SQL validation and optimization during the parse and load phases.
+
     ```go
     type SchemaProvider interface {
-	    GetSchemaJson() string
+      GetSchemaJson() string
     }
     ```
+
 3. Compile as a plugin so file. Usually, format extensions do not need to depend on the main eKuiper project. Due to the limitations of the Go language plugin system, the compilation of the plugin still needs to be done in the same compilation environment as the main eKuiper application, including the same operating system, Go language version, etc. If you need to [deploy to the official docker](#build-format-plugin-with-docker), you can use the corresponding docker image for compilation.
+
     ```shell
     go build -trimpath --buildmode=plugin -o data/test/myFormat.so internal/converter/custom/test/*.go
     ```
+
 4. Register the schema by REST API.
+
     ```shell
     ###
     POST http://{{host}}/schemas/custom
@@ -65,6 +72,7 @@ When using `custom` format or `protobuf` format, the user can customize the code
        "soFile": "file:///tmp/custom1.so"
     }
     ```
+
 5. Use custom format in source or sink with `format` and `schemaId` parameters.
 
 The complete custom format can be found in [myFormat.go](https://github.com/lf-edge/ekuiper/blob/master/internal/converter/custom/test/myformat.go). This file defines a simple custom format where the codec actually only calls JSON for serialization. It returns a data structure that can be used to infer the data structure of the eKuiper source.
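+
+In essence, that example boils down to something like the following sketch: a codec that simply delegates to JSON. The type name is illustrative and the Encode signature is assumed to mirror the Converter interface above; see the linked file for the real code, including the plugin export convention.
+
+```go
+package main
+
+import "encoding/json"
+
+// myConverter is an illustrative codec that delegates to JSON.
+type myConverter struct{}
+
+func (c *myConverter) Encode(d interface{}) ([]byte, error) {
+    return json.Marshal(d)
+}
+
+func (c *myConverter) Decode(b []byte) (interface{}, error) {
+    m := make(map[string]interface{})
+    err := json.Unmarshal(b, &m)
+    return m, err
+}
+```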
@@ -80,6 +88,7 @@ To build for alpine environment, we can use the golang alpine image as the base
 1. In your plugin project, create a Makefile and make sure the plugin can be built by `make` command. Check the [sample project](https://github.com/lf-edge/ekuiper/tree/master/internal/converter/custom/test) for reference.
 2. Check the golang version of your eKuiper. Check the `GO_VERSION` arg in the [docker file](https://github.com/lf-edge/ekuiper/blob/master/deploy/docker/Dockerfile) of the corresponding eKuiper version. For example, if the version is `1.18.5`, use `golang:1.18.5-alpine` docker image for build.
 3. Switch to your project location then start the golang docker container with your project, install dependencies then execute `make`, make sure build is successful.
+
    ```shell
    cd ${yourProjectLoc}
    docker run --rm -it -v "$PWD":/usr/src/myapp -w /usr/src/myapp golang:1.18.5-alpine sh
@@ -87,6 +96,7 @@ To build for alpine environment, we can use the golang alpine image as the base
    /usr/src/myapp # apk add gcc make libc-dev
    /usr/src/myapp # make
    ```
+
 4. You should find the built *.so file (test.so in this example) for you plugin in your project. Use that to register the format plugin.
 
 ### Static Protobuf
@@ -96,16 +106,21 @@ specify the proto file during registration mode. For more demanding parsing perf
 Static parsing requires the development of a parsing plug-in, which proceeds as follows.
 
 1. Assume we have a proto file helloworld.proto. Use official protoc tool to generate go code. Check [Protocol Buffer Doc](https://developers.google.com/protocol-buffers/docs/reference/go-generated) for detail.
+
    ```shell
    protoc --go_opt=Mhelloworld.proto=com.main --go_out=. helloworld.proto
    ```
+
 2. Move the generated code helloworld.pb.go to the go language project and rename the package to main.
-3. Create the wrapper struct for each message type. Implement 3 methods `Encode`, `Decode`, `GetXXX`. The main purpose of encoding and decoding is to convert the struct and map types of messages. Note that to ensure performance, do not use reflection. 
+3. Create the wrapper struct for each message type. Implement 3 methods `Encode`, `Decode`, `GetXXX`. The main purpose of encoding and decoding is to convert the struct and map types of messages. Note that to ensure performance, do not use reflection.
 4. Compile as a plugin so file. Usually, format extensions do not need to depend on the main eKuiper project. Due to the limitations of the Go language plugin system, the compilation of the plugin still needs to be done in the same compilation environment as the main eKuiper application, including the same operating system, Go language version, etc. If you need to deploy to the official docker, you can use the corresponding docker image for compilation.
+
    ```shell
     go build -trimpath --buildmode=plugin -o data/test/helloworld.so internal/converter/protobuf/test/*.go
    ```
+
 5. Register the schema by REST API. Notice that, the proto file and the so file are needed.
+
     ```shell
     ###
     POST http://{{host}}/schemas/protobuf
@@ -117,11 +132,11 @@ Static parsing requires the development of a parsing plug-in, which proceeds as
        "soFile": "file:///tmp/helloworld.so"
     }
     ```
+
 6. Use custom format in source or sink with `format` and `schemaId` parameters.
 
 The complete static protobuf plugin can be found in [helloworld protobuf](https://github.com/lf-edge/ekuiper/tree/master/internal/converter/protobuf/test).
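+
+As a rough sketch of step 3 above, a wrapper for a hypothetical `HelloRequest` message with a single `name` field might look like the code below. The message type is assumed to come from the generated helloworld.pb.go; the map key and the omitted `GetXXX` accessor follow conventions that should be checked against the linked test plugin.
+
+```go
+package main
+
+import (
+    "fmt"
+
+    "google.golang.org/protobuf/proto"
+)
+
+// HelloRequestWrapper converts between the generated HelloRequest struct
+// and the map representation used by eKuiper (illustrative sketch).
+type HelloRequestWrapper struct{}
+
+func (w *HelloRequestWrapper) Encode(d interface{}) ([]byte, error) {
+    m, ok := d.(map[string]interface{})
+    if !ok {
+        return nil, fmt.Errorf("unexpected data type %T", d)
+    }
+    msg := &HelloRequest{}
+    if v, ok := m["name"].(string); ok {
+        msg.Name = v
+    }
+    return proto.Marshal(msg)
+}
+
+func (w *HelloRequestWrapper) Decode(b []byte) (interface{}, error) {
+    msg := &HelloRequest{}
+    if err := proto.Unmarshal(b, msg); err != nil {
+        return nil, err
+    }
+    // Build the map by hand rather than by reflection, per the performance note above.
+    return map[string]interface{}{"name": msg.GetName()}, nil
+}
+```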
 
-
 ## Schema
 
 A schema is a set of metadata that defines the data structure. For example, the Protobuf format uses a .proto file as the schema definition for the transferred data. Currently, eKuiper supports two schema types: protobuf and custom.
@@ -137,4 +152,4 @@ When eKuiper starts, it will scan this configuration folder and automatically re
 Users can use the schema registry API to add, delete, and check schemas at runtime. For more information, please refer to:
 
 - [schema registry REST API](../../api/restapi/schemas.md)
-- [schema registry CLI](../../api/cli/schemas.md)
+- [schema registry CLI](../../api/cli/schemas.md)

File diff suppressed because it is too large
+ 105 - 94
docs/en_US/guide/sinks/builtin/edgex.md


+ 1 - 1
docs/en_US/guide/sinks/builtin/file.md

@@ -100,4 +100,4 @@ timestamp like `1699888888_deviceName.csv`.
     }
   ]
 }
-```
+```

+ 0 - 1
docs/en_US/guide/sinks/builtin/log.md

@@ -3,4 +3,3 @@
 The action is used to print the output message into a log file, which is at `$eKuiper_install/log/stream.log` by default.
 
 The common sink properties are supported. Please refer to the [sink common properties](../overview.md#common-properties) for more information.
-

+ 1 - 1
docs/en_US/guide/sinks/builtin/memory.md

@@ -54,4 +54,4 @@ The memory sink support [updatable](../overview.md#updatable-sink). It is used t
     }
   ]
 }
-```
+```

+ 4 - 3
docs/en_US/guide/sinks/builtin/mqtt.md

@@ -1,6 +1,6 @@
 # MQTT action
 
-The action is used for publish output message into an MQTT server. 
+The action is used to publish output messages to an MQTT server.
 
 | Property name      | Optional | Description                                                                                                                                                                                                                                                                                                                                               |
 |--------------------|----------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
@@ -17,11 +17,12 @@ The action is used for publish output message into an MQTT server.
 | insecureSkipVerify | true     | If InsecureSkipVerify is `true`, TLS accepts any certificate presented by the server and any host name in that certificate.  In this mode, TLS is susceptible to man-in-the-middle attacks. The default value is `false`. The configuration item can only be used with TLS connections.                                                                   |
 | retained           | true     | If retained is `true`, the broker stores the last retained message and the corresponding QoS for that topic. The default value is `false`.                                                                                                                                                                                                                |
 | compression        | true     | Compress the payload with the specified compression method. Support `zlib`, `gzip`, `flate`, `zstd` method now.                                                                                                                                                                                                                                           |
-| connectionSelector | true     | reuse the connection to mqtt broker. [more info](../../sources/builtin/mqtt.md#connectionselector)                                                                                                                                                                                                                                                        | 
+| connectionSelector | true     | Reuse the connection to the mqtt broker. [more info](../../sources/builtin/mqtt.md#connectionselector)                                                                                                                                                                                                                                                    |
 
 Other common sink properties are supported. Please refer to the [sink common properties](../overview.md#common-properties) for more information.
 
 Below is a sample configuration for connecting to Azure IoT Hub using SAS authentication.
+
 ```json
     {
       "mqtt": {
@@ -70,4 +71,4 @@ If the result data contains the topic name, we can use it as the property of the
         "retained": false
       }
     }
-```
+```

+ 1 - 1
docs/en_US/guide/sinks/builtin/neuron.md

@@ -82,4 +82,4 @@ Below is another sample to publish data directly to neuron by the data template
     "dataTemplate": "your template here"
   }
 }
-```
+```

+ 0 - 2
docs/en_US/guide/sinks/builtin/nop.md

@@ -5,5 +5,3 @@ The action is an Nop sink, the result sent to this sink will be ignored. If spec
 | Property name | Optional | Description                                                                                                          |
 |---------------|----------|----------------------------------------------------------------------------------------------------------------------|
 | log           | true     | true/false - whether to print the sink result to the log. The default is `false`, which will not print the result to the log file. |
-
-

+ 4 - 2
docs/en_US/guide/sinks/builtin/redis.md

@@ -15,11 +15,13 @@ The sink will publish the result into redis.
 | dataType      | false    | The default Redis data type is string. Note that the original key must be deleted after the Redis data type is changed; otherwise, the modification is invalid. Currently, only "list" and "string" are supported.                                                                                     |
 | expiration    | false    | Timeout duration of Redis data. This parameter is valid only for string data in seconds. The default value is -1                                                                                                                                                                                      |
 | rowkindField  | true     | Specify which field represents the action like insert or update. If not specified, all rows are default to insert.                                                                                                                                                                                    |
+
 ## Sample usage
 
 Below is a sample for selecting temperature greater than 50 degrees, with some profiles for your reference.
 
 ### /tmp/redisRule.txt
+
 ```json
 {
   "id": "redis",
@@ -64,7 +66,7 @@ By specifying the `rowkindField` property, the sink can update according the act
 
 ### Upsert multiple keys sample
 
-By specifying the ``keyType`` property to be ``multiple``, the sink can update multiple keys' corresponding value in redis 
+By specifying the ``keyType`` property to be ``multiple``, the sink can update the corresponding values of multiple keys in redis
 
 ```json
 {
@@ -90,4 +92,4 @@ When result map is the following format, the ``temperature`` and ``humidity`` wi
     "temperature": 40.9,
     "humidity": 30.9
 }
-```
+```

+ 1 - 0
docs/en_US/guide/sinks/builtin/rest.md

@@ -71,6 +71,7 @@ Use visualization create rules SQL and Actions
 Use text json create rules SQL and Actions
 
 Example for taosdb rest:
+
 ```json
 {"id": "rest1",
   "sql": "SELECT tele[0].Tag00001 AS temperature, tele[0].Tag00002 AS humidity FROM neuron", 

File diff suppressed because it is too large
+ 20 - 17
docs/en_US/guide/sinks/data_template.md


File diff suppressed because it is too large
+ 3 - 4
docs/en_US/guide/sinks/overview.md


+ 1 - 1
docs/en_US/guide/sinks/plugin/image.md

@@ -51,4 +51,4 @@ In the following example, we take the `zmq` plugin as `source` and the `image` p
 curl http://127.0.0.1:9081/streams -X POST -d '{"sql":"create stream s(image bytea)WITH(DATASOURCE = \"\",FORMAT=\"binary\", TYPE=\"zmq\");"}'
 
 curl http://127.0.0.1:9081/rules -X POST -d '{"id":"r","sql":"SELECT * FROM s","actions":[{"image":{"path":"./tmp","format":"png"}}]}'
-```
+```

+ 6 - 1
docs/en_US/guide/sinks/plugin/influx.md

@@ -39,6 +39,7 @@ Other common sink properties are supported. Please refer to the [sink common pro
 Below is a sample for selecting temperature greater than 50 degrees, with some profiles for your reference.
 
 ### /tmp/influxRule.txt
+
 ```json
 {
   "id": "influx",
@@ -60,14 +61,18 @@ Below is a sample for selecting temperature great than 50 degree, and some profi
   ]
 }
 ```
+
 ### /tmp/influxPlugin.txt
+
 ```json
 {
    "file":"http://localhost:8080/influx.zip"
  }
 ```
+
 ### plugins/go.mod
-```
+
+```go
 module plugins
 
 go 1.14

+ 16 - 5
docs/en_US/guide/sinks/plugin/influx2.md

@@ -10,6 +10,7 @@ Please make following update before compile the plugin,
 - Remove the first line `// +build plugins` of file `plugins/sinks/influx.go`.
 
 ### build in shell
+
 ```shell
 # cd $eKuiper_src
 # go build -trimpath --buildmode=plugin -o plugins/sinks/influx2.so extensions/sinks/influx/influx2.go
@@ -20,13 +21,16 @@ Please make following update before compile the plugin,
 ```
 
 ### build with image
-```
+
+```shell
 docker build -t demo/plugins:v1 -f build/plugins/Dockerfile .
 docker run demo/plugins:v1
 docker cp  90eae15a7245:/workspace/_plugins/debian/sinks /tmp
 ```
+
 The Dockerfile looks like this:
-```
+
+```dockerfile
 ## please check the go version that kuiper uses
 ARG GO_VERSION=1.18.5
 FROM ghcr.io/lf-edge/ekuiper/base:$GO_VERSION-debian AS builder
@@ -36,8 +40,10 @@ RUN go env -w GOPROXY=https://goproxy.cn,direct
 RUN make plugins_c
 CMD ["sleep","3600"]
 ```
+
 Add this to the Makefile:
-```
+
+```makefile
 PLUGINS_CUSTOM := sinks/influx2
 
 .PHONY: plugins_c $(PLUGINS_CUSTOM)
@@ -46,7 +52,7 @@ plugins_c: $(PLUGINS_CUSTOM)
 $(PLUGINS_CUSTOM): PLUGIN_TYPE = $(word 1, $(subst /, , $@))
 $(PLUGINS_CUSTOM): PLUGIN_NAME = $(word 2, $(subst /, , $@))
 $(PLUGINS_CUSTOM):
-	@$(CURDIR)/build-plugins.sh $(PLUGIN_TYPE) $(PLUGIN_NAME)
+	@$(CURDIR)/build-plugins.sh $(PLUGIN_TYPE) $(PLUGIN_NAME)
 ```
 
 Restart the eKuiper server to activate the plugin.
@@ -70,6 +76,7 @@ Other common sink properties are supported. Please refer to the [sink common pro
 Below is a sample for selecting temperature greater than 50 degrees, with some profiles for your reference.
 
 ### /tmp/influxRule.txt
+
 ```json
 {
   "id": "influx",
@@ -91,14 +98,18 @@ Below is a sample for selecting temperature great than 50 degree, and some profi
   ]
 }
 ```
+
 ### /tmp/influxPlugin.txt
+
 ```json
 {
    "file":"http://localhost:8080/influx2.zip"
  }
 ```
+
 ### plugins/go.mod
-```
+
+```go
 module plugins
 
 go 1.18

+ 14 - 6
docs/en_US/guide/sinks/plugin/kafka.md

@@ -5,6 +5,7 @@ The sink will publish the result into a Kafka .
 ## Compile & deploy plugin
 
 ### build in shell
+
 ```shell
 # cd $eKuiper_src
 # go build -trimpath --buildmode=plugin -o plugins/sinks/kafka.so extensions/sinks/kafka/kafka.go
@@ -15,13 +16,16 @@ The sink will publish the result into a Kafka .
 ```
 
 ### build with image
-```
+
+```shell
 docker build -t demo/plugins:v1 -f build/plugins/Dockerfile .
 docker run demo/plugins:v1
 docker cp  90eae15a7245:/workspace/_plugins/debian/sinks /tmp
 ```
+
 The Dockerfile looks like this:
-```
+
+```dockerfile
 ## please check the go version that kuiper uses
 ARG GO_VERSION=1.18.5
 FROM ghcr.io/lf-edge/ekuiper/base:$GO_VERSION-debian AS builder
@@ -31,8 +35,10 @@ RUN go env -w GOPROXY=https://goproxy.cn,direct
 RUN make plugins_c
 CMD ["sleep","3600"]
 ```
+
 Add this to the Makefile:
-```
+
+```makefile
 PLUGINS_CUSTOM := sinks/kafka
 
 .PHONY: plugins_c $(PLUGINS_CUSTOM)
@@ -41,7 +47,7 @@ plugins_c: $(PLUGINS_CUSTOM)
 $(PLUGINS_CUSTOM): PLUGIN_TYPE = $(word 1, $(subst /, , $@))
 $(PLUGINS_CUSTOM): PLUGIN_NAME = $(word 2, $(subst /, , $@))
 $(PLUGINS_CUSTOM):
-	@$(CURDIR)/build-plugins.sh $(PLUGIN_TYPE) $(PLUGIN_NAME)
+	@$(CURDIR)/build-plugins.sh $(PLUGIN_TYPE) $(PLUGIN_NAME)
 ```
 
 Restart the eKuiper server to activate the plugin.
@@ -56,7 +62,6 @@ Restart the eKuiper server to activate the plugin.
 | saslUserName  | true     | The sasl user name                                |
 | saslPassword  | true     | The sasl password                                 |
 
-
 Other common sink properties are supported. Please refer to the [sink common properties](../overview.md#common-properties) for more information.
 
 ## Sample usage
@@ -64,6 +69,7 @@ Other common sink properties are supported. Please refer to the [sink common pro
 Below is a sample for selecting temperature greater than 50 degrees, with some profiles for your reference.
 
 ### /tmp/kafkaRule.txt
+
 ```json
 {
   "id": "kafka",
@@ -82,7 +88,9 @@ Below is a sample for selecting temperature great than 50 degree, and some profi
   ]
 }
 ```
+
 ### /tmp/kafkaPlugin.txt
+
 ```json
 {
    "file":"http://localhost:8080/kafka.zip"
@@ -120,4 +128,4 @@ But kafka needs special attention `` KAFKA_CFG_ADVERTISED_LISTENERS `` needs to
       - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://122.9.166.75:9092
      depends_on:
       - zookeeper
-```
+```

+ 3 - 4
docs/en_US/guide/sinks/plugin/sql.md

@@ -11,6 +11,7 @@ This plugin supports `sqlserver\postgres\mysql\sqlite3\oracle` drivers by defaul
 for example, if only the mysql driver is wanted, it can be built with the build tag `mysql`.
 
 ### Default build command
+
 ```shell
 # cd $eKuiper_src
 # go build -trimpath --buildmode=plugin -o plugins/sinks/Sql.so extensions/sinks/sql/sql.go
@@ -18,13 +19,13 @@ for example, if he only wants mysql, then he can build with build tag `mysql`.
 ```
 
 ### MySql build command
+
 ```shell
 # cd $eKuiper_src
 # go build -trimpath --buildmode=plugin -tags mysql -o plugins/sinks/Sql.so extensions/sinks/sql/sql.go
 # cp plugins/sinks/Sql.so $eKuiper_install/plugins/sinks
 ```
 
-
 ## Properties
 
 | Property name  | Optional | Description                                                                                                                                                   |
@@ -39,7 +40,7 @@ Other common sink properties are supported. Please refer to the [sink common pro
 
 ## Sample usage
 
-Below is a sample for using sql to get the target data and set to mysql database 
+Below is a sample for selecting the target data and writing it into a mysql database.
 
 ```json
 {
@@ -59,7 +60,6 @@ Below is a sample for using sql to get the target data and set to mysql database
 }
 ```
 
-
 Write the values of tableDataField into the database:
 
 The following configuration will write the telemetry field's values into the database:
@@ -118,4 +118,3 @@ By specifying the `rowkindField` and `keyField`, the sink can generate insert, u
   ]
 }
 ```
-

+ 4 - 4
docs/en_US/guide/sinks/plugin/tdengine.md

@@ -5,6 +5,7 @@ In eKuiper source code root path, run the below command.
 ```shell
 go build -trimpath --buildmode=plugin -o plugins/sinks/Tdengine@v1.0.0.so extensions/sinks/tdengine/tdengine.go
 ```
+
 ### Install plugin
 
 Since the operation of the tdengine plug-in depends on the tdengine client, the tdengine client will be downloaded when the plug-in is installed for the convenience of users. However, the tdengine client version must match the server version one-to-one; different versions are not compatible with each other, so the user must specify the version of the tdengine server being used.
@@ -32,7 +33,7 @@ Other common sink properties are supported. Please refer to the [sink common pro
 
 ## Operation example
 
-### To create a database or table, refer to the following documents:
+### To create a database or table, refer to the following documents
 
 ```http
 https://www.taosdata.com/cn/getting-started/
@@ -83,10 +84,9 @@ Write into dynamic table:
 }
 ```
 
-
 Write the values of tableDataField into the database:
 
-The following configuration will write telemetry field's values into database 
+The following configuration will write the telemetry field's values into the database:
 
 ```json
 {
@@ -120,4 +120,4 @@ The following configuration will write telemetry field's values into database
     "tagFields":   ["f3","f4"] // Write f3, f4 fields' values in the result as tags in order
   }
 }
-```
+```

+ 0 - 1
docs/en_US/guide/sinks/plugin/zmq.md

@@ -38,4 +38,3 @@ Below is a sample for selecting temperature great than 50 degree, and publish th
   ]
 }
 ```
-

+ 12 - 11
docs/en_US/guide/sources/builtin/edgex.md

@@ -14,7 +14,7 @@ EdgeX already defines data types in [readings](https://docs.edgexfoundry.org/2.0
 # bin/kuiper CREATE STREAM demo'() with(format="json", datasource="demo", type="edgex")'
 ```
 
-EdgeX source will try to get the data type of fields, 
+EdgeX source will try to get the data type of fields:
 
 - convert to the related data type if the type of a field can be found in the reading's ValueType field;
 - or keep the original value if the type of a field cannot be found in the reading's ValueType field;
@@ -26,7 +26,7 @@ The types defined in readings will be converted into related [data types](../../
 
 If the `ValueType` value of the reading is `Bool`, then eKuiper tries to convert it to the `boolean` type. The following values will be converted into `true`.
 
-- "1", "t", "T", "true", "TRUE", "True" 
+- "1", "t", "T", "true", "TRUE", "True"
 
 The following will be converted into `false`.
 
@@ -34,15 +34,15 @@ Following will be converted into `false`.
 
 #### Bigint
 
-If `ValueType` value of the reading is `INT8`, `INT16`, `INT32`, `INT64`, `UINT`, `UINT8`, `UINT16`, `UINT32`, `UINT64` then eKuiper tries to convert to `Bigint` type. 
+If the `ValueType` value of the reading is `INT8`, `INT16`, `INT32`, `INT64`, `UINT`, `UINT8`, `UINT16`, `UINT32` or `UINT64`, then eKuiper tries to convert it to the `Bigint` type.
 
 #### Float
 
-If `ValueType` value of the reading is `FLOAT32`, `FLOAT64`, then eKuiper tries to convert to `Float` type. 
+If the `ValueType` value of the reading is `FLOAT32` or `FLOAT64`, then eKuiper tries to convert it to the `Float` type.
 
 #### String
 
-If `ValueType` value of the reading is `String`, then eKuiper tries to convert to `String` type. 
+If the `ValueType` value of the reading is `String`, then eKuiper tries to convert it to the `String` type.
 
 #### Boolean array
 
@@ -74,9 +74,7 @@ default:
 #    Password: password
 ```
 
-
-
-Use can specify the global EdgeX settings here. The configuration items specified in `default` section will be taken as default settings for all EdgeX source. 
+Users can specify the global EdgeX settings here. The configuration items specified in the `default` section will be taken as the default settings for all EdgeX sources.
 
 ### protocol
 
@@ -93,6 +91,7 @@ The port of EdgeX message bus, default value is `5573`.
 ### connectionSelector
 
 Specify the connection to the EdgeX message bus for the stream to reuse. The connection profile is located in `connections/connection.yaml`.
+
 ```yaml
 edgex:
   redisMsgBus: #connection key
@@ -116,8 +115,10 @@ edgex:
     #    KeyPEMBlock:
     #    SkipCertVerify: true/false
 ```
+
 There is one configuration group for the EdgeX message bus in the example; users need to use `edgex.redisMsgBus` as the selector.
 For example:
+
 ```yaml
 #Global Edgex configurations
 default:
@@ -132,8 +133,8 @@ default:
   #    Username: user1
   #    Password: password
 ```
-*Note*: once specify the connectionSelector in specific configuration group , all connection related parameters will be ignored , in this case `protocol: tcp | server: localhost | port: 5573`
 
+*Note*: once the connectionSelector is specified in a specific configuration group, all connection related parameters will be ignored; in this case, `protocol: tcp | server: localhost | port: 5573`
 
 ### topic
 
@@ -151,6 +152,7 @@ use the default `redis` value.
 - `redis`: Use Redis as EdgeX message bus. When using EdgeX docker compose, the type will be set to this by default.
 
 EdgeX Levski introduces two new types of message bus; eKuiper supports both of them since 1.7.1:
+
 - `nats-jetstream`
 - `nats-core`
 
@@ -200,9 +202,8 @@ If you have a specific connection that need to overwrite the default settings, y
 
 **Sample**
 
-```
+```sql
 create stream demo1() WITH (FORMAT="JSON", type="edgex", CONF_KEY="demo1");
 ```
 
 The configuration keys used for these specific settings are the same as in the `default` settings; any values specified in the specific settings will overwrite the values in the `default` section.
-

+ 1 - 1
docs/en_US/guide/sources/builtin/file.md

@@ -123,4 +123,4 @@ create stream linesFileDemo () WITH (FORMAT="JSON", TYPE="file", CONF_KEY="jsonl
 ```
 
 Moreover, the lines file type can be combined with any format. For example, if you set the format to protobuf and
-configure the schema, it can be used to parse data that contains multiple Protobuf encoded lines.
+configure the schema, it can be used to parse data that contains multiple Protobuf encoded lines.

+ 7 - 7
docs/en_US/guide/sources/builtin/http_pull.md

@@ -1,4 +1,4 @@
-# HTTP pull source 
+# HTTP pull source
 
 <span style="background:green;color:white;">stream source</span>
 <span style="background:green;color:white">scan table source</span>
@@ -58,13 +58,14 @@ application_conf: #Conf_key
 
 ## Global HTTP pull configurations
 
-Use can specify the global HTTP pull settings here. The configuration items specified in `default` section will be taken as default settings for all HTTP connections. 
+Users can specify the global HTTP pull settings here. The configuration items specified in the `default` section will be taken as the default settings for all HTTP connections.
 
 ### url
 
 The URL where to get the result.
 
 ### method
+
 The HTTP method; it could be post, get, put or delete.
 
 ### interval
@@ -110,6 +111,7 @@ The HTTP request headers that you want to send along with the HTTP request.
 ### responseType
 
 Define how to parse the HTTP response. There are two types defined:
+
 - code: which means to check the response status from the HTTP status code.
 - body: which means to check the response status from the response body. The body must be "application/json" content type and contains a "code" field.
 
@@ -135,18 +137,16 @@ There are two parts to configure: access for access code fetch and refresh for t
 - headers: the request header to refresh the token. Usually put the tokens here for authorization.
 - body: the request body to refresh the token. May not need when using header to pass the refresh token.
 
-
 ## Override the default settings
 
 If you have a specific connection that needs to overwrite the default settings, you can create a customized section. In the previous sample, we created a specific setting named `application_conf`. Then you can specify the configuration with the option `CONF_KEY` when creating the stream definition (see [stream specs](../../../sqls/streams.md) for more info).
 
 **Sample**
 
-```
+```text
 demo (
-		...
-	) WITH (DATASOURCE="test/", FORMAT="JSON", TYPE="httppull", KEY="USERID", CONF_KEY="application_conf");
+    ...
+  ) WITH (DATASOURCE="test/", FORMAT="JSON", TYPE="httppull", KEY="USERID", CONF_KEY="application_conf");
 ```
 
 The configuration keys used for these specific settings are the same as in the `default` settings; any values specified in the specific settings will overwrite the values in the `default` section.
-

+ 1 - 1
docs/en_US/guide/sources/builtin/http_push.md

@@ -1,4 +1,4 @@
-# HTTP push source 
+# HTTP push source
 
 <span style="background:green;color:white;">stream source</span>
 <span style="background:green;color:white">scan table source</span>

+ 3 - 2
docs/en_US/guide/sources/builtin/memory.md

@@ -20,10 +20,11 @@ CREATE STREAM stream1 (
 
 Similar to mqtt topics, the memory source also supports topic wildcards. Currently, two wildcards are supported.
 
-**+** : Single level wildcard replaces one topic level. 
+**+** : Single level wildcard replaces one topic level.
 **#**: Multi level wildcard covers multiple topic levels, and it can only be used at the end.
 
 Examples:
+
 1. `home/device1/+/sensor1`
 2. `home/device1/#`
 
@@ -37,4 +38,4 @@ CREATE TABLE alertTable() WITH (DATASOURCE="topicName", TYPE="memory", KIND="loo
 
 After creating a memory lookup table, it will start to accumulate the data from the memory topic indexed by the key field. It will keep running independently of rules. Each topic and key pair will have a single in-memory copy of the virtual table. All rules that refer to the same table or the memory tables with the same topic/key pair will share the same copy of data.
 
-The memory lookup table can be used like a pipeline between multiple rules which is similar to the [rule pipeline](../../rules/rule_pipeline.md) concept. It can store the history of any stream type in memory so that other streams can work with. By working together with [updatable memory sink](../../sinks/builtin/memory.md#updatable-sink), the table content can be updated.
+The memory lookup table can be used like a pipeline between multiple rules, which is similar to the [rule pipeline](../../rules/rule_pipeline.md) concept. It can store the history of any stream type in memory so that other streams can work with it. By working together with the [updatable memory sink](../../sinks/builtin/memory.md#updatable-sink), the table content can be updated.

+ 31 - 26
docs/en_US/guide/sources/builtin/mqtt.md

@@ -1,4 +1,4 @@
-# MQTT source 
+# MQTT source
 
 <span style="background:green;color:white;">stream source</span>
 <span style="background:green;color:white">scan table source</span>
@@ -30,7 +30,7 @@ demo_conf: #Conf_key
 
 ## Global MQTT configurations
 
-Use can specify the global MQTT settings here. The configuration items specified in `default` section will be taken as default settings for all MQTT connections. 
+Users can specify the global MQTT settings here. The configuration items specified in the `default` section will be taken as the default settings for all MQTT connections.
 
 ### qos
 
@@ -38,11 +38,11 @@ The default subscription QoS level.
 
 ### server
 
-The server for MQTT message broker. 
+The server for the MQTT message broker.
 
 ### username
 
-The username for MQTT connection. 
+The username for the MQTT connection.
 
 ### password
 
@@ -60,7 +60,6 @@ MQTT protocol version. 3.1 (also refer as MQTT 3) or 3.1.1 (also refer as MQTT 4
 
 The client id for the MQTT connection. If not specified, a uuid will be used.
 
-
 ### certificationPath
 
 The location of the certificate path. It can be an absolute path or a relative path. If it is a relative path, then the base path is where you execute the `kuiperd` command. For example, if you run `bin/kuiperd` from `/var/kuiper`, then the base path is `/var/kuiper`; if you run `./kuiperd` from `/var/kuiper/bin`, then the base path is `/var/kuiper/bin`. An example value is `d3807d9fa5-certificate.pem`.
@@ -80,6 +79,7 @@ Control if to skip the certification verification. If it is set to true, then sk
 ### connectionSelector
 
 Specify the connection to the mqtt broker for the stream to reuse. The connection profile is located in `connections/connection.yaml`.
+
 ```yaml
 mqtt:
   localConnection: #connection key
@@ -101,8 +101,10 @@ mqtt:
     #protocolVersion: 3
 
 ```
+
 There are two configuration groups for mqtt in the example; users need to use `mqtt.localConnection` or `mqtt.cloudConnection` as the selector.
 For example:
+
 ```yaml
 #Global MQTT configurations
 default:
@@ -114,6 +116,7 @@ default:
   #privateKeyPath: /var/kuiper/xyz-private.pem.key
   connectionSelector: mqtt.localConnection
 ```
+
 *Note*: once the connectionSelector is specified in a specific configuration group, all connection related parameters will be ignored; in this case, ``servers: [tcp://127.0.0.1:1883]``
 
 ### bufferLength
@@ -130,16 +133,16 @@ The name of the kubeedge template file. The file is located in the specified etc
 
 ```json
 {
-	"deviceModels": [{
-		"name": "device1",
-		"properties": [{
-			"name": "temperature",
-			"dataType": "int"
-		}, {
-			"name": "temperature-enable",
-			"dataType": "string"
-		}]
-	}]
+  "deviceModels": [{
+    "name": "device1",
+    "properties": [{
+      "name": "temperature",
+      "dataType": "int"
+    }, {
+      "name": "temperature-enable",
+      "dataType": "string"
+    }]
+  }]
 }
 ```
 
@@ -161,10 +164,10 @@ If you have a specific connection that need to overwrite the default settings, y
 
 **Sample**
 
-```
+```text
 demo (
-		...
-	) WITH (DATASOURCE="test/", FORMAT="JSON", KEY="USERID", CONF_KEY="demo_conf");
+    ...
+  ) WITH (DATASOURCE="test/", FORMAT="JSON", KEY="USERID", CONF_KEY="demo_conf");
 ```
 
 The configuration keys used for these specific settings are the same as in the `default` settings; any values specified in the specific settings will overwrite the values in the `default` section.
@@ -188,24 +191,26 @@ demo2_conf: #Conf_key
 ```
 
 Create two streams using the config defined above:
-```
+
+```text
 demo (
-		...
-	) WITH (DATASOURCE="test/", FORMAT="JSON", CONF_KEY="demo_conf");
+    ...
+  ) WITH (DATASOURCE="test/", FORMAT="JSON", CONF_KEY="demo_conf");
 
 demo2 (
-		...
-	) WITH (DATASOURCE="test2/", FORMAT="JSON", CONF_KEY="demo2_conf");
+    ...
+  ) WITH (DATASOURCE="test2/", FORMAT="JSON", CONF_KEY="demo2_conf");
 
 ```
-When create rules using the defined streams, the rules will share the same connection in source part. 
+
+When creating rules using the defined streams, the rules will share the same connection in the source part.
 The `DATASOURCE` here will be used as the mqtt subscription topic, with the subscription `Qos` defined in the config section.
 So stream `demo` will subscribe to topic `test/` with Qos 0 and stream `demo2` will subscribe to topic `test2/` with Qos 0 in this example.
-But if  `DATASOURCE` is same and `qos` not, will only subscribe one time when the first rule starts.       
+But if the `DATASOURCE` is the same and the `qos` is not, the topic will only be subscribed to once, when the first rule starts.
 
 ## Migration Guide
 
 Since 1.5.0, eKuiper changes the mqtt source broker configuration from `servers` to `server`, and users can only configure one mqtt broker address instead of an address array.
 Users who are using an mqtt broker as a stream source in a previous release and want to migrate to the 1.5.0 release or later need to make sure the ``server`` configuration in the ``etc/mqtt_source.yaml`` file is right.
 Users who are using an environment variable to configure the mqtt source address need to change their ENV accordingly. For example, if their broker address is ``tcp://broker.emqx.io:1883``, they need to change the ENV from
-``MQTT_SOURCE__DEFAULT__SERVERS=[tcp://broker.emqx.io:1883]`` to ``MQTT_SOURCE__DEFAULT__SERVER="tcp://broker.emqx.io:1883"``
+``MQTT_SOURCE__DEFAULT__SERVERS=[tcp://broker.emqx.io:1883]`` to ``MQTT_SOURCE__DEFAULT__SERVER="tcp://broker.emqx.io:1883"``

File diff suppressed because it is too large
+ 2 - 2
docs/en_US/guide/sources/builtin/neuron.md


+ 1 - 1
docs/en_US/guide/sources/builtin/redis.md

@@ -22,4 +22,4 @@ default:
 #  password: ""
 ```
 
-With this yaml file, the table will refer to the database 0 in redis instance of address 127.0.0.1:6379. The value type is `string`.
+With this yaml file, the table will refer to database 0 in the redis instance at address 127.0.0.1:6379. The value type is `string`.

+ 1 - 2
docs/en_US/guide/sources/overview.md

@@ -24,7 +24,6 @@ Users can directly use the built-in sources in the standard eKuiper instance. Th
 - [File source](./builtin/file.md): source to read from file, usually used as tables.
 - [Memory source](./builtin/memory.md): source to read from eKuiper memory topic to form rule pipelines.
 
-
 ## Predefined Source Plugins
 
 We have developed some official source plugins. These plugins can be found in eKuiper's source code and users need to build them manually. Please check each source about how to build and use.
@@ -39,4 +38,4 @@ The list of predefined source plugins:
 
 ## Use of sources
 
-The user uses sources by means of streams or tables. The type `TYPE` property needs to be set to the name of the desired source in the stream properties created. The user can also change the behavior of the source during stream creation by configuring various general source attributes, such as the decoding type (default is JSON), etc. For the general properties and creation syntax supported by creating streams, please refer to the [Stream Specification](../streams/overview.md).
+The user uses sources by means of streams or tables. The `TYPE` property needs to be set to the name of the desired source when creating the stream. The user can also change the behavior of the source during stream creation by configuring various general source attributes, such as the decoding type (JSON by default). For the general properties and the creation syntax supported when creating streams, please refer to the [Stream Specification](../streams/overview.md).

+ 4 - 4
docs/en_US/guide/sources/plugin/random.md

@@ -34,6 +34,7 @@ dedup:
   interval: 100
   deduplicate: 50
 ```
+
 ### Global configurations
 
 Users can specify the global random source settings here. The configuration items specified in the `default` section will be taken as the default settings for the source when running this source.
@@ -60,11 +61,10 @@ If you have a specific connection that need to overwrite the default settings, y
 
 ## Sample usage
 
-```
+```text
 demo (
-		...
-	) WITH (DATASOURCE="demo", FORMAT="JSON", CONF_KEY="ext", TYPE="random");
+    ...
+  ) WITH (DATASOURCE="demo", FORMAT="JSON", CONF_KEY="ext", TYPE="random");
 ```
 
 The configuration keys "ext" will be used.
-

+ 4 - 4
docs/en_US/guide/sources/plugin/sql.md

@@ -123,7 +123,7 @@ If you have a specific connection that need to overwrite the default settings, y
 
 ## Sample usage
 
-```
+```text
 demo (
   ...
  ) WITH (DATASOURCE="demo", FORMAT="JSON", CONF_KEY="template_config", TYPE="sql");
@@ -154,6 +154,6 @@ The cache configuration lies in the `sql.yaml`.
     cacheMissingKey: true
 ```
 
-- cache: bool value to indicate whether to enable cache.
-- cacheTtl: the time to live of the cache in seconds.
-- cacheMissingKey: whether to cache nil value for a key.
+* cache: bool value to indicate whether to enable cache.
+* cacheTtl: the time to live of the cache in seconds.
+* cacheMissingKey: whether to cache nil value for a key.

+ 4 - 5
docs/en_US/guide/sources/plugin/video.md

@@ -33,6 +33,7 @@ dedup:
   interval: 100
 
 ```
+
 ### Global configurations
 
 Users can specify the global video source settings here. The configuration items specified in the `default` section will be taken as the default settings for the source when running this source.
@@ -45,18 +46,16 @@ The url address for the video streaming.
 
 The interval (ms) to issue a message.
 
-
 ## Override the default settings
 
 If you have a specific connection that needs to overwrite the default settings, you can create a customized section. In the previous sample, we created a specific setting named `ext`. Then you can specify the configuration with the option `CONF_KEY` when creating the stream definition (see [stream specs](../../../sqls/streams.md) for more info).
 
 ## Sample usage
 
-```
+```text
 demo (
-		...
-	) WITH (FORMAT="JSON", CONF_KEY="ext", TYPE="video");
+    ...
+  ) WITH (FORMAT="JSON", CONF_KEY="ext", TYPE="video");
 ```
 
 The configuration keys "ext" will be used.
-

+ 0 - 0
docs/en_US/guide/sources/plugin/zmq.md


Some files were not shown because too many files changed in this diff