Browse Source

fix(docs): replace double quote with one (#1156)

Signed-off-by: Jianxiang Ran <rxan_embedded@163.com>
superxan 3 years ago
parent
commit
869cf9614e
44 changed files with 266 additions and 266 deletions
  1. docs/en_US/README.md (+6 -6)
  2. docs/en_US/edgex/edgex_meta.md (+17 -17)
  3. docs/en_US/edgex/edgex_rule_engine_tutorial.md (+21 -21)
  4. docs/en_US/extension/native/develop/source.md (+4 -4)
  5. docs/en_US/extension/native/sinks/file.md (+2 -2)
  6. docs/en_US/extension/native/sources/random.md (+3 -3)
  7. docs/en_US/extension/native/sources/zmq.md (+4 -4)
  8. docs/en_US/getting_started.md (+4 -4)
  9. docs/en_US/operation/cli/plugins.md (+3 -3)
  10. docs/en_US/operation/cli/rules.md (+3 -3)
  11. docs/en_US/operation/cli/streams.md (+6 -6)
  12. docs/en_US/operation/cli/tables.md (+3 -3)
  13. docs/en_US/operation/compile/compile.md (+1 -1)
  14. docs/en_US/operation/config/authentication.md (+7 -7)
  15. docs/en_US/operation/config/configuration_file.md (+3 -3)
  16. docs/en_US/operation/install/cent-os.md (+2 -2)
  17. docs/en_US/operation/install/overview.md (+3 -3)
  18. docs/en_US/quick_start_docker.md (+7 -7)
  19. docs/en_US/rules/overview.md (+3 -3)
  20. docs/en_US/rules/sinks/edgex.md (+25 -25)
  21. docs/en_US/rules/sinks/rest.md (+2 -2)
  22. docs/en_US/rules/sources/edgex.md (+26 -26)
  23. docs/en_US/rules/sources/http_pull.md (+6 -6)
  24. docs/en_US/rules/sources/mqtt.md (+10 -10)
  25. docs/en_US/sqls/built-in_functions.md (+4 -4)
  26. docs/en_US/sqls/data_types.md (+2 -2)
  27. docs/en_US/sqls/json_expr.md (+2 -2)
  28. docs/en_US/sqls/streams.md (+3 -3)
  29. docs/en_US/sqls/windows.md (+3 -3)
  30. docs/zh_CN/README.md (+5 -5)
  31. docs/zh_CN/edgex/edgex_meta.md (+20 -20)
  32. docs/zh_CN/edgex/edgex_rule_engine_tutorial.md (+16 -16)
  33. docs/zh_CN/extension/native/develop/plugins_tutorial.md (+1 -1)
  34. docs/zh_CN/getting_started.md (+10 -10)
  35. docs/zh_CN/operation/cli/tables.md (+2 -2)
  36. docs/zh_CN/operation/config/authentication.md (+7 -7)
  37. docs/zh_CN/operation/config/configuration_file.md (+1 -1)
  38. docs/zh_CN/operation/install/cent-os.md (+1 -1)
  39. docs/zh_CN/rules/data_template.md (+2 -2)
  40. docs/zh_CN/rules/sinks/edgex.md (+4 -4)
  41. docs/zh_CN/rules/sources/edgex.md (+6 -6)
  42. docs/zh_CN/rules/sources/mqtt.md (+3 -3)
  43. docs/zh_CN/sqls/built-in_functions.md (+1 -1)
  44. docs/zh_CN/sqls/windows.md (+2 -2)

File diff suppressed because it is too large
+ 6 - 6
docs/en_US/README.md


+ 17 - 17
docs/en_US/edgex/edgex_meta.md

@@ -4,9 +4,9 @@ When data are published into EdgeX message bus, besides the actual device value,
 
 ## Events data model received in EdgeX message bus
 
-The data structure received from EdgeX message bus is list as in below. An ``Event`` structure encapsulates related metadata (ID, DeviceName, ProfileName, SourceName, Origin, Tags), along with the actual data (in ``Readings`` field) collected from device service.  
+The data structure received from EdgeX message bus is list as in below. An `Event` structure encapsulates related metadata (ID, DeviceName, ProfileName, SourceName, Origin, Tags), along with the actual data (in `Readings` field) collected from device service.  
 
-Similar to ``Event``, ``Reading`` also has some metadata (ID, DeviceName... etc). 
+Similar to `Event`, `Reading` also has some metadata (ID, DeviceName... etc). 
 
 - Event
   - ID
@@ -41,45 +41,45 @@ If upgrading from eKuiper versions v1.2.0 and before which integrates with EdgeX
 
 So how the EdgeX data are managed in eKuiper? Let's take an example.
 
-As in below - firstly, user creates an EdgeX stream named ``events`` with yellow color.
+As in below - firstly, user creates an EdgeX stream named `events` with yellow color.
 
 <img src="./create_stream.png" style="zoom:50%;" />
 
 Secondly, one message is published to message bus as in below. 
 
-- The device name is ``demo`` with green color
-- Reading name ``temperature`` & ``Humidity`` with red color. 
-- It has some ``metadata`` that is not necessary to "visible", but it probably will be used during data analysis, such as ``DeviceName`` field in ``Event`` structure. eKuiper saves these values into message tuple named metadata, and user can get these values during analysis. **Notice that, metadata name `DeviceName` was renamed from `Device` in EdgeX v2.**
+- The device name is `demo` with green color
+- Reading name `temperature` & `Humidity` with red color. 
+- It has some `metadata` that is not necessary to "visible", but it probably will be used during data analysis, such as `DeviceName` field in `Event` structure. eKuiper saves these values into message tuple named metadata, and user can get these values during analysis. **Notice that, metadata name `DeviceName` was renamed from `Device` in EdgeX v2.**
 
 <img src="./bus_data.png" style="zoom:50%;" />
 
 Thirdly, a SQL is provided for data analysis. Please notice that,
 
-- The ``events`` in FROM clause is yellow color, which is a stream name defined in the 1st step.
-- The SELECT fields ``temperature`` & ``humidity`` are red color, which are the ``Name`` field of readings.
-- The WHERE clause ``meta(deviceName)`` in green color, which is ued for extracting ``DeviceName`` field from ``Events`` structure. The SQL statement will filter data that device names are not ``demo``.
+- The `events` in FROM clause is yellow color, which is a stream name defined in the 1st step.
+- The SELECT fields `temperature` & `humidity` are red color, which are the `Name` field of readings.
+- The WHERE clause `meta(deviceName)` in green color, which is ued for extracting `DeviceName` field from `Events` structure. The SQL statement will filter data that device names are not `demo`.
 
 <img src="./sql.png" style="zoom:50%;" />
 
-Below are some other samples that extract other metadata through ``meta`` function.
+Below are some other samples that extract other metadata through `meta` function.
 
-1. ``meta(origin)``: 000 
+1. `meta(origin)`: 000 
 
    Get 'Origin' metadata from Event structure
 
-2. ``meta(temperature -> origin)``: 123 
+2. `meta(temperature -> origin)`: 123 
 
    Get 'origin' metadata from reading[0], key with 'temperature'
 
-3. ``meta(humidity -> origin)``: 456 
+3. `meta(humidity -> origin)`: 456 
 
    Get 'origin' metadata from reading[1], key with 'humidity'
 
-Please notice that if you want to extract metadata from readings, you need to use ``reading-name -> key`` operator to access the value. In previous samples, ``temperature`` & ``humidity`` are ``reading-names``, and ``key`` is the field names in readings.  
+Please notice that if you want to extract metadata from readings, you need to use `reading-name -> key` operator to access the value. In previous samples, `temperature` & `humidity` are `reading-names`, and `key` is the field names in readings.  
 
-However, if you want to get data from ``Events``, just need to specify the key directly. As the 1st sample in previous list.
+However, if you want to get data from `Events`, just need to specify the key directly. As the 1st sample in previous list.
 
-The ``meta`` function can also be used in ``SELECT`` clause, below is another example. Please notice that if multiple ``meta`` functions are used in ``SELECT`` clause, you should use ``AS`` to specify an alias name, otherwise, the data of previous fields will be overwritten.
+The `meta` function can also be used in `SELECT` clause, below is another example. Please notice that if multiple `meta` functions are used in `SELECT` clause, you should use `AS` to specify an alias name, otherwise, the data of previous fields will be overwritten.
 
 ```sql
 SELECT temperature,humidity, meta(id) AS eid,meta(origin) AS eo, meta(temperature->id) AS tid, meta(temperature->origin) AS torigin, meta(Humidity->deviceName) AS hdevice, meta(Humidity->profileName) AS hprofile FROM demo WHERE meta(deviceName)="demo2"
@@ -87,7 +87,7 @@ SELECT temperature,humidity, meta(id) AS eid,meta(origin) AS eo, meta(temperatur
 
 ## Summary
 
-``meta`` function can be used in eKuiper to access metadata values. Below lists all available keys for ``Events`` and ``Reading``.
+`meta` function can be used in eKuiper to access metadata values. Below lists all available keys for `Events` and `Reading`.
 
 - Events: id, deviceName, profileName, sourceName, origin, tags, correlationid
 - Readning: id, deviceName, profileName, origin, valueType
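The `meta` key resolution described in this file (a bare key reads `Event`-level metadata; `reading -> key` reads a named reading's metadata) can be sketched in a few lines. This is an illustrative model only, not eKuiper's implementation; the dictionaries mirror the sample values quoted above (`meta(origin)`: 000, `meta(temperature -> origin)`: 123, `meta(humidity -> origin)`: 456).

```python
# Hypothetical model of meta() lookup over an EdgeX Event (not the
# real eKuiper code). Values mirror the samples in the doc above.
event_meta = {"origin": 0, "deviceName": "demo"}
readings_meta = {
    "temperature": {"origin": 123},
    "humidity": {"origin": 456},
}

def meta(key: str):
    """Resolve a meta() key: 'reading -> field' goes to that reading's
    metadata; a bare key goes to the Event-level metadata."""
    if "->" in key:
        reading, field = (part.strip() for part in key.split("->", 1))
        return readings_meta[reading][field]
    return event_meta[key]

print(meta("origin"))                 # -> 0 (Event-level)
print(meta("temperature -> origin"))  # -> 123 (reading-level)
```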

File diff suppressed because it is too large
+ 21 - 21
docs/en_US/edgex/edgex_rule_engine_tutorial.md


+ 4 - 4
docs/en_US/extension/native/develop/source.md

@@ -41,9 +41,9 @@ function MySource() api.Source{
 The [Random Source](https://github.com/lf-edge/ekuiper/blob/master/extensions/sources/random/random.go) is a good example.
 
 ### Rewindable source
-If the [rule checkpoint](../../../rules/state_and_fault_tolerance.md#source-consideration) is enabled, the source requires to be rewindable. That means the source need to implement both ``api.Source`` and ``api.Rewindable`` interface. 
+If the [rule checkpoint](../../../rules/state_and_fault_tolerance.md#source-consideration) is enabled, the source requires to be rewindable. That means the source need to implement both `api.Source` and `api.Rewindable` interface. 
 
-A typical implementation is to save an ``offset`` as a field of the source. And update the offset value when reading in new value. Notice that, when implementing GetOffset() will be called by eKuiper system which means the offset value can be accessed by multiple go routines. So a lock is required when read or write the offset.
+A typical implementation is to save an `offset` as a field of the source. And update the offset value when reading in new value. Notice that, when implementing GetOffset() will be called by eKuiper system which means the offset value can be accessed by multiple go routines. So a lock is required when read or write the offset.
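The locking pattern described above (offset updated on every read, read back concurrently via `GetOffset()`) can be sketched as follows. This is a minimal illustration of the pattern, not the `api.Source`/`api.Rewindable` Go interfaces themselves, and the method names are hypothetical.

```python
import threading

class RewindableSource:
    """Sketch of the rewindable-source pattern: the offset is a field
    of the source, advanced on each read and guarded by a lock because
    get_offset() may be called concurrently by the runtime."""

    def __init__(self):
        self._offset = 0
        self._lock = threading.Lock()

    def read(self, record):
        # Advance the offset under the lock when a new value is read in.
        with self._lock:
            self._offset += 1
        return record

    def get_offset(self):
        with self._lock:
            return self._offset

    def rewind(self, offset):
        # Restore a checkpointed offset, also under the lock.
        with self._lock:
            self._offset = offset
```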
 
 
 
@@ -62,8 +62,8 @@ A configuration system is supported for eKuiper extension which will automatical
 
 There are 2 common configuration fields.
  
-* ``concurrency`` to specify how many instances will be started to run the source.
-* ``bufferLength`` to specify the maximum number of messages to be buffered in the memory. This is used to avoid the extra large memory usage that would cause out of memory error. Notice that the memory usage will be varied to the actual buffer. Increase the length here won't increase the initial memory allocation so it is safe to set a large buffer length. The default value is 102400, that is if each payload size is about 100 bytes, the maximum buffer size will be about 102400 * 100B ~= 10MB.
+* `concurrency` to specify how many instances will be started to run the source.
+* `bufferLength` to specify the maximum number of messages to be buffered in the memory. This is used to avoid the extra large memory usage that would cause out of memory error. Notice that the memory usage will be varied to the actual buffer. Increase the length here won't increase the initial memory allocation so it is safe to set a large buffer length. The default value is 102400, that is if each payload size is about 100 bytes, the maximum buffer size will be about 102400 * 100B ~= 10MB.
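The worst-case memory estimate quoted for `bufferLength` can be checked directly (the 100-byte payload size is the doc's own assumption):

```python
buffer_length = 102400   # default bufferLength from the doc above
payload_bytes = 100      # assumed average payload size
max_buffer = buffer_length * payload_bytes
print(max_buffer)                   # 10240000 bytes
print(max_buffer / (1024 * 1024))   # ~9.77 MiB, i.e. roughly 10 MB
```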
 
 ### Package the source
 Build the implemented source as a go plugin and make sure the output so file resides in the plugins/sources folder.

+ 2 - 2
docs/en_US/extension/native/sinks/file.md

@@ -16,12 +16,12 @@ Restart the eKuiper server to activate the plugin.
 
 | Property name | Optional | Description                                                  |
 | ------------- | -------- | ------------------------------------------------------------ |
-| path          | false    | The file path for saving the result, such as ``/tmp/result.txt`` |
+| path          | false    | The file path for saving the result, such as `/tmp/result.txt` |
 | interval      | true     | The time interval (ms) for writing the analysis result. The default value is 1000, which means write the analysis result with every one second. |
 
 ## Sample usage
 
-Below is a sample for selecting temperature great than 50 degree, and save the result into file ``/tmp/result.txt`` with every 5 seconds.
+Below is a sample for selecting temperature great than 50 degree, and save the result into file `/tmp/result.txt` with every 5 seconds.
 
 ```json
 {

+ 3 - 3
docs/en_US/extension/native/sources/random.md

@@ -14,7 +14,7 @@ Restart the eKuiper server to activate the plugin.
 
 ## Configuration
 
-The configuration for this source is ``$ekuiper/etc/sources/random.yaml``. The format is as below:
+The configuration for this source is `$ekuiper/etc/sources/random.yaml`. The format is as below:
 
 ```yaml
 default:
@@ -33,7 +33,7 @@ dedup:
 ```
 ### Global configurations
 
-Use can specify the global random source settings here. The configuration items specified in ``default`` section will be taken as default settings for the source when running this source.
+Use can specify the global random source settings here. The configuration items specified in `default` section will be taken as default settings for the source when running this source.
 
 ### interval
 
@@ -53,7 +53,7 @@ An int value. If it is a positive number, the source will not issue the messages
 
 ## Override the default settings
 
-If you have a specific connection that need to overwrite the default settings, you can create a customized section. In the previous sample, we create a specific setting named with ``test``.  Then you can specify the configuration with option ``CONF_KEY`` when creating the stream definition (see [stream specs](../../../sqls/streams.md) for more info).
+If you have a specific connection that need to overwrite the default settings, you can create a customized section. In the previous sample, we create a specific setting named with `test`.  Then you can specify the configuration with option `CONF_KEY` when creating the stream definition (see [stream specs](../../../sqls/streams.md) for more info).
 
 ## Sample usage
 

+ 4 - 4
docs/en_US/extension/native/sources/zmq.md

@@ -14,7 +14,7 @@ Restart the eKuiper server to activate the plugin.
 
 ## Configuration
 
-The configuration for this source is ``$ekuiper/etc/sources/zmq.yaml``. The format is as below:
+The configuration for this source is `$ekuiper/etc/sources/zmq.yaml`. The format is as below:
 
 ```yaml
 #Global Zmq configurations
@@ -25,7 +25,7 @@ test:
 ```
 ### Global configurations
 
-Use can specify the global zmq source settings here. The configuration items specified in ``default`` section will be taken as default settings for the source when connects to Zero Mq.
+Use can specify the global zmq source settings here. The configuration items specified in `default` section will be taken as default settings for the source when connects to Zero Mq.
 
 ### server
 
@@ -33,7 +33,7 @@ The url of the Zero Mq server that the source will subscribe to.
 
 ## Override the default settings
 
-If you have a specific connection that need to overwrite the default settings, you can create a customized section. In the previous sample, we create a specific setting named with ``test``.  Then you can specify the configuration with option ``CONF_KEY`` when creating the stream definition (see [stream specs](../../../sqls/streams.md) for more info).
+If you have a specific connection that need to overwrite the default settings, you can create a customized section. In the previous sample, we create a specific setting named with `test`.  Then you can specify the configuration with option `CONF_KEY` when creating the stream definition (see [stream specs](../../../sqls/streams.md) for more info).
 
 ## Sample usage
 
@@ -43,5 +43,5 @@ demo (
 	) WITH (DATASOURCE="demo", FORMAT="JSON", CONF_KEY="test", TYPE="zmq");
 ```
 
-The configuration keys "test" will be used. The Zero Mq topic to subscribe is "demo" as specified in the ``DATASOURCE``.
+The configuration keys "test" will be used. The Zero Mq topic to subscribe is "demo" as specified in the `DATASOURCE`.
 

+ 4 - 4
docs/en_US/getting_started.md

@@ -113,11 +113,11 @@ default:
   servers: [tcp://127.0.0.1:1883]
 ```
 
-You can use command ``kuiper show streams`` to see if the ``demo`` stream was created or not.
+You can use command `kuiper show streams` to see if the `demo` stream was created or not.
 
 ### Testing the stream through query tool
 
-Now the stream is created, it can be tested from ``kuiper query`` command. The `kuiper` prompt is displayed as below after typing `cli query`.
+Now the stream is created, it can be tested from `kuiper query` command. The `kuiper` prompt is displayed as below after typing `cli query`.
 
 ```sh
 $ bin/kuiper query
@@ -132,7 +132,7 @@ kuiper > select count(*), avg(humidity) as avg_hum, max(humidity) as max_hum fro
 query is submit successfully.
 ```
 
-Now if any data are published to the MQTT server available at ``tcp://127.0.0.1:1883``, then it prints message as following.
+Now if any data are published to the MQTT server available at `tcp://127.0.0.1:1883`, then it prints message as following.
 
 ```
 kuiper > [{"avg_hum":41,"count":4,"max_hum":91}]
@@ -146,7 +146,7 @@ kuiper > [{"avg_hum":41,"count":4,"max_hum":91}]
 ...
 ```
 
-You can press ``ctrl + c`` to break the query, and server will terminate streaming if detecting client disconnects from the query. Below is the log print at server.
+You can press `ctrl + c` to break the query, and server will terminate streaming if detecting client disconnects from the query. Below is the log print at server.
 
 ```
 ...

+ 3 - 3
docs/en_US/operation/cli/plugins.md

@@ -23,9 +23,9 @@ Sample:
 # bin/kuiper create plugin source random {"file":"http://127.0.0.1/plugins/sources/random.zip"}
 ```
 
-The command create a source plugin named ``random``. 
+The command create a source plugin named `random`. 
 
-- Specify the plugin definition in a file. If the plugin is complex, or the plugin is already wrote in text files with well organized formats, you can just specify the plugin definition through ``-f`` option.
+- Specify the plugin definition in a file. If the plugin is complex, or the plugin is already wrote in text files with well organized formats, you can just specify the plugin definition through `-f` option.
 
 Sample:
 
@@ -33,7 +33,7 @@ Sample:
 # bin/kuiper create plugin sink plugin1 -f /tmp/plugin1.txt
 ```
 
-Below is the contents of ``plugin1.txt``.
+Below is the contents of `plugin1.txt`.
 
 ```json
 {

+ 3 - 3
docs/en_US/operation/cli/rules.md

@@ -20,9 +20,9 @@ Sample:
 # bin/kuiper create rule rule1 '{"sql": "SELECT * from demo","actions": [{"log":  {}},{"mqtt":  {"server":"tcp://127.0.0.1:1883", "topic":"demoSink"}}]}'
 ```
 
-The command create a rule named ``rule1``. 
+The command create a rule named `rule1`. 
 
-- Specify the rule definition in file. If the rule is complex, or the rule is already wrote in text files with well organized formats, you can just specify the rule definition through ``-f`` option.
+- Specify the rule definition in file. If the rule is complex, or the rule is already wrote in text files with well organized formats, you can just specify the rule definition through `-f` option.
 
 Sample:
 
@@ -30,7 +30,7 @@ Sample:
 # bin/kuiper create rule rule1 -f /tmp/rule.txt
 ```
 
-Below is the contents of ``rule.txt``.
+Below is the contents of `rule.txt`.
 
 ```json
 {

+ 6 - 6
docs/en_US/operation/cli/streams.md

@@ -19,9 +19,9 @@ Sample:
 stream my_stream created
 ```
 
-The command create a stream named ``my_stream``. 
+The command create a stream named `my_stream`. 
 
-- Specify the stream definition in file. If the stream is complex, or the stream is already wrote in text files with well organized formats, you can just specify the stream definition through ``-f`` option.
+- Specify the stream definition in file. If the stream is complex, or the stream is already wrote in text files with well organized formats, you can just specify the stream definition through `-f` option.
 
 Sample:
 
@@ -30,7 +30,7 @@ Sample:
 stream my_stream created
 ```
 
-Below is the contents of ``my_stream.txt``.
+Below is the contents of `my_stream.txt`.
 
 ```json
 my_stream(id bigint, name string, score float)
@@ -103,7 +103,7 @@ Sample:
 kuiper > 
 ```
 
-After typing ``query`` sub-command, it prompts ``kuiper > ``, then type SQLs (see [eKuiper SQL reference](../../sqls/overview.md) for how to use eKuiper SQL) in the command prompt and press enter. 
+After typing `query` sub-command, it prompts `kuiper > `, then type SQLs (see [eKuiper SQL reference](../../sqls/overview.md) for how to use eKuiper SQL) in the command prompt and press enter. 
 
 The results will be print in the console.
 
@@ -112,7 +112,7 @@ kuiper > SELECT * FROM my_stream WHERE id > 10;
 [{"...":"..." ....}]
 ...
 ```
-- Press ``CTRL + C`` to stop the query; 
+- Press `CTRL + C` to stop the query; 
 
-- If no SQL are type, you can type ``quit`` or ``exit`` to quit the ``kuiper`` prompt console.
+- If no SQL are type, you can type `quit` or `exit` to quit the `kuiper` prompt console.
 

+ 3 - 3
docs/en_US/operation/cli/tables.md

@@ -19,9 +19,9 @@ Sample:
 table my_table created
 ```
 
-The command create a table named ``my_table``. 
+The command create a table named `my_table`. 
 
-- Specify the table definition in file. If the table is complex, or the table is already wrote in text files with well organized formats, you can just specify the table definition through ``-f`` option.
+- Specify the table definition in file. If the table is complex, or the table is already wrote in text files with well organized formats, you can just specify the table definition through `-f` option.
 
 Sample:
 
@@ -30,7 +30,7 @@ Sample:
 table my_table created
 ```
 
-Below is the contents of ``my_table.txt``.
+Below is the contents of `my_table.txt`.
 
 ```
 my_table(id bigint, name string, score float)

+ 1 - 1
docs/en_US/operation/compile/compile.md

@@ -6,7 +6,7 @@
 
   - Binary files that support EdgeX: `$ make build_with_edgex`
 
-+ Packages: `` $ make pkg``
++ Packages: ` $ make pkg`
 
   - Packages: `$ make pkg`
 

+ 7 - 7
docs/en_US/operation/config/authentication.md

@@ -1,11 +1,11 @@
 ## Authentication
 
-eKuiper support ``JWT RSA256`` authentication for the RESTful management APIs since ``1.4.0`` if enabled . Users need put their Public Key in ``etc/mgmt`` folder and use the corresponding Private key to sign the JWT Tokens.
-When user request the RESTful apis, put the ``Token`` in http request headers in the following format:
+eKuiper support `JWT RSA256` authentication for the RESTful management APIs since `1.4.0` if enabled . Users need put their Public Key in `etc/mgmt` folder and use the corresponding Private key to sign the JWT Tokens.
+When user request the RESTful apis, put the `Token` in http request headers in the following format:
 ```go
 Authorization: XXXXXXXXXXXXXXX
 ```
-If the token is correct, eKuiper will respond the result; otherwise, it will return http ``401``code.
+If the token is correct, eKuiper will respond the result; otherwise, it will return http `401`code.
 
 
 ### JWT Header
@@ -23,8 +23,8 @@ The JWT Payload should use the following format
 
 |  field   | optional |  meaning  |
 |  ----  | ----  | ----  |
-| iss  | false| Issuer , must use the same name with the public key put in ``etc/mgmt``|
-| aud  | false |Audience , must be ``eKuiper`` |
+| iss  | false| Issuer , must use the same name with the public key put in `etc/mgmt`|
+| aud  | false |Audience , must be `eKuiper` |
 | exp  | true |Expiration Time |
 | jti  | true |JWT ID |
 | iat  | true |Issued At |
@@ -38,8 +38,8 @@ There is an example in json format
   "adu": "eKuiper"
 }
 ```
-When use this format, user must make sure the correct Public key file ``sample_key.pub`` are under ``etc/mgmt`` .
+When use this format, user must make sure the correct Public key file `sample_key.pub` are under `etc/mgmt` .
 
 ### JWT Signature
 
-need use the Private key to sign the Tokens and put the corresponding Public Key in ``etc/mgmt`` .
+need use the Private key to sign the Tokens and put the corresponding Public Key in `etc/mgmt` .
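The token structure implied by the sections above (RS256 header, `iss`/`aud` payload, compact serialization) can be sketched with the standard library. This is a sketch of the encoding only: it stops at the signing input, because producing the RS256 signature requires the RSA private key and a JWT/crypto library; the `sample_key.pub` issuer value follows the example above.

```python
import base64
import json

def b64url(obj: dict) -> str:
    """Base64url-encode a JSON object without padding, as JWT requires."""
    raw = json.dumps(obj, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

# Header and payload fields from the tables above; iss must match the
# public key file placed under etc/mgmt, aud must be "eKuiper".
header = {"typ": "JWT", "alg": "RS256"}
payload = {"iss": "sample_key.pub", "aud": "eKuiper"}

signing_input = b64url(header) + "." + b64url(payload)
# A full token would append "." plus the RSA-SHA256 signature of
# signing_input made with the matching private key (e.g. via a JWT
# library); stdlib alone cannot produce an RS256 signature.
```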

+ 3 - 3
docs/en_US/operation/config/configuration_file.md

@@ -1,6 +1,6 @@
 # Basic configurations
 
-The configuration file for eKuiper is at ``$kuiper/etc/kuiper.yaml``. The configuration file is yaml format.
+The configuration file for eKuiper is at `$kuiper/etc/kuiper.yaml`. The configuration file is yaml format.
 Application can be configured through environment variables. Environment variables are taking precedence over their counterparts
 in the yaml file. In order to use env variable for given config we must use formatting as follows:
 `KUIPER__` prefix + config path elements connected by `__`.
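The environment-variable naming rule above (`KUIPER__` prefix, path elements joined by `__`) can be sketched as a small mapping function. The lowercasing of path elements shown here is an assumption for illustration, as is the `KUIPER__BASIC__DEBUG` example name.

```python
def env_to_config_path(name: str):
    """Sketch of the mapping described above: strip the KUIPER__ prefix
    and split the remainder on '__' into yaml config path elements
    (lowercasing is an assumption, not taken from the doc)."""
    prefix = "KUIPER__"
    if not name.startswith(prefix):
        return None  # not an eKuiper config override
    return [part.lower() for part in name[len(prefix):].split("__")]

print(env_to_config_path("KUIPER__BASIC__DEBUG"))  # ['basic', 'debug']
```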
@@ -70,7 +70,7 @@ The port for the rest api http server to listen to.
 The tls cert file path and key file path setting. If restTls is not set, the rest api server will listen on http. Otherwise, it will listen on https.
 
 ## authentication 
-eKuiper will check the ``Token`` for rest api when ``authentication`` option is true. please check this file for [more info](authentication.md).
+eKuiper will check the `Token` for rest api when `authentication` option is true. please check this file for [more info](authentication.md).
 
 ```yaml
 basic:
@@ -79,7 +79,7 @@ basic:
 
 ## Prometheus Configuration
 
-eKuiper can export metrics to prometheus if ``prometheus`` option is true. The prometheus will be served with the port specified by ``prometheusPort`` option.
+eKuiper can export metrics to prometheus if `prometheus` option is true. The prometheus will be served with the port specified by `prometheusPort` option.
 
 ```yaml
 basic:

+ 2 - 2
docs/en_US/operation/install/cent-os.md

@@ -6,9 +6,9 @@ This document describes how to install on CentOS.
 
 Unzip the installation package.
 
-``unzip kuiper-centos7-v0.0.1.zip``
+`unzip kuiper-centos7-v0.0.1.zip`
 
-Run the ``kuiper`` to verify eKuiper is installed successfully or not.
+Run the `kuiper` to verify eKuiper is installed successfully or not.
 
 ```shell
 # cd kuiper

+ 3 - 3
docs/en_US/operation/install/overview.md

@@ -22,11 +22,11 @@ log
 
 ### bin
 
-The ``bin`` directory includes all of executable files. Such as ``kuiper`` command.
+The `bin` directory includes all of executable files. Such as `kuiper` command.
 
 ### etc
 
-The ``etc`` directory contains the configuration files of eKuiper. Such as MQTT source configurations etc.
+The `etc` directory contains the configuration files of eKuiper. Such as MQTT source configurations etc.
 
 ### data
 
@@ -38,7 +38,7 @@ eKuiper allows users to develop your own plugins, and put these plugins into thi
 
 ### log
 
-All of the log files are under this folder. The default log file name is ``stream.log``.
+All of the log files are under this folder. The default log file name is `stream.log`.
 
 ## Next steps
 

File diff suppressed because it is too large
+ 7 - 7
docs/en_US/quick_start_docker.md


File diff suppressed because it is too large
+ 3 - 3
docs/en_US/rules/overview.md


File diff suppressed because it is too large
+ 25 - 25
docs/en_US/rules/sinks/edgex.md


File diff suppressed because it is too large
+ 2 - 2
docs/en_US/rules/sinks/rest.md


+ 26 - 26
docs/en_US/rules/sources/edgex.md

@@ -21,25 +21,25 @@ The types defined in readings will be converted into related [data types](../../
 
 #### Boolean
 
-If ``ValueType`` value of the reading is ``Bool``, then eKuiper tries to convert to ``boolean`` type. Following values will be converted into ``true``.
+If `ValueType` value of the reading is `Bool`, then eKuiper tries to convert to `boolean` type. Following values will be converted into `true`.
 
 - "1", "t", "T", "true", "TRUE", "True" 
 
-Following will be converted into ``false``.
+Following will be converted into `false`.
 
 - "0", "f", "F", "false", "FALSE", "False"
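The two value lists above translate directly into a conversion table; a minimal illustrative version (not eKuiper's Go implementation, and the error behavior for unlisted values is an assumption):

```python
# Conversion table mirroring the Bool reading rules listed above.
TRUE_VALUES = {"1", "t", "T", "true", "TRUE", "True"}
FALSE_VALUES = {"0", "f", "F", "false", "FALSE", "False"}

def to_boolean(value: str) -> bool:
    """Convert an EdgeX Bool reading string per the lists above."""
    if value in TRUE_VALUES:
        return True
    if value in FALSE_VALUES:
        return False
    # Behavior for other strings is not specified in the doc; raising
    # here is an assumption for the sketch.
    raise ValueError(f"cannot convert {value!r} to boolean")
```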
 
 #### Bigint
 
-If ``ValueType`` value of the reading is ``INT8`` , ``INT16``, ``INT32``,  ``INT64``,``UINT`` , ``UINT8`` , ``UINT16`` ,  ``UINT32`` , ``UINT64`` then eKuiper tries to convert to ``Bigint`` type. 
+If `ValueType` value of the reading is `INT8` , `INT16`, `INT32`,  `INT64`,``UINT` , `UINT8` , `UINT16` ,  `UINT32` , `UINT64` then eKuiper tries to convert to `Bigint` type. 
 
 #### Float
 
-If ``ValueType`` value of the reading is ``FLOAT32``, ``FLOAT64``, then eKuiper tries to convert to ``Float`` type. 
+If `ValueType` value of the reading is `FLOAT32`, `FLOAT64`, then eKuiper tries to convert to `Float` type. 
 
 #### String
 
-If ``ValueType`` value of the reading is ``String``, then eKuiper tries to convert to ``String`` type. 
+If `ValueType` value of the reading is `String`, then eKuiper tries to convert to `String` type. 
 
 #### Boolean array
 
@@ -47,15 +47,15 @@ If ``ValueType`` value of the reading is ``String``, then eKuiper tries to conve
 
 #### Bigint array
 
-All of ``INT8`` , ``INT16``, ``INT32``,  ``INT64``,``UINT`` , ``UINT8`` , ``UINT16`` ,  ``UINT32`` , ``UINT64``  array types in EdgeX will be converted to `Bigint` array.
+All of `INT8` , `INT16`, `INT32`,  `INT64`,``UINT` , `UINT8` , `UINT16` ,  `UINT32` , `UINT64`  array types in EdgeX will be converted to `Bigint` array.
 
 #### Float array
 
-All of ``FLOAT32``, ``FLOAT64``  array types in EdgeX will be converted to `Float` array.
+All of `FLOAT32`, `FLOAT64`  array types in EdgeX will be converted to `Float` array.
 
 ## Global configurations
 
-The configuration file of EdgeX source is at ``$ekuiper/etc/sources/edgex.yaml``. Below is the file format.
+The configuration file of EdgeX source is at `$ekuiper/etc/sources/edgex.yaml`. Below is the file format.
 
 ```yaml
 #Global Edgex configurations
@@ -73,23 +73,23 @@ default:
 
 
 
-Use can specify the global EdgeX settings here. The configuration items specified in ``default`` section will be taken as default settings for all EdgeX source. 
+Use can specify the global EdgeX settings here. The configuration items specified in `default` section will be taken as default settings for all EdgeX source. 
 
 ### protocol
 
-The protocol connect to EdgeX message bus, default value is ``tcp``.
+The protocol connect to EdgeX message bus, default value is `tcp`.
 
 ### server
 
-The server address of  EdgeX message bus, default value is ``localhost``.
+The server address of  EdgeX message bus, default value is `localhost`.
 
 ### port
 
-The port of EdgeX message bus, default value is ``5573``.
+The port of EdgeX message bus, default value is `5573`.
 
 ### connectionSelector
 
-specify the stream to reuse the connection to EdgeX message bus. The connection profile located in ``connections/connection.yaml``.
+specify the stream to reuse the connection to EdgeX message bus. The connection profile located in `connections/connection.yaml`.
 ```yaml
 edgex:
   redisMsgBus: #connection key
@@ -113,7 +113,7 @@ edgex:
     #    KeyPEMBlock:
     #    SkipCertVerify: true/false
 ```
-There is one configuration group for EdgeX message bus in the example, user need use ``edgex.redisMsgBus`` as the selector.
+There is one configuration group for EdgeX message bus in the example, user need use `edgex.redisMsgBus` as the selector.
 For example
 ```yaml
 #Global Edgex configurations
@@ -129,23 +129,23 @@ default:
   #    Username: user1
   #    Password: password
 ```
-*Note*: once specify the connectionSelector in specific configuration group , all connection related parameters will be ignored , in this case ``protocol: tcp | server: localhost | port: 5573``
+*Note*: once specify the connectionSelector in specific configuration group , all connection related parameters will be ignored , in this case `protocol: tcp | server: localhost | port: 5573`
 
 
 ### topic
 
+The topic name of the EdgeX message bus; the default value is `events`. Users can subscribe to the topics of the message bus
+The topic name of EdgeX message bus, default value is `events`. Users can subscribe to the topics of message bus
 directly or subscribe to topics exported by EdgeX application service. Notice that, the message type of the two types of
 topics are different, remember to set the appropriate messageType property.
 
 ### type
 
 The EdgeX message bus type, currently three types of message buses are supported. If specified other values, then will
-use the default ``redis`` value.
+use the default `redis` value.
 
-- ``zero``: Use ZeroMQ as EdgeX message bus.
-- ``mqtt``: Use the MQTT broker as EdgeX message bus.
-- ``redis``: Use Redis as EdgeX message bus. When using EdgeX docker compose, the type will be set to this by default.
+- `zero`: Use ZeroMQ as EdgeX message bus.
+- `mqtt`: Use the MQTT broker as EdgeX message bus.
+- `redis`: Use Redis as EdgeX message bus. When using EdgeX docker compose, the type will be set to this by default.
 
 ### messageType
 
@@ -153,13 +153,13 @@ The EdgeX message model type. If connected to the topic of EdgeX application ser
 Otherwise, if connected to the topic of EdgeX message bus directly to receive the message from device service or core
 data, the message is a "request". There are two available types of messageType property:
 
-- ``event``: The message will be decoded as a `dtos.Event` type. This is the default.
-- ``request``: The message will be decoded as a `requests.AddEventRequest` type.
+- `event`: The message will be decoded as a `dtos.Event` type. This is the default.
+- `request`: The message will be decoded as a `requests.AddEventRequest` type.
 
 ### optional
 
 If MQTT message bus is used, some other optional configurations can be specified. Please notice that all of values in
-optional are **<u>string type</u>**, so values for these configurations should be string - such as ``KeepAlive: "5000"``
+optional are **<u>string type</u>**, so values for these configurations should be strings - such as `KeepAlive: "5000"`
 . Below optional configurations are supported, please check MQTT specification for the detailed information.
 
 - ClientId
@@ -178,7 +178,7 @@ optional are **<u>string type</u>**, so values for these configurations should b
 
 ### Override the default settings
 
-In some cases, maybe you want to consume message from multiple topics from message bus.  eKuiper supports to specify another configuration, and use the ``CONF_KEY`` to specify the newly created key when you create a stream.
+In some cases, you may want to consume messages from multiple topics of the message bus. eKuiper supports specifying another configuration, and you can use `CONF_KEY` to refer to the newly created key when you create a stream.
 
 ```yaml
 #Override the global configurations
@@ -189,7 +189,7 @@ demo1: #Conf_key
   topic: events
 ```
 
-If you have a specific connection that need to overwrite the default settings, you can create a customized section. In the previous sample, we create a specific setting named with ``demo1``.  Then you can specify the configuration with option ``CONF_KEY`` when creating the stream definition (see [stream specs](../../sqls/streams.md) for more info).
+If you have a specific connection that needs to overwrite the default settings, you can create a customized section. In the previous sample, we created a specific setting named `demo1`. Then you can specify the configuration with the option `CONF_KEY` when creating the stream definition (see [stream specs](../../sqls/streams.md) for more info).
 
 **Sample**
 
@@ -197,5 +197,5 @@ If you have a specific connection that need to overwrite the default settings, y
 create stream demo1() WITH (FORMAT="JSON", type="edgex", CONF_KEY="demo1");
 ```
 
-The configuration keys used for these specific settings are the same as in ``default`` settings, any values specified in specific settings will overwrite the values in ``default`` section.
+The configuration keys used for these specific settings are the same as in the `default` settings; any values specified in specific settings will overwrite the values in the `default` section.
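
The override behavior described above amounts to a shallow overlay of a `CONF_KEY` section on top of the `default` section. A minimal sketch (the default values follow the sample; the override value is hypothetical):

```python
# CONF_KEY override semantics: keys present in the specific section win,
# everything else falls back to the `default` section.
default_conf = {"protocol": "tcp", "server": "localhost", "port": 5573, "topic": "events"}
demo1_conf = {"server": "10.0.0.5"}  # hypothetical override value

effective = {**default_conf, **demo1_conf}  # specific settings overwrite defaults
print(effective)
```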
 

+ 6 - 6
docs/en_US/rules/sources/http_pull.md

@@ -1,6 +1,6 @@
 # HTTP pull source 
 
-eKuiper provides built-in support for pulling HTTP source stream, which can pull the message from HTTP server broker and feed into the eKuiper processing pipeline.  The configuration file of HTTP pull source is at ``etc/sources/httppull.yaml``. Below is the file format.
+eKuiper provides built-in support for an HTTP pull source stream, which can pull messages from an HTTP server and feed them into the eKuiper processing pipeline. The configuration file of the HTTP pull source is at `etc/sources/httppull.yaml`. Below is the file format.
 
 ```yaml
 #Global httppull configurations
@@ -33,7 +33,7 @@ application_conf: #Conf_key
 
 ## Global HTTP pull configurations
 
-Use can specify the global HTTP pull settings here. The configuration items specified in ``default`` section will be taken as default settings for all HTTP connections. 
+Users can specify the global HTTP pull settings here. The configuration items specified in the `default` section are taken as default settings for all HTTP connections.
 
 ### url
 
@@ -64,11 +64,11 @@ Body type, it could be none|text|json|html|xml|javascript|format.
 
 ### certificationPath
 
-The location of certification path. It can be an absolute path, or a relative path. If it is an relative path, then the base path is where you excuting the ``kuiperd`` command. For example, if you run ``bin/kuiperd`` from ``/var/kuiper``, then the base path is ``/var/kuiper``; If you run ``./kuiperd`` from ``/var/kuiper/bin``, then the base path is ``/var/kuiper/bin``.  Such as  ``d3807d9fa5-certificate.pem``.
+The location of the certificate path. It can be an absolute path or a relative path. If it is a relative path, then the base path is where you execute the `kuiperd` command. For example, if you run `bin/kuiperd` from `/var/kuiper`, then the base path is `/var/kuiper`; if you run `./kuiperd` from `/var/kuiper/bin`, then the base path is `/var/kuiper/bin`. An example value is `d3807d9fa5-certificate.pem`.
 
 ### privateKeyPath
 
-The location of private key path. It can be an absolute path, or a relative path.  For more detailed information, please refer to ``certificationPath``. Such as ``d3807d9fa5-private.pem.key``.
+The location of the private key path. It can be an absolute path or a relative path. For more detailed information, please refer to `certificationPath`. An example value is `d3807d9fa5-private.pem.key`.
 
 ### rootCaPath
 
@@ -86,7 +86,7 @@ The HTTP request headers that you want to send along with the HTTP request.
 
 ## Override the default settings
 
-If you have a specific connection that need to overwrite the default settings, you can create a customized section. In the previous sample, we create a specific setting named with ``application_conf``.  Then you can specify the configuration with option ``CONF_KEY`` when creating the stream definition (see [stream specs](../../sqls/streams.md) for more info).
+If you have a specific connection that needs to overwrite the default settings, you can create a customized section. In the previous sample, we created a specific setting named `application_conf`. Then you can specify the configuration with the option `CONF_KEY` when creating the stream definition (see [stream specs](../../sqls/streams.md) for more info).
 
 **Sample**
 
@@ -96,5 +96,5 @@ demo (
 	) WITH (DATASOURCE="test/", FORMAT="JSON", TYPE="httppull", KEY="USERID", CONF_KEY="application_conf");
 ```
 
-The configuration keys used for these specific settings are the same as in ``default`` settings, any values specified in specific settings will overwrite the values in ``default`` section.
+The configuration keys used for these specific settings are the same as in the `default` settings; any values specified in specific settings will overwrite the values in the `default` section.
 

+ 10 - 10
docs/en_US/rules/sources/mqtt.md

@@ -1,6 +1,6 @@
 # MQTT source 
 
+eKuiper provides built-in support for an MQTT source stream, which can subscribe to messages from an MQTT broker and feed them into the eKuiper processing pipeline. The configuration file of the MQTT source is at `$ekuiper/etc/mqtt_source.yaml`. Below is the file format.
+eKuiper provides built-in support for MQTT source stream, which can subscribe the message from MQTT broker and feed into the eKuiper processing pipeline.  The configuration file of MQTT source is at `$ekuiper/etc/mqtt_source.yaml`. Below is the file format.
 
 ```yaml
 #Global MQTT configurations
@@ -25,7 +25,7 @@ demo_conf: #Conf_key
 
 ## Global MQTT configurations
 
-Use can specify the global MQTT settings here. The configuration items specified in ``default`` section will be taken as default settings for all MQTT connections. 
+Users can specify the global MQTT settings here. The configuration items specified in the `default` section are taken as default settings for all MQTT connections.
 
 ### qos
 
@@ -33,7 +33,7 @@ The default subscription QoS level.
 
 ### servers
 
-The server list for MQTT message broker. Currently, only ``ONE`` server can be specified.
+The server list for the MQTT message broker. Currently, only `ONE` server can be specified.
 
 ### username
 
@@ -54,11 +54,11 @@ The client id for MQTT connection. If not specified, an uuid will be used.
 
 ### certificationPath
 
-The location of certification path. It can be an absolute path, or a relative path. If it is an relative path, then the base path is where you excuting the ``kuiperd`` command. For example, if you run ``bin/kuiperd`` from ``/var/kuiper``, then the base path is ``/var/kuiper``; If you run ``./kuiperd`` from ``/var/kuiper/bin``, then the base path is ``/var/kuiper/bin``.  Such as  ``d3807d9fa5-certificate.pem``.
+The location of the certificate path. It can be an absolute path or a relative path. If it is a relative path, then the base path is where you execute the `kuiperd` command. For example, if you run `bin/kuiperd` from `/var/kuiper`, then the base path is `/var/kuiper`; if you run `./kuiperd` from `/var/kuiper/bin`, then the base path is `/var/kuiper/bin`. An example value is `d3807d9fa5-certificate.pem`.
 
 ### privateKeyPath
 
-The location of private key path. It can be an absolute path, or a relative path.  For more detailed information, please refer to ``certificationPath``. Such as ``d3807d9fa5-private.pem.key``.
+The location of the private key path. It can be an absolute path or a relative path. For more detailed information, please refer to `certificationPath`. An example value is `d3807d9fa5-private.pem.key`.
 
 ### rootCaPath
 
@@ -70,7 +70,7 @@ Control if to skip the certification verification. If it is set to true, then sk
 
 ### connectionSelector
 
-specify the stream to reuse the connection to mqtt broker. The connection profile located in ``connections/connection.yaml``.
+Specify the stream to reuse the connection to the MQTT broker. The connection profile is located in `connections/connection.yaml`.
 ```yaml
 mqtt:
   localConnection: #connection key
@@ -92,7 +92,7 @@ mqtt:
     #protocolVersion: 3
 
 ```
-There are two configuration groups for mqtt in the example, user need use ``mqtt.localConnection`` or ``mqtt.cloudConnection`` as the selector.
+There are two configuration groups for MQTT in the example; users need to use `mqtt.localConnection` or `mqtt.cloudConnection` as the selector.
 For example
 ```yaml
 #Global MQTT configurations
@@ -105,7 +105,7 @@ default:
   #privateKeyPath: /var/kuiper/xyz-private.pem.key
   connectionSelector: mqtt.localConnection
 ```
-*Note*: once specify the connectionSelector in specific configuration group , all connection related parameters will be ignored , in this case ``servers: [tcp://127.0.0.1:1883]``
+*Note*: once the connectionSelector is specified in a configuration group, all connection-related parameters will be ignored, in this case `servers: [tcp://127.0.0.1:1883]`.
 
 ### bufferLength
 
@@ -148,7 +148,7 @@ Expected field type.
 
 ## Override the default settings
 
-If you have a specific connection that need to overwrite the default settings, you can create a customized section. In the previous sample, we create a specific setting named with ``demo``.  Then you can specify the configuration with option ``CONF_KEY`` when creating the stream definition (see [stream specs](../../sqls/streams.md) for more info).
+If you have a specific connection that needs to overwrite the default settings, you can create a customized section. In the previous sample, we created a specific setting named `demo`. Then you can specify the configuration with the option `CONF_KEY` when creating the stream definition (see [stream specs](../../sqls/streams.md) for more info).
 
 **Sample**
 
@@ -158,4 +158,4 @@ demo (
 	) WITH (DATASOURCE="test/", FORMAT="JSON", KEY="USERID", CONF_KEY="demo");
 ```
 
-The configuration keys used for these specific settings are the same as in ``default`` settings, any values specified in specific settings will overwrite the values in ``default`` section.
+The configuration keys used for these specific settings are the same as in the `default` settings; any values specified in specific settings will overwrite the values in the `default` section.

+ 4 - 4
docs/en_US/sqls/built-in_functions.md

@@ -94,7 +94,7 @@ Aggregate functions perform a calculation on a set of values and return a single
 | rtrim    | rtrim(col1) | Removes all trailing whitespace (tabs and spaces) from the provided String.                                                |
 | substring| substring(col1, start, end) |  returns the substring of the provided String from the provided Int index (0-based, inclusive) to the end of the String.                                                           |
 | startswith| startswith(col1, str) | Returns Boolean, whether the first string argument starts with the second string argument.                  |
-| split_value | split_value(col1, str_splitter, index) | Split the value of the 1st parameter with the 2nd parameter, and return the value of split array that indexed with the 3rd parameter.<br />``split_value("/test/device001/message","/",0) AS a``, the returned value of function is empty; <br />``split_value("/test/device001/message","/",3) AS a``, the returned value of function is ``message``; |
+| split_value | split_value(col1, str_splitter, index) | Splits the value of the 1st parameter with the 2nd parameter, and returns the element of the split array indexed by the 3rd parameter.<br />`split_value("/test/device001/message","/",0) AS a`, the returned value of the function is empty; <br />`split_value("/test/device001/message","/",3) AS a`, the returned value of the function is `message`; |
 | trim      | trim(col1) | Removes all leading and trailing whitespace (tabs and spaces) from the provided String.                                    |
 | upper     | upper(col1)| Returns the uppercase version of the given String.|
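
The two `split_value` examples in the table can be reproduced with ordinary string splitting; a Python sketch of the documented behavior:

```python
def split_value(value, splitter, index):
    # "/test/device001/message".split("/") -> ["", "test", "device001", "message"],
    # so index 0 is the empty string and index 3 is "message", as documented.
    return value.split(splitter)[index]

assert split_value("/test/device001/message", "/", 0) == ""
assert split_value("/test/device001/message", "/", 3) == "message"
```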
 
@@ -139,7 +139,7 @@ When casting to datetime type, the supported column type and casting rule are:
 
 1. If column is datatime type, just return the value.
 2. If column is bigint or float type, the number will be treated as the milliseconds elapsed since January 1, 1970 00:00:00 UTC and converted.
-3. If column is string, it will be parsed to datetime with the default format: ``"2006-01-02T15:04:05.000Z07:00"``.
+3. If column is string, it will be parsed to datetime with the default format: `"2006-01-02T15:04:05.000Z07:00"`.
 4. Other types are not supported.
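
The default layout `"2006-01-02T15:04:05.000Z07:00"` is a Go reference-time layout. For illustration, an approximately equivalent parse in Python (the input string is a made-up example):

```python
from datetime import datetime

# Roughly the same shape as the Go layout "2006-01-02T15:04:05.000Z07:00":
# date, time with fractional seconds, and a numeric UTC offset.
s = "2021-07-01T12:30:45.123+08:00"
dt = datetime.strptime(s, "%Y-%m-%dT%H:%M:%S.%f%z")
print(dt.isoformat())
```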
 
 ## Hashing Functions
@@ -166,7 +166,7 @@ When casting to datetime type, the supported column type and casting rule are:
 | cardinality | cardinality(col1) | The number of members in the group. The null value is 0.     |
 | newuuid     | newuuid()         | Returns a random 16-byte UUID.                               |
 | tstamp      | tstamp()          | Returns the current timestamp in milliseconds from 00:00:00 Coordinated Universal Time (UTC), Thursday, 1 January 1970 |
-| mqtt        | mqtt(topic)       | Returns the MQTT meta-data of specified key. The current supported keys<br />- topic: return the topic of message.  If there are multiple stream source, then specify the source name in parameter. Such as ``mqtt(src1.topic)``<br />- messageid: return the message id of message. If there are multiple stream source, then specify the source name in parameter. Such as ``mqtt(src2.messageid)`` |
-| meta        | meta(topic)       | Returns the meta-data of specified key. The key could be:<br/> - a standalone key if there is only one source in the from clause, such as ``meta(device)``<br />- A qualified key to specify the stream, such as ``meta(src1.device)`` <br />- A key with arrow for multi level meta data, such as ``meta(src1.reading->device->name)`` This assumes reading is a map structure meta data. |
+| mqtt        | mqtt(topic)       | Returns the MQTT meta-data of the specified key. The currently supported keys:<br />- topic: return the topic of the message. If there are multiple stream sources, specify the source name in the parameter, such as `mqtt(src1.topic)`<br />- messageid: return the message id of the message. If there are multiple stream sources, specify the source name in the parameter, such as `mqtt(src2.messageid)` |
+| meta        | meta(topic)       | Returns the meta-data of the specified key. The key could be:<br/> - a standalone key if there is only one source in the from clause, such as `meta(device)`<br />- a qualified key to specify the stream, such as `meta(src1.device)` <br />- a key with an arrow for multi-level meta data, such as `meta(src1.reading->device->name)`. This assumes reading is a map structure meta data. |
 | window_start| window_start()   | Return the window start timestamp in int64 format. If there is no time window, it returns 0. The window time is aligned with the timestamp notion of the rule. If the rule is using processing time, then the window start timestamp is the processing timestamp. If the rule is using event time, then the window start timestamp is the event timestamp.   |
 | window_end| window_end()   | Return the window end timestamp in int64 format. If there is no time window, it returns 0. The window time is aligned with the timestamp notion of the rule. If the rule is using processing time, then the window start timestamp is the processing timestamp. If the rule is using event time, then the window start timestamp is the event timestamp.  |

+ 2 - 2
docs/en_US/sqls/data_types.md

@@ -14,7 +14,7 @@ Below is the list of data types supported.
 | 2    | float     | The float type.                                              |
 | 3    | string    | Text values, comprised of Unicode characters.                |
 | 4    | datetime  | datatime type.          |
-| 5    | boolean   | The boolean type, the value could be ``true`` or ``false``.  |
+| 5    | boolean   | The boolean type, the value could be `true` or `false`.  |
 | 6    | array     | The array type, can be any types from simple data or struct type |
 | 7    | struct    | The complex type. Set of name/value pairs. Values must be of supported data type. |
 
@@ -32,7 +32,7 @@ Array and struct are not supported in any binary operations. The compatibility o
 | datetime | Y     | Y      | Y, if in the valid format | Y       |  N     |
 | boolean  | N     | N      | N                         | N       |  Y     |
 
- The default format for datetime string is ``"2006-01-02T15:04:05.000Z07:00"``
+ The default format for datetime string is `"2006-01-02T15:04:05.000Z07:00"`
 
  For `nil` value, we follow the rules:
 

+ 2 - 2
docs/en_US/sqls/json_expr.md

@@ -35,7 +35,7 @@
 
 ### Identifier 
 
-Source Dereference (`.`) The source dereference operator can be used to specify columns by dereferencing the source stream or table. The ``->`` dereference selects a key in a nested JSON object.
+Source Dereference (`.`): the source dereference operator can be used to specify columns by dereferencing the source stream or table. The `->` dereference selects a key in a nested JSON object.
 
 ```
 SELECT demo.age FROM demo
@@ -106,7 +106,7 @@ SELECT d.friends[0]->last FROM demo AS d
 
 Slices allow you to select a contiguous subset of an array. 
 
-``field[from:to)``is the interval before closing and opening, excluding to. If from is not specified, then it means start from the 1st element of array; If to is not specified, then it means end with the last element of array.
+`field[from:to)` is a half-open interval that includes `from` and excludes `to`. If `from` is not specified, the slice starts from the 1st element of the array; if `to` is not specified, it ends with the last element of the array.
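
The half-open `[from:to)` semantics match Python's list slicing, which makes for a convenient sketch (the `children` data is illustrative):

```python
children = ["alice", "bob", "carol", "dave"]  # illustrative array field

# field[from:to) keeps `from` and excludes `to`, like a Python slice
assert children[0:1] == ["alice"]
# an omitted `from` starts at the first element; an omitted `to` runs to the last
assert children[:2] == ["alice", "bob"]
assert children[1:] == ["bob", "carol", "dave"]
```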
 
 ```
 SELECT children[0:1] FROM demo

+ 3 - 3
docs/en_US/sqls/streams.md

@@ -48,7 +48,7 @@ my_stream
 WITH ( datasource = "topic/temperature", FORMAT = "json", KEY = "id");
 ```
 
-The stream will subscribe to MQTT topic ``topic/temperature``, the server connection uses ``servers`` key of ``default`` section in configuration file ``$ekuiper/etc/mqtt_source.yaml``. 
+The stream will subscribe to the MQTT topic `topic/temperature`; the server connection uses the `servers` key of the `default` section in the configuration file `$ekuiper/etc/mqtt_source.yaml`.
 
 - See [MQTT source](../rules/sources/mqtt.md) for more info.
 
@@ -65,7 +65,7 @@ demo (
 	) WITH (DATASOURCE="test/", FORMAT="JSON", KEY="USERID", CONF_KEY="demo");
 ```
 
-The stream will subscribe to MQTT topic ``test/``, the server connection uses settings of ``demo`` section in configuration file ``$ekuiper/etc/mqtt_source.yaml``. 
+The stream will subscribe to the MQTT topic `test/`; the server connection uses the settings of the `demo` section in the configuration file `$ekuiper/etc/mqtt_source.yaml`.
 
 - See [MQTT source](../rules/sources/mqtt.md) for more info.
 
@@ -108,7 +108,7 @@ schemaless_stream
 WITH ( datasource = "topic/temperature", FORMAT = "json", KEY = "id");
 ```
 
-Schema-less stream field data type will be determined at runtime. If the field is used in an incompatible clause, a runtime error will be thrown and send to the sink. For example, ``where temperature > 30``. Once a temperature is not a number, an error will be sent to the sink.
+Schema-less stream field data types will be determined at runtime. If a field is used in an incompatible clause, a runtime error will be thrown and sent to the sink. For example, with `where temperature > 30`, once a temperature value is not a number, an error will be sent to the sink.
 
 See [Query language elements](query_language_elements.md) for more information on the SQL language.
 

+ 3 - 3
docs/en_US/sqls/windows.md

@@ -8,7 +8,7 @@ All the windowing operations output results at the end of the window. The output
 
 ## Time-units
 
-There are 5 time-units can be used in the windows. For example, ``TUMBLINGWINDOW(ss, 10)``, which means group the data with tumbling with with 10  seconds interval.
+There are 5 time-units that can be used in the windows. For example, `TUMBLINGWINDOW(ss, 10)` means grouping the data with a tumbling window of a 10-second interval.
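
A tumbling window partitions the timeline into fixed, non-overlapping intervals; one way to sketch the 10-second grouping (timestamps in milliseconds, data illustrative):

```python
from collections import defaultdict

events = [(1000, "a"), (4000, "b"), (12000, "c"), (19999, "d"), (20000, "e")]
windows = defaultdict(list)
for ts, payload in events:
    window_start = ts - ts % 10000  # align down to a 10-second (10000 ms) boundary
    windows[window_start].append(payload)

print(dict(windows))  # events grouped by window start
```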
 
 **DD**: day unit
 
@@ -132,13 +132,13 @@ SELECT * FROM demo GROUP BY COUNTWINDOW(3,1) FILTER(where revenue > 100)
 
 Every event has a timestamp associated with it. The timestamp will be used to calculate the window. By default, a timestamp will be added when an event feed into the source which is called `processing time`. We also support to specify a field as the timestamp, which is called `event time`. The timestamp field is specified in the stream definition. In the below definition, the field `ts` is specified as the timestamp field.
 
-``
+```sql
 CREATE STREAM demo (
 					color STRING,
 					size BIGINT,
 					ts BIGINT
 				) WITH (DATASOURCE="demo", FORMAT="json", KEY="ts", TIMESTAMP="ts")
-``
+```
 
 In event time mode, the watermark algorithm is used to calculate a window.
 

File diff suppressed because it is too large
+ 5 - 5
docs/zh_CN/README.md


+ 20 - 20
docs/zh_CN/edgex/edgex_meta.md

@@ -4,9 +4,9 @@
 
 ## EdgeX 消息总线上收到的消息模型
 
-在 EdgeX 消息总线上收到的数据结构如下,一个 ``Event`` 结构体封装了相关的「元数据」(ID, DeviceName, ProfileName, SourceName, Origin, Tags),以及从设备服务中采集到的实际数据 (在 ``Readings`` 字段中) 。
+在 EdgeX 消息总线上收到的数据结构如下,一个 `Event` 结构体封装了相关的「元数据」(ID, DeviceName, ProfileName, SourceName, Origin, Tags),以及从设备服务中采集到的实际数据 (在 `Readings` 字段中) 。
 
-与``Event`` 类似, ``Reading`` 也包含了一些元数据 (ID, DeviceName... 等)。
+与 `Event` 类似,`Reading` 也包含了一些元数据 (ID, DeviceName... 等)。
 
 - Event
   - ID
@@ -33,53 +33,53 @@
 
 由于 EdgeX v2 重构了消息元数据,从 eKuiper v1.2.0 之前的版本升级的用户需要注意元数据的突破性变化。
 
-1. ``Event`` 和 ``Reading`` 中移除了元数据 `Pushed`, `Created` 和 `Modified` 。
-2. `Event````Reading`` 中的元数据 `Device` 改名为 `DeviceName` 。
-3. `Reading`` 中的元数据 `Name` 改名为 `ResourceName` 。
+1. `Event` 和 `Reading` 中移除了元数据 `Pushed`, `Created` 和 `Modified` 。
+2. `Event` 和 `Reading` 中的元数据 `Device` 改名为 `DeviceName` 。
+3. `Reading` 中的元数据 `Name` 改名为 `ResourceName` 。
 
 ## eKuiper 中的 EdgeX 数据模型
 
 那么在 eKuiper 中, EdgeX 数据是如何被管理的?让我们来看个例子。
 
-如下所示,首先用户创建了一个名为 ``events`` 的 EdgeX 流定义(以黄色高亮标示)。
+如下所示,首先用户创建了一个名为 `events` 的 EdgeX 流定义(以黄色高亮标示)。
 
 <img src="./create_stream.png" style="zoom:50%;" />
 
 其次,如下所示,一条消息被发送到消息总线。
 
-- Device name 为 ``demo``,以绿色高亮标示
-- Reading 名称为 ``temperature`` & ``Humidity`` ,用红色高亮标示
-- 这里有些 ``元数据`` 是没有必要「可见」的,但是这些值在分析的时候可能会被用到,例如``Event`` 结构体中的 ``DeviceName`` 字段。eKuiper 将这些值保存在 eKuiper 消息中的名为 metadata 的字段中,用户在分析阶段可以获取到这些值。**需要注意的是, EdgeX v2 中元数据 `Device` 已重命名为 `DeviceName` 。**
+- Device name 为 `demo`,以绿色高亮标示
+- Reading 名称为 `temperature` & `Humidity` ,用红色高亮标示
+- 这里有些 `元数据` 是没有必要「可见」的,但是这些值在分析的时候可能会被用到,例如 `Event` 结构体中的 `DeviceName` 字段。eKuiper 将这些值保存在 eKuiper 消息中的名为 metadata 的字段中,用户在分析阶段可以获取到这些值。**需要注意的是, EdgeX v2 中元数据 `Device` 已重命名为 `DeviceName` 。**
 
 <img src="./bus_data.png" style="zoom:50%;" />
 
 最后,提供一条 SQL 用于数据分析,此处请注意,
 
-- FROM 子句中的 ``events`` 为黄色高亮,就是在第一步中定义的流名字。
-- SELECT 中的 ``temperature`` & ``humidity`` 字段为红色高亮,它们是 readings 中的 ``Name`` 字段的值。
-- WHERE 子句中的 ``meta(deviceName)`` 为绿色高亮,用于从 ``Events ``结构体中抽取 ``device`` 字段。该 SQL 语句将过滤所有设备名称不是 ``demo`` 的记录。
+- FROM 子句中的 `events` 为黄色高亮,就是在第一步中定义的流名字。
+- SELECT 中的 `temperature` & `humidity` 字段为红色高亮,它们是 readings 中的 `Name` 字段的值。
+- WHERE 子句中的 `meta(deviceName)` 为绿色高亮,用于从 `Events` 结构体中抽取 `device` 字段。该 SQL 语句将过滤所有设备名称不是 `demo` 的记录。
 
 <img src="./sql.png" style="zoom:50%;" />
 
-以下是使用 ``meta`` 函数抽取别的元数据的一些例子。
+以下是使用 `meta` 函数抽取别的元数据的一些例子。
 
-1. ``meta(origin)``: 000  
+1. `meta(origin)`: 000  
 
    从 Event 结构体中获取 'origin' 元数据
 
-2. ``meta(temperature -> origin)``: 123 
+2. `meta(temperature -> origin)`: 123 
 
    从 reading[0] 中获取  'origin' 元数据,以 'temperature'  为 key
 
-3. ``meta(humidity -> origin)``: 456 
+3. `meta(humidity -> origin)`: 456 
 
    从 reading[1] 中获取  'origin' 元数据,以 'humidity' 为 key
 
-请注意,如果你想从 readings 中获取元数据,你需要使用 ``reading-name -> key`` 操作符来访问这些值。在前述例子中,``temperature`` & ``humidity````reading-names``,并且  ``key`` 是 readings 中的字段名字。
+请注意,如果你想从 readings 中获取元数据,你需要使用 `reading-name -> key` 操作符来访问这些值。在前述例子中,`temperature` & `humidity` 是 `reading-names`,并且 `key` 是 readings 中的字段名字。
 
-但是,如果你从 ``Events`` 中获取元数据,只需直接指定 key,如第一个例子所示。
+但是,如果你从 `Events` 中获取元数据,只需直接指定 key,如第一个例子所示。
 
-``meta`` 函数也可以用在 ``SELECT`` 子句中,以下为另外一个例子。请注意,如果在 ``SELECT`` 子句中使用了多个 ``meta`` 函数,你应该使用 ``AS`` 来指定一个别名,否则在前面的字段中的值将会被覆盖(不加别名,都有 meta 作为字段名)。
+`meta` 函数也可以用在 `SELECT` 子句中,以下为另外一个例子。请注意,如果在 `SELECT` 子句中使用了多个 `meta` 函数,你应该使用 `AS` 来指定一个别名,否则在前面的字段中的值将会被覆盖(不加别名,都有 meta 作为字段名)。
 
 ```sql
 SELECT temperature,humidity, meta(id) AS eid,meta(origin) AS eo, meta(temperature->id) AS tid, meta(temperature->origin) AS torigin, meta(Humidity->deviceName) AS hdevice, meta(Humidity->profileName) AS hprofile FROM demo WHERE meta(deviceName)="demo2"
@@ -87,7 +87,7 @@ SELECT temperature,humidity, meta(id) AS eid,meta(origin) AS eo, meta(temperatur
 
 ## 总结
 
-Kuper 的 ``meta`` 函数可以用于访问元数据,以下列出了所有在 EdgeX 的 ``Events`` 和 ``Reading`` 中支持的 key,
+eKuiper 的 `meta` 函数可以用于访问元数据,以下列出了所有在 EdgeX 的 `Events` 和 `Reading` 中支持的 key,
 
 - Events: id, deviceName, profileName, sourceName, origin, tags, correlationid
 - Readings: id, deviceName, profileName, origin, valueType

File diff suppressed because it is too large
+ 16 - 16
docs/zh_CN/edgex/edgex_rule_engine_tutorial.md


+ 1 - 1
docs/zh_CN/extension/native/develop/plugins_tutorial.md

@@ -216,7 +216,7 @@ require (
           extensions.mod     //existing extensions mod file for plugins
         samplePlugin
           go.mod             //new plugin project default mod file
-       ``
+       `
        
 5. 在 eKuiper 目录下,编译插件和eKuiper
    ```shell

+ 10 - 10
docs/zh_CN/getting_started.md

@@ -14,7 +14,7 @@ or
 $ tar -xzf kuiper-$VERISON-$OS-$ARCH.zip
 ```
 
-运行 ``bin/kuiperd`` 以启动 eKuiper 服务器
+运行 `bin/kuiperd` 以启动 eKuiper 服务器
 
 ```sh
 $ bin/kuiperd
@@ -96,7 +96,7 @@ eKuiper 具有许多用于复杂分析的内置函数和扩展,您可以访问
 
 流需要具有一个名称和一个架构,以定义每个传入事件应包含的数据。 对于这种情况,我们将使用 MQTT 源应对温度事件。 输入流可以通过 SQL 语言定义。
 
-我们创建一个名为 ``demo`` 的流,该流使用 ``DATASOURCE`` 属性中指定的 MQTT ``demo`` 主题。
+我们创建一个名为 `demo` 的流,该流使用 `DATASOURCE` 属性中指定的 MQTT `demo` 主题。
 ```sh
 $ bin/kuiper create stream demo '(temperature float, humidity bigint) WITH (FORMAT="JSON", DATASOURCE="demo")'
 ```
@@ -109,18 +109,18 @@ default:
   servers: [tcp://127.0.0.1:1883]
 ```
 
-您可以使用``kuiper show streams`` 命令来查看是否创建了 ``demo`` 流。
+您可以使用 `kuiper show streams` 命令来查看是否创建了 `demo` 流。
 
 ### 通过查询工具测试流
 
-现在已经创建了流,可以通过 ``kuiper query`` 命令对其进行测试。键入``kuiper query``后,显示 ``kuiper``提示符。
+现在已经创建了流,可以通过 `kuiper query` 命令对其进行测试。键入 `kuiper query` 后,显示 `kuiper` 提示符。
 
 ```sh
 $ bin/kuiper query
 kuiper > 
 ```
 
-在 ``kuiper``提示符下,您可以键入 SQL 并根据流验证 SQL。
+在 `kuiper` 提示符下,您可以键入 SQL 并根据流验证 SQL。
 
 ```sh
 kuiper > select count(*), avg(humidity) as avg_hum, max(humidity) as max_hum from demo where temperature > 30 group by TUMBLINGWINDOW(ss, 5);
@@ -128,7 +128,7 @@ kuiper > select count(*), avg(humidity) as avg_hum, max(humidity) as max_hum fro
 query is submit successfully.
 ```
 
-现在,如果有任何数据发布到位于``tcp://127.0.0.1:1883``的 MQTT 服务器,那么它打印如下消息。
+现在,如果有任何数据发布到位于 `tcp://127.0.0.1:1883` 的 MQTT 服务器,那么它打印如下消息。
 
 ```
 kuiper > [{"avg_hum":41,"count":4,"max_hum":91}]
@@ -142,7 +142,7 @@ kuiper > [{"avg_hum":41,"count":4,"max_hum":91}]
 ...
 ```
 
-您可以按 ``ctrl + c`` 键中断查询,如果检测到客户端与查询断开连接,服务器将终止流传输。 以下是服务器上的日志打印。
+您可以按 `ctrl + c` 键中断查询,如果检测到客户端与查询断开连接,服务器将终止流传输。 以下是服务器上的日志打印。
 
 ```
 ...
@@ -158,7 +158,7 @@ time="2019-09-09T21:46:54+08:00" level=info msg="stop the query."
 * sql:针对规则运行的查询
 * 动作:规则的输出动作
 
-我们可以运行 ``kuiper rule`` 命令来创建规则并在文件中指定规则定义
+我们可以运行 `kuiper rule` 命令来创建规则并在文件中指定规则定义
 
 ```sh
 $ bin/kuiper create rule ruleDemo -f myRule
@@ -173,10 +173,10 @@ $ bin/kuiper create rule ruleDemo -f myRule
     }]
 }
 ```
-您应该在流日志中看到一条成功的消息`` rule ruleDemo created``。 现在,规则已经建立并开始运行。
+您应该在流日志中看到一条成功的消息 `rule ruleDemo created`。 现在,规则已经建立并开始运行。
 
 ### 测试规则
-现在,规则引擎已准备就绪,可以接收来自 MQTT ``demo`` 主题的事件。 要对其进行测试,只需使用 MQTT 客户端将消息发布到 ``demo`` 主题即可。 该消息应为 json 格式,如下所示:
+现在,规则引擎已准备就绪,可以接收来自 MQTT `demo` 主题的事件。 要对其进行测试,只需使用 MQTT 客户端将消息发布到 `demo` 主题即可。 该消息应为 json 格式,如下所示:
 
 ```json
 {"temperature":31.2, "humidity": 77}

+ 2 - 2
docs/zh_CN/operation/cli/tables.md

@@ -8,7 +8,7 @@ create table $table_name $table_def | create table -f $table_def_file
 ```
 
 - 在命令行中定义表信息.
-  以下例子通过命令行创建了一个名为 ``my_table``的表。
+  以下例子通过命令行创建了一个名为 `my_table` 的表。
 ```shell
 # bin/kuiper create table my_table '(id bigint, name string, score float) WITH ( datasource = "lookup.json", FORMAT = "json", KEY = "id");'
 table my_table created
@@ -22,7 +22,7 @@ table my_table created
 table my_table created
 ```
 
-  以下是 ``my_table.txt``的内容.
+  以下是 `my_table.txt` 的内容。
 ```
 my_table(id bigint, name string, score float)
     WITH ( datasource = "lookup.json", FORMAT = "json", KEY = "id");

+ 7 - 7
docs/zh_CN/operation/config/authentication.md

@@ -1,11 +1,11 @@
 ## Authentication
 
-If enabled, eKuiper provides ``JWT RSA256``-based authentication for the RESTful APIs since version 1.4.0. Users need to put their public key in the ``etc/mgmt`` folder and use the corresponding private key to sign the JWT tokens.
-When requesting a RESTful API, put the ``Token`` in the HTTP request header in the following format:
+If enabled, eKuiper provides `JWT RSA256`-based authentication for the RESTful APIs since version 1.4.0. Users need to put their public key in the `etc/mgmt` folder and use the corresponding private key to sign the JWT tokens.
+When requesting a RESTful API, put the `Token` in the HTTP request header in the following format:
 ```
 Authorization:XXXXXXXXXXXXXXX
 ```
-If the token is correct, eKuiper responds with the result; otherwise, it returns HTTP code ``401``.
+If the token is correct, eKuiper responds with the result; otherwise, it returns HTTP code `401`.
 
 
 ### JWT Header
@@ -23,8 +23,8 @@ The JWT payload should use the following format
 
 |  Field  | Optional |  Meaning  |
 |  ----  | ----  | ----  |
-| iss  | No | Issuer; this field must match the name of the corresponding public key file in the ``etc/mgmt`` directory |
-| aud  | No | Audience; this field must be ``eKuiper`` |
+| iss  | No | Issuer; this field must match the name of the corresponding public key file in the `etc/mgmt` directory |
+| aud  | No | Audience; this field must be `eKuiper` |
 | exp  | Yes | Expiration time |
 | jti  | Yes | JWT ID |
 | iat  | Yes | Issued at |
@@ -38,8 +38,8 @@ The JWT payload should use the following format
   "adu": "eKuiper"
 }
 ```
-When using this format, users must make sure the correct public key file ``sample_key.pub`` is under ``etc/mgmt``.
+When using this format, users must make sure the correct public key file `sample_key.pub` is under `etc/mgmt`.
 
 ### JWT Signature
 
-The token needs to be signed with the private key, and the corresponding public key placed in ``etc/mgmt``.
+The token needs to be signed with the private key, and the corresponding public key placed in `etc/mgmt`.

+ 1 - 1
docs/zh_CN/operation/config/configuration_file.md

@@ -51,7 +51,7 @@ The port that the REST HTTP server listens on
 The locations of the TLS certificate file and key file. If the restTls option is not configured, the REST server starts as an HTTP server; otherwise it starts as an HTTPS server.
 
 ## authentication 
-When the ``authentication`` option is true, eKuiper will check the ``Token`` for REST API requests. Please check this file for [more information](authentication.md).
+When the `authentication` option is true, eKuiper will check the `Token` for REST API requests. Please check this file for [more information](authentication.md).
 
 ```yaml
 basic:

+ 1 - 1
docs/zh_CN/operation/install/cent-os.md

@@ -6,7 +6,7 @@
 
 Unzip the installation package.
 
-``unzip kuiper-centos7-v0.0.1.zip``
+`unzip kuiper-centos7-v0.0.1.zip`
 
 Run `kuiper` to verify whether eKuiper has been installed successfully.
 

+ 2 - 2
docs/zh_CN/rules/data_template.md

@@ -59,7 +59,7 @@ Golang templates can be applied to various data structures, such as maps and slices
 ```
 
 ::: v-pre
-When sending to a sink, if each record should be sent separately, first set the sink's ``sendSingle`` to `true`, then use the data template `{{json .}}`. The full configuration is as follows; users can copy it to the end of a sink configuration.
+When sending to a sink, if each record should be sent separately, first set the sink's `sendSingle` to `true`, then use the data template `{{json .}}`. The full configuration is as follows; users can copy it to the end of a sink configuration.
 :::
 
 ```json
@@ -68,7 +68,7 @@ Golang templates can be applied to various data structures, such as maps and slices
  "dataTemplate": "{{toJson .}}"
 ```
 
-- After setting ``sendSingle`` to `true`, eKuiper iterates over the `[]map[string]interface{}` data passed to the sink, and applies the user-specified data template to each record in the iteration
+- After setting `sendSingle` to `true`, eKuiper iterates over the `[]map[string]interface{}` data passed to the sink, and applies the user-specified data template to each record in the iteration
 - `toJson` is a function provided by eKuiper (refer to [eKuiper extended template functions](./overview.md#模版中支持的函数) for more eKuiper extensions); it converts the input into a JSON string, turning the map contents of each iterated record into a JSON string
 
 Golang also provides some built-in functions; users can refer to [more Golang built-in functions](https://golang.org/pkg/text/template/#hdr-Functions) for more information.
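
The `sendSingle` plus `{{toJson .}}` behavior described in this file can be sketched with Go's `text/template`. The `toJson` function below is a local stand-in for the eKuiper-provided template function of the same name, registered via a `FuncMap`; the record values are made up for illustration.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"text/template"
)

// sinkTemplate parses the dataTemplate "{{toJson .}}" with a local stand-in
// for the eKuiper-provided toJson template function.
var sinkTemplate = template.Must(template.New("sink").Funcs(template.FuncMap{
	"toJson": func(v interface{}) (string, error) {
		b, err := json.Marshal(v)
		return string(b), err
	},
}).Parse(`{{toJson .}}`))

// renderEach is what sendSingle=true amounts to: the template is applied
// once per record instead of once for the whole slice.
func renderEach(records []map[string]interface{}) []string {
	out := make([]string, 0, len(records))
	for _, r := range records {
		var buf bytes.Buffer
		if err := sinkTemplate.Execute(&buf, r); err == nil {
			out = append(out, buf.String())
		}
	}
	return out
}

func main() {
	records := []map[string]interface{}{
		{"temperature": 31.2, "humidity": 77},
		{"temperature": 29.5, "humidity": 80},
	}
	for _, s := range renderEach(records) {
		fmt.Println(s) // one JSON string per record
	}
}
```

Each record produces one rendered string, which is what lets the sink send every message separately.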

+ 4 - 4
docs/zh_CN/rules/sinks/edgex.md

@@ -16,7 +16,7 @@
 | topic              | Yes  | The topic name to publish to. The topic is a fixed value. If different messages need the topic specified dynamically, leave this property empty and set the topicPrefix property instead. Only one of the two properties may be set. If neither is set, the default topic `application` is used. |
 | topicPrefix        | Yes  | The prefix of the published topic. The topic to send to is concatenated dynamically in the format `$topicPrefix/$profileName/$deviceName/$sourceName`. |
 | contentType        | Yes  | The content type of the published message; if not specified, the default value `application/json` is used. |
-| messageType        | Yes  | The EdgeX message model type. To send the message as an event type as an application service does, set it to `event`. Otherwise, to send the message as an event request type as a device service or core data service does, set it to `request`. If not specified, the default value ``event`` is used. |
+| messageType        | Yes  | The EdgeX message model type. To send the message as an event type as an application service does, set it to `event`. Otherwise, to send the message as an event request type as a device service or core data service does, set it to `request`. If not specified, the default value `event` is used. |
 | metadata           | Yes  | This property is a field name, a field of the SQL SELECT clause that should look like `meta(*) AS xxx`, used to select all EdgeX metadata in the message. |
 | profileName        | Yes  | Allows users to specify the Profile name, which is used as the profile name of the Event struct sent from eKuiper. A profileName set in the metadata takes precedence. |
 | deviceName         | Yes  | Allows users to specify the device name, which is used as the device name of the Event struct sent from eKuiper. A deviceName set in the metadata takes precedence. |
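
The dynamic topic concatenation for `topicPrefix` can be sketched as below; the argument values are made up for illustration.

```go
package main

import "fmt"

// buildTopic sketches the documented dynamic topic format:
// $topicPrefix/$profileName/$deviceName/$sourceName.
func buildTopic(prefix, profile, device, source string) string {
	return fmt.Sprintf("%s/%s/%s/%s", prefix, profile, device, source)
}

func main() {
	// Hypothetical values: prefix "edgex", profile/device as in the example
	// later in this file, source name "temperature".
	fmt.Println(buildTopic("edgex", "kuiperProfile", "kuiper", "temperature"))
}
```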
@@ -144,7 +144,7 @@
 
 ## Publish with connection reuse
 
-The following is an example of how to use the connection reuse feature. We just need to remove the connection-related parameters and use ``connectionSelector`` to specify the connection to reuse. [More information](../sources/edgex.md#connectionselector)
+The following is an example of how to use the connection reuse feature. We just need to remove the connection-related parameters and use `connectionSelector` to specify the connection to reuse. [More information](../sources/edgex.md#connectionselector)
 
 ```json
 {
@@ -181,7 +181,7 @@
   ]
 }
 ```
-2) Use the following rule, and in the `edgex` action specify ``kuiper`` for the property `deviceName` and ``kuiperProfile`` for the property `profileName`.
+2) Use the following rule, and in the `edgex` action specify `kuiper` for the property `deviceName` and `kuiperProfile` for the property `profileName`.
 
 ```json
 {
@@ -212,7 +212,7 @@
 }
 ```
 Please note that:
-- The device name (``DeviceName``) in the Event struct became `kuiper`, and the profile name (``ProfileName``) became `kuiperProfile`
+- The device name (`DeviceName`) in the Event struct became `kuiper`, and the profile name (`ProfileName`) became `kuiperProfile`
 - The data in the `Events and Readings` structs is updated with the new values. The field `Origin` is updated by eKuiper with a new value (`0` here).
 
 ### Publish results to the EdgeX message bus while keeping the original metadata

+ 6 - 6
docs/zh_CN/rules/sources/edgex.md

@@ -23,7 +23,7 @@ The EdgeX source will try to get the type of a field,
 
 ### Boolean
 
-If the value of `ValueType` in a `reading` is `Bool`, then eKuiper tries to convert it to the ``boolean`` type, and the following values are converted to `true`.
+If the value of `ValueType` in a `reading` is `Bool`, then eKuiper tries to convert it to the `boolean` type, and the following values are converted to `true`.
 
 - "1", "t", "T", "true", "TRUE", "True" 
 
@@ -89,7 +89,7 @@ The port of the EdgeX message bus; the default is `5573`
 
 ## connectionSelector
 
-Reuse an EdgeX source connection. The connection configuration is located in ``connections/connection.yaml``.
+Reuse an EdgeX source connection. The connection configuration is located in `connections/connection.yaml`.
 ```yaml
 edgex:
   redisMsgBus: #connection key
@@ -113,7 +113,7 @@ edgex:
     #    KeyPEMBlock:
     #    SkipCertVerify: true/false
 ```
-For EdgeX connections, there is one connection config group here. Users should use ``edgex.redisMsgBus`` as the parameter. For example:
+For EdgeX connections, there is one connection config group here. Users should use `edgex.redisMsgBus` as the parameter. For example:
 ```yaml
 #Global Edgex configurations
 default:
@@ -124,7 +124,7 @@ default:
   topic: events
   messageType: event
 ```
-*Note*: Once the connectionSelector parameter is specified for a config group, all connection-related parameters are ignored. In the example above, `` protocol: tcp | server: localhost | port: 5573`` will be ignored.
+*Note*: Once the connectionSelector parameter is specified for a config group, all connection-related parameters are ignored. In the example above, `protocol: tcp | server: localhost | port: 5573` will be ignored.
 
 
 ## topic
@@ -145,8 +145,8 @@ The EdgeX message bus type; currently three kinds of message bus are supported. If the wrong
 The EdgeX message model type. If connected to a topic of the EdgeX application service, the message is of the "event" type. Otherwise, if connected directly to a message bus topic and receiving data sent by a device service or core
 data, the message type is "request". This parameter supports two types:
 
-- ``event``: the message will be decoded as the `dtos.Event` type. This is the default option.
-- ``request``: the message will be decoded as the `requests.AddEventRequest` type.
+- `event`: the message will be decoded as the `dtos.Event` type. This is the default option.
+- `request`: the message will be decoded as the `requests.AddEventRequest` type.
 
 ## optional
 

+ 3 - 3
docs/zh_CN/rules/sources/mqtt.md

@@ -67,7 +67,7 @@ The client ID of the MQTT connection. If not specified, a uuid will be used.
 
 ### connectionSelector
 
-Reuse an MQTT source connection. The connection configuration is located in ``connections/connection.yaml``.
+Reuse an MQTT source connection. The connection configuration is located in `connections/connection.yaml`.
 ```yaml
 mqtt:
   localConnection: #connection key
@@ -88,7 +88,7 @@ mqtt:
     #insecureSkipVerify: false
     #protocolVersion: 3
 ```
-For MQTT connections, there are two connection config groups here. Users should use ``mqtt.localConnection`` or ``mqtt.cloudConnection`` as the parameter. For example:
+For MQTT connections, there are two connection config groups here. Users should use `mqtt.localConnection` or `mqtt.cloudConnection` as the parameter. For example:
 ```yaml
 #Global MQTT configurations
 default:
@@ -100,7 +100,7 @@ default:
   #privateKeyPath: /var/kuiper/xyz-private.pem.key
   connectionSelector: mqtt.localConnection
 ```
-*Note*: Once the connectionSelector parameter is specified for a config group, all connection-related parameters are ignored. In the example above, `` servers: [tcp://127.0.0.1:1883]`` will be ignored.
+*Note*: Once the connectionSelector parameter is specified for a config group, all connection-related parameters are ignored. In the example above, `servers: [tcp://127.0.0.1:1883]` will be ignored.
 
 ### bufferLength
 

+ 1 - 1
docs/zh_CN/sqls/built-in_functions.md

@@ -138,7 +138,7 @@ eKuiper has many built-in functions that perform calculations on data.
 
 1. If the parameter is of datetime type, the original value is returned directly.
 2. If the parameter is of bigint or float type, its numeric value is treated as the milliseconds elapsed since Jan 1, 1970 00:00 and converted to the datetime type.
-3. If the parameter is of string type, it is converted to the datetime type with the default format ``"2006-01-02T15:04:05.000Z07:00"``.
+3. If the parameter is of string type, it is converted to the datetime type with the default format `"2006-01-02T15:04:05.000Z07:00"`.
 4. Parameters of other types cannot be converted.
 
 ## Hash functions

+ 2 - 2
docs/zh_CN/sqls/windows.md

@@ -130,13 +130,13 @@ SELECT * FROM demo GROUP BY COUNTWINDOW(3,1) FITLER(where revenue > 100)
 
 Every event has a timestamp associated with it. The timestamp is used to calculate windows. By default, the timestamp is added when an event feeds into the source, which is called `processing time`. Specifying a field as the timestamp is also supported, which is called `event time`. The timestamp field is specified in the stream definition. In the definition below, the field `ts` is specified as the timestamp field.
 
-``
+```sql
 CREATE STREAM demo (
 					color STRING,
 					size BIGINT,
 					ts BIGINT
 				) WITH (DATASOURCE="demo", FORMAT="json", KEY="ts", TIMESTAMP="ts");
-``
+```
 
 In event-time mode, the watermark algorithm is used to calculate windows.
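
As a rough sketch of event-time windowing, the snippet below assigns events to tumbling windows by their `ts` field instead of by arrival time. The real engine additionally runs a watermark algorithm to decide when a window may close despite late events; the field names follow the stream definition above and the window width is arbitrary.

```go
package main

import (
	"fmt"
	"sort"
)

// event mirrors the demo stream: a payload plus the event-time field ts
// (milliseconds), as designated by TIMESTAMP="ts".
type event struct {
	color string
	size  int64
	ts    int64
}

// tumble groups events into tumbling windows keyed by window start time:
// start = ts - ts%width, i.e. purely by event time.
func tumble(events []event, width int64) map[int64][]event {
	windows := make(map[int64][]event)
	for _, e := range events {
		start := e.ts - e.ts%width
		windows[start] = append(windows[start], e)
	}
	return windows
}

func main() {
	evs := []event{{"red", 1, 1000}, {"blue", 2, 1500}, {"red", 3, 2100}}
	wins := tumble(evs, 1000) // 1-second tumbling windows
	starts := make([]int64, 0, len(wins))
	for s := range wins {
		starts = append(starts, s)
	}
	sort.Slice(starts, func(i, j int) bool { return starts[i] < starts[j] })
	for _, s := range starts {
		fmt.Println(s, len(wins[s]))
	}
}
```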