Browse source

doc: reconstruct and add pages for en (#1579)

* refactor(doc): reconstruct

Signed-off-by: Jiyong Huang <huangjy@emqx.io>

* doc: add/modify all pages in en

Signed-off-by: Jiyong Huang <huangjy@emqx.io>

Signed-off-by: Jiyong Huang <huangjy@emqx.io>
ngjaying 2 years ago
parent
commit
6e7f97ca81
100 changed files with 1842 additions and 638 deletions
  1. README.md (+3 −3)
  2. docs/directory.json (+362 −284)
  3. docs/en_US/README.md (+49 −16)
  4. docs/en_US/api/cli/overview.md (+0 −0)
  5. docs/en_US/api/cli/plugins.md (+0 −0)
  6. docs/en_US/api/cli/resources/arch.png (+0 −0)
  7. docs/en_US/operation/cli/rules.md (+1 −1)
  8. docs/en_US/api/cli/ruleset.md (+0 −0)
  9. docs/en_US/api/cli/schemas.md (+0 −0)
  10. docs/en_US/api/cli/streams.md (+0 −0)
  11. docs/en_US/api/cli/tables.md (+0 −0)
  12. docs/en_US/api/restapi/authentication.md (+0 −0)
  13. docs/en_US/api/restapi/overview.md (+0 −0)
  14. docs/en_US/api/restapi/plugins.md (+0 −0)
  15. docs/en_US/api/restapi/rules.md (+0 −0)
  16. docs/en_US/api/restapi/ruleset.md (+0 −0)
  17. docs/en_US/operation/restapi/schemas.md (+1 −1)
  18. docs/en_US/api/restapi/services.md (+0 −0)
  19. docs/en_US/api/restapi/streams.md (+0 −0)
  20. docs/en_US/api/restapi/tables.md (+0 −0)
  21. docs/en_US/api/restapi/uploads.md (+0 −0)
  22. docs/en_US/concepts/ekuiper.md (+16 −44)
  23. docs/en_US/concepts/rules.md (+1 −1)
  24. docs/en_US/concepts/sinks.md (+1 −1)
  25. docs/en_US/concepts/sources/overview.md (+1 −1)
  26. docs/en_US/configuration/configuration.md (+35 −0)
  27. docs/en_US/operation/config/configuration_file.md (+7 −7)
  28. docs/en_US/edgex/edgex_rule_engine_tutorial.md (+13 −13)
  29. docs/en_US/edgex/edgex_source_tutorial.md (+2 −2)
  30. docs/en_US/extension/external/external_func.md (+2 −2)
  31. docs/en_US/extension/native/develop/function.md (+2 −2)
  32. docs/en_US/extension/native/develop/overview.md (+6 −6)
  33. docs/en_US/extension/native/develop/plugins_tutorial.md (+1 −1)
  34. docs/en_US/extension/native/develop/sink.md (+5 −5)
  35. docs/en_US/extension/native/develop/source.md (+3 −3)
  36. docs/en_US/extension/native/overview.md (+3 −3)
  37. docs/en_US/extension/portable/overview.md (+1 −1)
  38. docs/en_US/extension/portable/python_sdk.md (+1 −1)
  39. docs/en_US/extension/wasm/overview.md (+150 −0)
  40. docs/en_US/getting_started.md (+0 −205)
  41. docs/en_US/getting_started/getting_started.md (+216 −0)
  42. docs/en_US/quick_start_docker.md (+1 −1)
  43. docs/en_US/tutorials/ai/python_tensorflow_lite_tutorial.md (+1 −1)
  44. docs/en_US/guide/ai/tensorflow_lite_tutorial.md (+0 −0)
  45. docs/en_US/rules/graph_rule.md (+4 −4)
  46. docs/en_US/guide/rules/overview.md (+165 −0)
  47. docs/en_US/rules/rule_pipeline.md (+1 −1)
  48. docs/en_US/rules/state_and_fault_tolerance.md (+1 −1)
  49. docs/en_US/tutorials/usage/protobuf_tutorial.md (+4 −4)
  50. docs/en_US/guide/serialization/resources/action_mqtt.png (+0 −0)
  51. docs/en_US/guide/serialization/resources/action_protobuf.png (+0 −0)
  52. docs/en_US/guide/serialization/resources/create_detail.png (+0 −0)
  53. docs/en_US/guide/serialization/resources/create_json_stream.png (+0 −0)
  54. docs/en_US/guide/serialization/resources/create_proto_stream.png (+0 −0)
  55. docs/en_US/guide/serialization/resources/create_schema.png (+0 −0)
  56. docs/en_US/guide/serialization/resources/list_schema.png (+0 −0)
  57. docs/en_US/guide/serialization/resources/proto_src_rule.png (+0 −0)
  58. docs/en_US/guide/serialization/resources/receive_json.png (+0 −0)
  59. docs/en_US/guide/serialization/resources/receive_protobuf.png (+0 −0)
  60. docs/en_US/guide/serialization/resources/source_send.png (+0 −0)
  61. docs/en_US/rules/codecs.md (+3 −3)
  62. docs/en_US/guide/sinks/builtin/edgex.md (+0 −0)
  63. docs/en_US/guide/sinks/builtin/log.md (+0 −0)
  64. docs/en_US/rules/sinks/builtin/memory.md (+1 −1)
  65. docs/en_US/guide/sinks/builtin/mqtt.md (+0 −0)
  66. docs/en_US/guide/sinks/builtin/neuron.md (+0 −0)
  67. docs/en_US/guide/sinks/builtin/nop.md (+0 −0)
  68. docs/en_US/guide/sinks/builtin/redis.md (+0 −0)
  69. docs/en_US/rules/sinks/builtin/rest.md (+1 −1)
  70. docs/en_US/rules/data_template.md (+80 −8)
  71. docs/en_US/guide/sinks/overview.md (+195 −0)
  72. docs/en_US/guide/sinks/plugin/file.md (+0 −0)
  73. docs/en_US/guide/sinks/plugin/image.md (+0 −0)
  74. docs/en_US/guide/sinks/plugin/influx.md (+0 −0)
  75. docs/en_US/guide/sinks/plugin/influx2.md (+0 −0)
  76. docs/en_US/guide/sinks/plugin/sql.md (+0 −0)
  77. docs/en_US/rules/sinks/plugin/tdengine.md (+2 −2)
  78. docs/en_US/guide/sinks/plugin/zmq.md (+0 −0)
  79. docs/en_US/guide/sources/builtin/edgex.md (+0 −0)
  80. docs/en_US/rules/sources/builtin/file.md (+1 −1)
  81. docs/en_US/guide/sources/builtin/http_pull.md (+0 −0)
  82. docs/en_US/guide/sources/builtin/http_push.md (+0 −0)
  83. docs/en_US/rules/sources/builtin/memory.md (+2 −2)
  84. docs/en_US/guide/sources/builtin/mqtt.md (+0 −0)
  85. docs/en_US/guide/sources/builtin/neuron.md (+0 −0)
  86. docs/en_US/rules/sources/builtin/redis.md (+1 −1)
  87. docs/en_US/rules/sources/overview.md (+11 −2)
  88. docs/en_US/guide/sources/plugin/random.md (+0 −0)
  89. docs/en_US/guide/sources/plugin/sql.md (+0 −0)
  90. docs/en_US/guide/sources/plugin/video.md (+0 −0)
  91. docs/en_US/guide/sources/plugin/zmq.md (+0 −0)
  92. docs/en_US/guide/streams/overview.md (+148 −0)
  93. docs/en_US/guide/tables/install_sql_source.png (+0 −0)
  94. docs/en_US/tutorials/table/lookup.md (+2 −2)
  95. docs/en_US/guide/tables/overview.md (+54 −0)
  96. docs/en_US/guide/tables/scan.md (+52 −0)
  97. docs/en_US/installation.md (+230 −0)
  98. docs/en_US/integrations/deploy/add_service.png (+0 −0)
  99. docs/en_US/integrations/deploy/ekuiper_openyurt.png (+0 −0)
  100. docs/en_US/tutorials/deploy/kmanager.yaml (+0 −0)

+ 3 - 3
README.md

@@ -43,7 +43,7 @@ eKuiper processing at the edge can greatly reduce system response latency, save
   - 60+ functions, includes mathematical, string, aggregate and hash etc
   - 4 time windows & count window
 
-- Highly extensibile 
+- Highly extensible 
 
   It supports to extend at `Source`, `Functions` and `Sink` with Golang or Python.
 
@@ -55,11 +55,11 @@ eKuiper processing at the edge can greatly reduce system response latency, save
 
   - [A free web based management dashboard](https://hub.docker.com/r/emqx/ekuiper-manager) for visualized management
   - Plugins, streams and rules management through CLI, REST API and config maps(Kubernetes)
-  - Easily be integrate with Kubernetes framworks [KubeEdge](https://github.com/kubeedge/kubeedge), [OpenYurt](https://openyurt.io/), [K3s](https://github.com/rancher/k3s) [Baetyl](https://github.com/baetyl/baetyl)
+  - Easily be integrated with Kubernetes framworks [KubeEdge](https://github.com/kubeedge/kubeedge), [OpenYurt](https://openyurt.io/), [K3s](https://github.com/rancher/k3s) [Baetyl](https://github.com/baetyl/baetyl)
 
 - Integration with EMQX products
 
-  Seamless integration with [EMQX](https://www.emqx.io/), [Neuron](https://neugates.io/) & [NanoMQ](https://nanomq.io/), and provided an end to end solution from IIoT, IoV 
+  Seamless integration with [EMQX](https://www.emqx.io/), [Neuron](https://neugates.io/) & [NanoMQ](https://nanomq.io/), and provided an end-to-end solution from IIoT, IoV 
 
 ## Quick start
 

+ 362 - 284
docs/directory.json

@@ -586,19 +586,19 @@
 	],
 	"en": [
 		{
-			"title": "Home",
+			"title": "Introduction",
 			"path": "./"
 		},
 		{
 			"title": "Getting Started",
 			"children": [
 				{
-					"title": "Run eKuiper locally",
-					"path": "getting_started"
+					"title": "Getting Started",
+					"path": "getting_started/getting_started"
 				},
 				{
-					"title": "Run eKuiper in Docker",
-					"path": "quick_start_docker"
+					"title": "5 Minutes Quick Start",
+					"path": "getting_started/quick_start_docker"
 				},
 				{
 					"title": "Run eKuiper with EdgeX Foundry",
@@ -607,403 +607,478 @@
 			]
 		},
 		{
-			"title": "Concepts",
+			"title": "Installation",
 			"children": [
 				{
-					"title": "Why eKuiper",
-					"path": "concepts/ekuiper"
-				},
-				{
-					"title": "Rules",
-					"path": "concepts/rules"
-				},
-				{
-					"title": "Sources",
-					"children": [
-						{
-							"title": "Overview",
-							"path": "concepts/sources/overview"
-						},
-						{
-							"title": "Stream",
-							"path": "concepts/sources/stream"
-						},
-						{
-							"title": "Table",
-							"path": "concepts/sources/table"
-						}
-					]
-				},
-				{
-					"title": "Sinks",
-					"path": "concepts/sinks"
-				},
-				{
-					"title": "SQL Queries",
-					"path": "concepts/sql"
-				},
+					"title": "Installation",
+					"path": "installation"
+				}
+			]
+		},
+		{
+			"title": "Configuration",
+			"children": [
 				{
-					"title": "Stream Processing",
-					"children": [
-						{
-							"title": "Overview",
-							"path": "concepts/streaming/overview"
-						},
-						{
-							"title": "Time Attribute",
-							"path": "concepts/streaming/time"
-						},
-						{
-							"title": "Windowing",
-							"path": "concepts/streaming/windowing"
-						},
-						{
-							"title": "Join",
-							"path": "concepts/streaming/join"
-						}
-					]
+					"title": "Configuration",
+					"path": "configuration/configuration"
 				},
 				{
-					"title": "Extensions",
-					"path": "concepts/extensions"
+					"title": "Global Configurations",
+					"path": "configuration/global_configurations"
 				}
 			]
 		},
 		{
-			"title": "Tutorials",
+			"title": "User Guide",
 			"children": [
 				{
-					"title": "AI",
+					"title": "Stream: unbounded series of events",
 					"children": [
 						{
-							"title": "Label image by tensorflow lite model with eKuiper native plugin",
-							"path": "tutorials/ai/tensorflow_lite_tutorial"
-						},
-						{
-							"title": "Running AI Algorithms with Python Function Plugins",
-							"path": "tutorials/ai/python_tensorflow_lite_tutorial"
+							"title": "Overview",
+							"path": "guide/streams/overview"
 						}
 					]
 				},
 				{
-					"title": "Working with EdgeX Foundry",
+					"title": "Table: snapshot of events",
 					"children": [
 						{
-							"title": "EdgeX Foundry rule engine tutorial",
-							"path": "edgex/edgex_rule_engine_tutorial"
-						},
-						{
-							"title": "Meta function for EdgeX stream",
-							"path": "edgex/edgex_meta"
+							"title": "Overview",
+							"path": "guide/tables/overview"
 						},
 						{
-							"title": "Command device with EdgeX eKuiper rules engine",
-							"path": "edgex/edgex_rule_engine_command"
+							"title": "Scan Table",
+							"path": "guide/tables/scan"
 						},
 						{
-							"title": "EdgeX source configuration command",
-							"path": "edgex/edgex_source_tutorial"
+							"title": "Lookup Table",
+							"path": "guide/tables/lookup"
 						}
 					]
 				},
 				{
-					"title": "Stream processing of data collected by Neuron using eKuiper",
-					"path": "tutorials/neuron/neuron_integration_tutorial"
-				},
-				{
-					"title": "Deploy by OpenYurt",
-					"path": "tutorials/deploy/openyurt_tutorial"
-				},
-				{
-					"title": "Usage Tutorial",
+					"title": "Rule: query and action",
 					"children": [
 						{
-							"title": "Stream Batch Integrated Calculation",
-							"path": "tutorials/table/lookup"
-						},
-						{
-							"title": "Protobuf Codec Tutorial",
-							"path": "tutorials/usage/protobuf_tutorial"
+							"title": "Rules",
+							"path": "guide/rules/overview"
 						},
 						{
-							"title": "Monitor rule status with Prometheus",
-							"path": "tutorials/usage/monitor_with_prometheus"
-						}
-					]
-				}
-			]
-		},
-		{
-			"title": "References",
-			"children": [
-				{
-					"title": "Rules",
-					"children": [
-						{
-							"title": "Introduction",
-							"path": "rules/overview"
+							"title": "Graph Rule",
+							"path": "guide/rules/graph_rule"
 						},
 						{
 							"title": "Rule Pipeline",
-							"path": "rules/rule_pipeline"
+							"path": "guide/rules/rule_pipeline"
 						},
 						{
 							"title": "State and Fault Tolerance",
-							"path": "rules/state_and_fault_tolerance"
-						},{
-							"title": "Codecs",
-							"path": "rules/codecs"
-						},{
-							"title": "Graph Rule",
-							"path": "rules/graph_rule"
+							"path": "guide/rules/state_and_fault_tolerance"
 						}
 					]
 				},
 				{
-					"title": "Sources",
+					"title": "Source Connectors",
 					"children": [
 						{
 							"title": "Overview",
-							"path": "rules/sources/overview"
+							"path": "guide/sources/overview"
 						},
 						{
-							"title": "Built-in sources",
+							"title": "Built-in Sources",
 							"children": [
 								{
-									"title": "MQTT source",
-									"path": "rules/sources/builtin/mqtt"
+									"title": "MQTT Source",
+									"path": "guide/sources/builtin/mqtt"
 								},
 								{
-									"title": "Neuron source",
-									"path": "rules/sources/builtin/neuron"
+									"title": "Neuron Source",
+									"path": "guide/sources/builtin/neuron"
 								},
 								{
 									"title": "EdgeX Source",
-									"path": "rules/sources/builtin/edgex"
+									"path": "guide/sources/builtin/edgex"
 								},
 								{
-									"title": "HTTP pull source",
-									"path": "rules/sources/builtin/http_pull"
+									"title": "HTTP Pull Source",
+									"path": "guide/sources/builtin/http_pull"
 								},
 								{
-									"title": "HTTP push source",
-									"path": "rules/sources/builtin/http_push"
+									"title": "HTTP Push Source",
+									"path": "guide/sources/builtin/http_push"
 								},
 								{
-									"title": "Memory source",
-									"path": "rules/sources/builtin/memory"
+									"title": "Memory Source",
+									"path": "guide/sources/builtin/memory"
 								},
 								{
-									"title": "File source",
-									"path": "rules/sources/builtin/file"
+									"title": "File Source",
+									"path": "guide/sources/builtin/file"
 								},
 								{
-									"title": "Redis source",
-									"path": "rules/sources/builtin/redis"
+									"title": "Redis Source",
+									"path": "guide/sources/builtin/redis"
 								}
 							]
 						},
 						{
-							"title": "Predefined source plugins",
+							"title": "Predefined Source Plugins",
 							"children": [
 								{
-									"title": "Zero MQ source",
-									"path": "rules/sources/plugin/zmq"
+									"title": "Zero MQ Source",
+									"path": "guide/sources/plugin/zmq"
 								},
 								{
-									"title": "SQL source",
-									"path": "rules/sources/plugin/sql"
+									"title": "SQL Source",
+									"path": "guide/sources/plugin/sql"
 								},
 								{
-									"title": "Random source",
-									"path": "rules/sources/plugin/random"
+									"title": "Random Source",
+									"path": "guide/sources/plugin/random"
 								},
 								{
-									"title": "Video source",
-									"path": "rules/sources/plugin/video"
+									"title": "Video Source",
+									"path": "guide/sources/plugin/video"
 								}
 							]
 						}
 					]
 				},
 				{
-					"title": "Sinks",
+					"title": "Sink Connectors",
 					"children": [
 						{
 							"title": "Overview",
-							"path": "rules/sinks/overview"
+							"path": "guide/sinks/overview"
 						},
 						{
 							"title": "Data Template",
-							"path": "rules/data_template"
+							"path": "guide/sinks/data_template"
 						},
 						{
-							"title": "Built-in sinks",
+							"title": "Built-in Sinks",
 							"children": [
 								{
-									"title": "MQTT action",
-									"path": "rules/sinks/builtin/mqtt"
+									"title": "MQTT Sink",
+									"path": "guide/sinks/builtin/mqtt"
 								},
 								{
-									"title": "Neuron action",
-									"path": "rules/sinks/builtin/neuron"
+									"title": "Neuron Sink",
+									"path": "guide/sinks/builtin/neuron"
 								},
 								{
-									"title": "EdgeX Message Bus action",
-									"path": "rules/sinks/builtin/edgex"
+									"title": "EdgeX Sink",
+									"path": "guide/sinks/builtin/edgex"
 								},
 								{
-									"title": "REST action",
-									"path": "rules/sinks/builtin/rest"
+									"title": "REST Sink",
+									"path": "guide/sinks/builtin/rest"
 								},
 								{
-									"title": "Memory action",
-									"path": "rules/sinks/builtin/memory"
+									"title": "Memory Sink",
+									"path": "guide/sinks/builtin/memory"
 								},
 								{
-									"title": "Log action",
-									"path": "rules/sinks/builtin/log"
+									"title": "Log Sink",
+									"path": "guide/sinks/builtin/log"
 								},
 								{
-									"title": "Nop action",
-									"path": "rules/sinks/builtin/nop"
+									"title": "Nop Sink",
+									"path": "guide/sinks/builtin/nop"
 								},
 								{
-									"title": "Redis sink",
-									"path": "rules/sinks/builtin/redis"
+									"title": "Redis Sink",
+									"path": "guide/sinks/builtin/redis"
 								}
 							]
 						},
 						{
-							"title": "Predefined sink plugins",
+							"title": "Predefined Sink Plugins",
 							"children": [
 								{
-									"title": "Zero MQ sink",
-									"path": "rules/sinks/plugin/zmq"
+									"title": "ZeroMQ Sink",
+									"path": "guide/sinks/plugin/zmq"
 								},
 								{
-									"title": "File sink",
-									"path": "rules/sinks/plugin/file"
+									"title": "File Sink",
+									"path": "guide/sinks/plugin/file"
 								},
 								{
-									"title": "SQL sink",
-									"path": "rules/sinks/plugin/sql"
+									"title": "SQL Sink",
+									"path": "guide/sinks/plugin/sql"
 								},
 								{
-									"title": "InfluxDB sink",
-									"path": "rules/sinks/plugin/influx"
+									"title": "InfluxDB Sink",
+									"path": "guide/sinks/plugin/influx"
 								},
 								{
-									"title": "InfluxDBV2 sink",
-									"path": "rules/sinks/plugin/influx2"
+									"title": "InfluxDBV2 Sink",
+									"path": "guide/sinks/plugin/influx2"
 								},
 								{
-									"title": "TDengine sink",
-									"path": "rules/sinks/plugin/tdengine"
+									"title": "TDengine Sink",
+									"path": "guide/sinks/plugin/tdengine"
 								},
 								{
-									"title": "Image sink",
-									"path": "rules/sinks/plugin/image"
+									"title": "Image Sink",
+									"path": "guide/sinks/plugin/image"
 								}
 							]
 						}
 					]
 				},
 				{
-					"title": "SQL",
+					"title": "Serialization",
+
+					"children": [
+						{
+							"title": "Overview",
+							"path": "guide/serialization/serialization"
+						},
+						{
+							"title": "Protobuf Codec Tutorial",
+							"path": "guide/serialization/protobuf_tutorial"
+						}
+					]
+				},
+				{
+					"title": "AI/ML",
+					"children": [
+						{
+							"title": "Running AI Algorithms with Native Plugin",
+							"path": "guide/ai/tensorflow_lite_tutorial"
+						},
+						{
+							"title": "Running AI Algorithms with Python Function Plugin",
+							"path": "guide/ai/python_tensorflow_lite_tutorial"
+						}
+					]
+				}
+			]
+		},
+		{
+			"title": "Admin Guide",
+			"children": [
+				{
+					"title": "Installation",
+					"path": "installation"
+				},
+				{
+					"title": "Configuration",
+					"children": [
+						{
+							"title": "Configuration",
+							"path": "configuration/configuration"
+						},
+						{
+							"title": "Global Configurations",
+							"path": "configuration/global_configurations"
+						}
+					]
+				},
+				{
+					"title": "Management Console",
+					"children": [
+						{
+							"title": "Introduction",
+							"path": "operation/manager-ui/overview"
+						},
+						{
+							"title": "Plugin Management",
+							"path": "operation/manager-ui/plugins_in_manager"
+						}
+					]
+				},
+				{
+					"title": "Compilation",
+					"children": [
+						{
+							"title": "Compile",
+							"path": "operation/compile/compile"
+						},
+						{
+							"title": "Cross Compile",
+							"path": "operation/compile/cross-compile"
+						},
+						{
+							"title": "Compile selected features only",
+							"path": "operation/compile/features"
+						}
+					]
+				},
+				{
+					"title": "Monitor",
+					"children": [
+						{
+							"title": "Monitor with Prometheus",
+							"path": "operation/usage/monitor_with_prometheus"
+						}
+					]
+				}
+			]
+		},
+		{
+			"title": "Use Cases",
+			"children": [
+				{
+					"title": "IIoT case",
+					"path": "usecases/iiot"
+				},
+				{
+					"title": "IoV case",
+					"path": "usecases/iov"
+				}
+			]
+		},
+		{
+			"title": "Integrations",
+			"children": [
+				{
+					"title": "Edge Cloud Collaboration",
+					"path": "integrations/edge_cloud/overview"
+				},
+				{
+					"title": "Working with EdgeX Foundry",
+					"children": [
+						{
+							"title": "EdgeX Foundry Rule Engine Tutorial",
+							"path": "edgex/edgex_rule_engine_tutorial"
+						},
+						{
+							"title": "Meta Function for EdgeX Stream",
+							"path": "edgex/edgex_meta"
+						},
+						{
+							"title": "Command EdgeX Device",
+							"path": "edgex/edgex_rule_engine_command"
+						},
+						{
+							"title": "EdgeX Source Configurations",
+							"path": "edgex/edgex_source_tutorial"
+						}
+					]
+				},
+				{
+					"title": "Processing Data Collected by Neuron ",
+					"path": "integrations/neuron/neuron_integration_tutorial"
+				},
+				{
+					"title": "Analytic Engine for KubeEdge",
+					"path": "integrations/kubeedge/overview"
+				},
+				{
+					"title": "Deploy by OpenYurt",
+					"path": "integrations/deploy/openyurt_tutorial"
+				}
+			]
+		},
+		{
+			"title": "Architecture Design",
+			"children": [
+				{
+					"title": "Architecture",
+					"path": "concepts/ekuiper"
+				},
+				{
+					"title": "Key Concepts",
 					"children": [
 						{
-							"title": "Syntax",
+							"title": "Rules",
+							"path": "concepts/rules"
+						},
+						{
+							"title": "Sources",
 							"children": [
 								{
-									"title": "Introduction",
-									"path": "sqls/overview"
+									"title": "Overview",
+									"path": "concepts/sources/overview"
 								},
 								{
-									"title": "Lexical elements",
-									"path": "sqls/lexical_elements"
-								},
-								{
-									"title": "Data types",
-									"path": "sqls/data_types"
+									"title": "Stream",
+									"path": "concepts/sources/stream"
 								},
 								{
-									"title": "JSON Expressions",
-									"path": "sqls/json_expr"
-								},
-								{
-									"title": "Query language element",
-									"path": "sqls/query_language_elements"
+									"title": "Table",
+									"path": "concepts/sources/table"
 								}
 							]
 						},
 						{
-							"title": "Statements",
-							"children": [
-								{
-									"title": "Streams",
-									"path": "sqls/streams"
-								},
-								{
-									"title": "Tables",
-									"path": "sqls/tables"
-								}
-							]
+							"title": "Sinks",
+							"path": "concepts/sinks"
 						},
 						{
-							"title": "Windows",
-							"path": "sqls/windows"
+							"title": "SQL Queries",
+							"path": "concepts/sql"
 						},
 						{
-							"title": "Built-in Functions",
-							"path": "sqls/built-in_functions"
+							"title": "Extensions",
+							"path": "concepts/extensions"
+						}
+					]
+				},
+				{
+					"title": "Stream Processing",
+					"children": [
+						{
+							"title": "Overview",
+							"path": "concepts/streaming/overview"
 						},
 						{
-							"title": "Predefined function plugins",
-							"path": "sqls/custom_functions"
+							"title": "Time Attribute",
+							"path": "concepts/streaming/time"
+						},
+						{
+							"title": "Windowing",
+							"path": "concepts/streaming/windowing"
+						},
+						{
+							"title": "Join",
+							"path": "concepts/streaming/join"
 						}
 					]
 				}
 			]
 		},
 		{
-			"title": "Extension Programming",
+			"title": "Developer Guide",
 			"children": [
 				{
 					"title": "Introduction",
 					"path": "extension/overview"
 				},
 				{
-					"title": "Native plugin develop",
+					"title": "Native Plugin Develop",
 					"children": [
 						{
-							"title": "Native plugin develop overview",
+							"title": "Overview",
 							"path": "extension/native/overview"
 						},
 						{
-							"title": "Native plugins overview",
+							"title": "Develop",
 							"path": "extension/native/develop/overview"
 						},
 						{
-							"title": "Plugin develop tutorial",
+							"title": "Plugin Develop Tutorial",
 							"path": "extension/native/develop/plugins_tutorial"
 						},
 						{
-							"title": "Function plugin",
+							"title": "Function Plugin",
 							"path": "extension/native/develop/function"
 						},
 						{
-							"title": "Sink plugin",
+							"title": "Sink Plugin",
 							"path": "extension/native/develop/sink"
 						},
 						{
-							"title": "Source plugin",
+							"title": "Source Plugin",
 							"path": "extension/native/develop/source"
 						}
 					]
 				},
 				{
-					"title": "Portable plugin develop",
+					"title": "Portable Plugin Development",
 					"children": [
 						{
 							"title": "Portable Plugin",
@@ -1020,145 +1095,148 @@
 					]
 				},
 				{
-					"title": "External function",
+					"title": "External Function",
 					"path": "extension/external/external_func"
+				},
+				{
+					"title": "Wasm Function",
+					"path": "extension/wasm/overview"
 				}
 			]
 		},
 		{
-			"title": "Operations",
+			"title": "SQL Reference",
 			"children": [
 				{
-					"title": "Introduction",
-					"path": "operation/overview"
-				},
-				{
-					"title": "Install",
+					"title": "Statements",
 					"children": [
 						{
-							"title": "overview",
-							"path": "operation/install/overview"
+							"title": "Streams",
+							"path": "sqls/streams"
 						},
 						{
-							"title": "centos",
-							"path": "operation/install/cent-os"
+							"title": "Tables",
+							"path": "sqls/tables"
+						},
+						{
+							"title": "Query",
+							"path": "sqls/query_language_elements"
 						}
 					]
 				},
 				{
-					"title": "Configuration",
+					"title": "Syntax",
 					"children": [
 						{
-							"title": "Configuration File",
-							"path": "operation/config/configuration_file"
+							"title": "Introduction",
+							"path": "sqls/overview"
 						},
 						{
-							"title": "Authentication",
-							"path": "operation/config/authentication"
+							"title": "Lexical elements",
+							"path": "sqls/lexical_elements"
+						},
+						{
+							"title": "Data types",
+							"path": "sqls/data_types"
+						},
+						{
+							"title": "JSON Expressions",
+							"path": "sqls/json_expr"
 						}
 					]
 				},
 				{
+					"title": "Built-in Functions",
+					"path": "sqls/built-in_functions"
+				},
+				{
+					"title": "Predefined function plugins",
+					"path": "sqls/custom_functions"
+				},
+				{
+					"title": "Windowing",
+					"path": "sqls/windows"
+				}
+			]
+		},
+		{
+			"title": "API Reference",
+			"children": [
+				{
 					"title": "Rest API",
 					"children": [
 						{
 							"title": "Introduction",
-							"path": "operation/restapi/overview"
+							"path": "api/restapi/overview"
+						},
+						{
+							"title": "Authentication",
+							"path": "api/restapi/authentication"
 						},
 						{
 							"title": "Streams",
-							"path": "operation/restapi/streams"
+							"path": "api/restapi/streams"
 						},
 						{
 							"title": "Tables",
-							"path": "operation/restapi/tables"
+							"path": "api/restapi/tables"
 						},
 						{
 							"title": "Rules",
-							"path": "operation/restapi/rules"
+							"path": "api/restapi/rules"
 						},
 						{
 							"title": "Plugins",
-							"path": "operation/restapi/plugins"
+							"path": "api/restapi/plugins"
 						},
 						{
 							"title": "External Services",
-							"path": "operation/restapi/services"
+							"path": "api/restapi/services"
 						},
 						{
 							"title": "Schemas",
-							"path": "operation/restapi/schemas"
+							"path": "api/restapi/schemas"
 						},
 						{
 							"title": "Upload files",
-							"path": "operation/restapi/uploads"
+							"path": "api/restapi/uploads"
 						},
 						{
 							"title": "Ruleset",
-							"path": "operation/restapi/ruleset"
+							"path": "api/restapi/ruleset"
 						}
 					]
 				},
 				{
-					"title": "Command line tool",
+					"title": "Command Line Tool",
 					"children": [
 						{
 							"title": "Introduction",
-							"path": "operation/cli/overview"
+							"path": "api/cli/overview"
 						},
 						{
 							"title": "Streams",
-							"path": "operation/cli/streams"
+							"path": "api/cli/streams"
 						},
 						{
 							"title": "Rules",
-							"path": "operation/cli/rules"
+							"path": "api/cli/rules"
 						},
 						{
 							"title": "Tables",
-							"path": "operation/cli/tables"
+							"path": "api/cli/tables"
 						},
 						{
 							"title": "Plugins",
-							"path": "operation/cli/plugins"
+							"path": "api/cli/plugins"
 						},
 						{
 							"title": "Schemas",
-							"path": "operation/cli/schemas"
+							"path": "api/cli/schemas"
 						},
 						{
 							"title": "Ruleset",
-							"path": "operation/cli/ruleset"
-						}
-					]
-				},
-				{
-					"title": "Management console",
-					"children": [
-						{
-							"title": "Introduction",
-							"path": "operation/manager-ui/overview"
-						},
-						{
-							"title": "How to display custom plugins in the installation list of the management console",
-							"path": "operation/manager-ui/plugins_in_manager"
-						}
-					]
-				},
-				{
-					"title": "Compile",
-					"children": [
-						{
-							"title": "Compile",
-							"path": "operation/compile/compile"
-						},
-						{
-							"title": "Cross Compile",
-							"path": "operation/compile/cross-compile"
-						},
-						{
-							"title": "Compile selected features only",
-							"path": "features"
+							"path": "api/cli/ruleset"
 						}
 					]
 				}

File diff suppressed because it is too large
+ 49 - 16
docs/en_US/README.md


docs/en_US/operation/cli/overview.md → docs/en_US/api/cli/overview.md


docs/en_US/operation/cli/plugins.md → docs/en_US/api/cli/plugins.md


docs/en_US/operation/cli/resources/arch.png → docs/en_US/api/cli/resources/arch.png


+ 1 - 1
docs/en_US/operation/cli/rules.md

@@ -4,7 +4,7 @@ The eKuiper rule command line tools allows you to manage rules, such as create,
 
 ## create a rule
 
-The command is used for creating a rule.  The rule's definition is specified with JSON format, read [rule](../../rules/overview.md) for more detailed information.
+The command is used for creating a rule.  The rule's definition is specified with JSON format, read [rule](../../guide/rules/overview.md) for more detailed information.
 
 ```shell
 create rule $rule_name '$rule_json' | create rule $rule_name -f $rule_def_file
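```

For reference, the `$rule_json` argument above is an eKuiper rule definition. A minimal sketch (the stream name `demo`, the filter condition, and the rule id are illustrative assumptions, not taken from this commit):

```json
{
  "id": "rule1",
  "sql": "SELECT temperature FROM demo WHERE temperature > 30",
  "actions": [
    {
      "log": {}
    }
  ]
}
```

Saved to a file, it could then be submitted via the file form of the command, e.g. `bin/kuiper create rule rule1 -f myRule.json`.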

docs/en_US/operation/cli/ruleset.md → docs/en_US/api/cli/ruleset.md


docs/en_US/operation/cli/schemas.md → docs/en_US/api/cli/schemas.md


docs/en_US/operation/cli/streams.md → docs/en_US/api/cli/streams.md


docs/en_US/operation/cli/tables.md → docs/en_US/api/cli/tables.md


docs/en_US/operation/config/authentication.md → docs/en_US/api/restapi/authentication.md


docs/en_US/operation/restapi/overview.md → docs/en_US/api/restapi/overview.md


docs/en_US/operation/restapi/plugins.md → docs/en_US/api/restapi/plugins.md


docs/en_US/operation/restapi/rules.md → docs/en_US/api/restapi/rules.md


docs/en_US/operation/restapi/ruleset.md → docs/en_US/api/restapi/ruleset.md


+ 1 - 1
docs/en_US/operation/restapi/schemas.md

@@ -43,7 +43,7 @@ Schema with static plugin:
 2. schema content, use `file` or `content` parameter to specify. After schema created, the schema content will be written into file `data/schemas/$shcema_type/$schema_name`.
    - file: the url of the schema file. The url can be `http` or `https` scheme or `file` scheme to refer to a local file path of the eKuiper server. The schema file must be the file type of the corresponding schema type. For example, protobuf schema file's extension name must be .proto.
    - content: the text content of the schema.
-3. soFile:The so file of the static plugin. Detail about the plugin creation, please check [customize format](../../rules/codecs.md#format-extension).
+3. soFile: the .so file of the static plugin. For details about the plugin creation, please check [customize format](../../guide/serialization/serialization.md#format-extension).
 
 ## Show schemas
 

docs/en_US/operation/restapi/services.md → docs/en_US/api/restapi/services.md


docs/en_US/operation/restapi/streams.md → docs/en_US/api/restapi/streams.md


docs/en_US/operation/restapi/tables.md → docs/en_US/api/restapi/tables.md


docs/en_US/operation/restapi/uploads.md → docs/en_US/api/restapi/uploads.md


File diff suppressed because it is too large
+ 16 - 44
docs/en_US/concepts/ekuiper.md


+ 1 - 1
docs/en_US/concepts/rules.md

@@ -16,4 +16,4 @@ Multiple rules can form a processing pipeline by specifying a joint point in sin
 
 ## More Readings
 
-- [Rule Reference](../rules/overview.md)
+- [Rule Reference](../guide/rules/overview.md)

+ 1 - 1
docs/en_US/concepts/sinks.md

@@ -10,4 +10,4 @@ The sink result is a string as always. It will be encoded into json string by de
 
 ## More Readings
 
-- [Sink Reference](../rules/sinks/overview.md)
+- [Sink Reference](../guide/sinks/overview.md)

+ 1 - 1
docs/en_US/concepts/sources/overview.md

@@ -26,7 +26,7 @@ The source defines the external system connection. When using in a rule, users c
 
 ## More Readings
 
-- [Source Reference](../../rules/sources/overview.md)
+- [Source Reference](../../guide/sources/overview.md)
 
 
 

+ 35 - 0
docs/en_US/configuration/configuration.md

@@ -0,0 +1,35 @@
+# Configuration
+
+eKuiper configuration is based on yaml files and allows configuration by updating the files, environment variables and the REST API.
+
+## Configuration Scope
+
+eKuiper configurations include:
+
+1. `etc/kuiper.yaml`: the global configuration file. Changes to it require a restart of the eKuiper instance. Please refer to the [basic configuration file](./global_configurations.md) for details.
+2. `etc/sources/${source_name}.yaml`: the configuration file for each source to define its default properties (except the MQTT source, whose configuration file is `etc/mqtt_source.yaml`). Please refer to the doc of each source for details. For example, [MQTT source](../guide/sources/builtin/mqtt.md) and [Neuron source](../guide/sources/builtin/neuron.md) cover the configuration items.
+3. `etc/connections/connection.yaml`: the shared connection configuration file.
+
+## Configuration Methods
+
+Users can set the configuration through 3 methods, ordered by precedence:
+
+1. Management Console/REST API
+2. Environment variables
+3. Yaml files in etc folder.
+
+The yaml files are usually used to set up the default configurations. They can be used heavily when deploying on bare metal, where the user can access the file system easily.
+
+When deploying in docker or k8s, it is not easy to manipulate files, so a small amount of configurations can be set or overridden by environment variables. At runtime, end users will use the management console to change the configurations dynamically. The `Configuration` page in the eKuiper manager can help users modify the configurations visually.
+
+### Environment variable syntax
+
+There is a mapping from environment variables to the configuration yaml files. When modifying a configuration through environment variables, the variables need to be set according to the prescribed format, for example:
+
+```
+KUIPER__BASIC__DEBUG => basic.debug in etc/kuiper.yaml
+MQTT_SOURCE__DEMO_CONF__QOS => demo_conf.qos in etc/mqtt_source.yaml
+EDGEX__DEFAULT__PORT => default.port in etc/sources/edgex.yaml
+CONNECTION__EDGEX__REDISMSGBUS__PORT => edgex.redismsgbus.port in etc/connections/connection.yaml
+```
+
+The environment variables are separated by "__". The first segment after the separation matches the file name of the configuration file, and the remaining segments match the different levels of the configuration items. The file name could be `KUIPER` or `MQTT_SOURCE` in the `etc` folder, or `CONNECTION` in the `etc/connections` folder. Otherwise, the file should be in the `etc/sources` folder.
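The mapping rule described above can be sketched in shell. This is illustrative only; it mirrors the documented examples rather than eKuiper's actual parsing code:

```shell
# Sketch of the env var -> yaml path mapping described above.
# Mirrors the documented examples; not eKuiper's actual parser.
var="MQTT_SOURCE__DEMO_CONF__QOS"

# First "__"-separated segment -> configuration file name (lowercased)
file="$(printf '%s' "${var%%__*}" | tr '[:upper:]' '[:lower:]')"

# Remaining segments -> dot-separated yaml key path (lowercased)
key="$(printf '%s' "${var#*__}" | tr '[:upper:]' '[:lower:]' | sed 's/__/./g')"

echo "${file}.yaml -> ${key}"   # mqtt_source.yaml -> demo_conf.qos
```

Running it prints the file and key path that the example variable `MQTT_SOURCE__DEMO_CONF__QOS` resolves to, matching the table above.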

+ 7 - 7
docs/en_US/operation/config/configuration_file.md

@@ -70,7 +70,7 @@ The port for the rest api http server to listen to.
 The tls cert file path and key file path setting. If restTls is not set, the rest api server will listen on http. Otherwise, it will listen on https.
 
 ## authentication 
-eKuiper will check the `Token` for rest api when `authentication` option is true. please check this file for [more info](authentication.md).
+eKuiper will check the `Token` for rest api when `authentication` option is true. please check this file for [more info](../api/restapi/authentication.md).
 
 ```yaml
 basic:
@@ -92,7 +92,7 @@ The prometheus port can be the same as the eKuiper REST API port. If so, both se
 
 ## Pluginhosts Configuration
 
-The URL where hosts all of pre-build [native plugins](../../extension/native/overview.md). By default, it's at `packages.emqx.net`. 
+The URL where hosts all of pre-build [native plugins](../extension/native/overview.md). By default, it's at `packages.emqx.net`. 
 
 All plugins list as follows:
 
@@ -110,17 +110,17 @@ GET http://localhost:9081/plugins/sinks/prebuild
 GET http://localhost:9081/plugins/functions/prebuild
 ``` 
 
-After get the plugin info, users can try these plugins, [more info](../restapi/plugins.md) 
+After get the plugin info, users can try these plugins, [more info](../api/restapi/plugins.md) 
 
 **Note: only the official released debian based docker images support these operations**
 
 ## Rule configurations
 
-Configure the default properties of the rule option. All the configuration can be overridden in rule level. Check [rule options](../../rules/overview.md#options) for detail.
+Configure the default properties of the rule option. All the configuration can be overridden in rule level. Check [rule options](../guide/rules/overview.md#options) for detail.
 
 ## Sink configurations
 
-Configure the default properties of sink, currently mainly used to configure [cache policy](../../rules/sinks/overview.md#Caching). The same configuration options are available at the rules level to override these default configurations.
+Configure the default properties of sink, currently mainly used to configure [cache policy](../guide/sinks/overview.md#Caching). The same configuration options are available at the rules level to override these default configurations.
 
 ```yaml
   sink:
@@ -163,7 +163,7 @@ It has properties
 * connectionSelector - reuse the connection info defined in etc/connections/connection.yaml, mainly used for edgeX redis in secure mode
   * only applicable to redis connection information
   * the server, port and password in connection info will overwrite the host port and password above
-  * [more info](../../rules/sources/builtin/edgex.md#connectionselector)
+  * [more info](../guide/sources/builtin/edgex.md#connectionselector)
     
 
 ### Config
@@ -195,4 +195,4 @@ This section configures the portable plugin runtime.
 
 ## Ruleset Provision
 
-Support file based stream and rule provisioning on startup. Users can put a [ruleset](../restapi/ruleset.md#ruleset-format) file named `init.json` into `data` directory to initialize the ruleset. The ruleset will only be import on the first startup of eKuiper.
+Support file based stream and rule provisioning on startup. Users can put a [ruleset](../api/restapi/ruleset.md#ruleset-format) file named `init.json` into `data` directory to initialize the ruleset. The ruleset will only be import on the first startup of eKuiper.

+ 13 - 13
docs/en_US/edgex/edgex_rule_engine_tutorial.md

@@ -32,7 +32,7 @@ EdgeX uses [message bus](https://github.com/edgexfoundry/go-mod-messaging) to ex
   CREATE STREAM demo (temperature bigint) WITH (FORMAT="JSON"...)
   ```
 
-  However, data type definitions are already specified in the EdgeX events/readings and to improve the using experience, user are NOT necessary to specify data types when creating stream. For any data sending from message bus, it will be converted into [corresponding data types](../rules/sources/builtin/edgex.md).
+  However, data type definitions are already specified in the EdgeX events/readings, so to improve the user experience, users do NOT need to specify data types when creating a stream. Any data sent from the message bus will be converted into the [corresponding data types](../guide/sources/builtin/edgex.md).
 
 - An EdgeX message bus sink is extended to support send analysis result back to EdgeX Message Bus. User can also choose to send analysis result to RestAPI, eKuiper already supported it. 
 
@@ -94,7 +94,7 @@ add these in `environment` part and make sure the image is `1.4.0` or later.
   ```
   
 * `mqtt/zeromq` messageBus: adjust the parameters accordingly and specify the client credentials if have.
-  There is a `mqtt` message bus example, make sure the connection info exists in `etc/connections/connection.yaml`, for [more info](../rules/sources/builtin/edgex.md#connectionselector) please check this. 
+  Here is a `mqtt` message bus example; make sure the connection info exists in `etc/connections/connection.yaml` (please check [more info](../guide/sources/builtin/edgex.md#connectionselector)).
   ```yaml
   environment:
       CONNECTION__EDGEX__MQTTMSGBUS__PORT: 1883
@@ -105,7 +105,7 @@ add these in `environment` part and make sure the image is `1.4.0` or later.
       CONNECTION__EDGEX__MQTTMSGBUS__OPTIONAL__PASSWORD: password
       EDGEX__DEFAULT__CONNECTIONSELECTOR: edgex.mqttMsgBus
   ```
-After these modifications and eKuiper starts up, please read [this](../rules/sinks/builtin/edgex.md#connection-reuse-publish-example) to learn how to refer to the connection info
+After making these modifications, once eKuiper starts up, please read [this](../guide/sinks/builtin/edgex.md#connection-reuse-publish-example) to learn how to refer to the connection info.
 
 #### Use Redis as KV storage
 
@@ -151,7 +151,7 @@ curl -X POST \
 }'
 ```
 
-For other Rest APIs, please refer to [this doc](../operation/restapi/overview.md).
+For other Rest APIs, please refer to [this doc](../api/restapi/overview.md).
 
 #### Option 2: Use eKuiper CLI
 
@@ -167,7 +167,7 @@ Use following command to create a stream named `demo`.
 bin/kuiper create stream demo'() WITH (FORMAT="JSON", TYPE="edgex")'
 ```
 
-For other command line tools, please refer to [this doc](../operation/cli/overview.md).
+For other command line tools, please refer to [this doc](../api/cli/overview.md).
 
 ------
 
@@ -183,11 +183,11 @@ default:
 .....  
 ```
 
-For more detailed information of configuration file, please refer to [this doc](../rules/sources/builtin/edgex.md).
+For more detailed information of configuration file, please refer to [this doc](../guide/sources/builtin/edgex.md).
 
 ### Create a rule
 
-Let's create a rule that send result data to an MQTT broker, for detailed information of MQTT sink, please refer to [this link](../rules/sinks/builtin/mqtt.md).  Similar to create a stream, you can also choose REST or CLI to manage rules. 
+Let's create a rule that sends result data to an MQTT broker; for detailed information about the MQTT sink, please refer to [this link](../guide/sinks/builtin/mqtt.md). Similar to creating a stream, you can also choose REST or CLI to manage rules.
 
 So the below rule will get all of values from `event` topic. The sink result will 
 
@@ -251,7 +251,7 @@ Rule rule1 was created successfully, please use 'cli getstatus rule rule1' comma
 
 ------
 
-If you want to send analysis result to another sink, please refer to [other sinks](../rules/overview.md#sinksactions)
+If you want to send analysis result to another sink, please refer to [other sinks](../guide/rules/overview.md#sinksactions)
 that supported in eKuiper.
 
 Now you can also take a look at the log file under `log/stream.log`, or through command `docker logs edgex-kuiper `
@@ -296,7 +296,7 @@ $ mosquitto_sub -h broker.emqx.io -t result
 ...
 ```
 
-You can also type below command to look at the rule execution status. The corresponding REST API is also available for getting rule status, please check [related document](../operation/restapi/overview.md).
+You can also type the command below to look at the rule execution status. The corresponding REST API is also available for getting rule status, please check the [related document](../api/restapi/overview.md).
 
 ```shell
 # bin/kuiper getstatus rule rule1
@@ -335,15 +335,15 @@ In this tutorial,  we introduce a very simple use of EdgeX eKuiper rule engine.
 
 ### More Excecise 
 
-Current rule does not filter any data that are sent to eKuiper, so how to filter data?  Please [drop rule](../operation/cli/rules.md) and change the SQL in previous rule accordingly.  After update the rule file, and then deploy the rule again. Please monitor the `result` topic of MQTT broker, and please verify see if the rule works or not.
+The current rule does not filter any data that is sent to eKuiper, so how can data be filtered? Please [drop the rule](../api/cli/rules.md) and change the SQL in the previous rule accordingly. After updating the rule file, deploy the rule again. Monitor the `result` topic of the MQTT broker and verify whether the rule works.
 
 #### Extended Reading
 
 - Starting from eKuiper 0.9.1 version, [a visualized web UI](../operation/manager-ui/overview.md) is released with a separated Docker image. You can manage the streams, rules and plugins through web page. 
-- Read [EdgeX source](../rules/sources/builtin/edgex.md) for more detailed information of configurations and data type conversion.
+- Read [EdgeX source](../guide/sources/builtin/edgex.md) for more detailed information of configurations and data type conversion.
 - [How to use meta function to extract additional data from EdgeX message bus?](edgex_meta.md) There are some other information are sent along with device service, such as event created time, event id etc. If you want to use such metadata information in your SQL statements, please refer to this doc.
-- [Use Golang template to customize analaysis result in eKuiper](../rules/data_template.md) Before the analysis result is sent to different sinks, the data template can be used to make more processing. You can refer to this doc for more scenarios of using data templates.
-- [EdgeX message bus sink doc](../rules/sinks/builtin/edgex.md). The document describes how to use EdgeX message bus sink. If you'd like to have your analysis result be consumed by other EdgeX services, you can send analysis data with EdgeX data format through this sink, and other EdgeX services can subscribe new message bus exposed by eKuiper sink.
+- [Use Golang templates to customize analysis results in eKuiper](../guide/sinks/data_template.md) Before the analysis result is sent to different sinks, data templates can be used for further processing. You can refer to this doc for more scenarios of using data templates.
+- [EdgeX message bus sink doc](../guide/sinks/builtin/edgex.md). The document describes how to use the EdgeX message bus sink. If you'd like your analysis results to be consumed by other EdgeX services, you can send analysis data in EdgeX data format through this sink, and other EdgeX services can subscribe to the new message bus exposed by the eKuiper sink.
 - [eKuiper plugin development tutorial](../extension/native/develop/plugins_tutorial.md): eKuiper plugin is based on the plugin mechanism of Golang, users can build loosely-coupled plugin applications,  dynamic loading and binding when it is running. You can refer to this article if you're interested in eKuiper plugin development.
 
  If you want to explore more features of eKuiper, please refer to below resources.

+ 2 - 2
docs/en_US/edgex/edgex_source_tutorial.md

@@ -1,6 +1,6 @@
 # Configure the data flow from EdgeX to eKuiper
 
-Sources feed data into eKuiper from other systems such as EdgeX foundry which are defined as streams. [EdgeX source](../rules/sources/builtin/edgex.md) defines the properties to configure how the data feed into eKuiper from EdgeX. In this tutorial, we will demonstrate the various data flow from EdgeX to eKuiper and how to configure the source to adopt any kind of data flow.
+Sources feed data into eKuiper from other systems such as EdgeX Foundry; they are defined as streams. The [EdgeX source](../guide/sources/builtin/edgex.md) defines the properties that configure how data feeds into eKuiper from EdgeX. In this tutorial, we will demonstrate the various data flows from EdgeX to eKuiper and how to configure the source to adopt any kind of data flow.
 
 ## Typical Data Flow Model
 
@@ -15,7 +15,7 @@ Notice that, the EdgeX message bus receives messages from various service such a
 
 By default, the first kind of data flow is used which allow users to prepare (transformed, enriched, filtered, etc.) and groom (formatted, compressed, encrypted, etc.) before sending to the eKuiper rule engine. If users don't need to transform the data and would like to process the raw data in eKuiper to reduce the overhead, they can connect to the message bus directly.
 
-The full properties list of EdgeX source can be found [here](../rules/sources/builtin/edgex.md#global-configurations). There are two critical properties that define the connection model: `topic` and `messageType`. Let's explore how to configure them to adopt the connection models.
+The full properties list of EdgeX source can be found [here](../guide/sources/builtin/edgex.md#global-configurations). There are two critical properties that define the connection model: `topic` and `messageType`. Let's explore how to configure them to adopt the connection models.
 
 ## Connect to the App Service
 

+ 2 - 2
docs/en_US/extension/external/external_func.md

@@ -190,7 +190,7 @@ Thus, the google api proto files must be in the imported path. eKuiper already s
 
 In the external service configuration, there are 1 json file and at least 1 schema file(.proto) to define the function mapping. This will define a 3 layer mappings.
 
-1. eKuiper external service layer: it is defined by the file name of the json. It will be used as a key for the external service in the [REST API](../../operation/restapi/services.md) for the describe, delete and update of the service as a whole.
+1. eKuiper external service layer: it is defined by the file name of the json file. It will be used as a key for the external service in the [REST API](../../api/restapi/services.md) for describing, deleting and updating the service as a whole.
 2. Interface layer: it is defined in the `interfaces` section of the json file. This is a virtual layer to group functions with the same schemas so that the shared properties such as address, schema file can be specified only once.
 3. eKuiper function layer: it is defined in the proto file as `rpc`. Notice that, the proto rpcs must be defined under a service section in protobuf. There is no restriction for the name of proto service. The function name is the same as the rpc name in the proto by default. But the user can override the mapping name in the json files's interfaces -> functions section.
 
@@ -239,7 +239,7 @@ When eKuiper is started, it will read and register the external service configur
    ```
    Note: After eKuiper is started, it **cannot** automatically load the system by modifying the configuration file. If you need to update dynamically, please use the REST service.
 
-For dynamic registration and management of services, please refer to [External Service Management API](../../operation/restapi/services.md).
+For dynamic registration and management of services, please refer to [External Service Management API](../../api/restapi/services.md).
 
 ## Usage
 

+ 2 - 2
docs/en_US/extension/native/develop/function.md

@@ -64,8 +64,8 @@ go build -trimpath -modfile extensions.mod --buildmode=plugin -o plugins/functio
 
 eKuiper will load plugins in the plugin folders automatically. The auto loaded function plugin assumes there is a function named the same as the plugin name. If multiple functions are exported, users need to explicitly register them to make them available. There are two ways to register the functions.
 
-1. In development environment, we recommend to build plugin .so file directly into the plugin folder so that eKuiper can auto load it. Then call [CLI register functions command](../../../operation/cli/plugins.md#register-functions) or [REST register functions API](../../../operation/restapi/plugins.md#register-functions).
-2. In production environment, [package the plugin into zip file](plugins_tutorial.md#deployment), then call [CLI function plugin create command](../../../operation/cli/plugins.md#create-a-plugin) or [REST function plugin create API](../../../operation/restapi/plugins.md#create-a-plugin) with functions list specified.
+1. In the development environment, we recommend building the plugin .so file directly into the plugin folder so that eKuiper can auto load it. Then call the [CLI register functions command](../../../api/cli/plugins.md#register-functions) or [REST register functions API](../../../api/restapi/plugins.md#register-functions).
+2. In the production environment, [package the plugin into a zip file](plugins_tutorial.md#deployment), then call the [CLI function plugin create command](../../../api/cli/plugins.md#create-a-plugin) or [REST function plugin create API](../../../api/restapi/plugins.md#create-a-plugin) with the functions list specified.
 
 ## Usage
 

+ 6 - 6
docs/en_US/extension/native/develop/overview.md

@@ -12,8 +12,8 @@ Developers of eKuiper plugin can specify metadata files during the development p
 
 | Name                                              | Description                                                                 | Remarks                                                   |
 |---------------------------------------------------|-----------------------------------------------------------------------------|-----------------------------------------------------------|
-| [zmq](../../../rules/sources/plugin/zmq.md)       | The plugin listens to Zero Mq messages and sends them to the eKuiper stream | Sample of plugin, not available in production environment |
-| [random](../../../rules/sources/plugin/random.md) | The plugin generates messages according to the specified pattern            | Sample of plugin, not available in production environment |
+| [zmq](../../../guide/sources/plugin/zmq.md)       | The plugin listens to Zero Mq messages and sends them to the eKuiper stream | Sample of plugin, not available in production environment |
+| [random](../../../guide/sources/plugin/random.md) | The plugin generates messages according to the specified pattern            | Sample of plugin, not available in production environment |
 
 ### source metadata file format
 
@@ -159,10 +159,10 @@ The following is a sample of metadata file.
 
 | Name                                                | Description                                                      | Remarks                                                   |
 |-----------------------------------------------------|------------------------------------------------------------------|-----------------------------------------------------------|
-| [file](../../../rules/sinks/plugin/file.md)         | The plugin saves the analysis results to a specified file system | Sample of plugin, not available in production environment |
-| [zmq](../../../rules/sinks/plugin/zmq.md)           | The plugin sends the analysis results to the topic of Zero Mq    | Sample of plugin, not available in production environment |
-| [Influxdb](../../../rules/sinks/plugin/influx.md)   | The plugin sends the analysis results to InfluxDB                | Provided by [@smart33690](https://github.com/smart33690)  |
-| [TDengine](../../../rules/sinks/plugin/tdengine.md) | The plugin sends the analysis results to TDengine                |                                                           |
+| [file](../../../guide/sinks/plugin/file.md)         | The plugin saves the analysis results to a specified file system | Sample of plugin, not available in production environment |
+| [zmq](../../../guide/sinks/plugin/zmq.md)           | The plugin sends the analysis results to the topic of Zero Mq    | Sample of plugin, not available in production environment |
+| [Influxdb](../../../guide/sinks/plugin/influx.md)   | The plugin sends the analysis results to InfluxDB                | Provided by [@smart33690](https://github.com/smart33690)  |
+| [TDengine](../../../guide/sinks/plugin/tdengine.md) | The plugin sends the analysis results to TDengine                |                                                           |
 
 ### sink metadata file format
 

+ 1 - 1
docs/en_US/extension/native/develop/plugins_tutorial.md

@@ -342,7 +342,7 @@ Please refer [Docker compile](#Docker-compile) for the compilation process. The
 Users can use [REST API](https://github.com/lf-edge/ekuiper/blob/master/docs/en_US/restapi/plugins.md) or [CLI](https://github.com/lf-edge/ekuiper/blob/master/docs/en_US/cli/plugins.md) to manage plugins. The following takes the REST API as an example to deploy the plugin compiled in the previous step to the production environment. 
 
 1. Package the plugin and put it into the http server. Package the file `.so` of the plugin compiled in the previous step and the default configuration file (only required for source) `.yaml` into a `.zip` file (assuming that the file is `mysqlSink.zip`). Put this file into the http server that the production environment can also access. 
-    - Some plugin may depend on libs that are not installed on eKuiper environment. The user can either install them manually in the eKuiper server or put the install script and dependencies in the plugin zip and let the plugin management system do the installation. Please refer to [ Plugin File Format](../../../operation/restapi/plugins.md#plugin-file-format) for detail.
+    - Some plugins may depend on libs that are not installed in the eKuiper environment. The user can either install them manually on the eKuiper server or put the install script and dependencies in the plugin zip and let the plugin management system do the installation. Please refer to [Plugin File Format](../../../api/restapi/plugins.md#plugin-file-format) for details.
 2. Use REST API to create plugins:
    ```
    POST http://{$production_eKuiper_ip}:9081/plugins/sinks

+ 5 - 5
docs/en_US/extension/native/develop/sink.md

@@ -1,6 +1,6 @@
 # Sink Extension
 
-Sink feed data from eKuiper into external systems. eKuiper has built-in sink support for [MQTT broker](../../../rules/sinks/builtin/mqtt.md) and [log sink](../../../rules/sinks/builtin/log.md). There are still needs to publish data to various external systems include messaging systems and database etc. Sink extension is presented to meet this requirement.
+Sinks feed data from eKuiper into external systems. eKuiper has built-in sink support for [MQTT broker](../../../guide/sinks/builtin/mqtt.md) and [log sink](../../../guide/sinks/builtin/log.md). There is still a need to publish data to various external systems, including messaging systems, databases, etc. Sink extension is presented to meet this requirement.
 
 ## Developing
 
@@ -10,7 +10,7 @@ To develop a sink for eKuiper is to implement [api.Sink](https://github.com/lf-e
 
 Before starting the development, you must [setup the environment for golang plugin](../overview.md#setup-the-plugin-developing-environment). 
 
-To develop a sink, the _Configure_ method must be implemented. This method will be called once the sink is initialized. In this method, a map that contains the configuration in the [rule actions definition](../../../rules/overview.md#sinksactions) is passed in. Typically, there will be information such as host, port, user and password of the external system. You can use this map to initialize this sink.
+To develop a sink, the _Configure_ method must be implemented. This method will be called once the sink is initialized. In this method, a map that contains the configuration in the [rule actions definition](../../../guide/rules/overview.md#sinksactions) is passed in. Typically, there will be information such as host, port, user and password of the external system. You can use this map to initialize this sink.
 
 ```go
 //Called during initialization. Configure the sink with the properties from action definition 
@@ -28,7 +28,7 @@ The main task for a Sink is to implement _collect_ method. The function will be
 
 Most of the time, the map content will be the selective fields. But if `sendError` property is enabled and there are errors happen in the rule, the map content will be like `{"error":"error message here"}`.
 
-The developer can fetch the transformed result from the context method `ctx.TransformOutput(data)`. The return values are the transformed value of `[]byte` type. Currently, it will be transformed to the json byte array be default or formatted with the set [`dataTemlate` property](../../../rules/overview.md#data-template). If the value is transformed by dataTemplate, the second return value will be true. 
+The developer can fetch the transformed result from the context method `ctx.TransformOutput(data)`. The return values are the transformed value of `[]byte` type. Currently, it will be transformed to a json byte array by default or formatted with the set [`dataTemplate` property](../../../guide/rules/overview.md#data-template). If the value is transformed by dataTemplate, the second return value will be true.
 
 The developer can return any errors. However, to leverage the retry feature of eKuiper, the developer must return an error whose message starts with "io error".
 
@@ -61,7 +61,7 @@ So in the _Configure_ method, parse the `rowkindField` to know which field in th
 
 #### Parse dynamic properties
 
-For customized sink plugins, users may still want to support [dynamic properties](../../../rules/overview.md#dynamic-properties) like the built-in ones.
+For customized sink plugins, users may still want to support [dynamic properties](../../../guide/rules/overview.md#dynamic-properties) like the built-in ones.
 
 In the context object, a function `ParseTemplate` is provided to support the parsing of the dynamic property with the go template syntax. In the customized sink, developers can specify some properties to be dynamic according to the business logic. And in the plugin code, use this function to parse the user input in the collect function or elsewhere.
 
@@ -80,7 +80,7 @@ go build -trimpath -modfile extensions.mod --buildmode=plugin -o extensions/sink
 
 ### Usage
 
-The customized sink is specified in a [actions definition](../../../rules/overview.md#sinksactions). Its name is used as the key of the action. The configuration is the value.
+The customized sink is specified in an [actions definition](../../../guide/rules/overview.md#sinksactions). Its name is used as the key of the action, and the configuration is the value.
 
 If you have developed a sink implementation MySink, you should have:
 1. In the plugin file, symbol MySink is exported.

+ 3 - 3
docs/en_US/extension/native/develop/source.md

@@ -1,6 +1,6 @@
 # Source Extension
 
-Sources feed data into eKuiper from other systems. eKuiper has built-in source support for [MQTT broker](../../../rules/sources/builtin/mqtt.md). There are still needs to consume data from various external systems include messaging systems and data pipelines etc. Source extension is presented to meet this requirement.
+Sources feed data into eKuiper from other systems. eKuiper has built-in source support for [MQTT broker](../../../guide/sources/builtin/mqtt.md). There is still a need to consume data from various external systems, including messaging systems, data pipelines and so on. Source extension is provided to meet this requirement.
 
 ## Developing
 
@@ -86,7 +86,7 @@ function MySourceLookup() api.LookupSource{
 The [SQL Lookup Source](https://github.com/lf-edge/ekuiper/blob/master/extensions/sources/sql/sqlLookup.go) is a good example.
 
 ### Rewindable source
-If the [rule checkpoint](../../../rules/state_and_fault_tolerance.md#source-consideration) is enabled, the source requires to be rewindable. That means the source need to implement both `api.Source` and `api.Rewindable` interface. 
+If the [rule checkpoint](../../../guide/rules/state_and_fault_tolerance.md#source-consideration) is enabled, the source is required to be rewindable. That means the source needs to implement both the `api.Source` and `api.Rewindable` interfaces.
 
 A typical implementation is to save an `offset` as a field of the source and update the offset value when new values are read in. Notice that `GetOffset()` will be called by the eKuiper system, which means the offset value can be accessed by multiple goroutines. So a lock is required when reading or writing the offset.
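A minimal sketch of such thread-safe offset handling is shown below; the `offsetTracker` type and its method signatures are illustrative simplifications, not the actual `api.Rewindable` interface:

```go
package main

import (
	"fmt"
	"sync"
)

// offsetTracker guards the source offset with a mutex, because GetOffset
// may be called from a different goroutine than the one reading data.
type offsetTracker struct {
	mu     sync.Mutex
	offset int64
}

// GetOffset returns the current offset under the lock.
func (o *offsetTracker) GetOffset() int64 {
	o.mu.Lock()
	defer o.mu.Unlock()
	return o.offset
}

// Rewind resets the offset, e.g. when restoring from a checkpoint.
func (o *offsetTracker) Rewind(offset int64) {
	o.mu.Lock()
	defer o.mu.Unlock()
	o.offset = offset
}

// advance is called by the reader goroutine for each new value.
func (o *offsetTracker) advance() {
	o.mu.Lock()
	defer o.mu.Unlock()
	o.offset++
}

func main() {
	ot := &offsetTracker{}
	var wg sync.WaitGroup
	// Simulate concurrent reads advancing the offset.
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			ot.advance()
		}()
	}
	wg.Wait()
	fmt.Println(ot.GetOffset())
}
```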
 
@@ -101,7 +101,7 @@ A configuration system is supported for eKuiper extension which will automatical
  To use configuration in your source, the following conventions must be followed.
  1. The name of your configuration file must be the same as the plugin name. For example, mySource.yaml.
  2. The yaml file must be located inside _etc/sources_
- 3. The format of the yaml file could be found [here](../../../rules/sources/builtin/mqtt.md)
+ 3. The format of the yaml file could be found [here](../../../guide/sources/builtin/mqtt.md)
  
 #### common configuration field
 

+ 3 - 3
docs/en_US/extension/native/overview.md

@@ -2,8 +2,8 @@
 
 eKuiper allows user to customize the different kinds of extensions by the native golang plugin system. 
 
-- The source extension is used for extending different stream source, such as consuming data from other message brokers. eKuiper has built-in source support for [MQTT broker](../../rules/sources/builtin/mqtt.md).
-- Sink/Action extension is used for extending pub/push data to different targets, such as database, other message system, web interfaces or file systems. Built-in action is supported in eKuiper, see [MQTT](../../rules/sinks/builtin/mqtt.md) & [log files](../../rules/sinks/builtin/log.md).
+- The source extension is used for extending different stream source, such as consuming data from other message brokers. eKuiper has built-in source support for [MQTT broker](../../guide/sources/builtin/mqtt.md).
+- Sink/Action extension is used for extending pub/push data to different targets, such as database, other message system, web interfaces or file systems. Built-in action is supported in eKuiper, see [MQTT](../../guide/sinks/builtin/mqtt.md) & [log files](../../guide/sinks/builtin/log.md).
 - Function extension allows users to extend the functions used in SQL. Built-in functions are supported in eKuiper, see [functions](../../sqls/built-in_functions.md).
 
 Please read the following to learn how to implement different extensions.
@@ -77,7 +77,7 @@ func (f *accumulateWordCountFunc) Exec(args []interface{}, ctx api.FunctionConte
 
 ## Runtime dependencies
 
-Some plugin may need to access dependencies in the file system. Those files is put under {{eKuiperPath}}/etc/{{pluginType}}/{{pluginName}} directory. When packaging the plugin, put those files in [etc directory](../../operation/restapi/plugins.md#plugin-file-format). After installation, they will be moved to the recommended place.
+Some plugins may need to access dependencies in the file system. Those files are put under the {{eKuiperPath}}/etc/{{pluginType}}/{{pluginName}} directory. When packaging the plugin, put those files in the [etc directory](../../api/restapi/plugins.md#plugin-file-format). After installation, they will be moved to the recommended place.
 
 In the plugin source code, developers can access the dependencies of file system by getting the eKuiper root path from the context:
 

+ 1 - 1
docs/en_US/extension/portable/overview.md

@@ -78,7 +78,7 @@ A plugin can contain multiple sources, sinks and functions, define them in the c
 
 The portable plugins can be automatically loaded in start up by putting the content(the json, the executable and all supportive files) inside `plugins/portables/${pluginName}` and the configurations to the corresponding directories under `etc`.
 
-To manage the portable plugins in runtime, we can use the [REST](../../operation/restapi/plugins.md) or [CLI](../../operation/cli/plugins.md) commands.
+To manage the portable plugins in runtime, we can use the [REST](../../api/restapi/plugins.md) or [CLI](../../api/cli/plugins.md) commands.
 
 ## Restrictions
 

+ 1 - 1
docs/en_US/extension/portable/python_sdk.md

@@ -6,7 +6,7 @@ To run python plugin, there are two prerequisites in the runtime environment:
 1. Install Python 3.x environment.
 2. Install nng and ekuiper package by `pip install nng ekuiper`.
 
-By default, the eKuiper portable plugin runtime will run python script with `python userscript.py`. If users have multiple python instance or an alternative python executable command, they can specify the python command in [the configuration file](../../operation/config/configuration_file.md#portable-plugin-configurations).
+By default, the eKuiper portable plugin runtime will run python script with `python userscript.py`. If users have multiple python instance or an alternative python executable command, they can specify the python command in [the configuration file](../../configuration/global_configurations.md#portable-plugin-configurations).
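For reference, the relevant section in the configuration file looks roughly like this (the `pythonBin` key name is taken from the global configuration docs; verify it against your eKuiper version):

```yaml
portable:
  # command used to launch python plugin scripts; change to e.g. python3 if needed
  pythonBin: python3
```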
 
 ## Development
 

+ 150 - 0
docs/en_US/extension/wasm/overview.md

@@ -0,0 +1,150 @@
+# Wasm Plugin
+
+As a complement to native plugins, Wasm plugins aim to provide the same functionality while allowing them to run in a more general environment and to be created in more languages.
+
+The steps to create a plugin are as follows:
+
+1. Develop the plugin
+2. Build or package the plugin according to the programming language
+3. Register the plugin by eKuiper file/REST/CLI
+
+## Install the tools
+
+In the Wasm plugin mode, implement the functions in the language of your choice and compile them into a Wasm file. Any language supported by WebAssembly works, such as Go and Rust.
+We use the tinygo tool to compile Go files into Wasm files.
+
+To check whether go is installed, run the following command:
+```shell
+go version
+```
+To check whether tinygo is installed, run the following command:
+```shell
+tinygo version
+```
+tinygo download: https://github.com/tinygo-org/tinygo/releases
+
+To check whether wasmedge is installed, run the following command:
+```shell
+wasmedge -v
+```
+Official wasmedge download instructions: https://wasmedge.org/book/en/quick_start/install.html
+
+Installation commands:
+```shell
+# The easiest way to install WasmEdge is to run the following command. Your system should have git and curl as prerequisites.
+
+curl -sSf https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install.sh | bash
+
+# Run the following command to make the installed binary available in the current session.
+
+source $HOME/.wasmedge/env
+```
+
+## Develop functions
+Official tutorial: https://wasmedge.org/book/en/write_wasm/go.html
+
+Develop the fibonacci plugin (slightly modified from the official example)
+
+fibonacci.go
+```go
+package main
+
+func main() {
+}
+
+//export fib
+func fibArray(n int32) int32 {
+	arr := make([]int32, n)
+	for i := int32(0); i < n; i++ {
+		switch {
+		case i < 2:
+			arr[i] = i
+		default:
+			arr[i] = arr[i-1] + arr[i-2]
+		}
+	}
+	return arr[n-1]
+}
+```
+Next, compile fibonacci.go into the fibonacci.wasm file:
+```shell
+tinygo build -o fibonacci.wasm -target wasi fibonacci.go
+```
+Run it and check that the result is as expected:
+```shell
+$ wasmedge --reactor fibonacci.wasm fib 10
+34
+```
+
+## Package and publish
+
+After development, we need to package the results into a zip file for installation. In the zip file, the file structure must follow the conventions below with the correct naming:
+
+- {pluginName}.json: the file name must be the same as the plugin name defined in the plugin main program and in the REST/CLI commands.
+- {pluginName}.wasm: the file name must be the same as the plugin name defined in the plugin main program and in the REST/CLI commands.
+
+In the json file, we need to describe the metadata of this plugin. The information must match the definitions in the plugin main program. Here is an example:
+
+fibonacci.json
+```json
+{
+  "version": "v1.0.0",
+  "functions": [
+    "fib"
+  ],
+  "wasmEngine": "wasmedge"
+}
+```
+Install the plugin:
+
+First, start the server
+```shell
+bin/kuiperd
+```
+Then create the plugin
+```shell
+bin/kuiper create plugin wasm fibonacci "{\"file\":\"file:///$HOME/ekuiper/internal/plugin/testzips/wasm/fibonacci.zip\"}"
+```
+$HOME is the home path on your own machine; check it with the following command
+```shell
+echo $HOME
+```
+The fibonacci.zip file used above is already provided. If you develop a new plugin, change the path to the absolute path of the new plugin.
+
+Query the plugin information
+```shell
+bin/kuiper describe plugin wasm fibonacci
+```
+## Run
+1. Create the stream
+```shell
+bin/kuiper create stream demo_fib '(num float) WITH (FORMAT="JSON", DATASOURCE="demo_fib")'
+
+bin/kuiper query
+
+select fib(num) from demo_fib
+```
+2. Install the emqx docker container and run it
+```shell
+docker pull emqx/emqx:v4.0.0
+docker run -d --name emqx -p 1883:1883 -p 8081:8081 -p 8083:8083 -p 8883:8883 -p 8084:8084 -p 18083:18083 emqx/emqx:v4.0.0
+```
+3. Log in to the dashboard
+
+Address: http://127.0.0.1:18083/
+
+Login account/password: admin/public
+
+Use the TOOLS/Websocket tool to send data:
+
+Topic    : demo_fib
+
+Messages : {"num" : 25}
+
+After the message is sent successfully, the terminal will receive the execution result.
+
+## Management
+
+The portable plugins can be automatically loaded at startup by putting the content (the json and Wasm files) in `plugins/wasm/${pluginName}`.
+
+To manage the portable plugins at runtime, we can use the [REST](https://github.com/lf-edge/ekuiper/blob/master/docs/zh_CN/operation/restapi/plugins.md) or [CLI](https://github.com/lf-edge/ekuiper/blob/master/docs/zh_CN/operation/cli/plugins.md) commands.

+ 0 - 205
docs/en_US/getting_started.md

@@ -1,205 +0,0 @@
-
-
-## Download & install
-
-Get the installation package via https://github.com/lf-edge/ekuiper/releases or https://www.emqx.io/downloads#kuiper
-
-### zip、tar.gz compressed package
-
-Unzip eKuiper
-
-```sh
-$ unzip kuiper-$VERISON-$OS-$ARCH.zip
-or
-$ tar -xzf kuiper-$VERISON-$OS-$ARCH.zip
-```
-
-Run `bin/kuiperd` to start the eKuiper server
-
-```sh
-$ bin/kuiperd
-```
-
-You should see a successful message: `Serving Rule server on port 20498`
-
-The directory structure of eKuiper is as follows:
-
-```
-eKuiper_installed_dir
-  bin
-    server
-    cli
-  etc
-    mqtt_source.yaml
-    ...
-  data
-    ...
-  plugins
-    ...
-  log
-    ...
-```
-
-
-#### deb、rpm installation package
-
-Use related commands to install eKuiper
-
-```sh
-$ sudo dpkg -i kuiper_$VERSION_$ARCH.deb
-or
-$ sudo rpm -ivh kuiper-$VERSION-1.el7.rpm
-```
-
-Run `kuiperd` to start the eKuiper server
-
-```sh
-$ sudo kuiperd
-```
-
-You should see a successful message: `Serving Rule server on port 20498`
-
-eKuiper also supports systemctl startup
-
- ```sh
- $ sudo systemctl start kuiper
- ```
-
-The directory structure of eKuiper is as follows:
-
-```
-/usr/lib/kuiper/bin
-  server
-  cli
-/etc/kuiper
-  mqtt_source.yaml
-  ...
-/var/lib/kuiper/data
-  ...
-/var/lib/kuiper/plugins
-  ...
-/var/log/kuiper
-   ...
-```
-
-
-
-## Run the first rule stream
-
-eKuiper rule is composed by a SQL and multiple actions. eKuiper SQL is an easy to use SQL-like language to specify the logic of the rule stream. By providing the rule through CLI, a rule stream will be created in the rule engine and run continuously. The user can then manage the rules through CLI.
-
-eKuiper has a lot of built-in functions and extensions available for complex analysis, and you can find more information about the grammer and its functions from the [eKuiper SQL reference](sqls/overview.md).
-
-Let's consider a sample scenario where we are receiving temperature and humidity record from a sensor through MQTT service and we want to issue an alert when the temperature is bigger than 30 degrees celcius in a time window. We can write a eKuiper rule for the above scenario using the following several steps.
-
-### Prerequisite
-
-We assume there is already a MQTT broker as the data source of eKuiper server. If you don't have one, EMQX is recommended. Please follow the [EMQ Installation Guide](https://docs.emqx.io/en/broker/latest/getting-started/install.html) to setup a mqtt broker.
-
-### Defining the input stream
-
-The stream needs to have a name and a schema defining the data that each incoming event should contain. For this scenario, we will use an MQTT source to consume temperature events. The input stream can be defined by SQL language.
-
-We create a stream named `demo` which consumes MQTT `demo` topic as specified in the DATASOURCE property.
-```sh
-$ bin/kuiper create stream demo '(temperature float, humidity bigint) WITH (FORMAT="JSON", DATASOURCE="demo")'
-```
-The MQTT source will connect to MQTT broker at `tcp://localhost:1883`. If your MQTT broker is in another location, specify it in the `etc/mqtt_source.yaml`.  You can change the server configuration as in below.
-
-```yaml
-default:
-  qos: 1
-  sharedsubscription: true
-  server: "tcp://127.0.0.1:1883"
-```
-
-You can use command `kuiper show streams` to see if the `demo` stream was created or not.
-
-### Testing the stream through query tool
-
-Now the stream is created, it can be tested from `kuiper query` command. The `kuiper` prompt is displayed as below after typing `cli query`.
-
-```sh
-$ bin/kuiper query
-kuiper > 
-```
-
-In the `kuiper` prompt, you can type SQL and validate the SQL against the stream.
-
-```sh
-kuiper > select count(*), avg(humidity) as avg_hum, max(humidity) as max_hum from demo where temperature > 30 group by TUMBLINGWINDOW(ss, 5);
-
-query is submit successfully.
-```
-
-Now if any data are published to the MQTT server available at `tcp://127.0.0.1:1883`, then it prints message as following.
-
-```
-kuiper > [{"avg_hum":41,"count":4,"max_hum":91}]
-[{"avg_hum":62,"count":5,"max_hum":96}]
-[{"avg_hum":36,"count":3,"max_hum":63}]
-[{"avg_hum":48,"count":3,"max_hum":71}]
-[{"avg_hum":40,"count":3,"max_hum":69}]
-[{"avg_hum":44,"count":4,"max_hum":57}]
-[{"avg_hum":42,"count":3,"max_hum":74}]
-[{"avg_hum":53,"count":3,"max_hum":81}]
-...
-```
-
-You can press `ctrl + c` to break the query, and server will terminate streaming if detecting client disconnects from the query. Below is the log print at server.
-
-```
-...
-time="2019-09-09T21:46:54+08:00" level=info msg="The client seems no longer fetch the query result, stop the query now."
-time="2019-09-09T21:46:54+08:00" level=info msg="stop the query."
-...
-```
-
-### Writing the rule
-
-As part of the rule, we need to specify the following:
-* rule name: the id of the rule. It must be unique
-* sql: the query to run for the rule
-* actions: the output actions for the rule
-
-We can run the `kuiper rule` command to create rule and specify the rule definition in a file
-
-```sh
-$ bin/kuiper create rule ruleDemo -f myRule
-```
-The content of `myRule` file. It prints out to the log  for the events where the average temperature in a 1 minute tumbling window is bigger than 30.
-```json
-{
-    "sql": "SELECT temperature from demo where temperature > 30",
-    "actions": [{
-        "log":  {}
-    }]
-}
-```
-You should see a successful message `rule ruleDemo created` in the stream log, and the rule is now set up and running.
-
-### Testing the rule
-Now the rule engine is ready to receive events from  MQTT `demo`  topic. To test it, just use a MQTT client to publish message to the `demo` topic. The message should be in json format like this:
-```json
-{"temperature":31.2, "humidity": 77}
-```
-
-Check the stream log located at "`log/stream.log`", and you would see the filtered data are printed out. Also, if you send below message, it does not meet the SQL condition, and the message will be filtered.
-
-```json
-{"temperature":29, "humidity": 80}
-```
-
-### Managing the rules
-You can use command line tool to stop the rule for a while and restart it and other management work. The rule name is the identifier of a rule. Check [Rule Management CLI](operation/cli/rules.md) for detail
-```sh
-$ bin/kuiper stop rule ruleDemo
-```
-
-Refer to the following topics for guidance on using the eKuiper.
-
-- [Command line interface tools - CLI](operation/cli/overview.md)
-- [eKuiper SQL reference](./sqls/overview.md)
-- [Rules](./rules/overview.md)
-- [Extend eKuiper](./extension/overview.md)
-- [Plugins](extension/native/develop/overview.md)

+ 216 - 0
docs/en_US/getting_started/getting_started.md

@@ -0,0 +1,216 @@
+# Getting Started
+
+Starting from download and installation, this document will guide you to start eKuiper and run the first rule.
+
+## Install eKuiper
+
+eKuiper provides a docker image, binary packages and a helm chart for installation.
+
+In this tutorial, we provide both web UI and CLI to create and manage the rules. If you want to run the eKuiper manager which is the web management console for eKuiper, please refer to [running eKuiper with management console](../installation.md#running-ekuiper-with-management-console).
+
+### Running in docker
+
+Docker deployment is the fastest way to start experimenting with eKuiper. 
+
+```shell
+docker run -p 9081:9081 -d --name kuiper -e MQTT_SOURCE__DEFAULT__SERVER="tcp://broker.emqx.io:1883" lfedge/ekuiper:$tag
+```
+
+For more information about docker installation, please refer to [running eKuiper in docker](../installation.md#running-ekuiper-in-docker).
+
+### Running in Kubernetes
+
+For Kubernetes, eKuiper offers a helm chart. Please refer to [install via helm](../installation.md#install-via-helm--k8sk3s-) for detail.
+
+### Running in a VM or on bare metal
+
+eKuiper can be deployed directly to bare metal servers or virtual machines.
+
+eKuiper has prebuilt packages downloadable for Linux distributions such as CentOS, Debian and Ubuntu, as well as macOS. You can [install from zip](../installation.md#install-from-zip) or [from packages](../installation.md#install-from-package).
+
+For other platforms, you may [build the runnable from source code](../installation.md#compilation).
+
+## Create and manage the first rule
+
+As a rule engine, eKuiper allows the user to submit stream processing jobs, aka rules, and manage rules through CLI, REST API or the management console. In this tutorial, we will walk you through rule creation and management with the management console and CLI respectively.
+
+An eKuiper rule is composed of a SQL query and multiple actions. eKuiper SQL is an easy-to-use SQL-like language to specify the logic of a rule. eKuiper has a lot of built-in functions and extensions available for complex analysis to be used in your SQL. You can find more information about the syntax and its functions from the [eKuiper SQL reference](../sqls/overview.md).
+
+### Prerequisite
+
+We assume there is already an MQTT broker as the data source of our eKuiper rule. If you don't have one, EMQX is recommended. Please follow the [EMQ Installation Guide](https://docs.emqx.io/en/broker/latest/getting-started/install.html) to set up an MQTT broker.
+
+You can also use the public MQTT test server `tcp://broker.emqx.io:1883` hosted by [EMQ](https://www.emqx.io).
+
+Remember your broker address, we will use it in our MQTT configurations in this tutorial.
+
+### Scenario
+
+Let's consider a sample scenario where we are receiving temperature and humidity events from a sensor through the MQTT service, and we want to issue an alert when the average temperature is bigger than 30 degrees Celsius in a time window. We can write an eKuiper rule for the above scenario using the following steps.
+
+1. Create a stream to define the data source that we want to process. The stream needs to have a name and an optional schema defining the data structure that each incoming event should contain. For this scenario, we will use an MQTT source to consume temperature events.
+2. Create the rule to define how to process the stream and the actions to take after processing.
+3. Get the rule status and manage it, e.g. start, stop and delete it.
+
+We will do these steps in management console and CLI respectively to create the same rule.
+
+### Management console
+
+Please make sure eKuiper manager has been installed and configured.
+
+### Defining the stream
+
+1. In Source/Stream page, click `Create stream` button.
+2. Create a stream named `demo` which consumes the MQTT `demo` topic as specified in the DATASOURCE property. The MQTT source will connect to the MQTT broker at `tcp://localhost:1883`. If your MQTT broker is in another location, click `Add configuration key` to set up a new configuration and use it.
+    ![create stream](../resources/create_stream.png)
+3. Click `Submit`. You should find the `demo` stream in the stream list.
+
+### Compose the rule
+
+1. Go to Rules page, click `Create rule`.
+2. Write the rule id, name and SQL as below. Then click `Add` to add actions. The SQL is `SELECT count(*), avg(temperature) AS avg_temp, max(humidity) AS max_hum FROM demo GROUP BY TUMBLINGWINDOW(ss, 5) HAVING avg_temp > 30`.
+![create rule](../resources/create_rule.png)
+3. Add the MQTT action and fill in the configurations as below. Select `mqtt` in the Sink type dropdown. Set the broker address to your broker and set the topic to `result/rule1`. ClientID is optional; if not set, a uuid will be assigned to it. If set, please make sure the id is unique and only used in one rule. Set the other properties like username and password according to your MQTT broker settings.
+![add mqtt action](../resources/mqtt_action.png)
+4. Click `Submit`. You should find the `myRule` rule in the rule list and started.
+
+By now, we have created a rule by specifying SQL as the logic and adding one MQTT action. As you can see, a rule can have multiple actions, so you can add more actions like log, REST and file to issue the alarm.
+
+### Manage the rule
+
+In the Rules page, we can find all the created rules and their status as below.
+
+![rule list](../resources/rule_list.png)
+
+You can start or stop the rule by touching the switch button. In the Operations column, the second operation is status, which shows the running status and [metrics](../operation/usage/monitor_with_prometheus.md) of the rule. Once the data source has data coming in, you should see the metrics numbers rising.
+
+![rule status](../resources/rule_status.png)
+
+You can edit, duplicate and delete the rules by clicking the button in the Operations column.
+
+### CLI
+
+eKuiper provides a CLI binary after installation. It can be used locally to manage the rule engine without any external tools.
+
+### Defining the stream
+
+We create a stream named `demo` which consumes MQTT `demo` topic as specified in the DATASOURCE property.
+
+```sh
+$ bin/kuiper create stream demo '(temperature float, humidity bigint) WITH (FORMAT="JSON", DATASOURCE="demo")'
+```
+
+The MQTT source will connect to the MQTT broker at `tcp://localhost:1883`. If your MQTT broker is in another location, specify it in `etc/mqtt_source.yaml`. You can change the server configuration as below.
+
+```yaml
+default:
+  qos: 1
+  sharedsubscription: true
+  server: "tcp://127.0.0.1:1883"
+```
+
+You can use command `kuiper show streams` to see if the `demo` stream was created or not.
+
+### Testing the stream through query tool
+
+Now the stream is created, it can be tested with the `kuiper query` command. The `kuiper` prompt is displayed as below after typing `bin/kuiper query`.
+
+```sh
+$ bin/kuiper query
+kuiper > 
+```
+
+In the `kuiper` prompt, you can type SQL and validate the SQL against the stream.
+
+```sh
+kuiper > select count(*), avg(humidity) as avg_hum, max(humidity) as max_hum from demo where temperature > 30 group by TUMBLINGWINDOW(ss, 5);
+
+query is submit successfully.
+```
+
+Now if any data are published to the MQTT server available at `tcp://127.0.0.1:1883`, it prints messages as follows.
+
+```
+kuiper > [{"avg_hum":41,"count":4,"max_hum":91}]
+[{"avg_hum":62,"count":5,"max_hum":96}]
+[{"avg_hum":36,"count":3,"max_hum":63}]
+[{"avg_hum":48,"count":3,"max_hum":71}]
+[{"avg_hum":40,"count":3,"max_hum":69}]
+[{"avg_hum":44,"count":4,"max_hum":57}]
+[{"avg_hum":42,"count":3,"max_hum":74}]
+[{"avg_hum":53,"count":3,"max_hum":81}]
+...
+```
+
+You can press `ctrl + c` to break the query, and the server will terminate streaming if it detects that the client has disconnected from the query. Below is the log printed at the server.
+
+```
+...
+time="2019-09-09T21:46:54+08:00" level=info msg="The client seems no longer fetch the query result, stop the query now."
+time="2019-09-09T21:46:54+08:00" level=info msg="stop the query."
+...
+```
+
+### Writing the rule
+
+As part of the rule, we need to specify the following:
+* rule id: the id of the rule. It must be unique
+* rule name: the description of the rule
+* sql: the query to run for the rule
+* actions: the output actions for the rule
+
+We can run the `kuiper create rule` command to create a rule and specify the rule definition in a file
+
+```sh
+$ bin/kuiper create rule myRule -f myRule
+```
+
+The content of the `myRule` file is as below. It publishes the result to the MQTT topic `result/myRule` when the average temperature in a 5-second tumbling window is bigger than 30.
+
+```json
+{
+    "sql": "SELECT count(*), avg(temperature) as avg_temp, max(humidity) as max_hum from demo group by TUMBLINGWINDOW(ss, 5) HAVING avg_temp > 30;",
+    "actions": [{
+        "mqtt":  {
+          "server": "tcp://127.0.0.1:1883",
+          "topic": "result/myRule",
+          "sendSingle": true
+        }
+    }]
+}
+```
+You should see a successful message `rule myRule created` in the stream log, and the rule is now set up and running.
+
+### Managing the rules
+
+You can use the command line tool to manage rules, e.g. to stop a rule for a while and restart it. The rule id is the identifier of a rule.
+
+```sh
+$ bin/kuiper stop rule myRule
+```
+
+## Testing the rule
+
+Now the rule engine is ready to receive events from the MQTT `demo` topic. To test it, just use an MQTT client such as [MQTT X](https://mqttx.app/) to publish messages to the `demo` topic. The message should be in json format like this:
+
+```json
+{"temperature":31.2, "humidity": 77}
+```
+
+Since we publish the alarm to the MQTT topic `result/myRule`, we can use an MQTT client to subscribe to the topic. We should receive a message if the 5-second average temperature is bigger than 30.
+
+Below is an example data and the output in MQTT X.
+
+![result](../resources/result.png)
+
+## Further Reading
+
+Refer to the following topics for guidance on using the eKuiper.
+
+- [Installation](../installation.md)
+- [Rules](../guide/rules/overview.md)
+- [SQL reference](../sqls/overview.md)
+- [Stream](../guide/streams/overview.md)
+- [Sink](../guide/sinks/overview.md)
+- [Command line interface tools - CLI](../api/cli/overview.md)
+- [Management Console](../guide/rules/overview.md)

+ 1 - 1
docs/en_US/quick_start_docker.md

@@ -43,7 +43,7 @@
 
 6. To stop the test, just press `ctrl + c` in `bin/kuiper query` command console, or input `exit` and press enter.
 
-You can also refer to [eKuiper dashboard documentation](./operation/manager-ui/overview.md) for better using experience.
+You can also refer to [eKuiper dashboard documentation](../operation/manager-ui/overview.md) for better using experience.
 
 Next for exploring more powerful features of eKuiper? Refer to below for how to apply LF Edge eKuiper in edge and integrate with AWS / Azure IoT cloud.
 

+ 1 - 1
docs/en_US/tutorials/ai/python_tensorflow_lite_tutorial.md

@@ -21,7 +21,7 @@ Before starting the tutorial, please prepare the following products or environme
 1. Install the Python 3.x environment.
 2. Install the pynng, ekuiper and tensorflow lite packages via `pip install pynng ekuiper tflite_runtime`.
 
-By default, the portable plugin for eKuiper will run with the `python` command. If your environment does not support the `python` command, please use the [configuration file](../../operation/config/configuration_file.md#portable-plugin-config) to modify the Python command, such as `python3`.
+By default, the portable plugin for eKuiper will run with the `python` command. If your environment does not support the `python` command, please use the [configuration file](../../configuration/global_configurations.md#portable-plugin-configurations) to modify the Python command, such as `python3`.
 
 If you are developing with Docker, you can use the `lfedge/ekuiper:<tag>-slim-python` version. This version includes both the eKuiper and python environments.
 

docs/en_US/tutorials/ai/tensorflow_lite_tutorial.md → docs/en_US/guide/ai/tensorflow_lite_tutorial.md


The file diff has been suppressed because it is too large
+ 4 - 4
docs/en_US/rules/graph_rule.md


+ 165 - 0
docs/en_US/guide/rules/overview.md

@@ -0,0 +1,165 @@
+# Rules
+
+Rules are defined in JSON; below is an example.
+
+```json
+{
+  "id": "rule1",
+  "sql": "SELECT demo.temperature, demo1.temp FROM demo left join demo1 on demo.timestamp = demo1.timestamp where demo.temperature > demo1.temp GROUP BY demo.temperature, HOPPINGWINDOW(ss, 20, 10)",
+  "actions": [
+    {
+      "log": {}
+    },
+    {
+      "mqtt": {
+        "server": "tcp://47.52.67.87:1883",
+        "topic": "demoSink"
+      }
+    }
+  ]
+}
+```
+
+The parameters for the rules are:
+
+| Parameter name | Optional                         | Description                                                                  |
+|----------------|----------------------------------|------------------------------------------------------------------------------|
+| id             | false                            | The id of the rule. The rule id must be unique in the same eKuiper instance. |
+| name           | true                             | The display name or description of a rule                                    |
+| sql            | required if graph is not defined | The sql query to run for the rule                                            |
+| actions        | required if graph is not defined | An array of sink actions                                                     |
+| graph          | required if sql is not defined   | The json presentation of the rule's DAG(directed acyclic graph)              |
+| options        | true                             | A map of options                                                             |
+
+## Rule Logic
+
+A rule represents a stream processing flow: from a data source that ingests data into the flow, through various processing logic, to actions that sink the data to external systems.
+
+There are two ways to define the flow, aka the business logic, of a rule: either the SQL/actions combination or the newly added graph API.
+
+### SQL Query
+
+By specifying the `sql` and `actions` properties, we can define the business logic of a rule in a declarative way. Among these, `sql` defines the SQL query to run against a predefined stream, which will transform the data. The output data can then be routed to multiple locations by `actions`.
+
+#### SQL
+
+The simplest rule SQL is like `SELECT * FROM demo`. It has an ANSI SQL-like syntax and can leverage the abundant operators and functions provided by the eKuiper runtime. See [SQL](../../sqls/overview.md) for more information on eKuiper SQL.
+
+Most of the SQL clauses define the logic, except the `FROM` clause, which is responsible for specifying the stream. In this example, `demo` is the stream. It is possible to have multiple streams, or a mix of streams and tables, by using a join clause. As a streaming engine, there must be at least one stream in a rule.
+
+Thus, the SQL query here actually defines two parts:
+
+- The stream(s) or table(s) to be processed.
+- How to process.
+
+Before using the SQL rule, the stream must be defined first. Please check [streams](../streams/overview.md) for detail.
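For example, a minimal definition for the `demo` stream referenced above might look like this (the schema fields are illustrative):

```sql
CREATE STREAM demo (temperature float, humidity bigint)
    WITH (FORMAT = "JSON", DATASOURCE = "demo");
```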
+
+#### Actions
+
+The actions part defines the output actions for a rule. Each rule can have multiple actions. An action is an instance of a sink connector. When defining actions, the key is the sink connector type name, and the value is its properties.
+
+eKuiper has abundant built-in sink connector types such as mqtt, rest and file. Users can also extend more sink types to be used in a rule action. Each sink type has its own property set. For more detail, please check [sink](../sinks/overview.md).
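For instance, the actions array below fans the same result out to both the log and a REST endpoint (the URL is a placeholder; see each sink's documentation for its full property set):

```json
"actions": [
  {
    "log": {}
  },
  {
    "rest": {
      "url": "http://127.0.0.1:8080/api/alerts",
      "method": "post"
    }
  }
]
```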
+
+### Graph rule
+
+Since eKuiper 1.6.0, eKuiper provides the graph property in the rule model as an alternative way to create a rule. The property defines the DAG of a rule in JSON format. It is easy to map it directly to a graph in a GUI editor, and it is suitable for serving as the backend of a drag-and-drop UI. An example of the graph rule definition is as below:
+
+```json
+{
+  "id": "rule1",
+  "name": "Test Condition",
+  "graph": {
+    "nodes": {
+      "demo": {
+        "type": "source",
+        "nodeType": "mqtt",
+        "props": {
+          "datasource": "devices/+/messages"
+        }
+      },
+      "humidityFilter": {
+        "type": "operator",
+        "nodeType": "filter",
+        "props": {
+          "expr": "humidity > 30"
+        }
+      },
+      "logfunc": {
+        "type": "operator",
+        "nodeType": "function",
+        "props": {
+          "expr": "log(temperature) as log_temperature"
+        }
+      },
+      "tempFilter": {
+        "type": "operator",
+        "nodeType": "filter",
+        "props": {
+          "expr": "log_temperature < 1.6"
+        }
+      },
+      "pick": {
+        "type": "operator",
+        "nodeType": "pick",
+        "props": {
+          "fields": ["log_temperature as temp", "humidity"]
+        }
+      },
+      "mqttout": {
+        "type": "sink",
+        "nodeType": "mqtt",
+        "props": {
+          "server": "tcp://${mqtt_srv}:1883",
+          "topic": "devices/result"
+        }
+      }
+    },
+    "topo": {
+      "sources": ["demo"],
+      "edges": {
+        "demo": ["humidityFilter"],
+        "humidityFilter": ["logfunc"],
+        "logfunc": ["tempFilter"],
+        "tempFilter": ["pick"],
+        "pick": ["mqttout"]
+      }
+    }
+  }
+}
+```
+
+The `graph` property is a JSON structure with `nodes` to define the nodes presented in the graph and `topo` to define the edges between nodes. The node type can be a built-in node type such as the window node or filter node, or a user-defined node from plugins. Please refer to [graph rule](./graph_rule.md) for more details.
+
+## Options
+
+The current options include:
+
+| Option name        | Type & Default Value | Description                                                                                                                                                                                                                                                                                                                                       |
+|--------------------|----------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| isEventTime        | boolean: false       | Whether to use event time or processing time as the timestamp for an event. If event time is used, the timestamp will be extracted from the payload. The timestamp field must be specified by the [stream](../../sqls/streams.md) definition.                                                                                                     |
+| lateTolerance      | int64:0              | When working with event-time windowing, elements may arrive late. LateTolerance specifies how much time (in milliseconds) elements can be late before they are dropped. By default, the value is 0, which means late elements are dropped.                                                                                  |
+| concurrency        | int: 1               | A rule is processed by several phases of plans according to the sql statement. This option will specify how many instances will be run for each plan. If the value is bigger than 1, the order of the messages may not be retained.                                                                                                               |
+| bufferLength       | int: 1024            | Specify how many messages can be buffered in memory for each plan. If the buffered messages exceed the limit, the plan will block message receiving until the buffered messages have been sent out so that the buffered size is less than the limit. A bigger value will accommodate more throughput but will also take up more memory footprint. |
+| sendMetaToSink     | bool:false           | Specify whether the metadata of an event will be sent to the sink. If true, the sink can get the metadata information.                                                                                                                                                                                                                           |
+| sendError          | bool: true           | Whether to send the error to sink. If true, any runtime error will be sent through the whole rule into sinks. Otherwise, the error will only be printed out in the log.                                                                                                                                                                           |
+| qos                | int:0                | Specify the qos of the stream. The options are 0: At most once; 1: At least once and 2: Exactly once. If qos is bigger than 0, the checkpoint mechanism will be activated to save states periodically so that the rule can be resumed from errors.                                                                                                |
+| checkpointInterval | int:300000           | Specify the time interval in milliseconds to trigger a checkpoint. This is only effective when qos is bigger than 0.                                                                                                                                                                                                                              |
+| restartStrategy    | struct               | Specify the strategy for automatically restarting a rule after failures. This can help to recover from transient failures without manual operations. Please check [Rule Restart Strategy](#rule-restart-strategy) for detailed configuration items.                                                                                                          |
+
+For detail about `qos` and `checkpointInterval`, please check [state and fault tolerance](./state_and_fault_tolerance.md).
+
+The rule options can be defined globally in `etc/kuiper.yaml` under the `rules` section. The options defined in the rule json will override the global setting.
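+
+As a sketch, the options are set in the `options` object of the rule JSON. For example, to enable at-least-once processing with a one-minute checkpoint interval (the values here are illustrative):
+
+```json
+{
+  "id": "rule1",
+  "sql": "SELECT * FROM demo",
+  "actions": [{"log": {}}],
+  "options": {
+    "qos": 1,
+    "checkpointInterval": 60000
+  }
+}
+```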
+
+### Rule Restart Strategy
+
+The restart strategy options include:
+
+| Option name  | Type & Default Value | Description                                                                                                                           |
+|--------------|----------------------|---------------------------------------------------------------------------------------------------------------------------------------|
+| attempts     | int: 0               | The maximum number of retries. If set to 0, the rule will fail immediately without retrying.                                                |
+| delay        | int: 1000            | The default interval in millisecond to retry. If `multiplier` is not set, the retry interval will be fixed to this value.             |
+| maxDelay     | int: 30000           | The maximum interval in millisecond to retry. Only effective when `multiplier` is set so that the delay will increase for each retry. |
+| multiplier   | float: 2             | The multiplier by which the retry interval increases.                                                                                             |
+| jitterFactor | float: 0.1           | The ratio of random value to be added to or subtracted from the delay to prevent restarting multiple rules at the same time.                |
+
+The default values can be changed by editing the `etc/kuiper.yaml` file.
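+
+As an illustrative sketch based on the options above, a per-rule restart strategy can be set inside the rule `options`. With the values below, the retry interval roughly doubles from 1 second up to the 30-second cap, each delay adjusted by up to ±10% jitter:
+
+```json
+{
+  "options": {
+    "restartStrategy": {
+      "attempts": 5,
+      "delay": 1000,
+      "maxDelay": 30000,
+      "multiplier": 2,
+      "jitterFactor": 0.1
+    }
+  }
+}
+```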

+ 1 - 1
docs/en_US/rules/rule_pipeline.md

@@ -1,6 +1,6 @@
 # Rule Pipeline
 
-We can form rule pipelines by importing results of prior rule into the following rule. This is possible by employing intermediate storage or MQ such as mqtt broker. By using the pair of [memory source](./sources/builtin/memory.md) and [sink](./sinks/builtin/memory.md), we can create rule pipelines without external dependencies.
+We can form rule pipelines by importing results of prior rule into the following rule. This is possible by employing intermediate storage or MQ such as mqtt broker. By using the pair of [memory source](../sources/builtin/memory.md) and [sink](../sinks/builtin/memory.md), we can create rule pipelines without external dependencies.
 
 ## Usage
 

+ 1 - 1
docs/en_US/rules/state_and_fault_tolerance.md

@@ -2,7 +2,7 @@
 
 eKuiper supports stateful rule stream. There are two kinds of states in eKuiper:
 1. Internal state for window operation and rewindable source
-2. User state exposed to extensions with stream context, check [state storage](../extension/native/overview.md#state-storage).
+2. User state exposed to extensions with stream context, check [state storage](../../extension/native/overview.md#state-storage).
 
 ## Fault Tolerance
 

The file diff has been suppressed because it is too large
+ 4 - 4
docs/en_US/tutorials/usage/protobuf_tutorial.md


docs/en_US/tutorials/usage/resources/action_mqtt.png → docs/en_US/guide/serialization/resources/action_mqtt.png


docs/en_US/tutorials/usage/resources/action_protobuf.png → docs/en_US/guide/serialization/resources/action_protobuf.png


docs/en_US/tutorials/usage/resources/create_detail.png → docs/en_US/guide/serialization/resources/create_detail.png


docs/en_US/tutorials/usage/resources/create_json_stream.png → docs/en_US/guide/serialization/resources/create_json_stream.png


docs/en_US/tutorials/usage/resources/create_proto_stream.png → docs/en_US/guide/serialization/resources/create_proto_stream.png


docs/en_US/tutorials/usage/resources/create_schema.png → docs/en_US/guide/serialization/resources/create_schema.png


docs/en_US/tutorials/usage/resources/list_schema.png → docs/en_US/guide/serialization/resources/list_schema.png


docs/en_US/tutorials/usage/resources/proto_src_rule.png → docs/en_US/guide/serialization/resources/proto_src_rule.png


docs/en_US/tutorials/usage/resources/receive_json.png → docs/en_US/guide/serialization/resources/receive_json.png


docs/en_US/tutorials/usage/resources/receive_protobuf.png → docs/en_US/guide/serialization/resources/receive_protobuf.png


docs/en_US/tutorials/usage/resources/source_send.png → docs/en_US/guide/serialization/resources/source_send.png


+ 3 - 3
docs/en_US/rules/codecs.md

@@ -1,4 +1,4 @@
-# Codecs
+# Serialization
 
 The eKuiper uses a map based data structure internally during computation, so source/sink connections to external systems usually require codecs to convert the format. In source/sink, you can specify the codec scheme to be used by configuring the parameters `format` and `schemaId`.
 
@@ -116,5 +116,5 @@ When eKuiper starts, it will scan this configuration folder and automatically re
 
 Users can use the schema registry API to add, delete, and check schemas at runtime. For more information, please refer to:
 
-- [schema registry REST API](../operation/restapi/schemas.md)
-- [schema registry CLI](../operation/cli/schemas.md)
+- [schema registry REST API](../../api/restapi/schemas.md)
+- [schema registry CLI](../../api/cli/schemas.md)

docs/en_US/rules/sinks/builtin/edgex.md → docs/en_US/guide/sinks/builtin/edgex.md


docs/en_US/rules/sinks/builtin/log.md → docs/en_US/guide/sinks/builtin/log.md


+ 1 - 1
docs/en_US/rules/sinks/builtin/memory.md

@@ -2,7 +2,7 @@
 
 <span style="background:green;color:white">updatable</span>
 
-The action is used to flush the result into an in-memory topic so that it can be consumed by the [memory source](../../sources/builtin/memory.md). The topic is like pubsub topic such as mqtt, so that there could be multiple memory sinks which publish to the same topic and multiple memory sources which subscribe to the same topic. The typical usage for memory action is to form [rule pipelines](../../rule_pipeline.md).
+The action is used to flush the result into an in-memory topic so that it can be consumed by the [memory source](../../sources/builtin/memory.md). The topic is like pubsub topic such as mqtt, so that there could be multiple memory sinks which publish to the same topic and multiple memory sources which subscribe to the same topic. The typical usage for memory action is to form [rule pipelines](../../rules/rule_pipeline.md).
 
 | Property name | Optional | Description                                                                                                        |
 |---------------|----------|--------------------------------------------------------------------------------------------------------------------|

docs/en_US/rules/sinks/builtin/mqtt.md → docs/en_US/guide/sinks/builtin/mqtt.md


docs/en_US/rules/sinks/builtin/neuron.md → docs/en_US/guide/sinks/builtin/neuron.md


docs/en_US/rules/sinks/builtin/nop.md → docs/en_US/guide/sinks/builtin/nop.md


docs/en_US/rules/sinks/builtin/redis.md → docs/en_US/guide/sinks/builtin/redis.md


The file diff has been suppressed because it is too large
+ 1 - 1
docs/en_US/rules/sinks/builtin/rest.md


The file diff has been suppressed because it is too large
+ 80 - 8
docs/en_US/rules/data_template.md


The file diff has been suppressed because it is too large
+ 195 - 0
docs/en_US/guide/sinks/overview.md


docs/en_US/rules/sinks/plugin/file.md → docs/en_US/guide/sinks/plugin/file.md


docs/en_US/rules/sinks/plugin/image.md → docs/en_US/guide/sinks/plugin/image.md


docs/en_US/rules/sinks/plugin/influx.md → docs/en_US/guide/sinks/plugin/influx.md


docs/en_US/rules/sinks/plugin/influx2.md → docs/en_US/guide/sinks/plugin/influx2.md


docs/en_US/rules/sinks/plugin/sql.md → docs/en_US/guide/sinks/plugin/sql.md


+ 2 - 2
docs/en_US/rules/sinks/plugin/tdengine.md

@@ -20,12 +20,12 @@ As the tdengine database requires a timestamp field in the table, the user must
 | user           | string   | true     | Username, default to `root`.                                                                                                                                     |
 | password       | string   | true     | Password, default to `taosdata`.                                                                                                                                 |
 | database       | string   | false    | Database name.                                                                                                                                                   |
-| table          | string   | false    | Table Name, could be a [dynamic property](../../overview.md#dynamic-properties).                                                                                 |
+| table          | string   | false    | Table Name, could be a [dynamic property](../../rules/overview.md#dynamic-properties).                                                                           |
 | fields         | []string | true     | The fields to be inserted to. The result map and the database should both have these fields. If not specified, all fields in the result map will be inserted.    |
 | provideTs      | Bool     | true     | Whether the user provides a timestamp field, default to false.                                                                                                   |
 | tsFieldName    | String   | false    | Timestamp field name                                                                                                                                             |
 | tagFields      | []String | true     | The result fields to be used as the tag values in order. If sTable is specified, this is required.                                                               |
-| sTable         | String   | true     | The super table to be use, could be a [dynamic property](../../overview.md#dynamic-properties).                                                                  |
+| sTable         | String   | true     | The super table to be use, could be a [dynamic property](../../rules/overview.md#dynamic-properties).                                                            |
 | tableDataField | String   | true     | Write the nested values of the tableDataField into database.                                                                                                     |
 
 ## Operation example

docs/en_US/rules/sinks/plugin/zmq.md → docs/en_US/guide/sinks/plugin/zmq.md


docs/en_US/rules/sources/builtin/edgex.md → docs/en_US/guide/sources/builtin/edgex.md


+ 1 - 1
docs/en_US/rules/sources/builtin/file.md

@@ -16,7 +16,7 @@ create table table1 (
 ) WITH (DATASOURCE="lookup.json", FORMAT="json", TYPE="file");
 ```
 
-You can use [cli](../../../operation/cli/tables.md) or [rest api](../../../operation/restapi/tables.md) to manage the tables.
+You can use [cli](../../../api/cli/tables.md) or [rest api](../../../api/restapi/tables.md) to manage the tables.
 
 The configure file for the file source is in */etc/sources/file.yaml* in which the path to the file can be specified.
 

docs/en_US/rules/sources/builtin/http_pull.md → docs/en_US/guide/sources/builtin/http_pull.md


docs/en_US/rules/sources/builtin/http_push.md → docs/en_US/guide/sources/builtin/http_push.md


The file diff has been suppressed because it is too large
+ 2 - 2
docs/en_US/rules/sources/builtin/memory.md


docs/en_US/rules/sources/builtin/mqtt.md → docs/en_US/guide/sources/builtin/mqtt.md


docs/en_US/rules/sources/builtin/neuron.md → docs/en_US/guide/sources/builtin/neuron.md


+ 1 - 1
docs/en_US/rules/sources/builtin/redis.md

@@ -8,7 +8,7 @@ eKuiper provides built-in support for looking up data in redis. Notice that, the
 create table table1 () WITH (DATASOURCE="0", FORMAT="json", TYPE="redis", KIND="lookup");
 ```
 
-You can use [cli](../../../operation/cli/tables.md) or [rest api](../../../operation/restapi/tables.md) to manage the tables.
+You can use [cli](../../../api/cli/tables.md) or [rest api](../../../api/restapi/tables.md) to manage the tables.
 
 The configure file for the redis source is in */etc/sources/redis.yaml* in which the path to the file can be specified.
 

+ 11 - 2
docs/en_US/rules/sources/overview.md

@@ -1,7 +1,16 @@
-# Available Sources
+# Source Connectors
 
 In the eKuiper source code, there are built-in sources and sources in extension.
 
+## Ingestion Mode
+
+The source connector provides the connection to an external system to load data in. Regarding the data loading mechanism, there are two modes:
+
+- Scan: load the data events one by one, like an event-driven stream. This mode of source can be used in a stream or scan table.
+- Lookup: refer to external content when needed; only used in lookup tables.
+
+Each source supports one or both modes. On each source's page, a badge shows which modes are supported.
+
 ## Built-in Sources
 
 Users can directly use the built-in sources in the standard eKuiper instance. The list of built-in sources are:
@@ -28,4 +37,4 @@ The list of predefined source plugins:
 
 ## Use of sources
 
-The user uses sources by means of streams or tables. The type `TYPE` property needs to be set to the name of the desired source in the stream properties created. The user can also change the behavior of the source during stream creation by configuring various general source attributes, such as the decoding type (default is JSON), etc. For the general properties and creation syntax supported by creating streams, please refer to the [Stream Specification](../../sqls/streams.md).
+The user uses sources by means of streams or tables. The `TYPE` property needs to be set to the name of the desired source in the created stream's properties. The user can also change the behavior of the source during stream creation by configuring various general source attributes, such as the decoding type (the default is JSON). For the general properties and creation syntax supported when creating streams, please refer to the [Stream Specification](../streams/overview.md).

docs/en_US/rules/sources/plugin/random.md → docs/en_US/guide/sources/plugin/random.md


docs/en_US/rules/sources/plugin/sql.md → docs/en_US/guide/sources/plugin/sql.md


docs/en_US/rules/sources/plugin/video.md → docs/en_US/guide/sources/plugin/video.md


docs/en_US/rules/sources/plugin/zmq.md → docs/en_US/guide/sources/plugin/zmq.md


The file diff has been suppressed because it is too large
+ 148 - 0
docs/en_US/guide/streams/overview.md


docs/en_US/tutorials/table/install_sql_source.png → docs/en_US/guide/tables/install_sql_source.png


+ 2 - 2
docs/en_US/tutorials/table/lookup.md

@@ -1,4 +1,4 @@
-# Stream Batch Integrated Calculation
+# Lookup Table Scenarios
 
 Not all data will change often, even in real-time computing. In some cases, you may need to supplement the stream data with externally stored static data. For example, user metadata may be stored in a relational database, and the only data in the stream data is data that changes in real time, requiring a connection between the stream data and the batch data in the database to make up the complete data.
 
@@ -106,7 +106,7 @@ Streaming data changes frequently and has a large amount of data, and usually co
 
 This scenario will use MySQL as an external table data storage location. eKuiper provides a pre-compiled SQL source plugin to access MySQL data and use it as a lookup table. So, before starting the tutorial, we need to install the SQL source plugin. Using eKuiper manager administration console, you can directly click Create Plugin in extension management tab and select SQL source plugin to install as shown below.
 
-! [Install SQL source](./install_sql_source.png)
+![Install SQL source](./install_sql_source.png)
 
 This scenario will introduce how to connect to a relational database using MySQL as an example. The user needs to start a MySQL instance. Create table `devices` in MySQL, which contains fields `id`, `name`, `deviceKind` and write the content in advance.
 

+ 54 - 0
docs/en_US/guide/tables/overview.md

@@ -0,0 +1,54 @@
+# Table
+
+eKuiper streams are unbounded and immutable; any new data is appended to the current stream for processing. A **Table** is provided to represent the current state of the stream. It can be considered a snapshot of the stream. Users can use a table to retain a batch of data for processing.
+
+There are two kinds of table:
+
+- Scan table: accumulates the data in memory. It is suitable for smaller datasets whose content does NOT need to be shared between rules.
+- Lookup table: refers to external table content. It is suitable for bigger datasets and for sharing table content across rules.
+
+## Syntax
+
+Table supports almost the same syntax as streams. To create a table, run the SQL below:
+
+```sql
+CREATE TABLE   
+    table_name   
+    ( column_name <data_type> [ ,...n ] )
+    WITH ( property_name = expression [, ...] );
+```
+
+Table supports the same [data types](../streams/overview.md#data-types) as streams.
+
+Table also supports all [the properties of the stream](../streams/overview.md#stream-properties). Thus, all source types are also supported in tables. Many sources are not batched and have one event at any given time point, which means the table will always have only one event. The additional property `RETAIN_SIZE` specifies the size of the table snapshot so that the table can hold an arbitrary amount of history data.
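+
+For example, the sketch below creates a table over an MQTT source that keeps the latest 10 events as its snapshot (the topic name and fields are illustrative assumptions):
+
+```sql
+CREATE TABLE demoTable (
+    id BIGINT,
+    temperature FLOAT
+) WITH (DATASOURCE="demoTopic", FORMAT="JSON", TYPE="mqtt", RETAIN_SIZE="10");
+```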
+
+### Lookup Table Syntax
+
+The syntax is the same as creating a normal scan table; just specify the kind property to be `lookup`. Below is an example that creates a lookup table which binds to redis database 0.
+
+```sql
+CREATE TABLE alertTable() WITH (DATASOURCE="0", TYPE="redis", KIND="lookup")
+```
+
+Currently, only the `memory`, `redis` and `sql` sources can be lookup tables.
+
+### Table properties
+
+| Property name | Optional | Description                                                                                                                                                    |
+|---------------|----------|----------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| DATASOURCE    | false    | The value is determined by source type. The topic names list if it's a MQTT data source. Please refer to related document for other sources.                   |
+| FORMAT        | true     | The data format, currently the value can be "JSON", "PROTOBUF" and "BINARY". The default is "JSON". Check [Binary Stream](../streams/overview.md#binary-stream) for more detail.     |
+| SCHEMAID      | true     | The schema to be used when decoding the events. Currently, only use when format is PROTOBUF.                                                                   |
+| KEY           | true     | The primary key of the table. For example, for SQL source key specifies the primary key in the SQL table. It is not obeyed by all source types.                |
+| TYPE          | true     | The source type. Each source type may support one or both kinds of tables. Please refer to the related documents.                                              |
+| CONF_KEY      | true     | If additional configuration items are required, then specify the config key here. See [MQTT stream](../sources/builtin/mqtt.md) for more info. |
+| KIND          | true     | The table kind, could be `scan` or `lookup`. If not specified, the default value is `scan`.                                                                    |
+
+
+## Usage scenarios
+
+Table is a way to keep a large batch of state for both the scan and lookup types. A scan table keeps state in memory, while a lookup table keeps it externally and possibly persisted. A scan table is easier to set up, while a lookup table can easily connect to existing persisted state. Both types are suitable for stream-batch integrated calculation.
+
+Please check the links below for some typical scenarios.
+
+- [Scan table scenarios](scan.md)
+- [Lookup table scenarios](lookup.md)

+ 52 - 0
docs/en_US/guide/tables/scan.md

@@ -0,0 +1,52 @@
+# Scan Table Scenarios
+
+Typically, a table is joined with a stream, with or without a window. When joining with a stream, the table data won't affect the downstream data; it is treated as static reference data, although it may be updated internally.
+
+## Enrich data
+
+A typical usage for a table is to act as a lookup table. A sample SQL is as below:
+
+```sql
+CREATE TABLE table1 (
+		id BIGINT,
+		name STRING
+	) WITH (DATASOURCE="lookup.json", FORMAT="JSON", TYPE="file");
+
+SELECT * FROM demo INNER JOIN table1 on demo.id = table1.id
+```
+
+In this example, a table `table1` is created to read json data from the file *lookup.json*. Then in the rule, `table1` is joined with the stream `demo` so that the stream can look up the name by the id.
+
+The content of *lookup.json* file should be an array of objects. Below is an example:
+
+```json
+[
+  {
+    "id": 1541152486013,
+    "name": "name1"
+  },
+  {
+    "id": 1541152487632,
+    "name": "name2"
+  },
+  {
+    "id": 1541152489252,
+    "name": "name3"
+  }
+]
+```
+
+## Filter by history state
+
+In some scenarios, we may have one event stream for data and another event stream as the control information.
+
+```sql
+CREATE TABLE stateTable (
+		id BIGINT,
+		triggered bool
+	) WITH (DATASOURCE="myTopic", FORMAT="JSON", TYPE="mqtt");
+
+SELECT * FROM demo LEFT JOIN stateTable on demo.id = stateTable.id  WHERE triggered=true
+```
+
+In this example, a table `stateTable` is created to record the trigger state from the mqtt topic *myTopic*. In the rule, the data of the `demo` stream is filtered by the current trigger state.

+ 230 - 0
docs/en_US/installation.md

@@ -0,0 +1,230 @@
+# Installation
+
+eKuiper provides docker images, binary packages and a helm chart for installation.
+
+## Running eKuiper in Docker
+
+Please make sure docker has been installed before running.
+
+1. Get docker image.
+   ```shell
+   docker pull lfedge/ekuiper:x.x.x
+   ```
+2. Start docker container.
+   ```shell
+   docker run -p 9081:9081 -d --name kuiper -e MQTT_SOURCE__DEFAULT__SERVER=tcp://broker.emqx.io:1883 lfedge/ekuiper:x.x.x
+   ```
+
+In this example, we specify the default MQTT broker via environment variable to `broker.emqx.io`, which is a public MQTT test server hosted by [EMQ](https://www.emqx.io).
+
+For more configuration and docker image tags, please check [lfedge/ekuiper in docker hub](https://hub.docker.com/r/lfedge/ekuiper).
+
+## Running eKuiper with management console
+
+eKuiper manager is a free eKuiper management web console provided as a docker image. We can use docker compose to run both eKuiper and eKuiper manager at once for ease of use.
+
+Please make sure docker compose has been installed before running.
+
+1. Create `docker-compose.yaml` file.
+   ```yaml
+   version: '3.4'
+
+   services:
+     manager:
+       image: emqx/ekuiper-manager:x.x.x
+       container_name: ekuiper-manager
+       ports:
+         - "9082:9082"
+       restart: unless-stopped
+     ekuiper:
+       image: lfedge/ekuiper:x.x.x
+       ports:
+         - "9081:9081"
+         - "127.0.0.1:20498:20498"
+       container_name: ekuiper
+       hostname: ekuiper
+       restart: unless-stopped
+       user: root
+       volumes:
+         - /tmp/data:/kuiper/data
+         - /tmp/log:/kuiper/log
+         - /tmp/plugins:/kuiper/plugins
+       environment:
+         MQTT_SOURCE__DEFAULT__SERVER: "tcp://broker.emqx.io:1883"
+         KUIPER__BASIC__CONSOLELOG: "true"
+         KUIPER__BASIC__IGNORECASE: "false"
+   ```
+2. Start docker-compose cluster.
+   ```shell
+   $ docker-compose -p my_ekuiper up -d
+   ```
+3. Check the docker container status and make sure the two containers are started.
+   ```shell
+   $ docker ps
+   CONTAINER ID   IMAGE                         COMMAND                  CREATED              STATUS                  PORTS                                                NAMES
+   e2dbcd4c1f92   lfedge/ekuiper:latest          "/usr/bin/docker-ent…"   7 seconds ago        Up Less than a second   0.0.0.0:9081->9081/tcp, 127.0.0.1:20498->20498/tcp   ekuiper
+   fa7c33b3e114   emqx/ekuiper-manager:latest   "/usr/bin/docker-ent…"   About a minute ago   Up 59 seconds           0.0.0.0:9082->9082/tcp                               manager
+   ```
+
+Please check [use of eKuiper management console](./operation/manager-ui/overview.md) to set up and configure the eKuiper manager.
+
+## Install From Zip
+
+eKuiper binary packages are released for the below operating systems with AMD64, ARM and ARM64 support:
+
+- CentOS 7 (EL7)
+- CentOS 8 (EL8)
+- Raspbian 10
+- Debian 9
+- Debian 10
+- Ubuntu 16.04
+- Ubuntu 18.04
+- Ubuntu 20.04
+- macOS
+
+For other operating systems such as Windows, users can [compile from source code manually](#compilation).
+
+1. Download eKuiper zip or tar for your CPU architecture from [ekuiper.org](https://ekuiper.org/downloads) or [Github](https://github.com/lf-edge/ekuiper/releases).
+2. Unzip the installation file:
+    ```shell
+    unzip kuiper-x.x.x-linux-amd64.zip
+    ```
+3. Start eKuiper.
+    ```shell
+    $ bin/kuiperd
+    ```
+4. Remove eKuiper. Simply delete the eKuiper directory.
+
+After installation, all the files are inside the unzipped directory. Please check [installed directory structure](#installation-structure) for details.
+
+## Install from package
+
+1. Download eKuiper package for your CPU architecture from [ekuiper.org](https://ekuiper.org/downloads) or [Github](https://github.com/lf-edge/ekuiper/releases).
+2. Install eKuiper.
+   - DEB package:
+     ```shell
+     # for debian/ubuntu
+     $ sudo apt install ./kuiper-x.x.x-linux-amd64.deb
+     ```   
+   - RPM package:
+     ```shell
+     # for CentOS
+     $ sudo rpm -ivh kuiper-x.x.x-linux-amd64.rpm
+     ```   
+3. Start eKuiper.
+   - quick start
+     ```shell
+     $ sudo kuiperd
+     ```   
+   - systemctl
+     ```shell
+     sudo systemctl start kuiper
+     ```
+4. Remove eKuiper.
+   - DEB:
+     ```shell
+     sudo apt remove --purge kuiper
+     ```
+   - RPM:
+     ```shell
+     sudo yum remove kuiper
+     ```
+
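+When managed by systemd, eKuiper can also be configured to start on boot. A sketch, using the `kuiper` unit installed by the package:
+
+```shell
+# Enable the service so it starts automatically on boot
+sudo systemctl enable kuiper
+# Check the service status
+sudo systemctl status kuiper
+```
+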
+When installed from a package, the eKuiper files are not all under one directory. The installation structure is as below:
+
+```
+/usr/lib/kuiper/bin
+  kuiperd
+  kuiper
+/etc/kuiper
+  ...
+/var/lib/kuiper/data
+  ...
+/var/lib/kuiper/plugins
+  ...
+/var/log/kuiper
+   ...
+```
+
+## Install via Helm (K8s, K3s)
+
+1. Add helm repository.
+   ```shell
+    $ helm repo add emqx https://repos.emqx.io/charts
+    $ helm repo update
+   ```
+2. Query eKuiper.
+   ```shell
+    $ helm search repo emqx
+    NAME          CHART VERSION  APP VERSION  DESCRIPTION
+    emqx/emqx     v4.0.0         v4.0.0       A Helm chart for EMQX
+    emqx/emqx-ee  v4.0.0         v4.0.0       A Helm chart for EMQX
+    emqx/ekuiper  0.1.1          0.1.1        A lightweight IoT edge analytic software
+   ```
+3. Start eKuiper.
+   ```shell
+    $ helm install my-ekuiper emqx/ekuiper
+   ``` 
+4. View eKuiper status.
+   ```shell
+   $ kubectl get pods
+   NAME         READY  STATUS    RESTARTS  AGE
+   my-ekuiper-0 1/1    Running   0         56s
+   ```
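+
+To reach the eKuiper REST API from outside the cluster, you can forward the pod's port locally. A sketch, assuming the default REST port 9081 and the pod name shown in the example above:
+
+```shell
+# Forward local port 9081 to the eKuiper pod (pod name taken from the example output)
+$ kubectl port-forward my-ekuiper-0 9081:9081
+```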
+
+## Compile from source code
+
+1. Get the source code.
+   ```shell
+   $ git clone https://github.com/lf-edge/ekuiper.git
+   ```
+2. Compile. 
+   ```shell
+   $ make
+   ```
+3. Start eKuiper.
+   ```shell
+   $ cd _build/kuiper-x.x.x-linux-amd64/
+   $ bin/kuiperd
+   ```
+
+eKuiper allows tailoring the binary at compile time to get a customized feature set. Since it is written in Go, it also supports cross compilation. For details, please check [compilation](./operation/compile/compile.md).
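+
+For example, a cross-compilation sketch for an ARM64 Linux target (an assumption that the build honors the standard Go cross-compilation variables; the exact make targets may differ, see the compilation document):
+
+```shell
+# Assumption: the Makefile passes GOOS/GOARCH through to the Go toolchain
+$ GOOS=linux GOARCH=arm64 make
+```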
+
+## Installation structure
+
+Below is the directory structure after installation.
+
+```shell
+bin
+  kuiperd
+  kuiper
+etc
+  ...
+data
+  ...
+plugins
+  ...
+log
+  ...
+```
+
+### bin
+
+The `bin` directory includes all executable files, such as the eKuiper server `kuiperd` and the CLI client `kuiper`.
+
+### etc
+
+The `etc` directory contains the default configuration files of eKuiper, such as the global configuration file `kuiper.yaml` and the source configuration files like `mqtt_source.yaml`.
+
+### data
+
+This folder saves the persisted definitions of streams and rules. It also contains any user-defined configurations.
+
+### plugins
+
+eKuiper allows users to develop their own plugins and put them into this folder. See [extension](./extension/overview.md) for more information on how to extend eKuiper.
+
+### log
+
+All the log files are under this folder. The default log file name is `stream.log`.
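+
+For example, to follow the latest log entries after starting the server (path relative to the installation directory):
+
+```shell
+# Tail the default log file named in this section
+$ tail -f log/stream.log
+```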

docs/en_US/tutorials/deploy/add_service.png → docs/en_US/integrations/deploy/add_service.png


docs/en_US/tutorials/deploy/ekuiper_openyurt.png → docs/en_US/integrations/deploy/ekuiper_openyurt.png


+ 0 - 0
docs/en_US/tutorials/deploy/kmanager.yaml


Some files were not shown because too many files changed in this diff