
feat(doc): translate to zh_CN

Signed-off-by: Jiyong Huang <huangjy@emqx.io>
Jiyong Huang · 3 years ago
commit fee324bbab

+ 6 - 6
docs/directory.json

@@ -91,7 +91,7 @@
 					"title": "AI 教程",
 					"children": [
 						{
-							"title": "Label image by tensorflow lite model with eKuiper native plugin",
+							"title": "使用 eKuiper 原生插件实现图像标注",
 							"path": "tutorials/ai/tensorflow_lite_tutorial"
 						}
 					]
@@ -100,19 +100,19 @@
 					"title": "EdgeX Foundry 相关教程",
 					"children": [
 						{
-							"title": "EdgeX Foundry rule engine tutorial",
+							"title": "EdgeX Foundry 规则引擎教程",
 							"path": "edgex/edgex_rule_engine_tutorial"
 						},
 						{
-							"title": "Meta function for EdgeX stream",
+							"title": "使用 EdgeX 流的 meta 函数",
 							"path": "edgex/edgex_meta"
 						},
 						{
-							"title": "Command device with EdgeX eKuiper rules enginem",
+							"title": "EdgeX 规则引擎使用 command 服务",
 							"path": "edgex/edgex_rule_engine_command"
 						},
 						{
-							"title": "EdgeX source configuration command",
+							"title": "EdgeX 源配置教程",
 							"path": "edgex/edgex_source_tutorial"
 						}
 					]
@@ -618,7 +618,7 @@
 							"path": "edgex/edgex_meta"
 						},
 						{
-							"title": "Command device with EdgeX eKuiper rules enginem",
+							"title": "Command device with EdgeX eKuiper rules engine",
 							"path": "edgex/edgex_rule_engine_command"
 						},
 						{

+ 3 - 4
docs/en_US/concepts/ekuiper.md

@@ -52,8 +52,7 @@ eKuiper is designed to run in edge side either in edge gateway or edge device wi
 - Cross CPU and OS support: X86, ARM and PPC CPU arch; Linux distributions, OpenWrt Linux, macOS and Docker
 - Connect to different data source:MQTT, EdgeX, HTTP and file etc
 - SQL analytics: ANSI SQL queries for quick IoT data analytics
-- Sink to different destination: MQTT, EdgeX, HTTP, log, file and InfluxDB etc
-- Flexible approach to deploy analytic applications: Text-based rules for business logic implementation and deployment through rest-api
+- Sink to different destination: MQTT, EdgeX, HTTP, log, file and databases etc
+- Flexible approach to deploy analytic applications: Text-based rules for business logic implementation and deployment through REST API
 - Machine learning: Integrate machine learning algorithms and run against streaming data
-- Highly extensible: Python and Go language extension SDK for source, sink and function
-
+- Highly extensible: Python and Go language extension SDK for source, sink and function

+ 2 - 2
docs/en_US/concepts/sources/overview.md

@@ -1,6 +1,6 @@
 # Sources
 
-Sources are used to read data from external systems. The source can be unbounded streaming data named stream or bounded batch data named table. When using in a rule, at least one of  the source must be a stream.
+Sources are used to read data from external systems. The source can be unbounded streaming data named stream or bounded batch data named table. When using in a rule, at least one of the source must be a stream.
 
 The source basically defines how to connect to an external resource and fetch data from the resource in a streaming way. After fetching the data, common tasks like decode and transform by schema can be done by setting properties.
 
@@ -8,7 +8,7 @@ The source basically defines how to connect to an external resource and fetch da
 
 When define a source stream or table, it actually creates the logical definition instead of a physical running data input. The logical definition can then be used in rule's SQL in the `from` clause. The source only starts to run when any of the rules refer to it has started.
 
-By default, if multiple rules refer to the same source. Each rule will have its own, standalone source instance from other rules so that the rules are total separated. To boost performance when users want to process the same data across multiple rules, they can define the source as shared. Then the rules refer to the same shared source will share the same running source instance.
+By default, if multiple rules refer to the same source, each rule will have its own standalone source instance so that the rules are totally separated. To boost performance when users want to process the same data across multiple rules, they can define the source as [shared](../../sqls/streams.md#share-source-instance-across-rules). Then the rules referring to the same shared source will share the same running source instance.
 
 ## Decode
 

+ 2 - 2
docs/en_US/concepts/streaming/overview.md

@@ -10,7 +10,7 @@ Stream processing has the below characteristics:
 - Unbounded data processing: As applying to unbounded data, the stream processing itself is also unbounded. The workload can distribute evenly across time compared to batch processing.
 - Low-latency, near real-time: stream processing can process data once it is produced to get the result in a very low latency.
 
-Stream processing unifies applications and analytics. This simplifies the overall infrastructure, because many systems can be built on a common architecture, and also allows a developer to build applications that use analytical results to respond to insights in the data to take actiondirectly.
+Stream processing unifies applications and analytics. This simplifies the overall infrastructure, because many systems can be built on a common architecture, and also allows a developer to build applications that use analytical results to respond to insights in the data to take action directly.
 
 ## Edge Stream Processing
 
@@ -27,5 +27,5 @@ Stateful stream processing is a subset of stream processing in which the computa
 The state information can be found or managed by:
 
 - [Windows](./windowing.md)
-- State API
+- [State API](../../extension/native/overview.md#state-storage)
 

+ 91 - 95
docs/zh_CN/CONTRIBUTING.md

@@ -1,184 +1,180 @@
-# How to contribute
+# 如何贡献
 
-We're really glad you're reading this, because we need volunteer developers to help this project come to fruition.
+很高兴你能读到这篇文章,欢迎加入项目社区,帮助项目成长。
 
-## Did you find a bug?
+## 发现 bug?
 
-- **Ensure the bug was not already reported** by searching on GitHub under [Issues](https://github.com/lf-edge/ekuiper/issues).
-- If you're unable to find an open issue addressing the problem, [open a new one](https://github.com/lf-edge/ekuiper/issues/new). Be sure to include a **title and clear description**, as much relevant information as possible, and a **code sample** or an **executable test case** demonstrating the expected behavior that is not occurring.
+- **通过在 GitHub 的[问题](https://github.com/lf-edge/ekuiper/issues)下搜索,确保该错误尚未被报告**。
+- 如果你找不到解决该问题的公开问题,[开一个新问题](https://github.com/lf-edge/ekuiper/issues/new)。请确保包含**标题和清晰的描述**、尽可能多的相关信息,以及能够说明问题的**代码样本**或**可执行的测试用例**。
 
-## Code and doc contribution
+## 代码和文档贡献
 
-Welcome to contribute code to provide features or fix bugs. 
+欢迎贡献代码以提供功能或修复错误。
 
-### One time setup
+### 一次性设置
 
-We use GitHub pull request to review proposed code changes. So you'll need to obtain a GitHub account before making code contribution.
+我们使用 GitHub pull request 来审查提议的代码修改。所以你需要在做出代码贡献之前拥有一个 GitHub 账户。
 
-1. **Fork** eKuiper to your private repository. Click the `Fork` button in the top right corner of eKuiper repository.
-2. **Clone** the repository locally from your personal fork. `git clone https://github.com/<Github_user>/ekuiper.git`.
-3. Add eKuiper repo as additional Git remote so that you can sync between local repo and eKuiper.
-  ```shell
-  git remote add upstream https://github.com/lf-edge/ekuiper.git
-  ```
+1. **Fork** eKuiper 到你的个人仓库。点击 eKuiper 仓库右上角的 `Fork` 按钮。
+2. 从你的个人 fork 中**克隆**代码库:`git clone https://github.com/<Github_user>/ekuiper.git`。
+3. 添加 eKuiper repo 作为额外的 Git 远程仓库,这样你就可以在本地 repo 和 eKuiper 之间同步。
+   ```shell
+   git remote add upstream https://github.com/lf-edge/ekuiper.git
+   ```
 
-You can use your favorite IDE or editor to develop. You can find information in editor support for Go tools in [Editors and IDEs for GO](https://github.com/golang/go/wiki/IDEsAndTextEditorPlugins).
+你可以使用喜欢的 IDE 或编辑器进行开发,并可在 [Editors and IDEs for GO](https://github.com/golang/go/wiki/IDEsAndTextEditorPlugins) 中找到各编辑器对 Go 工具的支持信息。
 
-### Create a branch in your fork
+### 创建一个分支
 
-You’ll work on your contribution in a branch in your own (forked) repository. Create a local branch, initialized with the state of the branch you expect your changes to be merged into. The `master` branch is active development branch, so it's recommended to set `master` as base branch.
+你将在自己 repo 的一个分支中进行开发。创建一个本地分支,并将其初始化为你希望改动合并到的目标分支的状态。`master` 分支是活跃的开发分支,因此建议将 `master` 设为基础分支。
 
 ```shell
 $ git fetch upstream
 $ git checkout -b <my-branch> upstream/master
 ```
 
-### Code conventions
+### 代码规范
 
-- Use `go fmt` to format your code before commit code change. eKuiper Github Action CI pipeline reports error if it's
-  not format by `go fmt`.
-- Configuration key in config files uses camel case format.
+- 在提交代码变更之前,使用 `go fmt` 格式化你的代码。未经 `go fmt` 格式化的代码会导致 eKuiper 的 Github Action CI 流水线报错。
+- 配置文件中的配置键使用驼峰(camelCase)格式。
 
-### Debug your code
+### 调试你的代码
 
-Take GoLand as an example, developers can debug the code: 
+以 GoLand 为例,开发者可以这样调试代码:
 
-1. Debug the whole program. Make sure all directories mentioned in [Makefile](../../Makefile) build_prepare sections are created in your eKuiper root path. Add your breakpoints. Open `cmd/kuiperd/main.go`. In the main function, you'll find a green triangle in the ruler, click it and select debug. Then create your stream/rule that would run through your breakpoint, the debugger will pause there.
-2. To debug a small portion of code, we recommend writing a unit test and debug it. You can go to any test file and find the same green triangle to run in debug mode. For example, `pkg/cast/cast_test.go` TestMapConvert_Funcs can run as debug.
+1. 调试整个程序。确保 [Makefile](../../Makefile) build_prepare 部分提到的所有目录都已在你的 eKuiper 根路径中创建。添加断点,打开 `cmd/kuiperd/main.go`。在 main 函数中,你会看到行号边栏上有一个绿色三角形,点击它并选择调试。然后创建会经过断点的流/规则,调试器将在断点处暂停。
+2. 要调试一小部分代码,我们建议编写单元测试并调试它。你可以在任意测试文件中找到同样的绿色三角形,以调试模式运行。例如,`pkg/cast/cast_test.go` 中的 TestMapConvert_Funcs 即可以调试方式运行。
 
-### Testing
+### 测试
 
-The eKuiper project leverages Github actions to run unit test & FVT (functional verification test), so please take a
-look at the PR status result, and make sure that all of testcases run successfully.
+eKuiper 项目利用 Github actions 来运行单元测试和 FVT(功能验证测试),所以请看一下 PR 状态的运行结果,并确保所有的测试用例都能成功运行。
 
-- Write Golang unit testcases to test your code if necessary.
-- A set of [FVT testcases](../../test/README.md) will be triggered with any PR submission, so please make sure that these
-  testcases can be run successfully.
+- 如有必要,请编写 Golang 单元测试用例来测试你的代码。
+- [FVT 测试用例](../../test/README.md) 会随任何 PR 的提交而被触发,请确保这些测试用例能够成功运行。
 
-### Licensing
+### 许可
 
-All code contributed to eKuiper will be licensed under Apache License V2. You need to ensure every new files you are adding have the right license header.
+所有贡献给 eKuiper 的代码都将在 Apache License V2 下授权。你需要确保你添加的每个新文件都有正确的许可证头。
 
-### Sign-off commit
+### 签署提交(Signoff)
 
-The sign-off is to certify the origin of the commit. It is required to commit to this project. If you set
-your `user.name` and `user.email` git configs, you can sign your commit automatically with `git commit -s`. Each commit must be signed off.
+Signoff 是为了证明提交的来源,是向本项目提交代码的必要条件。如果你设置了 `user.name` 和 `user.email` 的 git 配置,就可以用 `git commit -s` 自动签署你的提交。每次提交都必须签署。
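+
+下面是一个签署提交的最小示意(假设尚未配置过 git 用户信息;姓名与邮箱取自本提交的签署信息,仅作示例):
+
+```shell
+# 一次性配置提交者信息,Signed-off-by 尾注将使用该信息
+git config user.name "Jiyong Huang"
+git config user.email "huangjy@emqx.io"
+
+# -s 会自动在提交信息末尾追加:
+# Signed-off-by: Jiyong Huang <huangjy@emqx.io>
+git commit -s -m "feat(doc): translate to zh_CN"
+```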
 
-### Syncing
+### 同步
 
-Periodically while you work, and certainly before submitting a pull request, you should update your branch with the most recent changes to the target branch. We prefer rebase than merge to avoid extraneous merge commits.
+在开发过程中,以及提交 PR 之前,你都应该用目标分支的最新改动来更新你的分支。我们倾向于使用 rebase 而不是 merge,以避免多余的合并提交。
 
 ```shell
 git fetch upstream
 git rebase upstream/master
 ```
 
-Then you can push to your forked repo. Assume the remove name for your forked is the default `origin`. If you have rebased the git history before the last push, add `-f` to force pushing the changes.
+假设你 fork 的 repo 的远程名称是默认的 `origin`,使用如下指令将改动推送到你 fork 的 repo。如果你在最后一次推送之后变更过 git 历史(rebase),请添加 `-f` 强制推送这些变化。
 
 ```shell
 git push origin -f
 ```
 
-### Submitting changes
+### 提交修改
 
-The `master` branch is active development branch, so it's recommended to set `master` as base branch, and also create PR
-against `master` branch.
+`master` 分支是活跃的开发分支,所以建议将 `master` 设为基础分支,并针对 `master` 分支创建 PR。
 
-Organize your commits to make a committer’s job easier when reviewing. Committers normally prefer multiple small pull requests, instead of a single large pull request. Within a pull request, a relatively small number of commits that break the problem into logical steps is preferred. For most pull requests, you'll squash your changes down to 1 commit. You can use the following command to re-order, squash, edit, or change description of individual commits.
+组织好你的提交,以便审查者审阅。审查者通常更喜欢多个小的 PR,而不是单个大的 PR。在一个 PR 中,最好用较少的提交把问题分解成合理的步骤。对于大多数 PR,你可以将修改压缩(squash)为一个提交。你可以使用下面的命令来重新排序、合并、编辑或修改单个提交的描述。
 
 ```shell
 git rebase -i upstream/master
 ```
 
-Make sure all your commits comply to the [commit message guidelines](#commit-message-guidelines).
+确保你的所有提交都符合[提交信息指南](#提交信息指南)。
 
-You'll then push to your branch on your forked repo and then navigate to eKuiper repo to create a pull request. Our GitHub repo provides automatic testing with GitHub action. Please make sure those tests pass. We will review the code after all tests passed.
+然后把分支推送到你 fork 的 repo,并导航到 eKuiper repo 创建 PR。我们的 GitHub repo 提供了基于 GitHub Actions 的自动化测试,请确保这些测试通过。我们将在所有测试通过后审查代码。
 
-### Commit Message Guidelines
+### 提交信息指南
 
-Each commit message consists of a **header**, a **body** and a **footer**. The header has a special format that includes a **type**, a **scope** and a **subject**:
+每条提交信息都由一个 **header** ,一个 **body** 和一个 **footer** 组成。header 包含三个部分:**类型**,**范围**和**主题**。
 
 ```
-<type>(<scope>): <subject>
-<BLANK LINE>
+<类型>(<范围>): <主题>
+<空行>
 <body>
-<BLANK LINE>
+<空行>
 <footer>
 ```
 
-The **header** with **type** is mandatory. The **scope** of the header is optional. This repository has no predefined scopes. A custom scope can be used for clarity if desired.
+**header** 中的**类型**为必填项,**范围**则是可选的。本仓库没有预定义的范围,如有需要,可以使用自定义范围以明确含义。
 
-Any line of the commit message cannot be longer 100 characters! This allows the message to be easier to read on GitHub as well as in various git tools.
+提交信息的任何一行都不能超过 100 个字符,这样可以使信息在 GitHub 以及各种 git 工具中更容易阅读。
 
-The footer should contain a [closing reference to an issue](https://help.github.com/articles/closing-issues-via-commit-messages/) if any.
+如果有的话,footer 应该包含一个 [对问题的关闭引用](https://help.github.com/articles/closing-issues-via-commit-messages/)。
 
-Example 1:
+例子1:
 
 ```
-feat: add Fuji release compose files
+feat: 添加 Fuji 发布的 compose 文件
 ```
 
 ```
-fix(script): correct run script to use the right ports
+fix(script): 纠正运行脚本以使用正确的端口
 
-Previously device services used wrong port numbers. This commit fixes the port numbers to use the latest port numbers.
+以前的设备服务使用了错误的端口号。这个提交修正了端口号,使用最新的端口号。
 
-Closes: #123, #245, #992
+Closes: #123, #245, #992
 ```
 
 #### Revert
 
-If the commit reverts a previous commit, it should begin with `revert: `, followed by the header of the reverted commit. In the body it should say: `This reverts commit <hash>.`, where the hash is the SHA of the commit being reverted.
+如果该提交恢复(revert)了之前的某个提交,它应该以 `revert: ` 开头,后接被恢复提交的 header。正文中应写明:`This reverts commit <hash>.`,其中 hash 是被恢复提交的 SHA 值。
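+
+一个符合该格式的提交信息示意(标题与 hash 仅为示例,hash 取自本页提交):
+
+```
+revert: feat(doc): translate to zh_CN
+
+This reverts commit fee324bbab.
+```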
 
-#### Type
+#### 类型
 
-Must be one of the following:
+必须是以下类型之一:
 
-- **feat**: New feature for the user, not a new feature for build script
-- **fix**: Bug fix for the user, not a fix to a build script
-- **docs**: Documentation only changes
-- **style**: Formatting, missing semi colons, etc; no production code change
-- **refactor**: Refactoring production code, eg. renaming a variable
-- **chore**: Updating grunt tasks etc; no production code change
-- **perf**: A code change that improves performance
-- **test**: Adding missing tests, refactoring tests; no production code change
-- **build**: Changes that affect the CI/CD pipeline or build system or external dependencies (example scopes: travis, jenkins, makefile)
-- **ci**: Changes provided by DevOps for CI purposes.
-- **revert**: Reverts a previous commit.
+- **feat**: 为用户提供的新功能,而不是构建脚本的新功能
+- **fix**: 为用户提供的错误修复,而不是对构建脚本的修复
+- **docs**: 只对文档进行修改
+- **style**: 格式化、补充缺失的分号等;不改变生产代码
+- **refactor**: 重构生产代码,例如重命名一个变量
+- **chore**: 更新脚本任务等;不改变生产代码
+- **perf**: 提高性能的代码变化
+- **test**: 添加缺失的测试、重构测试;不改变生产代码
+- **build**: 影响 CI/CD 流水线、构建系统或外部依赖的变化(范围示例:travis、jenkins、makefile)
+- **ci**: 由 DevOps 提供的用于 CI 目的的改动
+- **revert**: 恢复先前的提交
 
-#### Scope
+#### 范围
 
-There are no predefined scopes for this repository. A custom scope can be provided for clarity.
+这个版本库没有预定义的范围。为了清晰起见,可以提供一个自定义的范围。
 
-#### Subject
+#### 主题
 
-The subject contains a succinct description of the change:
+主题包含对修改的简洁描述:
 
-- use the imperative, present tense: "change" not "changed" nor "changes"
-- don't capitalize the first letter
-- no dot (.) at the end
+- 使用祈使句、现在时:用 "change" 而不是 "changed" 或 "changes"
+- 不要把首字母大写
+- 结尾不加句点(.)
 
-#### Body
+#### Body
 
-Just as in the **subject**, use the imperative, present tense: "change" not "changed" nor "changes". The body should include the motivation for the change and contrast this with previous behavior.
+与主题一样,使用祈使句、现在时:用 "change" 而不是 "changed" 或 "changes"。正文应该说明修改的动机,并与之前的行为进行对比。
 
-#### Footer
+#### Footer
 
-The footer should contain any information about **Breaking Changes** and is also the place to reference GitHub issues that this commit **Closes**.
+页脚应该包含关于**破坏性变化(Breaking Changes)**的信息,同时也是引用此提交**关闭**的 GitHub 问题的地方。
 
-**Breaking Changes** should start with the word `BREAKING CHANGE:` with a space or two newlines. The rest of the commit message is then used for this.
+**Breaking Changes** 应该以 `BREAKING CHANGE: ` 开头,后接一个空格或两个换行,提交信息的其余部分用于描述该变化。
 
-## Community Promotion
+## 社区推广
 
-Besides coding, other types of contributions are a great way to get involved. Welcome to contribute to this project by
-promoting it to the open source community and the world.
+除了编码,其他类型的贡献也是参与项目的好方法。欢迎通过向开源社区和世界推广本项目的方式做出贡献。
 
-The promotion contributions include but not limit to:
+推广贡献包括但不限于:
 
-- Integrate of eKuiepr to your open source project
-- Organize workshops or meetups about the project
-- Answer questions about the project on issues, slack or maillist
-- Write tutorials for how project can be used
-- Offer to mentor another contributor
+- 将 eKuiper 整合到你的开源项目中
+- 组织关于本项目的研讨会或聚会
+- 在 issues、slack 或邮件列表上回答关于本项目的问题
+- 撰写项目的使用教程
+- 为其他贡献者提供指导
 
-Thank you for taking the time to contribute!
+感谢你的贡献!

File diff suppressed because it is too large
+ 51 - 51
docs/zh_CN/README.md


File diff suppressed because it is too large
+ 38 - 39
docs/zh_CN/concepts/ekuiper.md


+ 14 - 14
docs/zh_CN/concepts/extensions.md

@@ -1,23 +1,23 @@
-# Extensions
+# 扩展
 
-eKuiper provides built-in sources, sinks and functions as the building block for the rule. However, it is impossible to cover all external system for source/sink connection such as user's system with private protocol. Moreover, the built-in function cannot cover all the computation needed for all users. Thus, customized source, sink and functions are needed in many cases. eKuiper provide extension mechanism for users to customize all these three aspects.
+eKuiper 提供了内置的源、动作和函数作为规则的构建模块。然而,它不可能覆盖所有外部系统的源/动作连接,例如采用私有协议的用户自有系统。此外,内置函数也无法涵盖所有用户需要的全部计算。因此,在很多情况下,用户需要自定义源、动作和函数。eKuiper 提供了扩展机制,让用户可以定制这三个方面。
 
-## Extension Points
+## 扩展点
 
-We support 3 extension points:
+我们支持 3 种扩展点:
 
-- Source: add new source type for eKuiper to consume data from. The new extended source can be used in the stream/table definition.
-- Sink: add new sink type for eKuiper to produce data to. The new extended sink can be used in the rule actions definition.
-- Function: add new function type for eKuiper to process data. The new extended function can be used in the rule SQL.
+- 源(source):为 eKuiper 添加新的源类型,以便从中获取数据。新的扩展源可以在流/表定义中使用。
+- 动作(sink):为 eKuiper 添加新的动作类型来发送数据。新的扩展动作可以在规则动作定义中使用。
+- 函数(function):为 eKuiper 添加新的函数类型来处理数据。新的扩展函数可以在规则 SQL 中使用。
 
-## Extension Types
+## 扩展类型
 
-We support 3 kinds of extension:
+我们支持 3 种类型的扩展:
 
-- [Go native plugin](../extension/native/overview.md): extend as a Go plugin. It is the most performant, but has a lot of limitation in development and deployment.
-- [Portable plugin](../extension/portable/overview.md) with Go or Python language, and it will support more languages later. It simplifies the development and deployment and has less limitations.
-- [External service](../extension/external/external_func.md): wrap existing external REST or rpc services as a eKuiper SQL function by configurations. It is a speedy way to extend by existing services. But it only supports function extension.
+- [Go 原生插件](../extension/native/overview.md):作为 Go 插件扩展。它的性能最好,但在开发和部署方面有很多限制。
+- [Portable 插件](../extension/portable/overview.md):使用 Go 或 Python 语言开发,以后会支持更多语言。它简化了开发和部署,限制较少。
+- [外部服务](../extension/external/external_func.md):通过配置将现有的外部 REST 或 RPC 服务包装成 eKuiper SQL 函数。这是利用现有服务进行扩展的快捷方式,但只支持函数扩展。
 
-## More Readings
+## 参考阅读
 
-- [Extension Reference](../extension/overview.md)
+- [扩展参考](../extension/overview.md)

+ 10 - 10
docs/zh_CN/concepts/rules.md

@@ -1,19 +1,19 @@
-# Rules
+# 规则
 
-Each rule represents a computing job to run in eKuiper. It defines the continuous streaming data source as the input, the computing logic and the result actions as the output.
+每条规则都代表一个在 eKuiper 中运行的计算任务。它定义了作为输入的连续流数据源、计算逻辑,以及作为输出的结果动作。
 
-## Rule Lifecycle
+## 规则生命周期
 
-Currently, eKuiper only supports stream processing rule, which means that at lease one of the rule source must be a continuous stream. Thus, the rule will run continuously once started and only stopped if the user send stop command explicitly. The rule may stop abnormally for errors or the eKuiper instance exits.
+目前,eKuiper 只支持流处理规则,这意味着规则的源中至少有一个必须是连续流。规则一旦启动就会连续运行,只有在用户明确发送停止命令时才会停止。规则也可能因为错误或 eKuiper 实例退出而异常停止。
 
-## Rules Relationship
+## 规则关系
 
-It is common to run multiple rules simultaneously. As eKuiper is a single instance process, the rules are running in the same memory space. However, there are separated in the runtime and the error in one rule should not affect others. Regarding workload, all rules share the same hardware resource. Each rule can specify the operator buffer to limit the processing rate to avoid taking all resources.
+同时运行多个规则是很常见的。由于 eKuiper 是单实例进程,这些规则运行在同一个内存空间中,但在运行时相互隔离,一个规则的错误不应影响其他规则。在工作负载方面,所有规则共享相同的硬件资源。每条规则可以指定算子缓冲区,以限制处理速率,避免占用所有资源。
 
-## Rule Pipeline
+## 规则流水线
 
-Multiple rules can form a processing pipeline by specifying a joint point in sink/source. For example, the first rule produce the result to a topic in memory sink and the other rule subscribe to that topic in its memory source. Besides the pair of memory sink/source, users can also use mqtt or other sink/source pair to connect rules.
+多个规则可以通过指定动作/源的衔接点形成一个处理流水线。例如,第一条规则将结果输出到内存动作的某个主题,另一条规则则在其内存源中订阅该主题。除了内存动作/源这一组合,用户还可以使用 mqtt 或其他动作/源的组合来连接规则。
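+
+下面是一个通过内存动作/源连接两条规则的最小示意(假设 eKuiper REST 服务运行在本地 9081 端口;流名、规则名与主题均为假设):
+
+```shell
+# 规则 1:将查询结果写入内存主题 ch1
+curl -X POST http://127.0.0.1:9081/rules -d '{
+  "id": "rule1",
+  "sql": "SELECT * FROM demo",
+  "actions": [{"memory": {"topic": "ch1"}}]
+}'
+
+# 创建订阅 ch1 的内存流,后续规则即可从该流读取,形成流水线
+curl -X POST http://127.0.0.1:9081/streams -d '{
+  "sql": "CREATE STREAM mem1 () WITH (DATASOURCE=\"ch1\", TYPE=\"memory\", FORMAT=\"JSON\")"
+}'
+```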
 
-## More Readings
+## 参考阅读
 
-- [Rule Reference](../rules/overview.md)
+- [规则参考](../rules/overview.md)

+ 7 - 7
docs/zh_CN/concepts/sinks.md

@@ -1,13 +1,13 @@
-# Sinks
+# 动作
 
-Sinks are used to write data to an external system. Sinks can be used to write control data to trigger an action. Sinks can also be used to write status data and save in an external storage.
+动作(sink)用于向外部系统写入数据。动作既可以写入控制数据以触发外部操作,也可以写入状态数据并保存到外部存储中。
 
-In a rule, the sink type are used as an action. A rule can have more than one actions and the differenct actions can be the same sink type.
+在规则中,sink 类型被用作动作。一个规则可以有多个动作,不同的动作可以是同一种 sink 类型。
 
-## Result Encoding
+## 结果编码
 
-The sink result is a string as always. It will be encoded into json string by default. Users can change the format by setting `dataTemplate` which leverage the go template syntax to format the result into a string. For even detail control of the result format, users can develop a sink extension.
+动作的结果始终是一个字符串。默认情况下,它会被编码为 json 字符串。用户可以通过设置 `dataTemplate` 属性来改变格式,该属性利用 go 模板语法将结果格式化为字符串。若需要更精细地控制结果格式,用户可以开发动作扩展。
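+
+以下为一个使用 `dataTemplate` 的规则示意(假设 eKuiper REST 服务运行在本地 9081 端口;规则 ID、SQL 与 MQTT 主题均为假设):
+
+```shell
+# dataTemplate 使用 go 模板语法,将每条结果格式化为自定义字符串
+curl -X POST http://127.0.0.1:9081/rules -d '{
+  "id": "ruleTemplateDemo",
+  "sql": "SELECT temperature FROM demo",
+  "actions": [{
+    "mqtt": {
+      "server": "tcp://127.0.0.1:1883",
+      "topic": "result",
+      "dataTemplate": "temperature is {{.temperature}}"
+    }
+  }]
+}'
+```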
 
-## More Readings
+## 参考阅读
 
-- [Sink Reference](../rules/sinks/overview.md)
+- [动作参考](../rules/sinks/overview.md)

+ 15 - 14
docs/zh_CN/concepts/sources/overview.md

@@ -1,30 +1,31 @@
-# Sources
+# Sources
 
-Sources are used to read data from external systems. The source can be unbounded streaming data named stream or bounded batch data named table. When using in a rule, at least one of  the source must be a stream.
+源(source)用于从外部系统中读取数据。数据源既可以是无界的流式数据,即流;也可以是有界的批量数据,即表。在规则中使用时,至少有一个源必须为流。
 
-The source basically defines how to connect to an external resource and fetch data from the resource in a streaming way. After fetching the data, common tasks like decode and transform by schema can be done by setting properties.
+源定义了如何连接到外部资源,然后采用流式方式获取数据。获取数据后,通常源还会根据定义的数据模型进行数据解码和转换。
 
-## Define and Run
+## 定义和运行
 
-When define a source stream or table, it actually creates the logical definition instead of a physical running data input. The logical definition can then be used in rule's SQL in the `from` clause. The source only starts to run when any of the rules refer to it has started.
+在 eKuiper 中,定义数据源的流或者表之后,系统实际上只是创建了一个数据源的逻辑定义而非真正物理运行的数据输入。此逻辑定义可在多个规则的 SQL 的 `from` 子句中使用。只有当使用了该定义的规则启动之后,数据流才会真正运行。
 
-By default, if multiple rules refer to the same source. Each rule will have its own, standalone source instance from other rules so that the rules are total separated. To boost performance when users want to process the same data across multiple rules, they can define the source as shared. Then the rules refer to the same shared source will share the same running source instance.
+默认情况下,多个规则使用同一个源的情况下,每个规则会启动一个独立的源的运行时,与其他规则中的同名源完全隔离。若多个规则需要使用完全相同的输入数据或者提高性能,源可定义为[共享源](../../sqls/streams.md#共享源实例),从而在多个规则中共享同一个实例。
 
-## Decode
+## 解码
 
-Users can define the format to decode by setting `format` property. Currently, only `json` and `binary` format are supported. For other formats, customized source must be developed.
+用户可以在创建源时通过指定 `format` 属性来定义解码方式。当前只支持 `json` 和 `binary` 两种格式。若需要支持其他编码格式,用户需要开发自定义源插件。
 
-## Schema
+## 数据结构
 
-Users can define the data schema like a common SQL table. In eKuiper runtime, the data will be validated and transformed according to the schema. To avoid conversion overhead if the data is fixed and clean or to consume unknown schema data, users can define schemaless source.
+用户可以像定义关系数据库表结构一样定义数据源的结构。在 eKuiper 的运行时中,数据会根据定义的结构进行验证和类型转换。若输入数据为预处理过的干净数据或者数据结构未知或不固定,用户可不定义数据结构,从而也可以避免数据转换的开销。
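+
+下面的示意展示了如何通过 REST API 创建流,并在 WITH 子句中指定 `format`、数据结构和共享选项(假设 eKuiper REST 服务运行在本地 9081 端口;流名与主题均为假设):
+
+```shell
+# 创建带数据结构定义的 JSON 流;SHARED 选项使多个规则共享同一源实例
+curl -X POST http://127.0.0.1:9081/streams -d '{
+  "sql": "CREATE STREAM demo (temperature FLOAT, humidity BIGINT) WITH (DATASOURCE=\"devices/sensor\", FORMAT=\"JSON\", TYPE=\"mqtt\", SHARED=\"true\")"
+}'
+
+# 不定义字段即为 schemaless 流,可避免类型转换开销:
+# CREATE STREAM demo2 () WITH (DATASOURCE="demo2", FORMAT="JSON", TYPE="mqtt")
+```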
 
-## Stream & Table
+## 流和表
 
-The source defines the external system connection. When using in a rule, users can define them as stream or table according to the processing mechanism. Check [stream](stream.md) and [table](table.md) for detail.
+源定义了与外部系统的连接方式。在规则中,根据数据使用逻辑,数据源可作为流或者表使用。
+详细信息请参见[流](stream.md)和[表](table.md)。
 
-## More Readings
+## 更多信息
 
-- [Source Reference](../../rules/sources/overview.md)
+- [数据源使用参考](../../rules/sources/overview.md)
 
 
 

+ 5 - 5
docs/zh_CN/concepts/sources/stream.md

@@ -1,10 +1,10 @@
-# Stream
+# 流
 
-A stream is the runtime form of a source in eKuiper. It must specify a source type to define how to connect to the external resource.
+在 eKuiper 中,流指的是数据源的一种运行时形态。流定义需要指定其数据源类型以定义与外部资源的连接方式。
 
-When using as a stream, the source must be unbounded. The stream acts like a trigger for the rule. Each event will trigger a calculation in the rule.
+数据源作为流使用时,源必须为无界的。在规则中,流的行为类似事件触发器。每个事件都会触发规则的一次计算。
 
-## More Readings
+## 更多信息
 
-- [Stream Reference](../../sqls/streams.md)
+- [流用法](../../sqls/streams.md)
 

+ 9 - 9
docs/zh_CN/concepts/sources/table.md

@@ -1,17 +1,17 @@
-# Table
+# 表
 
-In eKuiper, a table is a snapshot of the source data. In contrast to the common static tables that represent batch data, eKuiper tables can change over time.
+在 eKuiper 中,表是源数据在当前时间的快照。相比于表示批量数据的普通静态表,eKuiper 中的表可以随时间变化。
 
-The source for table can be either bounded or unbounded. For bounded source table, the content of the table is static. For unbounded table, the content of the table is dynamic.
+表的数据源既可以是无界的也可以是有界的。对于有界数据源来说,其表的内容为静态的。若表的数据源为无界数据流,则其内容会动态变化。
 
-## Table Updates
+## 表内容更新
 
-Currently, the table update in eKuiper is append-only. Users can specify the properties to limit the table size to avoid too much memory consumption.
+当前,表的内容更新仅支持追加。用户创建表时,可以指定参数限制表的大小,防止占用过多内存。
 
-## Table Usages
+## 表的使用
 
-Table cannot be used standalone in a rule. It is usually used to join with streams. It can be used to enrich stream data or as a switch for calculation.
+表不能在规则中单独使用,必须与流搭配,通常用于与流进行连接。表可用于补全流数据或作为计算的开关。
 
-## More Readings
+## 更多信息
 
-- [Table Reference](../../sqls/tables.md)
+- [表用法](../../sqls/tables.md)

+ 5 - 5
docs/zh_CN/concepts/sql.md

@@ -1,11 +1,11 @@
 # SQL
 
-The SQL language support in eKuiper includes Data Definition Language (DDL), Data Manipulation Language (DML) and Query Language. The SQL support in eKuiper is a subset of ANSI SQL and has some customized extensions.
+eKuiper 中的 SQL 语言支持包括数据定义语言(DDL)、数据操作语言(DML)和查询语言。eKuiper 中的 SQL 支持是 ANSI SQL 的一个子集,并有一些定制的扩展。
 
-## SQL in source definition
+## 源定义中的 SQL
 
-When create and manage stream or table source, SQL DDL and DML are used as the command payload. Check [streams](../sqls/streams.md) and [tables](../sqls/tables.md) for detail.
+当创建和管理流或表源时,SQL DDL 和 DML 被用作命令的有效载荷。详情请查看[流](../sqls/streams.md)和[表](../sqls/tables.md)。
 
-## SQL queries in rules
+## 规则中的 SQL 查询
 
-In rules, SQL queries are used to define the business logic. Please check [sql reference](../sqls/overview.md) for detail.
+在规则中,SQL 查询被用来定义业务逻辑。请查看 [SQL 参考](../sqls/overview.md)了解详情。

+ 6 - 6
docs/zh_CN/concepts/streaming/join.md

@@ -1,10 +1,10 @@
-# Join of sources
+# 多源连接
 
-Currently, join is the only way to merge multiple sources in eKuiper. It requires a way to align multiple sources and trigger the join result.
+目前,连接是 eKuiper 中合并多个数据源的唯一方法。它需要一种方法来对齐多个来源并触发连接结果。
 
-The supported joins in eKuiper include:
+eKuiper 支持的连接包括:
 
-- Join of streams: must do in a window.
-- Join of stream and table: the stream will be the trigger of join operation.
+- 多流的连接:必须在一个窗口中进行。
+- 流和表的连接:流将是连接操作的触发器。
 
-The supported join type includes LEFT, RIGHT, FULL & CROSS in eKuiper.
+eKuiper 支持的连接类型包括 LEFT、RIGHT、FULL 和 CROSS。
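+
+以下规则展示了窗口内流连接的一个示意(假设 eKuiper REST 服务运行在本地 9081 端口;流名与字段均为假设):
+
+```shell
+# 在 10 秒滚动窗口内对两个流做 LEFT JOIN
+curl -X POST http://127.0.0.1:9081/rules -d '{
+  "id": "ruleJoinDemo",
+  "sql": "SELECT demo1.temperature, demo2.humidity FROM demo1 LEFT JOIN demo2 ON demo1.device = demo2.device GROUP BY TUMBLINGWINDOW(ss, 10)",
+  "actions": [{"log": {}}]
+}'
+```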

File diff suppressed because it is too large
+ 18 - 19
docs/zh_CN/concepts/streaming/overview.md


File diff suppressed because it is too large
+ 10 - 10
docs/zh_CN/concepts/streaming/time.md


+ 7 - 7
docs/zh_CN/concepts/streaming/windowing.md

@@ -1,12 +1,12 @@
-# Windowing
+# 窗口
 
-As streaming data is infinite, it is impossible to process it as a whole. Windowing provides a mechanism to split the unbounded data into a continuous series of bounded data to calculate.
+由于流式数据是无限的,不可能将其作为一个整体来处理。窗口提供了一种机制,将无界的数据分割成一系列连续的有界数据进行计算。
 
-In eKuiper, the built-in windowing supports:
+在 eKuiper 中,内置的窗口包括两种类型:
 
-- Time window: window split by time
-- Count window: window split by element count
+- 时间窗口:按时间分割的窗口
+- 计数窗口:按元素计数分割的窗口
 
-In time window, both processing time and event time are supported.
+在时间窗口中,同时支持处理时间和事件时间。
 
-For all the supported window type, please check [window functions](../../sqls/windows.md).
+关于所有支持的窗口类型,请查看[窗口函数](../../sqls/windows.md)。
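+
+以下是一个使用时间窗口的规则示意(假设 eKuiper REST 服务运行在本地 9081 端口;流名 demo 为假设):
+
+```shell
+# 每 10 秒的滚动时间窗口内统计一次事件数
+curl -X POST http://127.0.0.1:9081/rules -d '{
+  "id": "ruleWindowDemo",
+  "sql": "SELECT count(*) FROM demo GROUP BY TUMBLINGWINDOW(ss, 10)",
+  "actions": [{"log": {}}]
+}'
+```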

+ 21 - 21
docs/zh_CN/rules/sinks/overview.md

@@ -1,30 +1,30 @@
-# Available Sinks
+# 可用的动作
 
-In the eKuiper source code, there are built-in sinks and sinks in extension.
+在 eKuiper 源代码中,有内置的动作和扩展的动作。
 
-## Built-in Sinks
+## 内置动作
 
-Users can directly use the built-in sinks in the standard eKuiper instance. The list of built-in sinks are:
+用户可以直接使用标准 eKuiper 实例中的内置动作。内置动作的列表如下:
 
-- [Mqtt sink](./builtin/mqtt.md): sink to external mqtt broker.
-- [Neuron sink](./builtin/neuron.md): sink to the local neuron instance.
-- [EdgeX sink](./builtin/edgex.md): sink to EdgeX Foundry. This sink only exist when enabling edgex build tag.
-- [Rest sink](./builtin/rest.md): sink to external http server.
-- [Memory sink](./builtin/memory.md): sink to eKuiper memory topic to form rule pipelines.
-- [Log sink](./builtin/log.md): sink to log, usually for debug only.
-- [Nop sink](./builtin/nop.md): sink to nowhere. It is used for performance testing now.
+- [Mqtt sink](./builtin/mqtt.md):输出到外部 mqtt 服务。
+- [Neuron sink](./builtin/neuron.md):输出到本地的 Neuron 实例。
+- [EdgeX sink](./builtin/edgex.md):输出到 EdgeX Foundry。此动作仅在启用 edgex 编译标签时存在。
+- [Rest sink](./builtin/rest.md):输出到外部 http 服务器。
+- [Memory sink](./builtin/memory.md):输出到 eKuiper 内存主题以形成规则管道。
+- [Log sink](./builtin/log.md):写入日志,通常只用于调试。
+- [Nop sink](./builtin/nop.md):不输出,用于性能测试。
 
-## Predefined Sink Plugins
+## 预定义的动作插件
 
-We have developed some official sink plugins. These plugins can be found in eKuiper's source code and users need to build them manually. Please check each sink about how to build and use.
+我们已经开发了一些官方的动作插件。这些插件可以在 eKuiper 的源代码中找到,用户需要手动构建它们。详细信息请查看每个动作的构建和使用方法。
 
-Additionally, these plugins have pre-built binaries for the mainstream cpu architecture such as AMD or ARM. The pre-built plugin hosted in `https://packages.emqx.net/kuiper-plugins/$version/$os/sinks/$type_$arch.zip`. For example, to get tdengine sink for debian amd64, install it from `https://packages.emqx.net/kuiper-plugins/1.4.4/debian/sinks/tdengine_amd64.zip`.
+此外,这些插件为主流 CPU 架构(如 AMD64 或 ARM)提供了预编译的二进制文件。预编译的插件托管在 `https://packages.emqx.net/kuiper-plugins/$version/$os/sinks/$type_$arch.zip`。例如,要获得用于 debian amd64 的 tdengine 动作插件,请从 `https://packages.emqx.net/kuiper-plugins/1.4.4/debian/sinks/tdengine_amd64.zip` 安装。
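+
+例如,可按上述 URL 规则直接下载预编译包,或通过插件管理 REST 接口安装(以下示意沿用上文的版本号与插件名,接口地址为假设的本地默认端口):
+
+```shell
+# 直接下载 debian amd64 的 tdengine 动作插件压缩包
+curl -O https://packages.emqx.net/kuiper-plugins/1.4.4/debian/sinks/tdengine_amd64.zip
+
+# 或通过 eKuiper 的插件 REST 接口在线安装
+curl -X POST http://127.0.0.1:9081/plugins/sinks -d '{
+  "name": "tdengine",
+  "file": "https://packages.emqx.net/kuiper-plugins/1.4.4/debian/sinks/tdengine_amd64.zip"
+}'
+```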
 
-The list of predefined sink plugins:
+预定义的动作插件列表:
 
-- [Zero MQ sink](./plugin/zmq.md): sink to zero mq.
-- [File sink](./plugin/file.md): sink to a file.
-- [InfluxDB sink](./plugin/influx.md): sink to influx db.
-- [Tdengine sink](./plugin/tdengine.md): sink to tdengine.
-- [Redis sink](./plugin/redis.md): sink to redis.
-- [Image sink](./plugin/image.md): sink to an image file. Only used to handle binary result.
+- [Zero MQ sink](./plugin/zmq.md):输出到 ZeroMQ。
+- [File sink](./plugin/file.md):写入文件。
+- [InfluxDB sink](./plugin/influx.md):写入 InfluxDB。
+- [Tdengine sink](./plugin/tdengine.md):写入 TDengine。
+- [Redis sink](./plugin/redis.md):写入 Redis。
+- [Image sink](./plugin/image.md):写入图像文件,仅用于处理二进制结果。

+ 16 - 16
docs/zh_CN/rules/sources/overview.md

@@ -1,25 +1,25 @@
-# Available Sources
+# 可用的源
 
-In the eKuiper source code, there are built-in sources and sources in extension.
+在 eKuiper 源代码中,有内置源和扩展源。
 
-## Built-in Sources
+## 内置源
 
-Users can directly use the built-in sources in the standard eKuiper instance. The list of built-in sources are:
+用户可以直接使用标准 eKuiper 实例中的内置源。内置源的列表如下:
 
-- [Mqtt source](./builtin/mqtt.md): read data from mqtt topics.
-- [Neuron source](./builtin/neuron.md): read data from the local neuron instance.
-- [EdgeX source](./builtin/edgex.md): read data from EdgeX foundry.
-- [Http pull source](./builtin/http_pull.md): source to pull data from http servers.
-- [Memory source](./builtin/memory.md): source to read from eKuiper memory topic to form rule pipelines.
-- [File source](./builtin/file.md): source to read from file, usually used as tables.
+- [Mqtt source](./builtin/mqtt.md):从 MQTT 主题读取数据。
+- [Neuron source](./builtin/neuron.md):从本地 Neuron 实例读取数据。
+- [EdgeX source](./builtin/edgex.md):从 EdgeX Foundry 读取数据。
+- [Http pull source](./builtin/http_pull.md):从 HTTP 服务器中拉取数据。
+- [Memory source](./builtin/memory.md):从 eKuiper 内存主题读取数据以形成规则管道。
+- [File source](./builtin/file.md):从文件中读取数据,通常用作表。
 
-## Predefined Source Plugins
+## 预定义的源插件
 
-We have developed some official source plugins. These plugins can be found in eKuiper's source code and users need to build them manually. Please check each source about how to build and use.
+我们已经开发了一些官方的源插件。这些插件可以在 eKuiper 的源代码中找到,用户需要手动构建它们。关于如何构建和使用,请查看每个源插件的文档。
 
-Additionally, these plugins have pre-built binaries for the mainstream cpu architecture such as AMD or ARM. The pre-built plugin hosted in `https://packages.emqx.net/kuiper-plugins/$version/$os/sources/$type_$arch.zip`. For example, to get zmq source for debian amd64, install it from `https://packages.emqx.net/kuiper-plugins/1.4.4/debian/sources/zmq_amd64.zip`.
+这些插件为主流 CPU 架构(如 AMD64 或 ARM)提供了预编译的二进制文件。预编译的插件托管在 `https://packages.emqx.net/kuiper-plugins/$version/$os/sources/$type_$arch.zip`。例如,要获得 debian amd64 的 zmq 源插件,请从 `https://packages.emqx.net/kuiper-plugins/1.4.4/debian/sources/zmq_amd64.zip` 安装。
 
-The list of predefined source plugins:
+预定义的源插件列表:
 
-- [Zero MQ source](./plugin/zmq.md): read data from zero mq.
-- [Random source](./plugin/random.md): a source to generate random data for testing.
+- [Zero MQ source](./plugin/zmq.md):从 ZeroMQ 读取数据。
+- [Random source](./plugin/random.md):生成随机数据的源,用于测试。