
feat(doc): Deploy by OpenYurt tutorial

Signed-off-by: Jiyong Huang <huangjy@emqx.io>
ngjaying, 3 years ago
Commit c2c9322ee3

BIN
docs/en_US/deploy/add_service.png


BIN
docs/en_US/deploy/ekuiper_openyurt.png


+ 38 - 0
docs/en_US/deploy/kmanager.yaml

@@ -0,0 +1,38 @@
+kind: Deployment
+apiVersion: apps/v1
+metadata:
+  name: kmanager
+  namespace: default
+  labels:
+    app: kmanager
+spec:
+  selector:
+    matchLabels:
+      app: kmanager
+  template:
+    metadata:
+      labels:
+        app: kmanager
+    spec:
+      nodeName: cloud-node
+      hostNetwork: true
+      containers:
+        - name: kmanager
+          image: emqx/kuiper-manager:1.2.1
+          ports:
+            - containerPort: 9082
+              protocol: TCP
+---
+kind: Service
+apiVersion: v1
+metadata:
+  name: kmanager-http
+  namespace: default
+spec:
+  type: NodePort
+  selector:
+    app: kmanager
+  ports:
+    - nodePort: 32555
+      port: 9082
+      targetPort: 9082

+ 324 - 0
docs/en_US/deploy/openyurt_tutorial.md

@@ -0,0 +1,324 @@
+# Deploy and Manage eKuiper with OpenYurt
+
+LF Edge eKuiper is lightweight IoT data analytics and stream processing software that usually runs at the edge.
+A [manager dashboard](../manager-ui/overview.md) is provided to manage one or multiple eKuiper instances. Typically, the
+dashboard is deployed on a cloud node to manage eKuiper instances across many edge nodes.
+
+In most circumstances, the edge node is physically inaccessible from the cloud node due to security or other
+considerations. This makes deployment hard and cloud-to-edge management
+impossible. [OpenYurt](https://github.com/openyurtio/openyurt) sheds light on this scenario. OpenYurt is built on
+native Kubernetes and extends it to support edge computing seamlessly. In a nutshell, OpenYurt enables users
+to manage applications that run in the edge infrastructure as if they were running in the cloud infrastructure.
+
+In this tutorial, we will show how to deploy eKuiper and its dashboard in an OpenYurt cluster and leverage the yurt
+tunnel to enable management from the cloud to the edge. To mimic the real scenario where the cloud node and edge nodes
+may be located in separate network regions, we use a two-node Kubernetes cluster. The eKuiper instance will be deployed to
+the edge node, and the dashboard will be deployed to the cloud node.
+
+![arch](ekuiper_openyurt.png)
+
+## Prerequisites
+
+In this tutorial, both the cloud node and the edge node must have Kubernetes and its dependencies installed. On the cloud node,
+additional tools such as OpenYurt and helm are needed to deploy eKuiper.
+
+Make sure your cloud node has an external IP so that the edge node can access it. Also make sure the edge node is
+internal, so that the cloud node cannot access it.
+
+### Installation in the cloud node
+
+First, install kubeadm and its dependencies, such as a Docker engine. Please
+check the [official doc to install kubeadm](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/)
+for details. **Notice that OpenYurt does not support Kubernetes versions higher than 1.20, so please install version
+1.20.x or below.** For Debian-like systems, install with a command like:
+
+```shell
+sudo apt-get install -y kubelet=1.20.8-00 kubeadm=1.20.8-00 kubectl=1.20.8-00
+```
+
+Next, [install Golang](https://golang.org/doc/install) and
+then [build OpenYurt](https://github.com/openyurtio/openyurt#getting-started).
+
+Finally, [install helm](https://helm.sh/docs/intro/install/) as we will deploy eKuiper by helm chart.
+
+Throughout this tutorial, the host name of the cloud node is `cloud-node`. You can modify your host name to match this, or
+you will have to update all occurrences of `cloud-node` in this tutorial to your cloud node host name.
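+
+If you decide to rename the node instead, a minimal sketch (assuming a systemd-based distribution such as Ubuntu, where `hostnamectl` is available):
+
+```shell
+# rename the current machine to match the tutorial's expected hostname
+sudo hostnamectl set-hostname cloud-node
+```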
+
+### Installation in the edge node
+
+Just install `kubeadm` in the edge node.
+
+Throughout this tutorial, the host name of the edge node is `edge-node`. You can modify your host name to match this, or you
+will have to update all occurrences of `edge-node` in this tutorial to your edge node host name.
+
+## Setup Kubernetes Cluster
+
+We will provision the Kubernetes cluster with `kubeadm` and let the edge node join the cluster.
+
+Assume the external IP of your cloud node is `34.209.219.149`. On the cloud node, type the following command; you will get
+a result similar to the one below.
+
+```shell
+# sudo kubeadm init --control-plane-endpoint 34.209.219.149 --kubernetes-version stable-1.20 
+[init] Using Kubernetes version: v1.20.8
+...
+Your Kubernetes control-plane has initialized successfully!
+
+To start using your cluster, you need to run the following as a regular user:
+
+  mkdir -p $HOME/.kube
+  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
+  sudo chown $(id -u):$(id -g) $HOME/.kube/config
+
+Alternatively, if you are the root user, you can run:
+
+  export KUBECONFIG=/etc/kubernetes/admin.conf
+
+You should now deploy a pod network to the cluster.
+Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
+  https://kubernetes.io/docs/concepts/cluster-administration/addons/
+
+You can now join any number of control-plane nodes by copying certificate authorities
+and service account keys on each node and then running the following as root:
+
+  kubeadm join 34.209.219.149:6443 --token i24p5i.nz1feykoggszwxpq \
+    --discovery-token-ca-cert-hash sha256:3aacafdd44d1136808271ad4aafa34e5e9e3553f3b6f21f972d29b8093554325 \
+    --control-plane
+
+Then you can join any number of worker nodes by running the following on each as root:
+
+kubeadm join 34.209.219.149:6443 --token i24p5i.nz1feykoggszwxpq \
+    --discovery-token-ca-cert-hash sha256:3aacafdd44d1136808271ad4aafa34e5e9e3553f3b6f21f972d29b8093554325
+```
+
+In the command, we specify the external IP as the control plane endpoint so that the edge node can access it, and we
+pin the Kubernetes version to 1.20, the latest version supported by OpenYurt.
+
+Follow the instructions in the output to set up kubeconfig, then copy the `kubeadm join` command to be used on the
+edge node.
+
+**In the edge node**, run the copied command:
+
+```shell
+sudo kubeadm join 34.209.219.149:6443 --token i24p5i.nz1feykoggszwxpq \
+    --discovery-token-ca-cert-hash sha256:3aacafdd44d1136808271ad4aafa34e5e9e3553f3b6f21f972d29b8093554325
+```
+
+If everything goes well, go back to the cloud node and type the command below to get the k8s node list; make sure it
+lists both nodes:
+
+```shell
+$ kubectl get nodes -o wide
+NAME         STATUS     ROLES                  AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
+cloud-node   NotReady   control-plane,master   17m   v1.20.8   172.31.6.118    <none>        Ubuntu 20.04.2 LTS   5.4.0-1045-aws     docker://20.10.7
+edge-node    NotReady   <none>                 17s   v1.20.8   192.168.2.143   <none>        Ubuntu 20.04.2 LTS   5.4.0-77-generic   docker://20.10.7
+```
+
+If the node status is 'NotReady', the container network is probably not configured yet. Install a Kubernetes
+network addon as described [here](https://kubernetes.io/docs/concepts/cluster-administration/addons/). For example,
+install the Weave Net addon:
+
+```shell
+$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 |
+tr -d '\n')"
+```
+
+After several minutes, run `kubectl get nodes -o wide` again; all nodes should now be ready.
+
+By now, we have created a k8s cluster with two nodes: cloud-node and edge-node.
+
+### Make Cloud Node Accessible
+
+In the `kubectl get nodes -o wide` result, if the internal IP of the cloud-node is not an accessible external IP, we
+will need to make it accessible. You can specify an external IP for the node. However, on most cloud platforms like AWS,
+the machine's network interface is not bound to the external IP, so we will add iptables rules that forward traffic
+addressed to the internal IP to the external IP. Assume the internal IP of the cloud node is `172.31.0.236`; add an iptables rule on the cloud-node.
+
+```shell
+$ sudo iptables -t nat -A OUTPUT -d 172.31.0.236 -j DNAT --to-destination 34.209.219.149
+```
+
+Add the same iptables rule on the edge-node.
+
+```shell
+$ sudo iptables -t nat -A OUTPUT -d 172.31.0.236 -j DNAT --to-destination 34.209.219.149
+```
+
+On the edge node, make sure `172.31.0.236` is reachable by running `ping 172.31.0.236`.
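+
+To double-check that the rule is in place on either node, you can list the NAT table's OUTPUT chain; this is a generic iptables invocation, not specific to this setup:
+
+```shell
+# show the OUTPUT chain of the NAT table with numeric addresses and rule numbers
+sudo iptables -t nat -L OUTPUT -n --line-numbers
+```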
+
+## Deploy eKuiper instance to edge
+
+As edge streaming software, eKuiper is usually deployed at the edge. We will use the eKuiper helm chart to accelerate
+the deployment.
+
+```shell
+$ git clone https://github.com/lf-edge/ekuiper
+$ cd ekuiper/deploy/chart/Kuiper
+```
+
+In order to deploy eKuiper to the edge-node, we will modify the template file in the helm chart.
+Edit `template/StatefulSet.yaml` line 38 to add nodeName and hostNetwork as below. Here, `edge-node` is the hostname of
+the edge node; if your hostname is different, change it to match your edge hostname.
+
+```yaml
+...
+spec:
+   nodeName: edge-node
+   hostNetwork: true
+   volumes:
+        {{- if not .Values.persistence.enabled }}
+...
+```
+
+Save the change and deploy eKuiper by helm command:
+
+```shell
+$ helm install ekuiper .
+```
+
+Two new services should now be running.
+
+```shell
+$ kubectl get services
+NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)              AGE
+ekuiper            ClusterIP   10.99.57.211     <none>        9081/TCP,20498/TCP   22h
+ekuiper-headless   ClusterIP   None             <none>        <none>               22h
+```
+
+Verify the pods; the eKuiper pod should be running on `edge-node`.
+
+```shell
+$ kubectl get pods -o wide
+NAME                        READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
+ekuiper-0                   1/1     Running   0          22h   10.244.1.3   edge-node   <none>           <none>
+```
+
+The `ekuiper` REST service runs inside the cluster on port 9081. We can check the service connection by typing the
+following command on the edge node, where `192.168.2.143` is the edge node's intranet IP.
+
+```shell
+$ curl http://192.168.2.143:9081
+{"version":"1.2.0","os":"linux","upTimeSeconds":81317}
+```
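+
+The same REST API can be used for management operations. For example, listing the defined streams (an empty list on a fresh install); this assumes the default REST configuration:
+
+```shell
+# query the eKuiper REST API for all defined streams
+curl http://192.168.2.143:9081/streams
+```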
+
+## Deploy the eKuiper dashboard to cloud
+
+We will deploy the eKuiper dashboard on the cloud node with the kubectl tool and [kmanager.yaml](./kmanager.yaml). The
+configuration file defines a deployment and a service for eKuiper manager, which is a web-based UI. First, we need to
+revise the manager docker tag in line 21 to the version that matches your eKuiper version:
+
+```yaml
+...
+containers:
+   - name: kmanager
+     image: emqx/kuiper-manager:1.2.1
+...
+```
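+
+If you prefer to script the change, a minimal sketch using `sed` (adjust the tag to match your eKuiper version):
+
+```shell
+# point the manager image at the desired tag; 1.2.1 matches this tutorial
+sed -i 's|emqx/kuiper-manager:.*|emqx/kuiper-manager:1.2.1|' kmanager.yaml
+```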
+
+Then, run the kubectl command:
+
+```shell
+$ kubectl apply -f kmanager.yaml
+```
+
+Run `kubectl get svc`; you should see:
+
+```shell
+$ kubectl get svc
+NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)              AGE
+ekuiper            ClusterIP   10.99.57.211    <none>        9081/TCP,20498/TCP   120m
+ekuiper-headless   ClusterIP   None            <none>        <none>               120m
+kmanager-http      NodePort    10.99.154.153   <none>        9082:32555/TCP       15s
+kubernetes         ClusterIP   10.96.0.1       <none>        443/TCP              33h
+```
+
+The dashboard runs on the cloud node on port `32555`. Open the dashboard at http://34.209.219.149:32555 in
+your browser and log in with the default username and password: admin/public.
+
+Our goal is to manage the eKuiper instance on the edge node. So we will register the eKuiper service on the edge node, which was
+set up in the last section, as a service in the dashboard.
+
+1. Click `Add Service` and fill in the form as below.
+
+   ![add service](./add_service.png)
+
+2. After the service is created, click the service name `ekuiper` and switch to the `system` tab. The connection should be
+   broken, so we will get connection errors. That is because `http://192.168.2.143:9081/` is the intranet address of the
+   eKuiper service on the edge side; it is not directly accessible from the cloud side (you can confirm this with the check below).
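+
+To confirm the broken connection from the cloud node, a quick check; the request should hang and then time out, since the edge intranet address is unreachable from the cloud:
+
+```shell
+# from the cloud node: this should time out rather than return the version info
+curl --connect-timeout 5 http://192.168.2.143:9081
+```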
+
+In the next section, we will set up the yurt tunnel to let the dashboard manage the eKuiper instance on the edge side.
+
+## Setup the yurt-tunnel
+
+We will use OpenYurt to set up the tunnel as the communication pipeline between the cloud and the edge node. Because we
+need to connect to port `9081` on the edge, we will have to set up the port mapping in the yurt tunnel.
+
+On the cloud node, open the `openyurt/config/setup/yurt-tunnel-server.yaml` file and edit the
+configmap `yurt-tunnel-server-cfg` at line 31 to add the `dnat-ports-pair` entry as below, so that requests to port `9081` on the edge are forwarded through the tunnel server's proxy port `10264`.
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: yurt-tunnel-server-cfg
+  namespace: kube-system
+data:
+  dnat-ports-pair: "9081=10264"
+```
+
+Then edit line 175 to add the cloud-node external IP as a certificate IP. This is only required if the cloud node does not
+have a public IP and was [set up with NAT rules](#make-cloud-node-accessible).
+
+```yaml
+...
+args:
+  - --bind-address=$(NODE_IP)
+  - --insecure-bind-address=$(NODE_IP)
+  - --proxy-strategy=destHost
+  - --v=2
+  - --cert-ips=34.209.219.149
+...
+```
+
+Then, we will convert the Kubernetes cluster to an OpenYurt cluster and deploy the yurt tunnel.
+
+```shell
+$ _output/bin/yurtctl convert --cloud-nodes cloud-node --provider kubeadm --deploy-yurttunnel
+```
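+
+Once the conversion finishes, the yurt components should appear in the `kube-system` namespace; a quick, generic check (the exact pod names may vary by OpenYurt version):
+
+```shell
+# yurt-hub, yurt-controller-manager and tunnel pods are deployed to kube-system
+kubectl get pods -n kube-system | grep yurt
+```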
+
+Next, we will set up the yurt-tunnel manually by deploying the yurt-tunnel-server and yurt-tunnel-agent separately.
+
+To set up the yurt-tunnel-server, let's first add a label to the cloud node:
+
+```shell
+$ kubectl label nodes cloud-node openyurt.io/is-edge-worker=false
+```
+
+Then, we can deploy the yurt-tunnel-server:
+
+```shell
+$ kubectl apply -f config/setup/yurt-tunnel-server.yaml
+```
+
+Next, we can set up the yurt-tunnel-agent. As before, we add a label to the edge node, which allows the
+yurt-tunnel-agent to run on the edge node:
+
+```shell
+kubectl label nodes edge-node openyurt.io/is-edge-worker=true
+```
+
+Then, apply the yurt-tunnel-agent yaml:
+
+```shell
+kubectl apply -f config/setup/yurt-tunnel-agent.yaml
+```
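+
+Before retrying the dashboard, it is worth confirming that both tunnel ends are Running and landed on the intended nodes; `-o wide` shows the node placement:
+
+```shell
+# the server should sit on cloud-node and the agent on edge-node
+kubectl get pods -n kube-system -o wide | grep yurt-tunnel
+```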
+
+After the agent and the server are running, we should be able to manage eKuiper from the dashboard again. Go back to the
+dashboard in the browser, click the service name `ekuiper` and switch to the `system` tab; we should find the service is
+healthy, as in the screenshot below:
+
+![system](./ping.png)
+
+Great! Now we can manage eKuiper on the edge from the dashboard, as if it were deployed in the cloud. Follow
+the [manager ui tutorial](../manager-ui/overview.md) to create and manage streams, rules, plugins and any other
+management tasks of eKuiper from the cloud.

BIN
docs/en_US/deploy/ping.png


BIN
docs/zh_CN/deploy/add_service.png


BIN
docs/zh_CN/deploy/ekuiper_openyurt.png


+ 38 - 0
docs/zh_CN/deploy/kmanager.yaml

@@ -0,0 +1,38 @@
+kind: Deployment
+apiVersion: apps/v1
+metadata:
+  name: kmanager
+  namespace: default
+  labels:
+    app: kmanager
+spec:
+  selector:
+    matchLabels:
+      app: kmanager
+  template:
+    metadata:
+      labels:
+        app: kmanager
+    spec:
+      nodeName: cloud-node
+      hostNetwork: true
+      containers:
+        - name: kmanager
+          image: emqx/kuiper-manager:1.2.1
+          ports:
+            - containerPort: 9082
+              protocol: TCP
+---
+kind: Service
+apiVersion: v1
+metadata:
+  name: kmanager-http
+  namespace: default
+spec:
+  type: NodePort
+  selector:
+    app: kmanager
+  ports:
+    - nodePort: 32555
+      port: 9082
+      targetPort: 9082

+ 287 - 0
docs/zh_CN/deploy/openyurt_tutorial.md

@@ -0,0 +1,287 @@
+# Deploy and Manage eKuiper with OpenYurt
+
+LF Edge eKuiper is lightweight IoT data analytics and stream processing software that usually runs at the edge. A [manager dashboard](../manager-ui/overview.md) is provided to manage one or multiple eKuiper instances. Typically, the dashboard is deployed on a cloud node to manage eKuiper instances across many edge nodes.
+
+In most circumstances, the edge node is physically inaccessible from the cloud node due to security or other considerations. This makes deployment hard and cloud-to-edge management impossible. [OpenYurt](https://github.com/openyurtio/openyurt) changes this situation. OpenYurt is built on native Kubernetes and extends it to support edge computing seamlessly. In a nutshell, OpenYurt enables users to manage applications that run in the edge infrastructure as if they were running in the cloud infrastructure.
+
+In this tutorial, we will show how to deploy eKuiper and its dashboard in an OpenYurt cluster and leverage the yurt tunnel to enable management from the cloud to the edge. To mimic the real scenario where the cloud node and edge nodes may be located in separate network regions, we use a two-node Kubernetes cluster. The eKuiper instance will be deployed to the edge node, and the dashboard will be deployed to the cloud node.
+
+![arch](ekuiper_openyurt.png)
+
+## Prerequisites
+
+In this tutorial, both the cloud node and the edge node must have Kubernetes and its dependencies installed. On the cloud node, additional tools such as OpenYurt and helm are needed to deploy eKuiper.
+
+Make sure your cloud node has an external IP so that the edge node can access it. Also make sure the edge node is internal, so that the cloud node cannot access it.
+
+### Installation in the cloud node
+
+First, install kubeadm and its dependencies, such as a Docker engine. Please check the [official doc to install kubeadm](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/) for details. **Notice that OpenYurt does not support Kubernetes versions higher than 1.20, so please install version 1.20.x or below.** For Debian-like systems, install with a command like:
+
+```shell
+sudo apt-get install -y kubelet=1.20.8-00 kubeadm=1.20.8-00 kubectl=1.20.8-00
+```
+
+Next, [install Golang](https://golang.org/doc/install) and then [build OpenYurt](https://github.com/openyurtio/openyurt#getting-started).
+
+Finally, [install helm](https://helm.sh/docs/intro/install/), as we will deploy eKuiper by helm chart.
+
+Throughout this tutorial, the host name of the cloud node is `cloud-node`. You can modify your host name to match this, or you will have to update all occurrences of `cloud-node` in this tutorial to your cloud node host name.
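+
+If you decide to rename the node instead, a minimal sketch (assuming a systemd-based distribution such as Ubuntu, where `hostnamectl` is available):
+
+```shell
+# rename the current machine to match the tutorial's expected hostname
+sudo hostnamectl set-hostname cloud-node
+```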
+
+### Installation in the edge node
+
+Just install `kubeadm` on the edge node.
+
+Throughout this tutorial, the host name of the edge node is `edge-node`. You can modify your host name to match this, or you will have to update all occurrences of `edge-node` in this tutorial to your edge node host name.
+
+## Setup Kubernetes Cluster
+
+We will provision the Kubernetes cluster with `kubeadm` and let the edge node join the cluster.
+
+Assume the external IP of your cloud node is `34.209.219.149`. On the cloud node, type the following command; you will get a result similar to the one below.
+
+```shell
+# sudo kubeadm init --control-plane-endpoint 34.209.219.149 --kubernetes-version stable-1.20 
+[init] Using Kubernetes version: v1.20.8
+...
+Your Kubernetes control-plane has initialized successfully!
+
+To start using your cluster, you need to run the following as a regular user:
+
+  mkdir -p $HOME/.kube
+  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
+  sudo chown $(id -u):$(id -g) $HOME/.kube/config
+
+Alternatively, if you are the root user, you can run:
+
+  export KUBECONFIG=/etc/kubernetes/admin.conf
+
+You should now deploy a pod network to the cluster.
+Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
+  https://kubernetes.io/docs/concepts/cluster-administration/addons/
+
+You can now join any number of control-plane nodes by copying certificate authorities
+and service account keys on each node and then running the following as root:
+
+  kubeadm join 34.209.219.149:6443 --token i24p5i.nz1feykoggszwxpq \
+    --discovery-token-ca-cert-hash sha256:3aacafdd44d1136808271ad4aafa34e5e9e3553f3b6f21f972d29b8093554325 \
+    --control-plane
+
+Then you can join any number of worker nodes by running the following on each as root:
+
+kubeadm join 34.209.219.149:6443 --token i24p5i.nz1feykoggszwxpq \
+    --discovery-token-ca-cert-hash sha256:3aacafdd44d1136808271ad4aafa34e5e9e3553f3b6f21f972d29b8093554325
+```
+
+In the command, we specify the external IP as the control plane endpoint so that the edge node can access it, and we pin the Kubernetes version to 1.20, the latest version supported by OpenYurt.
+
+Follow the instructions in the output to set up kubeconfig, then copy the `kubeadm join` command to be used on the edge node.
+
+**On the edge node**, run the copied command:
+
+```shell
+sudo kubeadm join 34.209.219.149:6443 --token i24p5i.nz1feykoggszwxpq \
+    --discovery-token-ca-cert-hash sha256:3aacafdd44d1136808271ad4aafa34e5e9e3553f3b6f21f972d29b8093554325
+```
+
+If everything goes well, go back to the cloud node and type the command below to get the k8s node list; make sure it lists both nodes:
+
+```shell
+$ kubectl get nodes -o wide
+NAME         STATUS     ROLES                  AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
+cloud-node   NotReady   control-plane,master   17m   v1.20.8   172.31.6.118    <none>        Ubuntu 20.04.2 LTS   5.4.0-1045-aws     docker://20.10.7
+edge-node    NotReady   <none>                 17s   v1.20.8   192.168.2.143   <none>        Ubuntu 20.04.2 LTS   5.4.0-77-generic   docker://20.10.7
+```
+
+If the node status is 'NotReady', the container network is probably not configured yet. Install a Kubernetes network addon as described [here](https://kubernetes.io/docs/concepts/cluster-administration/addons/). For example, install the Weave Net addon:
+
+```shell
+$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 |
+tr -d '\n')"
+```
+
+After several minutes, run `kubectl get nodes -o wide` again; all nodes should now be ready.
+
+By now, we have created a k8s cluster with two nodes: cloud-node and edge-node.
+
+### Make Cloud Node Accessible
+
+In the `kubectl get nodes -o wide` result, if the internal IP of the cloud-node is not an accessible external IP, we will need to make it accessible. You can specify an external IP for the node. However, on most cloud platforms like AWS,
+the machine's network interface is not bound to the external IP, so we will add iptables rules that forward traffic addressed to the internal IP to the external IP. Assume the internal IP of the cloud node is `172.31.0.236`; add an iptables rule on the cloud-node.
+
+```shell
+$ sudo iptables -t nat -A OUTPUT -d 172.31.0.236 -j DNAT --to-destination 34.209.219.149
+```
+
+Add the same iptables rule on the edge-node.
+
+```shell
+$ sudo iptables -t nat -A OUTPUT -d 172.31.0.236 -j DNAT --to-destination 34.209.219.149
+```
+
+On the edge node, make sure `172.31.0.236` is reachable by running `ping 172.31.0.236`.
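+
+To double-check that the rule is in place on either node, you can list the NAT table's OUTPUT chain; this is a generic iptables invocation, not specific to this setup:
+
+```shell
+# show the OUTPUT chain of the NAT table with numeric addresses and rule numbers
+sudo iptables -t nat -L OUTPUT -n --line-numbers
+```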
+
+## Deploy eKuiper instance to edge
+
+As edge streaming software, eKuiper is usually deployed at the edge. We will use the eKuiper helm chart to accelerate the deployment.
+
+```shell
+$ git clone https://github.com/lf-edge/ekuiper
+$ cd ekuiper/deploy/chart/Kuiper
+```
+
+In order to deploy eKuiper to the edge-node, we will modify the template file in the helm chart.
+Edit `template/StatefulSet.yaml` line 38 to add nodeName and hostNetwork as below. Here, `edge-node` is the hostname of the edge node; if your hostname is different, change it to match your edge hostname.
+
+```yaml
+...
+spec:
+   nodeName: edge-node
+   hostNetwork: true
+   volumes:
+        {{- if not .Values.persistence.enabled }}
+...
+```
+
+Save the change and deploy eKuiper by helm command:
+
+```shell
+$ helm install ekuiper .
+```
+
+Two new services should now be running.
+
+```shell
+$ kubectl get services
+NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)              AGE
+ekuiper            ClusterIP   10.99.57.211     <none>        9081/TCP,20498/TCP   22h
+ekuiper-headless   ClusterIP   None             <none>        <none>               22h
+```
+
+Verify the pods; the eKuiper pod should be running on `edge-node`.
+
+```shell
+$ kubectl get pods -o wide
+NAME                        READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
+ekuiper-0                   1/1     Running   0          22h   10.244.1.3   edge-node   <none>           <none>
+```
+
+The `ekuiper` REST service runs inside the cluster on port 9081. We can check the service connection by typing the following command on the edge node, where `192.168.2.143` is the edge node's intranet IP.
+
+```shell
+$ curl http://192.168.2.143:9081
+{"version":"1.2.0","os":"linux","upTimeSeconds":81317}
+```
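+
+The same REST API can be used for management operations. For example, listing the defined streams (an empty list on a fresh install); this assumes the default REST configuration:
+
+```shell
+# query the eKuiper REST API for all defined streams
+curl http://192.168.2.143:9081/streams
+```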
+
+## Deploy the eKuiper dashboard to cloud
+
+We will deploy the eKuiper dashboard on the cloud node with the kubectl tool and [kmanager.yaml](./kmanager.yaml). eKuiper manager is a web-based UI; in the configuration file, we define a deployment and a service for it.
+
+First, we need to make sure the dashboard version used in the file matches the eKuiper version. Open kmanager.yaml and revise line 21 so the image tag is correct.
+
+```yaml
+...
+containers:
+   - name: kmanager
+     image: emqx/kuiper-manager:1.2.1
+...
+```
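+
+If you prefer to script the change, a minimal sketch using `sed` (adjust the tag to match your eKuiper version):
+
+```shell
+# point the manager image at the desired tag; 1.2.1 matches this tutorial
+sed -i 's|emqx/kuiper-manager:.*|emqx/kuiper-manager:1.2.1|' kmanager.yaml
+```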
+
+Then, run the kubectl command:
+
+```shell
+$ kubectl apply -f kmanager.yaml
+```
+
+Run `kubectl get svc`; you should see:
+
+```shell
+$ kubectl get svc
+NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)              AGE
+ekuiper            ClusterIP   10.99.57.211    <none>        9081/TCP,20498/TCP   120m
+ekuiper-headless   ClusterIP   None            <none>        <none>               120m
+kmanager-http      NodePort    10.99.154.153   <none>        9082:32555/TCP       15s
+kubernetes         ClusterIP   10.96.0.1       <none>        443/TCP              33h
+```
+
+The dashboard runs on the cloud node on port `32555`. Open the dashboard at http://34.209.219.149:32555 in your browser and log in with the default username and password: admin/public.
+
+Our goal is to manage the eKuiper instance on the edge node. So we will register the eKuiper service on the edge node, which was set up in the last section, as a service in the dashboard.
+
+1. Click `Add Service` and fill in the form as below.
+
+   ![add service](./add_service.png)
+
+2. After the service is created, click the service name `ekuiper` and switch to the `system` tab. The connection should be broken, so we will get connection errors. That is because `http://192.168.2.143:9081/` is the intranet address of the eKuiper service on the edge side; it is not directly accessible from the cloud side (you can confirm this with the check below).
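+
+To confirm the broken connection from the cloud node, a quick check; the request should hang and then time out, since the edge intranet address is unreachable from the cloud:
+
+```shell
+# from the cloud node: this should time out rather than return the version info
+curl --connect-timeout 5 http://192.168.2.143:9081
+```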
+
+In the next section, we will set up the yurt tunnel to let the dashboard manage the eKuiper instance on the edge side.
+
+## Setup the yurt-tunnel
+
+We will use OpenYurt to set up the tunnel as the communication pipeline between the cloud and the edge node. Because we need to connect to port `9081` on the edge, we will have to set up the port mapping in the yurt tunnel.
+
+On the cloud node, open the `openyurt/config/setup/yurt-tunnel-server.yaml` file and edit the configmap `yurt-tunnel-server-cfg` at line 31 to add the `dnat-ports-pair` entry as below, so that requests to port `9081` on the edge are forwarded through the tunnel server's proxy port `10264`.
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: yurt-tunnel-server-cfg
+  namespace: kube-system
+data:
+  dnat-ports-pair: "9081=10264"
+```
+
+Then edit line 175 to add the cloud-node external IP as a certificate IP. This is only required if the cloud node does not have a public IP and was [set up with NAT rules](#make-cloud-node-accessible).
+
+```yaml
+...
+args:
+  - --bind-address=$(NODE_IP)
+  - --insecure-bind-address=$(NODE_IP)
+  - --proxy-strategy=destHost
+  - --v=2
+  - --cert-ips=34.209.219.149
+...
+```
+
+Then, we will convert the Kubernetes cluster to an OpenYurt cluster and deploy the yurt tunnel.
+
+```shell
+$ _output/bin/yurtctl convert --cloud-nodes cloud-node --provider kubeadm --deploy-yurttunnel
+```
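+
+Once the conversion finishes, the yurt components should appear in the `kube-system` namespace; a quick, generic check (the exact pod names may vary by OpenYurt version):
+
+```shell
+# yurt-hub, yurt-controller-manager and tunnel pods are deployed to kube-system
+kubectl get pods -n kube-system | grep yurt
+```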
+
+Next, we will set up the yurt-tunnel manually by deploying the yurt-tunnel-server and yurt-tunnel-agent separately.
+
+To set up the yurt-tunnel-server, let's first add a label to the cloud node:
+
+```shell
+$ kubectl label nodes cloud-node openyurt.io/is-edge-worker=false
+```
+
+Then, we can deploy the yurt-tunnel-server:
+
+```shell
+$ kubectl apply -f config/setup/yurt-tunnel-server.yaml
+```
+
+Next, we can set up the yurt-tunnel-agent. As before, we add a label to the edge node, which allows the yurt-tunnel-agent to run on the edge node:
+
+```shell
+kubectl label nodes edge-node openyurt.io/is-edge-worker=true
+```
+
+Then, apply the yurt-tunnel-agent yaml:
+
+```shell
+kubectl apply -f config/setup/yurt-tunnel-agent.yaml
+```
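+
+Before retrying the dashboard, it is worth confirming that both tunnel ends are Running and landed on the intended nodes; `-o wide` shows the node placement:
+
+```shell
+# the server should sit on cloud-node and the agent on edge-node
+kubectl get pods -n kube-system -o wide | grep yurt-tunnel
+```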
+
+After the agent and the server are running, we should be able to manage eKuiper from the dashboard again. Go back to the dashboard in the browser, click the service name `ekuiper` and switch to the `system` tab; we should find the service is healthy, as in the screenshot below:
+
+![system](./ping.png)
+
+Great! Now we can manage eKuiper on the edge from the dashboard, as if it were deployed in the cloud. Follow the [manager ui tutorial](../manager-ui/overview.md) to create and manage streams, rules, plugins and any other management tasks of eKuiper from the cloud.
+

BIN
docs/zh_CN/deploy/ping.png