diff --git a/docs/getting-started/quick-start/kubernetes.md b/docs/getting-started/quick-start/kubernetes.md index 058ccd39..36ea8b11 100644 --- a/docs/getting-started/quick-start/kubernetes.md +++ b/docs/getting-started/quick-start/kubernetes.md @@ -5,10 +5,10 @@ description: Kubernetes slug: /getting-started/quick-start/kubernetes/ --- -Documentation for deploying dragonfly on kubernetes using helm. +Documentation for deploying Dragonfly on kubernetes using helm. You can have a quick start following [Helm Charts](../installation/helm-charts.md). -We recommend to use `Containerd with CRI` and `CRI-O` client. +We recommend to use `containerd with CRI` and `CRI-O` client. This table describes some container runtimes version and documents. @@ -16,8 +16,8 @@ This table describes some container runtimes version and documents. | Runtime | Version | Document | CRI Support | Pull Command | | ----------------------- | ------- | ------------------------------------------------ | ----------- | ------------------------------------------- | -| Containerd\* | v1.1.0+ | [Link](../../setup/runtime/containerd/mirror.md) | Yes | crictl pull docker.io/library/alpine:latest | -| Containerd without CRI | v1.1.0 | [Link](../../setup/runtime/containerd/proxy.md) | No | ctr image pull docker.io/library/alpine | +| containerd\* | v1.1.0+ | [Link](../../setup/runtime/containerd/mirror.md) | Yes | crictl pull docker.io/library/alpine:latest | +| containerd without CRI | v1.1.0 | [Link](../../setup/runtime/containerd/proxy.md) | No | ctr image pull docker.io/library/alpine | | CRI-O | All | [Link](../../setup/runtime/cri-o.md) | Yes | crictl pull docker.io/library/alpine:latest | @@ -51,9 +51,9 @@ Switch the context of kubectl to kind cluster: kubectl config use-context kind-kind ``` -## Kind loads dragonfly image {#kind-loads-dragonfly-image} +## Kind loads Dragonfly image {#kind-loads-dragonfly-image} -Pull dragonfly latest images: +Pull Dragonfly latest images: ```shell docker pull dragonflyoss/scheduler:latest @@ -69,7 +69,7 @@ kind load docker-image dragonflyoss/manager:latest kind load docker-image dragonflyoss/dfdaemon:latest ``` -## Create dragonfly cluster based on helm charts {#create-dragonfly-cluster-based-on-helm-charts} +## Create Dragonfly cluster based on helm charts {#create-dragonfly-cluster-based-on-helm-charts} Create helm charts configuration file `charts-config.yaml`, configuration content is as follows: @@ -124,7 +124,7 @@ jaeger: enable: true ``` -Create a dragonfly cluster using the configuration file: +Create a Dragonfly cluster using the configuration file: @@ -161,7 +161,7 @@ NOTES: -Check that dragonfly is deployed successfully: +Check that Dragonfly is deployed successfully: ```shell $ kubectl get po -n dragonfly-system @@ -179,7 +179,7 @@ dragonfly-scheduler-0 1/1 Running 0 8m43s dragonfly-seed-peer-0 1/1 Running 3 (5m56s ago) 8m43s ``` -## Containerd pull image back-to-source for the first time through dragonfly {#containerd-pull-image-back-to-source-for-the-first-time-through-dragonfly} +## Containerd pull image back-to-source for the first time through Dragonfly {#containerd-pull-image-back-to-source-for-the-first-time-through-dragonfly} Pull `ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5` image in `kind-worker` node: @@ -202,10 +202,24 @@ Tracing details: ![download-back-to-source-tracing](../../resource/getting-started/download-back-to-source-tracing.jpg) -When pull image back-to-source for the first time through dragonfly, it takes `5.58s` to +When pull image back-to-source 
for the first time through Dragonfly, it takes `5.58s` to download the `f643e116a03d9604c344edb345d7592c48cc00f2a4848aaf773411f4fb30d2f5` layer. -## Containerd pull image hits the cache of local peer {#containerd-pull-image-hits-the-cache-of-local-peer} +## Containerd pull image hits the cache of remote peer {#containerd-pull-image-hits-the-cache-of-remote-peer} + +Delete the dfdaemon whose Node is `kind-worker` to clear the cache of Dragonfly local Peer. + + + +```shell +# Find pod name. +export POD_NAME=$(kubectl get pods --namespace dragonfly-system -l "app=dragonfly,release=dragonfly,component=dfdaemon" -o=jsonpath='{.items[?(@.spec.nodeName=="kind-worker")].metadata.name}' | head -n 1 ) + +# Delete pod. +kubectl delete pod ${POD_NAME} -n dragonfly-system +``` + + Delete `ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5` image in `kind-worker` node: @@ -228,21 +242,27 @@ kubectl --namespace dragonfly-system port-forward service/dragonfly-jaeger-query Visit the Jaeger page in [http://127.0.0.1:16686/search](http://127.0.0.1:16686/search), Search for tracing with Tags `http.url="/v2/dragonflyoss/dragonfly2/scheduler/blobs/sha256:8a9fba45626f402c12bafaadb718690187cae6e5d56296a8fe7d7c4ce19038f7?ns=ghcr.io"`: -![hit-local-peer-cache-search-tracing](../../resource/getting-started/hit-local-peer-cache-search-tracing.jpg) +![hit-remote-peer-cache-search-tracing](../../resource/getting-started/hit-remote-peer-cache-search-tracing.jpg) Tracing details: -![hit-local-peer-cache-tracing](../../resource/getting-started/hit-local-peer-cache-tracing.jpg) +![hit-remote-peer-cache-tracing](../../resource/getting-started/hit-remote-peer-cache-tracing.jpg) -When pull image hits cache of local peer, it takes `65.24ms` to +When pull image hits cache of remote peer, it takes `117.98ms` to download the `f643e116a03d9604c344edb345d7592c48cc00f2a4848aaf773411f4fb30d2f5` layer. 
-## Containerd pull image hits the cache of remote peer {#containerd-pull-image-hits-the-cache-of-remote-peer}
+## Containerd pull image hits the cache of local peer {#containerd-pull-image-hits-the-cache-of-local-peer}

-Pull `ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5` image in `kind-worker2` node:
+Delete `ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5` image in `kind-worker` node:

```shell
-docker exec -i kind-worker2 /usr/local/bin/crictl pull ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5
+docker exec -i kind-worker /usr/local/bin/crictl rmi ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5
+```
+
+Pull `ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5` image in `kind-worker` node:
+
+```shell
+docker exec -i kind-worker /usr/local/bin/crictl pull ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5
```

Expose jaeger's port `16686`:
@@ -254,13 +274,13 @@ kubectl --namespace dragonfly-system port-forward service/dragonfly-jaeger-query

Visit the Jaeger page in [http://127.0.0.1:16686/search](http://127.0.0.1:16686/search), Search for tracing with Tags `http.url="/v2/dragonflyoss/dragonfly2/scheduler/blobs/sha256:8a9fba45626f402c12bafaadb718690187cae6e5d56296a8fe7d7c4ce19038f7?ns=ghcr.io"`:

-![hit-remote-peer-cache-search-tracing](../../resource/getting-started/hit-remote-peer-cache-search-tracing.jpg)
+![hit-local-peer-cache-search-tracing](../../resource/getting-started/hit-local-peer-cache-search-tracing.jpg)

Tracing details:

-![hit-remote-peer-cache-tracing](../../resource/getting-started/hit-remote-peer-cache-tracing.jpg)
+![hit-local-peer-cache-tracing](../../resource/getting-started/hit-local-peer-cache-tracing.jpg)

-When pull image hits cache of remote peer, it takes `117.98ms` to
+When pull image hits cache of local peer, it takes `65.24ms` to
download the `f643e116a03d9604c344edb345d7592c48cc00f2a4848aaf773411f4fb30d2f5` layer.

## Preheat image {#preheat-image}
@@ -271,16 +291,22 @@ Expose manager's port `8080`:
kubectl --namespace dragonfly-system port-forward service/dragonfly-manager 8080:8080
```

-Preheat `ghcr.io/dragonflyoss/dragonfly2/manager:v2.0.5` image:
+Please create a personal access token before calling the Open API, and select `job` for its access scopes, refer to [personal-access-tokens](../../reference/personal-access-tokens.md).
+
+Use the Open API to preheat the image `ghcr.io/dragonflyoss/dragonfly2/manager:v2.0.5` to the Seed Peer, refer to [preheat](../../reference/preheat.md). 

```shell
-curl --location --request POST 'http://127.0.0.1:8080/api/v1/jobs' \
+curl --location --request POST 'http://127.0.0.1:8080/oapi/v1/jobs' \
--header 'Content-Type: application/json' \
+--header 'Authorization: Bearer your_personal_access_token' \
--data-raw '{
    "type": "preheat",
    "args": {
        "type": "image",
-        "url": "https://ghcr.io/v2/dragonflyoss/dragonfly2/manager/manifests/v2.0.5"
+        "url": "https://ghcr.io/v2/dragonflyoss/dragonfly2/manager/manifests/v2.0.5",
+        "filteredQueryParams": "Expires&Signature",
+        "username": "your_registry_username",
+        "password": "your_registry_password"
    }
}'
```

@@ -293,8 +319,10 @@ The command-line log returns the preheat job id:

Polling the preheating status with job id:

-```bash
-curl --request GET 'http://127.0.0.1:8080/api/v1/jobs/1'
+```shell
+curl --request GET 'http://127.0.0.1:8080/oapi/v1/jobs/1' \
+--header 'Content-Type: application/json' \
+--header 'Authorization: Bearer your_personal_access_token'
```

If the status is `SUCCESS`, the preheating is successful:
diff --git a/docs/getting-started/quick-start/multi-cluster-kubernetes.md b/docs/getting-started/quick-start/multi-cluster-kubernetes.md
index 74588969..fb34169e 100644
--- a/docs/getting-started/quick-start/multi-cluster-kubernetes.md
+++ b/docs/getting-started/quick-start/multi-cluster-kubernetes.md
@@ -5,12 +5,12 @@ description: Multi-cluster kubernetes
slug: /getting-started/quick-start/multi-cluster-kubernetes/
---

-Documentation for deploying dragonfly on multi-cluster kubernetes using helm. A dragonfly cluster manages cluster within
-a network. If you have two clusters with disconnected networks, you can use two dragonfly clusters to manage their own clusters.
+Documentation for deploying Dragonfly on multi-cluster kubernetes using helm. A Dragonfly cluster manages a cluster within
+a network. If you have two clusters with disconnected networks, you can use two Dragonfly clusters to manage their own clusters.

-The recommended deployment in a multi-cluster kubernetes is to use a dragonfly cluster to manage a kubernetes cluster,
-and use a centralized manager service to manage multiple dragonfly clusters. Because peer can only transmit data in
-its own dragonfly cluster, if a kubernetes cluster deploys a dragonfly cluster, then a kubernetes cluster forms a p2p network,
+The recommended deployment in a multi-cluster kubernetes is to use a Dragonfly cluster to manage a kubernetes cluster,
+and use a centralized manager service to manage multiple Dragonfly clusters. Because a peer can only transmit data in
+its own Dragonfly cluster, if a kubernetes cluster deploys a Dragonfly cluster, then that kubernetes cluster forms a p2p network,
and internal peers can only schedule and transmit data in a kubernetes cluster.

![multi-cluster-kubernetes](../../resource/getting-started/multi-cluster-kubernetes.png)
@@ -18,7 +18,7 @@ and internal peers can only schedule and transmit data in a kubernetes cluster.
## Runtime

You can have a quick start following [Helm Charts](../../installation/helm-charts).
-We recommend to use `Containerd with CRI` and `CRI-O` client.
+We recommend using the `containerd with CRI` and `CRI-O` clients.

This table describes some container runtimes version and documents.

@@ -26,8 +26,8 @@ This table describes some container runtimes version and documents. 
| Runtime | Version | Document | CRI Support | Pull Command | | ----------------------- | ------- | ------------------------------------------------ | ----------- | ------------------------------------------- | -| Containerd\* | v1.1.0+ | [Link](../../setup/runtime/containerd/mirror.md) | Yes | crictl pull docker.io/library/alpine:latest | -| Containerd without CRI | v1.1.0 | [Link](../../setup/runtime/containerd/proxy.md) | No | ctr image pull docker.io/library/alpine | +| containerd\* | v1.1.0+ | [Link](../../setup/runtime/containerd/mirror.md) | Yes | crictl pull docker.io/library/alpine:latest | +| containerd without CRI | v1.1.0 | [Link](../../setup/runtime/containerd/proxy.md) | No | ctr image pull docker.io/library/alpine | | CRI-O | All | [Link](../../setup/runtime/cri-o.md) | Yes | crictl pull docker.io/library/alpine:latest | @@ -74,9 +74,9 @@ Switch the context of kubectl to kind cluster A: kubectl config use-context kind-kind ``` -## Kind loads dragonfly image +## Kind loads Dragonfly image -Pull dragonfly latest images: +Pull Dragonfly latest images: ```shell docker pull dragonflyoss/scheduler:latest @@ -84,7 +84,7 @@ docker pull dragonflyoss/manager:latest docker pull dragonflyoss/dfdaemon:latest ``` -Kind cluster loads dragonfly latest images: +Kind cluster loads Dragonfly latest images: ```shell kind load docker-image dragonflyoss/scheduler:latest @@ -92,14 +92,14 @@ kind load docker-image dragonflyoss/manager:latest kind load docker-image dragonflyoss/dfdaemon:latest ``` -## Create dragonfly cluster A +## Create Dragonfly cluster A -Create dragonfly cluster A, the schedulers, seed peers, peers and centralized manager included in +Create Dragonfly cluster A, the schedulers, seed peers, peers and centralized manager included in the cluster should be installed using helm. -### Create dragonfly cluster A based on helm charts +### Create Dragonfly cluster A based on helm charts -Create dragonfly cluster A charts configuration file `charts-config-cluster-a.yaml`, configuration content is as follows: +Create Dragonfly cluster A charts configuration file `charts-config-cluster-a.yaml`, configuration content is as follows: ```yaml containerRuntime: @@ -160,7 +160,7 @@ jaeger: enable: true ``` -Create dragonfly cluster A using the configuration file: +Create Dragonfly cluster A using the configuration file: @@ -197,7 +197,7 @@ NOTES: -Check that dragonfly cluster A is deployed successfully: +Check that Dragonfly cluster A is deployed successfully: ```shell $ kubectl get po -n cluster-a @@ -253,56 +253,56 @@ the username is `root` and password is `dragonfly`. ![clusters](../../resource/getting-started/clusters.png) -By default, Dragonfly will automatically create dragonfly cluster A record in manager when -it is installed for the first time. You can click dragonfly cluster A to view the details. +By default, Dragonfly will automatically create Dragonfly cluster A record in manager when +it is installed for the first time. You can click Dragonfly cluster A to view the details. ![cluster-a](../../resource/getting-started/cluster-a.png) -## Create dragonfly cluster B +## Create Dragonfly cluster B -Create dragonfly cluster B, you need to create a dragonfly cluster record in the manager console first, -and the schedulers, seed peers and peers included in the dragonfly cluster should be installed using helm. 
+To create Dragonfly cluster B, you need to create a Dragonfly cluster record in the manager console first,
+and the schedulers, seed peers and peers included in the Dragonfly cluster should be installed using helm.

-### Create dragonfly cluster B in the manager console
+### Create Dragonfly cluster B in the manager console

-Visit manager console and click the `ADD CLUSTER` button to add dragonfly cluster B record.
+Visit the manager console and click the `ADD CLUSTER` button to add the Dragonfly cluster B record.
Note that the IDC is set to `cluster-2` to match the peer whose IDC is `cluster-2`.

![create-cluster-b](../../resource/getting-started/create-cluster-b.png)

-Create dragonfly cluster B record successfully.
+Create Dragonfly cluster B record successfully.

![create-cluster-b-successfully](../../resource/getting-started/create-cluster-b-successfully.png)

-### Use scopes to distinguish different dragonfly clusters
+### Use scopes to distinguish different Dragonfly clusters

-The dragonfly cluster needs to serve the scope. It wil provide scheduler services and
-seed peer services to peers in the scope. The scopes of the dragonfly cluster are configured
+The Dragonfly cluster needs to serve the scope. It will provide scheduler services and
+seed peer services to peers in the scope. The scopes of the Dragonfly cluster are configured
when the console is created and updated. The scopes of the peer are configured in peer YAML config,
the fields are `host.idc`, `host.location` and `host.advertiseIP`,
refer to [dfdaemon config](../../reference/configuration/dfdaemon.md).

-If the peer scopes match the dragonfly cluster scopes, then the peer will use
-the dragonfly cluster's scheduler and seed peer first, and if there is no matching
-dragonfly cluster then use the default dragonfly cluster.
+If the peer scopes match the Dragonfly cluster scopes, then the peer will use
+the Dragonfly cluster's scheduler and seed peer first, and if there is no matching
+Dragonfly cluster then use the default Dragonfly cluster.

-**Location**: The dragonfly cluster needs to serve all peers in the location. When the location in
-the peer configuration matches the location in the dragonfly cluster, the peer will preferentially
-use the scheduler and the seed peer of the dragonfly cluster. It separated by "|",
+**Location**: The Dragonfly cluster needs to serve all peers in the location. When the location in
+the peer configuration matches the location in the Dragonfly cluster, the peer will preferentially
+use the scheduler and the seed peer of the Dragonfly cluster. It is separated by "|",
for example "area|country|province|city".

-**IDC**: The dragonfly cluster needs to serve all peers in the IDC. When the IDC in the peer
-configuration matches the IDC in the dragonfly cluster, the peer will preferentially use the
-scheduler and the seed peer of the dragonfly cluster. IDC has higher priority than location
+**IDC**: The Dragonfly cluster needs to serve all peers in the IDC. When the IDC in the peer
+configuration matches the IDC in the Dragonfly cluster, the peer will preferentially use the
+scheduler and the seed peer of the Dragonfly cluster. IDC has higher priority than location
in the scopes.

-**CIDRs**: The dragonfly cluster needs to serve all peers in the CIDRs. The advertise IP will be reported in the peer
+**CIDRs**: The Dragonfly cluster needs to serve all peers in the CIDRs. 
The advertise IP will be reported in the peer configuration when the peer is started, and if the advertise IP is empty in the peer configuration, -peer will automatically get expose IP as advertise IP. When advertise IP of the peer matches the CIDRs in dragonfly cluster, -the peer will preferentially use the scheduler and the seed peer of the dragonfly cluster. +peer will automatically get expose IP as advertise IP. When advertise IP of the peer matches the CIDRs in Dragonfly cluster, +the peer will preferentially use the scheduler and the seed peer of the Dragonfly cluster. CIDRs has higher priority than IDC in the scopes. -### Create dragonfly cluster B based on helm charts +### Create Dragonfly cluster B based on helm charts Create charts configuration with cluster information in the manager console. @@ -319,7 +319,7 @@ Create charts configuration with cluster information in the manager console. - `externalManager.host` is host of the manager GRPC server. - `externalRedis.addrs[0]` is address of the redis. -Create dragonfly cluster B charts configuration file `charts-config-cluster-b.yaml`, +Create Dragonfly cluster B charts configuration file `charts-config-cluster-b.yaml`, configuration content is as follows: ```yaml @@ -395,7 +395,7 @@ jaeger: enable: true ``` -Create dragonfly cluster B using the configuration file: +Create Dragonfly cluster B using the configuration file: @@ -431,10 +431,10 @@ NOTES: -Check that dragonfly cluster B is deployed successfully: +Check that Dragonfly cluster B is deployed successfully: ```shell -$ kubectl get po -n dragonfly-system +$ kubectl get po -n cluster-b NAME READY STATUS RESTARTS AGE dragonfly-dfdaemon-q8bsg 1/1 Running 0 67s dragonfly-dfdaemon-tsqls 1/1 Running 0 67s @@ -447,9 +447,9 @@ Create dragonfly cluster B successfully. ![install-cluster-b-successfully](../../resource/getting-started/install-cluster-b-successfully.png) -## Using dragonfly to distribute images for multi-cluster kubernetes +## Using Dragonfly to distribute images for multi-cluster kubernetes -### Containerd pull image back-to-source for the first time through dragonfly in cluster A +### Containerd pull image back-to-source for the first time through Dragonfly in cluster A Pull `ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5` image in `kind-worker` node: @@ -472,15 +472,35 @@ Tracing details: ![cluster-a-download-back-to-source-tracing](../../resource/getting-started/cluster-a-download-back-to-source-tracing.jpg) -When pull image back-to-source for the first time through dragonfly, peer uses `cluster-a`'s scheduler and seed peer. +When pull image back-to-source for the first time through Dragonfly, peer uses `cluster-a`'s scheduler and seed peer. It takes `1.47s` to download the `82cbeb56bf8065dfb9ff5a0c6ea212ab3a32f413a137675df59d496e68eaf399` layer. ### Containerd pull image hits the cache of remote peer in cluster A -Pull `ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5` image in `kind-worker2` node: +Delete the dfdaemon whose Node is `kind-worker` to clear the cache of Dragonfly local Peer. + + ```shell -docker exec -i kind-worker2 /usr/local/bin/crictl pull ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5 +# Find pod name. +export POD_NAME=$(kubectl get pods --namespace cluster-a -l "app=dragonfly,release=dragonfly,component=dfdaemon" -o=jsonpath='{.items[?(@.spec.nodeName=="kind-worker")].metadata.name}' | head -n 1 ) + +# Delete pod. 
+kubectl delete pod ${POD_NAME} -n cluster-a +``` + + + +Delete `ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5` image in `kind-worker` node: + +```shell +docker exec -i kind-worker /usr/local/bin/crictl rmi ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5 +``` + +Pull `ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5` image in `kind-worker` node: + +```shell +docker exec -i kind-worker /usr/local/bin/crictl pull ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5 ``` Expose jaeger's port `16686`: @@ -524,15 +544,33 @@ Tracing details: ![cluster-b-download-back-to-source-tracing](../../resource/getting-started/cluster-b-download-back-to-source-tracing.jpg) -When pull image back-to-source for the first time through dragonfly, peer uses `cluster-b`'s scheduler and seed peer. +When pull image back-to-source for the first time through Dragonfly, peer uses `cluster-b`'s scheduler and seed peer. It takes `4.97s` to download the `82cbeb56bf8065dfb9ff5a0c6ea212ab3a32f413a137675df59d496e68eaf399` layer. ### Containerd pull image hits the cache of remote peer in cluster B -Pull `ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5` image in `kind-worker4` node: + + +```shell +# Find pod name. +export POD_NAME=$(kubectl get pods --namespace cluster-b -l "app=dragonfly,release=dragonfly,component=dfdaemon" -o=jsonpath='{.items[?(@.spec.nodeName=="kind-worker3")].metadata.name}' | head -n 1 ) + +# Delete pod. +kubectl delete pod ${POD_NAME} -n cluster-b +``` + + + +Delete `ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5` image in `kind-worker3` node: ```shell -docker exec -i kind-worker4 /usr/local/bin/crictl pull ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5 +docker exec -i kind-worker3 /usr/local/bin/crictl rmi ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5 +``` + +Pull `ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5` image in `kind-worker3` node: + +```shell +docker exec -i kind-worker3 /usr/local/bin/crictl pull ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5 ``` Expose jaeger's port `16686`: diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/getting-started/installation/helm-charts.md b/i18n/zh/docusaurus-plugin-content-docs/current/getting-started/installation/helm-charts.md index 6b51bbd7..2317a6b3 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/getting-started/installation/helm-charts.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/getting-started/installation/helm-charts.md @@ -152,8 +152,7 @@ docker exec -i kind-worker /usr/local/bin/crictl pull alpine:3.19 ```shell # 获取 Pod Name -export POD_NAME=$(kubectl get pods --namespace dragonfly-system -l "app=dragonfly,release=dragonfly,component= -dfdaemon" -o=jsonpath='{.items[?(@.spec.nodeName=="kind-worker")].metadata.name}' | head -n 1 ) +export POD_NAME=$(kubectl get pods --namespace dragonfly-system -l "app=dragonfly,release=dragonfly,component=dfdaemon" -o=jsonpath='{.items[?(@.spec.nodeName=="kind-worker")].metadata.name}' | head -n 1 ) # 获取 Peer ID export PEER_ID=$(kubectl -n dragonfly-system exec -it ${POD_NAME} -- grep "alpine" /var/log/dragonfly/ @@ -216,8 +215,7 @@ Tracing 详细内容: ```shell # 获取 Pod Name -export POD_NAME=$(kubectl get pods --namespace dragonfly-system -l "app=dragonfly,release=dragonfly,component= -dfdaemon" -o=jsonpath='{.items[?(@.spec.nodeName=="kind-worker")].metadata.name}' | head -n 1 ) +export POD_NAME=$(kubectl get pods --namespace dragonfly-system -l "app=dragonfly,release=dragonfly,component=dfdaemon" -o=jsonpath='{.items[?(@.spec.nodeName=="kind-worker")].metadata.name}' | head 
-n 1 ) # 删除 Pod kubectl delete pod ${POD_NAME} -n dragonfly-system diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/getting-started/quick-start/kubernetes.md b/i18n/zh/docusaurus-plugin-content-docs/current/getting-started/quick-start/kubernetes.md index 4ebe4b5e..120bd85e 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/getting-started/quick-start/kubernetes.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/getting-started/quick-start/kubernetes.md @@ -9,7 +9,7 @@ slug: /getting-started/quick-start/kubernetes/ 您可以根据 [Helm Charts](../installation/helm-charts.md) 文档中的内容快速搭建 Dragonfly 的 Kubernetes 集群。 -我们推荐使用 `Containerd with CRI` 和 `CRI-O` 客户端。 +我们推荐使用 `containerd with CRI` 和 `CRI-O` 客户端。 下表列出了一些容器的运行时、版本和文档。 @@ -17,8 +17,8 @@ slug: /getting-started/quick-start/kubernetes/ | Runtime | Version | Document | CRI Support | Pull Command | | ----------------------- | ------- | ------------------------------------------------ | ----------- | ------------------------------------------- | -| Containerd\* | v1.1.0+ | [Link](../../setup/runtime/containerd/mirror.md) | Yes | crictl pull docker.io/library/alpine:latest | -| Containerd without CRI | v1.1.0 | [Link](../../setup/runtime/containerd/proxy.md) | No | ctr image pull docker.io/library/alpine | +| containerd\* | v1.1.0+ | [Link](../../setup/runtime/containerd/mirror.md) | Yes | crictl pull docker.io/library/alpine:latest | +| containerd without CRI | v1.1.0 | [Link](../../setup/runtime/containerd/proxy.md) | No | ctr image pull docker.io/library/alpine | | CRI-O | All | [Link](../../setup/runtime/cri-o.md) | Yes | crictl pull docker.io/library/alpine:latest | @@ -29,7 +29,7 @@ slug: /getting-started/quick-start/kubernetes/ 如果没有可用的 Kubernetes 集群进行测试,推荐使用 [Kind](https://kind.sigs.k8s.io/)。 -创建 Kind 多节点集群配置文件 `kind-config.yaml`, 配置如下: +创建 Kind 多节点集群配置文件 `kind-config.yaml`,配置如下: ```yaml kind: Cluster @@ -72,7 +72,7 @@ kind load docker-image dragonflyoss/dfdaemon:latest ## 基于 Helm Charts 创建 Dragonfly P2P 集群 -创建 Helm Charts 配置文件 `charts-config.yaml`, 配置如下: +创建 Helm Charts 配置文件 `charts-config.yaml`,配置如下: ```yaml containerRuntime: @@ -180,7 +180,7 @@ dragonfly-scheduler-0 1/1 Running 0 8m43s dragonfly-seed-peer-0 1/1 Running 3 (5m56s ago) 8m43s ``` -## Containerd 通过 Dragonfly 首次回源拉镜像 +## containerd 通过 Dragonfly 首次回源拉镜像 在 `kind-worker` Node 下载 `ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5` 镜像: @@ -194,7 +194,7 @@ docker exec -i kind-worker /usr/local/bin/crictl pull ghcr.io/dragonflyoss/drago kubectl --namespace dragonfly-system port-forward service/dragonfly-jaeger-query 16686:16686 ``` -进入 Jaeger 页面 [http://127.0.0.1:16686/search](http://127.0.0.1:16686/search), 搜索 Tags 值为 +进入 Jaeger 页面 [http://127.0.0.1:16686/search](http://127.0.0.1:16686/search),搜索 Tags 值为 `http.url="/v2/dragonflyoss/dragonfly2/scheduler/blobs/sha256:8a9fba45626f402c12bafaadb718690187cae6e5d56296a8fe7d7c4ce19038f7?ns=ghcr.io"` Tracing: @@ -206,9 +206,23 @@ Tracing 详细内容: 集群内首次回源时,下载 `f643e116a03d9604c344edb345d7592c48cc00f2a4848aaf773411f4fb30d2f5` 层需要消耗时间为 `5.58s` -## Containerd 下载镜像命中 Dragonfly 本地 Peer 的缓存 +## containerd 下载镜像命中 Dragonfly 远程 Peer 的缓存 -删除 `kind-worker` Node 的 Containerd 中镜像 `ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5` 的缓存: +删除 Node 为 `kind-worker` 的 dfdaemon,为了清除 Dragonfly 本地 Peer 的缓存。 + + + +```shell +# 获取 Pod Name +export POD_NAME=$(kubectl get pods --namespace dragonfly-system -l "app=dragonfly,release=dragonfly,component=dfdaemon" -o=jsonpath='{.items[?(@.spec.nodeName=="kind-worker")].metadata.name}' | head -n 1 ) + +# 删除 
Pod +kubectl delete pod ${POD_NAME} -n dragonfly-system +``` + + + +删除 `kind-worker` Node 的 containerd 中镜像 `ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5` 的缓存: ```shell docker exec -i kind-worker /usr/local/bin/crictl rmi ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5 @@ -226,24 +240,30 @@ docker exec -i kind-worker /usr/local/bin/crictl pull ghcr.io/dragonflyoss/drago kubectl --namespace dragonfly-system port-forward service/dragonfly-jaeger-query 16686:16686 ``` -进入 Jaeger 页面 [http://127.0.0.1:16686/search](http://127.0.0.1:16686/search), 搜索 Tags 值为 +进入 Jaeger 页面 [http://127.0.0.1:16686/search](http://127.0.0.1:16686/search),搜索 Tags 值为 `http.url="/v2/dragonflyoss/dragonfly2/scheduler/blobs/sha256:8a9fba45626f402c12bafaadb718690187cae6e5d56296a8fe7d7c4ce19038f7?ns=ghcr.io"` Tracing: -![hit-local-peer-cache-search-tracing](../../resource/getting-started/hit-local-peer-cache-search-tracing.jpg) +![hit-remote-peer-cache-search-tracing](../../resource/getting-started/hit-remote-peer-cache-search-tracing.jpg) Tracing 详细内容: -![hit-local-peer-cache-tracing](../../resource/getting-started/hit-local-peer-cache-tracing.jpg) +![hit-remote-peer-cache-tracing](../../resource/getting-started/hit-remote-peer-cache-tracing.jpg) -命中本地 Peer 缓存时,下载 `f643e116a03d9604c344edb345d7592c48cc00f2a4848aaf773411f4fb30d2f5` 层需要消耗时间为 `65.24ms` +命中远程 Peer 缓存时,下载 `f643e116a03d9604c344edb345d7592c48cc00f2a4848aaf773411f4fb30d2f5` 层需要消耗时间为 `117.98ms` + +## containerd 下载镜像命中 Dragonfly 本地 Peer 的缓存 -## Containerd 下载镜像命中 Dragonfly 远程 Peer 的缓存 +删除 `kind-worker` Node 的 containerd 中镜像 `ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5` 的缓存: -在 `kind-worker2` Node 下载 `ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5` 镜像: +```shell +docker exec -i kind-worker /usr/local/bin/crictl rmi ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5 +``` + +在 `kind-worker` Node 下载 `ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5` 镜像: ```shell -docker exec -i kind-worker2 /usr/local/bin/crictl pull ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5 +docker exec -i kind-worker /usr/local/bin/crictl pull ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5 ``` 暴露 Jaeger `16686` 端口: @@ -252,17 +272,17 @@ docker exec -i kind-worker2 /usr/local/bin/crictl pull ghcr.io/dragonflyoss/drag kubectl --namespace dragonfly-system port-forward service/dragonfly-jaeger-query 16686:16686 ``` -进入 Jaeger 页面 [http://127.0.0.1:16686/search](http://127.0.0.1:16686/search), 搜索 Tags 值为 +进入 Jaeger 页面 [http://127.0.0.1:16686/search](http://127.0.0.1:16686/search),搜索 Tags 值为 `http.url="/v2/dragonflyoss/dragonfly2/scheduler/blobs/sha256:8a9fba45626f402c12bafaadb718690187cae6e5d56296a8fe7d7c4ce19038f7?ns=ghcr.io"` Tracing: -![hit-remote-peer-cache-search-tracing](../../resource/getting-started/hit-remote-peer-cache-search-tracing.jpg) +![hit-local-peer-cache-search-tracing](../../resource/getting-started/hit-local-peer-cache-search-tracing.jpg) Tracing 详细内容: -![hit-remote-peer-cache-tracing](../../resource/getting-started/hit-remote-peer-cache-tracing.jpg) +![hit-local-peer-cache-tracing](../../resource/getting-started/hit-local-peer-cache-tracing.jpg) -命中远程 Peer 缓存时,下载 `f643e116a03d9604c344edb345d7592c48cc00f2a4848aaf773411f4fb30d2f5` 层需要消耗时间为 `117.98ms` +命中本地 Peer 缓存时,下载 `f643e116a03d9604c344edb345d7592c48cc00f2a4848aaf773411f4fb30d2f5` 层需要消耗时间为 `65.24ms` ## 预热镜像 @@ -272,16 +292,22 @@ Tracing 详细内容: kubectl --namespace dragonfly-system port-forward service/dragonfly-manager 8080:8080 ``` -预热镜像 `ghcr.io/dragonflyoss/dragonfly2/manager:v2.0.5`: +使用 Open API 之前请先申请 Personal 
Access Token,并且 Access Scopes 选择为 `job`,参考文档 [personal-access-tokens](../../reference/personal-access-tokens.md)。 + +使用 Open API 预热镜像 `ghcr.io/dragonflyoss/dragonfly2/manager:v2.0.5`,参考文档 [preheat](../../reference/preheat.md)。 ```shell -curl --location --request POST 'http://127.0.0.1:8080/api/v1/jobs' \ +curl --location --request POST 'http://127.0.0.1:8080/oapi/v1/jobs' \ --header 'Content-Type: application/json' \ +--header 'Authorization: Bearer your_personal_access_token' \ --data-raw '{ "type": "preheat", "args": { "type": "image", - "url": "https://ghcr.io/v2/dragonflyoss/dragonfly2/manager/manifests/v2.0.5" + "url": "https://ghcr.io/v2/dragonflyoss/dragonfly2/manager/manifests/v2.0.5", + "filteredQueryParams": "Expires&Signature", + "username": "your_registry_username", + "password": "your_registry_password" } }' ``` @@ -294,8 +320,10 @@ curl --location --request POST 'http://127.0.0.1:8080/api/v1/jobs' \ 使用预热任务 ID 轮训查询任务是否成功: -```bash -curl --request GET 'http://127.0.0.1:8080/api/v1/jobs/1' +```shell +curl --request GET 'http://127.0.0.1:8080/oapi/v1/jobs/1' \ +--header 'Content-Type: application/json' \ +--header 'Authorization: Bearer your_personal_access_token' ``` 如果返回预热任务状态为 `SUCCESS`,表示预热成功: @@ -316,7 +344,7 @@ docker exec -i kind-worker /usr/local/bin/crictl pull ghcr.io/dragonflyoss/drago kubectl --namespace dragonfly-system port-forward service/dragonfly-jaeger-query 16686:16686 ``` -进入 Jaeger 页面 [http://127.0.0.1:16686/search](http://127.0.0.1:16686/search), 搜索 Tags 值为 +进入 Jaeger 页面 [http://127.0.0.1:16686/search](http://127.0.0.1:16686/search),搜索 Tags 值为 `http.url="/v2/dragonflyoss/dragonfly2/manager/blobs/sha256:ceba1302dd4fbd8fc7fd7a135c8836c795bc3542b9b134597eba13c75d2d2cb0?ns=ghcr.io"` Tracing: diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/getting-started/quick-start/multi-cluster-kubernetes.md b/i18n/zh/docusaurus-plugin-content-docs/current/getting-started/quick-start/multi-cluster-kubernetes.md index 30189225..1e36bb5e 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/getting-started/quick-start/multi-cluster-kubernetes.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/getting-started/quick-start/multi-cluster-kubernetes.md @@ -19,7 +19,7 @@ Peers 只能在当前 Dragonfly 集群内 P2P 传输数据,所以一定要保 您可以根据 [Helm Charts](../installation/helm-charts.md) 文档中的内容快速搭建 Dragonfly 的 Kubernetes 集群。 -我们推荐使用 `Containerd with CRI` 和 `CRI-O` 客户端。 +我们推荐使用 `containerd with CRI` 和 `CRI-O` 客户端。 下表列出了一些容器的运行时、版本和文档。 @@ -27,8 +27,8 @@ Peers 只能在当前 Dragonfly 集群内 P2P 传输数据,所以一定要保 | Runtime | Version | Document | CRI Support | Pull Command | | ----------------------- | ------- | ------------------------------------------------ | ----------- | ------------------------------------------- | -| Containerd\* | v1.1.0+ | [Link](../../setup/runtime/containerd/mirror.md) | Yes | crictl pull docker.io/library/alpine:latest | -| Containerd without CRI | v1.1.0 | [Link](../../setup/runtime/containerd/proxy.md) | No | ctr image pull docker.io/library/alpine | +| containerd\* | v1.1.0+ | [Link](../../setup/runtime/containerd/mirror.md) | Yes | crictl pull docker.io/library/alpine:latest | +| containerd without CRI | v1.1.0 | [Link](../../setup/runtime/containerd/proxy.md) | No | ctr image pull docker.io/library/alpine | | CRI-O | All | [Link](../../setup/runtime/cri-o.md) | Yes | crictl pull docker.io/library/alpine:latest | @@ -39,7 +39,7 @@ Peers 只能在当前 Dragonfly 集群内 P2P 传输数据,所以一定要保 如果没有可用的 Kubernetes 集群进行测试,推荐使用 [Kind](https://kind.sigs.k8s.io/)。 -创建 Kind 多节点集群配置文件 `kind-config.yaml`, 
配置如下: +创建 Kind 多节点集群配置文件 `kind-config.yaml`,配置如下: ```yaml kind: Cluster @@ -99,7 +99,7 @@ kind load docker-image dragonflyoss/dfdaemon:latest ### 基于 Helm Charts 创建 Dragonfly 集群 A -创建 Helm Charts 的 Dragonfly 集群 A 的配置文件 `charts-config-cluster-a.yaml`, 配置如下: +创建 Helm Charts 的 Dragonfly 集群 A 的配置文件 `charts-config-cluster-a.yaml`,配置如下: ```yaml containerRuntime: @@ -245,7 +245,7 @@ kubectl apply -f manager-rest-svc.yaml -n cluster-a ### 访问 Manager 控制台 -使用默认用户名 `root`, 密码 `dragonfly` 访问 `localhost:8080` 的 Manager 控制台地址,并且进入控制台。 +使用默认用户名 `root`,密码 `dragonfly` 访问 `localhost:8080` 的 Manager 控制台地址,并且进入控制台。 ![signin](../../resource/getting-started/signin.png) @@ -309,7 +309,7 @@ CIDR 在 Scopes 内的优先级高于 IDC。 - `externalManager.host` 是 Manager 的 GRPC 服务的 Host。 - `externalRedis.addrs[0]` 是 Redis 的服务地址。 -创建 Helm Charts 的 Dragonfly 集群 B 的配置文件 `charts-config-cluster-b.yaml`, 配置如下: +创建 Helm Charts 的 Dragonfly 集群 B 的配置文件 `charts-config-cluster-b.yaml`,配置如下: ```yaml containerRuntime: @@ -423,7 +423,7 @@ NOTES: 检查 Dragonfly 集群 B 是否部署成功: ```shell -$ kubectl get po -n dragonfly-system +$ kubectl get po -n cluster-b NAME READY STATUS RESTARTS AGE dragonfly-dfdaemon-q8bsg 1/1 Running 0 67s dragonfly-dfdaemon-tsqls 1/1 Running 0 67s @@ -438,7 +438,7 @@ dragonfly-seed-peer-0 1/1 Running 0 67s ## 使用 Dragonfly 在多集群环境下分发镜像 -### 集群 A 中 Containerd 通过 Dragonfly 首次回源拉镜像 +### 集群 A 中 containerd 通过 Dragonfly 首次回源拉镜像 在 `kind-worker` Node 下载 `ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5` 镜像: @@ -452,7 +452,7 @@ docker exec -i kind-worker /usr/local/bin/crictl pull ghcr.io/dragonflyoss/drago kubectl --namespace cluster-a port-forward service/dragonfly-jaeger-query 16686:16686 ``` -进入 Jaeger 页面 [http://127.0.0.1:16686/search](http://127.0.0.1:16686/search), 搜索 Tags 值为 +进入 Jaeger 页面 [http://127.0.0.1:16686/search](http://127.0.0.1:16686/search),搜索 Tags 值为 `http.url="/v2/dragonflyoss/dragonfly2/scheduler/blobs/sha256:82cbeb56bf8065dfb9ff5a0c6ea212ab3a32f413a137675df59d496e68eaf399?ns=ghcr.io"` Tracing: @@ -464,12 +464,32 @@ Tracing 详细内容: 集群 A 内首次回源时,下载 `82cbeb56bf8065dfb9ff5a0c6ea212ab3a32f413a137675df59d496e68eaf399` 层需要消耗时间为 `1.47s`。 -### 集群 A 中 Containerd 下载镜像命中 Dragonfly 远程 Peer 的缓存 +### 集群 A 中 containerd 下载镜像命中 Dragonfly 远程 Peer 的缓存 -在 `kind-worker2` Node 下载 `ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5` 镜像: +删除 Node 为 `kind-worker` 的 dfdaemon,为了清除 Dragonfly 本地 Peer 的缓存。 + + ```shell -docker exec -i kind-worker2 /usr/local/bin/crictl pull ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5 +# 获取 Pod Name +export POD_NAME=$(kubectl get pods --namespace cluster-a -l "app=dragonfly,release=dragonfly,component=dfdaemon" -o=jsonpath='{.items[?(@.spec.nodeName=="kind-worker")].metadata.name}' | head -n 1 ) + +# 删除 Pod +kubectl delete pod ${POD_NAME} -n cluster-a +``` + + + +删除 `kind-worker` Node 的 containerd 中镜像 `ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5` 的缓存: + +```shell +docker exec -i kind-worker /usr/local/bin/crictl rmi ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5 +``` + +在 `kind-worker` Node 下载 `ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5` 镜像: + +```shell +docker exec -i kind-worker /usr/local/bin/crictl pull ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5 ``` 暴露 Jaeger `16686` 端口: @@ -478,7 +498,7 @@ docker exec -i kind-worker2 /usr/local/bin/crictl pull ghcr.io/dragonflyoss/drag kubectl --namespace cluster-a port-forward service/dragonfly-jaeger-query 16686:16686 ``` -进入 Jaeger 页面 [http://127.0.0.1:16686/search](http://127.0.0.1:16686/search), 搜索 Tags 值为 +进入 Jaeger 页面 
[http://127.0.0.1:16686/search](http://127.0.0.1:16686/search),搜索 Tags 值为 `http.url="/v2/dragonflyoss/dragonfly2/scheduler/blobs/sha256:82cbeb56bf8065dfb9ff5a0c6ea212ab3a32f413a137675df59d496e68eaf399?ns=ghcr.io"` Tracing: @@ -490,7 +510,7 @@ Tracing 详细内容: 集群 A 中命中远程 Peer 缓存时,下载 `82cbeb56bf8065dfb9ff5a0c6ea212ab3a32f413a137675df59d496e68eaf399` 层需要消耗时间为 `37.48ms`。 -### 集群 B 中 Containerd 通过 Dragonfly 首次回源拉镜像 +### 集群 B 中 containerd 通过 Dragonfly 首次回源拉镜像 在 `kind-worker3` Node 下载 `ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5` 镜像: @@ -504,7 +524,7 @@ docker exec -i kind-worker3 /usr/local/bin/crictl pull ghcr.io/dragonflyoss/drag kubectl --namespace cluster-b port-forward service/dragonfly-jaeger-query 16686:16686 ``` -进入 Jaeger 页面 [http://127.0.0.1:16686/search](http://127.0.0.1:16686/search), 搜索 Tags 值为 +进入 Jaeger 页面 [http://127.0.0.1:16686/search](http://127.0.0.1:16686/search),搜索 Tags 值为 `http.url="/v2/dragonflyoss/dragonfly2/scheduler/blobs/sha256:82cbeb56bf8065dfb9ff5a0c6ea212ab3a32f413a137675df59d496e68eaf399?ns=ghcr.io"` Tracing: @@ -516,12 +536,32 @@ Tracing 详细内容: 集群 B 中命中远程 Peer 缓存时,下载 `82cbeb56bf8065dfb9ff5a0c6ea212ab3a32f413a137675df59d496e68eaf399` 层需要消耗时间为 `4.97s`。 -### 集群 B 中 Containerd 下载镜像命中 Dragonfly 远程 Peer 的缓存 +### 集群 B 中 containerd 下载镜像命中 Dragonfly 远程 Peer 的缓存 -在 `kind-worker4` Node 下载 `ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5` 镜像: +删除 Node 为 `kind-worker3` 的 dfdaemon,为了清除 Dragonfly 本地 Peer 的缓存。 + + ```shell -docker exec -i kind-worker4 /usr/local/bin/crictl pull ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5 +# 获取 Pod Name +export POD_NAME=$(kubectl get pods --namespace cluster-b -l "app=dragonfly,release=dragonfly,component=dfdaemon" -o=jsonpath='{.items[?(@.spec.nodeName=="kind-worker3")].metadata.name}' | head -n 1 ) + +# 删除 Pod +kubectl delete pod ${POD_NAME} -n cluster-b +``` + + + +删除 `kind-worker3` Node 的 containerd 中镜像 `ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5` 的缓存: + +```shell +docker exec -i kind-worker3 /usr/local/bin/crictl rmi ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5 +``` + +在 `kind-worker3` Node 下载 `ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5` 镜像: + +```shell +docker exec -i kind-worker3 /usr/local/bin/crictl pull ghcr.io/dragonflyoss/dragonfly2/scheduler:v2.0.5 ``` 暴露 Jaeger `16686` 端口: @@ -530,7 +570,7 @@ docker exec -i kind-worker4 /usr/local/bin/crictl pull ghcr.io/dragonflyoss/drag kubectl --namespace cluster-b port-forward service/dragonfly-jaeger-query 16686:16686 ``` -进入 Jaeger 页面 [http://127.0.0.1:16686/search](http://127.0.0.1:16686/search), 搜索 Tags 值为 +进入 Jaeger 页面 [http://127.0.0.1:16686/search](http://127.0.0.1:16686/search),搜索 Tags 值为 `http.url="/v2/dragonflyoss/dragonfly2/scheduler/blobs/sha256:82cbeb56bf8065dfb9ff5a0c6ea212ab3a32f413a137675df59d496e68eaf399?ns=ghcr.io"` Tracing: