From df6a2ae7b2f247366f5aec2bc5aedbc9c0a7e448 Mon Sep 17 00:00:00 2001 From: Dominika Schweier Date: Mon, 28 Oct 2024 15:04:14 +0100 Subject: [PATCH 1/6] Correcting identifiers Signed-off-by: Dominika Schweier --- ...to-operator-codegen-sdk-developer-guide.md | 17 +- .../contributor-guides/minimal-environment.md | 4 +- .../unit-testing-mockery.md | 34 +- .../en/docs/guides/install-guides/_index.md | 10 +- .../install-guides/common-components.md | 10 +- .../install-guides/common-dependencies.md | 4 +- .../install-guides/demo-vagrant-windows.md | 6 +- .../install-guides/explore-nephio-free5gc.md | 24 +- .../guides/install-guides/explore-sandbox.md | 34 +- .../guides/install-guides/install-on-byoc.md | 24 +- .../guides/install-guides/install-on-gce.md | 4 +- .../guides/install-guides/install-on-gcp.md | 74 +-- .../install-guides/install-on-multiple-vm.md | 30 +- .../install-guides/install-on-openshift.md | 6 +- .../install-guides/install-on-single-vm.md | 26 +- .../install-guides/package-transformations.md | 240 +++++----- .../guides/install-guides/webui-auth-gcp.md | 22 +- .../guides/install-guides/webui-auth-okta.md | 26 +- .../en/docs/guides/install-guides/webui.md | 18 +- .../en/docs/guides/user-guides/controllers.md | 6 +- .../guides/user-guides/exercise-1-free5gc.md | 62 +-- .../docs/guides/user-guides/exercise-2-oai.md | 42 +- .../docs/guides/user-guides/helm/flux-helm.md | 16 +- ...helm-to-operator-codegen-sdk-user-guide.md | 4 +- .../network-architecture/o-ran-integration.md | 2 +- content/en/docs/porch/config-as-data.md | 16 +- .../docs/porch/contributors-guide/_index.md | 36 +- .../porch/contributors-guide/dev-process.md | 16 +- .../environment-setup-vm.md | 16 +- .../contributors-guide/environment-setup.md | 22 +- .../en/docs/porch/package-orchestration.md | 140 +++--- content/en/docs/porch/package-variant.md | 438 +++++++++--------- .../porch/running-porch/running-locally.md | 8 +- .../porch/running-porch/running-on-GKE.md | 28 +- .../adding-external-git-ca-bundle.md | 12 +- .../using-porch/install-and-using-porch.md | 146 +++--- .../porch/using-porch/porchctl-cli-guide.md | 4 +- .../porch/using-porch/usage-porch-kpt-cli.md | 86 ++-- 38 files changed, 854 insertions(+), 859 deletions(-) diff --git a/content/en/docs/guides/contributor-guides/helm-to-operator-codegen-sdk-developer-guide.md b/content/en/docs/guides/contributor-guides/helm-to-operator-codegen-sdk-developer-guide.md index df56eafd..85209bec 100644 --- a/content/en/docs/guides/contributor-guides/helm-to-operator-codegen-sdk-developer-guide.md +++ b/content/en/docs/guides/contributor-guides/helm-to-operator-codegen-sdk-developer-guide.md @@ -6,7 +6,7 @@ weight: 1 The [Helm to Operator Codegen SDK](https://github.com/nephio-project/nephio-sdk/tree/main/helm-to-operator-codegen-sdk) offers a streamlined solution for translating existing Helm charts into Kubernetes operators with minimal effort and cost. ## The Flow Diagram -In a nutshell, the Helm-Charts are converted to YAML files using the values provided in "values.yaml". Then, each Kubernetes Resource Model (KRM) in the YAML is translated into Go code, employing one of two methods. +In a nutshell, the Helm-Charts are converted to YAML files using the values provided in *values.yaml*. Then, each Kubernetes Resource Model (KRM) in the YAML is translated into Go code, employing one of two methods. 
1) If the resource is Runtime-Supported, it undergoes a conversion process where the KRM resource is first transformed into a Runtime Object, then into JSON, and finally into Go code. 2) Otherwise, if the resource is not Runtime-Supported, it is converted into an Unstructured Object and then into Go code. @@ -20,7 +20,7 @@ Helm to YAML conversion is achieved by running the following command `helm template --namespace --output-dir “temp/templated/”` internally. As of now, it retrieves the values from default "values.yaml" ### Flow-2: YAML Split -The SDK iterates over each YAML file in the "converted-yamls" directory. If a YAML file contains multiple Kubernetes Resource Models (KRM), separated by "---", the SDK splits the YAML file accordingly to isolate each individual KRM resource. This ensures that each KRM resource is processed independently. +The SDK iterates over each YAML file in the *converted-yamls* directory. If a .yaml file contains multiple Kubernetes Resource Models (KRM), separated by "---", the SDK splits the .yaml file accordingly to isolate each individual KRM resource. This ensures that each KRM resource is processed independently. ### Runtime-Object and Unstruct-Object The SDK currently employs the "runtime-object method" to handle Kubernetes resources whose structure is recognized by Kubernetes by default. Examples of such resources include Deployment, Service, and ConfigMap. Conversely, resources that are not inherently known to Kubernetes and require explicit installation or definition, such as Third-Party Custom Resource Definitions (CRDs) like NetworkAttachmentDefinition or PrometheusRule, are processed using the "unstructured-object" method. Such examples are given below: @@ -69,15 +69,14 @@ networkAttachmentDefinition1 := &unstructured.Unstructured{ ``` ### Flow-3.1: KRM to Runtime-Object -The conversion process relies on the "k8s.io/apimachinery/pkg/runtime" package. Currently, only the API version "v1" is supported. The supported kinds for the Runtime Object method include: -`Deployment, Service, Secret, Role, RoleBinding, ClusterRoleBinding, PersistentVolumeClaim, StatefulSet, ServiceAccount, ClusterRole, PriorityClass, ConfigMap` +The conversion process relies on the "k8s.io/apimachinery/pkg/runtime" package. Currently, only the API version "v1" is supported. The supported kinds for the Runtime Object method include: Deployment, Service, Secret, Role, RoleBinding, ClusterRoleBinding, PersistentVolumeClaim, StatefulSet, ServiceAccount, ClusterRole, PriorityClass, ConfigMap ### Flow-3.2: Runtime-Object to JSON Firstly, the SDK performs a typecast of the runtime object to its actual data type. For instance, if the Kubernetes Kind is "Service," the SDK typecasts the runtime object to the specific data type corev1.Service. Then, it conducts a Depth-First Search (DFS) traversal over the corev1.Service object using reflection. During this traversal, the SDK generates a JSON structure that encapsulates information about the struct hierarchy, including corresponding data types and values. This transformation results in a JSON representation of the corev1.Service object's structure and content. #### DFS Algorithm Cases -The DFS function iterates over the runtime object, traversing its structure in a Depth-First Search manner. During this traversal, it constructs the JSON structure while inspecting each attribute for its data type and value. 
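
As a rough illustration of flows 3.1 and 3.2 (a sketch only, not the SDK's actual generator), the snippet below decodes a made-up Service manifest into a typed runtime object with the standard apimachinery deserializer and prints it back as JSON. The manifest and all names are invented for the example; the real SDK walks the typed object with reflection and records field types rather than simply re-marshalling it.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes/scheme"
)

// A made-up Service manifest standing in for one of the templated YAML files.
const serviceYAML = `
apiVersion: v1
kind: Service
metadata:
  name: example-svc
spec:
  selector:
    app: example
  ports:
  - port: 80
`

func main() {
	// Flow-3.1: decode the KRM YAML into a runtime object known to the scheme.
	obj, gvk, err := scheme.Codecs.UniversalDeserializer().Decode([]byte(serviceYAML), nil, nil)
	if err != nil {
		panic(err)
	}

	// Flow-3.2: typecast the runtime object to its concrete type.
	svc, ok := obj.(*corev1.Service)
	if !ok {
		panic("decoded object is not a v1 Service")
	}

	// Re-marshalling shows the typed structure; the SDK instead traverses the
	// struct with reflection so that it can also record the field data types.
	out, err := json.MarshalIndent(svc, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(gvk.Kind, string(out))
}
```
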
Attributes that have default values in the runtime object but are not explicitly set in the YAML file are omitted from the conversion process. This ensures that only explicitly defined attributes with their corresponding values are included in the resulting JSON structure. The function follows this flow to accurately capture the structure, data types, and values of the Kubernetes resource while excluding default attributes that are not explicitly configured in the YAML file. +The DFS function iterates over the runtime object, traversing its structure in a Depth-First Search manner. During this traversal, it constructs the JSON structure while inspecting each attribute for its data type and value. Attributes that have default values in the runtime object but are not explicitly set in the .yaml file are omitted from the conversion process. This ensures that only explicitly defined attributes with their corresponding values are included in the resulting JSON structure. The function follows this flow to accurately capture the structure, data types, and values of the Kubernetes resource while excluding default attributes that are not explicitly configured in the .yaml file. A) Base-Cases: @@ -156,7 +155,7 @@ spec: ``` ### Flow-3.3: JSON to String (Go-Code) -The SDK reads the JSON file containing the information about the Kubernetes resource and then translates this information into a string of Go code. This process involves parsing the JSON structure and generating corresponding Go code strings based on the structure, data types, and values extracted from the JSON representation. Ultimately, this results in a string that represents the Kubernetes resource in a format compatible with Go code. +The SDK reads the .json file containing the information about the Kubernetes resource and then translates this information into a string of Go code. This process involves parsing the JSON structure and generating corresponding Go code strings based on the structure, data types, and values extracted from the JSON representation. Ultimately, this results in a string that represents the Kubernetes resource in a format compatible with Go code. #### TraverseJSON Cases (Json-to-String) The traverse JSON function is responsible for converting JSON data into Go code. Here's how it handles base cases: @@ -275,12 +274,12 @@ Structs need to be initialized using curly brackets {}, whereas enums need Paren Solution: We solve the above problems by building an “enumModuleMapping” which is a set that stores all data types that are enums. i.e. If a data type belongs to the set, then It is an Enum. -There is an automation-script that takes the types.go files of packages and build the config-json. For details, Please refer [here](https://github.com/nephio-project/nephio-sdk/tree/main/helm-to-operator-codegen-sdk/config) +There is an automation-script that takes the *types.go* files of packages and build the config-json. For details, Please refer [here](https://github.com/nephio-project/nephio-sdk/tree/main/helm-to-operator-codegen-sdk/config) ### Flow-4: KRM to Unstruct-Obj to String(Go-code) -All Kubernetes resource kinds that are not supported by the runtime-object method are handled using the unstructured method. In this approach, the Kubernetes Resource MOdel (KRM) is converted to an unstructured object using the package "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured". -Then, We traverse the unstructured-Obj in a DFS fashion and build the gocode-string. 
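
To make flow 4 concrete, the sketch below walks the backing map of an unstructured object depth-first and prints it as a Go composite literal. It is a toy version of the traversal described here, with an invented NetworkAttachmentDefinition payload, and is not the SDK's code generator:

```go
package main

import (
	"fmt"
	"sort"
	"strings"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// emitGo renders a value from an unstructured object's map as a Go
// composite-literal string, walking maps and slices depth-first.
func emitGo(v interface{}, indent string) string {
	switch t := v.(type) {
	case map[string]interface{}:
		keys := make([]string, 0, len(t))
		for k := range t {
			keys = append(keys, k)
		}
		sort.Strings(keys) // deterministic output
		var b strings.Builder
		b.WriteString("map[string]interface{}{\n")
		for _, k := range keys {
			b.WriteString(fmt.Sprintf("%s\t%q: %s,\n", indent, k, emitGo(t[k], indent+"\t")))
		}
		b.WriteString(indent + "}")
		return b.String()
	case []interface{}:
		var b strings.Builder
		b.WriteString("[]interface{}{\n")
		for _, e := range t {
			b.WriteString(fmt.Sprintf("%s\t%s,\n", indent, emitGo(e, indent+"\t")))
		}
		b.WriteString(indent + "}")
		return b.String()
	case string:
		return fmt.Sprintf("%q", t)
	default: // numbers, booleans, nil
		return fmt.Sprintf("%v", t)
	}
}

func main() {
	// A minimal, made-up NetworkAttachmentDefinition-style object.
	u := &unstructured.Unstructured{
		Object: map[string]interface{}{
			"apiVersion": "k8s.cni.cncf.io/v1",
			"kind":       "NetworkAttachmentDefinition",
			"metadata":   map[string]interface{}{"name": "example-n3"},
		},
	}
	fmt.Printf("networkAttachmentDefinition := &unstructured.Unstructured{\n\tObject: %s,\n}\n",
		emitGo(u.Object, "\t"))
}
```
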
+All Kubernetes resource kinds that are not supported by the runtime-object method are handled using the unstructured method. In this approach, the Kubernetes Resource MOdel (KRM) is converted to an unstructured object using the package *k8s.io/apimachinery/pkg/apis/meta/v1/unstructured*"*. +Then, we traverse the unstructured-Obj in a DFS fashion and build the gocode-string. #### DFS Algorithm Cases (Unstruct-Version) diff --git a/content/en/docs/guides/contributor-guides/minimal-environment.md b/content/en/docs/guides/contributor-guides/minimal-environment.md index cf762bca..94159f8b 100644 --- a/content/en/docs/guides/contributor-guides/minimal-environment.md +++ b/content/en/docs/guides/contributor-guides/minimal-environment.md @@ -106,11 +106,11 @@ Connecting to Gitea allows you to see the actions that Nephio takes on Gitea. kubectl port-forward -n gitea svc/gitea 3000:3000 ``` -2. Browse to the Gitea web client at `http://localhost:3000` and log on. +2. Browse to the Gitea web client at http://localhost:3000 and log on. ## VS Code Configuration -Set up a launch configuration in VS Code `launch.json` similar to the configuration below: +Set up a launch configuration in VS Code *launch.json* similar to the configuration below: ```json { diff --git a/content/en/docs/guides/contributor-guides/unit-testing-mockery.md b/content/en/docs/guides/contributor-guides/unit-testing-mockery.md index 8f745362..eae4dd9f 100644 --- a/content/en/docs/guides/contributor-guides/unit-testing-mockery.md +++ b/content/en/docs/guides/contributor-guides/unit-testing-mockery.md @@ -10,11 +10,11 @@ This guide will help folks come up to speed on using testify and mockery. ## How Mockery works -The [mockery documentation](https://vektra.github.io/mockery/latest/#why-mockery) describes why you would use and how to use Mockery. In a nutshell, Mockery generates mock implementations for interfaces in `go`, which you can then use instead of real implementations when unit testing. +The [mockery documentation](https://vektra.github.io/mockery/latest/#why-mockery) describes why you would use and how to use Mockery. In a nutshell, Mockery generates mock implementations for interfaces in go, which you can then use instead of real implementations when unit testing. ## Mockery support in Nephio `make` -The `make` files in Nephio repos containing `go` code have targets to support mockery. +The `make` files in Nephio repos containing go code have targets to support mockery. The [default-mockery.mk](https://github.com/nephio-project/nephio/blob/main/default-mockery.mk) file in the root of Nephio repos is included in Nephio `make` runs. @@ -27,13 +27,13 @@ The targets above must be run explicitly. Run `make install-mockery` to install mockery in your container runtime (docker, podman etc) or locally if you have no container runtime running. You need only run this target once unless you need to reinstall Mockery for whatever reason. -Run `make generate-mocks` to generate the mocked implementation of the go interfaces specified in '.mockery.yaml' files. You need to run this target each time an interface that you are mocking changes or whenever you change the contents of a `.mockery.yaml` file. You can run `make generate-mocks` in the repo root to generate or re-generate all interfaces or in subdirectories containing a `Makefile` to generate or regenerate only the interfaces in that subdirectory and its children. +Run `make generate-mocks` to generate the mocked implementation of the go interfaces specified in *.mockery.yaml* files. 
You need to run this target each time an interface that you are mocking changes or whenever you change the contents of a *.mockery.yaml* file. You can run `make generate-mocks` in the repo root to generate or re-generate all interfaces or in subdirectories containing a Makefile to generate or regenerate only the interfaces in that subdirectory and its children. -The generate-mocks target looks for `.mockery.yaml` files in the repo and it runs the mockery mock generator on each `.mockery.yaml` file it finds. This has the nice effect of allowing `.mockery.yaml` files to be in either the root of the repo or in subdirectories, so the choice of placement of `.mockery.yaml` files is left to the developer. +The generate-mocks target looks for *.mockery.yaml`* files in the repo and it runs the mockery mock generator on each *.mockery.yaml* file it finds. This has the nice effect of allowing *.mockery.yaml* files to be in either the root of the repo or in subdirectories, so the choice of placement of *.mockery.yaml* files is left to the developer. ## The .mockery.yaml file -The `.mockery.yaml` file specifies which mock implementations Mockery should generate and also controls how that generation is performed. Here we just give an overview of `mockery.yaml`. For full details consult the [configuration](https://github.com/vektra/mockery/blob/master/docs/configuration.md) section of the Mockery documentation. +The *.mockery.yaml* file specifies which mock implementations Mockery should generate and also controls how that generation is performed. Here we just give an overview of *mockery.yaml*. For full details consult the [configuration](https://github.com/vektra/mockery/blob/master/docs/configuration.md) section of the Mockery documentation. ### Example 1 @@ -53,7 +53,7 @@ We provide a list of the packages for which we want to generate mocks. In this e 6. dir: "{{.InterfaceDir}}" ``` -We want mocks to be generated for the `GiteaClient` go interface (line 4). The `{{.InterfaceDir}}` parameter (line 6) asks Mockery to generate the mock file in the same directory as the interface is located. +We want mocks to be generated for the GiteaClien go interface (line 4). The {{.InterfaceDir}} parameter (line 6) asks Mockery to generate the mock file in the same directory as the interface is located. ### Example 2 @@ -80,14 +80,14 @@ Lines 2 to 7 are as explained in Example 1 above. 8. sigs.k8s.io/controller-runtime/pkg/client: ``` -Generate mocks for the external package `sigs.k8s.io/controller-runtime/pkg/client`. +Generate mocks for the external package *sigs.k8s.io/controller-runtime/pkg/client*. ``` 9. interfaces: 10. Client: ``` -Generate a mock implementation of the go interface `Client` in the external package `sigs.k8s.io/controller-runtime/pkg/client`. +Generate a mock implementation of the go interface Client in the external package *sigs.k8s.io/controller-runtime/pkg/client*. ``` 11. config: @@ -95,7 +95,7 @@ Generate a mock implementation of the go interface `Client` in the external pack 13. outpkg: "mocks" ``` -Create the mocks for the `Client` interface in the `mocks/external/client` directory and cal the output package `mocks`. +Create the mocks for the Client interface in the *mocks/external/client* directory and cal the output package *mocks*. 
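
Before looking at the generated file described in the next section, it can help to see the shape of the code mockery produces. The hand-written mock below follows the same pattern: a struct embedding mock.Mock, with one method per interface method that records the call and replays canned return values. The Greeter interface and all names are invented for illustration; the generated GiteaClient and Client mocks follow the same pattern but are produced for you.

```go
package example

import (
	"testing"

	"github.com/stretchr/testify/mock"
	"github.com/stretchr/testify/require"
)

// Greeter is a stand-in for an interface you would list in .mockery.yaml.
type Greeter interface {
	Greet(name string) (string, error)
}

// MockGreeter is written by hand here, but it has the same shape as a
// mockery-generated mock: it embeds mock.Mock and records/replays calls.
type MockGreeter struct {
	mock.Mock
}

func (m *MockGreeter) Greet(name string) (string, error) {
	args := m.Called(name)
	return args.String(0), args.Error(1)
}

func TestGreet(t *testing.T) {
	g := new(MockGreeter)
	// Expect one call with "nephio" and tell the mock what to return.
	g.On("Greet", "nephio").Return("hello nephio", nil)

	out, err := g.Greet("nephio")

	require.NoError(t, err)
	require.Equal(t, "hello nephio", out)
	g.AssertExpectations(t)
}
```

Reading the generated file with this pattern in mind makes it easier to follow, although, as noted below, you can also treat it entirely as a black box.
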
## The generated mock implementation @@ -107,7 +107,7 @@ We can treat this generated file as a black box and we do not have to know the d The [mockery utils](https://github.com/nephio-project/nephio/tree/main/testing/mockeryutils) package is a utility package that you can use to initialize your mocks and to define some common fields for your tests. -[mockeryutils-types.go](https://github.com/nephio-project/nephio/blob/main/testing/mockeryutils/mockeryutils-types.go) contains the `MockHelper` struct, which allows you to control the behaviour of a mock. +[mockeryutils-types.go](https://github.com/nephio-project/nephio/blob/main/testing/mockeryutils/mockeryutils-types.go) contains the MockHelper struct, which allows you to control the behaviour of a mock. ``` type MockHelper struct { @@ -117,15 +117,15 @@ type MockHelper struct { } ``` -The `MockHelper` struct is used to configure a mocked method to expect and return a certain set of arguments. We pass instances of this struct to the mocked interface during tests. +The MockHelper struct is used to configure a mocked method to expect and return a certain set of arguments. We pass instances of this struct to the mocked interface during tests. -[mockeryutils.go](https://github.com/nephio-project/nephio/blob/main/testing/mockeryutils/mockeryutils-types.go) contains the `InitMocks` function, which initializes your mocks for you before a test. +[mockeryutils.go](https://github.com/nephio-project/nephio/blob/main/testing/mockeryutils/mockeryutils-types.go) contains the InitMocks function, which initializes your mocks for you before a test. ``` func InitMocks(mocked *mock.Mock, mocks []MockHelper) ``` -For the given `mocked` interface, the function initializes the `mocks` as specified in the given `MockHelper` array. +For the given mocked interface, the function initializes the mocks as specified in the given MockHelper array. ## Using the mock implementation in unit tests @@ -151,7 +151,7 @@ type repoTest struct { wantErr bool } ``` -The code above allows us to specify input data and the expected outcome for tests. Each test is specified as an instance of the `repoTest` struct. For each test, we specify its fields and arguments, and specify the mocking for the test. +The code above allows us to specify input data and the expected outcome for tests. Each test is specified as an instance of the repoTest struct. For each test, we specify its fields and arguments, and specify the mocking for the test. ``` func TestUpsertRepo(t *testing.T) @@ -194,7 +194,7 @@ This is the specification of an array of tests that we will run. } ``` -The code above specifies a single test and is an instance of the `tests` array. We specify the fields, arguments, and mocks for the test. In this case, we mock three functions on our GiteaClient interface: `GetMyUserInfo`, `GetRepo`, and `CreateRepo`. We specify the arguments we expect for each function and specify what the function should return if it receives correct arguments. Of course, if the mocked function receives incorrect arguments, it will report an error. The `wantErr` value indicates if we expect the `upsertRepo` function being tested to succeed or fail. +The code above specifies a single test and is an instance of the tests array. We specify the fields, arguments, and mocks for the test. In this case, we mock three functions on our GiteaClient interface: GetMyUserInfo, GetRepo, and CreateRepo. 
We specify the arguments we expect for each function and specify what the function should return if it receives correct arguments. Of course, if the mocked function receives incorrect arguments, it will report an error. The wantErr value indicates if we expect the upsertRepo function being tested to succeed or fail. ``` for _, tt := range tests { @@ -214,7 +214,7 @@ for _, tt := range tests { } ``` -The code above executes the tests. We run a reconciler `r` and initialize our tests using the local `initMockeryTests()` function. We then call the `upsertRepo` function to test it and check the result. +The code above executes the tests. We run a reconciler `r` and initialize our tests using the local initMockeryTests() function. We then call the upsertRepo function to test it and check the result. ``` func initMockeryMocks(tt *repoTest) { @@ -225,4 +225,4 @@ func initMockeryMocks(tt *repoTest) { } ``` -The `initMockeryMocks` local function calls the `mockeryutils.InitMocks` to initialize the mocks for the tests. +The initMockeryMocks local function calls the mockeryutils.InitMocks to initialize the mocks for the tests. diff --git a/content/en/docs/guides/install-guides/_index.md b/content/en/docs/guides/install-guides/_index.md index 872bf863..ee8cc939 100644 --- a/content/en/docs/guides/install-guides/_index.md +++ b/content/en/docs/guides/install-guides/_index.md @@ -18,7 +18,7 @@ will be used in the exercises to simulate a topology with a Nephio management cl ### GCE Prerequisites -You will need a account in GCP and `gcloud` installed on your local environment. +You will need a account in GCP and *gcloud* installed on your local environment. ### Create a Virtual Machine on GCE @@ -65,7 +65,7 @@ Order or create a VM with the following specification: In some installations, the IP range used by Kubernetes in the sandbox can clash with the IP address used by your VPN. In such cases, the VM will become unreachable during the sandbox installation. If you have this situation, add the route below on your VM. -Log onto your VM and run the following commands, replacing **\** and **\** with your VMs values: +Log onto your VM and run the following commands, replacing *\* and *\* with your VMs values: ```bash sudo bash -c 'cat << EOF > /etc/netplan/99-cloud-init-network.yaml @@ -100,8 +100,8 @@ sudo NEPHIO_DEBUG=false \ **Pre-installed K8s Cluster** Log onto your VM/System and run the following command: -(NOTE: The VM or System should be able to access the K8S API server via the kubeconfig file and have docker installed. -Docker is needed to run the KRM container functions specified in rootsync and repository packages.) +(Note that the VM or System should be able to access the K8S API server via the *kubeconfig* file and have docker installed. +Docker is needed to run the KRM container functions specified in *rootsync* and *repository* packages.) ```bash wget -O - https://raw.githubusercontent.com/nephio-project/test-infra/v3.0.0/e2e/provision/init.sh | \ @@ -127,7 +127,7 @@ The following environment variables can be used to configure the installation: | NEPHIO_REPO | URL | https://github.com/nephio-project/test-infra.git | URL of the repository to be used for installation | | NEPHIO_BRANCH | branch | main/v3.0.0 | Tag or branch name to use in NEPHIO_REPO | | DOCKER_REGISTRY_MIRRORS | list of URLs in JSON format | | List of docker registry mirrors in JSON format, or empty for no mirrors to be set. 
Example value: ``["https://docker-registry-remote.mycompany.com", "https://docker-registry-remote2.mycompany.com"]`` | -| K8S_CONTEXT | K8s context | kind-kind | Kubernetes context for existing non-kind cluster (gathered from `kubectl config get-contexts`, for example "kubernetes-admin@kubernetes") | +| K8S_CONTEXT | K8s context | kind-kind | Kubernetes context for existing non-kind cluster (gathered from `kubectl config get-contexts`, for example *kubernetes-admin@kubernetes*) | ### Follow the Installation on VM diff --git a/content/en/docs/guides/install-guides/common-components.md b/content/en/docs/guides/install-guides/common-components.md index 0f739640..33ebb510 100644 --- a/content/en/docs/guides/install-guides/common-components.md +++ b/content/en/docs/guides/install-guides/common-components.md @@ -14,7 +14,7 @@ This page is draft and the separation of the content to different categories is {{% alert title="Note" color="primary" %}} -If you want to use a version other than that of `v3.0.0` of Nephio `catalog` repo, then replace the `@origin/v3.0.0` suffix on the package URLs on the `kpt pkg get` commands below with the tag/branch of the version you wish to use. +If you want to use a version other than that of v3.0.0 of Nephio *catalog* repo, then replace the *@origin/v3.0.0* suffix on the package URLs on the `kpt pkg get` commands below with the tag/branch of the version you wish to use. While using KPT you can [either pull a branch or a tag](https://kpt.dev/book/03-packages/01-getting-a-package) from a git repository. By default it pulls the tag. In case, you have branch with the same name as a tag then to: @@ -60,10 +60,10 @@ To install the Nephio Operators, repeat the `kpt` steps, but for that package: kpt pkg get --for-deployment https://github.com/nephio-project/catalog.git/nephio/core/nephio-operator@origin/v3.0.0 ``` -The Nephio Operator package by default uses the Gitea instance at `172.18.0.200:3000` as +The Nephio Operator package by default uses the Gitea instance at *172.18.0.200:3000* as the git repository. Change it to point to your git instance in -`nephio-operator/app/controller/deployment-token-controller.yaml` and -`nephio-operator/app/controller/deployment-controller.yaml` +*nephio-operator/app/controller/deployment-token-controller.yaml* and +*nephio-operator/app/controller/deployment-controller.yaml*. You also need to create a secret with your Git instance credentials: @@ -98,7 +98,7 @@ is used extensively in the cluster provisioning workflows. Different GitOps tools may be used, but these instructions only cover ConfigSync. To install it on the management cluster, we again follow the same process. -Later, we will configure it to point to the `mgmt` repository: +Later, we will configure it to point to the *mgmt* repository: ```bash kpt pkg get --for-deployment https://github.com/nephio-project/catalog.git/nephio/core/configsync@origin/v3.0.0 diff --git a/content/en/docs/guides/install-guides/common-dependencies.md b/content/en/docs/guides/install-guides/common-dependencies.md index 851a9823..316e8051 100644 --- a/content/en/docs/guides/install-guides/common-dependencies.md +++ b/content/en/docs/guides/install-guides/common-dependencies.md @@ -12,7 +12,7 @@ installation, the CRDs that come along with them are necessary. 
{{% alert title="Note" color="primary" %}} -If you want to use a version other than that of `v3.0.0` of Nephio `catalog` repo, then replace the `@origin/v3.0.0` suffix on the package URLs on the `kpt pkg get` commands below with the tag/branch of the version you wish to use. +If you want to use a version other than that of v3.0.0 of Nephio *catalog* repo, then replace the *@origin/v3.0.0* suffix on the package URLs on the `kpt pkg get` commands below with the tag/branch of the version you wish to use. While using KPT you can [either pull a branch or a tag](https://kpt.dev/book/03-packages/01-getting-a-package) from a git repository. By default it pulls the tag. In case, you have branch with the same name as a tag then to: @@ -62,4 +62,4 @@ kpt live apply gitea --reconcile-timeout 15m --output=table ``` You can find the Gitea ip-address via `kubectl get svc -n gitea` -and use port `3000` to access it with login `nephio` and password `secret`. +and use port 3000 to access it with login *nephio* and password *secret*. diff --git a/content/en/docs/guides/install-guides/demo-vagrant-windows.md b/content/en/docs/guides/install-guides/demo-vagrant-windows.md index 87cc3e6b..f4e5d183 100644 --- a/content/en/docs/guides/install-guides/demo-vagrant-windows.md +++ b/content/en/docs/guides/install-guides/demo-vagrant-windows.md @@ -51,10 +51,10 @@ the Vagrant file. This is not recommended! {{% /alert %}} -- In the Vagrant file "./Vagrantfile", there are *CPUS & RAM* parameters in - `config.vm.provider`, it's possible to override them at runtime: +- In the Vagrant file *./Vagrantfile*, there are *CPUS & RAM* parameters in + the *config.vm.provider*, it's possible to override them at runtime: - On Linux, or the Git Bash on Windows we can use a one-liner command `CPUS=16 MEMORY=32768 vagrant up` -- In the Ansible "./playbooks/roles/bootstrap/tasks/prechecks.yml" file, there +- In the Ansible *./playbooks/roles/bootstrap/tasks/prechecks.yml* file, there are the checks for *CPUS & RAM* diff --git a/content/en/docs/guides/install-guides/explore-nephio-free5gc.md b/content/en/docs/guides/install-guides/explore-nephio-free5gc.md index 3ee54086..99c80352 100644 --- a/content/en/docs/guides/install-guides/explore-nephio-free5gc.md +++ b/content/en/docs/guides/install-guides/explore-nephio-free5gc.md @@ -39,46 +39,46 @@ tasks such as 2. [Controllers](https://github.com/nephio-project/free5gc/tree/main/controllers) * **Reconciler**: The XXFDeploymentReconciler struct is responsible for reconciling the state of the XXFDeployment - resource in the Kubernetes cluster. It implements the *Reconcile* function, which is called by the Controller Runtime - framework when changes occur to the XXFDeployment resource. The *Reconcile* function performs various operations such - as creating or updating the **ConfigMap** and **Service** resources associated with the XXFDeployment. + resource in the Kubernetes cluster. It implements the Reconcile function, which is called by the Controller Runtime + framework when changes occur to the XXFDeployment resource. The Reconcile function performs various operations such + as creating or updating the ConfigMap and Service resources associated with the XXFDeployment. Overall, the XXFDeploymentReconciler struct acts as the controller for the XXFDeployment resource, ensuring that the cluster state aligns with the desired state specified by the user. 
* **Resources**: functions that provide the necessary logic to create the required Kubernetes resources for an XXF deployment, including the deployment, service, and configuration map: - * *createDeployment*: This function creates a Deployment resource for the AMF deployment. It defines the desired + * createDeployment: This function creates a Deployment resource for the AMF deployment. It defines the desired state of the deployment, including the number of replicas, container image, ports, command, arguments, volume mounts, resource requirements, and security context. - * *createService*: This function creates a Service resource for the AMF deployment. It defines the desired state of + * createService: This function creates a Service resource for the AMF deployment. It defines the desired state of the service, including the selector for the associated deployment and the ports it exposes. - * *createConfigMap*: This function creates a ConfigMap resource for the AMF deployment. It generates the + * createConfigMap: This function creates a ConfigMap resource for the AMF deployment. It generates the configuration data for the AMF based on the provided template values and renders it into the amfcfg.yaml file. - * *createResourceRequirements*: This function calculates the resource requirements (CPU and memory limits and + * createResourceRequirements: This function calculates the resource requirements (CPU and memory limits and requests) for the AMF deployment based on the specified capacity and sets them in a ResourceRequirements object. - * *createNetworkAttachmentDefinitionNetworks*: This function creates the network attachment definition networks for + * createNetworkAttachmentDefinitionNetworks: This function creates the network attachment definition networks for the AMF deployment. It uses the CreateNetworkAttachmentDefinitionNetworks function from the controllers package to generate the network attachment definition YAML based on the provided template name and interface configurations. * **Templates**: The configuration template includes various parameters. Example for AMF: version, description, ngapIpList, sbi, nrfUri, amfName, serviceNameList, servedGuamiList, supportTaiList, plmnSupportList, supportDnnList, security settings, networkName, locality, networkFeatureSupport5GS, timers, and logger configurations. - The *renderConfigurationTemplate* function takes a struct (configurationTemplateValues) containing the values for + The renderConfigurationTemplate function takes a struct (configurationTemplateValues) containing the values for placeholders in the template and renders the final configuration as a string. The rendered configuration can then be used by the AMF application. * **Status**: It holds the logic to get the status of the deployment and displaying it as "Available," "Progressing," - and "ReplicaFailure".The function returns the *NFDeploymentStatus* object and a boolean value indicating whether the + and "ReplicaFailure".The function returns the NFDeploymentStatus object and a boolean value indicating whether the status has been updated or not. 3. [Config](https://github.com/nephio-project/free5gc/tree/main/config) - There are [Kustomization](https://github.com/kubernetes-sigs/kustomize) file for a Kubernetes application, specifying various configuration options and resources for the application. 
+ There are [Kustomization](https://github.com/kubernetes-sigs/kustomize) files for a Kubernetes application, specifying various configuration options and resources for the application. In the */default* folder there are: * *Namespace*: Defines the namespace (free5gc) where all resources will be deployed. * *Name Prefix*: Specifies a prefix (free5gc-operator-) that will be prepended to the names of all resources. * *Common Labels*: Allows adding labels to all resources and selectors. Currently commented out. -* *Bases*: Specifies the directories (../crd, ../rbac, ../operator) containing the base resources for the application. +* *Bases*: Specifies the directories (*../crd*, *../rbac*, *../operator*) containing the base resources for the application. In the *crd/base* folder there are CRDs for the workload network functions. They define the schema for the "XXFDeployment" resource under the "workload.nephio.org" group. Also, there are YAML config files for teaching kustomize how to substitute *name* and *namespace* reference in CRD. diff --git a/content/en/docs/guides/install-guides/explore-sandbox.md b/content/en/docs/guides/install-guides/explore-sandbox.md index 2bd89bdf..742260ef 100644 --- a/content/en/docs/guides/install-guides/explore-sandbox.md +++ b/content/en/docs/guides/install-guides/explore-sandbox.md @@ -18,28 +18,28 @@ Ansible install scripts. | Component | Purpose | | --------- | ---------------------------------------------------------------------------------------- | -| docker | Used to host Kubernetes clusters created by KinD | -| kind | Used to create clusters in docker | -| kubectl | Used to control clusters created by KinD | -| kpt | Used to install packages (software and metadata) on k8s clusters | -| cni | Used to implement the k8s network model for the KinD clusters | -| gtp5g | A Linux module that supports the 3GPP GPRS tunneling protocol (required by free5gc NFs) | +| *docker* | Used to host Kubernetes clusters created by KinD | +| *kind* | Used to create clusters in docker | +| *kubectl* | Used to control clusters created by KinD | +| *kpt* | Used to install packages (software and metadata) on k8s clusters | +| *cni* | Used to implement the k8s network model for the KinD clusters | +| *gtp5g* | A Linux module that supports the 3GPP GPRS tunneling protocol (required by free5gc NFs) | The Ansible install scripts use kind to create the Management cluster. Once the Management KinD cluster is created, the -install uses kpt packages to install the remainder of the software. +install uses *kpt* packages to install the remainder of the software. ## Components Installed on the Management KinD cluster -Everything is installed on the Management KinD cluster by Ansible scripts using kpt packages. +Everything is installed on the Management KinD cluster by Ansible scripts using *kpt* packages. -The install unpacks each kpt package in the */tmp* directory. It then applies the kpt functions to the packages and -applies the packages to the Management KinD cluster. This allows the user to check the status of the kpt packages in +The install unpacks each *kpt* package in the */tmp* directory. It then applies the *kpt* functions to the packages and +applies the packages to the Management KinD cluster. This allows the user to check the status of the *kpt* packages in the cluster using the `kpt live status` command on the unpacked packages in the */tmp* directory. -The rendered kpt packages containing components are unpacked in the */tmp/kpt-pkg* directory. 
The rendered kpt packages -that create the *mgmt* and *mgmt-staging* repositories are unpacked in the */tmp/repository* directory. The rendered kpt -package containing the rootsync configuration for the *mgmt* repository is unpacked in the */tmp/rootsync* directory. -You can examine the contents of any rendered kpt packager by examining the contents of these directories. +The rendered *kpt* packages containing components are unpacked in the */tmp/kpt-pkg* directory. The rendered *kpt* packages +that create the *mgmt* and *mgmt-staging* repositories are unpacked in the */tmp/repository* directory. The rendered *kpt* +package containing the *rootsync* configuration for the *mgmt* repository is unpacked in the */tmp/rootsync* directory. +You can examine the contents of any rendered *kpt* packager by examining the contents of these directories. ```bash /tmp/kpt-pkg/ /tmp/repository /tmp/rootsync/ @@ -58,7 +58,7 @@ You can examine the contents of any rendered kpt packager by examining the conte └── resource-backend ``` -You can check the status of an applied kpt package using a `kpt live status package_dir` command. +You can check the status of an applied *kpt* package using a `kpt live status package_dir` command. ```bash kpt live status /tmp/kpt-pkg/nephio-controllers/ @@ -129,13 +129,13 @@ interacts closely with. ## Some Useful Commands -Easily get the kubeconfig for a CAPI KinD cluster: +Easily get the *kubeconfig* for a CAPI KinD cluster: ```bash get_capi_kubeconfig regional ``` -will create a file `regional-kubeconfig` used to connect to that +will create a file *regional-kubeconfig* used to connect to that cluster. You can query docker to see the docker images running KinD diff --git a/content/en/docs/guides/install-guides/install-on-byoc.md b/content/en/docs/guides/install-guides/install-on-byoc.md index 3df9a50b..b9bc0dcd 100644 --- a/content/en/docs/guides/install-guides/install-on-byoc.md +++ b/content/en/docs/guides/install-guides/install-on-byoc.md @@ -19,11 +19,11 @@ Regardless of the specific choices you make, you will need the following prerequisites. This is in addition to any prerequisites that are specific to your environment and choices. - a Linux workstation with internet access - - `kubectl` [installed ](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/)on your workstation - - `kpt` [installed](https://kpt.dev/installation/kpt-cli) on your workstation + - *kubectl* [installed ](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/)on your workstation + - *kpt* [installed](https://kpt.dev/installation/kpt-cli) on your workstation (version v1.0.0-beta.43 or later) - - `porchctl` [installed](/content/en/docs/porch/using-porch/porchctl-cli-guide.md) on your workstation - - Sudo-less `docker`, `podman`, or `nerdctl`. If using `podman` or `nerdctl`, + - *porchctl* [installed](/content/en/docs/porch/using-porch/porchctl-cli-guide.md) on your workstation + - Sudo-less *docker*, *podman*, or *nerdctl*. If using *podman* or *nerdctl*, you must set the [`KPT_FN_RUNTIME`](https://kpt.dev/reference/cli/fn/render/?id=environment-variables) environment variable. @@ -31,12 +31,12 @@ environment variable. As part of all installations, you will create or utilize an existing Kubernetes management cluster. The management cluster must have internet access, and must be a non-EOL Kubernetes version. 
Additionally: - - Your default `kubectl` context should point to the cluster + - Your default *kubectl* context should point to the cluster - You will need cluster administrator privileges (in particular you will need to be able to create namespaces and other cluster-scoped resources). -You will use `kpt` for most of the installation packages in these instructions, -though you could also use `kubectl` directly to apply the resources, once they +You will use *kpt* for most of the installation packages in these instructions, +though you could also use *kubectl* directly to apply the resources, once they are configured. After installing the prerequisites, create a local directory on your @@ -49,7 +49,7 @@ cd nephio-install ``` The instructions for setting up the opinionated installations will assume you -have installed the prerequisites and created the `nephio-install` directory. +have installed the prerequisites and created the *nephio-install* directory. ## Opinionated Installations @@ -118,12 +118,12 @@ Ingress or Gateway is recommended. ### Nephio WebUI Authentication and Authorization -In the default configuration, the Nephio WebUI *is wide open with no -authentication*. The webui itself authenticates to the cluster using a static +In the default configuration, the Nephio WebUI **is wide open with no +authentication**. The webui itself authenticates to the cluster using a static service account, which is bound to the cluster admin role. Any user accessing -the webui is *acting as a cluster admin*. +the webui is **acting as a cluster admin**. -This configuration is designed for *testing and development only*. You must not +This configuration is designed for **testing and development only**. You must not use this configuration in any other situation, and even for testing and development it must not be exposed on the internet (for example, via a LoadBalancer service, Ingress, or Route). diff --git a/content/en/docs/guides/install-guides/install-on-gce.md b/content/en/docs/guides/install-guides/install-on-gce.md index 2bcc5066..e3ed8a85 100644 --- a/content/en/docs/guides/install-guides/install-on-gce.md +++ b/content/en/docs/guides/install-guides/install-on-gce.md @@ -15,7 +15,7 @@ to simulate a topology with a Nephio management cluster and three workload clust ### GCE Prerequisites -You will need an account in GCP and `gcloud` installed on your local environment. +You will need an account in GCP and *gcloud* installed on your local environment. ### Create a Virtual Machine on GCE @@ -64,7 +64,7 @@ browse the Nephio Web UI ## Open Terminal -You will probably want a second ssh window open to run `kubectl` commands, etc., +You will probably want a second ssh window open to run *kubectl* commands, etc., without the port forwarding (which would fail if you try to open a second ssh connection with that setting). diff --git a/content/en/docs/guides/install-guides/install-on-gcp.md b/content/en/docs/guides/install-guides/install-on-gcp.md index 49790f85..8ffe41e8 100644 --- a/content/en/docs/guides/install-guides/install-on-gcp.md +++ b/content/en/docs/guides/install-guides/install-on-gcp.md @@ -31,27 +31,27 @@ In addition to the general prerequisites, you will need: - A GCP account. This account should have enough privileges to create projects, enable APIs in those projects, and create the necessary resources. -- [Google Cloud CLI](https://cloud.google.com/sdk/docs) (`gcloud`) installed and set up on your workstation. 
+- [Google Cloud CLI](https://cloud.google.com/sdk/docs) (*gcloud*) installed and set up on your workstation. - git installed on your workstation. ## Setup Your Environment -To make the instructions (and possibly your life) simpler, you can create a `gcloud` configuration and a project for +To make the instructions (and possibly your life) simpler, you can create a *gcloud* configuration and a project for Nephio. In the commands below, several environment variables are used. You can set them to appropriate values for you. Set -`LOCATION` to a region to create a regional Nephio management cluster, or to a zone to create a zonal cluster. Regional +*LOCATION* to a region to create a regional Nephio management cluster, or to a zone to create a zonal cluster. Regional clusters have increased availability but higher resource demands. -- `PROJECT` is an existing project ID, or the ID to use for a new project. -- `ACCOUNT` should be your Google account mentioned in the prerequisites. -- `REGION` is the region for your Config Controller. See [this link] for the list of supported regions. -- `LOCATION` is the location (region or zone) for your Nephio management cluster as well as any workload clusters you +- *PROJECT* is an existing project ID, or the ID to use for a new project. +- *ACCOUNT* should be your Google account mentioned in the prerequisites. +- *REGION* is the region for your Config Controller. See [this link] for the list of supported regions. +- *LOCATION* is the location (region or zone) for your Nephio management cluster as well as any workload clusters you create. Setting this will not limit you to this location, but it will be what is used in this guide. Note that Config Controller is always regional. -- `WEBUIFQDN` is the fully qualified domain name you would like to use for the web UI. -- `MANAGED_ZONE` is the GCP name for the zone where you will put the DNS entry for `WEBUIFQDN`. Note that it is not the - domain name, but rather the managed zone name used in GCP - for example, `my-zone-name`, not `myzone.example.com`. +- *WEBUIFQDN* is the fully qualified domain name you would like to use for the web UI. +- *MANAGED_ZONE* is the GCP name for the zone where you will put the DNS entry for *WEBUIFQDN*. Note that it is not the + domain name, but rather the managed zone name used in GCP - for example, *my-zone-name*, not *myzone.example.com*. Set the environment variables: @@ -64,7 +64,7 @@ WEBUIFQDN=nephio.example.com MANAGED_ZONE=your-managed-zone-name ``` -First, create the configuration. You can view and switch between `gcloud` configurations with +First, create the configuration. You can view and switch between *gcloud* configurations with `gcloud config configurations list` and `gcloud config configurations activate`. ```bash @@ -116,7 +116,7 @@ method for selecting and assigning billing accounts. See the [project billing account documentation](https://cloud.google.com/billing/docs/how-to/modify-project#how-to-change-ba), or consult with the GCP administrators in your organization. -Next, set the new project as the default in your `gcloud` configuration: +Next, set the new project as the default in your *gcloud* configuration: ```bash gcloud config set project $PROJECT @@ -262,7 +262,7 @@ gcloud projects add-iam-policy-binding ${PROJECT} \ The Porch SA will also be used for synchronizing GKE Fleet information to the Nephio cluster, for use in our deployments. 
For this, it needs the -`roles/gkehub.viewer` role: +*roles/gkehub.viewer* role: ```bash gcloud projects add-iam-policy-binding ${PROJECT} \ @@ -380,7 +380,7 @@ If not, you should retrieve the credentials with: gcloud anthos config controller get-credentials nephio-cc --location $REGION ``` -There is one more step - granting privileges to the CC cluster to manage GCP resources in this project. With `kubectl` +There is one more step - granting privileges to the CC cluster to manage GCP resources in this project. With *kubectl* pointing at the CC cluster, retrieve the service account email address used by CC: ```bash @@ -398,7 +398,7 @@ service-NNNNNNNNNNNN@gcp-sa-yakima.iam.gserviceaccount.com -Grant that service account `roles/editor`, which allows full management access to the project, except for IAM and a few +Grant that service account *roles/editor*, which allows full management access to the project, except for IAM and a few other things: ```bash @@ -453,8 +453,8 @@ version: 1 -The service account also needs to create Cloud Source Repositories which is not par of the `roles/editor`, role. So, add -the `roles/source.admin` role as well: +The service account also needs to create Cloud Source Repositories which is not par of the *roles/editor*, role. So, add +the *roles/source.admin* role as well: ```bash gcloud projects add-iam-policy-binding $PROJECT \ @@ -465,7 +465,7 @@ gcloud projects add-iam-policy-binding $PROJECT \ Granting IAM privileges is not necessary for this setup, but if you did want to use separate service accounts per -workload cluster, you would need to grant those privileges as well (`roles/owner` for example). +workload cluster, you would need to grant those privileges as well (*roles/owner* for example). ## Setting Up GitOps for Config Controller @@ -536,7 +536,7 @@ Customized package for deployment. -You need to add your project ID to your clone of the package. You can manually edit the `gcp-context.yaml` or run the +You need to add your project ID to your clone of the package. You can manually edit the *gcp-context.yaml* or run the following command: ```bash @@ -602,14 +602,14 @@ Config Sync will now synchronize that repository to your Config Controller. ## Provisioning Your Management Cluster -You will use CC to provision the Nephio management cluster and associated resources, by way of the `config-control` +You will use CC to provision the Nephio management cluster and associated resources, by way of the *config-control* repository. The [cc-cluster-gke-std-csr-cs](https://github.com/nephio-project/catalog/tree/main/infra/gcp/cc-cluster-gke-std-csr-cs) package uses CC to create a cluster and a cloud source repository, add the cluster to a fleet, and install and configure -Config Sync on the cluster to point to the new repository. This is similar to what the `nephio-workload-cluster` +Config Sync on the cluster to point to the new repository. This is similar to what the *nephio-workload-cluster* package does in the Sandbox exercises, except that it uses GCP services to create the repository and bootstrap Config Sync, rather than Nephio controllers. -First, pull the cluster package into your clone of the `config-control` +First, pull the cluster package into your clone of the *config-control* repository: ```bash @@ -626,7 +626,7 @@ git commit -m "Initial clone of GKE package" ``` Next, configure the package for your environment. Specifically, you need to add your project ID and location to your -clone of the package. 
You can manually edit the `gcp-context.yaml` or run the following commands: +clone of the package. You can manually edit the *gcp-context.yaml* or run the following commands: ```bash kpt fn eval nephio --image gcr.io/kpt-fn/search-replace:v0.2.0 --match-name gcp-context -- 'by-path=data.project-id' "put-value=${PROJECT}" @@ -674,7 +674,7 @@ To check the status, use the console: ![Console Packages](/static/images/install-guides/gcp-console-packages.png) -Alternatively, you can use `kubectl` to view the status of the `root-sync`: +Alternatively, you can use `kubectl` to view the status of the *root-sync*: ```bash kubectl describe rootsync -n config-management-system root-sync @@ -771,8 +771,8 @@ nephio us-central1 1.27.3-gke.100 34.xxx.xx.xx e2-medium 1 -Once the management cluster is `RUNNING`, retrieve the credentials and -store them as a `kubectl` context: +Once the management cluster is RUNNING, retrieve the credentials and +store them as a *kubectl* context: ```bash gcloud container clusters get-credentials --location $LOCATION nephio @@ -800,7 +800,7 @@ If the context is not current, use this command to make it current: kubectl config use-context "gke_${PROJECT}_${LOCATION}_nephio" ``` -As a final step, return to the `nephio-install` directory as your current +As a final step, return to the *nephio-install* directory as your current working directory: ```bash @@ -827,8 +827,8 @@ nephio your-nephio-project-id https://source.developers.google.com/p/ -Ensure your current working directory is `nephio-install`, and then clone the -`nephio` repository locally: +Ensure your current working directory is *nephio-install*, and then clone the +*nephio* repository locally: ```bash gcloud source repos clone nephio @@ -845,7 +845,7 @@ Project [your-nephio-project-id] repository [nephio] was cloned to [/home/your-u -Navigate to that directory, and pull out the `nephio-mgmt` package, which +Navigate to that directory, and pull out the *nephio-mgmt* package, which contains all the necessary Nephio components as subpackages: - Porch - Nephio Controllers @@ -1027,7 +1027,7 @@ set up OAuth. In particular you need to [create the client ID](/content/en/docs/ and the [secret](/content/en/docs/guides/install-guides/webui-auth-gcp.md#create-the-secret-in-the-cluster) manually. -The `nephio-webui` subpackage in `nephio-mgmt` is already set up for +The *nephio-webui* subpackage in *nephio-mgmt* is already set up for Google OAuth 2.0; you can follow the instructions in the linked document if you prefer OIDC. @@ -1146,12 +1146,12 @@ To https://source.developers.google.com/p/your-nephio-project-id/nephio ## Accessing Nephio -Accessing Nephio with `kubectl` or `kpt` can be done from your workstation, so long as you use the context for the +Accessing Nephio with *kubectl* or *kpt* can be done from your workstation, so long as you use the context for the Nephio management cluster. To access the WebUI, you need to create a DNS entry pointing to the load balancer IP serving the Ingress resources. The Ingress included in the Web UI package will use Cert Manager to automatically generate a self-signed certificate for the -`WEBUIFQDN` value. +*WEBUIFQDN* value. Find the IP address using this command: @@ -1169,7 +1169,7 @@ The output is similar to: -You will need to add this as an `A` record for the name you used in `WEBUIFQDN`. If you are using Google Cloud DNS for +You will need to add this as an **A** record for the name you used in *WEBUIFQDN*. 
If you are using Google Cloud DNS for that zone, first find the managed zone name: ```bash @@ -1188,8 +1188,8 @@ your-managed-zone-name example.com. -In this case, you would use `your-managed-zone-name`, which is the name for the -`example.com.` zone. +In this case, you would use *your-managed-zone-name*, which is the name for the +*example.com.* zone. Start a transaction to add a record set: @@ -1206,7 +1206,7 @@ Transaction started [transaction.yaml]. -Add the specific IP address as an A record, with the fully-qualified domain name +Add the specific IP address as an **A** record, with the fully-qualified domain name of the site: ```bash diff --git a/content/en/docs/guides/install-guides/install-on-multiple-vm.md b/content/en/docs/guides/install-guides/install-on-multiple-vm.md index b43f7b6d..99c654fc 100644 --- a/content/en/docs/guides/install-guides/install-on-multiple-vm.md +++ b/content/en/docs/guides/install-guides/install-on-multiple-vm.md @@ -10,15 +10,15 @@ weight: 7 * 4 vCPU * 8 GB RAM * Kubernetes version 1.26+ - * `kubectl` [installed ](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/) + * *kubectl* [installed ](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/) * **Ingress/Load Balancer**: [MetalLB](https://metallb.universe.tf/), but only internally to the VM * Cluster Edge * 2 vCPU 1 NODE * 4 GB RAM * Kubernetes version 1.26+ - * `kubectl` [installed ](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/) -* `kpt` [installed](https://kpt.dev/installation/kpt-cli) (version v1.0.0-beta.43 or later) -* `porchctl` [installed](/content/en/docs/porch/using-porch/porchctl-cli-guide.md) on your workstation + * *kubectl* [installed ](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/) +* *kpt* [installed](https://kpt.dev/installation/kpt-cli) (version v1.0.0-beta.43 or later) +* *porchctl* [installed](/content/en/docs/porch/using-porch/porchctl-cli-guide.md) on your workstation ## Installation of the management cluster @@ -29,13 +29,13 @@ weight: 7 ## Manual Installation of the Edge cluster using kpt -All the workload clusters need config-sync, root-sync +All the workload clusters need *config-sync*, *root-sync* and a cluster git repository to manage packages. The below steps have to be repeated for each workload cluster: ### Install Config-sync -Install config-sync using: +Install *config-sync* using: ```bash kpt pkg get --for-deployment https://github.com/nephio-project/catalog.git/nephio/core/configsync@@origin/v3.0.0 @@ -53,7 +53,7 @@ If you want to use GitHub or GitLab then follow below steps Get a [GitHub token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#fine-grained-personal-access-tokens) if your repository is private, to allow Porch to make modifications. -Register the edge repository using kpt cli or nephio web-ui. +Register the edge repository using *kpt* cli or Nephio WebUI. ```bash GITHUB_USERNAME= @@ -80,22 +80,22 @@ kpt live apply --reconcile-timeout=15m --output=table {{% alert title="Note" color="primary" %}} -* For management cluster you have to name the repository as `mgmt`. -* In the `repository` package by default gitea address is `172.18.0.200:3000` in `repository/set-values.yaml` +* For management cluster you have to name the repository as *mgmt*. +* In the *repository* package by default gitea address is *172.18.0.200:3000* in *repository/set-values.yaml* change this to your git address. 
-* `repository/token-configsync.yaml` and `repository/token-porch.yaml` are responsible for creating secrets with the help of Nephio token controller for accessing git instance for root-sync. You would need the name of config-sync token to provide it to root-sync. +* *repository/token-configsync.yaml* and *repository/token-porch.yaml* are responsible for creating secrets with the help of Nephio token controller for accessing git instance for root-sync. You would need the name of config-sync token to provide it to root-sync. {{% /alert %}} ### Install Root-sync -Get the Root-sync kpt package and edit it: +Get the *root-sync* *kpt* package and edit it: ```bash kpt pkg get https://github.com/nephio-project/catalog.git/nephio/optional/rootsync@@origin/v3.0.0 ``` -Change `./rootsync/rootsync.yaml` and point `spec.git.repo` to the edge git repository and the +Change *./rootsync/rootsync.yaml* and point *spec.git.repo* to the edge git repository and the ```yaml spec: @@ -108,7 +108,7 @@ Change `./rootsync/rootsync.yaml` and point `spec.git.repo` to the edge git repo If need credentials to access repository your repository then copy the token name from previous section and provide it in -`./rootsync/rootsync.yaml` +*./rootsync/rootsync.yaml*. ```yaml spec: @@ -130,7 +130,7 @@ kpt live apply rootsync --reconcile-timeout=15m --output=table If the output of `kubectl get rootsyncs.configsync.gke.io -A` -is similar as below then root-sync is properly configured. +is similar as below then *root-sync* is properly configured. ```console kubectl get rootsyncs.configsync.gke.io -A @@ -150,4 +150,4 @@ kpt live apply workload-crds --reconcile-timeout=15m --output=table ## Deploy packages to the edge clusters -Using web-ui or command line add a new deployment to the edge workload cluster. +Using WebUI or command line add a new deployment to the edge workload cluster. diff --git a/content/en/docs/guides/install-guides/install-on-openshift.md b/content/en/docs/guides/install-guides/install-on-openshift.md index a1015e2b..3d626813 100644 --- a/content/en/docs/guides/install-guides/install-on-openshift.md +++ b/content/en/docs/guides/install-guides/install-on-openshift.md @@ -25,7 +25,7 @@ In this guide, you will set up Nephio with: ## Prerequisites - A Red Hat Account and access to https://console.redhat.com/openshift/ -- OpenShift cli client `oc`. [Download here](https://console.redhat.com/openshift/downloads) +- OpenShift cli client *oc*. [Download here](https://console.redhat.com/openshift/downloads) ## Setup the Management Cluster @@ -90,8 +90,8 @@ and the [common components](/content/en/docs/guides/install-guides/common-compon ``` - Login - - user: gitea - - password: password + - user: *gitea* + - password: *password* ## Install edge clusters diff --git a/content/en/docs/guides/install-guides/install-on-single-vm.md b/content/en/docs/guides/install-guides/install-on-single-vm.md index 6d896153..6c907104 100644 --- a/content/en/docs/guides/install-guides/install-on-single-vm.md +++ b/content/en/docs/guides/install-guides/install-on-single-vm.md @@ -28,12 +28,12 @@ In addition to the general prerequisites, you will need: * Access to a Virtual Machine provided by an hypervisor ([VirtualBox](https://www.virtualbox.org/), [Libvirt](https://libvirt.org/)) and running an OS supported by Nephio (Ubuntu 20.04/22.04, Fedora 34) with a minimum of 16 vCPUs and 32 GB in RAM. -* [Kubernetes IN Docker](https://kind.sigs.k8s.io/) (`kind`) installed and set up your workstation. 
+* [Kubernetes IN Docker](https://kind.sigs.k8s.io/) (*kind*) installed and set up your workstation. ## Provisioning Your Management Cluster The Cluster API services require communication with the Docker socket for creation of workload clusters. The command -below creates an All-in-One Nephio management cluster through the KinD tool, mapping the `/var/run/docker.sock` socket +below creates an All-in-One Nephio management cluster through the KinD tool, mapping the */var/run/docker.sock* socket file for Cluster API communication. ```bash @@ -52,7 +52,7 @@ EOF ## Gitea Installation While you may use other Git providers as well, Gitea is required in the R2 setup. To install Gitea, use `kpt`. From your -`nephio-install` directory, run: +*nephio-install* directory, run: ```bash kpt pkg get --for-deployment https://github.com/nephio-project/catalog.git/distros/sandbox/gitea@@origin/v3.0.0 @@ -85,7 +85,7 @@ kpt live init cert-manager kpt live apply cert-manager --reconcile-timeout 15m --output=table ``` -Once `cert-manager` is installed, you can proceed with the installation of Cluster API components +Once *cert-manager* is installed, you can proceed with the installation of Cluster API components ```bash kpt pkg get --for-deployment https://github.com/nephio-project/catalog.git/infra/capi/cluster-capi@@origin/v3.0.0 @@ -116,11 +116,11 @@ kpt live apply cluster-capi-kind-docker-templates --reconcile-timeout 15m --outp ## Installing Packages -The management or workload cluster both need config-sync, root-sync and a cluster git repository to manage packages. +The management or workload cluster both need *config-sync*, *root-sync* and a cluster git repository to manage packages. ### Install Config-sync -Install config-sync using: +Install *config-sync* using: ```bash kpt pkg get --for-deployment https://github.com/nephio-project/catalog.git/nephio/core/configsync@@origin/v3.0.0 @@ -142,10 +142,10 @@ kpt live apply --reconcile-timeout=15m --output=table {{% alert title="Note" color="primary" %}} -* For management cluster you have to name the repository as `mgmt`. -* In the `repository` package the default Gitea address is `172.18.0.200:3000`. -In `repository/set-values.yaml` change this to your Gitea address. -* `repository/token-configsync.yaml` and `repository/token-porch.yaml` are +* For management cluster you have to name the repository as *mgmt*. +* In the *repository* package the default Gitea address is *172.18.0.200:3000*. +In *repository/set-values.yaml* change this to your Gitea address. +* *repository/token-configsync.yaml* and *repository/token-porch.yaml* are responsible for creating secrets with the help of the Nephio token controller for accessing the git instance for root-sync. You would need the name of the config-sync token to provide it to root-sync. @@ -154,13 +154,13 @@ You would need the name of the config-sync token to provide it to root-sync. 
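Before moving on, you can check that the token controller has created the secret whose name you will reference in *root-sync*. This is a sketch only; it assumes the secrets follow the *-access-token-configsync* naming pattern in the *config-management-system* namespace, as produced by the *repository* package:

```bash
# Sketch: list the config-sync access-token secrets created by the Nephio token controller.
kubectl get secrets -n config-management-system | grep access-token-configsync
```

The secret name shown here is the value you provide to *root-sync* in the next step.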
### Install Root-sync -Get the Root-sync kpt package and edit it: +Get the *root-sync* kpt package and edit it: ```bash kpt pkg get https://github.com/nephio-project/catalog.git/nephio/optional/rootsync@@origin/v3.0.0 ``` -Change `./rootsync/rootsync.yaml` and point `spec.git.repo` to the edge git repository: +Change *./rootsync/rootsync.yaml* and point *spec.git.repo* to the edge git repository: ```yaml spec: @@ -175,7 +175,7 @@ Change `./rootsync/rootsync.yaml` and point `spec.git.repo` to the edge git repo If you need credentials to access your repository then copy the token name from the previous section and provide it in -`./rootsync/rootsync.yaml`: +*./rootsync/rootsync.yaml*: ```yaml spec: diff --git a/content/en/docs/guides/install-guides/package-transformations.md b/content/en/docs/guides/install-guides/package-transformations.md index 8819cc45..61b6b19f 100644 --- a/content/en/docs/guides/install-guides/package-transformations.md +++ b/content/en/docs/guides/install-guides/package-transformations.md @@ -13,24 +13,24 @@ Before reading this, please read [the kpt book](https://kpt.dev/book/). The `kpt pkg get --for-deployment https:///@/ -` command downloads a kpt package from a repository. +` command downloads a *kpt* package from a repository. The fields in the command above are as follows: | Field | Description | | ---------------- | -------------------------------------------------------------------------------- | -| `repo-path` | The path in the repository to the kpt package | -| `repo-pkg-name` | The name of the kpt package in the repository | -| `pkg-version` | The version of the kpt package | -| `local-pkg-name` | The local name of the kpt package in the repository, defaults to `repo-pkg-name` | +| repo-path | The path in the repository to the *kpt* package | +| repo-pkg-name | The name of the *kpt* package in the repository | +| pkg-version | The version of the *kpt* package | +| local-pkg-name | The local name of the *kpt* package in the repository, defaults to `repo-pkg-name` | `kpt pkg get` make the following transformations: -1. The `metadata.name` field in the root `Kptfile` in the package is changed - from whatever value it has to `local-pkg-name` -2. The `metadata.namespace` field in the root `Kptfile` in the package is +1. The metadata.name field in the root *Kptfile* in the package is changed + from whatever value it has to local-pkg-name +2. The metadata.namespace field in the root *Kptfile* in the package is removed -3. `upstream` and `upstreamlock` root fields are added to the root `Kptfile` as +3. upstream and upstreamlock root fields are added to the root *Kptfile* as follows: ```yaml @@ -49,9 +49,9 @@ upstreamLock: ref: commit: 0123456789abcdef0123456789abcdef01234567 ``` -4. The `data.name` field in the root `package-context.yaml` files is changed to - be `local-pkg-name` -5. The `package-context.yaml` file is added if it does not exist with the +4. The data.name field in the root *package-context.yaml* files is changed to + be local-pkg-name +5. The *package-context.yaml* file is added if it does not exist with the following content: ```yaml @@ -65,15 +65,15 @@ data: name: ``` -6. The `data.name` field in `package-context.yaml` files in the sub kpt packages is +6. The data.name field in *package-context.yaml* files in the sub *kpt* packages is changed to be the name of the sub package -7. All other sub-fields under the `data:` field are deleted -8. 
The comment `metadata: # kpt-merge: /` is added to root - `metadata` fields on all YAML documents in the kpt package and enclosed - sub-packages that have a root `apiVersion` and `kind` field if such a comment - does not already exist. The `namespace` and `name` values used are the values - of those fields in the `metadata` field. Note that a YAML file can contain - multiple YAML documents and each root `metadata` field is commented. For +7. All other sub-fields under the data: field are deleted +8. The comment metadata: *# kpt-merge: \/\* is added to root + metadata fields on all YAML documents in the *kpt* package and enclosed + sub-packages that have a root apiVersion and kind field if such a comment + does not already exist. The namespace and name values used are the values + of those fields in the metadata field. Note that a YAML file can contain + multiple YAML documents and each root metadata field is commented. For example: ```yaml @@ -82,13 +82,13 @@ metadata: # kpt-merge: cert-manager/cert-manager-cainjector namespace: cert-manager ``` -9. The annotation `internal.kpt.dev/upstream-identifier: - '|||'` is added to root - `metadata.annotations:` fields on all YAML documents in the kpt package and - enclosed sub-packages that have a root `apiVersion:` and `kind:` field if - such an annotation does not already exist. The `namespace` and `name` values - used are the values of those fields in the `metadata` field. Note that a YAML - file can contain multiple YAML documents and each root `metadata` field is +9. The annotation internal.kpt.dev/upstream-identifier: + *\|\|\|\* is added to root + metadata.annotations: fields on all YAML documents in the *kpt* package and + enclosed sub-packages that have a root apiVersion: and kind: field if + such an annotation does not already exist. The namespace and name values + used are the values of those fields in the metadata field. Note that a YAML + file can contain multiple YAML documents and each root metadata field is commented. For example: ```yaml @@ -107,15 +107,15 @@ metadata: # kpt-merge: capi-kubeadm/leader-election-rolebinding The `kpt fn render ` runs kpt functions on a local package, thus applying local changes to the package. -In the Nephio sandbox installation, kpt fn render only acts on the `repository` and -`rootsync` kpt packages from +In the Nephio sandbox installation, kpt fn render only acts on the repository and +*rootsync* *kpt* packages from [nephio-example-packages](https://github.com/nephio-project/nephio-example-packages). #### repository package -The `repository` package has a kpt function written in +The repository package has a kpt function written in [starlark](https://github.com/bazelbuild/starlark), which is invoked by a -pipeline specified in the `kptfile`. +pipeline specified in the *kptfile*. ```yaml pipeline: @@ -124,48 +124,48 @@ pipeline: configPath: set-values.yaml ``` -The starlark function is specified in the `set-values.yaml` file. It makes the +The starlark function is specified in the *set-values.yaml* file. It makes the following transformations on the repositories: -1. In the file `repo-gitea.yaml` - - the `metadata.name` field gets the value of `` - - the `spec.description` field gets the value of ` repository` -2. 
In the file `repo-porch.yaml` - - the `metadata.name` field gets the value of `` - - the `spec.git.repo` field gets the value of - `"http://172.18.0.200:3000/nephio/.git` - - the `spec.git.secretRef.name` field gets the value of - `-access-token-porch` - - if the `` is called `mgmt-staging`, then the following extra +1. In the file *repo-gitea.yaml* + - the metadata.name field gets the value of \ + - the spec.description field gets the value of \ repository +2. In the file *repo-porch.yaml* + - the metadata.name field gets the value of \ + - the spec.git.repo field gets the value of + "http://172.18.0.200:3000/nephio/\.git + - the spec.git.secretRef.name field gets the value of + \-access-token-porch + - if the \ is called mgmt-staging, then the following extra changes are made: - - the `spec.deployment` field is set to `false` (it defaults to `true`) - - the annotation `metadata.annotations.nephio.org/staging: "true"` is added -3. In the file `token-configsync.yaml` - - the `metadata.name` field gets the value of - `-access-token-configsync` - - the `metadata.namespace` field gets the value of `config-management-system` -4. In the file `token-porch.yaml` - - the `metadata.name` field gets the value of - `-access-token-porch` + - the spec.deployment field is set to false (it defaults to true) + - the annotation metadata.annotations.nephio.org/staging: "true" is added +3. In the file *token-configsync.yaml* + - the metadata.name field gets the value of + \-access-token-configsync + - the metadata.namespace field gets the value of config-management-system +4. In the file *token-porch.yaml* + - the metadata.name field gets the value of + \-access-token-porch #### rootsync Package -The `rootsync` package also has a kpt function written in +The *rootsync* package also has a kpt function written in [starlark](https://github.com/bazelbuild/starlark) specified in the -`set-values.yaml` file. It makes the following transformations on repositories: +*set-values.yaml* file. It makes the following transformations on repositories: -1. In the file `rootsync.yaml` - - the `metadata.name` field gets the value of `` - - the `spec.git.repo` field gets the value of - `"http://172.18.0.200:3000/nephio/.git` +1. In the file *rootsync.yaml* + - the metadata.name field gets the value of \ + - the spec.git.repo field gets the value of + "http://172.18.0.200:3000/nephio/\.git - the `spec.git.secretRef.name` field gets the value of - `-access-token-configsync` + \-access-token-configsync ### kpt live init The `kpt live init ` initializes a local package, making it -ready for application to a cluster. This command creates a `resourcegroup.yaml` -in the kpt package with content similar to: +ready for application to a cluster. This command creates a *resourcegroup.yaml* +in the *kpt* package with content similar to: ```yaml apiVersion: kpt.dev/v1alpha1 @@ -178,9 +178,9 @@ metadata: ``` ## porchctl rpkg for Workload clusters -The `porchctl rpkg` suite of commands that act on `Repository` resources on the -kubernetes cluster in scope. The packages in the `Repository` resources are -*remote packages (rpkg)*. +The `porchctl rpkg` suite of commands that act on Repository resources on the +kubernetes cluster in scope. The packages in the Repository resources are +remote packages (*rpkg*). 
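The rest of this section walks a package through the usual draft, proposed, and published stages. As a quick orientation, the overall flow looks roughly like the following sketch (placeholder names, not literal values):

```bash
# Sketch of the remote-package lifecycle used in this section (placeholder names).
porchctl rpkg clone -n default <source-package-revision> --repository mgmt <package-name>
porchctl rpkg pull -n default <draft-package-revision> ./<package-name>    # edit the package locally
porchctl rpkg push -n default <draft-package-revision> ./<package-name>    # upload the edits
porchctl rpkg propose -n default <draft-package-revision>
porchctl rpkg approve -n default <draft-package-revision>
```

Each of these commands is shown with real revision names in the steps below.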
To see which repositories are in scope: @@ -248,10 +248,10 @@ nephio-example-packages-7895e28d847c0296a204007ed577cd2a4222d1ea nephio-worklo ### Create the Workload cluster package -The Workload cluster package contains `PackageVariant` files for configuring the +The Workload cluster package contains PackageVariant files for configuring the new cluster. -Clone the `nephio-workload-cluster` package into the `mgmt` repository. This +Clone the nephio-workload-cluster package into the *mgmt* repository. This creates the blueprint package for the workload cluster in the management repository. @@ -261,23 +261,23 @@ porchctl rpkg clone -n default nephio-example-packages-7895e28d847c0296a204007ed During the clone operation, the command above performs the following operations: -1. It creates a `drafts/regional/v1` branch on the `mgmt` repository -2. It does the equivalent of a [kpt pkg get](#kpt-pkg-get) on the `nephio-workload-cluster` package into a directory - called `regional` on that branch, with the same transformations on package files carried out as the +1. It creates a drafts/regional/v1 branch on the *mgmt* repository +2. It does the equivalent of a [kpt pkg get](#kpt-pkg-get) on the *nephio-workload-cluster* package into a directory + called *regional* on that branch, with the same transformations on package files carried out as the [kpt pkg get](#kpt-pkg-get) command above, this content is checked into the new branch in the initial commit -3. The pipeline specified in the `Kptfile`of the `nephio-workload-cluster` package specifies an `apply-replacements` - specified in the `apply-replacements.yaml` file in the package and uses the value of the - `package-context.yaml:data.name` field set in 2. above (which is the workload cluster name) as follows: +3. The pipeline specified in the *Kptfile* of the *nephio-workload-cluster* package specifies an apply-replacements + specified in the *apply-replacements.yaml* file in the package and uses the value of the + package-context.yaml:data.name field set in 2. above (which is the workload cluster name) as follows: - - In all `PackageVariant` files, the `metadata.name` and `spec.downstream.package` field before the '-' is replaced + - In all *PackageVariant* files, the metadata.name and spec.downstream.package field before the '-' is replaced with that field value. In this way, the downstream package names for all the packages to be pulled from the - `mgmt-staging` repository for the workload cluster are specified. - - In all `PackageVariant` files, the `spec.injectors.WorkloadCluster.name` field is replaced with the workload - cluster name. This gives us the handle for `packageVariant` injection for the workload cluster in question. - - In all `PackageVariant` files, the - `spec.pipeline.mutators.[image=gcr.io/kpt-fn/set-annotations:v0.1.4].configMap.[nephio.org/cluster-name]` + mgmt-staging repository for the workload cluster are specified. + - In all *PackageVariant* files, the spec.injectors.WorkloadCluster.name field is replaced with the workload + cluster name. This gives us the handle for packageVariant injection for the workload cluster in question. + - In all *PackageVariant* files, the + spec.pipeline.mutators.[image=gcr.io/kpt-fn/set-annotations:v0.1.4].configMap.[nephio.org/cluster-name] field is replaced with the workload cluster name. 
- - In all `WorkloadCluster` files, the `metadata.name` and `spec.clusterName` fields are replaced with the workload + - In all *WorkloadCluster* files, the metadata.name and spec.clusterName fields are replaced with the workload cluster name. We now have a draft blueprint package for our workload cluster ready for further configuration. @@ -306,8 +306,8 @@ porchctl rpkg pull -n default mgmt-08c26219f9879acdefed3469f8c3cf89d5db3868 regi kpt fn eval --image gcr.io/kpt-fn/set-labels:v0.2.0 regional -- "nephio.org/site-type=regional" "nephio.org/region=us-west1" ``` -4. Check that the labels have been set. In all `PackageVariant` and `WorkloadCluster` files, the following - `metadata.labels` fields have been added: +4. Check that the labels have been set. In all *PackageVariant* and *WorkloadCluster* files, the following + metadata.labels fields have been added: ```yaml labels: @@ -332,14 +332,14 @@ porchctl rpkg propose -n default mgmt-08c26219f9879acdefed3469f8c3cf89d5db3868 mgmt-08c26219f9879acdefed3469f8c3cf89d5db3868 proposed ``` -Proposing the package changes the name of the `drafts/regional/v1` to -`proposed/regional/v1`. There are no changes to the content of the branch. +Proposing the package changes the name of the drafts/regional/v1 to +proposed/regional/v1. There are no changes to the content of the branch. ### Approve the Package and Trigger Configsync Approving the package triggers `configsync`, which triggers creation of the new -workload cluster using all the `PackageVariant` components specified in the -`nephio-workload-cluster` kpt package. +workload cluster using all the *PackageVariant* components specified in the +nephio-workload-cluster *kpt* package. ```bash porchctl rpkg approve -n default mgmt-08c26219f9879acdefed3469f8c3cf89d5db3868 @@ -350,19 +350,17 @@ The new cluster comes up after a number of minutes. ## Transformations in the Workload cluster creation -Approving the `regional` Workload cluster package in the `mgmt` repository -triggered configsync to apply the `PackageVariant` configurations in the -`mgmt/regional` package. Let's examine those `PackageVariant` configurations one +Approving the regional Workload cluster package in the *mgmt* repository +triggered configsync to apply the *PackageVariant* configurations in the +*mgmt/regional* package. Let's examine those *PackageVariant* configurations one by one. -In the text below, let's assume we are creating a workload cluster called `lambda`. - ### pv-cluster.yaml: creates the Workload cluster -In the text below, let's assume we are creating a workload cluster called `lambda`. +In the text below, let's assume we are creating a workload cluster called lambda. This package variant transformation results in a package variant of the -`cluster-capi-kind` package called `lambda-package`. The `lambda-package` +*cluster-capi-kind* package called *lambda-package*. The *lambda-package* contains the definition of a pair custom resources that are created when the package is applied. The custom resource pair are instances of the CRDs below @@ -371,42 +369,42 @@ Custom Resource Definition | Controller | Functi clusters.cluster.x-k8s.io | capi-system.capi-controller-manager | Trigger creation and start of the kind cluster | workloadclusters.infra.nephio.org | nephio-system.nephio-controller | Trigger addition of nephio-specific configuration to the kind cluster | -The `PackageVariant` specified in `pv-cluster.yaml` is executed and: +The *PackageVariant* specified in *pv-cluster.yaml* is executed and: 1. 
Produces a package variant of the [cluster-capi-kind](https://github.com/nephio-project/nephio-example-packages/tree/main/cluster-capi-kind) - package called `lambda-cluster` in the gitea `mgmt` repository on your management + package called lambda-cluster in the gitea *mgmt* repository on your management cluster. -2. Applies the `lambda-cluster` kpt package to create the kind cluster for the +2. Applies the lambda-cluster *kpt* package to create the kind cluster for the workload cluster. #### Package transformations -During creation of the package variant kpt package, the following transformations occur: +During creation of the package variant *kpt* package, the following transformations occur: -1. It creates a `drafts/lambda-cluster/v1` branch on the `mgmt` repository -2. It does the equivalent of a [`kpt pkg get`](#kpt-pkg-get) on the `cluster-capi-kind` package into a directory called - `lambda-cluster` on that branch, with the same transformations on package files carried out as the +1. It creates a drafts/lambda-cluster/v1 branch on the *mgmt* repository +2. It does the equivalent of a [`kpt pkg get`](#kpt-pkg-get) on the *cluster-capi-kind* package into a directory called + lambda-cluster on that branch, with the same transformations on package files carried out as the [`kpt pkg get`](#kpt-pkg-get) command above, this content is checked into the new branch in the initial commit -3. The pipeline specified in the `Kptfile`of the `cluster-capi-kind` package specifies an `apply-replacements` specified - in the `apply-replacements.yaml` file in the package and uses the value of the - `workload-cluster.yaml:spec.clusterName` field set in 2. above (which is the workload cluster name). This has the - value of `example` in the `workload-cluster.yaml` file. This means that in the `cluster.yaml` file the value of field - `metadata.name` is changed from `workload` to `example`. -4. The package variant `spec.injectors` changes specified in the `pv-cluster.yaml` file are applied.
a. The relevant - `pv-cluster.yaml` fields are: ``` spec: injectors: +3. The pipeline specified in the *Kptfile* of the *cluster-capi-kind* package specifies an apply-replacements specified + in the *apply-replacements.yaml* file in the package and uses the value of the + workload-cluster.yaml:spec.clusterName field set in 2. above (which is the workload cluster name). This has the + value of example in the *workload-cluster.yaml* file. This means that in the *cluster.yaml* file the value of field + metadata.name is changed from workload to example. +4. The package variant spec.injectors changes specified in the *pv-cluster.yaml* file are applied.
a. The relevant + *pv-cluster.yaml* fields are: ``` spec: injectors: - kind: WorkloadCluster name: example pipeline: mutators: - image: gcr.io/kpt-fn/set-annotations:v0.1.4 configMap: nephio.org/cluster-name: example ``` - b. The following `PackageVariant` changes are made to the `lambda-cluster` package: + b. The following *PackageVariant* changes are made to the *lambda-cluster* package: - 1. The field `info.readinessGates.conditionType` is added to the `Kptfile` with the value - `config.injection.WorkloadCluster.workload-cluster`. - 2. An extra `pipeline.mutators` entry is inserted in the `Kptfile`. This mutator is the mutator specified in the - `pv-cluster.yaml` package variant specification, which specifies that the annotation - `nephio.org/cluster-name: lambda` should be set on the resources in the package: + 1. The field info.readinessGates.conditionType is added to the *Kptfile* with the value + config.injection.WorkloadCluster.workload-cluster. + 2. An extra pipeline.mutators entry is inserted in the *Kptfile*. This mutator is the mutator specified in the + *pv-cluster.yaml* package variant specification, which specifies that the annotation + nephio.org/cluster-name: lambda should be set on the resources in the package: ```yaml pipeline: @@ -416,8 +414,8 @@ During creation of the package variant kpt package, the following transformation configMap: nephio.org/cluster-name: lambda ``` - 3. The field `status.conditions` is added to the `Kptfile` with the values below. This condition means that the - kpt package is not considered to be applied until the condition `config.injection.WorkloadCluster.workload-cluster` is `True`: + 3. The field status.conditions is added to the *Kptfile* with the values below. This condition means that the + *kpt* package is not considered to be applied until the condition config.injection.WorkloadCluster.workload-cluster is True: ```yaml status: @@ -427,7 +425,7 @@ During creation of the package variant kpt package, the following transformation message: injected resource "lambda" from cluster reason: ConfigInjected ``` - 4. The `spec` in the WorkloadCluster file `workload-cluster.yaml` is set. This is the specification of the extra + 4. The spec in the WorkloadCluster file *workload-cluster.yaml* is set. This is the specification of the extra configuration that will be carried out on the workload cluster once kind has brought it up: ```yaml @@ -438,16 +436,16 @@ During creation of the package variant kpt package, the following transformation - sriov masterInterface: eth1 ``` -5. The amended pipeline specified in the `Kptfile`of the `lambda-cluster` is now re-executed. It was previously executed +5. The amended pipeline specified in the *Kptfile* of the lambda-cluster is now re-executed. It was previously executed in step 3 above but there is now an extra mutator added by the package variant. The following changes result: - a. The new mutator added to the `Kptfile` by the package variant adds the annotation - `nephio.org/cluster-name: lambda` is added to every resource in the package. - b. The existing annotation in the `Kptfile` (coming from the Kptfile in the parent `cluster-capi-kind` package) sets - the value `lambda` of the `spec.clusterName` field in `workload-cluster.yaml` as the value of the `metadata.name` - field in the `cluster.yaml` file. + a. The new mutator added to the *Kptfile* by the package variant adds the annotation + nephio.org/cluster-name: lambda is added to every resource in the package. + b. 
The existing annotation in the *Kptfile* (coming from the Kptfile in the parent *cluster-capi-kind* package) sets + the value lambda of the spec.clusterName field in *workload-cluster.yaml* as the value of the metadata.name + field in the *cluster.yaml* file. -6. The `lambda-cluster` package is now ready to go. It is proposed and approved and the process of cluster creation +6. The *lambda-cluster* package is now ready to go. It is proposed and approved and the process of cluster creation commences. #### Cluster Creation diff --git a/content/en/docs/guides/install-guides/webui-auth-gcp.md b/content/en/docs/guides/install-guides/webui-auth-gcp.md index 5ef070a5..9321e94f 100644 --- a/content/en/docs/guides/install-guides/webui-auth-gcp.md +++ b/content/en/docs/guides/install-guides/webui-auth-gcp.md @@ -11,7 +11,7 @@ When used with the Web UI running in a GKE cluster, the users authorization role based upon their IAM roles in GCP. If you are not exposing the webui on a load balancer IP address, but are instead using `kubectl port-forward`, you -should use `http`, `localhost` and `7007` for the `SCHEME`, `HOSTNAME` and `PORT`; otherwise, use the scheme, DNS name +should use *http*, *localhost* and *7007* for the SCHEME, HOSTNAME and PORT; otherwise, use the scheme, DNS name and port as it will be seen by your browser. You can leave the port off if it is 443 for HTTPS or 80 for HTTP. ## Creating an OAuth 2.0 Client ID @@ -22,17 +22,17 @@ client ID and secret: 1. Sign in to the [Google Console](https://console.cloud.google.com) 2. Select or create a new project from the dropdown menu on the top bar 3. Navigate to [APIs & Services > Credentials](https://console.cloud.google.com/apis/credentials) -4. Click **Create Credentials** and choose `OAuth client ID` +4. Click **Create Credentials** and choose **OAuth client ID** 5. Configure an OAuth consent screen, if required - - For scopes, select `openid`, `auth/userinfo.email`, `auth/userinfo.profile`, and `auth/cloud-platform`. + - For scopes, select *openid*, *auth/userinfo.email*, *auth/userinfo.profile*, and *auth/cloud-platform*. - Add any users that will want access to the UI if using External user type -6. Set **Application Type** to `Web Application` with these settings: +6. Set **Application Type** to *Web Application* with these settings: - - *Name*: Nephio Web UI (or any other name you wish) - - *Authorized JavaScript origins*: `SCHEME`://`HOSTNAME`:`PORT` - - *Authorized redirect URIs*: `SCHEME`://`HOSTNAME`:`PORT`/api/auth/google/handler/frame + - **Name**: Nephio Web UI (or any other name you wish) + - **Authorized JavaScript origins**: SCHEME://HOSTNAME:PORT + - **Authorized redirect URIs**: SCHEME://HOSTNAME:PORT/api/auth/google/handler/frame 7. Click Create 8. Copy the client ID and client secret displayed @@ -49,16 +49,16 @@ kubectl create secret generic -n nephio-webui nephio-google-oauth-client --from- ## Enable Google OAuth -The webui package has a function that will configure the package for authentication with different services. Edit the -`set-auth.yaml` file to set the `authProvider` field to `google` or run this command: +The *webui* package has a function that will configure the package for authentication with different services. 
Edit the +*set-auth.yaml* file to set the authProvider field to *google* or run the following command: ```bash kpt fn eval nephio-webui --image gcr.io/kpt-fn/search-replace:v0.2.0 --match-name set-auth -- 'by-path=authProvider' 'put-value=google' ``` ## Enable OIDC with Google -The webui package has a function that will configure the package for authentication with different services. Edit the -`set-auth.yaml` file to set the `authProvider` field to `oidc` and the `oidcTokenProvider` to `google`, or run these +The *webui* package has a function that will configure the package for authentication with different services. Edit the +*set-auth.yaml* file to set the authProvider field to *oidc* and the oidcTokenProvider to *google*, or run the following commands: ```bash diff --git a/content/en/docs/guides/install-guides/webui-auth-okta.md b/content/en/docs/guides/install-guides/webui-auth-okta.md index b3d011c5..7d24d74d 100644 --- a/content/en/docs/guides/install-guides/webui-auth-okta.md +++ b/content/en/docs/guides/install-guides/webui-auth-okta.md @@ -6,7 +6,7 @@ weight: 7 --- If you are not exposing the webui on a load balancer IP address, but are instead using `kubectl port-forward`, you -should use `localhost` and `7007` for the `HOSTNAME` and `PORT`; otherwise, use the DNS name and port as it will be seen +should use *localhost* and *7007* for the HOSTNAME and PORT; otherwise, use the DNS name and port as it will be seen by your browser. ## Creating an Okta Application @@ -15,21 +15,21 @@ Adapted from the [Backstage](https://backstage.io/docs/auth/okta/provider#create documentation: 1. Log into Okta (generally company.okta.com) -2. Navigate to Menu >> Applications >> Applications >> Create App Integration +2. Navigate to **Menu** >> **Applications** >> **Applications** >> **Create App Integration** 3. Fill out the Create a new app integration form: - - Sign-in method: OIDC - OpenID Connect - - Application type: Web Application - - Click Next + - **Sign-in method**: OIDC - OpenID Connect + - **Application type**: Web Application + - Click **Next** 4. Fill out the New Web App Integration form: - - App integration name: Nephio Web UI (or any other name you wish) - - Grant type: Authorization Code & Refresh Token - - Sign-in redirect URIs: http://HOSTNAME:PORT/api/auth/okta/handler/frame - - Sign-out redirect URIs: http://HOSTNAME:PORT - - Controlled access: (select as appropriate) - - Click Save + - **App integration name**: Nephio Web UI (or any other name you wish) + - **Grant type**: Authorization Code & Refresh Token + - **Sign-in redirect URIs**: http://HOSTNAME:PORT/api/auth/okta/handler/frame + - **Sign-out redirect URIs**: http://HOSTNAME:PORT + - **Controlled access**: (select as appropriate) + - Click **Save** ## Create the Secret in the Cluster @@ -60,8 +60,8 @@ kubectl create secret generic -n nephio-webui nephio-okta-oauth-client \ ## Enable the WebUI Auth Provider -The webui package has a function that will configure the package for authentication with different services. Edit the -`set-auth.yaml` file to set the `authProvider` field to `oidc` and the `oidcTokenProvider` to `okta`, or run these +The *webui* package has a function that will configure the package for authentication with different services. 
Edit the +*set-auth.yaml* file to set the authProvider field to *oidc* and the oidcTokenProvider to *okta*, or run the following commands: ```bash diff --git a/content/en/docs/guides/install-guides/webui.md b/content/en/docs/guides/install-guides/webui.md index 9c7a8c63..0c079c72 100644 --- a/content/en/docs/guides/install-guides/webui.md +++ b/content/en/docs/guides/install-guides/webui.md @@ -12,7 +12,7 @@ This page is draft and the separation of the content to different categories is ## Nephio WebUI -To install the WebUI, we simply install a different kpt package. First, we pull the package locally: +To install the WebUI, we simply install a different *kpt* package. First, we pull the package locally: ```bash kpt pkg get --for-deployment https://github.com/nephio-project/nephio-packages.git/nephio-webui@origin/v3.0.0 @@ -20,14 +20,14 @@ kpt pkg get --for-deployment https://github.com/nephio-project/nephio-packages.g Before we apply it to the cluster, however, we should configure it. -By default, it expects the webui to be reached via `http://localhost:7007`. If you plan to expose the webui via a load +By default, it expects the webui to be reached via *http://localhost:7007*. If you plan to expose the webui via a load balancer service instead, then you need to configure the scheme, hostname, port, and service. Note that if you wish to -use HTTPS, you should set the `scheme` to `https`, but you will need to terminate the TLS at the load balancer as the +use HTTPS, you should set the *scheme* to *https*, but you will need to terminate the TLS at the load balancer as the container currently only supports HTTP. This information is captured in the application ConfigMap for the webui, which is generated by a KRM function. We can -change the values in `nephio-webui/gen-configmap.yaml` just using a text editor (change the `hostname` and `port` values -under `params:`), and those will take effect later when we run `kpt fn render`. As an alternative to a text editor, you +change the values in the *nephio-webui/gen-configmap.yaml* just using a text editor (change the *hostname* and *port* values +under *params:*), and those will take effect later when we run `kpt fn render`. As an alternative to a text editor, you can run these commands: ```bash @@ -36,18 +36,18 @@ kpt fn eval nephio-webui --image gcr.io/kpt-fn/search-replace:v0.2.0 --match-kin kpt fn eval nephio-webui --image gcr.io/kpt-fn/search-replace:v0.2.0 --match-kind GenConfigMap -- 'by-path=params.port' 'put-value=PORT' ``` -If you want to expose the UI via a load balancer service, you can manually change the Service `type` to `LoadBalancer`, +If you want to expose the UI via a load balancer service, you can manually change the Service *type* to *LoadBalancer*, or run: ```bash kpt fn eval nephio-webui --image gcr.io/kpt-fn/search-replace:v0.2.0 --match-kind Service -- 'by-path=spec.type' 'put-value=LoadBalancer' ``` -In the default configuration, the Nephio WebUI *is wide open with no authentication*. The webui itself authenticates to +In the default configuration, the Nephio WebUI **is wide open with no authentication**. The webui itself authenticates to the cluster using a static service account, which is bound to the cluster admin role. Any user accessing the webui is -*acting as a cluster admin*. +**acting as a cluster admin**. -This configuration is designed for *testing and development only*. You must not use this configuration in any other +This configuration is designed for **testing and development only**. 
You must not use this configuration in any other situation, and even for testing and development it must not be exposed on the internet (for example, via a LoadBalancer service). diff --git a/content/en/docs/guides/user-guides/controllers.md b/content/en/docs/guides/user-guides/controllers.md index 079e5e7e..ea3a13cb 100644 --- a/content/en/docs/guides/user-guides/controllers.md +++ b/content/en/docs/guides/user-guides/controllers.md @@ -35,9 +35,9 @@ The reconcilers below are currently deployed by default in the nephio controller To enable a particular reconciler, you pass an environment variable to the Nephio Controller at startup. The environment variable is of the form -`ENABLE_` where `` is the name of the reconciler to -be enabled in upper case. Therefore, to enable the `bootstrap-packages` reconciler, -pass the `ENABLE_BOOTSTRAPPACKAGES` to the nephio controller. Reconcilers are +*ENABLE_\* where *\* is the name of the reconciler to +be enabled in upper case. Therefore, to enable the bootstrap-packages reconciler, +pass the ENABLE_BOOTSTRAPPACKAGES to the nephio controller. Reconcilers are disabled by default. diff --git a/content/en/docs/guides/user-guides/exercise-1-free5gc.md b/content/en/docs/guides/user-guides/exercise-1-free5gc.md index 8025bcb9..a372d5ce 100644 --- a/content/en/docs/guides/user-guides/exercise-1-free5gc.md +++ b/content/en/docs/guides/user-guides/exercise-1-free5gc.md @@ -41,7 +41,7 @@ operator translates that into increased memory and CPU requirements for the unde To perform these exercises, you will need: -- Access to the installed demo VM environment and can login as the `ubuntu` user to have access to the necessary files. +- Access to the installed demo VM environment and can login as the ubuntu user to have access to the necessary files. - Access to the Nephio UI as described in the installation guide Access to Gitea, used in the demo environment as the Git provider, is optional. Later in the exercises, you will also @@ -59,8 +59,8 @@ Use the session just started on the VM to run these commands: {{% alert title="Note" color="primary" %}} -After fresh `docker` install, verify `docker` supplementary group is loaded by executing `id | grep docker`. -If not, logout and login to the VM or execute the `newgrp docker` to ensure the `docker` supplementary group is loaded. +After fresh docker install, verify docker supplementary group is loaded by executing `id | grep docker`. +If not, logout and login to the VM or execute the `newgrp docker` to ensure the docker supplementary group is loaded. {{% /alert %}} @@ -90,7 +90,7 @@ Since those are Ready, you can deploy a package from the [catalog-infra-capi](https://github.com/nephio-project/catalog/tree/main/infra/capi) repository into the mgmt repository. To do this, you retrieve the Package Revision name using `porchctl rpkg get`, clone that specific Package Revision via the `porchctl rpkg clone` command, then propose and approve the resulting package revision. 
You want to -use the latest revision of the nephio-workload-cluster package, which you can get with the command below (your latest +use the latest revision of the *nephio-workload-cluster* package, which you can get with the command below (your latest revision may be different): ```bash @@ -107,8 +107,8 @@ catalog-infra-capi-b0ae9512aab3de73bbae623a3b554ade57e15596 nephio-workload-cl ``` -Then, use the NAME from that in the `clone` operation, and the resulting PackageRevision name to perform the `propose` -and `approve` operations: +Then, use the NAME from that in the clone operation, and the resulting PackageRevision name to perform the propose +and approve operations: ```bash porchctl rpkg clone -n default catalog-infra-capi-b0ae9512aab3de73bbae623a3b554ade57e15596 --repository mgmt regional @@ -124,7 +124,7 @@ mgmt-08c26219f9879acdefed3469f8c3cf89d5db3868 created Next, you will want to ensure that the new Regional cluster is labeled as regional. Since you are using the CLI, you will need to pull the package out, modify it, and then push the updates back to the Draft revision. You will use `kpt` -and the `set-labels` function to do this. +and the set-labels function to do this. To pull the package to a local directory, use the `rpkg pull` command: @@ -132,7 +132,7 @@ To pull the package to a local directory, use the `rpkg pull` command: porchctl rpkg pull -n default mgmt-08c26219f9879acdefed3469f8c3cf89d5db3868 regional ``` -The package is now in the `regional` directory. So you can execute the `set-labels` function against the package +The package is now in the *regional* directory. So you can execute the set-labels function against the package imperatively, using `kpt fn eval`: ```bash @@ -150,7 +150,7 @@ The output is similar to: ``` -If you wanted to, you could have used the `--save` option to add the `set-labels` call to the package pipeline. This +If you wanted to, you could have used the --save option to add the set-labels call to the package pipeline. This would mean that function gets called whenever the server saves the package. If you added new resources later, they would also get labeled. @@ -219,7 +219,7 @@ regional Provisioned 52m v1.26.3 ``` -To access the API server of that cluster, you need to retrieve the `kubeconfig` file by pulling it from the Kubernetes +To access the API server of that cluster, you need to retrieve the *kubeconfig* file by pulling it from the Kubernetes Secret and decode the base64 encoding: ```bash @@ -268,7 +268,7 @@ regional-md-0-m6cr5-wtzlx regional 1 1 1 5m36s v1 ## Step 3: Deploy two Edge clusters -Next, you can deploy two Edge clusters by applying the PackageVariantSet that can be found in the `tests` directory: +Next, you can deploy two Edge clusters by applying the PackageVariantSet that can be found in the *tests* directory: ```bash kubectl apply -f test-infra/e2e/tests/free5gc/002-edge-clusters.yaml @@ -302,10 +302,10 @@ regional-md-0-lvmvm-8msw6 regional 1 1 1 143m v1. This is equivalent to doing the same `kpt` commands used earlier for the Regional cluster, except that it uses the PackageVariantSet controller, which is running in the Nephio Management cluster. It will clone the package for each -entry in the field `packageNames` in the PackageVariantSet. You can observe the progress by looking at the UI, or by +entry in the field packageNames in the PackageVariantSet. You can observe the progress by looking at the UI, or by using `kubectl` to monitor the various package variants, package revisions, and KinD clusters. 
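For example, the following commands give a rough view of the rollout from the management cluster (a sketch; names and revisions will differ in your environment):

```bash
# Watch the package variants and package revisions created for the edge clusters.
kubectl get packagevariants
kubectl get packagerevisions | grep edge

# The KinD workload clusters appear as they are provisioned.
kind get clusters
kubectl get clusters
```

Once the edge clusters report a Provisioned phase, you can continue with the steps below.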
-To access the API server of these clusters, you will need to get the `kubeconfig` file. To retrieve the file, you pull +To access the API server of these clusters, you will need to get the *kubeconfig* file. To retrieve the file, you pull it from the Kubernetes Secret and decode the base64 encoding: ```bash @@ -315,7 +315,7 @@ export KUBECONFIG=$HOME/.kube/config:$HOME/.kube/regional-kubeconfig:$HOME/.kube ``` To retain the KUBECONFIG environment variable permanently across sessions for the -user, add it to the `~/.bash_profile` and source the `~/.bash_profile` file +user, add it to the *~/.bash_profile* and source the *~/.bash_profile* file ```bash echo "export KUBECONFIG=$HOME/.kube/config:$HOME/.kube/regional-kubeconfig:$HOME/.kube/edge01-kubeconfig:$HOME/.kube/edge02-kubeconfig" >> ~/.bash_profile source ~/.bash_profile @@ -389,7 +389,7 @@ packagevariant.config.porch.kpt.dev/network created ``` -Then you will create appropriate `Secret` to make sure that Nephio can authenticate to the external backend. +Then you will create appropriate Secret to make sure that Nephio can authenticate to the external backend. ```bash kubectl apply -f test-infra/e2e/tests/free5gc/002-secret.yaml @@ -424,7 +424,7 @@ rawtopology.topo.nephio.org/nephio created While the Edge clusters are deploying (which will take 5-10 minutes), you can install the free5GC functions other than SMF, AMF, and UPF. For this, you will use the Regional cluster. Since these are all installed with a single package, you -can use the UI to pick the `free5gc-cp` package from the `free5gc-packages` repository and clone it to the `regional` +can use the UI to pick the *free5gc-cp* package from the *free5gc-packages* repository and clone it to the *regional* repository (you could have also used the CLI). ![Install free5gc - Step 1](/static/images/user-guides/free5gc-cp-1.png) @@ -433,8 +433,8 @@ repository (you could have also used the CLI). ![Install free5gc - Step 3](/static/images/user-guides/free5gc-cp-3.png) -Click through the "Next" button until you are through all the steps, then click "Add Deployment". On the next screen, -click "Propose", and then "Approve". +Click through the **Next** button until you are through all the steps, then click **Add Deployment**. On the next screen, +click **Propose**, and then **Approve**. ![Install free5gc - Step 4](/static/images/user-guides/free5gc-cp-4.png) @@ -522,7 +522,7 @@ statefulset.apps/mongodb 1/1 3m31s Now you will need to deploy the free5GC operator across all of the Workload clusters (regional and edge). To do this, you use another PackageVariantSet. This one uses an objectSelector to select the WorkloadCluster resources previously -added to the Management cluster when you had deployed the nephio-workload-cluster packages (manually as well as via +added to the Management cluster when you had deployed the *nephio-workload-cluster* packages (manually as well as via PackageVariantSet). ```bash @@ -539,7 +539,7 @@ packagevariantset.config.porch.kpt.dev/free5gc-operator created ## Step 6: Check free5GC Operator Deployment -Within five minutes of applying the free5GC Operator YAML file, you should see `free5gc` namespaces on your regional and +Within five minutes of applying the free5GC Operator YAML file, you should see free5gc namespaces on your regional and edge clusters: ```bash @@ -749,16 +749,16 @@ The output is similar to: ## Step 8: Deploy UERANSIM -The UERANSIM package can be deployed to the edge01 cluster, where it will simulate a gNodeB and UE. 
Just like our other +The *UERANSIM* package can be deployed to the edge01 cluster, where it will simulate a gNodeB and UE. Just like our other packages, UERANSIM needs to be configured to attach to the correct networks and use the correct IP address. Thus, you use our standard specialization techniques and pipeline to deploy UERANSIM, just like the other network functions. However, before you do that, let us register the UE with free5GC as a subscriber. You will use the free5GC Web UI to do -this. To access it, you need to open another port forwarding session. Assuming you have the `regional-kubeconfig` file +this. To access it, you need to open another port forwarding session. Assuming you have the *regional-kubeconfig* file you created earlier in your home directory, you need to establish another ssh session from your workstation to the VM, port forwarding port 5000. -Before moving on to the new terminal, let's copy `regional-kubeconfig` to the home directory: +Before moving on to the new terminal, let's copy regional-kubeconfig to the home directory: ```bash cp $HOME/.kube/regional-kubeconfig . @@ -838,7 +838,7 @@ Our DNN does not actually provide access to the internet, so you won't be able t In this step, you will change the capacity requirements for the UPF and SMF, and see how the operator reconfigures the Kubernetes resources used by the network functions. -The capacity requirements are captured in a custom resource (capacity.yaml) within the deployed package. You can edit +The capacity requirements are captured in a custom resource (*capacity.yaml*) within the deployed package. You can edit this value with the CLI, or use the web interface. Both options lead to the same result, but using the web interface is faster. First, you will vertically scale the UPF using the CLI. @@ -880,7 +880,7 @@ choice (in the example you can use /tmp/upf-scale-package). porchctl rpkg pull -n default edge01-40c616e5d87053350473d3ffa1387a9a534f8f42 /tmp/upf-scale-package ``` You can inspect the contents of the package in the chosen directory. The UPF configuration is located in the -capacity.yaml file. +*capacity.yaml* file. ```bash cat /tmp/upf-scale-package/capacity.yaml @@ -904,8 +904,8 @@ spec: ``` -The contents of the package will be mutated using kpt functions to adjust the UPF configuration, however you can also -manually edit the file. Apply the kpt functions to the contents of the kpt package with a new value for the throughputs +The contents of the package will be mutated using *kpt* functions to adjust the UPF configuration, however you can also +manually edit the file. Apply the *kpt* functions to the contents of the kpt package with a new value for the throughputs of your choice. ```bash @@ -1024,17 +1024,17 @@ After the package is approved, the results can be observed in Nephio Web UI. Hea Inside the package, you can see that the throughput values for UPF have been modified, reflecting the changes you made with the CLI. -You can also scale NFs vertically using the Nephio Web UI. For practice you can scale the UPF on the second edge cluster. Once again, navigate to the Web UI and choose the `edge02` repository in the Deployments section. +You can also scale NFs vertically using the Nephio Web UI. For practice you can scale the UPF on the second edge cluster. Once again, navigate to the Web UI and choose the **edge02** repository in the Deployments section. 
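If you would rather stay on the CLI for this step, the corresponding package revision can be located in the same way as before; a sketch (the revision names in your environment will differ):

```bash
# Sketch: find the free5gc-upf package revisions in the edge02 repository.
porchctl rpkg get | grep edge02 | grep free5gc-upf
```

The remaining screenshots show the same flow performed through the Web UI.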
![Edge02 Deployments](/static/images/user-guides/UPF-Capacity-5.png) -Select the `free5gc-upf` deployment, and then `View draft revision`. +Select the **free5gc-upf** deployment, and then **View draft revision**. ![UPF Deployment in edge02](/static/images/user-guides/UPF-Capacity-6.png) ![First revision](/static/images/user-guides/UPF-Capacity-7.png) -Edit the draft revision, and modify the `Capacity.yaml` file. +Edit the draft revision, and modify the *Capacity.yaml* file. ![Edit the revision](/static/images/user-guides/UPF-Capacity-8.png) @@ -1048,6 +1048,6 @@ After saving the changes to the file, propose the draft package and approve it. ![New revision](/static/images/user-guides/UPF-Capacity-12.png) -After a few minutes, the revision for the UPF deployment will change, and the changes will be reflected in the `edge-02` cluster. +After a few minutes, the revision for the UPF deployment will change, and the changes will be reflected in the edge-02 cluster. **NOTE**: You will observe that the UPF NFDeployment on the workload clusters is updated and synced with Gitea. The UPF pod will not reflect the new information. This is because the Nephio free5gc operator is not updating the pod with new configuration. diff --git a/content/en/docs/guides/user-guides/exercise-2-oai.md b/content/en/docs/guides/user-guides/exercise-2-oai.md index cf7b90a8..b2d405e8 100644 --- a/content/en/docs/guides/user-guides/exercise-2-oai.md +++ b/content/en/docs/guides/user-guides/exercise-2-oai.md @@ -34,7 +34,7 @@ The network configuration is illustrated in the following figure: To perform these exercises, you will need: -- Access to the installed demo VM environment and can login as the `ubuntu` user to have access to the necessary files. +- Access to the installed demo VM environment and can login as the ubuntu user to have access to the necessary files. - Access to the Nephio UI as described in the installation guide Access to Gitea, used in the demo environment as the Git provider, is optional. @@ -50,8 +50,8 @@ Use the session just started on the VM to run these commands: {{% alert title="Note" color="primary" %}} -After fresh `docker` install, verify `docker` supplementary group is loaded by executing `id | grep docker`. -If not, logout and login to the VM or execute the `newgrp docker` to ensure the `docker` supplementary group is loaded. +After fresh docker install, verify docker supplementary group is loaded by executing `id | grep docker`. +If not, logout and login to the VM or execute the `newgrp docker` to ensure the docker supplementary group is loaded. {{% /alert %}} @@ -77,7 +77,7 @@ oai-core-packages git Package false True https://github ``` -Since those are Ready, you can deploy packages from these repositories. You can use our pre-defined `PackageVariantSets` for creating workload clusters +Since those are Ready, you can deploy packages from these repositories. You can use our pre-defined *PackageVariantSets* for creating workload clusters ```bash kubectl apply -f test-infra/e2e/tests/oai/001-infra.yaml @@ -93,15 +93,15 @@ packagevariantset.config.porch.kpt.dev/oai-edge-clusters created ``` -It will take around 15 mins to create the three clusters. You can check the progress by looking at commits made in gitea `mgmt` and `mgmt-staging` repository. After couple of minutes you should see three independent repositories (Core, Regional and Edge) for each workload cluster. +It will take around 15 mins to create the three clusters. 
You can check the progress by looking at commits made in gitea *mgmt* and *mgmt-staging* repository. After couple of minutes you should see three independent repositories (Core, Regional and Edge) for each workload cluster. -You can also look at the state of `packagerevisions` for the three packages. You can use the below command +You can also look at the state of packagerevisions for the three packages. You can use the below command ```bash kubectl get packagerevisions | grep -E 'core|regional|edge' | grep mgmt ``` -While you are checking you will see `LIFECYCLE` will change from Draft to Published. Once packages are Published then the clusters will start getting deployed. +While you are checking you will see *LIFECYCLE* will change from Draft to Published. Once packages are Published then the clusters will start getting deployed. ## Step 2: Check the status of the workload clusters @@ -125,7 +125,7 @@ regional docker Provisioned 34m v1.26.3 ``` -To access the API server of that cluster, you need to retrieve the `kubeconfig` file by pulling it from the Kubernetes Secret and decode the base64 encoding: +To access the API server of that cluster, you need to retrieve the *kubeconfig* file by pulling it from the Kubernetes Secret and decode the base64 encoding: ```bash kubectl get secret core-kubeconfig -o jsonpath='{.data.value}' | base64 -d > $HOME/.kube/core-kubeconfig @@ -134,7 +134,7 @@ kubectl get secret edge-kubeconfig -o jsonpath='{.data.value}' | base64 -d > $HO export KUBECONFIG=$HOME/.kube/config:$HOME/.kube/regional-kubeconfig:$HOME/.kube/core-kubeconfig:$HOME/.kube/edge-kubeconfig ``` -To retain the KUBECONFIG environment variable permanently across sessions for the user, add it to the `~/.bash_profile` and source the `~/.bash_profile` file +To retain the KUBECONFIG environment variable permanently across sessions for the user, add it to the *~/.bash_profile* and source the *~/.bash_profile* file You can then use it to access the Workload cluster directly: @@ -225,7 +225,7 @@ worker nodes. Finally, you want to configure the resource backend to be aware of these clusters. The resource backend is an IP address and VLAN index management system. It is included for demonstration purposes to show how Nephio package specialization can interact with external systems to fully configure packages. But it needs to be configured to match our topology. First, you will apply a package to define the high-level networks for attaching our workloads. The Nephio package specialization pipeline will determine the exact VLAN tags and IP addresses for those attachments based on the specific -clusters. There is a predefined PackageVariant in the tests directory for this: +clusters. There is a predefined *PackageVariant* in the tests directory for this: ```bash kubectl apply -f test-infra/e2e/tests/oai/001-network.yaml @@ -253,7 +253,7 @@ secret/srl.nokia.com created ``` -The predefined PackageVariant package defines certain resources that exist for the entire topology. However, you also need to configure the resource backend for our particular topology. This will likely be automated in the future, but for now you can just directly apply the configuration you have created that matches this test topology. Within this step also the credentials and information is provided to configure the network device, that aligns with the topology. +The predefined *PackageVariant* package defines certain resources that exist for the entire topology. 
However, you also need to configure the resource backend for our particular topology. This will likely be automated in the future, but for now you can just directly apply the configuration you have created that matches this test topology. Within this step also the credentials and information is provided to configure the network device, that aligns with the topology. ```bash ./test-infra/e2e/provision/hacks/network-topo.sh @@ -335,7 +335,7 @@ packagerevision.porch.kpt.dev/mgmt-staging-f1b8e75b6c87549d67037f784abc0083ac601 ## Step 3: Deploy Dependencies, MySQL database, OAI Core and RAN Operator in the Workload clusters -Now you will need to deploy the MySQL database required by OAI UDR network function, OAI Core and RAN operators across the Workload clusters. To do this, you use `PackageVariant` and `PackageVariantSet`. Later uses an objectSelector to select the WorkloadCluster resources previously added to the Management cluster when you had deployed the nephio-workload-cluster packages (manually as well as via PackageVariantSet). +Now you will need to deploy the MySQL database required by OAI UDR network function, OAI Core and RAN operators across the Workload clusters. To do this, you use *PackageVariant* and *PackageVariantSet*. Later uses an objectSelector to select the WorkloadCluster resources previously added to the Management cluster when you had deployed the *nephio-workload-cluster* packages (manually as well as via *PackageVariantSet*). ```bash kubectl apply -f test-infra/e2e/tests/oai/002-database.yaml @@ -356,7 +356,7 @@ packagevariant.config.porch.kpt.dev/oai-ran-operator-regional created ## Step 4: Check Database and Operator Deployment -Within five minutes of applying the RAN, Core Operator, and database Packages, you should see `oai-core` and `oai-cn-operators` namespaces on the Core workload cluster: +Within five minutes of applying the RAN, Core Operator, and database Packages, you should see oai-core and oai-cn-operators namespaces on the Core workload cluster: ```bash kubectl get ns --context core-admin@core @@ -382,7 +382,7 @@ resource-group-system Active 88m ``` -In the namespace `oai-core` you can check MySQL pod +In the namespace oai-core you can check MySQL pod ```bash kubectl get pods -n oai-core --context core-admin@core @@ -397,7 +397,7 @@ mysql-7dd4cc6945-lqwcv 1/1 Running 0 7m12s ``` -In the `oai-cn-operators` namespace you should see control plane network function operators +In the oai-cn-operators namespace you should see control plane network function operators ```bash kubectl get pods -n oai-cn-operators --context core-admin@core @@ -464,7 +464,7 @@ resource-group-system Active 97m ``` -In edge cluster in `oai-cn-operators` namespace you will see only oai-upf network function. +In edge cluster in oai-cn-operators namespace you will see only oai-upf network function. ```bash kubectl get pods -n oai-cn-operators --context edge-admin@edge @@ -481,7 +481,7 @@ oai-upf-operator-75cbc869cb-67lf9 1/1 Running 0 16m ## Step 5: Deploy the Core Network Functions -You can start by deploying the core network functions which the operator will instantiate. For now, you will use individual `PackageVariants` targeting the Core and Edge cluster. In the future, you could put all of these resources into +You can start by deploying the core network functions which the operator will instantiate. For now, you will use individual *PackageVariants* targeting the Core and Edge cluster. 
In the future, you could put all of these resources into yet-another-package - a "topology" package - and deploy them all as a unit. Or you can use a topology controller to create them. But for now, let's do each manually. ```bash @@ -501,7 +501,7 @@ packagevariant.config.porch.kpt.dev/oai-upf-edge created ``` -All the NFs will wait for NRF to come up and then they will register to NRF. SMF has a dependency on UPF which is described by `dependency.yaml` file in SMF package. It will wait till the time UPF is deployed. It takes around ~800 seconds for the whole core network to come up. NRF is exposing its service via metallb external ip-address. In case metallb ip-address pool is not properly defined in the previous section, then UPF will not be able to register to NRF and in this case SMF and UPF will not be able to communicate. +All the NFs will wait for NRF to come up and then they will register to NRF. SMF has a dependency on UPF which is described by *dependency.yaml* file in SMF package. It will wait till the time UPF is deployed. It takes around ~800 seconds for the whole core network to come up. NRF is exposing its service via metallb external ip-address. In case metallb ip-address pool is not properly defined in the previous section, then UPF will not be able to register to NRF and in this case SMF and UPF will not be able to communicate. ### Check Core Network Deployment @@ -574,11 +574,11 @@ content-length: 58 [2024-01-25 16:54:21.817] [upf_n4 ] [info] Received SX HEARTBEAT REQUEST ``` -In the logs you should see `Received SX HEARTBEAT REQUEST` statement. If that is present then SMF and UPF are sharing PFCP heartbeats. +In the logs you should see **Received SX HEARTBEAT REQUEST** statement. If that is present then SMF and UPF are sharing PFCP heartbeats. ## Step 6: Deploy RAN Network Functions -If the core network functions are running and configured properly then you can start by deploying RAN network function `PackageVariants`. +If the core network functions are running and configured properly then you can start by deploying RAN network function *PackageVariants*. ```bash kubectl create -f test-infra/e2e/tests/oai/004-ran-network.yaml @@ -679,7 +679,7 @@ The output is similar to: ## Step 7: Deploy UE -If all three links are configured then you can proceed with deploying the UE `PackageVariants` +If all three links are configured then you can proceed with deploying the UE *PackageVariants* ```bash kubectl create -f test-infra/e2e/tests/oai/005-ue.yaml diff --git a/content/en/docs/guides/user-guides/helm/flux-helm.md b/content/en/docs/guides/user-guides/helm/flux-helm.md index 8a0e2546..f60c610e 100644 --- a/content/en/docs/guides/user-guides/helm/flux-helm.md +++ b/content/en/docs/guides/user-guides/helm/flux-helm.md @@ -26,7 +26,7 @@ Then, we can utilize the flux Custom Resources defined in another test kpt packa Access the Nephio Web UI and execute the following: -We will deploy the `flux-helm-controllers` pkg from the `nephio-example-packages` repo to the `edge02` workload +We will deploy the *flux-helm-controllers* pkg from the *nephio-example-packages* repo to the *edge02* workload cluster. * **Step 1** @@ -41,8 +41,8 @@ cluster. ![Install flux controllers - Step 3](/static/images/user-guides/flux-controller-selection.png) -Click through the `Next` button until you are through all the steps, leaving all options as `default`, then click -`Create Deployment`. 
+Click through the **Next** button until you are through all the steps, leaving all options as default, then click
+**Create Deployment**.
 
 * **Step 4**
 
@@ -53,11 +53,11 @@ resources to deploy the controllers.
 
 {{% alert title="Note" color="primary" %}}
 
-We are deploying into the `flux-system` namespace by default.
+We are deploying into the flux-system namespace by default.
 
 {{% /alert %}}
 
-Finally, we need to `propose` and then `approve` the pkg to initialize the deployment.
+Finally, we need to propose and then approve the pkg to initialize the deployment.
 
 * **Step 5**
 
@@ -85,8 +85,8 @@ source-controller-5756bf7d48-hprkn              1/1     Running   0          6m20s
 
 ### Deploying the onlineboutique-flux pkg
 
-To make the demo kpt packages available in Nephio, we need to register a new `External Blueprints`repository. We can do
-this via kubectl towards the management cluster.
+To make the demo kpt packages available in Nephio, we need to register a new *External Blueprints* repository. We can do
+this via `kubectl` towards the management cluster.
 
 ```bash
 cat << EOF | kubectl apply -f -
@@ -109,7 +109,7 @@ spec:
 EOF
 ```
 
-The new repository should now have been added to the `External Blueprints` section of the UI.
+The new repository should now have been added to the **External Blueprints** section of the UI.
 
 ![External Blueprints UI](/static/images/user-guides/external-bp-repos.png)
 
diff --git a/content/en/docs/guides/user-guides/helm/helm-to-operator-codegen-sdk-user-guide.md b/content/en/docs/guides/user-guides/helm/helm-to-operator-codegen-sdk-user-guide.md
index eb6c2e5a..9cd40c8a 100644
--- a/content/en/docs/guides/user-guides/helm/helm-to-operator-codegen-sdk-user-guide.md
+++ b/content/en/docs/guides/user-guides/helm/helm-to-operator-codegen-sdk-user-guide.md
@@ -269,7 +269,7 @@ INFO[0000] ConfigMap |10
 
 ```
 
-The generated Go-Code would be written to the "outputs/generated_code.go" file
+The generated Go-Code would be written to the *outputs/generated_code.go* file
 
 The Generated Go-Code shall contain the following functions:
 
@@ -289,7 +289,7 @@ The Generated Go-Code shall contain the following functions:
 
 Please refer [here](https://book.kubebuilder.io/quick-start) to develop & deploy the operator.
 After the basic structure of the operator is created, users can proceed to add their business logic
 
-The `CreateAll()` and `DeleteAll()` functions generated by the SDK can be leveraged for Day-0 resource deployments, allowing users to easily manage the creation and deletion of resources defined in the Helm chart. By integrating their business logic with these functions, users can ensure that their operator effectively handles resource lifecycle management and orchestration within a Kubernetes environment.
+The CreateAll() and DeleteAll() functions generated by the SDK can be leveraged for Day-0 resource deployments, allowing users to easily manage the creation and deletion of resources defined in the Helm chart. By integrating their business logic with these functions, users can ensure that their operator effectively handles resource lifecycle management and orchestration within a Kubernetes environment.
In the end, all the resources created could be checked by: `kubectl get pods -n free5gcns` diff --git a/content/en/docs/network-architecture/o-ran-integration.md b/content/en/docs/network-architecture/o-ran-integration.md index 36e7f4ee..aae9e30c 100644 --- a/content/en/docs/network-architecture/o-ran-integration.md +++ b/content/en/docs/network-architecture/o-ran-integration.md @@ -88,7 +88,7 @@ The Nephio ClusterClaim CR: The O-Cloud Cluster Template: - Supports installation of add-on features such as Multus networking that will require specific configuration handled through the configRefs CRDs. A configRef CR can contain both configuration that is fixed for the O-Cloud Cluster template as well as instance specific configuration that must be provided as user input. -- Is realized with a KPT package that contains the ClusterClaim CR manifest as well as the referred O2imsClusterParameters CR manifest and additional configuration data manifests +- Is realized with a *KPT* package that contains the ClusterClaim CR manifest as well as the referred O2imsClusterParameters CR manifest and additional configuration data manifests As of this release, the O-RAN Alliance has not specified O2ims provisioning interface, as such this pre-standardization version of the O2ims provisioning interface is KRM/CRD based where the: diff --git a/content/en/docs/porch/config-as-data.md b/content/en/docs/porch/config-as-data.md index 11bf7bdd..c7c5def4 100644 --- a/content/en/docs/porch/config-as-data.md +++ b/content/en/docs/porch/config-as-data.md @@ -12,7 +12,7 @@ This document provides background context for Package Orchestration, which is fu ## Configuration as Data -*Configuration as Data* is an approach to management of configuration (incl. +Configuration as Data is an approach to management of configuration (incl. configuration of infrastructure, policy, services, applications, etc.) which: * makes configuration data the source of truth, stored separately from the live @@ -28,7 +28,7 @@ configuration of infrastructure, policy, services, applications, etc.) which: ## Key Principles -A system based on CaD *should* observe the following key principles: +A system based on CaD should observe the following key principles: * secrets should be stored separately, in a secret-focused storage system ([example](https://cloud.google.com/secret-manager)) @@ -47,7 +47,7 @@ A system based on CaD *should* observe the following key principles: can be operated on by given code (functions) * finds and/or filters / queries / selects code (functions) that can operate on resource types contained within a body of configuration data -* *actuation* (reconciliation of configuration data with live state) is separate +* actuation (reconciliation of configuration data with live state) is separate from transformation of configuration data, and is driven by the declarative data model * transformations, particularly value propagation, are preferable to wholesale @@ -90,16 +90,16 @@ metadata, references, status conventions, etc. 
as the configuration serialization data model * uses [Kptfile](https://kpt.dev/reference/schema/kptfile/) to store package metadata * uses [ResourceList](https://kpt.dev/reference/schema/resource-list/) as a serialized package wire-format -* uses a function `ResourceList → ResultList` (`kpt` function) as the foundational, composable unit of +* uses a function `ResourceList → ResultList` (*kpt* function) as the foundational, composable unit of package-manipulation code (note that other forms of code can manipulate packages as well, i.e. UIs, custom algorithms not necessarily packaged and used as kpt functions) and provides the following basic functionality: -* load a serialized package from a repository (as `ResourceList`) (examples of repository may be one or more of: local +* load a serialized package from a repository (as ResourceList) (examples of repository may be one or more of: local HDD, Git repository, OCI, Cloud Storage, etc.) -* save a serialized package (as `ResourceList`) to a package repository -* evaluate a function on a serialized package (`ResourceList`) +* save a serialized package (as ResourceList) to a package repository +* evaluate a function on a serialized package (ResourceList) * [render](https://kpt.dev/book/04-using-functions/01-declarative-function-execution) a package (evaluate functions declared within the package itself) * create a new (empty) package @@ -115,7 +115,7 @@ The Config as Data approach enables some key value which is available in other configuration management approaches to a lesser extent or is not available at all. -*CaD* approach enables: +CaD approach enables: * simplified authoring of configuration using a variety of methods and sources * WYSIWYG interaction with configuration using a simple data serialization formation rather than a code-like format diff --git a/content/en/docs/porch/contributors-guide/_index.md b/content/en/docs/porch/contributors-guide/_index.md index 349a0af9..13c88779 100644 --- a/content/en/docs/porch/contributors-guide/_index.md +++ b/content/en/docs/porch/contributors-guide/_index.md @@ -7,7 +7,7 @@ description: ## Changing Porch API -If you change the API resources, in `api/porch/.../*.go`, update the generated code by running: +If you change the API resources, in *api/porch/.../*.go*, update the generated code by running: ```sh make generate @@ -17,23 +17,23 @@ make generate Porch comprises of several software components: -* [api](https://github.com/nephio-project/porch/tree/main/api): Definition of the KRM API supported by the Porch +* [*api*](https://github.com/nephio-project/porch/tree/main/api): Definition of the KRM API supported by the Porch extension apiserver -* [porchctl](https://github.com/nephio-project/porch/tree/main/cmd/porchctl): CLI command tool for administration of - Porch `Repository` and `PackageRevision` custom resources. -* [apiserver](https://github.com/nephio-project/porch/tree/main/pkg/apiserver): The Porch apiserver implementation, REST - handlers, Porch `main` function -* [engine](https://github.com/nephio-project/porch/tree/main/pkg/engine): Core logic of Package Orchestration - +* [*porchctl*](https://github.com/nephio-project/porch/tree/main/cmd/porchctl): CLI command tool for administration of + Porch Repository and PackageRevision custom resources. 
+* [*apiserver*](https://github.com/nephio-project/porch/tree/main/pkg/apiserver): The Porch apiserver implementation, REST + handlers, Porch main function +* [*engine*](https://github.com/nephio-project/porch/tree/main/pkg/engine): Core logic of Package Orchestration - operations on package contents -* [func](https://github.com/nephio-project/porch/tree/main/func): KRM function evaluator microservice; exposes gRPC API -* [repository](https://github.com/nephio-project/porch/blob/main/pkg/repository): Repository integration package -* [git](https://github.com/nephio-project/porch/tree/main/pkg/git): Integration with Git repository. -* [oci](https://github.com/nephio-project/porch/tree/main/pkg/oci): Integration with OCI repository. -* [cache](https://github.com/nephio-project/porch/tree/main/pkg/cache): Package caching. -* [controllers](https://github.com/nephio-project/porch/tree/main/controllers): `Repository` CRD. No controller; +* [*func*](https://github.com/nephio-project/porch/tree/main/func): KRM function evaluator microservice; exposes gRPC API +* [*repository*](https://github.com/nephio-project/porch/blob/main/pkg/repository): Repository integration package +* [*git*](https://github.com/nephio-project/porch/tree/main/pkg/git): Integration with Git repository. +* [*oci*](https://github.com/nephio-project/porch/tree/main/pkg/oci): Integration with OCI repository. +* [*cache*](https://github.com/nephio-project/porch/tree/main/pkg/cache): Package caching. +* [*controllers*](https://github.com/nephio-project/porch/tree/main/controllers): Repository CRD. No controller; Porch apiserver watches these resources for changes as repositories are (un-)registered. -* [test](https://github.com/nephio-project/porch/tree/main/test): Test Git Server for Porch e2e testing, and - [e2e](https://github.com/nephio-project/porch/tree/main/test/e2e) tests. +* [*test*](https://github.com/nephio-project/porch/tree/main/test): Test Git Server for Porch e2e testing, and + [*e2e*](https://github.com/nephio-project/porch/tree/main/test/e2e) tests. ## Running Porch @@ -68,7 +68,7 @@ Follow the [Running Porch Locally](../running-porch/running-locally.md) guide to ## Debugging -To debug Porch, run Porch locally [running-locally.md](../running-porch/running-locally.md), exit porch server running +To debug Porch, run Porch locally [Running Porch Locally](../running-porch/running-locally.md), exit porch server running in the shell, and launch Porch under the debugger. VSCode debug session is pre-configured in [launch.json](https://github.com/nephio-project/porch/blob/main/.vscode/launch.json). @@ -89,7 +89,7 @@ Some useful code pointers: ## Running Tests All tests can be run using `make test`. Individual tests can be run using `go test`. -End-to-End tests assume that Porch instance is running and `KUBECONFIG` is configured +End-to-End tests assume that Porch instance is running and KUBECONFIG is configured with the instance. The tests will automatically detect whether they are running against Porch running on local machien or k8s cluster and will start Git server appropriately, then run test suite against the Porch instance. @@ -103,7 +103,7 @@ then run test suite against the Porch instance. * `make push-images`: builds and pushes Porch Docker images * `make deployment-config`: customizes configuration which installs Porch in k8s cluster with correct image names, annotations, service accounts. 
-  The deployment-ready configuration is copied into `./.build/deploy`
+  The deployment-ready configuration is copied into *./.build/deploy*
 * `make deploy`: deploys Porch in the k8s cluster configured with current kubectl context
 * `make push-and-deploy`: builds, pushes Porch Docker images, creates deployment configuration, and deploys Porch
 * `make` or `make all`: builds and runs Porch [locally](../running-porch/running-locally.md)
diff --git a/content/en/docs/porch/contributors-guide/dev-process.md b/content/en/docs/porch/contributors-guide/dev-process.md
index ec84f685..821a6593 100644
--- a/content/en/docs/porch/contributors-guide/dev-process.md
+++ b/content/en/docs/porch/contributors-guide/dev-process.md
@@ -31,17 +31,17 @@ After issuing this command you are expected to start the porch API server locall
 
 The simplest way to run the porch API server is to launch it in a VSCode IDE, as described by the following process:
 
-1. Open the `porch.code-workspace` file in the root of the porch git repo.
+1. Open the *porch.code-workspace* file in the root of the porch git repo.
 
-1. Edit your local `.vscode/launch.json` file as follows: Change the `--kubeconfig` argument of the `Launch Server` configuration to point to a KUBECONFIG file that is set to the kind cluster as the current context.
+1. Edit your local *.vscode/launch.json* file as follows: Change the `--kubeconfig` argument of the Launch Server configuration to point to a *KUBECONFIG* file that is set to the kind cluster as the current context.
 
   {{% alert title="Note" color="primary" %}}
 
-  If your current KUBECONFIG environment variable already points to the porch-test kind cluster, then you don't have to touch anything.
+  If your current *KUBECONFIG* environment variable already points to the porch-test kind cluster, then you don't have to touch anything.
 
   {{% /alert %}}
 
-1. Launch the Porch server locally in VSCode by selecting the "Launch Server" configuration on the VSCode "Run and Debug" window. For more information please refer to the [VSCode debugging documentation](https://code.visualstudio.com/docs/editor/debugging).
+1. Launch the Porch server locally in VSCode by selecting the **Launch Server** configuration on the VSCode **Run and Debug** window. For more information please refer to the [VSCode debugging documentation](https://code.visualstudio.com/docs/editor/debugging).
 
 ### Check to ensure that the API server is serving requests:
 
@@ -129,7 +129,7 @@ curl https://localhost:4443/apis/porch.kpt.dev/v1alpha1 -k
 
 ## Troubleshoot the porch controllers
 
-There are several ways to develop, test and troubleshoot the porch controllers (i.e. PackageVariant, PackageVariantSet). In this chapter we describe an option where every other parts of porch is running in the porch-test kind cluster, but the process hosting all porch controllers is running locally on your machine.
+There are several ways to develop, test and troubleshoot the porch controllers (i.e. *PackageVariant*, *PackageVariantSet*). In this chapter we describe an option where every other part of porch is running in the porch-test kind cluster, but the process hosting all porch controllers is running locally on your machine.
The following command will rebuild and deploy porch, except the porch-controllers component: @@ -137,7 +137,7 @@ The following command will rebuild and deploy porch, except the porch-controller make run-in-kind-no-controllers ``` -After issuing this command you are expected to start the porch controllers process locally on your machine (outside of the kind cluster); probably in your IDE, potentially in a debugger. If you are using VS Code you can use the "Launch Controllers" configuration that is defined in the [launch.json](https://github.com/nephio-project/porch/blob/main/.vscode/launch.json) file of the porch git repo. +After issuing this command you are expected to start the porch controllers process locally on your machine (outside of the kind cluster); probably in your IDE, potentially in a debugger. If you are using VS Code you can use the **Launch Controllers** configuration that is defined in the [launch.json](https://github.com/nephio-project/porch/blob/main/.vscode/launch.json) file of the porch git repo. ## Run the unit tests @@ -147,7 +147,7 @@ make test ## Run the end-to-end tests -To run the end-to-end tests against the Kubernetes API server where KUBECONFIG points to, simply issue: +To run the end-to-end tests against the Kubernetes API server where *KUBECONFIG* points to, simply issue: ```bash make test-e2e @@ -182,7 +182,7 @@ E2E=1 go test -v ./test/e2e/cli -run TestPorch/rpkg-lifecycle The `make run-in-kind`, `make run-in-kind-no-server` and `make run-in-kind-no-controller` commands can be executed right after each other. No clean-up or restart is required between them. The make scripts will intelligently do the necessary changes in your current porch deployment in kind (e.g. removing or re-adding the porch API server). -You can always find the configuration of your current deployment in `.build/deploy`. +You can always find the configuration of your current deployment in *.build/deploy*. You can always use `make test` and `make test-e2e` to test your current setup, no matter which of the above detailed configurations it is. diff --git a/content/en/docs/porch/contributors-guide/environment-setup-vm.md b/content/en/docs/porch/contributors-guide/environment-setup-vm.md index 4b4945dc..0a1e6245 100644 --- a/content/en/docs/porch/contributors-guide/environment-setup-vm.md +++ b/content/en/docs/porch/contributors-guide/environment-setup-vm.md @@ -41,7 +41,7 @@ sudo usermod -a -G syslog ubuntu sudo usermod -a -G docker ubuntu ``` -3. Log out of your VM and log in again so that the group changes on the `ubuntu` user are picked up. +3. Log out of your VM and log in again so that the group changes on the *ubuntu* user are picked up. ```bash > exit @@ -51,7 +51,7 @@ sudo usermod -a -G docker ubuntu ubuntu adm dialout cdrom floppy sudo audio dip video plugdev syslog netdev lxd docker ``` -4. Install `go` so that you can build Porch on the VM: +4. Install *go* so that you can build Porch on the VM: ```bash wget -O - https://go.dev/dl/go1.22.5.linux-amd64.tar.gz | sudo tar -C /usr/local -zxvf - @@ -64,7 +64,7 @@ echo ' PATH="/usr/local/go/bin:$PATH"' >> ~/.profile echo 'fi' >> ~/.profile ``` -5. Log out of your VM and log in again so that the `go` is added to your path. Verify that `go` is in the path: +5. Log out of your VM and log in again so that the *go* is added to your path. Verify that *go* is in the path: ```bash > exit @@ -75,7 +75,7 @@ echo 'fi' >> ~/.profile go version go1.22.5 linux/amd64 ``` -6. Install `go delve` for debugging on the VM: +6. 
Install *go delve* for debugging on the VM: ```bash go install -v github.com/go-delve/delve/cmd/dlv@latest @@ -104,7 +104,7 @@ sed -i "s/^KIND_CONTEXT_NAME ?= porch-test$/KIND_CONTEXT_NAME ?= "$(kind get clu kubectl expose svc -n porch-system function-runner --name=xfunction-runner --type=LoadBalancer --load-balancer-ip='172.18.0.202' ``` -10. Set the `KUBECONFIG` and `FUNCTION_RUNNER_IP` environment variables in the `.profile` file +10. Set the KUBECONFIG and FUNCTION_RUNNER_IP environment variables in the *.profile* file You **must** do this step before connecting with VSCode because VSCode caches the environment on the server. If you want to change the values of these variables subsequently, you must restart the VM server. ```bash @@ -127,7 +127,7 @@ documentation. 1. Use the **Connect to a remote host** instructions on the [Remote Development using SSH](https://code.visualstudio.com/docs/remote/ssh) page to connect to your VM. -2. Click **Open Folder** and browse to the Porch code on the vm, `/home/ubuntu/git/github/nephio-project/porch` in this case: +2. Click **Open Folder** and browse to the Porch code on the vm, */home/ubuntu/git/github/nephio-project/porch* in this case: ![Browse to Porch code](/static/images/porch/contributor/01_VSCodeOpenPorchFolder.png) @@ -135,12 +135,12 @@ documentation. ![Porch code is open](/static/images/porch/contributor/02_VSCodeConnectedPorch.png) -4. We now need to install support for `go` debugging in VSCode. Trigger this by launching a debug configuration in VSCode. +4. We now need to install support for *go* debugging in VSCode. Trigger this by launching a debug configuration in VSCode. Here we use the **Launch Override Server** configuration. ![Launch the Override Server VSCode debug configuration](/static/images/porch/contributor/03_LaunchOverrideServer.png) -5. VSCode complains that `go` debugging is not supported, click the **Install go Extension** button. +5. VSCode complains that *go* debugging is not supported, click the **Install go Extension** button. ![VSCode go debugging not supported message](/static/images/porch/contributor/04_GoDebugNotSupportedPopup.png) diff --git a/content/en/docs/porch/contributors-guide/environment-setup.md b/content/en/docs/porch/contributors-guide/environment-setup.md index 3e86ea77..41486672 100644 --- a/content/en/docs/porch/contributors-guide/environment-setup.md +++ b/content/en/docs/porch/contributors-guide/environment-setup.md @@ -20,16 +20,16 @@ plugin to connect to it. ## Extra steps for MacOS users -The script the `make deployment-config` target to generate the deployment files for porch. The scripts called by this -make target use recent `bash` additions. MacOS comes with `bash` 3.x.x +The script the make deployment-config target to generate the deployment files for porch. The scripts called by this +make target use recent *bash* additions. MacOS comes with *bash* 3.x.x -1. Install `bash` 4.x.x or better of `bash` using homebrew, see +1. Install *bash* 4.x.x or better of *bash* using homebrew, see [this this post for details](https://apple.stackexchange.com/questions/193411/update-bash-to-version-4-0-on-osx) -2. Ensure that `/opt/homebrew/bin` is earlier in your path than `/bin` and `/usr/bin` +2. 
Ensure that */opt/homebrew/bin* is earlier in your path than */bin* and */usr/bin* {{% alert title="Note" color="primary" %}} -The changes above **permanently** change the `bash` version for **all** applications and may cause side +The changes above **permanently** change the *bash* version for **all** applications and may cause side effects. {{% /alert %}} @@ -37,7 +37,7 @@ effects. ## Setup the environment automatically -The [`./scripts/setup-dev-env.sh`](https://github.com/nephio-project/porch/blob/main/scripts/setup-dev-env.sh) setup +The [*./scripts/setup-dev-env.sh*](https://github.com/nephio-project/porch/blob/main/scripts/setup-dev-env.sh) setup script automatically builds a porch development environment. {{% alert title="Note" color="primary" %}} @@ -50,9 +50,9 @@ to customize it to suit your needs. The setup script will perform the following steps: 1. Install a kind cluster. The name of the cluster is read from the PORCH_TEST_CLUSTER environment variable, otherwise - it defaults to `porch-test`. The configuration of the cluster is taken from + it defaults to porch-test. The configuration of the cluster is taken from [here](https://github.com/nephio-project/porch/blob/main/deployments/local/kind_porch_test_cluster.yaml). -1. Install the MetalLB load balancer into the cluster, in order to allow `LoadBalancer` typed Services to work properly. +1. Install the MetalLB load balancer into the cluster, in order to allow LoadBalancer typed Services to work properly. 1. Install the Gitea git server into the cluster. This can be used to test porch during development, but it is not used in automated end-to-end tests. Gitea is exposed to the host via port 3000. The GUI is accessible via , or (username: nephio, password: secret). @@ -63,7 +63,7 @@ The setup script will perform the following steps: {{% /alert %}} 1. Generate the PKI resources (key pairs and certificates) required for end-to-end tests. -1. Build the porch CLI binary. The result will be generated as `.build/porchctl`. +1. Build the porch CLI binary. The result will be generated as *.build/porchctl*. That's it! If you want to run the steps manually, please use the code of the script as a detailed description. @@ -72,7 +72,7 @@ script is interrupted for any reason, and you run it again it should effectively ## Extra manual steps -Copy the `.build/porchctl` binary (that was built by the setup script) to somewhere in your $PATH, or add the `.build` +Copy the *.build/porchctl* binary (that was built by the setup script) to somewhere in your $PATH, or add the *.build* directory to your PATH. ## Build and deploy porch @@ -143,7 +143,7 @@ external-blueprints git Package false True https://github.com/n management git Package false True http://172.18.255.200:3000/nephio/management.git ``` -You can also check the repositories using kubectl. +You can also check the repositories using *kubectl*. ```bash kubectl get repositories -n porch-demo diff --git a/content/en/docs/porch/package-orchestration.md b/content/en/docs/porch/package-orchestration.md index a687b658..ea6d3a96 100644 --- a/content/en/docs/porch/package-orchestration.md +++ b/content/en/docs/porch/package-orchestration.md @@ -8,7 +8,7 @@ description: ## Why Customers who want to take advantage of the benefits of [Configuration as Data](config-as-data.md) can do so today using -a [kpt](https://kpt.dev) CLI and kpt function ecosystem, including [functions catalog](https://catalog.kpt.dev/). 
+a [*kpt*](https://kpt.dev) CLI and *kpt* function ecosystem, including [functions catalog](https://catalog.kpt.dev/). Package authoring is possible using a variety of editors with [YAML](https://yaml.org/) support. That said, a delightful UI experience of WYSIWYG package authoring which supports broader package lifecycle, including package authoring with *guardrails*, approval workflow, package deployment, and more, is not yet available. @@ -20,40 +20,40 @@ building the delightful UI experience supporting the configuration lifecycle. This section briefly describes core concepts of package orchestration: -***Package***: Package is a collection of related configuration files containing configuration of [KRM][krm] -**resources**. Specifically, configuration packages are [kpt packages](https://kpt.dev/). +**Package**: Package is a collection of related configuration files containing configuration of [KRM][krm] +resources. Specifically, configuration packages are [*kpt* packages](https://kpt.dev/). -***Repository***: Repositories store packages or [functions][]. For example [git][] or [OCI][oci]. Functions may be +**Repository**: Repositories store packages or [functions][]. For example [git][] or [OCI][oci]. Functions may be associated with repositories to enforce constraints or invariants on packages (guardrails). ([more details](#repositories)) -Packages are sequentially ***versioned***; multiple versions of the same package may exist in a repository. +Packages are sequentially versioned; multiple versions of the same package may exist in a repository. [more details](#package-versioning)) -A package may have a link (URL) to an ***upstream package*** (a specific version) from which it was cloned. +A package may have a link (URL) to an upstream package (a specific version) from which it was cloned. ([more details](#package-relationships)) Package may be in one of several lifecycle stages: -* ***Draft*** - package is being created or edited. The package contents can be modified but package is not ready to be +* **Draft** - package is being created or edited. The package contents can be modified but package is not ready to be used (i.e. deployed) -* ***Proposed*** - author of the package proposed that the package be published -* ***Published*** - the changes to the package have been approved and the package is ready to be used. Published +* **Proposed** - author of the package proposed that the package be published +* **Published** - the changes to the package have been approved and the package is ready to be used. Published packages can be deployed or cloned -***Function*** (specifically, [KRM functions][krm functions]) can be applied to packages to mutate or validate resources +**Function** (specifically, [KRM functions][krm functions]) can be applied to packages to mutate or validate resources within them. Functions can be applied to a package to create specific package mutation while editing a package draft, -functions can be added to package's Kptfile [pipeline][], or associated with a repository to be applied to all packages +functions can be added to package's *Kptfile* [pipeline][], or associated with a repository to be applied to all packages on changes. ([more details](#functions)) -A repository can be designated as ***deployment repository***. *Published* packages in a deployment repository are +A repository can be designated as deployment repository. Published packages in a deployment repository are considered deployment-ready. 
([more details](#deployment)) ## Core Components of Configuration as Data Implementation -The Core implementation of Configuration as Data, *CaD Core*, is a set of components and APIs which collectively enable: +The Core implementation of Configuration as Data, CaD Core, is a set of components and APIs which collectively enable: -* Registration of repositories (Git, OCI) containing kpt packages or functions, and discovery of packages and functions +* Registration of repositories (Git, OCI) containing *kpt* packages or functions, and discovery of packages and functions * Porcelain package lifecycle, including authoring, versioning, deletion, creation and mutations of a package draft, process of proposing the package draft, and publishing of the approved package. * Package lifecycle operations such as: @@ -73,7 +73,7 @@ At the high level, the Core CaD functionality comprises: * package repository management * package discovery, authoring and lifecycle management -* [kpt][] - a Git-native, schema-aware, extensible client-side tool for managing KRM packages +* [*kpt*][] - a Git-native, schema-aware, extensible client-side tool for managing KRM packages * a GitOps-based deployment mechanism (for example [Config Sync][]), which distributes and deploys configuration, and provides observability of the status of deployed resources * a task-specific UI supporting repository management, package discovery, authoring, and lifecycle @@ -86,8 +86,8 @@ Concepts briefly introduced above are elaborated in more detail in this section. ### Repositories -[kpt][] and [Config Sync][] currently integrate with [git][] repositories, and there is an existing design to add OCI -support to kpt. Initially, the Package Orchestration service will prioritize integration with [git][], and support for +[*kpt*][] and [Config Sync][] currently integrate with [git][] repositories, and there is an existing design to add OCI +support to *kpt*. Initially, the Package Orchestration service will prioritize integration with [git][], and support for additional repository types may be added in the future as required. Requirements applicable to all repositories include: ability to store packages, their versions, and sufficient metadata @@ -126,8 +126,8 @@ We plan to use a simple integer sequence to represent package versions. ### Package Relationships -Kpt packages support the concept of ***upstream***. When a package is cloned from another, the new package -(called ***downstream*** package) maintains an upstream link to the specific version of the package from which it was +*kpt* packages support the concept of upstream. When a package is cloned from another, the new package +(called downstream package) maintains an upstream link to the specific version of the package from which it was cloned. If a new version of the upstream package becomes available, the upstream link can be used to [update](https://kpt.dev/book/03-packages/05-updating-a-package) the downstream package. @@ -140,11 +140,11 @@ others can be used as well. Here we highlight some key attributes of the deployment mechanism and its integration within the CaD Core: -* _Published_ packages in a deployment repository are considered ready to be deployed +* Published packages in a deployment repository are considered ready to be deployed * Config Sync supports deploying individual packages and whole repositories. For Git specifically that translates to a requirement to be able to specify repository, branch/tag/ref, and directory when instructing Config Sync to deploy a package. 
-* _Draft_ packages need to be identified in such a way that Config Sync can easily avoid deploying them. +* Draft packages need to be identified in such a way that Config Sync can easily avoid deploying them. * Config Sync needs to be able to pin to specific versions of deployable packages in order to orchestrate rollouts and rollbacks. This means it must be possible to GET a specific version of a package. * Config Sync needs to be able to discover when new versions are available for deployment. @@ -164,16 +164,16 @@ packages. Function can be: * applied imperatively to a package draft to perform specific mutation to the package's resources or meta-resources - (`Kptfile` etc.) -* registered in the package's `Kptfile` function pipeline as a *mutator* or *validator* in order to be automatically run + (*Kptfile* etc.) +* registered in the package's *Kptfile* function pipeline as a mutator or validator in order to be automatically run as part of package rendering -* registered at the repository level as *mutator* or *validator*. Such function then applies to all packages in the +* registered at the repository level as mutator or validator. Such function then applies to all packages in the repository and is evaluated whenever a change to a package in the repository occurs. ## Package Orchestration - Porch Having established the context of the CaD Core components and the overall architecture, the remainder of the document -will focus on **Porch** - Package Orchestration service. +will focus on Porch - Package Orchestration service. To reiterate the role of Package Orchestration service among the CaD Core components, it is: @@ -181,7 +181,7 @@ To reiterate the role of Package Orchestration service among the CaD Core compon * [Package Discovery](#package-discovery) * [Package Authoring](#package-authoring) and Lifecycle -In the following section we'll expand more on each of these areas. The term _client_ used in these sections can be +In the following section we'll expand more on each of these areas. The term *client* used in these sections can be either a person interacting with the UI such as a web application or a command-line tool, or an automated agent or process. 
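As a concrete illustration of the repository management area listed above, registering a Git repository with Porch amounts to applying a Repository resource along the following lines; this is only a sketch, and the repository URL, names and namespace are illustrative rather than taken from this document:

```yaml
apiVersion: config.porch.kpt.dev/v1alpha1
kind: Repository
metadata:
  name: deployments            # illustrative name
  namespace: default
spec:
  description: example deployment repository
  content: Package
  deployment: true             # Published packages here are considered deployment-ready
  git:
    repo: https://github.com/example/deployments.git   # illustrative URL
    branch: main
    directory: /
```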
@@ -209,7 +209,7 @@ The package discovery functionality of Package Orchestration service enables the * retrieve resources and metadata of an individual package, including latest version or any specific version or draft of a package, for the purpose of introspection of a single package or for comparison of contents of multiple versions of a package, or related packages -* enumerate _upstream_ packages available for creating (cloning) a _downstream_ package +* enumerate upstream packages available for creating (cloning) a downstream package * identify downstream packages that need to be upgraded after a change is made to an upstream package * identify all deployment-ready packages in a deployment repository that are ready to be synced to a deployment target by Config Sync @@ -223,28 +223,28 @@ The package discovery functionality of Package Orchestration service enables the The package authoring and lifecycle functionality of the package Orchestration service enables the client to: -* Create a package _draft_ via one of the following means: +* Create a package draft via one of the following means: - * an empty draft 'from scratch' (equivalent to [kpt pkg init](https://kpt.dev/reference/cli/pkg/init/)) - * clone of an upstream package (equivalent to [kpt pkg get](https://kpt.dev/reference/cli/pkg/get/)) from either a + * an empty draft 'from scratch' (equivalent to [`kpt pkg init`](https://kpt.dev/reference/cli/pkg/init/)) + * clone of an upstream package (equivalent to [`kpt pkg get`](https://kpt.dev/reference/cli/pkg/get/)) from either a registered upstream repository or from another accessible, unregistered, repository - * edit an existing package (similar to the CLI command(s) [kpt fn source](https://kpt.dev/reference/cli/fn/source/) or - [kpt pkg pull](https://github.com/GoogleContainerTools/kpt/issues/2557)) + * edit an existing package (similar to the CLI command(s) [`kpt fn source`](https://kpt.dev/reference/cli/fn/source/) or + [`kpt pkg pull`](https://github.com/GoogleContainerTools/kpt/issues/2557)) * roll back / restore a package to any of its previous versions - ([kpt pkg pull](https://github.com/GoogleContainerTools/kpt/issues/2557) of a previous version) + ([`kpt pkg pull`](https://github.com/GoogleContainerTools/kpt/issues/2557) of a previous version) -* Apply changes to a package _draft_. In general, mutations include adding/modifying/deleting any part of the package's +* Apply changes to a package draft. In general, mutations include adding/modifying/deleting any part of the package's contents. Some specific examples include: - * add/change/delete package metadata (i.e. some properties in the `Kptfile`) + * add/change/delete package metadata (i.e. 
some properties in the *Kptfile*) * add/change/delete resources in the package * add function mutators/validators to the package's [pipeline][] * invoke a function imperatively on the package draft to perform a desired mutation * add/change/delete sub-package * retrieve the contents of the package for arbitrary client-side mutations (equivalent to - [kpt fn source](https://kpt.dev/reference/cli/fn/source/)) + [`kpt fn source`](https://kpt.dev/reference/cli/fn/source/)) * update/replace the package contents with new contents, for example results of a client-side mutations by a UI - (equivalent to [kpt fn sink](https://kpt.dev/reference/cli/fn/sink/)) + (equivalent to [`kpt fn sink`](https://kpt.dev/reference/cli/fn/sink/)) * Rebase a package onto another upstream base package ([detail](https://github.com/GoogleContainerTools/kpt/issues/2548)) or onto a newer version of the same package (to @@ -254,9 +254,9 @@ The package authoring and lifecycle functionality of the package Orchestration s * merge conflicts, invalid package changes, guardrail violations * compliance of the drafted package with repository-wide invariants and guardrails -* Propose for a _draft_ package be _published_. -* Apply an arbitrary decision criteria, and by a manual or automated action, approve (or reject) proposal of a _draft_ - package to be _published_. +* Propose for a draft package be published. +* Apply an arbitrary decision criteria, and by a manual or automated action, approve (or reject) proposal of a draft + package to be published. * Perform bulk operations such as: * Assisted/automated update (upgrade, rollback) of groups of packages matching specific criteria (i.e. base package @@ -292,7 +292,7 @@ perform in order to satisfy requirements of the basic roles. For example, only p ### Porch Architecture -The Package Orchestration service, **Porch** is designed to be hosted in a [Kubernetes](https://kubernetes.io/) cluster. +The Package Orchestration service, Porch is designed to be hosted in a [Kubernetes](https://kubernetes.io/) cluster. The overall architecture is shown below, and includes also existing components (k8s apiserver and Config Sync). @@ -319,34 +319,34 @@ extension API server are: Resources implemented by Porch include: -* `PackageRevision` - represents the _metadata_ of the configuration package revision stored in a _package_ repository. -* `PackageRevisionResources` - represents the _contents_ of the package revision -* `Function` - represents a [KRM function][krm functions] discovered in a registered _function_ repository. +* **PackageRevision** - represents the metadata of the configuration package revision stored in a package repository. +* **PackageRevisionResources** - represents the contents of the package revision +* **Function** - represents a [KRM function][krm functions] discovered in a registered function repository. -Note that each configuration package revision is represented by a _pair_ of resources which each present a different +Note that each configuration package revision is represented by a pair of resources which each present a different view (or [representation][] of the same underlying package revision. -Repository registration is supported by a `Repository` [custom resource][crds]. +Repository registration is supported by a Repository [custom resource][crds]. 
-**Porch server** itself comprises several key components, including: +Porch server itself comprises several key components, including: -* The *Porch aggregated apiserver* which implements the integration into the main Kubernetes apiserver, and directly - serves API requests for the `PackageRevision`, `PackageRevisionResources` and `Function` resources. -* Package orchestration *engine* which implements the package lifecycle operations, and package mutation workflows -* *CaD Library* which implements specific package manipulation algorithms such as package rendering (evaluation of - package's function *pipeline*), initialization of a new package, etc. The CaD Library is shared with `kpt` +* The **Porch aggregated apiserver** which implements the integration into the main Kubernetes apiserver, and directly + serves API requests for the PackageRevision, PackageRevisionResources and Function resources. +* Package orchestration engine which implements the package lifecycle operations, and package mutation workflows +* **CaD Library** which implements specific package manipulation algorithms such as package rendering (evaluation of + package's function pipeline), initialization of a new package, etc. The CaD Library is shared with *kpt* where it likewise provides the core package manipulation algorithms. -* *Package cache* which enables both local caching, as well as abstract manipulation of packages and their contents +* **Package cache** which enables both local caching, as well as abstract manipulation of packages and their contents irrespectively of the underlying storage mechanism (Git, or OCI) -* *Repository adapters* for Git and OCI which implement the specific logic of interacting with those types of package +* **Repository adapters** for Git and OCI which implement the specific logic of interacting with those types of package repositories. -* *Function runtime* which implements support for evaluating [kpt functions][functions] and multi-tier cache of +* **Function runtime** which implements support for evaluating [*kpt* functions][functions] and multi-tier cache of functions to support low latency function evaluation #### Function Runner -**Function runner** is a separate service responsible for evaluating [kpt functions][functions]. Function runner exposes -a [gRPC](https://grpc.io/) endpoint which enables evaluating a kpt function on the provided configuration package. +Function runner is a separate service responsible for evaluating [**kpt** functions][functions]. Function runner exposes +a [gRPC](https://grpc.io/) endpoint which enables evaluating a *kpt* function on the provided configuration package. The gRPC technology was chosen for the function runner service because the [requirements](#grpc-api) that informed choice of KRM API for the Package Orchestration service do not apply. 
The function runner is an internal microservice, @@ -356,29 +356,29 @@ The function runner also maintains cache of functions to support low latency fun #### CaD Library -The [kpt](https://kpt.dev/) CLI already implements foundational package manipulation algorithms in order to provide the +The [*kpt*](https://*kpt*.dev/) CLI already implements foundational package manipulation algorithms in order to provide the command line user experience, including: -* [kpt pkg init](https://kpt.dev/reference/cli/pkg/init/) - create an empty, valid, KRM package -* [kpt pkg get](https://kpt.dev/reference/cli/pkg/get/) - create a downstream package by cloning an upstream package; +* [`kpt pkg init`](https://kpt.dev/reference/cli/pkg/init/) - create an empty, valid, KRM package +* [`kpt pkg get`](https://kpt.dev/reference/cli/pkg/get/) - create a downstream package by cloning an upstream package; set up the upstream reference of the downstream package -* [kpt pkg update](https://kpt.dev/reference/cli/pkg/update/) - update the downstream package with changes from new +* [`kpt pkg update`](https://kpt.dev/reference/cli/pkg/update/) - update the downstream package with changes from new version of upstream, 3-way merge -* [kpt fn eval](https://kpt.dev/reference/cli/fn/eval/) - evaluate a kpt function on a package -* [kpt fn render](https://kpt.dev/reference/cli/fn/render/) - render the package by executing the function pipeline of +* [`kpt fn eval`](https://kpt.dev/reference/cli/fn/eval/) - evaluate a *kpt* function on a package +* [`kpt fn render`](https://kpt.dev/reference/cli/fn/render/) - render the package by executing the function pipeline of the package and its nested packages -* [kpt fn source](https://kpt.dev/reference/cli/fn/source/) and [kpt fn sink](https://kpt.dev/reference/cli/fn/sink/) - - read package from local disk as a `ResourceList` and write package represented as `ResourcesList` into local disk +* [`kpt fn source`](https://kpt.dev/reference/cli/fn/source/) and [`kpt fn sink`](https://kpt.dev/reference/cli/fn/sink/) - + read package from local disk as a ResourceList and write package represented as ResourcesList into local disk The same set of primitives form the foundational building blocks of the package orchestration service. Further, the package orchestration service combines these primitives into higher-level operations (for example, package orchestrator renders packages automatically on changes, future versions will support bulk operations such as upgrade of multiple packages, etc). -The implementation of the package manipulation primitives in kpt was refactored (with initial refactoring completed, and +The implementation of the package manipulation primitives in *kpt* was refactored (with initial refactoring completed, and more to be performed as needed) in order to: -* create a reusable CaD library, usable by both kpt CLI and Package Orchestration service +* create a reusable CaD library, usable by both *kpt* CLI and Package Orchestration service * create abstractions for dependencies which differ between CLI and Porch, most notable are dependency on Docker for function evaluation, and dependency on the local file system for package rendering. 
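For reference, the ResourceList wire format mentioned in the bullets above is itself a KRM object. A minimal serialized package of the kind consumed and produced by `kpt fn source` and `kpt fn sink` looks roughly like the following sketch, where the single ConfigMap item stands in for the package's actual resources:

```yaml
apiVersion: config.kubernetes.io/v1
kind: ResourceList
items:
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      name: example-config     # illustrative package resource
      namespace: example
    data:
      key: value
```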
@@ -389,14 +389,14 @@ Over time, the CaD Library will provide the package manipulation primitives: * perform 3-way merge (update) * render - core package rendering algorithm using a pluggable function evaluator to support: - * function evaluation via Docker (used by kpt CLI) + * function evaluation via Docker (used by *kpt* CLI) * function evaluation via an RPC to a service or appropriate function sandbox * high-performance evaluation of trusted, built-in, functions without sandbox * heal configuration (restore comments after lossy transformation) -and both kpt CLI and Porch will consume the library. This approach will allow leveraging the investment already made -into the high quality package manipulation primitives, and enable functional parity between KPT CLI and Package +and both *kpt* CLI and Porch will consume the library. This approach will allow leveraging the investment already made +into the high quality package manipulation primitives, and enable functional parity between *KPT* CLI and Package Orchestration service. ## User Guide diff --git a/content/en/docs/porch/package-variant.md b/content/en/docs/porch/package-variant.md index 7ef66109..6afbc318 100644 --- a/content/en/docs/porch/package-variant.md +++ b/content/en/docs/porch/package-variant.md @@ -36,20 +36,20 @@ variants. These are designed to address several different dimensions of scalabil ## Core Concepts -For this solution, "workloads" are represented by packages. "Package" is a more general concept, being an arbitrary +For this solution, workloads are represented by packages. Package is a more general concept, being an arbitrary bundle of resources, and therefore is sufficient to solve the originally stated problem. -The basic idea here is to introduce a PackageVariant resource that manages the derivation of a variant of a package from +The basic idea here is to introduce a *PackageVariant* resource that manages the derivation of a variant of a package from the original source package, and to manage the evolution of that variant over time. This effectively automates the -human-centered process for variant creation one might use with `kpt`: +human-centered process for variant creation one might use with *kpt*: 1. Clone an upstream package locally 1. Make changes to the local package, setting values in resources and executing KRM functions 1. Push the package to a new repository and tag it as a new version -Similarly, PackageVariant can manage the process of updating a package when a new version of the upstream package is +Similarly, *PackageVariant* can manage the process of updating a package when a new version of the upstream package is published. In the human-centered workflow, a user would use `kpt pkg update` to pull in changes to their derivative -package. When using a PackageVariant resource, the change would be made to the upstream specification in the resource, +package. When using a *PackageVariant* resource, the change would be made to the upstream specification in the resource, and the controller would propose a new Draft package reflecting the outcome of `kpt pkg update`. By automating this process, we open up the possibility of performing systematic changes that tie back to our different @@ -65,22 +65,22 @@ across packages but configured as needed for the specific package, are used to i package. This decouples authoring of the packages, creation of the input model, and deploy-time use of that input model within the packages, allowing those activities to be performed by different teams or organizations. 
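A minimal *PackageVariant* resource automating the clone-and-update workflow described above might look like the following sketch; the repository and package names are illustrative only:

```yaml
apiVersion: config.porch.kpt.dev/v1alpha1
kind: PackageVariant
metadata:
  name: example-variant        # illustrative name
spec:
  upstream:
    repo: blueprints           # repository holding the source package
    package: basens
    revision: v1
  downstream:
    repo: deployments          # repository where the variant is created
    package: my-basens
```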
-We refer to the mechanism described above as *configuration injection*. It enables dynamic, context-aware creation of +We refer to the mechanism described above as configuration injection. It enables dynamic, context-aware creation of variants. Another way to think about it is as a continuous reconciliation, much like other Kubernetes controllers. In -this case, the inputs are a parent package `P` and a context `C` (which may be a collection of many independent -resources), with the output being the derived package `D`. When a new version of `C` is created by updates to in-cluster -resources, we get a new revision of `D`, customized based upon the updated context. Similarly, the user (or an -automation) can monitor for new versions of `P`; when one arrives, the PackageVariant can be updated to point to that -new version, resulting in a newly proposed Draft of `D`, updated to reflect the upstream changes. This will be explained +this case, the inputs are a parent package *P* and a context *C* (which may be a collection of many independent +resources), with the output being the derived package *D*. When a new version of *C* is created by updates to in-cluster +resources, we get a new revision of *D*, customized based upon the updated context. Similarly, the user (or an +automation) can monitor for new versions of *P*; when one arrives, the *PackageVariant* can be updated to point to that +new version, resulting in a newly proposed Draft of *D*, updated to reflect the upstream changes. This will be explained in more detail below. -This proposal also introduces a way to "fan-out", or create multiple PackageVariant resources declaratively based upon a -list or selector, with the PackageVariantSet resource. This is combined with the injection mechanism to enable +This proposal also introduces a way to "fan-out", or create multiple *PackageVariant* resources declaratively based upon a +list or selector, with the *PackageVariantSet* resource. This is combined with the injection mechanism to enable generation of large sets of variants that are specialized to a particular target repository, cluster, or other resource. ## Basic Package Cloning -The PackageVariant resource controls the creation and lifecycle of a variant of a package. That is, it defines the +The *PackageVariant* resource controls the creation and lifecycle of a variant of a package. That is, it defines the original (upstream) package, the new (downstream) package, and the changes (mutations) that need to be made to transform the upstream into the downstream. It also allows the user to specify policies around adoption, deletion, and update of package revisions that are under the control of the package variant controller. @@ -101,30 +101,30 @@ variant control will do, depending upon the specified deletion policy. ### PackageRevision Metadata -The package variant controller utilizes Porch APIs. This means that it is not just doing a `clone` operation, but in -fact creating a Porch PackageRevision resource. In particular, that resource can contain Kubernetes metadata that is +The package variant controller utilizes Porch APIs. This means that it is not just doing a clone operation, but in +fact creating a Porch *PackageRevision* resource. In particular, that resource can contain Kubernetes metadata that is not part of the package as stored in the repository. 
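As a rough sketch (the values are hypothetical and the spec is abbreviated), such a PackageRevision might look like this in the Porch API, with labels, annotations, and an owner reference that live only in the API object and not in the underlying repository:

```yaml
# Sketch only: metadata carried by the PackageRevision API object, not stored in the package repository.
apiVersion: porch.kpt.dev/v1alpha1
kind: PackageRevision
metadata:
  name: deployments-workload-draft          # hypothetical
  namespace: default
  labels:
    example.com/site: edge01                # hypothetical label set at creation time
  annotations:
    example.com/ticket: "INFRA-1234"        # hypothetical annotation
  ownerReferences:
  - apiVersion: config.porch.kpt.dev/v1alpha1
    kind: PackageVariant
    name: edge01-workload                   # the PackageVariant that created this revision
spec:
  packageName: workload
  repository: deployments
  lifecycle: Draft
```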
-Some of that metadata is necessary for the management of the PackageRevision by the package variant controller - for -example, the owner reference indicating which PackageVariant created the PackageRevision. These are not under the user's -control. However, the PackageVariant resource does make the annotations and labels of the PackageRevision available as -values that may be controlled during the creation of the PackageRevision. This can assist in additional automation +Some of that metadata is necessary for the management of the *PackageRevision* by the package variant controller - for +example, the owner reference indicating which *PackageVariant* created the *PackageRevision*. These are not under the user's +control. However, the *PackageVariant* resource does make the annotations and labels of the *PackageRevision* available as +values that may be controlled during the creation of the *PackageRevision*. This can assist in additional automation workflows. ## Introducing Variance -Just cloning is not that interesting, so the PackageVariant resource also allows you to control various ways of mutating +Just cloning is not that interesting, so the *PackageVariant* resource also allows you to control various ways of mutating the original package to create the variant. ### Package Context[^porch17] -Every kpt package that is fetched with `--for-deployment` will contain a ConfigMap called `kptfile.kpt.dev`. +Every *kpt* package that is fetched with `--for-deployment` will contain a ConfigMap called *kptfile.kpt.dev*. Analogously, when Porch creates a package in a deployment repository, it will create this ConfigMap, if it does not -already exist. Kpt (or Porch) will automatically add a key `name` to the ConfigMap data, with the value of the package -name. This ConfigMap can then be used as input to functions in the Kpt function pipeline. +already exist. *Kpt* (or Porch) will automatically add a key name to the ConfigMap data, with the value of the package +name. This ConfigMap can then be used as input to functions in the *kpt* function pipeline. This process holds true for package revisions created via the package variant controller as well. Additionally, the -author of the PackageVariant resource can specify additional key-value pairs to insert into the package context, as +author of the *PackageVariant* resource can specify additional key-value pairs to insert into the package context, as shown in *Figure 2*. | ![Figure 2: Package Context Mutation](/static/images/porch/packagevariant-context.png) | @@ -137,8 +137,8 @@ than simple key/value pairs. ### Kptfile Function Pipeline Editing[^porch18] -In the manual workflow, one of the ways we edit packages is by running KRM functions imperatively. PackageVariant offers -a similar capability, by allowing the user to add functions to the beginning of the downstream package `Kptfile` +In the manual workflow, one of the ways we edit packages is by running KRM functions imperatively. *PackageVariant* offers +a similar capability, by allowing the user to add functions to the beginning of the downstream package *Kptfile* mutators pipeline. These functions will then execute before the functions present in the upstream pipeline. It is not exactly the same as running functions imperatively, because they will also be run in every subsequent execution of the downstream package function pipeline. But it can achieve the same goals. @@ -147,9 +147,9 @@ For example, consider an upstream package that includes a Namespace resource. 
In many organizations, the deployer of the workload may not have the permissions to provision cluster-scoped resources like namespaces. This means that they
would not be able to use this upstream package without removing the Namespace resource (assuming that they only have
access to a pipeline that deploys with constrained permissions). By adding a function that removes Namespace resources, and a call
-to `set-namespace`, they can take advantage of the upstream package.
+to set-namespace, they can take advantage of the upstream package.
 
-Similarly, the Kptfile pipeline editing feature provides an easy mechanism for the deployer to create and set the
+Similarly, the *Kptfile* pipeline editing feature provides an easy mechanism for the deployer to create and set the
 namespace if their downstream package application pipeline allows it, as seen in *Figure 3*.[^setns]
 
| ![Figure 3: KRM Function Pipeline Editing](/static/images/porch/packagevariant-function.png) |
| :---: |
@@ -159,7 +159,7 @@ namespace if their downstream package application pipeline allows it, as seen in
 
### Configuration Injection[^porch18]
 
Adding values to the package context or functions to the pipeline works for configuration that is under the control of
-the creator of the PackageVariant resource. However, in more advanced use cases, we may need to specialize the package
+the creator of the *PackageVariant* resource. However, in more advanced use cases, we may need to specialize the package
based upon other contextual information. This particularly comes into play when the user deploying the workload does
not have direct control over the context in which it is being deployed. For example, one part of the organization may
manage the infrastructure - that is, the cluster in which we are deploying the workload - and another part the actual
workload. We would like to be able to pull in inputs specified by the infrastructure team automatically, based on the
cluster to which we deploy the workload, or perhaps the region in which that cluster is deployed.
 
To facilitate this, the package variant controller can inject resources directly into the package. This means it
will use information specific to this instance of the package to lookup a resource in the Porch cluster, and copy that
information into the package. Of course, the package has to be ready to receive this information. So, there is a
protocol for facilitating this dance:
 
-- Packages may contain resources annotated with `kpt.dev/config-injection`
-- Often, these will also be `config.kubernetes.io/local-config` resources, as they are likely just used by local
+- Packages may contain resources annotated with *kpt.dev/config-injection*
+- Often, these will also be *config.kubernetes.io/local-config* resources, as they are likely just used by local
  functions as input. But this is not mandatory.
- The package variant controller will look for any resource in the Kubernetes cluster matching the Group, Version, and
-  Kind of the package resource, and satisfying the *injection selector*.
-- The package variant controller will copy the `spec` field from the matching in-cluster resource to the in-package
-  resource, or the `data` field in the case of a ConfigMap.
+  Kind of the package resource, and satisfying the injection selector.
+- The package variant controller will copy the spec field from the matching in-cluster resource to the in-package
+  resource, or the data field in the case of a ConfigMap.
 
| ![Figure 4: Configuration Injection](/static/images/porch/packagevariant-config-injection.png) |
| :---: |
@@ -185,7 +185,7 @@ protocol for facilitating this dance:
 
{{% alert title="Note" color="primary" %}}
 
-Because we are injecting data *from the Kubernetes cluster*, we can also monitor that data for changes. For
+Because we are injecting data from the Kubernetes cluster, we can also monitor that data for changes.
For each resource we inject, the package variant controller will establish a Kubernetes "watch" on the resource (or perhaps on the collection of such resources). A change to that resource will result in a new Draft package with the updated configuration injected. @@ -201,27 +201,27 @@ API definition. The package variant controller allows you to specific a specific upstream package revision to clone, or you can specify a floating tag[^notimplemented]. -If you specify a specific upstream revision, then the downstream will not be changed unless the PackageVariant resource -itself is modified to point to a new revision. That is, the user must edit the PackageVariant, and change the upstream +If you specify a specific upstream revision, then the downstream will not be changed unless the *PackageVariant* resource +itself is modified to point to a new revision. That is, the user must edit the *PackageVariant*, and change the upstream package reference. When that is done, the package variant controller will update any existing Draft package under its ownership by doing the equivalent of a `kpt pkg update` to update the downstream to be based upon the new upstream revision. If a Draft does not exist, then the package variant controller will create a new Draft based on the current published downstream, and apply the `kpt pkg update`. This updated Draft must then be proposed and approved like any other package change. -If a floating tag is used, then explicit modification of the PackageVariant is not needed. Rather, when the floating tag +If a floating tag is used, then explicit modification of the *PackageVariant* is not needed. Rather, when the floating tag is moved to a new tagged revision of the upstream package, the package revision controller will notice and automatically propose and update to that revision. For example, the upstream package author may designate three floating tags: stable, -beta, and alpha. The upstream package author can move these tags to specific revisions, and any PackageVariant resource +beta, and alpha. The upstream package author can move these tags to specific revisions, and any *PackageVariant* resource tracking them will propose updates to their downstream packages. ### Adoption and Deletion Policies -When a PackageVariant resource is created, it will have a particular repository and package name as the downstream. The +When a *PackageVariant* resource is created, it will have a particular repository and package name as the downstream. The adoption policy controls whether the package variant controller takes over an existing package with that name, in that repository. -Analogously, when a PackageVariant resource is deleted, a decision must be made about whether or not to delete the +Analogously, when a *PackageVariant* resource is deleted, a decision must be made about whether or not to delete the downstream package. This is controlled by the deletion policy. ## Fan Out of Variant Generation[^pvsimpl] @@ -231,17 +231,17 @@ new versions of a package as the upstream changes, or as injected resources are automating common, systematic changes made when bringing an external package into an organization, or an organizational package into a team repository. -That is useful, but not extremely compelling by itself. More interesting is when we use PackageVariant as a primitive -for automations that act on other dimensions of scale. That means writing controllers that emit PackageVariant -resources. 
For example, we can create a controller that instantiates a PackageVariant for each developer in our -organization, or we can create a controller to manage PackageVariants across environments. The ability to not only clone +That is useful, but not extremely compelling by itself. More interesting is when we use *PackageVariant* as a primitive +for automations that act on other dimensions of scale. That means writing controllers that emit *PackageVariant* +resources. For example, we can create a controller that instantiates a *PackageVariant* for each developer in our +organization, or we can create a controller to manage *PackageVariant*s across environments. The ability to not only clone a package, but make systematic changes to that package enables flexible automation. Workload controllers in Kubernetes are a useful analogy. In Kubernetes, we have different workload controllers such as Deployment, StatefulSet, and DaemonSet. Ultimately, all of these result in Pods; however, the decisions about what Pods to create, how to schedule them across Nodes, how to configure those Pods, and how to manage those Pods as changes happen are very different with each workload controller. Similarly, we can build different controllers to handle -different ways in which we want to generate PackageRevisions. The PackageVariant resource provides a convenient +different ways in which we want to generate *PackageRevisions*. The *PackageVariant* resource provides a convenient primitive for all of those controllers, allowing a them to leverage a range of well-defined operations to mutate the packages as needed. @@ -250,12 +250,12 @@ include generating package variants to spin up development environments for each instantiating the same package, with slight configuration changes, across a fleet of clusters; or instantiating some package per customer. -The package variant set controller is designed to fill this common need. This controller consumes PackageVariantSet -resources, and outputs PackageVariant resources. The PackageVariantSet defines: +The package variant set controller is designed to fill this common need. This controller consumes *PackageVariantSet* +resources, and outputs *PackageVariant* resources. The *PackageVariantSet* defines: - the upstream package - targeting criteria -- a template for generating one PackageVariant per target +- a template for generating one *PackageVariant* per target Three types of targeting are supported: @@ -263,14 +263,14 @@ Three types of targeting are supported: - A label selector for Repository objects - An arbitrary object selector -Rules for generating a PackageVariant are associated with a list of targets using a template. That template can have -explicit values for various PackageVariant fields, or it can use +Rules for generating a *PackageVariant* are associated with a list of targets using a template. That template can have +explicit values for various *PackageVariant* fields, or it can use [Common Expression Language (CEL)](https://github.com/google/cel-go) expressions to specify the field values. -*Figure 5* shows an example of creating PackageVariant resources based upon the explicitly list of repositories. In this -example, for the `cluster-01` and `cluster-02` repositories, no template is defined the resulting PackageVariants; -it simply takes the defaults. However, for `cluster-03`, a template is defined to change the downstream package name to -`bar`. 
+*Figure 5* shows an example of creating *PackageVariant* resources based upon the explicit list of repositories. In this
+example, for the *cluster-01* and *cluster-02* repositories, no template is defined for the resulting *PackageVariants*;
+they simply take the defaults. However, for *cluster-03*, a template is defined to change the downstream package name to
+*bar*.

| ![Figure 5: PackageVariantSet with Repository List](/static/images/porch/packagevariantset-target-list.png) |
| :---: |
@@ -278,16 +278,16 @@ it simply takes the defaults. However, for `cluster-03`, a template is defined t

It is also possible to target the same package to a repository more than once, using different names. This is useful,
for example, if the package is used to provision namespaces and you would like to provision many namespaces in the same
-cluster. It is also useful if a repository is shared across multiple clusters. In *Figure 6*, two PackageVariant
-resources for creating the `foo` package in the repository `cluster-01` are generated, one for each listed package name.
-Since no `packageNames` field is listed for `cluster-02`, only one instance is created for that repository.
+cluster. It is also useful if a repository is shared across multiple clusters. In *Figure 6*, two *PackageVariant*
+resources for creating the *foo* package in the repository *cluster-01* are generated, one for each listed package name.
+Since no packageNames field is listed for *cluster-02*, only one instance is created for that repository.

| ![Figure 6: PackageVariantSet with Package List](/static/images/porch/packagevariantset-target-list-with-packages.png) |
| :---: |
| *Figure 6: PackageVariantSet with Package List* |

*Figure 7* shows an example that combines a repository label selector with configuration injection that various based
-upon the target. The template for the PackageVariant includes a CEL expression for the one of the injectors, so that
+upon the target. The template for the *PackageVariant* includes a CEL expression for one of the injectors, so that
the injection varies systematically based upon attributes of the target.

| ![Figure 7: PackageVariantSet with Repository Selector](/static/images/porch/packagevariantset-target-repo-selector.png) |
| :---: |
@@ -298,7 +298,7 @@ the injection varies systematically based upon attributes of the target.

### PackageVariant API

-The Go types below defines the `PackageVariantSpec`.
+The Go types below define the PackageVariantSpec.

```go
type PackageVariantSpec struct {
@@ -343,74 +343,74 @@ type InjectionSelector struct {
}
```

#### Basic Spec Fields

-The `Upstream` and `Downstream` fields specify the source package and destination repository and package name. The
-`Repo` fields refer to the names Porch Repository resources in the same namespace as the PackageVariant resource.
-The `Downstream` does not contain a revision, because the package variant controller will only create Draft packages.
-The `Revision` of the eventual PackageRevision resource will be determined by Porch at the time of approval.
+The Upstream and Downstream fields specify the source package and destination repository and package name. The
+Repo fields refer to the names of Porch Repository resources in the same namespace as the *PackageVariant* resource.
+The Downstream does not contain a revision, because the package variant controller will only create Draft packages.
+The Revision of the eventual *PackageRevision* resource will be determined by Porch at the time of approval.
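For example (the repository and package names below are hypothetical), the upstream and downstream of a *PackageVariant* are expressed like this, following the Go types above:

```yaml
# Minimal sketch of the Upstream/Downstream fields; repository and package names are hypothetical.
apiVersion: config.porch.kpt.dev/v1alpha1
kind: PackageVariant
metadata:
  name: edge01-workload
  namespace: default
spec:
  upstream:
    repo: blueprints          # name of a Porch Repository resource in this namespace
    package: workload
    revision: v1              # the upstream revision to clone
  downstream:
    repo: deployments         # destination Repository
    package: edge01-workload
    # no revision: the controller only creates Drafts, and Porch assigns
    # the revision when the package is approved
```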
-The `Labels` and `Annotations` fields list metadata to include on the created PackageRevision. These values are set -*only* at the time a Draft package is created. They are ignored for subsequent operations, even if the PackageVariant -itself has been modified. This means users are free to change these values on the PackageRevision; the package variant +The Labels and Annotations fields list metadata to include on the created *PackageRevision*. These values are set +*only* at the time a Draft package is created. They are ignored for subsequent operations, even if the *PackageVariant* +itself has been modified. This means users are free to change these values on the *PackageRevision*; the package variant controller will not touch them again. -`AdoptionPolicy` controls how the package variant controller behaves if it finds an existing PackageRevision Draft -matching the `Downstream`. If the `AdoptionPolicy` is `adoptExisting`, then the package variant controller will -take ownership of the Draft, associating it with this PackageVariant. This means that it will begin to reconcile the -Draft, just as if it had created it in the first place. An `AdoptionPolicy` of `adoptNone` (the default) will simply +AdoptionPolicy controls how the package variant controller behaves if it finds an existing *PackageRevision* Draft +matching the Downstream. If the AdoptionPolicy is adoptExisting, then the package variant controller will +take ownership of the Draft, associating it with this *PackageVariant*. This means that it will begin to reconcile the +Draft, just as if it had created it in the first place. An AdoptionPolicy of adoptNone (the default) will simply ignore any matching Drafts that were not created by the controller. -`DeletionPolicy` controls how the package variant controller behaves with respect to PackageRevisions that it has -created when the PackageVariant resource itself is deleted. A value of `delete` (the default) will delete the -PackageRevision (potentially removing it from a running cluster, if the downstream package has been deployed). A value -of `orphan` will remove the owner references and leave the PackageRevisions in place. +DeletionPolicy controls how the package variant controller behaves with respect to *PackageRevisions* that it has +created when the *PackageVariant* resource itself is deleted. A value of delete (the default) will delete the +*PackageRevision* (potentially removing it from a running cluster, if the downstream package has been deployed). A value +of orphan will remove the owner references and leave the *PackageRevisions* in place. #### Package Context Injection -PackageVariant resource authors may specify key-value pairs in the `spec.packageContext.data` field of the resource. -These key-value pairs will be automatically added to the `data` of the `kptfile.kpt.dev` ConfigMap, if it exists. +*PackageVariant* resource authors may specify key-value pairs in the spec.packageContext.data field of the resource. +These key-value pairs will be automatically added to the data of the *kptfile.kpt.dev* ConfigMap, if it exists. -Specifying the key `name` is invalid and must fail validation of the PackageVariant. This key is reserved for kpt or -Porch to set to the package name. Similarly, `package-path` is reserved and will result in an error. +Specifying the key name is invalid and must fail validation of the *PackageVariant*. This key is reserved for *kpt* or +Porch to set to the package name. Similarly, package-path is reserved and will result in an error. 
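As an illustration (the key names and values are hypothetical), a *PackageVariant* carrying package context, and the corresponding *kptfile.kpt.dev* ConfigMap in the downstream package, would look roughly like this:

```yaml
# Sketch only: extra package-context values supplied by a PackageVariant.
apiVersion: config.porch.kpt.dev/v1alpha1
kind: PackageVariant
metadata:
  name: edge01-workload
  namespace: default
spec:
  upstream:
    repo: blueprints
    package: workload
    revision: v1
  downstream:
    repo: deployments
    package: edge01-workload
  packageContext:
    data:
      site: edge01            # hypothetical key/value pairs
      environment: prod
---
# The kptfile.kpt.dev ConfigMap in the downstream package after reconciliation.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kptfile.kpt.dev
data:
  name: edge01-workload       # reserved key, set by kpt/Porch to the package name
  site: edge01                # added by the package variant controller
  environment: prod           # added by the package variant controller
```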
-The `spec.packageContext.removeKeys` field can also be used to specify a list of keys that the package variant -controller should remove from the `data` field of the `kptfile.kpt.dev` ConfigMap. +The spec.packageContext.removeKeys field can also be used to specify a list of keys that the package variant +controller should remove from the data field of the *kptfile.kpt.dev* ConfigMap. When creating or updating a package, the package variant controller will ensure that: -- The `kptfile.kpt.dev` ConfigMap exists, failing if not -- All of the key-value pairs in `spec.packageContext.data` exist in the `data` field of the ConfigMap. -- None of the keys listed in `spec.packageContext.removeKeys` exist in the ConfigMap. +- The *kptfile.kpt.dev* ConfigMap exists, failing if not +- All of the key-value pairs in spec.packageContext.data exist in the data field of the ConfigMap. +- None of the keys listed in spec.packageContext.removeKeys exist in the ConfigMap. {{% alert title="Note" color="primary" %}} -If a user adds a key via PackageVariant, then changes the PackageVariant to no longer add that key, it will -NOT be removed automatically, unless the user also lists the key in the `removeKeys` list. This avoids the need to track -which keys were added by PackageVariant. +If a user adds a key via *PackageVariant*, then changes the *PackageVariant* to no longer add that key, it will +NOT be removed automatically, unless the user also lists the key in the removeKeys list. This avoids the need to track +which keys were added by *PackageVariant*. {{% /alert %}} -Similarly, if a user manually adds a key in the downstream that is also listed in the `removeKeys` field, the package +Similarly, if a user manually adds a key in the downstream that is also listed in the removeKeys field, the package variant controller will remove that key the next time it needs to update the downstream package. There will be no attempt to coordinate "ownership" of these keys. If the controller is unable to modify the ConfigMap for some reason, this is considered an error and should prevent -generation of the Draft. This will result in the condition `Ready` being set to `False`. +generation of the Draft. This will result in the condition Ready being set to False. #### Kptfile Function Pipeline Editing -PackageVariant resource creators may specify a list of KRM functions to add to the beginning of the Kptfile's pipeline. -These functions are listed in the field `spec.pipeline`, which is a +*PackageVariant* resource creators may specify a list of KRM functions to add to the beginning of the *Kptfile's* pipeline. +These functions are listed in the field spec.pipeline, which is a [Pipeline](https://github.com/GoogleContainerTools/kpt/blob/cf1f326486214f6b4469d8432287a2fa705b48f5/pkg/api/kptfile/v1/types.go#L236), -just as in the Kptfile. The user can therefore prepend both `validators` and `mutators`. +just as in the *Kptfile*. The user can therefore prepend both validators and mutators. -Functions added in this way are always added to the *beginning* of the Kptfile pipeline. In order to enable management -of the list on subsequent reconciliations, functions added by the package variant controller will use the `Name` field +Functions added in this way are always added to the *beginning* of the *Kptfile* pipeline. 
In order to enable management +of the list on subsequent reconciliations, functions added by the package variant controller will use the Name field of the [Function](https://github.com/GoogleContainerTools/kpt/blob/cf1f326486214f6b4469d8432287a2fa705b48f5/pkg/api/kptfile/v1/types.go#L283). -In the Kptfile, each function will be named as the dot-delimited concatenation of `PackageVariant`, the name of the -PackageVariant resource, the function name as specified in the pipeline of the PackageVariant resource (if present), and +In the *Kptfile*, each function will be named as the dot-delimited concatenation of *PackageVariant*, the name of the +*PackageVariant* resource, the function name as specified in the pipeline of the *PackageVariant* resource (if present), and the positional location of the function in the array. -For example, if the PackageVariant resource contains: +For example, if the *PackageVariant* resource contains: ```yaml apiVersion: config.porch.kpt.dev/v1alpha1 @@ -431,7 +431,7 @@ spec: app: foo ``` -Then the resulting Kptfile will have these two entries prepended to its `mutators` list: +Then the resulting *Kptfile* will have these two entries prepended to its mutators list: ```yaml pipeline: @@ -447,30 +447,30 @@ Then the resulting Kptfile will have these two entries prepended to its `mutator ``` During subsequent reconciliations, this allows the controller to identify the functions within its control, remove them -all, and re-add them based on its updated content. By including the PackageVariant name, we enable chains of -PackageVariants to add functions, so long as the user is careful about their choice of resource names and avoids +all, and re-add them based on its updated content. By including the *PackageVariant* name, we enable chains of +*PackageVariants* to add functions, so long as the user is careful about their choice of resource names and avoids conflicts. If the controller is unable to modify the Pipeline for some reason, this is considered an error and should prevent -generation of the Draft. This will result in the condition `Ready` being set to `False`. +generation of the Draft. This will result in the condition Ready being set to False. #### Configuration Injection Details As described [above](#configuration-injection), configuration injection is a process whereby in-package resources are -matched to in-cluster resources, and the `spec` of the in-cluster resources is copied to the in-package resource. +matched to in-cluster resources, and the spec of the in-cluster resources is copied to the in-package resource. Configuration injection is controlled by a combination of in-package resources with annotations, and *injectors* -(also known as *injection selectors*) defined on the PackageVariant resource. Package authors control the injection +(also known as *injection selectors*) defined on the *PackageVariant* resource. Package authors control the injection points they allow in their packages, by flagging specific resources as *injection points* with an annotation. Creators -of the PackageVariant resource specify how to map in-cluster resources to those injection points using the injection -selectors. Injection selectors are defined in the `spec.injectors` field of the PackageVariant. This field is an ordered +of the *PackageVariant* resource specify how to map in-cluster resources to those injection points using the injection +selectors. Injection selectors are defined in the spec.injectors field of the *PackageVariant*. 
This field is an ordered array of structs containing a GVK (group, version, kind) tuple as separate fields, and name. Only the name is required. To identify a match, all fields present must match the in-cluster object, and all *GVK* fields present must match the in-package resource. In general the name will not match the in-package resource; this is discussed in more detail below. The annotations, along with the GVK of the annotated resource, allow a package to "advertise" the injections it can accept and understand. These injection points effectively form a configuration API for the package, and the injection -selectors provide a way for the PackageVariant author to specify the inputs for those APIs from the possible values in +selectors provide a way for the *PackageVariant* author to specify the inputs for those APIs from the possible values in the management cluster. If we define those APIs carefully, they can be used across many packages; since they are KRM resources, we can apply versioning and schema validation to them as well. This creates a more maintainable, automatable set of APIs for package customization than simple key/value pairs. @@ -479,78 +479,78 @@ As an example, we may define a GVK that contains service endpoints that many app package, we would then include an instance of that resource, say called "service-endpoints", and configure a function to propagate the values from that resource to others within our package. As those endpoints may vary by region, in our Porch cluster we can create an instance of this GVK for each region: "useast1-service-endpoints", -"useast2-service-endpoints", "uswest1-service-endpoints", etc. When we instantiate the PackageVariant for a cluster, we +"useast2-service-endpoints", "uswest1-service-endpoints", etc. When we instantiate the *PackageVariant* for a cluster, we want to inject the resource corresponding to the region in which the cluster exists. Thus, for each cluster we will -create a PackageVariant resource pointing to the upstream package, but with injection selector name values that are +create a *PackageVariant* resource pointing to the upstream package, but with injection selector name values that are specific to the region for that cluster. It is important to realize that the name of the in-package resource and the in-cluster resource need not match. In fact, -it would be an unusual coincidence if they did match. The names in the package are the same across PackageVariants -using that upstream, but we want to inject different resources for each one such PackageVariant. We also do not want to +it would be an unusual coincidence if they did match. The names in the package are the same across *PackageVariants* +using that upstream, but we want to inject different resources for each one such *PackageVariant*. We also do not want to change the name in the package, because it likely has meaning within the package and will be used by functions in the package. Also, different owners control the names of the in-package and in-cluster resources. The names in the package are in the control of the package author. The names in the cluster are in the control of whoever populates the cluster -(for example, some infrastructure team). The selector is the glue between them, and is in control of the PackageVariant +(for example, some infrastructure team). The selector is the glue between them, and is in control of the *PackageVariant* resource creator. 
The GVK on the other hand, has to be the same for the in-package resource and the in-cluster resource, because it tells us the API schema for the resource. Also, the namespace of the in-cluster object needs to be the same as the -PackageVariant resource, or we could leak resources from namespaces to which our PackageVariant user does not have +*PackageVariant* resource, or we could leak resources from namespaces to which our *PackageVariant* user does not have access. With that understanding, the injection process works as follows: 1. The controller will examine all in-package resources, looking for those with an annotation named - `kpt.dev/config-injection`, with one of the following values: `required` or `optional`. We will call these "injection + *kpt.dev/config-injection*, with one of the following values: required or optional. We will call these "injection points". It is the responsibility of the package author to define these injection points, and to specify which are required and which are optional. Optional injection points are a way of specifying default values. 1. For each injection point, a condition will be created *in the downstream PackageRevision*, with ConditionType set to - the dot-delimited concatenation of `config.injection`, with the in-package resource kind and name, and the value set - to `False`. Note that since the package author controls the name of the resource, kind and name are sufficient to + the dot-delimited concatenation of config.injection, with the in-package resource kind and name, and the value set + to False. Note that since the package author controls the name of the resource, kind and name are sufficient to disambiguate the injection point. We will call this ConditionType the injection point ConditionType". -1. For each required injection point, the injection point ConditionType will be added to the PackageRevision - `readinessGates` by the package variant controller. Optional injection points' ConditionTypes must not be added to - the `readinessGates` by the package variant controller, but humans or other actors may do so at a later date, and the +1. For each required injection point, the injection point ConditionType will be added to the *PackageRevision* + readinessGates by the package variant controller. Optional injection points' ConditionTypes must not be added to + the readinessGates by the package variant controller, but humans or other actors may do so at a later date, and the package variant controller should not remove them on subsequent reconciliations. Also, this relies upon - `readinessGates` gating publishing the package to a *deployment* repository, but not gating publishing to a blueprint + readinessGates gating publishing the package to a *deployment* repository, but not gating publishing to a blueprint repository. 1. The injection processing will proceed as follows. For each injection point: - - The controller will identify all in-cluster objects in the same namespace as the PackageVariant resource, with GVK + - The controller will identify all in-cluster objects in the same namespace as the *PackageVariant* resource, with GVK matching the injection point (the in-package resource). If the controller is unable to load this objects (e.g., - there are none and the CRD is not installed), the injection point ConditionType will be set to `False`, with a - message indicating that the error, and processing proceeds to the next injection point. 
Note that for `optional` + there are none and the CRD is not installed), the injection point ConditionType will be set to False, with a + message indicating that the error, and processing proceeds to the next injection point. Note that for optional injection this may be an acceptable outcome, so it does not interfere with overall generation of the Draft. - The controller will look through the list of injection selectors in order and checking if any of the in-cluster objects match the selector. If so, that in-cluster object is selected, and processing of the list of injection - selectors stops. Note that the namespace is set based upon the PackageVariant resource, the GVK is set based upon + selectors stops. Note that the namespace is set based upon the *PackageVariant* resource, the GVK is set based upon the in-package resource, and all selectors require name. Thus, at most one match is possible for any given selector. Also note that *all fields present in the selector* must match the in-cluster resource, and only the *GVK fields present in the selector* must match the in-package resource. - - If no in-cluster object is selected, the injection point ConditionType will be set to `False` with a message that + - If no in-cluster object is selected, the injection point ConditionType will be set to False with a message that no matching in-cluster resource was found, and processing proceeds to the next injection point. - If a matching in-cluster object is selected, then it is injected as follows: - - For ConfigMap resources, the `data` field from the in-cluster resource is copied to the `data` field of the + - For ConfigMap resources, the data field from the in-cluster resource is copied to the data field of the in-package resource (the injection point), overwriting it. - - For other resource types, the `spec` field from the in-cluster resource is copied to the `spec` field of the + - For other resource types, the spec field from the in-cluster resource is copied to the spec field of the in-package resource (the injection point), overwriting it. - - An annotation with name `kpt.dev/injected-resource-name` and value set to the name of the in-cluster resource is + - An annotation with name *kpt.dev/injected-resource-name* and value set to the name of the in-cluster resource is added (or overwritten) in the in-package resource. If the the overall injection cannot be completed for some reason, or if any of the below problems exist in the upstream package, it is considered an error and should prevent generation of the Draft: - There is a resource annotated as an injection point but having an invalid annotation value (i.e., other than - `required` or `optional`). + required or optional). - There are ambiguous condition types due to conflicting GVK and name values. These must be disambiguated in the upstream package, if so. -This will result in the condition `Ready` being set to `False`. +This will result in the condition Ready being set to False. {{% alert title="Note" color="primary" %}} -Whether or not all `required` injection points are fulfilled does not affect the *PackageVariant* conditions, +Whether or not all required injection points are fulfilled does not affect the *PackageVariant* conditions, only the *PackageRevision* conditions. {{% /alert %}} @@ -559,7 +559,7 @@ only the *PackageRevision* conditions. By allowing the use of GVK, not just name, in the selector, more precision in selection is enabled. This is a way to constrain the injections that will be done. 
That is, if the package has 10 different objects with -`config-injection` annotation, the PackageVariant could say it only wants to replace certain GVKs, allowing better +config-injection annotation, the *PackageVariant* could say it only wants to replace certain GVKs, allowing better control. Consider, for example, if the cluster contains these resources: @@ -569,8 +569,8 @@ Consider, for example, if the cluster contains these resources: - GVK2 foo - GVK2 bar -If we could only define injection selectors based upon name, it would be impossible to ever inject one GVK with `foo` -and another with `bar`. Instead, by using GVK, we can accomplish this with a list of selectors like: +If we could only define injection selectors based upon name, it would be impossible to ever inject one GVK with *foo* +and another with *bar*. Instead, by using GVK, we can accomplish this with a list of selectors like: - GVK1 foo - GVK2 bar @@ -583,50 +583,50 @@ different GVKs. During creation, the first thing the controller does is clone the upstream package to create the downstream package. -For update, first note that changes to the downstream PackageRevision can be triggered for several different reasons: +For update, first note that changes to the downstream *PackageRevision* can be triggered for several different reasons: -1. The PackageVariant resource is updated, which could change any of the options for introducing variance, or could also +1. The *PackageVariant* resource is updated, which could change any of the options for introducing variance, or could also change the upstream package revision referenced. 1. A new revision of the upstream package has been selected due to a floating tag change, or due to a force retagging of the upstream. 1. An injected in-cluster object is updated. -The downstream PackageRevision may have been updated by humans or other automation actors since creation, so we cannot -simply recreate the downstream PackageRevision from scratch when one of these changes happens. Instead, the controller +The downstream *PackageRevision* may have been updated by humans or other automation actors since creation, so we cannot +simply recreate the downstream *PackageRevision* from scratch when one of these changes happens. Instead, the controller must maintain the later edits by doing the equivalent of a `kpt pkg update`, in the case of changes to the upstream for -any reason. Any other changes require reapplication of the PackageVariant functionality. With that understanding, we can +any reason. Any other changes require reapplication of the *PackageVariant* functionality. With that understanding, we can see that the controller will perform mutations on the downstream package in this order, for both creation and update: 1. Create (via Clone) or Update (via `kpt pkg update` equivalent) - This is done by the Porch server, not by the package variant controller directly. - - This means that Porch will run the Kptfile pipeline after clone or update. + - This means that Porch will run the *Kptfile* pipeline after clone or update. 1. Package variant controller applies configured mutations - Package Context Injections - - Kptfile KRM Function Pipeline Additions/Changes + - *Kptfile* KRM Function Pipeline Additions/Changes - Config Injection -1. Package variant controller saves the PackageRevision and PackageRevisionResources. +1. Package variant controller saves the *PackageRevision* and *PackageRevisionResources*. 
- - Porch server executes the Kptfile pipeline + - Porch server executes the *Kptfile* pipeline -The package variant controller mutations edit resources (including the Kptfile), based on the contents of the -PackageVariant and the injected in-cluster resources, but cannot affect one another. The results of those mutations -throughout the rest of the package is materialized by the execution of the Kptfile pipeline during the save operation. +The package variant controller mutations edit resources (including the *Kptfile*), based on the contents of the +*PackageVariant* and the injected in-cluster resources, but cannot affect one another. The results of those mutations +throughout the rest of the package is materialized by the execution of the *Kptfile* pipeline during the save operation. #### PackageVariant Status PackageVariant sets the following status conditions: - - `Stalled` is set to True if there has been a failure that most likely requires user intervention. - - `Ready` is set to True if the last reconciliation successfully produced an up-to-date Draft. + - **Stalled** is set to True if there has been a failure that most likely requires user intervention. + - **Ready** is set to True if the last reconciliation successfully produced an up-to-date Draft. -The PackageVariant resource will also contain a `DownstreamTargets` field, containing a list of downstream `Draft` and -`Proposed` PackageRevisions owned by this PackageVariant resource, or the latest `Published` PackageRevision if there -are none in `Draft` or `Proposed` state. Typically, there is only a single Draft, but use of the `adopt` value for -`AdoptionPolicy` could result in multiple Drafts being owned by the same PackageVariant. +The *PackageVariant* resource will also contain a DownstreamTargets field, containing a list of downstream Draft and +Proposed *PackageRevisions* owned by this *PackageVariant* resource, or the latest Published *PackageRevision* if there +are none in Draft or Proposed state. Typically, there is only a single Draft, but use of the adopt value for +AdoptionPolicy could result in multiple Drafts being owned by the same *PackageVariant*. ### PackageVariantSet API[^pvsimpl] @@ -656,19 +656,19 @@ type Target struct { } ``` -At the highest level, a PackageVariantSet is just an upstream, and a list of targets. For each target, there is a set of -criteria for generating a list, and a set of rules (a template) for creating a PackageVariant from each list entry. +At the highest level, a *PackageVariantSet* is just an upstream, and a list of targets. For each target, there is a set of +criteria for generating a list, and a set of rules (a template) for creating a *PackageVariant* from each list entry. -Since `template` is optional, lets start with describing the different types of targets, and how the criteria in each is -used to generate a list that seeds the PackageVariant resources. +Since template is optional, lets start with describing the different types of targets, and how the criteria in each is +used to generate a list that seeds the *PackageVariant* resources. -The `Target` structure must include exactly one of three different ways of generating the list. The first is a simple +The Target structure must include exactly one of three different ways of generating the list. The first is a simple list of repositories and package names for each of those repositories[^repo-pkg-expr]. 
The package name list is needed for uses cases in which you want to repeatedly instantiate the same package in a single repository. For example, if a repository represents the contents of a cluster, you may want to instantiate a namespace package once for each namespace, with a name matching the namespace. -This example shows using the `repositories` field: +This example shows using the repositories field: ```yaml apiVersion: config.porch.kpt.dev/v1alpha2 @@ -696,7 +696,7 @@ spec: - foo-b ``` -In this case, PackageVariant resources are created for each of these pairs of downstream repositories and packages +In this case, *PackageVariant* resources are created for each of these pairs of downstream repositories and packages names: | Repository | Package Name | @@ -709,7 +709,7 @@ names: | cluster-04 | foo-a | | cluster-04 | foo-b | -All of those PackageVariants have the same upstream. +All of those *PackageVariants* have the same upstream. The second criteria targeting is via a label selector against Porch Repository objects, along with a list of package names. Those packages will be instantiated in each matching repository. Just like in the first example, not listing a @@ -723,7 +723,7 @@ four repositories defined in our Porch cluster: | cluster-03 | region=useast2, env=prod, org=hr | | cluster-04 | region=uswest1, env=prod, org=hr | -If we create a PackageVariantSet with the following `spec`: +If we create a *PackageVariantSet* with the following spec: ```yaml spec: @@ -745,7 +745,7 @@ spec: - foo-c ``` -then PackageVariant resources will be created with these repository and package names: +then *PackageVariant* resources will be created with these repository and package names: | Repository | Package Name | | ---------- | ------------ | @@ -760,7 +760,7 @@ then PackageVariant resources will be created with these repository and package | cluster-04 | foo-c | Finally, the third possibility allows the use of *arbitrary* resources in the Porch cluster as targeting criteria. The -`objectSelector` looks like this: +objectSelector looks like this: ```yaml spec: @@ -778,16 +778,16 @@ spec: ``` It works exactly like the repository selector - in fact the repository selector is equivalent to the object selector -with the `apiVersion` and `kind` values set to point to Porch Repository resources. That is, the repository name comes +with the apiVersion and kind values set to point to Porch Repository resources. That is, the repository name comes from the object name, and the package names come from the listed package names. In the description of the template, we will see how to derive different repository names from the objects. #### PackageVariant Template -As previously discussed, the list entries generated by the target criteria result in PackageVariant entries. If no -template is specified, then PackageVariant default are used, along with the downstream repository name and package name +As previously discussed, the list entries generated by the target criteria result in *PackageVariant* entries. If no +template is specified, then *PackageVariant* default are used, along with the downstream repository name and package name as described in the previous section. The template allows the user to have control over all of the values in the -resulting PackageVariant. The template API is shown below. +resulting *PackageVariant*. The template API is shown below. ```go type PackageVariantTemplate struct { @@ -906,11 +906,11 @@ type FunctionTemplate struct { ``` This is a pretty complicated structure. 
To make it more understandable, the first thing to notice is that many fields -have a plain version, and an `Expr` version. The plain version is used when the value is static across all the -PackageVariants; the `Expr` version is used when the value needs to vary across PackageVariants. +have a plain version, and an Expr version. The plain version is used when the value is static across all the +*PackageVariants*; the Expr version is used when the value needs to vary across *PackageVariants*. Let's consider a simple example. Suppose we have a package for provisioning namespaces called "base-ns". We want to -instantiate this several times in the `cluster-01` repository. We could do this with this PackageVariantSet: +instantiate this several times in the *cluster-01* repository. We could do this with this *PackageVariantSet*: ```yaml apiVersion: config.porch.kpt.dev/v1alpha2 @@ -932,9 +932,9 @@ spec: - ns-3 ``` -That will produce three PackageVariant resources with the same upstream, all with the same downstream repo, and each +That will produce three *PackageVariant* resources with the same upstream, all with the same downstream repo, and each with a different downstream package name. If we also want to set some labels identically across the packages, we can -do that with the `template.labels` field: +do that with the template.labels field: ```yaml apiVersion: config.porch.kpt.dev/v1alpha2 @@ -960,8 +960,8 @@ spec: org: hr ``` -The resulting PackageVariant resources will include `labels` in their `spec`, and will be identical other than their -names and the `downstream.package`: +The resulting *PackageVariant* resources will include labels in their spec, and will be identical other than their +names and the downstream.package: ```yaml apiVersion: config.porch.kpt.dev/v1alpha1 @@ -1017,10 +1017,10 @@ spec: org: hr ``` -When using other targeting means, the use of the `Expr` fields becomes more likely, because we have more possible -sources for different field values. The `Expr` values are all +When using other targeting means, the use of the Expr fields becomes more likely, because we have more possible +sources for different field values. The Expr values are all [Common Expression Language (CEL)](https://github.com/google/cel-go) expressions, rather than static values. This allows -the user to construct values based upon various fields of the targets. Consider again the `repositorySelector` example, +the user to construct values based upon various fields of the targets. Consider again the repositorySelector example, where we have these repositories in the cluster. | Repository | Labels | @@ -1030,10 +1030,10 @@ where we have these repositories in the cluster. | cluster-03 | region=useast2, env=prod, org=hr | | cluster-04 | region=uswest1, env=prod, org=hr | -If we create a PackageVariantSet with the following `spec`, we can use the `Expr` fields to add labels to the -PackageVariantSpecs (and thus to the resulting PackageRevisions later) that vary based on cluster. We can also use -this to vary the `injectors` defined for each PackageVariant, resulting in each PackageRevision having different -resources injected. This `spec`: +If we create a *PackageVariantSet* with the following spec, we can use the Expr fields to add labels to the +*PackageVariantSpecs* (and thus to the resulting *PackageRevisions* later) that vary based on cluster. We can also use +this to vary the injectors defined for each *PackageVariant*, resulting in each *PackageRevision* having different +resources injected. 
This spec:

```yaml
spec:
  upstream:
    repo: example-repo
    package: foo
    revision: v2
  targets:
  - repositorySelector:
      matchLabels:
        env: prod
        org: hr
    template:
      downstream:
        packageExpr: "'foo-' + repository.labels['region']"
      labelsExprs:
      - keyExpr: "'region'"
        valueExpr: "repository.labels['region']"
      injectors:
      - nameExpr: "repository.labels['region'] + '-endpoints'"
```

-will result in three PackageVariant resources, one for each Repository with the labels env=prod and org=hr. The `labels`
-and `injectors` fields of the PackageVariantSpec will be different for each of these PackageVariants, as determined by
-the use of the `Expr` fields in the template, as shown here:
+will result in three *PackageVariant* resources, one for each Repository with the labels env=prod and org=hr. The labels
+and injectors fields of the *PackageVariantSpec* will be different for each of these *PackageVariants*, as determined by
+the use of the Expr fields in the template, as shown here:

```yaml
apiVersion: config.porch.kpt.dev/v1alpha1
@@ -1114,65 +1114,65 @@ spec:
      name: uswest1-endpoints
```

-Since the injectors are different for each PackageVariant, the resulting PackageRevisions will each have different
+Since the injectors are different for each *PackageVariant*, the resulting *PackageRevisions* will each have different
 resources injected.

When CEL expressions are evaluated, they have an environment associated with them. That is, there are certain objects
-that are accessible within the CEL expression. For CEL expressions used in the PackageVariantSet `template` field,
+that are accessible within the CEL expression. For CEL expressions used in the *PackageVariantSet* template field,
the following variables are available:

| CEL Variable | Variable Contents |
| -------------- | ------------------------------------------------------------ |
| repoDefault | The default repository name based on the targeting criteria. |
| packageDefault | The default package name based on the targeting criteria. |
-| upstream | The upstream PackageRevision. |
+| upstream | The upstream *PackageRevision*. |
| repository | The downstream Repository. |
| target | The target object (details vary; see below). |

-There is one expression that is an exception to the table above. Since the `repository` value corresponds to the
-Repository of the downstream, we must first evaluate the `downstream.repoExpr` expression to *find* that repository.
-Thus, for that expression only, `repository` is not a valid variable.
+There is one expression that is an exception to the table above. Since the repository value corresponds to the
+Repository of the downstream, we must first evaluate the downstream.repoExpr expression to find that repository.
+Thus, for that expression only, repository is not a valid variable.

-There is one more variable available across all CEL expressions: the `target` variable. This variable has a meaning that
+There is one more variable available across all CEL expressions: the target variable. This variable has a meaning that
 varies depending on the type of target, as follows:

-| Target Type | `target` Variable Contents |
+| Target Type | target Variable Contents |
| ------------------- | ---------------------------------------------------------------------------------------------- |
-| Repo/Package List | A struct with two fields: `repo` and `package`, the same as the `repoDefault` and `packageDefault` values. |
-| Repository Selector | The Repository selected by the selector. Although not recommended, this could be different than the `repository` value, which an be altered with `downstream.repo` or `downstream.repoExpr`. |
+| Repo/Package List | A struct with two fields: repo and package, the same as the repoDefault and packageDefault values.
|
+| Repository Selector | The Repository selected by the selector. Although not recommended, this could be different than the repository value, which can be altered with downstream.repo or downstream.repoExpr. |
 | Object Selector     | The Object selected by the selector. |
 
-For the various resource variables - `upstream`, `repository`, and `target` - arbitrary access to all fields of the
+For the various resource variables - upstream, repository, and target - arbitrary access to all fields of the
 object could lead to security concerns. Therefore, only a subset of the data is available for use in CEL expressions.
-Specifically, the following fields: `name`, `namespace`, `labels`, and `annotations`.
+Specifically, the following fields: name, namespace, labels, and annotations.
 
-Given the slight quirk with the `repoExpr`, it may be helpful to state the processing flow for the template evaluation:
+Given the slight quirk with the repoExpr, it may be helpful to state the processing flow for the template evaluation:
 
-1. The upstream PackageRevision is loaded. It must be in the same namespace as the PackageVariantSet[^multi-ns-reg].
+1. The upstream *PackageRevision* is loaded. It must be in the same namespace as the *PackageVariantSet*[^multi-ns-reg].
 1. The targets are determined.
 1. For each target:
-   1. The CEL environment is prepared with `repoDefault`, `packageDefault`, `upstream`, and `target` variables.
+   1. The CEL environment is prepared with repoDefault, packageDefault, upstream, and target variables.
    1. The downstream repository is determined and loaded, as follows:
-      - If present, `downstream.repoExpr` is evaluated using the CEL environment, and the result used as the downstream
+      - If present, downstream.repoExpr is evaluated using the CEL environment, and the result used as the downstream
        repository name.
-      - Otherwise, if `downstream.repo` is set, that is used as the downstream repository name.
+      - Otherwise, if downstream.repo is set, that is used as the downstream repository name.
       - If neither is present, the default repository name based on the target is used (i.e., the same value as the
-        `repoDefault` variable).
+        repoDefault variable).
      - The resulting downstream repository name is used to load the corresponding Repository object in the same
-        namespace as the PackageVariantSet.
+        namespace as the *PackageVariantSet*.
    1. The downstream Repository is added to the CEL environment.
    1. All other CEL expressions are evaluated.
-1. Note that if any of the resources (e.g., the upstream PackageRevision, or the downstream Repository) are not found
+1. Note that if any of the resources (e.g., the upstream *PackageRevision*, or the downstream Repository) are not found
   or otherwise fail to load, processing stops and a failure condition is raised. Similarly, if a CEL expression
   cannot be properly evaluated due to syntax or other reasons, processing stops and a failure condition is raised.
 
 #### Other Considerations
 
-It would appear convenient to automatically inject the PackageVariantSet targeting resource. However, it is better to
+It would appear convenient to automatically inject the *PackageVariantSet* targeting resource. However, it is better to
 require the package advertise the ways it accepts injections (i.e., the GVKs it understands), and only inject those.
 This keeps the separation of concerns cleaner; the package does not build in an awareness of the context in which it
 expects to be deployed.
For example, a package should not accept a Porch Repository resource just because that happens @@ -1180,37 +1180,37 @@ to be the targeting mechanism. That would make the package unusable in other con #### PackageVariantSet Status -The PackageVariantSet status uses these conditions: +The *PackageVariantSet* status uses these conditions: - - `Stalled` is set to True if there has been a failure that most likely requires user intervention. - - `Ready` is set to True if the last reconciliation successfully reconciled all targeted PackageVariant resources. + - Stalled is set to True if there has been a failure that most likely requires user intervention. + - Ready is set to True if the last reconciliation successfully reconciled all targeted *PackageVariant* resources. ## Future Considerations - As an alternative to the floating tag proposal, we may instead want to have a separate tag tracking controller that can update PV and PVS resources to tweak their upstream as the tag moves. - Installing a collection of packages across a set of clusters, or performing the same mutations to each package in a - collection, is only supported by creating multiple PackageVariant / PackageVariantSet resources. Options to consider + collection, is only supported by creating multiple *PackageVariant* / *PackageVariantSet* resources. Options to consider for these use cases: - - `upstreams` listing multiple packages. - - Label selector against PackageRevisions. This does not seem that useful, as PackageRevisions are highly re-usable + - upstreams listing multiple packages. + - Label selector against *PackageRevisions*. This does not seem that useful, as *PackageRevisions* are highly re-usable and would likely be composed in many different ways. - - A PackageRevisionSet resource that simply contained a list of Upstream structures and could be used as an Upstream. - This is functionally equivalent to the `upstreams` option, but that list is reusable across resources. - - Listing multiple PackageRevisionSets in the upstream would be nice as well. - - Any or all of these could be implemented in PackageVariant, PackageVariantSet, or both. + - A *PackageRevisionSet* resource that simply contained a list of Upstream structures and could be used as an Upstream. + This is functionally equivalent to the upstreams option, but that list is reusable across resources. + - Listing multiple *PackageRevisionSets* in the upstream would be nice as well. + - Any or all of these could be implemented in *PackageVariant*, *PackageVariantSet*, or both. ## Footnotes [^porch17]: Implemented in Porch v0.0.17. [^porch18]: Coming in Porch v0.0.18. [^notimplemented]: Proposed here but not yet implemented as of Porch v0.0.18. -[^setns]: As of this writing, the `set-namespace` function does not have a `create` option. This should be added to +[^setns]: As of this writing, the set-namespace function does not have a create option. This should be added to avoid the user needing to also usethe `upsert-resource` function. Such common operation should be simple forusers. -[^pvsimpl]: This document describes PackageVariantSet `v1alpha2`, which will be available starting with Porch v0.0.18. - In Porch v0.0.16 and 17, the `v1alpha1` implementation is available, but it is a somewhat different API, without +[^pvsimpl]: This document describes *PackageVariantSet* v1alpha2, which will be available starting with Porch v0.0.18. 
+ In Porch v0.0.16 and 17, the v1alpha1 implementation is available, but it is a somewhat different API, without support for CEL or any injection. It is focused only on fan out targeting, and uses a [slightly different targeting API](https://github.com/nephio-project/porch/blob/main/controllers/packagevariants/api/v1alpha1/packagevariant_types.go). -[^repo-pkg-expr]: This is not exactly correct. As we will see later in the `template` discussion, this the repository +[^repo-pkg-expr]: This is not exactly correct. As we will see later in the template discussion, this the repository and package names listed actually are just defaults for the template; they can be further manipulated in the template to reference different downstream repositories and package names. The same is true for the repositories selected via the `repositorySelector` option. However, this can be ignored for now. diff --git a/content/en/docs/porch/running-porch/running-locally.md b/content/en/docs/porch/running-porch/running-locally.md index 779c39a4..0a2ec8f9 100644 --- a/content/en/docs/porch/running-porch/running-locally.md +++ b/content/en/docs/porch/running-porch/running-locally.md @@ -20,7 +20,7 @@ To run Porch locally, you will need: ## Getting Started -Clone this repository into `${GOPATH}/src/github.com/GoogleContainerTools/kpt`. +Clone this repository into *${GOPATH}/src/github.com/GoogleContainerTools/kpt*. ```sh git clone https://github.com/GoogleContainerTools/kpt.git "${GOPATH}/src/github.com/GoogleContainerTools/kpt" @@ -53,8 +53,8 @@ make This will: -* create Docker network named `porch` -* build and start `etcd` Docker container +* create Docker network named *porch* +* build and start etcd Docker container * build and start main k8s apiserver Docker container * build and start the kpt function evaluator microservice [func](https://github.com/nephio-project/porch/tree/main/func) Docker container @@ -121,5 +121,5 @@ make stop ## Troubleshooting -If you run into issues that look like `git: authentication required`, make sure you have SSH +If you run into issues that look like *git: authentication required*, make sure you have SSH keys set up on your local machine. 
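As a quick, purely illustrative sanity check for the *git: authentication required* error mentioned above (assuming GitHub as the Git host and a placeholder e-mail address), the following standard commands can be used to create and test an SSH key:

```bash
# Generate an SSH key pair if one does not already exist (accept the default path).
ssh-keygen -t ed25519 -C "you@example.com"

# After adding the public key (~/.ssh/id_ed25519.pub) to your Git hosting account,
# verify that SSH authentication works; GitHub answers with a short greeting.
ssh -T git@github.com
```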
diff --git a/content/en/docs/porch/running-porch/running-on-GKE.md b/content/en/docs/porch/running-porch/running-on-GKE.md index 0c19468b..1d0cc5f7 100644 --- a/content/en/docs/porch/running-porch/running-on-GKE.md +++ b/content/en/docs/porch/running-porch/running-on-GKE.md @@ -25,7 +25,7 @@ need: * [gcloud](https://cloud.google.com/sdk/docs/install) * [kubectl](https://kubernetes.io/docs/tasks/tools/); you can install it via `gcloud components install kubectl` * [kpt](https://kpt.dev/) -* Command line utilities such as `curl`, `tar` +* Command line utilities such as *curl*, *tar* To build and run Porch on GKE, you will also need: @@ -40,8 +40,8 @@ To build and run Porch on GKE, you will also need: ## Getting Started -Make sure your `gcloud` is configured with your project (alternatively, you can augment all following `gcloud` -commands below with `--project` flag): +Make sure your gcloud is configured with your project (alternatively, you can augment all following gcloud +commands below with --project flag): ```bash gcloud config set project YOUR_GCP_PROJECT @@ -63,7 +63,7 @@ gcloud services enable container.googleapis.com gcloud container clusters create --region us-central1 porch-dev ``` -And ensure `kubectl` is targeting your GKE cluster: +And ensure *kubectl* is targeting your GKE cluster: ```bash gcloud container clusters get-credentials --region us-central1 porch-dev @@ -75,7 +75,7 @@ gcloud container clusters get-credentials --region us-central1 porch-dev To run a released version of Porch, download the release config bundle from [Porch release page](https://github.com/nephio-project/porch/releases). -Untar and apply the `deployment-blueprint.tar.gz` config bundle. This will install: +Untar and apply the *deployment-blueprint.tar.gz* config bundle. This will install: * Porch server * [Config Sync](https://kpt.dev/gitops/configsync/) @@ -87,7 +87,7 @@ kubectl apply -f porch-install kubectl wait deployment --for=condition=Available porch-server -n porch-system ``` -You can verify that Porch is running by querying the `api-resources`: +You can verify that Porch is running by querying the api-resources: ```bash kubectl api-resources | grep porch @@ -119,7 +119,7 @@ spec: To run custom build of Porch, you will need additional [prerequisites](#prerequisites). The commands below use [Google Container Registry](https://console.cloud.google.com/gcr). -Clone this repository into `${GOPATH}/src/github.com/GoogleContainerTools/kpt`. +Clone this repository into *${GOPATH}/src/github.com/GoogleContainerTools/kpt*. ```bash git clone https://github.com/GoogleContainerTools/kpt.git "${GOPATH}/src/github.com/GoogleContainerTools/kpt" @@ -136,7 +136,7 @@ named (example shown is the Porch server image). IMAGE_TAG=$(git rev-parse --short HEAD) make push-and-deploy-no-sa ``` -If you want to use a different repository, you can set `IMAGE_REPO` variable +If you want to use a different repository, you can set IMAGE_REPO variable (see [Makefile](https://github.com/nephio-project/porch/blob/main/Makefile#L32) for details). The `make push-and-deploy-no-sa` target will install Porch but not Config Sync. You can install Config Sync in your k8s @@ -145,12 +145,12 @@ cluster manually following the {{% alert title="Note" color="primary" %}} -The `-no-sa` (no service account) targets create Porch deployment +The -no-sa (no service account) targets create Porch deployment configuration which does not associate Kubernetes service accounts with GCP service accounts. 
This is sufficient for Porch to integate with Git repositories using Basic Auth, for example GitHub. -As above, you can verify that Porch is running by querying the `api-resources`: +As above, you can verify that Porch is running by querying the api-resources: ```bash kubectl api-resources | grep porch @@ -191,14 +191,14 @@ kubectl annotate serviceaccount porch-server -n porch-system \ iam.gke.io/gcp-service-account=porch-server@${GCP_PROJECT_ID}.iam.gserviceaccount.com ``` -Build Porch, push images, and deploy porch server and controllers using the `make` target that adds workload identity +Build Porch, push images, and deploy porch server and controllers using the make target that adds workload identity service account annotations: ```bash IMAGE_TAG=$(git rev-parse --short HEAD) make push-and-deploy ``` -As above, you can verify that Porch is running by querying the `api-resources`: +As above, you can verify that Porch is running by querying the api-resources: ```bash kubectl api-resources | grep porch @@ -246,14 +246,14 @@ gcloud iam service-accounts add-iam-policy-binding porch-sync@${GCP_PROJECT_ID}. --member "serviceAccount:${GCP_PROJECT_ID}.svc.id.goog[porch-system/porch-controllers]" ``` -Build Porch, push images, and deploy porch server and controllers using the `make` target that adds workload identity +Build Porch, push images, and deploy porch server and controllers using the make target that adds workload identity service account annotations: ```bash IMAGE_TAG=$(git rev-parse --short HEAD) make push-and-deploy ``` -As above, you can verify that Porch is running by querying the `api-resources`: +As above, you can verify that Porch is running by querying the api-resources: ```bash kubectl api-resources | grep porch diff --git a/content/en/docs/porch/using-porch/adding-external-git-ca-bundle.md b/content/en/docs/porch/using-porch/adding-external-git-ca-bundle.md index 8b20a5f4..b24fd92a 100644 --- a/content/en/docs/porch/using-porch/adding-external-git-ca-bundle.md +++ b/content/en/docs/porch/using-porch/adding-external-git-ca-bundle.md @@ -11,15 +11,13 @@ To enable the porch server to communicate with a custom git deployment over HTTP The secret itself must meet the following criteria: -- exist in the same `namespace` as the Repository CR (Custom Resource) that requires it -- be named specifically `-ca-bundle` -- have a Data key named `ca.crt` containing the relevant ca certificate (chain) +- exist in the same namespace as the Repository CR (Custom Resource) that requires it +- be named specifically \-ca-bundle +- have a Data key named *ca.crt* containing the relevant ca certificate (chain) -For example, a Git Repository is hosted over HTTPS at the following URL: +For example, a Git Repository is hosted over HTTPS at the *https://my-gitlab.com/joe.bloggs/blueprints.git* URL: -`https://my-gitlab.com/joe.bloggs/blueprints.git` - -Before creating the new Repository in the **gitlab** namespace, we must create a secret that fulfils the criteria above. +Before creating the new Repository in the *gitlab* namespace, we must create a secret that fulfils the criteria above. 
`kubectl create secret generic gitlab-ca-bundle --namespace=gitlab --from-file=ca.crt` diff --git a/content/en/docs/porch/using-porch/install-and-using-porch.md b/content/en/docs/porch/using-porch/install-and-using-porch.md index ca23fd1b..d77b4fec 100644 --- a/content/en/docs/porch/using-porch/install-and-using-porch.md +++ b/content/en/docs/porch/using-porch/install-and-using-porch.md @@ -7,7 +7,7 @@ description: "A tutorial to install and use Porch" This tutorial is a guide to installing and using Porch. It is based on the [Porch demo produced by Tal Liron of Google](https://github.com/tliron/klab/tree/main/environments/porch-demo). Users -should be very comfortable with using `git`, `docker`, and `kubernetes`. +should be very comfortable with using *git*, *docker*, and *kubernetes*. See also [the Nephio Learning Resource](https://github.com/nephio-project/docs/blob/main/learning.md) page for background help and information. @@ -45,14 +45,14 @@ kind create cluster --config=kind_management_cluster.yaml kind create cluster --config=kind_edge1_cluster.yaml ``` -Output the kubectl config for the clusters: +Output the *kubectl* config for the clusters: ```bash kind get kubeconfig --name=management > ~/.kube/kind-management-config kind get kubeconfig --name=edge1 > ~/.kube/kind-edge1-config ``` -Toggling kubectl between the clusters: +Toggling *kubectl* between the clusters: ```bash export KUBECONFIG=~/.kube/kind-management-config @@ -73,7 +73,7 @@ kubectl wait --namespace metallb-system \ --timeout=90s ``` -Check the subnet that is being used by the `kind` network in docker +Check the subnet that is being used by the kind network in docker ```bash docker network inspect kind | grep Subnet @@ -86,7 +86,7 @@ Sample output: "Subnet": "fc00:f853:ccd:e793::/64" ``` -Edit the `metallb-conf.yaml` file and ensure the `spec.addresses` range is in the IPv4 subnet being used by the `kind` network in docker. +Edit the *metallb-conf.yaml* file and ensure the spec.addresses range is in the IPv4 subnet being used by the kind network in docker. ```yaml ... @@ -104,7 +104,7 @@ kubectl apply -f metallb-conf.yaml ## Deploy and set up gitea on the management cluster using kpt -Get the gitea kpt package: +Get the *gitea kpt* package: ```bash export KUBECONFIG=~/.kube/kind-management-config @@ -114,7 +114,7 @@ cd kpt_packages kpt pkg get https://github.com/nephio-project/catalog/tree/main/distros/sandbox/gitea ``` -Comment out the preconfigured IP address from the `gitea/service-gitea.yaml` file in the gitea Kpt package: +Comment out the preconfigured IP address from the *gitea/service-gitea.yaml* file in the *gitea kpt* package: ```bash 11c11 @@ -123,7 +123,7 @@ Comment out the preconfigured IP address from the `gitea/service-gitea.yaml` fil > # metallb.universe.tf/loadBalancerIPs: 172.18.0.200 ``` -Now render, init and apply the Gitea Kpt package: +Now render, init and apply the *gitea kpt* package: ```bash kpt fn render gitea @@ -142,11 +142,11 @@ gitea LoadBalancer 10.96.243.120 172.18.255.200 22:31305/TCP,3000:31102/ The UI is available at http://172.18.255.200:3000 in the example above. -To login to Gitea, use the credentials `nephio:secret`. +To login to Gitea, use the credentials nephio:secret. ## Create repositories on Gitea for management and edge1 -On the gitea UI, click the '+' opposite "Repositories" and fill in the form for both the `management` and `edge1` repositories. 
Use default values except for the following fields: +On the gitea UI, click the **+** opposite **Repositories** and fill in the form for both the *management* and *edge1* repositories. Use default values except for the following fields: - Repository Name: "Management" or "edge1" - Description: Something appropriate @@ -167,7 +167,7 @@ Check the repos: Now initialize both repos with an initial commit. -Initialize the `management` repo +Initialize the *management* repo ```bash cd ../repos @@ -188,7 +188,7 @@ git push -u origin main cd .. ``` -Initialize the `edge1` repo +Initialize the *edge1* repo ```bash git clone http://172.18.255.200:3000/nephio/edge1 @@ -210,7 +210,7 @@ cd ../../ ## Install Porch -We will use the Porch Kpt package from Nephio catalog repo. +We will use the *Porch Kpt* package from Nephio catalog repo. ```bash cd kpt_packages @@ -218,7 +218,7 @@ cd kpt_packages kpt pkg get https://github.com/nephio-project/catalog/tree/main/nephio/core/porch ``` -Now we can install porch. We render the kpt package and then init and apply it. +Now we can install porch. We render the *kpt* package and then init and apply it. ```bash kpt fn render porch @@ -287,7 +287,7 @@ management git Package false True http://172.18.255.20 ## Configure configsync on the workload cluster -Configsync is installed on the `edge1` cluster so that it syncs the contents of the `edge1` repository onto the `edge1` +Configsync is installed on the edge1 cluster so that it syncs the contents of the *edge1* repository onto the edge1 workload cluster. We will use the configsync package from Nephio. ```bash @@ -310,13 +310,13 @@ config-management-operator-6946b77565-f45pc 1/1 Running 0 118m reconciler-manager-5b5d8557-gnhb2 2/2 Running 0 118m ``` -Now, we need to set up a Rootsync CR to synchronize the `edge1` repo: +Now, we need to set up a Rootsync CR to synchronize the *edge1* repo: ```bash kpt pkg get https://github.com/nephio-project/catalog/tree/main/nephio/optional/rootsync ``` -Edit the `rootsync/package-context.yaml` file to set the name of the cluster/repo we are syncing from/to: +Edit the *rootsync/package-context.yaml* file to set the name of the cluster/repo we are syncing from/to: ```bash 9c9 @@ -325,13 +325,13 @@ Edit the `rootsync/package-context.yaml` file to set the name of the cluster/rep > name: edge1 ``` -Render the package. This configures the `rootsync/rootsync.yaml` file in the Kpt package: +Render the package. This configures the *rootsync/rootsync.yaml* file in the Kpt package: ```bash kpt fn render rootsync ``` -Edit the `rootsync/rootsync.yaml` file to set the IP address of Gitea and to turn off authentication for accessing +Edit the *rootsync/rootsync.yaml* file to set the IP address of Gitea and to turn off authentication for accessing gitea: ```bash @@ -437,7 +437,7 @@ external-blueprints git Package false True https://github.com/n management git Package false True http://172.18.255.200:3000/nephio/management.git ``` -A repository is a CR of the Porch Repository CRD. You can examine the 'repositories.config.porch.kpt.dev' CRD with +A repository is a CR of the Porch Repository CRD. You can examine the *repositories.config.porch.kpt.dev* CRD with either of the following commands (both of which are rather verbose): ```bash @@ -529,9 +529,9 @@ external-blueprints-60ef45bb8f55b63556e7467f16088325022a7ece pkg-example-upf-b external-blueprints-7757966cc7b965f1b9372370a4b382c8375a2b40 pkg-example-upf-bp v5 v5 external-blueprints 17 ``` -Let's examine the `free5gc-cp v1` package. 
+Let's examine the *free5gc-cp v1* package. -The PackageRevision CR name for free5gc-cp v1 is external-blueprints-dabbc422fdf0b8e5942e767d929b524e25f7eef9. +The PackageRevision CR name for *free5gc-cp v1* is external-blueprints-dabbc422fdf0b8e5942e767d929b524e25f7eef9. ```bash kubectl get packagerevision -n porch-demo external-blueprints-dabbc422fdf0b8e5942e767d929b524e25f7eef9 -o yaml @@ -560,7 +560,7 @@ status: upstreamLock: {} ``` -Getting the PackageRevisionResources pulls the package from its repository with each file serialized into a name-value +Getting the *PackageRevisionResources* pulls the package from its repository with each file serialized into a name-value map of resources in it's spec.
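To make that name-value mapping concrete, a heavily abridged and purely illustrative sketch of such an object is shown below; the exact set of spec fields may differ between Porch versions, and the file contents are elided:

```yaml
# Illustrative sketch only (abridged); retrieved with, for example:
#   kubectl get packagerevisionresources -n porch-demo \
#     external-blueprints-dabbc422fdf0b8e5942e767d929b524e25f7eef9 -o yaml
apiVersion: porch.kpt.dev/v1alpha1
kind: PackageRevisionResources
metadata:
  name: external-blueprints-dabbc422fdf0b8e5942e767d929b524e25f7eef9
  namespace: porch-demo
spec:
  packageName: free5gc-cp
  repository: external-blueprints
  resources:
    Kptfile: |
      apiVersion: kpt.dev/v1
      kind: Kptfile
      ...
    README.md: |
      ...
```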
@@ -1310,7 +1310,7 @@ management git Package false True http://172.18.255.20 ```
-Check that porchctl lists our remote packages (PackageRevisions): +Check that porchctl lists our remote packages (PackageRevisions): ``` porchctl rpkg -n porch-demo get @@ -1352,7 +1352,7 @@ The output above is similar to the output of `kubectl get packagerevision -n por ### Blueprint with no Kpt pipelines -Create a new package in our `management` repo using the sample `network-function` package provided. This network function kpt package is a demo Kpt package that installs [nginx](https://github.com/nginx). +Create a new package in our *management* repo using the sample *network-function* package provided. This network function kpt package is a demo Kpt package that installs [nginx](https://github.com/nginx). ``` porchctl -n porch-demo rpkg init network-function --repository=management --workspace=v1 @@ -1362,7 +1362,7 @@ NAME PACKAGE WORKSPA management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82 network-function v1 false Draft management ``` -This command creates a new PackageRevision CR in porch and also creates a branch called `network-function/v1` in our gitea `management` repo. Use the Gitea web UI to confirm that the branch has been created and note that it only has default content as yet. +This command creates a new *PackageRevision* CR in porch and also creates a branch called *network-function/v1* in our gitea *management* repo. Use the Gitea web UI to confirm that the branch has been created and note that it only has default content as yet. We now pull the package we have initialized from Porch. @@ -1401,13 +1401,13 @@ management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82 network-function v1 ``` -Once we approve the package, the package is merged into the main branch in the `management` repo and the branch called `network-function/v1` in that repo is removed. Use the Gitea UI to verify this. We now have our blueprint package in our `management` repo and we can deploy this package into workload clusters. +Once we approve the package, the package is merged into the main branch in the *management* repo and the branch called *network-function/v1* in that repo is removed. Use the Gitea UI to verify this. We now have our blueprint package in our *management* repo and we can deploy this package into workload clusters. ### Blueprint with a Kpt pipeline -The second blueprint blueprint in the `blueprint` directory is called `network-function-auto-namespace`. This network function is exactly the same as the `network-function` package except that it has a Kpt function that automatically creates a namespace with the namespace configured in the `name` field in the `package-context.yaml` file. Note that no namespace is defined in the metadata of the `deployment.yaml` file of this Kpt package. +The second blueprint blueprint in the *blueprint* directory is called *network-function-auto-namespace*. This network function is exactly the same as the *network-function* package except that it has a Kpt function that automatically creates a namespace with the namespace configured in the name field in the *package-context.yaml* file. Note that no namespace is defined in the metadata of the *deployment.yaml* file of this Kpt package. -We use the same sequence of commands again to publish our blueprint package for `network-function-auto-namespace`. +We use the same sequence of commands again to publish our blueprint package for *network-function-auto-namespace*. 
``` porchctl -n porch-demo rpkg init network-function-auto-namespace --repository=management --workspace=v1 @@ -1420,7 +1420,7 @@ cp blueprints/local-changes/network-function-auto-namespace/* blueprints/initial porchctl -n porch-demo rpkg push management-c97bc433db93f2e8a3d413bed57216c2a72fc7e3 blueprints/initialized/network-function-auto-namespace ``` -Examine the `drafts/network-function-auto-namespace/v1` branch in Gitea. Notice that the `set-namespace` Kpt finction in the pipeline in the `Kptfile` has set the namespace in the `deployment.yaml` file to the value `default-namespace-name`, which it read from the `package-context.yaml` file. +Examine the *drafts/network-function-auto-namespace/v1* branch in Gitea. Notice that the set-namespace Kpt function in the pipeline in the *Kptfile* has set the namespace in the *deployment.yaml* file to the value default-namespace-name, which it read from the *package-context.yaml* file. Now we propose and approve the package. @@ -1442,7 +1442,7 @@ management-c97bc433db93f2e8a3d413bed57216c2a72fc7e3 network-function-auto-name ### Blueprint with no Kpt pipelines -The process of deploying a blueprint package from our `management` repo clones the package, then modifies it for use on the workload cluster. The cloned package is then initialized, pushed, proposed, and approved onto the `edge1` repo. Remember that the `edge1` repo is being monitored by Configsync from the `edge1` cluster, so once the package appears in the `edge1` repo on the management cluster, it will be pulled by Configsync and applied to the `edge1` cluster. +The process of deploying a blueprint package from our *management* repo clones the package, then modifies it for use on the workload cluster. The cloned package is then initialized, pushed, proposed, and approved onto the *edge1* repo. Remember that the *edge1* repo is being monitored by Configsync from the edge1 cluster, so once the package appears in the *edge1* repo on the management cluster, it will be pulled by Configsync and applied to the edge1 cluster. ``` porchctl -n porch-demo rpkg pull management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82 tmp_packages_for_deployment/edge1-network-function-a.clone.tmp @@ -1461,7 +1461,7 @@ The package we created in the last section is cloned. We now remove the original rm tmp_packages_for_deployment/edge1-network-function-a.clone.tmp/.KptRevisionMetadata ``` -We use a kpt function to change the namespace that will be used for the deployment of the network function. +We use a *kpt* function to change the namespace that will be used for the deployment of the network function. ``` kpt fn eval --image=gcr.io/kpt-fn/set-namespace:v0.4.1 tmp_packages_for_deployment/edge1-network-function-a.clone.tmp -- namespace=edge1-network-function-a @@ -1472,7 +1472,7 @@ kpt fn eval --image=gcr.io/kpt-fn/set-namespace:v0.4.1 tmp_packages_for_deployme [info]: namespace "" updated to "edge1-network-function-a", 1 value(s) changed ``` -We now initialize and push the package to the `edge1` repo: +We now initialize and push the package to the *edge1* repo: ``` porchctl -n porch-demo rpkg init edge1-network-function-a --repository=edge1 --workspace=v1 @@ -1490,10 +1490,10 @@ NAME PACKAGE WORKSPACEN edge1-d701be9b849b8b8724a6e052cbc74ca127b737c3 network-function-a v1 false Draft edge1 ``` -You can verify that the package is in the `network-function-a/v1` branch of the deployment repo using the Gitea web UI. 
+You can verify that the package is in the *network-function-a/v1* branch of the deployment repo using the Gitea web UI. -Check that the `edge1-network-function-a` package is not deployed on the edge1 cluster yet: +Check that the *edge1-network-function-a* package is not deployed on the edge1 cluster yet: ``` export KUBECONFIG=~/.kube/kind-edge1-config @@ -1502,7 +1502,7 @@ No resources found in network-function-a namespace. ``` -We now propose and approve the deployment package, which merges the package to the `edge1` repo and further triggers Configsync to apply the package to the `edge1` cluster. +We now propose and approve the deployment package, which merges the package to the *edge1* repo and further triggers Configsync to apply the package to the edge1 cluster. ``` export KUBECONFIG=~/.kube/kind-management-config @@ -1518,7 +1518,7 @@ NAME PACKAGE WORKSPACEN edge1-d701be9b849b8b8724a6e052cbc74ca127b737c3 network-function-a v1 v1 true Published edge1 ``` -We can now check that the `network-function-a` package is deployed on the edge1 cluster and that the pod is running +We can now check that the *network-function-a* package is deployed on the edge1 cluster and that the pod is running ``` export KUBECONFIG=~/.kube/kind-edge1-config @@ -1536,7 +1536,7 @@ network-function-9779fc9f5-4rqp2 1/1 Running 0 44s ### Blueprint with a Kpt pipeline -The process for deploying a blueprint with a Kpt pipeline runs the Kpt pipeline automatically with whatever configuration we give it. Rather than explicitly running a Kpt function to change the namespace, we will specify the namespace as configuration and the pipeline will apply it to the deployment. +The process for deploying a blueprint with a *Kpt* pipeline runs the Kpt pipeline automatically with whatever configuration we give it. Rather than explicitly running a *Kpt* function to change the namespace, we will specify the namespace as configuration and the pipeline will apply it to the deployment. ``` porchctl -n porch-demo rpkg pull management-c97bc433db93f2e8a3d413bed57216c2a72fc7e3 tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp @@ -1556,7 +1556,7 @@ We now remove the original metadata from the package. rm tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp/.KptRevisionMetadata ``` -The package we created in the last section is cloned. We now initialize and push the package to the `edge1` repo: +The package we created in the last section is cloned. We now initialize and push the package to the *edge1* repo: ``` porchctl -n porch-demo rpkg init edge1-network-function-auto-namespace-a --repository=edge1 --workspace=v1 @@ -1569,7 +1569,7 @@ rm -fr tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone ``` -We now simply configure the namespace we want to apply. edit the `tmp_packages_for_deployment/edge1-network-function-auto-namespace-a/package-context.yaml` file and set the namespace to use: +We now simply configure the namespace we want to apply. edit the *tmp_packages_for_deployment/edge1-network-function-auto-namespace-a/package-context.yaml* file and set the namespace to use: ``` 8c8 @@ -1578,7 +1578,7 @@ We now simply configure the namespace we want to apply. 
edit the `tmp_packages_f > name: edge1-network-function-auto-namespace-a ``` -We now push the package to the `edge1` repo: +We now push the package to the *edge1* repo: ``` porchctl -n porch-demo rpkg push edge1-48997da49ca0a733b0834c1a27943f1a0e075180 tmp_packages_for_deployment/edge1-network-function-auto-namespace-a @@ -1590,9 +1590,9 @@ porchctl -n porch-demo rpkg push edge1-48997da49ca0a733b0834c1a27943f1a0e075180 porchctl -n porch-demo rpkg get --name edge1-network-function-auto-namespace-a ``` -You can verify that the package is in the `network-function-auto-namespace-a/v1` branch of the deployment repo using the Gitea web UI. You can see that the kpt pipeline fired and set the `edge1-network-function-auto-namespace-a` namespace in the `deployment.yaml` file on the `drafts/edge1-network-function-auto-namespace-a/v1` branch on the `edge1` repo in gitea. +You can verify that the package is in the *network-function-auto-namespace-a/v1* branch of the deployment repo using the Gitea web UI. You can see that the kpt pipeline fired and set the edge1-network-function-auto-namespace-a namespace in the *deployment.yaml* file on the *drafts/edge1-network-function-auto-namespace-a/v1* branch on the *edge1* repo in gitea. -Check that the `edge1-network-function-auto-namespace-a` package is not deployed on the edge1 cluster yet: +Check that the *edge1-network-function-auto-namespace-a* package is not deployed on the edge1 cluster yet: ``` export KUBECONFIG=~/.kube/kind-edge1-config @@ -1601,7 +1601,7 @@ No resources found in network-function-auto-namespace-a namespace. ``` -We now propose and approve the deployment package, which merges the package to the `edge1` repo and further triggers Configsync to apply the package to the `edge1` cluster. +We now propose and approve the deployment package, which merges the package to the *edge1* repo and further triggers Configsync to apply the package to the edge1 cluster. ``` export KUBECONFIG=~/.kube/kind-management-config @@ -1617,7 +1617,7 @@ NAME PACKAGE edge1-48997da49ca0a733b0834c1a27943f1a0e075180 edge1-network-function-auto-namespace-a v1 v1 true Published edge1 ``` -We can now check that the `network-function-auto-namespace-a` package is deployed on the edge1 cluster and that the pod is running +We can now check that the *network-function-auto-namespace-a* package is deployed on the edge1 cluster and that the pod is running ``` export KUBECONFIG=~/.kube/kind-edge1-config @@ -1660,7 +1660,7 @@ spec: - network-function-c ``` -In this very simple PackageVariant, the `network-function` package in the `management` repo is cloned into the `edge1` repo as the `network-function-b` and `network-function-c` package variants. +In this very simple PackageVariant, the *network-function* package in the *management* repo is cloned into the *edge1* repo as the *network-function-b* and *network-function-c* package variants. {{% alert title="Note" color="primary" %}} @@ -1694,8 +1694,8 @@ NAME PACKAGE WORKSPACEN edge1-ee14f7ce850ddb0a380cf201d86f48419dc291f4 network-function-c packagevariant-1 false Draft edge1 ``` -We can see that our two new packages are created as draft packages on the edge1 repo. We can also examine the -PacakgeVariant CRs that have been created: +We can see that our two new packages are created as draft packages on the *edge1* repo. 
We can also examine the +*PacakgeVariant* CRs that have been created: ```bash kubectl get PackageVariant -n porch-demo @@ -1706,7 +1706,7 @@ network-function-c network-function-9779fc9f5-h7nsb ``` -It is also interesting to examine the yaml of the PackageVariant: +It is also interesting to examine the yaml of the *PackageVariant*: ```yaml kubectl get PackageVariant -n porch-demo -o yaml @@ -1799,8 +1799,8 @@ metadata: resourceVersion: "" ``` -We now want to customize and deploy our two packages. To do this we must pull the pacakges locally, render the kpt -functions, and then push the rendered packages back up to the `edge1` repo. +We now want to customize and deploy our two packages. To do this we must pull the pacakges locally, render the *kpt* +functions, and then push the rendered packages back up to the *edge1* repo. ```bash porchctl rpkg pull edge1-a31b56c7db509652f00724dd49746660757cd98a tmp_packages_for_deployment/edge1-network-function-b --namespace=porch-demo @@ -1812,7 +1812,7 @@ kpt fn eval --image=gcr.io/kpt-fn/set-namespace:v0.4.1 tmp_packages_for_deployme porchctl rpkg push edge1-ee14f7ce850ddb0a380cf201d86f48419dc291f4 tmp_packages_for_deployment/edge1-network-function-c --namespace=porch-demo ``` -Check that the namespace has been updated on the two packages in the `edge1` repo using the Gitea web UI. +Check that the namespace has been updated on the two packages in the *edge1* repo using the Gitea web UI. Now our two packages are ready for deployment: @@ -1830,7 +1830,7 @@ porchctl rpkg approve edge1-ee14f7ce850ddb0a380cf201d86f48419dc291f4 --namespace edge1-ee14f7ce850ddb0a380cf201d86f48419dc291f4 approved ``` -We can now check that the `network-function-b` and `network-function-c` packages are deployed on the edge1 cluster and +We can now check that the *network-function-b* and *network-function-c* packages are deployed on the edge1 cluster and that the pods are running ```bash @@ -1845,7 +1845,7 @@ network-function-c network-function-9779fc9f5-h7nsb ### Using a PackageVariantSet to automatically set the package name and package namespace -The PackageVariant CR defined as: +The *PackageVariant* CR defined as: ```yaml apiVersion: config.porch.kpt.dev/v1alpha2 @@ -1870,9 +1870,9 @@ spec: ``` -In this PackageVariant, the `network-function-auto-namespace` package in the `management` repo is cloned into the `edge1` repo as the `network-function-auto-namespace-x` and `network-function-auto-namespace-y` package variants, similar to the PackageVariant in `simple-variant.yaml`. +In this *PackageVariant*, the *network-function-auto-namespace* package in the *management* repo is cloned into the *edge1* repo as the *network-function-auto-namespace-x* and *network-function-auto-namespace-y* package variants, similar to the *PackageVariant* in *simple-variant.yaml*. -An extra `template` section provided for the repositories in the PackageVariant: +An extra template section provided for the repositories in the PackageVariant: ```yaml template: @@ -1880,20 +1880,20 @@ template: packageExpr: "target.package + '-cumulus'" ``` -This template means that each package in the `spec.targets.repositories..packageNames` list will have the suffix -`-cumulus` added to its name. This allows us to automatically generate unique package names. 
Applying the
-PackageVariantSet also automatically sets a unique namespace for each network function because applying the
-PackageVariantSet automatically triggers the Kpt pipeline in the `network-function-auto-namespace` Kpt package to
+This template means that each package in the spec.targets.repositories..packageNames list will have the suffix
+-cumulus added to its name. This allows us to automatically generate unique package names. Applying the
+*PackageVariantSet* also automatically sets a unique namespace for each network function because applying the
+*PackageVariantSet* automatically triggers the Kpt pipeline in the *network-function-auto-namespace* *Kpt* package to
 generate unique namespaces for each deployed package.
 
 {{% alert title="Note" color="primary" %}}
 
-Many other mutatinos can be performed using a PackageVariantSet. Use `kubectl explain PackageVariantSet` to get help on
-the structure of the PackageVariantSet CRD to see the various mutations that are possible.
+Many other mutations can be performed using a *PackageVariantSet*. Use `kubectl explain PackageVariantSet` to get help on
+the structure of the *PackageVariantSet* CRD to see the various mutations that are possible.
 
 {{% /alert %}}
 
-Applying the PackageVariantSet creates the new packages as draft packages:
+Applying the *PackageVariantSet* creates the new packages as draft packages:
 
 ```bash
 kubectl apply -f name-namespace-variant.yaml
@@ -1918,23 +1918,23 @@ The suffix `x-cumulonimbus` and `y-cumulonimbus` has been palced on the package
 
 {{% /alert %}}
 
-Examine the `edge1` repo on Giea and you should see two new draft branches.
+Examine the *edge1* repo on Gitea and you should see two new draft branches.
 
 - drafts/network-function-auto-namespace-x-cumulonimbus/packagevariant-1
 - drafts/network-function-auto-namespace-y-cumulonimbus/packagevariant-1
 
 In these packages, you will see that:
 
-1. The package name has been generated as `network-function-auto-namespace-x-cumulonimbus` and
-   `network-function-auto-namespace-y-cumulonimbus`in all files in the packages
-2. The namespace has been generated as `network-function-auto-namespace-x-cumulonimbus` and
-   `network-function-auto-namespace-y-cumulonimbus` respectively in the `demployment.yaml` files
-3. The PackageVariant has set the `data.name` field as `network-function-auto-namespace-x-cumulonimbus` and
-   `network-function-auto-namespace-y-cumulonimbus` respectively in the `pckage-context.yaml` files
+1. The package name has been generated as network-function-auto-namespace-x-cumulonimbus and
+   network-function-auto-namespace-y-cumulonimbus in all files in the packages
+2. The namespace has been generated as network-function-auto-namespace-x-cumulonimbus and
+   network-function-auto-namespace-y-cumulonimbus respectively in the *deployment.yaml* files
+3. The PackageVariant has set the data.name field as network-function-auto-namespace-x-cumulonimbus and
+   network-function-auto-namespace-y-cumulonimbus respectively in the *package-context.yaml* files
 
 This has all been performed automatically; we have not had to perform the
 `porchctl rpkg pull/kpt fn render/porchctl rpkg push` combination of commands to make these changes as we had to in the
-`simple-variant.yaml` case above.
+*simple-variant.yaml* case above.
 
 Now, let us explore the packages further:
 
@@ -1949,7 +1949,7 @@ edge1-77dbfed49b6cb0723b7c672b224de04c0cead67e   network-function-auto-namespace
 ```
 
 We can see that our two new packages are created as draft packages on the edge1 repo.
We can also examine the -PacakgeVariant CRs that have been created: +*PacakgeVariant* CRs that have been created: ```bash kubectl get PackageVariant -n porch-demo @@ -1960,7 +1960,7 @@ network-function-edge1-network-function-b 38m network-function-edge1-network-function-c 38m ``` -It is also interesting to examine the yaml of a PackageVariant: +It is also interesting to examine the yaml of a *PackageVariant*: ```yaml kubectl get PackageVariant -n porch-demo network-function-auto-namespace-edge1-network-function-35079f9f -o yaml diff --git a/content/en/docs/porch/using-porch/porchctl-cli-guide.md b/content/en/docs/porch/using-porch/porchctl-cli-guide.md index c630a688..5b0aa121 100644 --- a/content/en/docs/porch/using-porch/porchctl-cli-guide.md +++ b/content/en/docs/porch/using-porch/porchctl-cli-guide.md @@ -12,7 +12,7 @@ When Porch was ported to Nephio, the `kpt alpha rpkg` commands in kpt were moved To use it locally, [download](https://github.com/nephio-project/porch/releases), unpack and add it to your PATH. -_Optional: Generate the autocompletion script for the specified shell to add to your profile._ +Optional: Generate the autocompletion script for the specified shell to add to your profile. ``` porchctl completion bash @@ -49,7 +49,7 @@ Use "porchctl [command] --help" for more information about a command. ``` -The `porchtcl` command is an administration command for acting on Porch `Repository` (repo) and `PackageRevision` (rpkg) CRs. +The `porchtcl` command is an administration command for acting on Porch *Repository* (repo) and *PackageRevision* (rpkg) CRs. The commands for administering repositories are: diff --git a/content/en/docs/porch/using-porch/usage-porch-kpt-cli.md b/content/en/docs/porch/using-porch/usage-porch-kpt-cli.md index 8b3b2d0a..a5e0c330 100644 --- a/content/en/docs/porch/using-porch/usage-porch-kpt-cli.md +++ b/content/en/docs/porch/using-porch/usage-porch-kpt-cli.md @@ -6,7 +6,7 @@ description: --- -This document is focused on using Porch via the `kpt` CLI. +This document is focused on using Porch via the *kpt* CLI. Installation of Porch, including prerequisites, is covered in a [dedicated document](install-and-using-porch.md). @@ -14,15 +14,15 @@ Installation of Porch, including prerequisites, is covered in a [dedicated docum To use Porch, you will need: -* [`kpt`](https://kpt.dev) -* [`kubectl`](https://kubernetes.io/docs/tasks/tools/#kubectl) -* [`gcloud`](https://cloud.google.com/sdk/gcloud) (if running on GKE) +* [*kpt*](https://kpt.dev) +* [*kubectl*](https://kubernetes.io/docs/tasks/tools/#kubectl) +* [*gcloud*](https://cloud.google.com/sdk/gcloud) (if running on GKE) -Make sure that your `kubectl` context is set up for `kubectl` to interact with the correct Kubernetes instance (see +Make sure that your *kubectl* context is set up for *kubectl* to interact with the correct Kubernetes instance (see [installation instructions](install-and-using-porch.md) or the [running-locally](../running-porch/running-locally.md) guide for details). -To check whether `kubectl` is configured with your Porch cluster (or local instance), run: +To check whether *kubectl* is configured with your Porch cluster (or local instance), run: ```bash kubectl api-resources | grep porch @@ -41,21 +41,21 @@ functions porch.kpt.dev/v1alpha1 true Porch server manages the following resources: -1. `repositories`: a repository (Git or OCI) can be registered with Porch to support discovery or management of KRM +1. 
**repositories**: a repository (Git or OCI) can be registered with Porch to support discovery or management of KRM configuration packages in those repositories, or discovery of KRM functions in those repositories. -2. `packagerevisions`: a specific revision of a KRM configuration package managed by Porch in one of the registered - repositories. This resource represents a _metadata view_ of the KRM configuration package. -3. `packagerevisionresources`: this resource represents the contents of the configuration package (KRM resources +2. **packagerevisions**: a specific revision of a KRM configuration package managed by Porch in one of the registered + repositories. This resource represents a metadata view of the KRM configuration package. +3. **packagerevisionresources**: this resource represents the contents of the configuration package (KRM resources contained in the package) -4. `functions`: function resource represents a KRM function discovered in a repository registered with Porch. Functions +4. **functions**: function resource represents a KRM function discovered in a repository registered with Porch. Functions are only supported with OCI repositories. {{% alert title="Note" color="primary" %}} -`packagerevisions` and `packagerevisionresources` represent different _views_ of the same underlying KRM -configuration package. `packagerevisions` represents the package metadata, and `packagerevisionresources` represents the -package content. The matching resources share the same `name` (as well as API group and version: -`porch.kpt.dev/v1alpha1`) and differ in resource kind (`PackageRevision` and `PackageRevisionResources` respectively). +packagerevisions and packagerevisionresources represent different views of the same underlying KRM +configuration package. packagerevisions represents the package metadata, and packagerevisionresources represents the +package content. The matching resources share the same name (as well as API group and version: +*porch.kpt.dev/v1alpha1*) and differ in resource kind (*PackageRevision* and *PackageRevisionResources* respectively). {{% /alert %}} @@ -93,7 +93,7 @@ $ kpt alpha repo register \ All command line flags supported: * `--directory` - Directory within the repository where to look for packages. -* `--branch` - Branch in the repository where finalized packages are committed (defaults to `main`). +* `--branch` - Branch in the repository where finalized packages are committed (defaults to *main*). * `--name` - Name of the package repository Kubernetes resource. If unspecified, will default to the name portion (last segment) of the repository URL (`blueprint` in the example above) * `--description` - Brief description of the package repository. @@ -102,9 +102,9 @@ All command line flags supported: * `--repo-basic-username` - Username for repository authentication using basic auth. * `--repo-basic-password` - Password for repository authentication using basic auth. -Additionally, common `kubectl` command line flags for controlling aspects of +Additionally, common *kubectl* command line flags for controlling aspects of interaction with the Kubernetes apiserver, logging, and more (this is true for -all `kpt` CLI commands which interact with Porch). +all *kpt* CLI commands which interact with Porch). 
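To illustrate how the flags listed above combine, a hypothetical registration of a deployment repository secured with basic auth might look like the sketch below; the repository URL, username, and password are placeholders rather than values used elsewhere in this guide:

```bash
# Hypothetical example only: register a Git repository as a deployment repository
# using basic auth. Replace the URL and credentials with real values.
kpt alpha repo register \
  --namespace default \
  --name my-deployments \
  --description "deployment packages for the lab cluster" \
  --deployment \
  --repo-basic-username my-user \
  --repo-basic-password my-password \
  https://github.com/example-org/my-deployments.git
```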
Use the `kpt alpha repo get` command to query registered repositories: @@ -116,7 +116,7 @@ blueprints git Package True https://github.com/platkrm/bluepr deployments git Package true True https://github.com/platkrm/deployments.git ``` -The `kpt alpha get` commands support common `kubectl` +The `kpt alpha get` commands support common *kubectl* [flags](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#formatting-output) to format output, for example `kpt alpha repo get --output=yaml`. @@ -129,7 +129,7 @@ $ kpt alpha repo unregister deployments --namespace default ## Package Discovery And Introspection The `kpt alpha rpkg` command group contains commands for interacting with packages managed by the Package Orchestration -service. the `r` prefix used in the command group name stands for 'remote'. +service. the r prefix used in the command group name stands for 'remote'. The `kpt alpha rpkg get` command list the packages in registered repositories: @@ -145,16 +145,16 @@ blueprints-421a5b5e43b03bc697d96f471929efc6ba3f54b3 istions v2 v2 ... ``` -The `LATEST` column indicates whether the package revision is the latest among the revisions of the same package. In the -output above, `v2` is the latest revision of `istions` package and `v1` is the latest revision of `basens` package. +The LATEST column indicates whether the package revision is the latest among the revisions of the same package. In the +output above, v2 is the latest revision of *istions* package and v1 is the latest revision of *basens* package. -The `LIFECYCLE` column indicates the lifecycle stage of the package revision, one of: `Published`, `Draft` or -`Proposed`. +The LIFECYCLE column indicates the lifecycle stage of the package revision, one of: Published, Draft or +Proposed. -The `REVISION` column indicates the revision of the package. Revisions are assigned when a package is `Published` and -starts at `v1`. +The REVISION column indicates the revision of the package. Revisions are assigned when a package is Published and +starts at v1. -The `WORKSPACENAME` column indicates the workspace name of the package. The workspace name is assigned when a draft +The WORKSPACENAME column indicates the workspace name of the package. The workspace name is assigned when a draft revision is created and is used as the branch name for proposed and draft package revisions. The workspace name must be must be unique among package revisions in the same package. @@ -169,7 +169,7 @@ Therefore, the names of the Kubernetes resources representing package revisions Simple filtering of package revisions by name (substring) and revision (exact match) is supported by the CLI using -`--name` and `--revision` flags: +--name and --revision flags: ```bash $ kpt alpha rpkg get --name istio --revision=v2 @@ -178,7 +178,7 @@ NAME PACKAGE WORKSPACENAME REV blueprints-421a5b5e43b03bc697d96f471929efc6ba3f54b3 istions v2 v2 true Published blueprints ``` -The common `kubectl` flags that control output format are available as well: +The common *kubectl* flags that control output format are available as well: ```bash $ kpt alpha rpkg get blueprints-421a5b5e43b03bc697d96f471929efc6ba3f54b3 -ndefault -oyaml @@ -201,7 +201,7 @@ spec: The `kpt alpha rpkg pull` command can be used to read the package resources. 
-The command can be used to print the package revision resources as `ResourceList` to `stdout`, which enables +The command can be used to print the package revision resources as ResourceList to stdout, which enables [chaining](https://kpt.dev/book/04-using-functions/02-imperative-function-execution?id=chaining-functions-using-the-unix-pipe) evaluation of functions on the package revision pulled from the Package Orchestration server. @@ -258,7 +258,7 @@ deployments-c32b851b591b860efda29ba0e006725c8c1f7764 new-package v1 ... ``` -The new package is created in the `Draft` lifecycle stage. This is true also for all commands that create new package +The new package is created in the Draft lifecycle stage. This is true also for all commands that create new package revision (`init`, `clone` and `copy`). Additional flags supported by the `kpt alpha rpkg init` command are: @@ -270,7 +270,7 @@ Additional flags supported by the `kpt alpha rpkg init` command are: * `--site` - Link to page with information about the package. -Use `kpt alpha rpkg clone` command to create a _downstream_ package by cloning an _upstream_ package: +Use `kpt alpha rpkg clone` command to create a *downstream* package by cloning an *upstream* package: ```bash $ kpt alpha rpkg clone blueprints-421a5b5e43b03bc697d96f471929efc6ba3f54b3 istions-clone \ @@ -307,11 +307,11 @@ The flags supported by the `kpt alpha rpkg clone` command are: package is located. * `--ref` - Ref in the upstream repository where the upstream package is located. This can be a branch, tag, or SHA. -* `--repository` - Repository to which package will be cloned (downstream +* `--repository` - Repository to which package will be cloned (*downstream* repository). * `--workspace` - Workspace to assign to the downstream package. * `--strategy` - Update strategy that should be used when updating this package; - one of: `resource-merge`, `fast-forward`, `force-delete-replace`. + one of: resource-merge, fast-forward, force-delete-replace. The `kpt alpha rpkg copy` command can be used to create a new revision of an existing package. It is a means to @@ -328,7 +328,7 @@ NAME PACKAGE WORKSPACENAME blueprints-bf11228f80de09f1a5dd9374dc92ebde3b503689 istions v3 false Draft blueprints ``` -The `kpt alpha rpkg push` command can be used to update the resources (package contents) of a package _draft_: +The `kpt alpha rpkg push` command can be used to update the resources (package contents) of a package *draft*: ```bash $ kpt alpha rpkg pull \ @@ -376,9 +376,9 @@ blueprints-bf11228f80de09f1a5dd9374dc92ebde3b503689 deleted ## Package Lifecycle and Approval Flow -Authoring is performed on the package revisions in the _Draft_ lifecycle stage. Before a package can be deployed or -cloned, it must be _Published_. The approval flow is the process by which the package is advanced from _Draft_ state -through _Proposed_ state and finally to _Published_ lifecycle stage. +Authoring is performed on the package revisions in the Draft lifecycle stage. Before a package can be deployed or +cloned, it must be Published. The approval flow is the process by which the package is advanced from Draft state +through Proposed state and finally to Published lifecycle stage. The commands used to manage package lifecycle stages include: @@ -386,7 +386,7 @@ The commands used to manage package lifecycle stages include: * `approve` - Approves a proposal to finalize a package revision. 
* `reject` - Rejects a proposal to finalize a package revision -In the [Authoring Packages](#authoring-packages) section above we created several _draft_ packages and in this section +In the [Authoring Packages](#authoring-packages) section above we created several draft packages and in this section we will create proposals for publishing some of them. ```bash @@ -416,7 +416,7 @@ deployments-11ca1db650fa4bfa33deeb7f488fbdc50cdb3b82 istions-clone v1 deployments-c32b851b591b860efda29ba0e006725c8c1f7764 new-package v1 false Proposed deployments ``` -At this point, a person in _platform administrator_ role, or even an automated process, will review and either approve +At this point, a person in platform administrator role, or even an automated process, will review and either approve or reject the proposals. To aid with the decision, the platform administrator may inspect the package contents using the commands above, such as `kpt alpha rpkg pull`. @@ -442,12 +442,12 @@ deployments-11ca1db650fa4bfa33deeb7f488fbdc50cdb3b82 istions-clone v1 deployments-c32b851b591b860efda29ba0e006725c8c1f7764 new-package v1 false Draft deployments ``` -Observe that the rejected proposal returned the package revision back to _Draft_ lifecycle stage. The package whose -proposal was approved is now in _Published_ state. +Observe that the rejected proposal returned the package revision back to Draft lifecycle stage. The package whose +proposal was approved is now in Published state. ## Deploying a Package -Commands used in the context of deploying a package include are in the `kpt alpha sync` command group (named `sync` to +Commands used in the context of deploying a package include are in the `kpt alpha sync` command group (named sync to emphasize that Config Sync is the deploying mechanism and that configuration is being synchronized with the actuation target as a means of deployment) and include: From 0e5a583186d51dc184e4a3db5b0fc02f783e25d9 Mon Sep 17 00:00:00 2001 From: Dominika Schweier Date: Mon, 28 Oct 2024 15:15:54 +0100 Subject: [PATCH 2/6] Adding examples to linkcheck ignore Signed-off-by: Dominika Schweier --- .linkspector.yml | 3 +++ 1 file changed, 3 insertions(+) diff --git a/.linkspector.yml b/.linkspector.yml index 00e62e46..4d4037e6 100644 --- a/.linkspector.yml +++ b/.linkspector.yml @@ -11,6 +11,9 @@ ignorePatterns: - pattern: "^http://localhost.*$" - pattern: "^http://HOSTNAME:PORT.*$" - pattern: "172\\.18\\.255\\.200" + - pattern: "https://\\*kpt\\*\\.dev/" + - pattern: "https://my-gitlab\\.com/joe\\.bloggs/blueprints\\.git" + - pattern: "http://172\\.18\\.0\\.200:3000/nephio/\" replacementPatterns: - pattern: ".md#.*$" replacement: ".md" From 843ae3d7e975ec6b407fe7ce08a48cc581d11f4c Mon Sep 17 00:00:00 2001 From: Dominika Schweier Date: Mon, 28 Oct 2024 15:19:49 +0100 Subject: [PATCH 3/6] Fixing yaml Signed-off-by: Dominika Schweier --- .linkspector.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.linkspector.yml b/.linkspector.yml index 4d4037e6..ab397a04 100644 --- a/.linkspector.yml +++ b/.linkspector.yml @@ -13,7 +13,7 @@ ignorePatterns: - pattern: "172\\.18\\.255\\.200" - pattern: "https://\\*kpt\\*\\.dev/" - pattern: "https://my-gitlab\\.com/joe\\.bloggs/blueprints\\.git" - - pattern: "http://172\\.18\\.0\\.200:3000/nephio/\" + - pattern: "http://172\\.18\\.0\\.200:3000/nephio/" replacementPatterns: - pattern: ".md#.*$" replacement: ".md" From 5d969e9f291978e11475fa6798ff8c87634ad3b9 Mon Sep 17 00:00:00 2001 From: Schweier Dominika Date: Thu, 31 Oct 2024 10:34:06 
+0100 Subject: [PATCH 4/6] Apply suggestions from code review Co-authored-by: Liam Fallon <35595825+liamfallon@users.noreply.github.com> --- .../en/docs/guides/contributor-guides/unit-testing-mockery.md | 2 +- content/en/docs/porch/package-variant.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/content/en/docs/guides/contributor-guides/unit-testing-mockery.md b/content/en/docs/guides/contributor-guides/unit-testing-mockery.md index eae4dd9f..8df5f8ec 100644 --- a/content/en/docs/guides/contributor-guides/unit-testing-mockery.md +++ b/content/en/docs/guides/contributor-guides/unit-testing-mockery.md @@ -53,7 +53,7 @@ We provide a list of the packages for which we want to generate mocks. In this e 6. dir: "{{.InterfaceDir}}" ``` -We want mocks to be generated for the GiteaClien go interface (line 4). The {{.InterfaceDir}} parameter (line 6) asks Mockery to generate the mock file in the same directory as the interface is located. +We want mocks to be generated for the GiteaClient go interface (line 4). The {{.InterfaceDir}} parameter (line 6) asks Mockery to generate the mock file in the same directory as the interface is located. ### Example 2 diff --git a/content/en/docs/porch/package-variant.md b/content/en/docs/porch/package-variant.md index 6afbc318..907b89ba 100644 --- a/content/en/docs/porch/package-variant.md +++ b/content/en/docs/porch/package-variant.md @@ -1130,7 +1130,7 @@ the following variables are available: | target | The target object (details vary; see below). | There is one expression that is an exception to the table above. Since the repository value corresponds to the -Repository of the downstream, we must first evaluate the `ownstream.repoExpr expression to find that repository. +Repository of the downstream, we must first evaluate the `downstream.repoExpr expression to find that repository. Thus, for that expression only, repository is not a valid variable. There is one more variable available across all CEL expressions: the target variable. This variable has a meaning that From d18129a7b289d099ec1bea9432c630c275e9f0eb Mon Sep 17 00:00:00 2001 From: Dominika Schweier Date: Thu, 31 Oct 2024 10:35:46 +0100 Subject: [PATCH 5/6] Small correction --- content/en/docs/porch/package-variant.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/porch/package-variant.md b/content/en/docs/porch/package-variant.md index 907b89ba..d6b3995a 100644 --- a/content/en/docs/porch/package-variant.md +++ b/content/en/docs/porch/package-variant.md @@ -1130,7 +1130,7 @@ the following variables are available: | target | The target object (details vary; see below). | There is one expression that is an exception to the table above. Since the repository value corresponds to the -Repository of the downstream, we must first evaluate the `downstream.repoExpr expression to find that repository. +Repository of the downstream, we must first evaluate the downstream.repoExpr expression to find that repository. Thus, for that expression only, repository is not a valid variable. There is one more variable available across all CEL expressions: the target variable. 
This variable has a meaning that From 05d91cc413e7c6590167afeef4023beeac54e324 Mon Sep 17 00:00:00 2001 From: Gergely Csatari Date: Thu, 14 Nov 2024 20:29:38 +0200 Subject: [PATCH 6/6] Jure retriggering reviewdog Signed-off-by: Gergely Csatari --- content/en/docs/_index.md | 1 + 1 file changed, 1 insertion(+) diff --git a/content/en/docs/_index.md b/content/en/docs/_index.md index 5b7601e3..a2c79e3a 100644 --- a/content/en/docs/_index.md +++ b/content/en/docs/_index.md @@ -93,3 +93,4 @@ demonstration purposes, the same principles and code can be used for managing other infrastructure and network functions. The *uniformity in systems* principle means that as long as something is manageable via the Kubernetes Resource Model, it is manageable via Nephio. +
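
For quick reference while reviewing the *package-orchestration.md* hunks earlier in this series, the following is a minimal sketch of the authoring-and-approval flow those hunks describe, using the same `kpt alpha rpkg` commands. It assumes a Porch server with the *deployments* repository registered and reuses the *new-package* draft revision name shown in the documentation examples; it is illustrative only and is not part of the commits above.

```bash
# Sketch only: assumes kpt with the alpha Porch command group available and
# the "deployments" repository registered, as in the documentation examples.
PKG=deployments-c32b851b591b860efda29ba0e006725c8c1f7764   # the new-package draft revision

# Pull the draft's resources, edit them locally, then push the changes back.
kpt alpha rpkg pull "$PKG" ./new-package -ndefault
kpt alpha rpkg push "$PKG" ./new-package -ndefault

# Move the draft through the lifecycle: Draft -> Proposed -> Published.
kpt alpha rpkg propose "$PKG" -ndefault
kpt alpha rpkg approve "$PKG" -ndefault

# Confirm the revision is now Published (a rejected proposal would have
# returned it to the Draft lifecycle stage instead).
kpt alpha rpkg get "$PKG" -ndefault
```

The same flow should apply unchanged with the newer `porchctl rpkg` command group, which provides equivalent subcommands; only the binary name differs.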