# Contributing to RPK

The following is a set of guidelines for contributing to RPK.
The Reference Platform for Kubernetes team welcomes contributions from the community. Before you start working with RPK, please read our Developer Certificate of Origin. All contributions to this repository must be signed as described on that page. Your signature certifies that you wrote the patch or have the right to pass it on as an open-source patch.
The goal of RPK is to provide a reference implementation platform which encompasses the ideas and best practices as captured in the Tanzu Developer Center Kubernetes Guides (https://tanzu.vmware.com/developer/guides/kubernetes/). Any contributions to RPK should align with the ideals and practices that are found there.
For more information about RPK and its relation to TDC, view the following links:
The following outlines the simple developer workflow that we use for RPK.
- Contributor discovers a bug or has an idea for a new feature or an improvement to the existing processes.
- Contributor opens an issue.
- Contributor assigns themselves to the issue.
- Contributor creates a fork in GitHub to their personal GitHub account.
- Contributor clones the RPK repo from their fork (e.g. `git clone git@github.com:<GITHUB_ID>/reference-platform-kubernetes.git`). See Development Environment for information on environment setup.
- Working in the new fork on the local development workstation, the contributor modifies the code needed to address the opened and approved issue.
- Contributor commits and pushes the changes to their fork (e.g. `git add .; git commit -a -s -m 'Fixes #1, my commit message'; git push --set-upstream origin my-cool-new-feature`).
- We require signed commits as per the DCO. Here is the process to follow to set up your workstation: https://docs.github.com/en/github/authenticating-to-github/managing-commit-signature-verification/signing-commits
- Contributor opens a merge request into the develop branch in GitHub and fills out the appropriate information in the Merge Request.
- A CI pipeline is kicked off. See PIPELINE.md for more details.
- NOTE: failed CI pipeline runs will not be merged.
- NOTE: please keep commits scoped to their individual modules (e.g. `container-registry` or `storage`), as this helps unit test the independent modules.
- If additional changes are requested, steps 7-8 can be repeated until the branch is approved for merge by the maintainers.
- Once the request is approved, your code is merged!
- When a new release is cut, code is merged from develop > master.
- Install TKG
- Provide admin access to TKG
- Choose a DNS Solution
- Choose a Cloud Provider
- Set inventory variables based on your Cloud Provider:
As per DCO, we require signed commits with a 'commit signed off by ...' message. See https://docs.github.com/en/github/authenticating-to-github/managing-commit-signature-verification/signing-commits for more details on setup.
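The sign-off described above can be sketched as follows. This is a minimal illustration using a throwaway repository; in practice you run the `git commit -s` inside your RPK clone, and the name/email values below are placeholders for your own identity:

```shell
# Minimal DCO sign-off sketch: -s appends the "Signed-off-by:" trailer.
# The throwaway repo is only so this example is self-contained.
cd "$(mktemp -d)" && git init -q .
git config user.name  "Your Name"
git config user.email "you@example.com"
echo "example change" > README.md
git add README.md
git commit -q -s -m "Fixes #1, my commit message"
git log -1 --format=%B    # last line: Signed-off-by: Your Name <you@example.com>
```

Note that `-s` (the DCO sign-off trailer) is distinct from `-S` (a GPG signature); the GitHub link above covers the latter.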
Ansible requires several Python dependencies in order to run RPK. Oftentimes, this presents challenges when porting between different operating systems and Python versions. To get around these issues, you can build RPK in a Docker image. To do so, run:
make build
For custom names and versions, run:
IMAGE=rpk-custom VERSION=v1.0.0 make build
To test an individual role using your previously built Docker image:
ROLE=my-role make deploy.test.role
The above is run in a CI pipeline each time you push your code, but only for the modules that were updated. For this reason, please keep your commits and pushes scoped to each individual module!
To test a full end-to-end deployment using your previously built Docker image, also using the current state of the project, run:
make deploy.test
⚠️ this process is not formally supported, but may be useful for developers who want to use `ansible-playbook` directly.
An alternative to developing with the Docker image is to install dependencies to your development machine directly. This avoids having to rebuild the image each time your `requirements.txt` file changes, but could make it difficult to develop across different platforms.
To set up your local development environment, first set up venv. This will allow you to install Python packages for RPK into a virtual environment, rather than system-wide:
python3 -m venv ansible-virtualenv
Activate the venv:
. ansible-virtualenv/bin/activate
Install Python dependencies:
pip3 install -r requirements.txt
Run the playbook against a single role:
./bin/rpk -r MY_ROLE_NAME
The above roughly translates into:
ansible-playbook test.yaml -e 'rpk_role_name=MY_ROLE_NAME' -c local -i build/inventory.yaml
Run a full deployment of RPK:
./bin/rpk
The above roughly translates into:
ansible-playbook site.yaml -c local -i build/inventory.yaml --skip-tags module_dependencies
A common question that gets asked is "Why use a declarative automation tool to apply declarative manifests in Kubernetes?". The answer is simple. We wanted to quickly provide a reference implementation of a platform using best practices from TDC, and we wanted to quickly "make it exist". Using the talent that we had, Ansible was the easiest, shortest path to success.
Long-term plans for Ansible are still up in the air.
In RPK, we now refer to Ansible roles as components.
For each module from the Tanzu Developer Center Kubernetes Guides (https://tanzu.vmware.com/developer/guides/kubernetes/), we tie that into an Ansible role (RPK Component). This allows us the ability to layer on the components as building blocks with our existing processes.
For each component, we require the following structure:
├── clean
│   └── tasks
│       └── main.yaml
├── common
│   └── defaults
│       └── main.yaml
├── defaults
│   └── main.yaml
├── demo
│   └── tasks
│       └── main.yaml
├── tasks
│   └── main.yaml
├── templates
├── README.md
└── .dependencies.yaml
The following are deviations from common Ansible best practices:
- Use of `common/defaults/` for variables. This allows us to require the variables in other roles without having to statically define variables.
- Use of `.dependencies.yaml` for module dependencies. This structure defines ONLY the modules/roles upon which each role is dependent.
- Use of the `clean/` sub-role. This is the code to clean up the Ansible role, using `ROLE=my-role make clean.role`.
- Use of the `demo/` sub-role. This is the code to demonstrate the role (either print out info or create K8S objects), using `ROLE=my-role make demo.role`.
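As a sketch of the second point above, a role's `.dependencies.yaml` lists only the roles it directly depends on. The exact schema is an assumption here (a bare list of role names); check an existing role in the repository for the real format:

```yaml
# Hypothetical .dependencies.yaml for an example component.
# Schema and role names are illustrative placeholders.
- "security"
- "storage"
```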
Profiles were added to RPK to address differing needs:
- Support to deploy all of Tanzu Advanced features in a single pass
- A means for grouping RPK components that should belong together
- To support multi-cluster deployments where an end-user may want to provide differing platform services in differing clusters
RPK provides two base profiles:
- `platform` => this is the default
- `advanced` => this is used to install Tanzu Advanced components
To view available profiles, run:
make list.profiles
To view all available RPK components, run:
make list.components.all
To view available components in the `platform` profile in the order they will be deployed, run:
make list.components.platform
To view available components in the `advanced` profile in the order they will be deployed, run:
make list.components.advanced
Today, `components.yaml` can be found in the `profiles` directory. This file defines several things about each RPK component:
- Order of deployment. The order of each RPK component in the list is exactly how it will execute during that profile's deployment.
- Its name, which must be the directory name within the project's `roles` directory.
- Whether or not it is enabled by default. We provide some components that can be deployed out of band as desired, but set them to disabled for a normal deployment.
- Whether or not it has a demo component.
- A list of profiles that the component applies to. A component can belong to as many profiles as you would like. We currently do not support custom profiles, but we are looking to change this in a future release.
NOTE: A component can have dependencies that don't typically run in a profile. This is fine so long as all of a component's dependencies have been listed in the component's `.dependencies.yaml`. See Component Dependencies for more details.
- name: "workload-tenancy"
enabled: true
profiles:
- "platform"
These values can be set in `profiles/components.yaml`:
| Variable Name | Description | Required | Type |
| --- | --- | --- | --- |
| `name` | Name of the RPK component (Ansible role) directory, relative to the `roles` directory. Example: for `roles/workload-tenancy`, the name would be `workload-tenancy`. | yes | string |
| `enabled` | Whether the component is enabled or not | yes | boolean |
| `profiles` | List of profiles the component applies to | yes | list |
Once a component has been modified or created in `components.yaml`, the profiles must be rebuilt. `components.yaml` should be the only place you make changes, as it is the specification for how all of the RPK profiles should be built.
To rebuild the RPK profiles from an updated `components.yaml`, run `make build.profiles`. If you are developing a new role, you will want to add the changed profile files to your branch, commit, and push the changes.
The documentation found in `/docs/providers` has been moved into a templated format. Do not edit the files in `/docs/providers` directly, or you will find your changes eventually get overwritten.
To properly edit these files, you can edit the common documentation sections in /roles/support/build-docs/templates/sections/common.
To properly edit the cloud provider documentation templates, you can edit the files in /roles/support/build-docs/templates/sections/main.
To properly edit the titles of the cloud provider documentation, you can edit the template files (by provider name) in /roles/support/build-docs/templates.
Further details can be found in the support/build-docs README.md.
Once you have modified any of the template files or common sections for the cloud provider documentation, you must rebuild the templated documentation.
This is performed by running the following command:
make build.docs
Add and commit the changed files into the repository.
Whenever a new role is required, use the command `ROLE=my-new-role make new.role`, replacing `my-new-role` with your desired RPK component name.
This will provide a new component structure consistent with the remainder of the project.
This should be completed once you have assigned your component to a profile in `components.yaml`.
See Building and Rebuilding RPK Profiles. This is necessary because the process rebuilds the file and tables at `roles/support/build-docs/templates/sections/additional_vars.md` based on the profile variables, which you have just rebuilt after assigning the new role in `components.yaml`.
The use of the standard Ansible role `meta/main.yaml` file to resolve component dependencies has been deprecated in RPK. Instead, dependencies are defined via a `.dependencies.yaml` file within each role. Dependencies are resolved with the following process:
- When `rpk_profile` is set to `single` (set when requesting an individual role), dependency resolution is turned on.
- When `rpk_profile` is set to anything but `single`, dependencies are not resolved, and the RPK deployer trusts that all dependencies are pre-defined in the requested profile in their proper order.
- Once RPK has detected that the profile is `single`, it passes the component into a custom Ansible filter, `rpk_resolve_deps`.
- The custom filter resolves the dependencies based on the `.dependencies.yaml` file within the requested role, to ensure all required components have been deployed within the cluster first.
- The dependencies are stored in the `rpk_components` variable, as resolved in the previous step.
- The `site.yaml` file loops through `rpk_components` and runs each individual component.
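The behavior described above can be modeled with a small sketch. This is NOT the project's `rpk_resolve_deps` filter (which is a custom Ansible filter plugin); it is a shell stand-in that assumes a hypothetical bare-list `.dependencies.yaml` format, and it deliberately does not walk child dependencies:

```shell
# Illustrative only: declared dependencies print first, then the
# requested role itself. Child dependencies are NOT walked, so every
# dependency must be listed directly in the role's .dependencies.yaml.
resolve_deps() {
  role="$1"
  deps_file="roles/${role}/.dependencies.yaml"
  if [ -f "$deps_file" ]; then
    # assumes hypothetical list items of the form: - "role-name"
    sed -n 's/^- *"\{0,1\}\([A-Za-z0-9_-]*\)"\{0,1\}.*/\1/p' "$deps_file"
  fi
  echo "$role"
}
```

For example, if a role declared `- "storage"` in its `.dependencies.yaml`, `resolve_deps` for that role would print `storage` followed by the role's own name, mirroring the deployment order RPK computes.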
This method has a couple limitations:
- It is not a native Ansible-supported methodology like `meta/main.yaml` is. We went with this methodology because the out-of-the-box method would re-run dependencies when requesting a single role, which significantly increased the time it took to deploy a role.
- Child dependencies are not resolved. This means that you must specify ALL dependencies you need for a given role, and not rely on the child role to resolve a dependency for you. This methodology is preferred because it leaves a documentation trail of what components get installed with each role, and greatly simplifies the logic needed to compile the required roles.
Given that the long-term roadmap for the project's use of Ansible as a deployment tool is still open to question until we evaluate further, please prefer simplicity over complexity. There are many cool and crazy things that we can do with Ansible; however, we prefer to simply use Ansible to do the following:
- Apply Kubernetes manifests using the `common/manifest` role.
- Talk to APIs, generally during role demonstration using `ROLE=my-role make demo.role`. Sometimes we also talk to APIs during role deployment, but this should be limited.
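As a purely hypothetical illustration of the first point, a role might delegate manifest application to `common/manifest` roughly like this; the variable name below is a placeholder, not the documented interface of that role:

```yaml
# Hypothetical sketch -- check the common/manifest role for its real
# variables; "manifest_template" here is an assumed placeholder name.
- name: "apply this component's manifests"
  include_role:
    name: "common/manifest"
  vars:
    manifest_template: "my-component.yaml.j2"
```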
Please open an Issue!