This project is intended as a reference for containerized .NET applications deployed to Kubernetes using CI/CD pipelines. It contains a fully functional ASP.NET Core 2.2 WebAPI web service, the Kubernetes templates and Helm charts needed to deploy the application, and a Jenkinsfile that can be used to stand up a CI/CD pipeline for the project.
The sample application is a very basic ASP.NET Core 2.2 WebAPI project. Details about the application itself are contained in the `sample-dotnet-app` subdirectory.
This repository contains references for the following deployment strategies. Each supports deploying to any Kubernetes cluster (including OpenShift). Consult the README for each for further details.
The repo also contains a Jenkinsfile that can be used to set up a CI/CD pipeline for the application. The pipeline is intended to run as changes to branches are pushed to the origin repository of the application (e.g. a Git server hosted by GitHub or Bitbucket). Depending on the branch, the pipeline builds the deployable container image, deploys the application to the intended environments, and automates versioning-related tasks (e.g. creating git tags and incrementing the application version).
The pipeline assumes the repository uses a Gitflow-based branching strategy. Specifically, the branching model has the following characteristics:
- The main branch is `develop`. All feature branches are created from this branch. Upon completion, the feature branches are merged back into the `develop` branch and deleted.
- The `develop` branch is occasionally merged into the `master` branch to create release candidates.
- The `master` branch is never merged into `develop`. Instead, if fixes need to be made for a release, a branch should be created from `master`, and any changes should be both merged into `master` and cherry-picked back into `develop`.

The pipeline automates the majority of the activity related to versioning the application. The version is expected to follow typical semantic versioning and is stored in the `version` field of the `Chart.yaml` file used for Helm installs. This version number is expected to change under two circumstances:
- It is automatically changed by the pipeline as a result of changes pushed to the `master` branch. In this case, the pipeline tags the `HEAD` of the `master` branch with the version number contained in the `Chart.yaml` file. After pushing the tag to the origin repository, the pipeline then checks out the current `HEAD` of the `develop` branch, increments the third digit of the semantic version by one, and pushes the new `HEAD` of `develop` to the origin repository.
- It is manually changed by the maintainers of the application if the major and/or minor versions need to be incremented. The change is applied like any other change to the application source: using a feature branch that is merged into `develop` first.

The following sections describe each stage of the CI/CD pipeline and indicate the branch(es) for which the stage is run.
Run for: All branches
The intention of this stage is to do any initialization that modifies the pipeline environment before running any of the other stages. Currently, this includes the following:
- Load shared Groovy modules - These are reusable functions written in Groovy that encapsulate high-level tasks related to a specific tool or concern. They are organized into files, and each file contains one or more functions that belong to the same grouping (e.g. Helm-related tasks). Each file is loaded as a namespace and bound to an appropriately named field in the `modules` global variable.
- Load Kubernetes pod templates - These are YAML files describing the pods used to run certain stages in the pipeline. For example, deployment stages are run with the `helm-agent` pod template. These templates are stored in external files to reduce clutter in the main `Jenkinsfile`. They are loaded in this stage and bound to environment variables so they can be used by later stages.
- Set necessary environment variables - The following environment variables are set so they can be used by later stages (see the sketch after this list):
  - `buildVersion` - The build version is largely based on the version stored in the `Chart.yaml` file. For the `master` branch, the build version is exactly that. For other branches, the build version is determined by appending a '-' followed by the name of the branch (e.g. 1.0.0-develop or 2.3.5-some-feature-request).
  - `buildVersionWithHash` - This is a combination of `buildVersion` and the short git commit hash. It is injected into the `sample-dotnet-app` instance itself as the `APP_VERSION` environment variable. This value is echoed back by the app at the `/info` endpoint.
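
As a rough sketch of what this stage might look like in a scripted Jenkinsfile (the file paths, module names, chart location, and the use of the `readYaml` step from the Pipeline Utility Steps plugin are assumptions for illustration, not a copy of the actual Jenkinsfile):

```groovy
// Illustrative "Initialize" stage; paths and names are assumptions, not the actual repo layout.
def modules = [:]

stage('Initialize') {
    node {
        checkout scm

        // Load shared Groovy modules and bind each one as a namespace on `modules`.
        modules.helm = load 'jenkins/modules/helm.groovy'
        modules.git  = load 'jenkins/modules/git.groovy'

        // Load Kubernetes pod templates into environment variables for later stages.
        env.BUILDAH_AGENT_YAML = readFile 'jenkins/agents/buildah-agent.yaml'
        env.HELM_AGENT_YAML    = readFile 'jenkins/agents/helm-agent.yaml'

        // Derive buildVersion from the Helm chart version and the branch name.
        def chart = readYaml file: 'deployment/helm-k8s/Chart.yaml'
        def buildVersion = env.BRANCH_NAME == 'master' ?
            chart.version : "${chart.version}-${env.BRANCH_NAME}"

        // buildVersionWithHash additionally carries the short git commit hash;
        // it is injected into the app as APP_VERSION and echoed back by /info.
        def shortHash = sh(script: 'git rev-parse --short HEAD', returnStdout: true).trim()
        env.buildVersion         = buildVersion
        env.buildVersionWithHash = "${buildVersion}-${shortHash}"
    }
}
```
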
Run for: All branches
This is the build stage. It is divided into two sections (sketched below):
- Build the .NET binaries (typically this should also include running automated tests and other code analysis checks) using the multi-stage `Dockerfile`.
- Build the deployable container image. If the branch is `master` or `develop`, deliver the container image to the image registry for long-term storage.

This stage is run using the `buildah-agent`, which contains the `buildah` tool.
NOTE: The `buildah` container needs to run as privileged to execute successfully.
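
A minimal sketch of how this stage might drive `buildah` inside the `buildah-agent` pod (the registry host, container name, and credential wiring are illustrative assumptions):

```groovy
// Illustrative build-and-deliver steps; the registry URL and container name are assumptions.
stage('Build and Deliver') {
    container('buildah') {
        // The multi-stage Dockerfile compiles the .NET binaries (and can run tests)
        // before producing the final runtime image.
        sh "buildah bud -f Dockerfile -t sample-dotnet-app:${env.buildVersion} ."

        // Only master and develop images are pushed to the registry for long-term storage.
        if (env.BRANCH_NAME in ['master', 'develop']) {
            withCredentials([usernamePassword(credentialsId: 'image-registry-auth',
                                              usernameVariable: 'REG_USER',
                                              passwordVariable: 'REG_PASS')]) {
                sh "buildah push --creds \"\$REG_USER:\$REG_PASS\" " +
                   "sample-dotnet-app:${env.buildVersion} " +
                   "docker://registry.example.com/sample-dotnet-app:${env.buildVersion}"
            }
        }
    }
}
```
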
Run for: Only the `master` branch
After a successful "build and deliver" stage, this stage runs the automated versioning tasks if the pipeline is running for the `master` branch.
- A Git tag is created using the build version and pushed to the origin repository.
- Then, the pipeline checks out the `develop` branch, updates the `version` field in the `Chart.yaml` file by incrementing the last digit by one, commits the change, and pushes it to the origin repository (sketched below).

This stage is run using `agent any`, as no specific Kubernetes pods need to be spawned to execute this stage.
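
A sketch of what these versioning tasks boil down to in shell steps (the chart path, tag naming, and the `awk`/`sed` patch bump are illustrative; handling of the `git-auth` credential for the pushes is omitted):

```groovy
// Illustrative versioning steps; run only when the pipeline is building master.
stage('Tag and Increment Version') {
    // Tag the current HEAD of master with the build version and push the tag.
    sh """
        git tag ${env.buildVersion}
        git push origin ${env.buildVersion}
    """

    // Check out develop, bump the patch digit in Chart.yaml, commit, and push.
    sh '''
        git checkout develop
        current=$(grep '^version:' deployment/helm-k8s/Chart.yaml | awk '{print $2}')
        next=$(echo "$current" | awk -F. '{printf "%d.%d.%d", $1, $2, $3 + 1}')
        sed -i "s/^version: .*/version: $next/" deployment/helm-k8s/Chart.yaml
        git commit -am "Increment version to $next"
        git push origin develop
    '''
}
```
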
Run for: Only the `master` and `develop` branches
After a successful "build and deliver" stage, this stage deploys the application to a staging environment if the pipeline is running for the `develop` or `master` branch. For the `develop` branch, this means deploying the application to the `sample-projects-dev` namespace. For the `master` branch, it means starting the release process by deploying to the `sample-projects-qa` namespace.
The deploy is done using the `helm upgrade` command with the `--install` flag. This allows first-time deploys to an environment to execute as a new install, while subsequent deploys are executed as upgrades. The Helm release name, `sample-dotnet-app-dev` for `develop` and `sample-dotnet-app-qa` for `master`, is reused each time (see the sketch after the pull-policy discussion below).
Since the version in `Chart.yaml` may not change across multiple deploys off the `develop` branch, the same image tag will often be updated with new container images. For this reason, an image pull policy of `Always` is used for deployments off the `develop` branch.
On the other hand, since the version is always incremented for subsequent builds on the `master` branch, a pull policy of `IfNotPresent` is used for deployments off the `master` branch. This also falls in line with the "build once and promote" best practice for CI/CD pipelines: the release candidate is built once, deployed to the "QA" environment, and then promoted to the production environment if deemed acceptable.
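
As a sketch, the `develop`-branch deploy might boil down to something like the following (the chart path, the `helm` container name, and the `image.tag`/`image.pullPolicy` value names are assumptions about the chart, not taken from it):

```groovy
// Illustrative staging deploy for the develop branch (Helm 2, Tiller in the "tiller" namespace).
container('helm') {
    sh([
        'helm upgrade --install sample-dotnet-app-dev deployment/helm-k8s',
        '--namespace sample-projects-dev',
        '--tiller-namespace tiller',
        "--set image.tag=${env.buildVersion}",
        '--set image.pullPolicy=Always'
    ].join(' '))
}
```

For `master`, the equivalent command would use the `sample-dotnet-app-qa` release name, the `sample-projects-qa` namespace, and a pull policy of `IfNotPresent`.
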
This stage is run using the `helm-agent`, which contains the Kubernetes and Helm CLI clients.
Run for: Only the `master` branch
This stage allows for a manual approval gateway before a version of the application is promoted to the production environment. The pipeline simply waits up to 5 days (configurable) for a human to click "Proceed" or "Abort". This is intended to allow time for manual testing to take place in the "QA" environment.
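
In a Jenkinsfile this kind of gate is typically an `input` step wrapped in a `timeout`; a minimal sketch (the stage name and message text are illustrative):

```groovy
// Illustrative manual gate: wait up to 5 days for a human to approve or abort.
stage('Approve Promotion to Production') {
    timeout(time: 5, unit: 'DAYS') {
        input message: "Promote version ${env.buildVersion} to production?"
    }
}
```
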
This stage is run using `agent any`, as no specific Kubernetes pods need to be spawned to execute this stage.
Run for: Only the `master` branch
This is the final stage of the pipeline and is only run for the `master` branch. Application versions that pass testing and verification in the "QA" environment are deployed to the "Production" environment. The deployment is done in the same manner as in the "Deploy to Staging" stage above.
This stage is run using the `helm-agent`, which contains the Kubernetes and Helm CLI clients.
The pipeline is created as a "Multi Branch Pipeline" in Jenkins.
The following dependencies must be met before the pipeline can be used.
- Jenkins configured with the Kubernetes plugin.
- A Jenkins `username and password` credential named `git-auth` that allows Jenkins and the pipeline to fully consume this Git repository.
- A Jenkins `username and password` credential named `image-registry-auth` that allows pushing the application container to the target docker registry.
- An instance of the `tiller` server installed and running in the `tiller` namespace on the target Kubernetes cluster.
- The `sample-projects`, `sample-projects-qa`, and `sample-projects-dev` namespaces created in Kubernetes, with `tiller` having the ability to manage projects inside them.
- A Jenkins `username and password` credential named `k8s-cluster-auth` that contains the target Kubernetes cluster URL and an authentication token for the service account that will be used to connect to the cluster. This service account needs to have `edit` privileges in the `tiller` namespace.
- The namespace where Jenkins is deployed should have a `jenkins-privileged` service account that has the ability to run privileged containers. This is needed so that the `buildah` container instance can run as privileged.

The following set of instructions is an example of how to achieve the above setup in a vanilla Minishift instance.
- Install the Jenkins (ephemeral) service from the provided catalog under the `jenkins` namespace.
- Create a `jenkins-privileged` service account and grant it the ability to run privileged containers. Run the following as a user with cluster admin access.

  ```
  oc login -u system:admin
  oc create sa jenkins-privileged -n jenkins
  oc adm policy add-scc-to-user privileged -n jenkins -z jenkins-privileged
  oc login -u developer
  ```

- Install the `tiller` server in Minishift under the `tiller` namespace. See deployment/helm-k8s/README.md for instructions.
Give the
tiller
service accountedit
privileges intiller
namespace.oc policy add-role-to-user edit -z tiller -n tiller
- Get the service token for the `tiller` service account.

  ```
  oc serviceaccounts get-token tiller -n tiller
  ```

- In Jenkins, create the following three credentials:
  - `git-auth`: A `username and password` credential that will be used to pull this Git repository.
  - `image-registry-auth`: A `username and password` credential containing the username and password for the image registry where the `sample-dotnet-app` container image will be pushed.
  - `k8s-cluster-auth`: A `username and password` credential containing the Kubernetes cluster URL and the `tiller` service account login token retrieved in the previous step.
Create the
dev
,qa
, andproduction
namespaces and give thetiller
service accountedit
privileges.oc new-project sample-projects-dev oc new-project sample-projects-qa oc new-project sample-projects oc policy add-role-to-user edit \ "system:serviceaccount:tiller:tiller" \ -n sample-projects-dev oc policy add-role-to-user edit \ "system:serviceaccount:tiller:tiller" \ -n sample-projects-qa oc policy add-role-to-user edit \ "system:serviceaccount:tiller:tiller" \ -n sample-projects
- Create a "Multi Branch Pipeline" in Jenkins using this Git repository as the source and the `git-auth` credential created above as the credentials. At a minimum, set up periodic polling of all branches so builds can trigger automatically. However, it is strongly recommended as a best practice to set up webhooks from the repository server so builds are triggered by push events from the Git server rather than as a consequence of polling from Jenkins.