- Prerequisites
- Getting Started
- Load Testing
- Exposing the Ingress Controller
- Running the Minikube Service without Ingress
- GKE Instructions
## Prerequisites

- To run a local Kubernetes cluster, we recommend using Minikube on your local machine.
- Ensure that the Metrics Server add-on is enabled; otherwise, autoscaling and ingress will not work.
- For Minikube:

  ```sh
  # For the Horizontal Pod Autoscaler
  minikube addons enable metrics-server

  # For the NGINX Ingress Controller
  # Install
  minikube addons enable ingress
  # Verify
  kubectl get pods -n ingress-nginx
  ```
- For Kubernetes:

  ```sh
  # Metrics Server
  kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

  # Ingress Controller: deploy with a load balancer (GKE, AKS, EKS)
  kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/cloud/deploy.yaml

  # Validate
  kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx
  kubectl get services ingress-nginx-controller --namespace=ingress-nginx
  ```
## Getting Started

- Run the command from the project root:

  ```sh
  make k8s-up
  ```
## Load Testing

- Run the load test script:

  ```sh
  ./scripts/k8s-test-load.sh
  ```

  In its current configuration, it runs a load-testing container that pings the user-service. Add more services and their respective ports as desired.

  Note that the script pings each service's `/health` endpoint; services without a `/health` endpoint configured will not work with it.
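The script's contents aren't reproduced here; as a sketch, listing extra targets could look something like this (the service names and ports are placeholders, not the script's actual configuration):

```sh
# Hypothetical sketch of how targets could be enumerated; the real list
# lives in scripts/k8s-test-load.sh and these names/ports are placeholders.
SERVICES="user-service:3001 matching-service:3002"

for entry in $SERVICES; do
  name=${entry%%:*}   # text before the colon
  port=${entry##*:}   # text after the colon
  echo "load-testing http://$name:$port/health"
done
```

Each entry pairs a Kubernetes service name with the port its `/health` endpoint listens on.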
- Run the command:

  ```sh
  kubectl -n peerprep get all
  ```

  You should see the Horizontal Pod Autoscaler scaling the services up in response to resource demand.
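The scaling itself is driven by HorizontalPodAutoscaler resources deployed with the project. A minimal sketch of what one for the user-service might look like (the replica bounds and CPU target here are illustrative, not the project's actual values):

```yaml
# Hypothetical HPA for user-service; the real thresholds are
# defined in the project's own manifests.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: user-service
  namespace: peerprep
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: user-service
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
```

This is why the Metrics Server add-on is a prerequisite: the HPA reads CPU utilization from it to decide when to scale.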
- Press Ctrl+C to interrupt and terminate the load tester.
## Exposing the Ingress Controller

- If you haven't already, run the command from the project root:

  ```sh
  make k8s-up
  ```
- Run the command to set up the ingress resources:

  ```sh
  kubectl apply -f ./k8s/local
  ```

  It should take a couple of minutes. Once done, run this command:

  ```sh
  kubectl -n peerprep get ingress
  # You should see a similar output:
  # NAME               CLASS   HOSTS              ADDRESS       PORTS   AGE
  # peerprep-ingress   nginx   peerprep-g16.net   172.17.0.15   80      38s
  ```
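The manifests under `./k8s/local` define the ingress listed above. A stripped-down sketch of such an Ingress (the backend service name and port are illustrative; the project's real routing rules will differ):

```yaml
# Hypothetical Ingress for peerprep-g16.net; the actual rules
# live in ./k8s/local.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: peerprep-ingress
  namespace: peerprep
spec:
  ingressClassName: nginx
  rules:
    - host: peerprep-g16.net
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80
```

The `host` field is what makes the `/etc/hosts` entry in the next steps necessary: the NGINX controller only routes requests whose Host header is `peerprep-g16.net`.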
- Run the command to expose the ingress controller:

  ```sh
  minikube tunnel
  ```
- Edit your `/etc/hosts` file and add the following at the bottom:

  ```
  127.0.0.1 peerprep-g16.net
  ```
- If there is already an entry that points to `localhost`, comment it out temporarily. It should end up looking like this:

  ```
  # 127.0.0.1 localhost
  127.0.0.1 peerprep-g16.net
  ```
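If you prefer to script the edit, a sketch using `sed` is below; it operates on a scratch copy so it is safe to try as-is (point it at `/etc/hosts` with `sudo` to apply it for real):

```sh
# Demonstrated on a scratch copy; use /etc/hosts (with sudo) for real.
HOSTS=/tmp/hosts-demo
printf '127.0.0.1 localhost\n' > "$HOSTS"

# Comment out the existing localhost entry...
sed -i.bak 's/^127\.0\.0\.1 localhost$/# 127.0.0.1 localhost/' "$HOSTS"
# ...and append the Minikube hostname.
printf '127.0.0.1 peerprep-g16.net\n' >> "$HOSTS"

cat "$HOSTS"
```

The `-i.bak` form writes a `.bak` backup alongside the file and works with both GNU and BSD (macOS) `sed`.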
- Visit http://peerprep-g16.net in your browser.
- When done, reset your `/etc/hosts` file to its original state.
- Press Ctrl+C in the terminal running `minikube tunnel` to stop it.
## Running the Minikube Service without Ingress

- Run the command to set up the cluster:

  ```sh
  make k8s-up
  ```
- Expose the service:

  ```sh
  minikube -n peerprep service frontend
  ```

  A browser window should launch, directing you to the application's frontend.
## GKE Instructions

- Authenticate, or ensure you are added as a user to the Google Cloud project:
  - Project ID: `cs3219-g16`
  - Project Zone: `asia-southeast1-c`
- Install the `gcloud` CLI by following the instructions at this link:
- Set up the CLI with the following commands:

  ```sh
  gcloud auth login
  gcloud config set project cs3219-g16
  gcloud config set compute/zone asia-southeast1-c
  gcloud components install gke-gcloud-auth-plugin
  export USE_GKE_GCLOUD_AUTH_PLUGIN=True
  ```
- Create the cluster with the following command:

  ```sh
  gcloud container clusters create \
    cs3219-g16 \
    --preemptible \
    --machine-type e2-small \
    --enable-autoscaling \
    --num-nodes 1 \
    --min-nodes 1 \
    --max-nodes 25 \
    --disk-size 20 \
    --region=asia-southeast1-c
  ```
- The command above creates a cluster whose node pool has the following characteristics:
  - Preemptible: schedules cheaper nodes that GCloud can reclaim, which saves costs.
  - Machine Type: `e2-small`. This provides a good balance between performance and cost.
  - Autoscaling: allows the node pool to resize automatically, so our deployments can scale without manually managing nodes.
  - Num Nodes (initial number of nodes): 1. We let the cluster operate first, and scale up if needed.
  - Min Nodes: 1. This ensures our cluster is always up.
  - Max Nodes: 25. This prevents our cluster from scaling too large.
  - Disk Size: 20GB per node. Since the default is 100GB per node, this prevents the cluster from requesting more than the Google Cloud quota of 500GB of SSD.
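The max-nodes and disk-size choices can be sanity-checked against the 500GB SSD quota mentioned above:

```sh
# At full scale, total SSD requested = max nodes x disk size per node.
# Assumption: the relevant quota is 500GB of SSD in the region.
MAX_NODES=25
DISK_GB=20
echo "$(( MAX_NODES * DISK_GB ))GB requested at full scale"   # prints "500GB requested at full scale"
```

This sits exactly at the quota, so a larger disk size or node cap would require a quota increase.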
- Once the cluster has been created, run the commands below to configure `kubectl` and connect to the cluster:

  ```sh
  gcloud container clusters get-credentials cs3219-g16
  # You should see some output here
  kubectl get nodes -o wide
  ```
- Run the script (ensure you are in a Bash shell, as on Mac or Linux):

  ```sh
  make k8s-up
  ```
- Wait until all the deployments reach the Running status:

  ```sh
  kubectl -n peerprep rollout status deployment frontend
  ```
- If you haven't already, visit the GCloud console -> 'Cloud Domains' and verify that a domain name has been created.
  - We currently have one: `peerprep-g16.net`. It can be created under 'Cloud Domains' -> 'Register Domain' in the GCloud console.
  - We also associate a GCloud global web IP, `web-ip`, with this DNS record as an 'A' record. To set an IP DNS 'A' record, follow these steps:
    - Create an IP:

      ```sh
      gcloud compute addresses create web-ip --global
      ```

    - Verify that it exists:

      ```sh
      gcloud compute addresses list
      ```

    - Grab the IP address:

      ```sh
      gcloud compute addresses describe web-ip --format='value(address)' --global
      ```

    - Associate it via the console:
      - Cloud DNS -> 'Zone Name': peerprep-g16.net -> 'Add standard'
      - Paste the IP address
      - 'Create'
- Install `cert-manager`:

  ```sh
  kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.16.1/cert-manager.yaml
  ```
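cert-manager obtains the TLS certificate used by the production ingress. The actual issuer configuration lives in `./k8s/gcloud`; a minimal sketch of a Let's Encrypt `ClusterIssuer` (the issuer name, email, and solver here are placeholders, not the project's actual configuration):

```yaml
# Hypothetical ClusterIssuer; the real one is defined in ./k8s/gcloud.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com   # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
      - http01:
          ingress:
            class: nginx
```

An ingress that references this issuer (via annotation and a `tls` section) then gets its certificate requested and renewed automatically.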
- Create the ingress and secrets in the prod environment:

  ```sh
  kubectl apply -f ./k8s/gcloud
  ```

  After about 15 minutes, you should be able to access the UI over HTTPS at https://peerprep-g16.net.
- Cleanup:
  - Delete the cluster:

    ```sh
    gcloud container clusters delete cs3219-g16
    ```

  - When done with the project, delete the web records:

    ```sh
    gcloud dns record-sets delete peerprep-g16 --type A
    gcloud compute addresses delete web-ip --global
    ```
- Set up the following in GitHub Actions by:
  - Heading to 'Settings' -> 'Secrets and variables' -> 'Actions' -> 'New repository secret'
  - Adding the following keys:

    ```
    GKE_SA_KEY: <redacted (get from the cloud console IAM -> 'Service Accounts' page)>
    GKE_PROJECT: cs3219-g16
    GKE_CLUSTER: cs3219-g16
    GKE_ZONE: asia-southeast1-c
    ```
  - If the `GKE_SA_KEY` is needed, contact us.
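These secrets are consumed by the deployment workflow under `.github/workflows`. As a sketch only, a job in such a workflow might reference them like this (the step list and action versions are illustrative, not the project's actual workflow):

```yaml
# Hypothetical excerpt showing how the repository secrets could be wired
# into a GKE deploy job; the real workflow may differ.
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: google-github-actions/auth@v2
        with:
          credentials_json: ${{ secrets.GKE_SA_KEY }}
      - uses: google-github-actions/get-gke-credentials@v2
        with:
          cluster_name: ${{ secrets.GKE_CLUSTER }}
          location: ${{ secrets.GKE_ZONE }}
```

After these steps, subsequent `kubectl` commands in the job run against the GKE cluster.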
- Merge a PR to `main`. The following will happen:
  - An action will run under the 'Actions' tab in GitHub.
  - It will build and push the service images and redeploy the cluster with the latest images. Verify with:

    ```sh
    kubectl -n peerprep get deployment
    ```