This is an installation guide for configuring Contour in a Deployment separate from Envoy, which allows each component to be scaled independently.
This configuration has several advantages:
- Envoy runs as a DaemonSet, which allows for distributed scaling across workers in the cluster.
- Communication between Contour and Envoy is secured by mutually-checked self-signed certificates.
The moving parts of this deployment:
- Contour runs as a Deployment and Envoy as a DaemonSet
- Envoy runs on host networking
- Envoy runs on ports 80 & 443
In our example deployment, the following certificates must be present as Secrets in the `projectcontour` namespace for the example YAMLs to apply:
- `cacert`: must contain a `cacert.pem` key that contains a CA certificate that signs the other certificates.
- `contourcert`: must be a Secret of type `kubernetes.io/tls` and must contain `tls.crt` and `tls.key` keys that contain a certificate and key for Contour. The certificate must be valid for the name `contour`, either via CN or SAN.
- `envoycert`: must be a Secret of type `kubernetes.io/tls` and must contain `tls.crt` and `tls.key` keys that contain a certificate and key for Envoy.
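For illustration only, Secrets of the required shapes could be created from existing PEM files along these lines (the local filenames here are hypothetical; generate the actual certificates per the TLS HOWTO referenced below):

```bash
# CA certificate, stored under the key "cacert.pem".
kubectl create secret generic cacert \
  --namespace projectcontour \
  --from-file=cacert.pem=./ca.pem

# kubernetes.io/tls Secrets; "kubectl create secret tls" stores the files
# under the tls.crt and tls.key keys. The Contour certificate must be
# valid for the name "contour" (CN or SAN).
kubectl create secret tls contourcert \
  --namespace projectcontour \
  --cert=./contour.pem --key=./contour-key.pem

kubectl create secret tls envoycert \
  --namespace projectcontour \
  --cert=./envoy.pem --key=./envoy-key.pem
```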
For detailed instructions on how to configure the required certs manually, see the step-by-step TLS HOWTO.
Either:
- Run:

  ```bash
  kubectl apply -f https://projectcontour.io/quickstart/contour.yaml
  ```

or:
- Clone or fork the repository, then run:

  ```bash
  kubectl apply -f examples/contour
  ```
This will:
- set up RBAC and Contour's CRDs (that is, IngressRoute)
- run a Kubernetes Job that generates certificates with one-year validity and puts them into the `projectcontour` namespace
- install Contour as a Deployment and Envoy as a DaemonSet
NOTE: The current configuration exposes the `/stats` path from the Envoy admin UI so that Prometheus can scrape it for metrics.
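At this point you can check that everything came up; a quick sketch, assuming the example namespace:

```bash
# Contour should be running as a Deployment and Envoy as a DaemonSet.
kubectl get pods -n projectcontour -o wide
```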
Finally, install a workload, as sketched below (see the kuard example in the main deployment guide for the canonical version).
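A minimal sketch of such a workload, using the public kuard image from the kuar-demo project (the names and labels here are illustrative, not from this guide):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kuard
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kuard
  template:
    metadata:
      labels:
        app: kuard
    spec:
      containers:
      - name: kuard
        image: gcr.io/kuar-demo/kuard-amd64:1
        ports:
        - containerPort: 8080  # kuard listens on 8080
```

You would still need a Service and an Ingress (or IngressRoute) pointing at it; the main deployment guide covers those.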
In order to deploy the Envoy DaemonSet with host networking enabled, you need to make two changes.

In the Envoy DaemonSet definition, at the Pod spec level, change:

```yaml
dnsPolicy: ClusterFirst
```

to

```yaml
dnsPolicy: ClusterFirstWithHostNet
```

and add

```yaml
hostNetwork: true
```
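Taken together, the relevant excerpt of the DaemonSet's Pod spec looks like this (surrounding fields omitted):

```yaml
spec:
  template:
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
```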
Then, in the Envoy Service definition, change the annotation from:

```yaml
# This annotation puts the AWS ELB into "TCP" mode so that it does not
# do HTTP negotiation for HTTPS connections at the ELB edge.
# The downside of this is the remote IP address of all connections will
# appear to be the internal address of the ELB. See docs/proxy-proto.md
# for information about enabling the PROXY protocol on the ELB to recover
# the original remote IP address.
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
```

to

```yaml
service.beta.kubernetes.io/aws-load-balancer-type: nlb
```
Then, apply the example as normal. This will still deploy a LoadBalancer Service, but it will be an NLB instead of an ELB.
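For context, a sketch of where the annotation sits in the Service manifest (the name and namespace are assumed to match the example YAMLs):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: envoy
  namespace: projectcontour
  annotations:
    # Request an NLB instead of a classic ELB.
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
```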