I'm trying to set up a central LAPI so that decisions can be shared among agents both inside and outside of the Kubernetes cluster.
This should work with TLS authentication, but we do things a bit differently and the Helm chart currently doesn't really support my use case.
For one, we have an existing cert-manager. Even with the examples in `./hack/tls`, it's a bit tricky to make it work, so I'll list my findings here. We can split these into separate GitHub issues if need be.
LAPI sets `CLIENT_` TLS variables, but probably doesn't need to.
LAPI (and the agent) expect to find the `ca.crt` in the same `Secret` when `tls.caBundle` is `true`.
This works when you have the same issuer for both the agent and LAPI, since (at least with cert-manager) the `ca.crt` will be in the same `Secret`.
We have one intermediate CA for the Kubernetes cluster the LAPI would be running in and another for the agents outside of the cluster, so there should be a way to define a separate root CA certificate to be trusted.
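For reference, a sketch of what a separately trusted CA might look like on the LAPI side, via a `config.yaml.local` override (the mount path `/etc/crowdsec/external-ca/ca.crt` and the cert paths are assumptions, not something the chart provides today):

```yaml
# Sketch: crowdsec config.yaml.local on the LAPI.
# ca_cert_path points at the out-of-cluster agents' CA bundle,
# mounted separately from the LAPI's own serving certificate.
api:
  server:
    tls:
      cert_file: /etc/ssl/crowdsec/tls.crt
      key_file: /etc/ssl/crowdsec/tls.key
      ca_cert_path: /etc/crowdsec/external-ca/ca.crt
```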
Enabling `tls.certManager.enabled` always creates an issuer, and there's no way to use an existing one.
In our case, there's an existing `ClusterIssuer` that I would like to use for fetching a `Certificate` for both the LAPI and the agents within the cluster.
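As a sketch, requesting the LAPI certificate from an existing `ClusterIssuer` would look something like this (the issuer name `existing-intermediate-ca`, namespace, and DNS name are assumptions for illustration):

```yaml
# Sketch: a Certificate issued by a pre-existing ClusterIssuer,
# instead of the Issuer the chart creates itself.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: crowdsec-lapi
  namespace: crowdsec
spec:
  secretName: crowdsec-lapi-tls
  issuerRef:
    name: existing-intermediate-ca
    kind: ClusterIssuer
  dnsNames:
    - crowdsec-service.crowdsec.svc.cluster.local
  usages:
    - server auth
```

cert-manager then writes `tls.crt`, `tls.key`, and `ca.crt` into the named `Secret`, which is the layout the chart currently expects.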
I was unable to find a way to define `bouncers_allowed_ou` and `agents_allowed_ou`, but perhaps I have missed something.
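For context, these settings live under the LAPI's TLS configuration and restrict which client certificates are accepted by OU; a minimal sketch (the OU values `agent-ou` and `bouncer-ou` are assumptions and must match the subject of the issued client certificates):

```yaml
# Sketch: OU-based client restrictions in the LAPI config.
api:
  server:
    tls:
      agents_allowed_ou:
        - agent-ou
      bouncers_allowed_ou:
        - bouncer-ou
```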
I'm also not sure about having both an `Issuer` and a `ClusterIssuer`; are both needed?
At the bare minimum, I'd need a way to define a separate CA certificate to be trusted. The other things I can probably work my way around by defining the Certificate resources and such separately.
It would of course be neater if the chart just supported some of the things we need, but I understand if you want to keep the setup more opinionated.
For example, while it's convenient to be able to create a cert-manager `ClusterIssuer` or `Issuer` here, I would lean towards having that out of scope for this chart and just documenting the fact for the user.
The `CLIENT_` variables are used by `cscli` inside the LAPI pod. They are not a requirement, but really useful for debugging: it needs to connect even if the agent is not running. It would be neat to have a unix socket connection to simplify local clients.
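For instance, the debugging case above boils down to running `cscli` inside the pod (the pod name `crowdsec-lapi-0` and namespace are assumptions):

```shell
# Sketch: check LAPI connectivity from inside the LAPI pod;
# this is what needs the CLIENT_ TLS variables to be set.
kubectl -n crowdsec exec -it crowdsec-lapi-0 -- cscli lapi status
```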
`bouncers_allowed_ou` and `agents_allowed_ou` are not set correctly. The fix is in 1.5.0-rc5, see crowdsecurity/crowdsec@68d4bdc.
I am not sure it warrants a 1.4 release; unfortunately, our release process is not yet geared towards small hotfixes.
Adding an alternate CA for the client is a legitimate use case. We can easily support it in the container, but we'll be more careful to avoid adding unnecessary complications in the chart.
For the rest, we have to think of the right approach.