metrics endpoint will not be started because metrics-address was not specified #2
Comments
Hello @klis, thanks for reaching out. The log in question is generated by the external-provisioner sidecar container that runs alongside the beegfs-csi-driver container in a Kubernetes deployment (both inside the csi-beegfs-controller-0 pod you referenced). In a typical Kubernetes CSI deployment, the csi-provisioner container acts as an intermediary between Kubernetes and the CSI controller service.

While the beegfs-csi-driver container itself does not natively support Prometheus metrics, you should be able to scrape the external-provisioner container (inside csi-beegfs-controller-0) for a variety of interesting CSI metrics, including total call count, error count, and call latency. This Kubernetes CSI issue mentions the need for documentation around HOW exactly to configure Prometheus to do that, but no such documentation has been produced (largely because Prometheus deployments vary by environment). Any solution likely requires adding either the --http-endpoint or --metrics-address (deprecated) argument to the csi-provisioner container in the deployment manifests.

As is the case for other "directory-within-a-file-system" drivers (e.g. NFS), it is difficult to directly correlate requested capacity in Kubernetes with BeeGFS consumption. Our BeeGFS quotas support makes it possible to limit consumption on a per-storage-class basis (assuming a particular BeeGFS file system is under a support contract and allowed to use enterprise features), but the aggregate capacity shown by something like "kubectl get pv" doesn't generally represent BeeGFS storage consumed.
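To make that concrete, here is a minimal sketch of what the argument change could look like. The flag names (--csi-address, --http-endpoint, and the deprecated --metrics-address) are documented external-provisioner flags, and the endpoint serves metrics at /metrics by default; the image tag, port number, and port name below are illustrative assumptions, not values taken from the actual beegfs-csi-driver deployment manifests.

```yaml
# Illustrative excerpt of a controller StatefulSet manifest (not the actual
# beegfs-csi-driver deployment files). Only the csi-provisioner container is shown.
containers:
  - name: csi-provisioner
    image: k8s.gcr.io/sig-storage/csi-provisioner:v2.2.2  # assumed tag
    args:
      - --csi-address=/csi/csi.sock
      # --http-endpoint replaces the deprecated --metrics-address flag and
      # serves Prometheus metrics at /metrics on the given address.
      - --http-endpoint=:8080
    ports:
      - name: metrics
        containerPort: 8080
```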
@ejweber thank you for the detailed explanation. As for storage consumption, I will talk to my SysAdmins to check what we can do about it.
Thanks for the update, @klis. I have created a low-priority story in the NetApp system to investigate ways to make scraping easier out of the box (e.g. adding --metrics-address or --http-endpoint to the default deployment manifests). I'm not sure yet how to improve the experience generically. If you do end up scraping metrics, please share whatever you can about your experience.
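Since Prometheus deployments vary by environment, there is no one-size-fits-all configuration, but as a rough sketch, an in-cluster Prometheus using Kubernetes pod discovery could keep just the controller pod via relabeling. The job name, port, and pod-name regex below are assumptions that build on the --http-endpoint=:8080 argument from the earlier snippet.

```yaml
# Sketch of a Prometheus scrape job; job name, regex, and port are illustrative.
# Assumes csi-provisioner exposes metrics on port 8080 via --http-endpoint.
scrape_configs:
  - job_name: beegfs-csi-provisioner
    metrics_path: /metrics
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only the controller pod(s) for this driver.
      - source_labels: [__meta_kubernetes_pod_name]
        regex: csi-beegfs-controller-.*
        action: keep
      # Scrape only the container port exposing metrics.
      - source_labels: [__meta_kubernetes_pod_container_port_number]
        regex: "8080"
        action: keep
```

In environments running the Prometheus Operator, a PodMonitor selecting the controller pod would achieve the same result.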
Will do |
Original issue from @klis:

Hi,

The csi-provisioner container inside the csi-beegfs-controller-0 pod logs a warning with the message "metrics endpoint will not be started because metrics-address was not specified".

Does this driver support Prometheus metrics? If it does, how can they be enabled? And is there any way that a Kubernetes admin can track how much storage is in use?