tl;dr: the `values.yaml` of openobserve-collector is overcomplicated; a simpler setup can be achieved with the upstream OpenTelemetry Collector chart.
I am reviewing the code of the openobserve-collector and would like to ask some questions about how it works.
Currently I'm running a Kubernetes cluster with OpenObserve deployed in the `monitoring` namespace. Instead of using the openobserve-collector chart, I am using the upstream OpenTelemetry Collector chart with presets enabled. The setup can be achieved with a relatively concise helmfile:
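The helmfile itself is not reproduced here, so as a rough sketch only (release names, preset choices, and the OpenObserve endpoint below are my assumptions, not the exact file), a two-release setup on the upstream chart could look like:

```yaml
# Hypothetical helmfile.yaml — names, presets, and endpoint are illustrative.
repositories:
  - name: open-telemetry
    url: https://open-telemetry.github.io/opentelemetry-helm-charts

releases:
  # DaemonSet release: tails container logs on every node.
  - name: collector-agent
    namespace: monitoring
    chart: open-telemetry/opentelemetry-collector
    values:
      - mode: daemonset
        image:
          repository: otel/opentelemetry-collector-contrib
        presets:
          logsCollection:
            enabled: true   # renders a filelog receiver and mounts /var/log/pods
          kubernetesAttributes:
            enabled: true   # renders the k8sattributes processor + RBAC
        config:
          exporters:
            otlphttp/openobserve:
              endpoint: http://openobserve.monitoring.svc:5080/api/default  # assumed service URL
          service:
            pipelines:
              logs:
                exporters: [otlphttp/openobserve]

  # Deployment release: cluster-scoped metrics and events.
  - name: collector-cluster
    namespace: monitoring
    chart: open-telemetry/opentelemetry-collector
    values:
      - mode: deployment
        image:
          repository: otel/opentelemetry-collector-contrib
        presets:
          clusterMetrics:
            enabled: true
          kubernetesEvents:
            enabled: true
```

With `logsCollection` enabled, the chart generates the filelog receiver and volume mounts itself, which is exactly the rendered config the `kubectl get … configmap` command dumps.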
The `helmfile.yaml` defines two releases; the one called `collector-agent` handles log ingestion. The generated collector config can be retrieved with:

```shell
kubectl get -n monitoring configmap collector-agent-opentelemetry-collector-agent -o jsonpath='{.data.relay}'
```
Meanwhile, openobserve-collector's default `values.yaml` specifies complex routing and regular-expression named capture groups to extract metadata from log file names (excerpt):

```yaml
size: 128  # default maximum amount of Pods per Node is 110
```
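For context, this is the general shape of file-path-based metadata extraction in a filelog receiver; the regex below is illustrative, not the one from the chart's `values.yaml`:

```yaml
receivers:
  filelog:
    include: [/var/log/pods/*/*/*.log]
    include_file_path: true   # record log.file.path as an attribute
    operators:
      # Parse namespace/pod/container out of the kubelet's log path layout:
      #   /var/log/pods/<namespace>_<pod>_<uid>/<container>/<n>.log
      - type: regex_parser
        parse_from: attributes["log.file.path"]
        regex: '^/var/log/pods/(?P<namespace>[^_]+)_(?P<pod_name>[^_]+)_(?P<uid>[^/]+)/(?P<container_name>[^/]+)/.*\.log$'
```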
Seeing that the upstream chart's config can produce logs with the metadata `k8s_pod_name`, `k8s_namespace_name`, etc. (via the `k8sattributes` processor) using a simpler config, why does openobserve-collector's `values.yaml` have these regexes?
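For comparison, the `k8sattributes` processor resolves the same fields from the Kubernetes API instead of the file name; a minimal hand-written configuration (the exact field list here is assumed) looks like:

```yaml
processors:
  k8sattributes:
    extract:
      metadata:
        - k8s.pod.name
        - k8s.namespace.name
        - k8s.container.name
        - k8s.node.name
    pod_association:
      - sources:
          - from: resource_attribute
            name: k8s.pod.uid
```

One relevant design trade-off: `k8sattributes` requires RBAC permissions for the collector to watch pods (which the upstream preset provisions), whereas a pure file-path regex needs no API access at all.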
[Collapsed in the issue: the generated configuration from the upstream OpenTelemetry Collector chart, and an example log entry from OpenObserve ingested through it.]

The regexes in question are in `openobserve-helm-chart/charts/openobserve-collector/values.yaml`, lines 130 to 170 at b146f80.