Our logging DaemonSet isn't pulling logs from k8s; it's just reporting this error to NR Logger over and over:
```
[ warn] [net] getaddrinfo(host='kubernetes.default.svc.cluster.local', err=-2): Name or service not known
[error] [filter:kubernetes:kubernetes.0] kubelet upstream connection error
```
It seems to be reporting this error a few times per second per instance (in our test environment that's 3-4 instances per cluster).
I'm not sure what to do other than delete the DaemonSet to keep our logs from being inundated with millions of these.
Is this a bug, or is there a permission I missed?
I didn't realize we had a custom DNS domain configured for our cluster. Instead of .svc.cluster.local, resolution appends our custom DNS suffix, so kubernetes.default resolved correctly, but the FQDN kubernetes.default.svc.cluster.local was unknown and did not resolve.
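For anyone hitting the same symptom, one quick way to confirm it's DNS rather than permissions is to try resolving both names from inside the cluster (the pod names and image below are just examples, not part of the chart):

```sh
# Throwaway pods to test in-cluster DNS resolution
kubectl run dns-test-fqdn --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup kubernetes.default.svc.cluster.local

kubectl run dns-test-short --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup kubernetes.default
```

If the short name resolves but the FQDN doesn't, the cluster is using a custom DNS domain rather than cluster.local.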
Perhaps a setting could be added to the chart for this? A sketch of a workaround is below.
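As a stopgap, the Fluent Bit kubernetes filter lets you override the API server endpoint via its Kube_URL parameter. This is only a sketch (the Match pattern and where this stanza lives in the chart's ConfigMap are assumptions; the chart may not expose it directly today):

```
# Illustrative Fluent Bit kubernetes filter stanza
[FILTER]
    Name            kubernetes
    Match           kube.*
    # Point at the short service name (which resolved in our cluster)
    # instead of the cluster.local FQDN our custom DNS domain can't resolve.
    Kube_URL        https://kubernetes.default:443
    Kube_CA_File    /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token
```

A chart setting that exposes this URL (or the cluster DNS domain) would avoid having to maintain a custom ConfigMap.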
Also, our logging ingest got spammed like crazy with this error (like, millions of logs in an hour or two). Should the logging service be logging itself to NR? It seems like it should self-contain those logs and errors rather than blasting the ingest with them, at least by default.