Unable to pull input stats #203

Open
OKFOSTACK opened this issue Jul 8, 2022 · 9 comments

@OKFOSTACK commented Jul 8, 2022

My fluentd config:

```
<source>
  @type prometheus
  @id in_prometheus
  bind "0.0.0.0"
  port 24231
  metrics_path "/metrics"
</source>

<source>
  @type prometheus_monitor
  @id in_prometheus_monitor
</source>

<source>
  @type prometheus_output_monitor
  @id in_prometheus_output_monitor
</source>

<filter **>
  @type prometheus
  <metric>
    name fluentd_input_status_num_records_total
    type counter
    desc The total number of incoming records
    <labels>
      tag ${tag}
      hostname ${hostname}
    </labels>
  </metric>
</filter>
```

My Prometheus config has the correct target, and I'm already able to pull up all the default output and buffer stats. I'm not sure if this is the correct forum, but I'm unable to pull any input stats from the `fluentd_input_status_num_records_total` filter metric. It shows up on the fluentd /metrics URL when I add the filter, but I can't query the counter from Prometheus at all; it's as if it doesn't exist.

Am I missing a piece of configuration somewhere?
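
For what it's worth, one way to narrow this down is to check both ends separately: whether fluentd exposes the counter at all, and whether Prometheus has ever ingested it. A minimal sketch, assuming the port and path from the config above and a default Prometheus on localhost:9090 (the fluentd host is a placeholder):

```sh
# 1) Confirm fluentd itself exposes the counter (port 24231 and /metrics come
#    from the prometheus <source> above; the host is a placeholder).
curl -s http://<fluentd-host>:24231/metrics | grep fluentd_input_status_num_records_total

# 2) Ask Prometheus for the same series (assumes Prometheus on localhost:9090).
#    An empty result means the series was never ingested, not that it is zero.
curl -sG 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=fluentd_input_status_num_records_total'
```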

@SwastikLGowda

I am also experiencing the same issue!

@OKFOSTACK (Author)

Yeah, I'm not sure what's going on; the metric is available on the /metrics URL but it returns null. There has to be a config issue somewhere in fluentd or Prometheus.

@SwastikLGowda

Can you send me your Prometheus config?

@OKFOSTACK (Author) commented Jul 12, 2022

rule_files:
     - /etc/config/recording_rules.yml
     - /etc/config/alerting_rules.yml
     - /etc/config/rules
     - /etc/config/alerts

   scrape_configs:
     - job_name: kube-metrics
       static_configs:
         - targets:
           - dev-inf-prometheus-node-exporter:9100
     - job_name: fluentd-metrics
       static_configs:
         - targets:
           - dev-inf-fluentd.test.org
       tls_config:
         insecure_skip_verify: true
       scheme: https

     # A scrape configuration for running Prometheus on a Kubernetes cluster.
     # This uses separate scrape configs for cluster components (i.e. API server, node)
     # and services to allow each to use different authentication configs.
     #
     # Kubernetes labels will be added as Prometheus labels on metrics via the
     # `labelmap` relabeling action.

     # Scrape config for API servers.
     #
     # Kubernetes exposes API servers as endpoints to the default/kubernetes
     # service so this uses `endpoints` role and uses relabelling to only keep
     # the endpoints associated with the default/kubernetes service using the
     # default named port `https`. This works for single API server deployments as
     # well as HA API server deployments.
     - job_name: 'kubernetes-apiservers'

       kubernetes_sd_configs:
         - role: endpoints

       # Default to scraping over https. If required, just disable this or change to
       # `http`.
       scheme: https

       # This TLS & bearer token file config is used to connect to the actual scrape
       # endpoints for cluster components. This is separate to discovery auth
       # configuration because discovery & scraping are two separate concerns in
       # Prometheus. The discovery auth config is automatic if Prometheus runs inside
       # the cluster. Otherwise, more config options have to be provided within the
       # <kubernetes_sd_config>.
       tls_config:
         ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
         # If your node certificates are self-signed or use a different CA to the
         # master CA, then disable certificate verification below. Note that
         # certificate verification is an integral part of a secure infrastructure
         # so this should only be disabled in a controlled environment. You can
         # disable certificate verification by uncommenting the line below.
         #
         insecure_skip_verify: true
       bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

       # Keep only the default/kubernetes service endpoints for the https port. This
       # will add targets for each API server which Kubernetes adds an endpoint to
       # the default/kubernetes service.
       relabel_configs:
         - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
           action: keep
           regex: default;kubernetes;https

     - job_name: 'kubernetes-nodes'

       # Default to scraping over https. If required, just disable this or change to
       # `http`.
       scheme: https

       # This TLS & bearer token file config is used to connect to the actual scrape
       # endpoints for cluster components. This is separate to discovery auth
       # configuration because discovery & scraping are two separate concerns in
       # Prometheus. The discovery auth config is automatic if Prometheus runs inside
       # the cluster. Otherwise, more config options have to be provided within the
       # <kubernetes_sd_config>.
       tls_config:
         ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
         # If your node certificates are self-signed or use a different CA to the
         # master CA, then disable certificate verification below. Note that
         # certificate verification is an integral part of a secure infrastructure
         # so this should only be disabled in a controlled environment. You can
         # disable certificate verification by uncommenting the line below.
         #
         insecure_skip_verify: true
       bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

       kubernetes_sd_configs:
         - role: node

       relabel_configs:
         - action: labelmap
           regex: __meta_kubernetes_node_label_(.+)
         - target_label: __address__
           replacement: kubernetes.default.svc:443
         - source_labels: [__meta_kubernetes_node_name]
           regex: (.+)
           target_label: __metrics_path__
           replacement: /api/v1/nodes/$1/proxy/metrics


     - job_name: 'kubernetes-nodes-cadvisor'

       # Default to scraping over https. If required, just disable this or change to
       # `http`.
       scheme: https

       # This TLS & bearer token file config is used to connect to the actual scrape
       # endpoints for cluster components. This is separate to discovery auth
       # configuration because discovery & scraping are two separate concerns in
       # Prometheus. The discovery auth config is automatic if Prometheus runs inside
       # the cluster. Otherwise, more config options have to be provided within the
       # <kubernetes_sd_config>.
       tls_config:
         ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
         # If your node certificates are self-signed or use a different CA to the
         # master CA, then disable certificate verification below. Note that
         # certificate verification is an integral part of a secure infrastructure
         # so this should only be disabled in a controlled environment. You can
         # disable certificate verification by uncommenting the line below.
         #
         insecure_skip_verify: true
       bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

       kubernetes_sd_configs:
         - role: node

       # This configuration will work only on kubelet 1.7.3+
       # As the scrape endpoints for cAdvisor have changed
       # if you are using older version you need to change the replacement to
       # replacement: /api/v1/nodes/$1:4194/proxy/metrics
       # more info here https://github.com/coreos/prometheus-operator/issues/633
       relabel_configs:
         - action: labelmap
           regex: __meta_kubernetes_node_label_(.+)
         - target_label: __address__
           replacement: kubernetes.default.svc:443
         - source_labels: [__meta_kubernetes_node_name]
           regex: (.+)
           target_label: __metrics_path__
           replacement: /api/v1/nodes/$1/proxy/metrics/cadvisor

     # Scrape config for service endpoints.
     #
     # The relabeling allows the actual service scrape endpoint to be configured
     # via the following annotations:
     #
     # * `prometheus.io/scrape`: Only scrape services that have a value of
     # `true`, except if `prometheus.io/scrape-slow` is set to `true` as well.
     # * `prometheus.io/scheme`: If the metrics endpoint is secured then you will need
     # to set this to `https` & most likely set the `tls_config` of the scrape config.
     # * `prometheus.io/path`: If the metrics path is not `/metrics` override this.
     # * `prometheus.io/port`: If the metrics are exposed on a different port to the
     # service then set this appropriately.
     # * `prometheus.io/param_<parameter>`: If the metrics endpoint uses parameters
     # then you can set any parameter
     - job_name: 'kubernetes-service-endpoints'
       honor_labels: true

       kubernetes_sd_configs:
         - role: endpoints

       relabel_configs:
         - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
           action: keep
           regex: true
         - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape_slow]
           action: drop
           regex: true
         - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
           action: replace
           target_label: __scheme__
           regex: (https?)
         - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
           action: replace
           target_label: __metrics_path__
           regex: (.+)
         - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
           action: replace
           target_label: __address__
           regex: (.+?)(?::\d+)?;(\d+)
           replacement: $1:$2
         - action: labelmap
           regex: __meta_kubernetes_service_annotation_prometheus_io_param_(.+)
           replacement: __param_$1
         - action: labelmap
           regex: __meta_kubernetes_service_label_(.+)
         - source_labels: [__meta_kubernetes_namespace]
           action: replace
           target_label: namespace
         - source_labels: [__meta_kubernetes_service_name]
           action: replace
           target_label: service
         - source_labels: [__meta_kubernetes_pod_node_name]
           action: replace
           target_label: node

     # Scrape config for slow service endpoints; same as above, but with a larger
     # timeout and a larger interval
     #
     # The relabeling allows the actual service scrape endpoint to be configured
     # via the following annotations:
     #
     # * `prometheus.io/scrape-slow`: Only scrape services that have a value of `true`
     # * `prometheus.io/scheme`: If the metrics endpoint is secured then you will need
     # to set this to `https` & most likely set the `tls_config` of the scrape config.
     # * `prometheus.io/path`: If the metrics path is not `/metrics` override this.
     # * `prometheus.io/port`: If the metrics are exposed on a different port to the
     # service then set this appropriately.
     # * `prometheus.io/param_<parameter>`: If the metrics endpoint uses parameters
     # then you can set any parameter
     - job_name: 'kubernetes-service-endpoints-slow'
       honor_labels: true

       scrape_interval: 5m
       scrape_timeout: 30s

       kubernetes_sd_configs:
         - role: endpoints

       relabel_configs:
         - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape_slow]
           action: keep
           regex: true
         - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
           action: replace
           target_label: __scheme__
           regex: (https?)
         - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
           action: replace
           target_label: __metrics_path__
           regex: (.+)
         - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
           action: replace
           target_label: __address__
           regex: (.+?)(?::\d+)?;(\d+)
           replacement: $1:$2
         - action: labelmap
           regex: __meta_kubernetes_service_annotation_prometheus_io_param_(.+)
           replacement: __param_$1
         - action: labelmap
           regex: __meta_kubernetes_service_label_(.+)
         - source_labels: [__meta_kubernetes_namespace]
           action: replace
           target_label: namespace
         - source_labels: [__meta_kubernetes_service_name]
           action: replace
           target_label: service
         - source_labels: [__meta_kubernetes_pod_node_name]
           action: replace
           target_label: node

     - job_name: 'prometheus-pushgateway'
       honor_labels: true

       kubernetes_sd_configs:
         - role: service

       relabel_configs:
         - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
           action: keep
           regex: pushgateway

     # Example scrape config for probing services via the Blackbox Exporter.
     #
     # The relabeling allows the actual service scrape endpoint to be configured
     # via the following annotations:
     #
     # * `prometheus.io/probe`: Only probe services that have a value of `true`
     - job_name: 'kubernetes-services'
       honor_labels: true

       metrics_path: /probe
       params:
         module: [http_2xx]

       kubernetes_sd_configs:
         - role: service

       relabel_configs:
         - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
           action: keep
           regex: true
         - source_labels: [__address__]
           target_label: __param_target
         - target_label: __address__
           replacement: blackbox
         - source_labels: [__param_target]
           target_label: instance
         - action: labelmap
           regex: __meta_kubernetes_service_label_(.+)
         - source_labels: [__meta_kubernetes_namespace]
           target_label: namespace
         - source_labels: [__meta_kubernetes_service_name]
           target_label: service

     # Example scrape config for pods
     #
     # The relabeling allows the actual pod scrape endpoint to be configured via the
     # following annotations:
     #
     # * `prometheus.io/scrape`: Only scrape pods that have a value of `true`,
     # except if `prometheus.io/scrape-slow` is set to `true` as well.
     # * `prometheus.io/scheme`: If the metrics endpoint is secured then you will need
     # to set this to `https` & most likely set the `tls_config` of the scrape config.
     # * `prometheus.io/path`: If the metrics path is not `/metrics` override this.
     # * `prometheus.io/port`: Scrape the pod on the indicated port instead of the default of `9102`.
     - job_name: 'kubernetes-pods'
       honor_labels: true

       kubernetes_sd_configs:
         - role: pod

       relabel_configs:
         - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
           action: keep
           regex: true
         - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape_slow]
           action: drop
           regex: true
         - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
           action: replace
           regex: (https?)
           target_label: __scheme__
         - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
           action: replace
           target_label: __metrics_path__
           regex: (.+)
         - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
           action: replace
           regex: (.+?)(?::\d+)?;(\d+)
           replacement: $1:$2
           target_label: __address__
         - action: labelmap
           regex: __meta_kubernetes_pod_annotation_prometheus_io_param_(.+)
           replacement: __param_$1
         - action: labelmap
           regex: __meta_kubernetes_pod_label_(.+)
         - source_labels: [__meta_kubernetes_namespace]
           action: replace
           target_label: namespace
         - source_labels: [__meta_kubernetes_pod_name]
           action: replace
           target_label: pod
         - source_labels: [__meta_kubernetes_pod_phase]
           regex: Pending|Succeeded|Failed|Completed
           action: drop

     # Example scrape config for pods which should be scraped slower. A useful example
     # would be the stackdriver-exporter, which queries an API on every scrape of the pod
     #
     # The relabeling allows the actual pod scrape endpoint to be configured via the
     # following annotations:
     #
     # * `prometheus.io/scrape-slow`: Only scrape pods that have a value of `true`
     # * `prometheus.io/scheme`: If the metrics endpoint is secured then you will need
     # to set this to `https` & most likely set the `tls_config` of the scrape config.
     # * `prometheus.io/path`: If the metrics path is not `/metrics` override this.
     # * `prometheus.io/port`: Scrape the pod on the indicated port instead of the default of `9102`.
     - job_name: 'kubernetes-pods-slow'
       honor_labels: true

       scrape_interval: 5m
       scrape_timeout: 30s

       kubernetes_sd_configs:
         - role: pod

       relabel_configs:
         - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape_slow]
           action: keep
           regex: true
         - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
           action: replace
           regex: (https?)
           target_label: __scheme__
         - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
           action: replace
           target_label: __metrics_path__
           regex: (.+)
         - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
           action: replace
           regex: (.+?)(?::\d+)?;(\d+)
           replacement: $1:$2
           target_label: __address__
         - action: labelmap
           regex: __meta_kubernetes_pod_annotation_prometheus_io_param_(.+)
           replacement: __param_$1
         - action: labelmap
           regex: __meta_kubernetes_pod_label_(.+)
         - source_labels: [__meta_kubernetes_namespace]
           action: replace
           target_label: namespace
         - source_labels: [__meta_kubernetes_pod_name]
           action: replace
           target_label: pod
         - source_labels: [__meta_kubernetes_pod_phase]
           regex: Pending|Succeeded|Failed|Completed
           action: drop
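
One thing worth double-checking against the config above, as a guess rather than a confirmed fix: the `kubernetes-service-endpoints` job only keeps Services annotated with `prometheus.io/scrape: true`, so if the fluentd Service is supposed to be discovered that way (instead of, or in addition to, the static `fluentd-metrics` job), it needs those annotations. Something along these lines, with the namespace and Service name as assumptions:

```sh
# Hypothetical names; substitute your fluentd Service and namespace.
kubectl -n logging annotate service fluentd \
  prometheus.io/scrape=true \
  prometheus.io/port=24231   # the port the prometheus <source> binds to
```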

@OKFOSTACK (Author) commented Jul 13, 2022

Perhaps Prometheus isn't scraping the fluentd `fluentd_input_status_num_records_total` metric correctly, since the value returned from Prometheus is null? That, or the fluentd config is incorrect and unable to return any value in its current state?
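
One way to tell those two cases apart, assuming Prometheus is reachable on localhost:9090 and `jq` is available, is to ask Prometheus whether the fluentd target is healthy and what its last scrape error was:

```sh
# Shows health, last error and last scrape time for the fluentd-metrics job.
curl -s http://localhost:9090/api/v1/targets \
  | jq '.data.activeTargets[] | select(.labels.job == "fluentd-metrics")
        | {health, lastError, lastScrape}'
```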

@OKFOSTACK (Author)

@SwastikLGowda are you using a Helm chart for your fluentd pods? Not sure why I didn't catch this before, but the Helm chart I'm using has a ConfigMap associated with it that I think is overriding any Prometheus additions to the config.
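
A quick way to check that theory is to look at the ConfigMap the chart actually renders and confirm the prometheus sources/filter survive into it; the namespace and ConfigMap name below are placeholders:

```sh
# List ConfigMaps in the fluentd namespace, then inspect the rendered config.
kubectl -n logging get configmaps
kubectl -n logging get configmap fluentd -o yaml | grep -n 'prometheus'
```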

@OKFOSTACK (Author)

Maybe not... yeah, I'm running out of ideas on my side; hopefully someone can assist with some thoughts.

I saw this issue:

#95

But I'm still unable to see any values for the input metric, no matter how I slice it.

@SwastikLGowda

I had various problems with logging & monitoring. The first was that our team was using a very old fluentd image with Ruby 2.2, with everything already configured to collect logs from various sources. This plugin requires Ruby >= 2.4, so things broke when I tried a newer image. We decided to migrate to Fluent Bit instead; it has various other advantages and ships with monitoring out of the box.

@yfractal

@OKFOSTACK it may be related to the fluent-plugin-prometheus version;

gem 'fluent-plugin-prometheus', '2.0.3'
gem 'fluentd', '1.15.1'

works for me.
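
If it helps anyone comparing versions, one way to see what an image actually ships (hedged: `fluent-gem` is fluentd's bundled gem wrapper, and the exact binary name or path can differ between td-agent and plain fluentd installs):

```sh
# List the installed fluentd core and prometheus plugin versions.
fluent-gem list | grep -E '^(fluentd|fluent-plugin-prometheus) '
```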
