
[fluentd-elasticsearch] errors when upgrading from 11.14.0 to 11.15.0 #95

Open
TheMeier opened this issue Nov 3, 2021 · 1 comment
Labels: bug (Something isn't working)

TheMeier commented Nov 3, 2021

Describe the bug
When upgrading the Helm chart from 11.14.0 to 11.15.0 I get the following error:

fluentd-elasticsearch-gn4nx fluentd-elasticsearch {"time":"2021-11-03 10:24:25 +0000","level":"error","message":"unexpected error error_class=NoMethodError error=\"undefined method `host_unreachable_exceptions' for #<Elasticsearch::Transport::Client:0x00007fbb7ed98cf8>\""}
fluentd-elasticsearch-gn4nx fluentd-elasticsearch {"time":"2021-11-03 10:24:25 +0000","level":"error","message":"/usr/local/bundle/gems/fluent-plugin-elasticsearch-5.0.5/lib/fluent/plugin/elasticsearch_index_template.rb:41:in `rescue in retry_operate'\n/usr/local/bundle/gems/fluent-plugin-elasticsearch-5.0.5/lib/fluent/plugin/elasticsearch_index_template.rb:39:in `retry_operate'\n/usr/local/bundle/gems/fluent-plugin-elasticsearch-5.0.5/lib/fluent/plugin/out_elasticsearch.rb:487:in `handle_last_seen_es_major_version'\n/usr/local/bundle/gems/fluent-plugin-elasticsearch-5.0.5/lib/fluent/plugin/out_elasticsearch.rb:339:in `configure'\n/usr/local/bundle/gems/fluentd-1.13.3/lib/fluent/plugin.rb:178:in `configure'\n/usr/local/bundle/gems/fluentd-1.13.3/lib/fluent/agent.rb:132:in `add_match'\n/usr/local/bundle/gems/fluentd-1.13.3/lib/fluent/agent.rb:74:in `block in configure'\n/usr/local/bundle/gems/fluentd-1.13.3/lib/fluent/agent.rb:64:in `each'\n/usr/local/bundle/gems/fluentd-1.13.3/lib/fluent/agent.rb:64:in `configure'\n/usr/local/bundle/gems/fluentd-1.13.3/lib/fluent/label.rb:31:in `configure'\n/usr/local/bundle/gems/fluentd-1.13.3/lib/fluent/root_agent.rb:143:in `block in configure'\n/usr/local/bundle/gems/fluentd-1.13.3/lib/fluent/root_agent.rb:143:in `each'\n/usr/local/bundle/gems/fluentd-1.13.3/lib/fluent/root_agent.rb:143:in `configure'\n/usr/local/bundle/gems/fluentd-1.13.3/lib/fluent/engine.rb:105:in `configure'\n/usr/local/bundle/gems/fluentd-1.13.3/lib/fluent/engine.rb:80:in `run_configure'\n/usr/local/bundle/gems/fluentd-1.13.3/lib/fluent/supervisor.rb:714:in `block in run_worker'\n/usr/local/bundle/gems/fluentd-1.13.3/lib/fluent/supervisor.rb:966:in `main_process'\n/usr/local/bundle/gems/fluentd-1.13.3/lib/fluent/supervisor.rb:706:in `run_worker'\n/usr/local/bundle/gems/fluentd-1.13.3/lib/fluent/command/fluentd.rb:364:in `<top (required)>'\n/usr/local/lib/ruby/2.7.0/rubygems/core_ext/kernel_require.rb:72:in `require'\n/usr/local/lib/ruby/2.7.0/rubygems/core_ext/kernel_require.rb:72:in `require'\n/usr/local/bundle/gems/fluentd-1.13.3/bin/fluentd:15:in `<top (required)>'\n/usr/local/bundle/bin/fluentd:23:in `load'\n/usr/local/bundle/bin/fluentd:23:in `<main>'"}

Version of Helm and Kubernetes:

Helm Version:

$ helm version
version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"clean", GoVersion:"go1.16.5"}

Kubernetes Version:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.14", GitCommit:"0fd2b5afdfe3134d6e1531365fdb37dd11f54d1c", GitTreeState:"clean", BuildDate:"2021-08-11T18:07:41Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.11", GitCommit:"c6a2f08fc4378c5381dd948d9ad9d1080e3e6b33", GitTreeState:"clean", BuildDate:"2021-05-12T12:19:22Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"linux/amd64"}

Which version of the chart:
11.15.0

What happened:
After upgrading, the pods go into CrashLoopBackOff and the logs show the error above.
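
For reference, the crash loop and the error line can be pulled straight from the cluster; a small sketch, assuming the DaemonSet's default label selector and the release namespace:

$ kubectl get pods -l app=fluentd-elasticsearch
$ kubectl logs -l app=fluentd-elasticsearch --tail=100 | grep host_unreachable_exceptions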

What you expected to happen:
The pods start and keep shipping logs to Elasticsearch, as they did with 11.14.0.

How to reproduce it (as minimally and precisely as possible):

values.yaml (only put values which differ from the defaults):

elasticsearch:
  suppressTypeName: true
  scheme: https
  logLevel: info
  auth:
    enabled: true
    user: user
    password: password
  hosts:
    - "opendistro-ingest-1:9200"
    - "opendistro-ingest-2:9200"
  logstash:
    prefix: k8s-dev
tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
fluentdLogFormat: json
prometheusRule:
  enabled: true
  prometheusNamespace: core-logging
serviceMonitor:
  enabled: true
  jobLabel: fluentd
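
A hedged sketch of how these values would be applied across the two chart versions (the release name and chart repo alias are assumptions, not taken from the report):

$ helm repo update
$ helm upgrade fluentd-elasticsearch kokuwa/fluentd-elasticsearch --version 11.15.0 -f values.yaml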

Anything else we need to know:
After a rollback to 11.14.0 everything works as expected. The target Elasticsearch is elasticsearch-oss-7.10.2-1.x86_64 from opendistroforelasticsearch-1.13.2-1.x86_64.
uken/fluent-plugin-elasticsearch#912 mentions this could be due to the 7.14 client library from Elasticsearch.
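
As a stop-gap, rolling the release back to the previous chart restores service; a minimal sketch, again assuming the release name and repo alias used above:

$ helm rollback fluentd-elasticsearch
# or pin the chart version explicitly:
$ helm upgrade fluentd-elasticsearch kokuwa/fluentd-elasticsearch --version 11.14.0 -f values.yaml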

TheMeier added the bug label Nov 3, 2021

s7an-it commented Nov 8, 2021

Getting it with the latest chart and 7.15.2 ES.
