
[fluentd-elasticsearch] Incorrect handling of very long log entries (>16K characters) #97

Open
baczus opened this issue Nov 25, 2021 · 1 comment
Labels
bug Something isn't working

Comments


baczus commented Nov 25, 2021

Describe the bug
My application logs very long messages (more than 16K characters) as a single entry. The problem is that such entries are split into two separate records in Elasticsearch/Kibana.

Example Docker logs:

{"log":"09:25:23.626 very_long_message_that_is_cut_after_16_k_characters...","stream":"stdout","time":"2021-11-25T09:25:23.629585 122Z"} {"log":"09:25:23.629 rest_of_very_long_message\n","stream":"stdout","time":"2021-11-25T09:25:23.629585122Z"}

I assume that the problem is related to concat plugin configuration: https://github.com/kokuwaio/helm-charts/blob/main/charts/fluentd-elasticsearch/templates/configmaps.yaml#L158

The problem seems to be fixed when I change key message to key log in that concat filter.
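
This is not the chart's exact configuration, just a rough sketch of the kind of concat filter that joins Docker's 16K partial lines by detecting the trailing newline (the kubernetes.** match pattern is illustrative):

    <filter kubernetes.**>
      @type concat
      key log
      use_first_timestamp true
      # only the final fragment of a split Docker json-file log line ends with "\n"
      multiline_end_regexp /\n$/
      separator ""
    </filter>

The key point is that with the json-file logging driver the raw field is named log; a concat filter keyed to message only sees that field after parsing, so the 16K fragments are never rejoined.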

Version of Helm and Kubernetes:

Helm Version: 3.7.1

Kubernetes Version: 1.19

Which version of the chart: 13.1.0

What happened: A long log entry is split into two records in Elasticsearch/Kibana.

What you expected to happen: A long log entry should be saved as a single record in Elasticsearch/Kibana.

How to reproduce it (as minimally and precisely as possible): Run a container that generates a very long log entry, at least 16K characters.

baczus added the bug label on Nov 25, 2021

hoxhaje commented Mar 3, 2023

Hello. I have been facing the same issue for the last couple of days.
Changing key message to key log does seem to fix the parsing of those large logs, but after that I stopped getting app/backend logs streamed at all.
I added max_lines 65536, which I hoped would help, but still no luck. What else might be causing this issue?
Thanks
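
One possible cause, assuming a concat filter with a timeout is still in the pipeline: events that hit the concat flush timeout are only re-emitted to the label named by timeout_label, so without such a route the non-concatenated logs may never reach the output. A rough sketch of that routing pattern (the kubernetes.** tag and @NORMAL label are illustrative, and stdout stands in for the real Elasticsearch output):

    <filter kubernetes.**>
      @type concat
      key log
      separator ""
      multiline_end_regexp /\n$/
      flush_interval 5
      timeout_label @NORMAL
    </filter>

    # concatenated events continue down the normal pipeline; relabel them
    # so they end up in the same place as the timed-out single-line events
    <match kubernetes.**>
      @type relabel
      @label @NORMAL
    </match>

    <label @NORMAL>
      <match kubernetes.**>
        # the real Elasticsearch output block would go here
        @type stdout
      </match>
    </label>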
