Add new parser to be able to simply count lines that match a pattern #16033

Open · tguenneguez opened this issue Oct 16, 2024 · 5 comments
Labels: feature request

@tguenneguez (Contributor)

Use Case

Be able to simply count the number of lines in a stream that do or do not match a pattern.
I will develop this plugin, but first I want to share the goal.

Sample specification:

Pattern Parser Plugin

The pattern parser creates metrics from a stream of lines.
It counts the number of lines matching a pattern.

Configuration

[[inputs.file]]
  files = ["/tmp/test.log"]

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ##   https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "pattern"

  ## This is a list of searches to check the given stream for.
  ## An entry can have the following properties:
  ##  tag_name    --  Name of the tag added to the metric.
  ##  tag_value   --  Value of the tag added to the metric.
  ##  pattern     --  Pattern searched in the stream.
  pattern_searches = [
      { tag_name = "severite", tag_value = "error", pattern = "error" },
      { tag_name = "severite", tag_value = "warning", pattern = "[Ww]arning" },
      { tag_name = "severite", tag_value = "connection timeout", pattern = "connection to .* timeout" },
    ]

Metrics

One metric is created for each search, with the tag "tag_name" set to "tag_value".

Examples

Config:

[[inputs.file]]
  files = ["example"]
  data_format = "pattern"
  pattern_searches = [
      { tag_name = "status", tag_value = "Job success", pattern = "Job successfully completed" }
      ]

Input:

Job failed to run

Output:

file_pattern,status=Job\ success match_count=0,not_match_count=1

Config:

[[inputs.file]]
  files = ["example"]
  data_format = "pattern"
  pattern_searches = [
      { tag_name = "error_code", tag_value = "2XX", pattern = "^([^ ]* ){3}\[.*\] "[^"]*" (2[0-9][0-9]) .*$" },
      { tag_name = "error_code", tag_value = "3XX", pattern = "^([^ ]* ){3}\[.*\] "[^"]*" (3[0-9][0-9]) .*$" },
      { tag_name = "error_code", tag_value = "4XX", pattern = "^([^ ]* ){3}\[.*\] "[^"]*" (4[0-9][0-9]) .*$" },
      { tag_name = "error_code", tag_value = "5XX", pattern = "^([^ ]* ){3}\[.*\] "[^"]*" (5[0-9][0-9]) .*$" },
    ]

Input:

10.158.236.103 - - [15/Oct/2024:13:58:15 +0200] "GET / HTTP/1.0" 200 14 2195 0 - "-" "-" -
10.158.236.103 - - [15/Oct/2024:13:58:15 +0200] "POST / HTTP/1.0" 201 14 2195 0 - "-" "-" -
10.158.236.103 - - [15/Oct/2024:13:58:15 +0200] "GET /test.html HTTP/1.0" 500 14 2195 0 - "-" "-" -
10.158.236.103 - - [15/Oct/2024:13:58:15 +0200] "GET /login HTTP/1.0" 400 14 2195 0 - "-" "-" -
10.158.236.103 - - [15/Oct/2024:13:59:15 +0200] "GET / HTTP/1.0" 200 14 2195 0 - "-" "-" -

Output:

file_pattern,error_code=2XX match_count=3,not_match_count=2
file_pattern,error_code=3XX match_count=0,not_match_count=5
file_pattern,error_code=4XX match_count=1,not_match_count=4
file_pattern,error_code=5XX match_count=1,not_match_count=4

Expected behavior

Have a simple plugin to count lines that match a pattern.

Actual behavior

In fact, some use cases are possible by combining the grok parser and an aggregator, but this is very cumbersome to implement.
It is also very difficult to configure these plugins correctly, especially with logs whose content is not precisely structured, for example when counting the word "Error" anywhere in a line.
Finally, if no line matches, no value is returned at all.
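
For illustration, such a workaround could look roughly like the sketch below (a hedged example only: the SEVERITY custom pattern and the "severity" field name are placeholders, while grok and the valuecounter aggregator are the existing plugins involved):

[[inputs.file]]
  files = ["/tmp/test.log"]
  data_format = "grok"
  ## Custom pattern matching the literal word "error" anywhere in a line
  grok_custom_patterns = '''
SEVERITY error
'''
  grok_patterns = ["%{SEVERITY:severity}"]

[[aggregators.valuecounter]]
  ## Count the occurrences of each value seen in the "severity" field
  period = "30s"
  drop_original = true
  fields = ["severity"]

This already requires two plugins and a custom pattern just to count a single keyword.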

Additional info

No response

@tguenneguez added the feature request label on Oct 16, 2024
@julien64140

I have the same need: retrieving the number of occurrences of a searched pattern. I am interested in this enhancement.

@srebhan (Member) commented Oct 16, 2024

Could we please stick to English here so everyone can participate in the discussion!?

@srebhan (Member) commented Oct 16, 2024

@tguenneguez would it make sense to extend the grok parser to be able to do this? The reason is that there are already many predefined patterns that can be used for matching... Furthermore, I wonder what the use case of not_match_count is?

@tguenneguez (Contributor, Author)

My point of view on the two questions:

1. Would it make sense to extend the grok parser to be able to do this?
   The grok parser is very handy, but it assumes that the content of the file follows a precise format known in advance.
   If you simply want to know the number of lines that contain OK, you must:
   • define a custom pattern:
     grok_custom_patterns = '''
     SEARCH OK
     '''
   • use this custom pattern in a search:
     grok_patterns = ["%{SEARCH:find_ok}"]
   • use the valuecounter aggregator to count the number of matching lines
   • do something that I don't know of to get a default value of "0" if no line matches
2. The reason is that there are already many predefined patterns that can be used for matching... Furthermore, I wonder what the use case of not_match_count is?
   If you know that an executable normally returns only a single line like "treatment OK", you would want to know whether there are lines that do not match this string.

For a typical user (not a Telegraf expert or a developer of the solution), it is almost impossible to implement this setup and make it work.

@srebhan (Member) commented Oct 18, 2024

@tguenneguez let me address some things you assume:

If you simply want to know the number of lines that contain OK, you must

No, that's wrong. Grok uses regular expressions just like the ones you've shown in your initial post. With grok you just have the additional benefit of being able to use predefined patterns instead of having to come up with regexps for standard things.

If you know that an executable normally returns only a single line like "treatment OK", you would want to know whether there are lines that do not match this string.

Yeah, but you could also use a "not matching" regexp for exactly this. Why do you assume that someone would, in general, be interested in this? Alternatively, we could define a flag that generates a "remaining" metric output which sets a special value.

In my view we should have

[[inputs.file]]
  files = ["example"]
  data_format = "grok"
  grok_named_patterns = [
    { name = "2XX", pattern = ' 2\d{2} ' },
    { name = "3XX", pattern = ' 3\d{2} ' },
    { name = "4XX", pattern = '"%{WORD:method} %{PATH:path} HTTP/.?\..?" 4\d{2} ' },
    { name = "5XX", pattern = '"%{WORD:method} %{PATH:path} HTTP/.?\..?" 5\d{2} ' },
    { name = "default" }
  ]

which should result in

file,pattern=2XX value=0i
file,pattern=2XX value=0i
file,pattern=5XX method="GET",path="/test.html" value=0i
file,pattern=4XX method="GET",path="/login" value=0i
file,pattern=2XX value=0i

You can then aggregate over the methods and count the patterns if you wish. What do you think?
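
For illustration, that aggregation step could be sketched as follows (only a sketch; the existing basicstats aggregator with stats = ["count"] is one way to count the points emitted per "pattern" tag):

[[aggregators.basicstats]]
  ## Count how many points were produced per series, i.e. per "pattern" tag value
  period = "30s"
  drop_original = true
  stats = ["count"]
  ## Only aggregate the metrics coming from the file input above
  namepass = ["file"]

With the example output above, this should yield one count per pattern per period, e.g. something like file,pattern=2XX value_count=3.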
