Centralized Application Logs with the Elastic Stack

This repository gives an overview of five different logging patterns:

  • Parse: Take the log files of your applications and extract the relevant pieces of information (a minimal sketch of this pattern follows the list).
  • Send: Add a log appender to send out your events directly without persisting them to a log file.
  • Structure: Write your events to a structured file, which you can then centralize.
  • Containerize: Keep track of short-lived containers and configure their logging correctly.
  • Orchestrate: Stay on top of your logs even when services are short-lived and dynamically allocated on Kubernetes.
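
To give a flavour of the Parse pattern, a file-based setup looks roughly like this in Python; the file name, logger name, and log format below are illustrative assumptions rather than the exact configuration of this repository.

    import logging

    # Illustrative sketch only: the file name, logger name, and format string are
    # assumptions, not necessarily what this repository uses.
    logging.basicConfig(
        filename="app.log",
        level=logging.INFO,
        # Produces lines such as: [2019-06-20 12:00:00,000] INFO    an event happened
        format="[%(asctime)s] %(levelname)-7s %(message)s",
    )

    logging.getLogger("demo").info("an event happened")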

The slides for this talk are available on Speaker Deck.

Dependencies

  • Python 2 or 3 to run the Python code (but you don't need this if using the containerized app).
  • Docker (and Docker Compose) to run all the required components of the Elastic Stack (Filebeat, Logstash, Elasticsearch, and Kibana) and the containerized Python application.

Usage

  • Bring up the Elastic Stack: $ docker-compose up --build
  • Rerun the Python logging example application if necessary: $ docker restart <ID of the python app>
  • Remove the Elastic Stack (and its volumes): $ docker-compose down -v

Demo

  1. Take a look at the code: which of the logging patterns are we building with the log statements here?

Parse

  1. Copy a log line and start parsing it with the Grok Debugger in Kibana, for example with the pattern ^\[%{TIMESTAMP_ISO8601:timestamp}\]%{SPACE}%{LOGLEVEL:level}. Show https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns to get started; the rest is done in logstash.conf.
  2. Point to https://github.com/elastic/ecs for the naming conventions.
  3. Show the Data Visualizer in Machine Learning by uploading the log file. The output is actually quite good already, but we are sticking with our manual rules for now.
  4. Find the log statements in Kibana's Discover view for the parse index.
  5. Show the pipeline in Kibana's Monitoring view as well as the other components in Monitoring.
  6. How many log events should we have? 40. But we have 42 entries instead. Even though 42 would generally be the perfect number, here it's not.
  7. See the _grokparsefailure in the tags field. Enable the multiline rules in Filebeat; it should pick up the change automatically, and when you run the application again it should only collect 40 events (the traceback sketch after this list shows where the extra lines come from).
  8. Show that this is working as expected now and drill down to the errors to see which emoji we are logging.
  9. Create a vertical bar chart visualization on the level field, then break it down further by session.
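
The two extra entries typically come from a multi-line statement such as a stack trace: without the multiline rules, Filebeat ships every physical line of the traceback as its own event, and those extra lines fail the grok pattern. A minimal illustration (the logger name and error are made up):

    import logging

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("demo")

    try:
        1 / 0
    except ZeroDivisionError:
        # logger.exception() appends the full traceback, so one logical event
        # spans several physical lines; without multiline rules each line is
        # shipped as a separate event and tagged with _grokparsefailure.
        logger.exception("something went wrong")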

Send

  1. Show that the logs are missing from the first run, since no connection to Logstash had been established yet.
  2. Rerun the application and see that it is working now; this also demonstrates the main downside of this approach.
  3. Finally, you would need to rename the fields to match ECS in a Logstash filter (a sketch of the sending side follows this list).
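
For reference, the sending side looks roughly like this; the sketch assumes the python-logstash package, and the host, port, and field names are placeholders rather than the exact values used here.

    import logging

    import logstash  # assumption: the python-logstash package is installed

    logger = logging.getLogger("demo")
    logger.setLevel(logging.INFO)

    # Ship events over TCP straight to Logstash instead of writing a log file.
    # If Logstash is not reachable yet (as on the first run), the events are lost,
    # which is the main downside of this pattern.
    logger.addHandler(logstash.TCPLogstashHandler("localhost", 5959, version=1))

    logger.info("an event happened", extra={"session": "abc123"})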

Structure

  1. Run the application and show the data in the structure index.
  2. Show the JSON logging configuration, since it is a little more complicated than the others (a sketch follows this list).
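
A rough sketch of the structured variant, assuming the python-json-logger package (the repository's actual configuration and field names may differ):

    import logging

    from pythonjsonlogger import jsonlogger  # assumption: python-json-logger is installed

    # Write one JSON document per line; Filebeat can decode the JSON directly,
    # so no grok parsing is needed downstream.
    handler = logging.FileHandler("app.json.log")  # illustrative file name
    handler.setFormatter(jsonlogger.JsonFormatter("%(asctime)s %(levelname)s %(name)s %(message)s"))

    logger = logging.getLogger("demo")
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)

    logger.info("an event happened", extra={"session": "abc123"})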

Containerize

  1. Show the metadata we are collecting now.
  2. See why the console output works here, but turn off the colorization, since otherwise the parsing breaks (a minimal stdout logging sketch follows this list).
  3. Turn on the ingest pipeline, restart Docker Compose, and show how everything works.
  4. See why we needed the grok failure rule: it catches the startup error from sending to Logstash directly.
  5. Filter to the right container name and point out the hinting that stops the multiline statements from being broken up.
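
The containerized variant simply logs to stdout and lets Docker and Filebeat add the container metadata; a minimal sketch (names are illustrative), deliberately without any colorized output:

    import logging
    import sys

    # Log to stdout so the Docker logging driver captures the events and Filebeat
    # can enrich them with container metadata. Keep the format free of ANSI color
    # codes, because colorization breaks the downstream parsing.
    logging.basicConfig(
        stream=sys.stdout,
        level=logging.INFO,
        format="[%(asctime)s] %(levelname)-7s %(message)s",
    )

    logging.getLogger("demo").info("an event happened")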
