What is the current behavior?
Right now, we use Loki to query and display metrics via log aggregation. However, this could become a bottleneck at scale, depending on how many logs we are ingesting.
One could also argue we're not using the right tool for our use case, which is showing metrics; log aggregation may be overkill for that.
Instead, what if we built a custom exporter (ingestor) that scrapes each log and emits time-series metrics derived from it? This would allow for very responsive metric querying and visualization, while still relying on Loki for in-depth log analysis.
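To make the proposal concrete, here is a minimal stdlib-only sketch of such an exporter: it parses log lines into counters and serves them in the Prometheus text exposition format. The log-line shape (method, path, status), the metric name `relay_requests_total`, and port 9200 are assumptions for illustration, not our real format.

```python
# Hypothetical sketch of a custom log-to-metrics exporter (stdlib only).
# A real deployment would tail the log file and feed each new line to ingest().
import re
from collections import Counter
from http.server import BaseHTTPRequestHandler, HTTPServer

# Assumed log line shape, e.g.: "2023-01-01T00:00:00Z GET /v1/relay 200 12ms"
LINE_RE = re.compile(r"\b(?P<method>GET|POST)\s+(?P<path>\S+)\s+(?P<status>\d{3})\b")

counters = Counter()  # (method, status) -> request count

def ingest(line):
    """Update counters from one log line; silently skip lines that don't match."""
    m = LINE_RE.search(line)
    if m:
        counters[(m["method"], m["status"])] += 1

def render_metrics():
    """Render the counters in the Prometheus text exposition format."""
    out = ["# TYPE relay_requests_total counter"]
    for (method, status), n in sorted(counters.items()):
        out.append(f'relay_requests_total{{method="{method}",status="{status}"}} {n}')
    return "\n".join(out) + "\n"

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = render_metrics().encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.end_headers()
        self.wfile.write(body)

def serve(port=9200):
    """Expose /metrics for Prometheus to scrape (blocks forever)."""
    HTTPServer(("", port), MetricsHandler).serve_forever()
```

Since the exporter only holds aggregated counters in memory, its footprint stays flat no matter how many log lines flow through it, which is the scaling question from the notes below.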
Notes
Need to baseline performance and determine whether the benefits are worth it.
How well does an ingestor/exporter scale when ingesting logs?
Can we use grok_exporter?
Is this an anti-pattern for what Loki is supposed to be? Or can we use both: one for metrics, one for in-depth debugging?
Does promtail/Loki already support something like this, so we don't have to spin up another process?
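On the last question: promtail's pipeline does include a `metrics` stage that derives Prometheus counters, gauges, and histograms from log lines and exposes them on promtail's own `/metrics` endpoint, which could avoid a separate exporter process entirely. A config sketch, where the regex, metric name, and file path are assumptions about our log format:

```yaml
# Hypothetical promtail scrape config deriving a counter from log lines.
scrape_configs:
  - job_name: relay_logs
    static_configs:
      - targets: [localhost]
        labels:
          job: relay
          __path__: /var/log/relay.log
    pipeline_stages:
      - regex:
          expression: '\b(?P<method>GET|POST)\s+\S+\s+(?P<status>\d{3})\b'
      - metrics:
          relay_requests_total:
            type: Counter
            description: "Relay requests parsed from logs"
            source: status
            config:
              action: inc
```

This would need benchmarking against a standalone exporter, but it keeps the metric derivation in a process we already run.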
Instead of wasting a lot of write IO on parsing logs, we're descoping this. I worked with Jorge from Poktscan on the next GeoMesh update, which should include much more verbose metrics being emitted. This, alongside a general-purpose HAProxy/Nginx dashboard, should be more powerful than our existing solution. See below.
Loki can still be used for log aggregation and tracing, but we'll stop using it for visualization purposes. https://github.com/baaspoolsllc/nodies_monitoring/tree/yuppie/863g7wjtw is the working branch with the new dashboard, but this is TBD on the GeoMesh update, which should result in curated metrics.