Merge branch 'main' into farewell-bevity
pollett authored Oct 9, 2024
2 parents 780bd6e + 35a8594 commit 4a5e051
Showing 3 changed files with 26 additions and 2 deletions.
3 changes: 1 addition & 2 deletions docs/modules/ROOT/nav.adoc
Original file line number Diff line number Diff line change
Expand Up @@ -64,8 +64,7 @@
*** xref:data-formats:xml.adoc[XML]
*** xref:data-formats:protobuf.adoc[Protobuf]
* Performance
//** xref:streams:streaming-data.adoc[Benchmarks]
* xref:deploying:performance.adoc[]
.Query
* xref:querying:writing-queries.adoc[Query with {short-product-name}]
Expand Down
Binary file added docs/modules/deploying/images/perf-metrics.png
25 changes: 25 additions & 0 deletions docs/modules/deploying/pages/performance.adoc
Original file line number Diff line number Diff line change
@@ -0,0 +1,25 @@
= Performance
:description: Performance when running {short-product-name}

Running {short-product-name} in production and at scale.

== Minimum specifications

We recommend running {short-product-name} on a machine with at least 2 CPUs and 8 GiB of memory.

For production resilience, run multiple {short-product-name} instances so that the service stays available if one instance fails.
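
As an illustrative sketch only (not an official manifest), a Kubernetes Deployment meeting these recommendations could request two replicas, each with the recommended minimum of 2 CPUs and 8 GiB of memory. The name, labels, and image below are placeholders for your own deployment:

[source,yaml]
----
# Hypothetical sketch: two replicas, each sized to the
# recommended minimum of 2 CPUs and 8 GiB of memory.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app               # placeholder name
spec:
  replicas: 2                # multiple instances for continued uptime
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: example/my-app:latest   # placeholder image
          resources:
            requests:
              cpu: "2"
              memory: 8Gi
----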

{short-product-name} makes efficient use of spare resources, so it scales very effectively as you add CPU and memory to the underlying hardware.

Doubling the available resources can roughly halve your processing latency and increase your overall throughput.

== Tuning

When running {short-product-name} in xref:deploying:production-deployments.adoc[production], ensure you've configured an external xref:querying:observability.adoc#performance-metrics--prometheus[Prometheus] server to capture your metrics. {short-product-name} will then graph them on the Endpoint page, where you can see the throughput of your query and the latency of each message processed.
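
A minimal Prometheus scrape job for this setup might look like the following sketch; the job name, target address, scrape interval, and metrics path are all placeholders or assumed defaults, so adjust them to match your deployment:

[source,yaml]
----
# prometheus.yml (illustrative sketch): scrape metrics from a running instance.
scrape_configs:
  - job_name: 'my-app'                    # placeholder job name
    metrics_path: /metrics                # assumed default metrics endpoint
    scrape_interval: 15s
    static_configs:
      - targets: ['my-app-host:9090']     # replace with your instance address
----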

image:perf-metrics.png[Throughput and latency graphs on the Endpoint page]

The throughput of {short-product-name} depends heavily on the complexity of your queries and on the external systems those queries must call. A query running a basic streaming process with minimal lookups should scale until it exhausts your available processing resources. A query with many external lookups and connections is more likely to be bound by I/O time, so monitor your query durations closely.

If your external lookups are cacheable, consider using {short-product-name}'s native xref:describing-data-sources:caching.adoc[caching] annotations to avoid making the same API requests repeatedly, serving the data from local memory instead.
