= Performance
:description: Performance when running {short-product-name}

Running {short-product-name} in production and at scale.

== Minimum specifications

We recommend running {short-product-name} on a machine with at least 2 CPUs and 8 GiB of memory.

For production resilience, run multiple {short-product-name} instances so that the service stays available if one instance fails.

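As a minimal sketch, a containerized deployment meeting these recommendations might look like the following. This is an illustrative example, not shipped configuration: the deployment name, image, and labels are placeholders.

```yaml
# Hypothetical Kubernetes Deployment: two replicas for resilience, each
# sized to the recommended minimum of 2 CPUs and 8 GiB of memory.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: short-product-name               # placeholder name
spec:
  replicas: 2                            # multiple instances for continued uptime
  selector:
    matchLabels:
      app: short-product-name
  template:
    metadata:
      labels:
        app: short-product-name
    spec:
      containers:
        - name: short-product-name
          image: example.org/short-product-name:latest   # placeholder image
          resources:
            requests:
              cpu: "2"
              memory: 8Gi
            limits:
              memory: 8Gi
```
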
{short-product-name} makes efficient use of spare resources, so it scales well as you increase the hardware available to it.

Doubling the available resources can roughly halve your processing latency and increase your overall throughput.

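As a back-of-the-envelope illustration of that rule of thumb, the sketch below assumes near-linear scaling; it is not a measured model of {short-product-name}, and real gains depend on your workload.

```python
def estimate_after_scaling(latency_ms: float, throughput_rps: float,
                           scale_factor: float) -> tuple[float, float]:
    """Rough estimate assuming near-linear scaling: multiplying resources
    by scale_factor divides latency by it and multiplies throughput by it."""
    return latency_ms / scale_factor, throughput_rps * scale_factor


# Example: doubling resources roughly halves latency and doubles throughput.
latency, throughput = estimate_after_scaling(120.0, 500.0, 2.0)
print(latency, throughput)  # 60.0 1000.0
```

Treat the output as an upper bound: IO-bound queries (see below) will see smaller gains from added CPU and memory.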
== Tuning

When running {short-product-name} in xref:deploying:production-deployments.adoc[production], ensure you've configured an external xref:querying:observability.adoc#performance-metrics--prometheus[Prometheus] server to capture your metrics; {short-product-name} will then graph them on the Endpoint page. From there, you can see the throughput of your query and the latency of each message processed.

image:perf-metrics.png[]

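A minimal Prometheus scrape configuration might look like the sketch below. The job name, target host, port, and metrics path are placeholders; check the observability documentation above for the actual endpoint {short-product-name} exposes.

```yaml
# Hypothetical scrape config: target and metrics_path are placeholders.
scrape_configs:
  - job_name: 'short-product-name'
    metrics_path: /metrics            # replace with the product's actual path
    scrape_interval: 15s
    static_configs:
      - targets: ['short-product-name.example.internal:9090']  # placeholder
```
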
The throughput of {short-product-name} depends heavily on the complexity of your queries and on the external services the system must call. A query running a basic streaming process with minimal lookups should scale until it exhausts your available processing resources. A query with many external lookups and connections is more likely to be bound by IO time, so monitor your query durations closely.

If your external lookups are cacheable, consider using {short-product-name}'s native xref:describing-data-sources:caching.adoc[caching] annotations to avoid making the same API requests repeatedly, retrieving the data from local memory instead.