High memory usage from daemon on chain synchronization
Systems with low available RAM may crash when a large amount of historical chain data must be synced onto the node. This happens because the daemon's memory usage grows without bound while downloading the chain and validating blocks.
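One common way to cap memory during a sync like this is backpressure between the download and validation stages: a bounded queue forces the downloader to stall once a fixed number of blocks are pending, instead of buffering an ever-growing backlog in RAM. The following is a minimal illustrative sketch in Python, not the daemon's actual code; the stage names and the queue size are assumptions.

```python
import queue
import threading

# Hypothetical sketch: a bounded queue between the block downloader and the
# validator. put() blocks once the queue is full, so downloading stalls
# instead of accumulating an unbounded backlog of blocks in memory.
MAX_PENDING_BLOCKS = 64

pending = queue.Queue(maxsize=MAX_PENDING_BLOCKS)

def downloader(blocks):
    for block in blocks:
        pending.put(block)      # waits here whenever 64 blocks are pending
    pending.put(None)           # sentinel: no more blocks to download

def validator(results):
    while True:
        block = pending.get()
        if block is None:
            break
        results.append(block)   # stand-in for real block validation

results = []
blocks = list(range(1000))      # stand-in for historical chain data
t1 = threading.Thread(target=downloader, args=(blocks,))
t2 = threading.Thread(target=validator, args=(results,))
t1.start(); t2.start()
t1.join(); t2.join()
```

With this shape, peak memory held by in-flight blocks is bounded by the queue size rather than by the length of the chain.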
Until a fix is implemented, it is recommended that nodes have a minimum of 5 GB of RAM available for the daemon when performing a full sync. Once the daemon is synced to the chain tip, memory usage drops to a more manageable level. Adding a swap partition or swap file will also help performance: if memory pressure spills into swap instead of exhausting RAM, the node may be able to get by with less physical memory.
Profiling the daemon's memory usage may be needed to narrow down where memory is consumed and reduce it. Additionally, a more aggressive garbage-collection strategy is needed to ensure that internal lists and maps do not grow to large sizes, and that they are emptied periodically to free up RAM.
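The internal-map concern can be addressed with a size-capped container that evicts its oldest entries on insert, so the structure cannot grow without bound over a long sync. A minimal sketch in Python follows; the `BoundedMap` name, the cap, and the "seen block hashes" use case are illustrative assumptions, not the daemon's actual data structures.

```python
from collections import OrderedDict

# Hypothetical sketch: a size-capped map for daemon-internal state
# (e.g. recently seen block hashes). Inserting past the cap evicts the
# oldest entries, so the map's memory footprint stays bounded.
class BoundedMap(OrderedDict):
    def __init__(self, max_entries):
        super().__init__()
        self.max_entries = max_entries

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        self.move_to_end(key)              # mark key as most recently set
        while len(self) > self.max_entries:
            self.popitem(last=False)       # evict the oldest entry

seen = BoundedMap(max_entries=1000)
for height in range(5000):
    seen[height] = f"hash-{height}"
```

After inserting 5000 entries the map holds only the newest 1000, trading a bounded amount of re-fetching for a hard ceiling on memory, which is usually the right trade during an initial sync.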
This ticket tracks the issue and its eventual solution for reference.