I have noticed 2 issues that prevent Feluda from being resilient to errors:
Feluda's API server, indexer, and reporter should each be scalable independently without issues. The most useful candidate here is the indexer: it can take anywhere from 1 second to 10 seconds to index an image or video, so being able to spin up multiple indexer instances would help process large batches of index requests quickly.
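A minimal sketch of what this could look like, assuming the indexer consumes jobs from a RabbitMQ queue (the queue name, host, and handler below are assumptions, not Feluda's actual code). With `prefetch_count=1`, each replica picks up one job at a time, so simply running more replicas of this process increases indexing throughput:

```python
import pika

def handle_index_job(ch, method, properties, body):
    # Placeholder for the real indexing work, which can take 1-10 seconds per item.
    print(f"indexing {body!r}")
    ch.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq"))
channel = connection.channel()
channel.queue_declare(queue="index-queue", durable=True)
# One unacknowledged job per replica, so a slow video job doesn't pile work onto a single worker.
channel.basic_qos(prefetch_count=1)
channel.basic_consume(queue="index-queue", on_message_callback=handle_index_job)
channel.start_consuming()
```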
When the indexer or reporter loses connectivity to RabbitMQ or Elasticsearch, it does not seem to recover well. But since the process keeps running, Kubernetes does not register the container as crashing or dead, so it currently has to be restarted manually.
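One way to address this, sketched below under the assumption that Feluda runs on Kubernetes and uses pika and elasticsearch-py (the hostnames and ports are placeholders): a small check script that exits non-zero when either dependency is unreachable. Wired up as an exec liveness probe, it would let Kubernetes restart the container instead of leaving a wedged process running:

```python
import sys

import pika
from elasticsearch import Elasticsearch

def rabbitmq_ok(host: str = "rabbitmq") -> bool:
    try:
        # Opening and closing a connection is enough to prove the broker is reachable.
        conn = pika.BlockingConnection(pika.ConnectionParameters(host=host))
        conn.close()
        return True
    except Exception:
        return False

def elasticsearch_ok(url: str = "http://elasticsearch:9200") -> bool:
    try:
        return Elasticsearch(url).ping()
    except Exception:
        return False

if __name__ == "__main__":
    # A non-zero exit fails the liveness probe and triggers a container restart.
    sys.exit(0 if rabbitmq_ok() and elasticsearch_ok() else 1)
```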
We can build on the work here to audit Feluda's performance and identify bottlenecks:
tattle-made/services-infrastructure#6