What problem or use case are you trying to solve?
I've set up OpenHands for people to tinker with, and some of their data-analysis flows eat >200 GB of memory and tank the box.
Describe the UX of the solution you'd like
No UX needed; something like SANDBOX_MEMORY_LIMIT=100g in the configuration would be enough.
Do you have thoughts on the technical implementation?
Yes: if that variable is set, launch the sandbox container with
MEMORY_LIMIT=100g docker-compose up
or similar.
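One way the suggestion above might be wired up (the service name and variable here are hypothetical; the actual compose file may differ) is an environment-driven limit in docker-compose.yml:

```yaml
services:
  sandbox:
    # Hypothetical: cap container memory from the environment;
    # falls back to 100g if SANDBOX_MEMORY_LIMIT is unset.
    mem_limit: ${SANDBOX_MEMORY_LIMIT:-100g}
```

With this in place, `SANDBOX_MEMORY_LIMIT=100g docker-compose up` would start the sandbox with a hard 100 GB cap, and the kernel OOM killer would target the container rather than the host.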
Describe alternatives you've considered
My stopgap was to put a memory limit on the Docker cgroup on the host, keeping Docker itself from using more than 3/4 of system RAM. That at least keeps the box responsive enough to kill a runaway container. It was fine for MY problem, but it would be a terrible way to implement this in a "real" environment.
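For reference, the stopgap described above can be sketched roughly as follows, assuming cgroup v1 with the cgroupfs driver (so containers live under the `docker` memory cgroup); paths differ under cgroup v2 or the systemd driver:

```shell
#!/bin/sh
# Cap everything under Docker's cgroup at 3/4 of host RAM (cgroup v1, cgroupfs driver).

# Total RAM in bytes; printf avoids awk's scientific notation on large values.
TOTAL=$(awk '/MemTotal/ {printf "%.0f", $2 * 1024}' /proc/meminfo)
LIMIT=$((TOTAL * 3 / 4))

# Write the limit into the docker memory cgroup.
echo "$LIMIT" | sudo tee /sys/fs/cgroup/memory/docker/memory.limit_in_bytes
```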
Additional context
I think this makes a ton of sense to have, given the existing effort around other resource limits (e.g., concurrent LLM requests without user interaction, max quota/requests/cost, etc.).