
Adding Windows support for LM Studio #137

Closed
ayoubachak opened this issue Apr 8, 2024 · 3 comments

Comments

@ayoubachak

Describe the feature

The whole project is currently built on a Docker container structure, which is great for portability, but it would be great to have support for Windows and Linux without needing Docker, for better access to code debugging, hardware, and testing. It would also make it easier to support other LLMs, as mentioned in #64 (comment), and especially LM Studio, since it provides access to a wide range of open-source models via Hugging Face at no cost.
I know I could just install Docker, but full Windows support would be more convenient.

Potential Solutions

I've noticed the extensive use of Linux commands, and I'm not sure they will all be available on Windows. We could work around this by checking the platform using a Python library (e.g. the platform module), or by replacing shell commands with Python code, which handles platform differences automatically.
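The idea above can be sketched in Python's standard library. The helper below is hypothetical and not part of the SWE-agent codebase; it just illustrates replacing a Linux command (`find . -name '*.py'`) with portable Python that works the same on Windows, Linux, and macOS:

```python
import pathlib
import platform

def list_python_files(root="."):
    """Portable replacement for `find . -name '*.py'`.

    Uses pathlib instead of shelling out, so no per-platform
    command translation is needed. (Illustrative helper, not
    taken from the project.)
    """
    return sorted(str(p) for p in pathlib.Path(root).rglob("*.py"))

# platform.system() returns 'Windows', 'Linux', or 'Darwin',
# which could gate any commands that truly have no portable equivalent.
print(platform.system())
```

Commands that must stay as shell invocations could then be selected per platform based on `platform.system()`, while everything else is rewritten in pure Python as sketched here.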

@klieret
Member

klieret commented Apr 8, 2024

While you might not need the Docker container to run run.py (Windows support will probably be ready soon after #17 is done), the commands will always be executed in a Docker container running Linux.

To support more models, I think #64 (litellm) makes more sense.

@klieret closed this as not planned Apr 8, 2024

klieret commented Apr 8, 2024

Generally, it's unlikely that local models are good enough to solve these issues at the current state of the art.

@ayoubachak
Author

> Generally, it's unlikely that local models are good enough to solve these issues at the current state of the art.

I totally agree, but as model architectures progress we might be able to run 70B models locally, and we might start seeing models fine-tuned for SWE-agent or similar platforms like Devika, which could perform well.
