
[Feature Request] Enable using other AI sources as a first class citizen #32

Open
jtslear opened this issue May 16, 2024 · 1 comment
jtslear commented May 16, 2024

Problem Statement

Butterfish currently provides an array of options tuned to its use of OpenAI. This is easy to see in the hard-coded models and token options in https://github.com/bakks/butterfish/blob/2a69b7d06c737a7ab17b82035f2b851ddd4426bb/butterfish/common.go

A user who wants to avoid OpenAI runs into a few issues, such as not being able to set equivalent token configurations for another backend. It would be neat if butterfish were friendlier to those who want to run models locally. This would be especially useful when running your own model, or whenever using OpenAI is not desired for any reason.

To be clear, I'm not asking that butterfish run a model itself; rather, that it be more flexible in its configuration when reaching out to another locally run service that processes the requests.
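
As a rough sketch of what that flexibility could look like (all names below are hypothetical, not butterfish's actual API, and the default values are illustrative), the hard-coded entries in common.go could be lifted into a per-provider config that users can override:

```go
// Hypothetical sketch of a provider-agnostic model config. None of these
// names exist in butterfish today; they illustrate what lifting the
// hard-coded values out of butterfish/common.go could look like.
package config

type ModelConfig struct {
	BaseURL         string // API endpoint, e.g. a local OpenAI-compatible server
	Model           string // model name understood by that endpoint
	MaxInputTokens  int    // replaces the per-model constants in common.go
	MaxOutputTokens int
}

// Defaults would preserve today's OpenAI behavior; a user could override
// any field via config file or flags. Values here are placeholders.
func DefaultModelConfig() ModelConfig {
	return ModelConfig{
		BaseURL:         "https://api.openai.com/v1",
		Model:           "gpt-3.5-turbo",
		MaxInputTokens:  3800,
		MaxOutputTokens: 1024,
	}
}
```

With something along these lines, OpenAI stays the zero-config default while other endpoints only need a few overridden fields.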

@januxnet

I second this; I'd like to run local LLM models with ollama as the backend for butterfish.
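
For what it's worth, ollama exposes an OpenAI-compatible HTTP API, so much of this may come down to making the base URL (and the token limits) configurable. A minimal sketch of talking to a local ollama through that API, assuming the github.com/sashabaranov/go-openai client; the port is ollama's default and the model name is just an example, nothing butterfish ships:

```go
// Minimal sketch: pointing an OpenAI-compatible Go client at a local ollama
// instance via its OpenAI-compatible API. Assumes ollama is serving on its
// default port; the model name is illustrative.
package main

import (
	"context"
	"fmt"

	openai "github.com/sashabaranov/go-openai"
)

func main() {
	cfg := openai.DefaultConfig("ollama") // ollama ignores the API key
	cfg.BaseURL = "http://localhost:11434/v1"
	client := openai.NewClientWithConfig(cfg)

	resp, err := client.CreateChatCompletion(context.Background(),
		openai.ChatCompletionRequest{
			Model: "llama3",
			Messages: []openai.ChatCompletionMessage{
				{Role: openai.ChatMessageRoleUser, Content: "Say hello."},
			},
		})
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.Choices[0].Message.Content)
}
```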
