Problem Statement

butterfish provides an array of options tailored to its use of OpenAI. This is easy to see from the hard-coded models and various token options in https://github.com/bakks/butterfish/blob/2a69b7d06c737a7ab17b82035f2b851ddd4426bb/butterfish/common.go

Should a user want to avoid OpenAI, we run into a few issues, such as being unable to apply the same token configuration to another backend. It would be neat if butterfish were friendlier to those who want to run models locally. This would be especially useful when building your own model, or when using OpenAI is not desired for any reason.

To be clear, I'm not asking that butterfish run a model itself; rather, that it be more flexible in its configuration when reaching out to a locally run service that processes the requests.

I second this; I would like to run local LLMs with Ollama through butterfish.
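To make the request concrete, one possible shape for this flexibility, sketched with hypothetical flags (these are not confirmed butterfish options): let the user point the OpenAI client at a local OpenAI-compatible endpoint, such as the one Ollama serves at http://localhost:11434/v1, and override the model and token settings to match it.

```shell
# Sketch only: --base-url, --model, and --num-tokens are hypothetical flags
# illustrating the requested flexibility, not current butterfish options.
# Ollama exposes an OpenAI-compatible API at http://localhost:11434/v1.
export OPENAI_API_KEY=unused        # a local server typically ignores the key
butterfish shell \
  --base-url http://localhost:11434/v1 \
  --model llama3 \
  --num-tokens 4096
```

Because Ollama (and several other local servers) speak the OpenAI wire format, a configurable base URL plus per-model token limits would likely cover most local setups without butterfish having to run any model itself.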