Support function calling? #20
Came here to ask if function calling was supported as well.
Hi, thanks for raising this! Function calling is an interesting use case. It is possible to use function calling via the OpenAI format as long as both of the models you are routing to support it. See the compatibility list here: https://docs.litellm.ai/docs/completion/input. We will look into this too.
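For illustration, a minimal sketch of what this could look like, assuming RouteLLM's `Controller` forwards OpenAI-format parameters such as `tools` through to whichever model it routes to (the tool definition and the weak model choice below are hypothetical):

```python
from routellm.controller import Controller

# Assumption: both the strong and the weak model support function calling,
# and the Controller passes extra OpenAI-format kwargs (like `tools`) along.
client = Controller(
    routers=["mf"],
    strong_model="gpt-4-1106-preview",
    weak_model="mistral/mistral-large-latest",  # hypothetical tool-capable weak model
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical function for this example
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="router-mf-0.11593",  # router name format from the RouteLLM README
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)
```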
Hi Isaac @iojw, if I understand correctly, RouteLLM takes a query and uses a router model to make a routing decision. So what stops that query from being a function call?
The main thing is that all the models used have to be trained to support function calling (also known as tool use), since function calling is different from normal text generation. e.g. the available functions are normally injected using a specific syntax that the model has been trained to understand, and generation is constrained so that the response is always syntactically valid. Looking here: https://docs.litellm.ai/docs/completion/input, you can see that only a small number of models support this.
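For illustration, this is roughly what "syntactically valid" means in the OpenAI format the thread refers to: instead of free-form text, a model trained for tool use emits a structured `tool_calls` entry whose arguments are guaranteed-parseable JSON (the id and function name below are made up):

```python
import json

# Shape of an OpenAI-format tool-call response from a function-calling model.
assistant_message = {
    "role": "assistant",
    "content": None,  # no free-form text; the model "answered" with a tool call
    "tool_calls": [{
        "id": "call_abc123",  # hypothetical call id
        "type": "function",
        "function": {
            "name": "get_weather",
            "arguments": '{"city": "Berlin"}',  # constrained to valid JSON
        },
    }],
}

# Because generation is constrained, this parse is expected to succeed.
args = json.loads(assistant_message["tool_calls"][0]["function"]["arguments"])
print(args["city"])  # -> Berlin
```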
Could the fact that function calling is required be a routing criterion?
@mathieuisabel Our existing routers are trained primarily on conversational data from Chatbot Arena, so they do not contain much function-calling data. If there's a large enough function-calling dataset, it could be possible to train separate routers specifically for function calling.
@iojw Maybe a first step to facilitate the implementation would be to have the abstractions in place to allow for that to happen. i.e. an initial implementation where the routing logic is as simple as routing to a model that supports function calling (perhaps with a general consideration of how advanced the reasoning requirements are), without any kind of optimization; see the sketch below. As more training data becomes available, more refined logic could be introduced for those particular calls.
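A rough sketch of that abstraction, using litellm's `get_supported_openai_params` as the capability check (the candidate list is hypothetical, and the final pick is a placeholder rather than a trained router):

```python
from litellm import get_supported_openai_params

# Hypothetical pool of models the router may choose from.
CANDIDATES = ["gpt-4-1106-preview", "claude-3-sonnet-20240229", "mistral/mistral-tiny"]

def supports_tools(model: str) -> bool:
    """True if litellm reports the model accepts the OpenAI `tools` param."""
    params = get_supported_openai_params(model=model) or []
    return "tools" in params

def route(request: dict) -> str:
    # If the request carries tool definitions, restrict routing to
    # function-calling-capable models; otherwise use the full pool.
    if request.get("tools"):
        pool = [m for m in CANDIDATES if supports_tools(m)]
    else:
        pool = CANDIDATES
    # Placeholder: a real implementation would pick by predicted quality/cost.
    return pool[0]
```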
How about delegating function calling to the model? RouteLLM just handles the first part, deciding which model to route to; the rest is done by the model.
Hey, litellm maintainer here - we support a pretty wide range of providers with function calling. A better way to verify is to run `get_supported_openai_params` - it should return all the supported OpenAI params (e.g. tool calling) for a particular model / provider. E.g.:

```python
from litellm import get_supported_openai_params

params = get_supported_openai_params(model="anthropic.claude-3", custom_llm_provider="bedrock")
assert "response_format" in params
```

https://docs.litellm.ai/docs/completion/json_mode#check-model-support

How could our docs be better here? @lylepratt @iojw @therealtimex
How does RouteLLM support function calling?