Mistral AI integration #366
-
Hi @mathieubrunpicard, I converted this to a discussion just to help me stay organized. Thanks for raising this question. I know one other contributor also wants the project to run local models and he's investigating it. cc: @stephan-buckmaster, not sure how far along you are.

Do you chat with Mistral currently? I haven't tried it yet. I just keep checking the public evals, and it seems like Llama 3 is the strongest of the open source models: https://chat.lmsys.org/?leaderboard I do think it would be great to support Llama 3 and probably Mistral too.

The challenge isn't on the Ruby side; it's really the question of where to host the model. Are we using Groq and pointing to their web API? Or do we ask a user to run it locally? And if you run it locally, do you know if it conforms to the OpenAI API spec? If you're running it locally, what are you using? When I've run Llama I use the llamafile format: https://github.com/Mozilla-Ocho/llamafile

Thinking aloud, maybe the task is this:
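On the OpenAI-spec question: llamafile embeds the llama.cpp server, which exposes an OpenAI-compatible chat completions endpoint (on http://localhost:8080 by default). A minimal Ruby sketch of talking to it, assuming a llamafile is already running locally; the port and the `"LLaMA_CPP"` placeholder model name are taken from llamafile's docs, so double-check them against your setup:

```ruby
require "net/http"
require "json"

# Sketch only: llamafile serves an OpenAI-compatible API on
# http://localhost:8080 by default. Port and model name are
# assumptions based on llamafile's documentation.
uri = URI("http://localhost:8080/v1/chat/completions")

request = Net::HTTP::Post.new(uri, "Content-Type" => "application/json")
request.body = {
  model: "LLaMA_CPP", # placeholder name used in llamafile's examples
  messages: [
    { role: "user", content: "Say hello from a local model." }
  ]
}.to_json

response = Net::HTTP.start(uri.hostname, uri.port) { |http| http.request(request) }
puts JSON.parse(response.body).dig("choices", 0, "message", "content")
```

If that holds, supporting a locally hosted model might mostly be a matter of making the API base URL configurable rather than adding a new client library.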
-
HostedGPT currently includes Claude & OpenAI, but could we also have a Mistral AI integration?
I see that there are two libs that could be used in Ruby; the latter (the one described in the linked post) seems the better option, as it mirrors the official Mistral Python lib (cf. https://wilsonsilva.com/mistral-ruby-gem/).
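A hedged sketch of what that gem could look like in use, assuming it mirrors the Python client's `Client`/`ChatMessage` shape as the linked post describes (names are inferred from the Python client, so verify against the gem's README):

```ruby
require "mistral"

# Assumes MISTRAL_API_KEY is set and that the gem mirrors the
# official Python client, as the linked post describes.
client = Mistral::Client.new(api_key: ENV.fetch("MISTRAL_API_KEY"))

response = client.chat(
  model: "mistral-small-latest",
  messages: [
    Mistral::ChatMessage.new(role: "user", content: "Bonjour, Mistral!")
  ]
)

puts response.choices.first.message.content
```

Alternatively, since Mistral's hosted API follows the OpenAI chat completions spec, it may be possible to reuse the existing OpenAI integration with the base URL pointed at https://api.mistral.ai/v1, which would avoid a new dependency entirely.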