Replies: 6 comments 10 replies
-
When I created all these models initially, my intention was to mirror the OpenAI schema closely, but I did intentionally diverge from it in a few small ways. This was the original PR, and I expanded on that a bit there:

Off the top of my head, there were two tricky things. One was minor: "Thread" is a core class in Ruby, so I renamed that to "Conversation". The other is a bit more squishy… OpenAI spec'd out a whole bunch of models for use with their new Assistants API, but as I started to implement it I found that they hadn't finished it. A bunch of basic functionality wasn't supported through the new API, so I fell back to their old "chat" API but used the new models I had created anyway, just to future-proof. But just last month (April) they finally released Assistants API v2.0: https://platform.openai.com/docs/assistants/whats-new

I have not dug into this yet, but ruby-openai just released support for it, so that should make our lives a bit easier. I am not sure yet in what ways they may have shifted their schema.
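For anyone curious why that rename was necessary, here is a minimal illustration of the collision: `Thread` is already a core Ruby class, so a top-level model with that name would reopen the existing class rather than define a new one (the `BasicObject` superclass below is just an arbitrary stand-in for a model base class like `ApplicationRecord`).

```ruby
# Thread is a core Ruby class, so it already exists before any app code loads.
puts Thread.superclass # Object -- the core class is already defined

# Reopening Thread with a different superclass raises TypeError
# ("superclass mismatch"), which is exactly what a model named
# Thread inheriting from a record base class would hit:
begin
  class Thread < BasicObject; end
rescue TypeError => e
  puts e.message # superclass mismatch for class Thread
end
```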
-
I guess one thing I should say too is: let me know what differences you're seeing, and we can either plan to address those or I can share the intent behind them, if there was one.
-
Hi @krschacht, hope you had a good weekend. I have noticed significant differences between the "chat" API and the OpenAI Assistants API. The biggest one was the difference in invocation, which I summarized below:
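To illustrate the invocation difference with the ruby-openai gem (a sketch; the model names and prompt are placeholders, and the actual network calls are shown only in comments): the Chat API is a single stateless call where you resend the whole history each time, while Assistants v2 keeps state server-side and takes several calls — create an assistant, create a thread, add a message, start a run, then poll the run.

```ruby
# Sketch of the two invocation shapes. This file only builds request
# parameters; the client calls are shown in comments.
# Assumes: gem "ruby-openai", client = OpenAI::Client.new(access_token: ...)

# Chat API: one stateless call, full message history sent every time.
def chat_parameters(history)
  { model: "gpt-4o", messages: history } # model name is a placeholder
end
#   response = client.chat(parameters: chat_parameters(
#     [{ role: "user", content: "Hello" }]))
#   response.dig("choices", 0, "message", "content")

# Assistants API v2: state lives server-side, so invocation takes four calls:
#   assistant = client.assistants.create(parameters: { model: "gpt-4o" })
#   thread    = client.threads.create
#   client.messages.create(thread_id: thread["id"],
#                          parameters: { role: "user", content: "Hello" })
#   run = client.runs.create(thread_id: thread["id"],
#                            parameters: { assistant_id: assistant["id"] })
#   # ...then poll client.runs.retrieve(thread_id: thread["id"], id: run["id"])
#   # until run["status"] == "completed", and fetch the thread's messages.
```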
-
Now back to the Assistant model. If we were to support the new v2 Assistants API, would you recommend reusing the existing Assistant model? Currently, assistants seem to store and display Chat API conversations between users and AI models (ChatGPT, Anthropic, etc.), but I may be missing something.
-
@krschacht Maybe looking at an example may help.
My confusion is how we would decide when to use the Chat API and when to use the Assistant API methods.
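One way to frame that decision (a purely hypothetical helper, not anything from the codebase): use the Assistants API only when a feature it alone provides is actually required — its server-side threads or built-in tools — and fall back to the Chat API otherwise, since the Chat API is the shape every backend (Anthropic, Groq, etc.) has an equivalent of.

```ruby
# Hypothetical routing rule: prefer the lowest-common-denominator Chat API
# unless an Assistants-only feature is actually needed.
def api_for(needs_builtin_tools: false, needs_server_side_threads: false)
  if needs_builtin_tools || needs_server_side_threads
    :assistants
  else
    :chat
  end
end

api_for                            # => :chat
api_for(needs_builtin_tools: true) # => :assistants
```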
-
@Rmpanga I wanted to raise one question about this potential migration over to the new Assistants API: I was talking with someone who recently told me that they thought the Assistants' built-in file search (built-in RAG) wasn't very good. Have you tried it? Is it working well for you? I think we should evaluate this before doing the big migration, but maybe you already have.

Also, I have another way to potentially do file search. Right now I'm in the process of implementing Tools, and I've decided to just re-implement all the OpenAI tools. One of the things the Assistants API would give us is access to their tools (web browsing, DALL·E, code interpreter). I think it may not be that difficult to re-implement the RAG implementation they have, and there's a decent chance we could make it better than theirs. I've been talking with the creator of

Also, while I'm implementing tools, I'm having to reconcile a bunch of subtle differences between the OpenAI API and the Anthropic API. It's not too bad, but as soon as we move from the Chat API to the Assistants API, it creates a lot more differences between the two backends. Meanwhile, a bunch of people are requesting Groq and other backends, so it would be nice to minimize the differences between all of those.

All of this is making me think it's premature to move this project over to the Assistants API. This is all pretty new thinking on my part. This space is moving quickly :)
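As one concrete example of those subtle differences (a sketch; the model names are placeholders): OpenAI's Chat API takes the system prompt as the first entry in the `messages` array, while Anthropic's Messages API takes it as a top-level `system` field and requires `max_tokens` on every request.

```ruby
# Building the same request for the two backends.
def openai_chat_payload(system_prompt, user_text)
  {
    model: "gpt-4o", # placeholder
    messages: [
      { role: "system", content: system_prompt }, # system prompt is a message
      { role: "user",   content: user_text }
    ]
  }
end

def anthropic_messages_payload(system_prompt, user_text)
  {
    model: "claude-3-opus-20240229", # placeholder
    system: system_prompt, # top-level field, not a message role
    max_tokens: 1024,      # required by Anthropic, optional for OpenAI
    messages: [{ role: "user", content: user_text }]
  }
end
```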
-
@Rmpanga, you were asking: