I have configured various OpenAI, Groq, and local Ollama models, and especially for the latter I regularly remove some and add others. Sometimes I forget the exact model name or the alias I configured, especially when I have multiple sizes of the same model and forget how many billion parameters each one has.
Currently I have to run `mods --settings` to open the YAML file and scroll down to see the list.
I'd like to propose adding interactive model selection, both for the immediate chat and for the interactive chat. Combined with fuzzy search, you could then just type "ll38" and it would jump to the llama-3 models with 8b parameters. If you have one configured for Groq and one for Ollama, it's then just one `j`/`k` or up/down press away from being selected.
Regarding the fuzzy search, here's an example of `ollama list | fzf` and then typing `ll38`, while the full `ollama list` shows 23 models.
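To make the matching rule concrete, here is a minimal Go sketch of the case-insensitive subsequence matching that fzf applies in its default mode; the `fuzzyMatch` helper and the model list are purely illustrative, not anything from mods or fzf:

```go
package main

import (
	"fmt"
	"strings"
	"unicode/utf8"
)

// fuzzyMatch reports whether every rune of pattern occurs in s in
// order (case-insensitively), i.e. the subsequence rule fzf uses
// for its default matching.
func fuzzyMatch(pattern, s string) bool {
	s = strings.ToLower(s)
	for _, r := range strings.ToLower(pattern) {
		i := strings.IndexRune(s, r)
		if i < 0 {
			return false
		}
		s = s[i+utf8.RuneLen(r):] // keep scanning past the match
	}
	return true
}

func main() {
	// Hypothetical configured models; only the 8b llamas match "ll38".
	models := []string{"gpt-4o", "llama3:8b", "llama3:70b", "llama-3-8b-8192", "mixtral-8x7b"}
	for _, m := range models {
		if fuzzyMatch("ll38", m) {
			fmt.Println(m) // -> llama3:8b, llama-3-8b-8192
		}
	}
}
```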
Regarding the proposed trigger for the interactive model selection (as opposed to using a default model):
For the immediate chat (i.e. `mods "why is the sky blue?"`), a bare `-m` without a parameter would probably not work, as it would be tricky to tell whether the following argument is the flag's value or the prompt (see the sketch below). So it could be a new flag, like `mods --interactive-model "why is the sky blue?"`. Alternatively, and I think I would prefer this, there could be a setting that disables the default model; then, whenever the `-m` flag is not passed, you are always prompted for the model interactively.
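To illustrate why a bare `-m` is ambiguous, here is a tiny sketch using Go's standard `flag` package (mods' real argument parsing may differ); a string flag always consumes the next argument as its value:

```go
package main

import (
	"flag"
	"fmt"
)

func main() {
	// -m takes a value, so in `prog -m "why is the sky blue?"` the
	// quoted prompt is parsed as the model name, not as the prompt.
	model := flag.String("m", "", "model name")
	flag.Parse()
	fmt.Printf("model=%q, remaining args=%q\n", *model, flag.Args())
}
```

Running this as `prog -m "why is the sky blue?"` reports the whole prompt as the model name and leaves no positional arguments, which is exactly the ambiguity described above.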
For the interactive chat, the setting mentioned above could also trigger the model selection first, before dropping you into the interactive prompt input. Or otherwise (or additionally) it could be a keyboard shortcut: just like `ctrl+j` adds a newline today, `ctrl+m` could switch the model.
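As a rough sketch of how such a shortcut could be wired up with Bubble Tea (which mods is built on), here is a self-contained toy program; `chatModel` and the picker are hypothetical, and since `ctrl+m` sends the same byte as enter in most terminals, `ctrl+p` stands in for whatever binding would actually be chosen:

```go
package main

import (
	"fmt"
	"os"

	tea "github.com/charmbracelet/bubbletea"
)

// chatModel is a hypothetical stand-in for mods' interactive chat
// state; the real mods internals look different.
type chatModel struct {
	picking bool   // whether the model picker is currently open
	model   string // the currently selected LLM
}

func (m chatModel) Init() tea.Cmd { return nil }

func (m chatModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
	if key, ok := msg.(tea.KeyMsg); ok {
		switch key.String() {
		case "ctrl+p":
			m.picking = !m.picking // toggle the (hypothetical) model picker
		case "ctrl+c", "esc":
			return m, tea.Quit
		}
	}
	return m, nil
}

func (m chatModel) View() string {
	if m.picking {
		return "fuzzy model picker would render here\n"
	}
	return fmt.Sprintf("chatting with %s (ctrl+p to switch)\n", m.model)
}

func main() {
	if _, err := tea.NewProgram(chatModel{model: "llama3:8b"}).Run(); err != nil {
		fmt.Println("error:", err)
		os.Exit(1)
	}
}
```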
I just saw that the `-M` flag exists, which I might have missed before. But it still doesn't allow the immediate chat proposed above: `mods -M "why is the sky blue?"` seems to ignore the flag and use the default model.
And for the interactive chat, while `mods -M` is already useful for the two interactive steps (first the model selection, then the prompt input), being able to select or even switch models with a shortcut inside the interactive prompt would still be useful.