Support Ollama AI Provider #215
Comments
@miurla Ollama now supports tool calls - https://ollama.com/blog/tool-support
Ollama AI Provider needs to support tool calls: sgomez/ollama-ai-provider#11
Looks like we may get that soon... fingers crossed 🤞
It seems that v0.11 has been released with tool support. I haven't tested it yet, but I'll try it soon. https://github.com/sgomez/ollama-ai-provider/releases/tag/ollama-ai-provider%400.11.0
Ollama's tools do not support streaming yet.
I tested it with the llama3.1 model, but the Researcher did not work. We need to either drop streaming or manually detect tool calls and respond. Test code: #294
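For reference, a minimal sketch of the non-streaming approach, assuming the Vercel AI SDK's generateText together with the ollama-ai-provider package; the search tool here is only a stand-in for Morphic's actual Researcher tooling:

```ts
import { generateText, tool } from 'ai';
import { ollama } from 'ollama-ai-provider';
import { z } from 'zod';

// Non-streaming call: the full answer (including any tool calls) is resolved
// before anything is returned, so the user waits until generation finishes.
const result = await generateText({
  model: ollama('llama3.1'),
  tools: {
    // Stand-in tool for illustration; Morphic's Researcher defines its own search tool.
    search: tool({
      description: 'Search the web for information on a query',
      parameters: z.object({ query: z.string() }),
      execute: async ({ query }) => ({ results: `Results for: ${query}` }),
    }),
  },
  prompt: 'Find the latest news about Ollama tool support.',
});

console.log(result.toolCalls);
console.log(result.text);
```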
Update
Object generation and tool usage with Ollama are now working stably, so we have implemented support for them. Ollama currently does not support tool streaming, so there is a waiting time until the answer is generated.
Model
Currently, the only supported model is qwen2.5.
PR
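A hedged sketch of what object generation with qwen2.5 might look like through the provider; the schema below is illustrative and not Morphic's actual one:

```ts
import { generateObject } from 'ai';
import { ollama } from 'ollama-ai-provider';
import { z } from 'zod';

// Illustrative schema; Morphic's real related-questions schema may differ.
const { object } = await generateObject({
  model: ollama('qwen2.5'),
  schema: z.object({
    relatedQueries: z.array(z.string()).length(3),
  }),
  prompt: 'Suggest three follow-up questions about Ollama tool support.',
});

console.log(object.relatedQueries);
```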
Update (9/30): #215 (comment)
Overview
Currently, Ollama support is an unstable and experimental feature. It is implemented using the Ollama AI Provider, which explicitly states that Object generation and Tool usage are unstable and that Tool streaming is not supported. Morphic depends on these capabilities, so it is very unstable with Ollama. Please use it with an understanding of these limitations.
Environment Variables
- mistral or openhermes
- phi3 or llama3
- http://localhost:11434
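A minimal sketch of how the provider could be wired up from environment variables; the variable names below are assumptions for illustration and may not match Morphic's actual configuration keys:

```ts
import { createOllama } from 'ollama-ai-provider';

// Assumed environment variable names, for illustration only.
const ollama = createOllama({
  baseURL: `${process.env.OLLAMA_BASE_URL ?? 'http://localhost:11434'}/api`,
});

// Main model for answering, e.g. mistral or openhermes.
const model = ollama(process.env.OLLAMA_MODEL ?? 'mistral');

// Smaller model for auxiliary tasks, e.g. phi3 or llama3.
const subModel = ollama(process.env.OLLAMA_SUB_MODEL ?? 'phi3');
```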
PR