features: ollama support #135
Comments
You can view the wiki: https://github.com/yetone/avante.nvim/wiki and scroll to the bottom of the page.
Ollama uses the same calls as the OpenAI API, so this part of the installation instructions should work with Ollama, LM Studio, ... any local LLM engine that follows the OpenAI API.
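(A quick way to check that claim, as a sketch: Ollama exposes an OpenAI-compatible Chat Completions route at `/v1/chat/completions`. The snippet below assumes a local Ollama instance on the default port `11434` with a `codegemma` model already pulled, and uses plenary.nvim's curl wrapper, plenary being among avante.nvim's dependencies.)

```lua
-- Hedged sanity check: hit Ollama's OpenAI-compatible endpoint from Neovim.
-- Assumptions: Ollama is running on 127.0.0.1:11434 and `ollama pull codegemma` has been run.
local curl = require("plenary.curl")

local res = curl.post("http://127.0.0.1:11434/v1/chat/completions", {
  headers = { ["Content-Type"] = "application/json" },
  body = vim.json.encode({
    model = "codegemma",
    messages = { { role = "user", content = "Say hello" } },
  }),
})

-- An OpenAI-compatible reply carries the usual choices[1].message.content shape.
local ok, decoded = pcall(vim.json.decode, res.body)
if ok and decoded.choices then
  print(decoded.choices[1].message.content)
else
  print("unexpected response: " .. tostring(res.body))
end
```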
Hi, after I made the following changes:

```lua
opts = {
  provider = "openai",
  openai = {
    endpoint = "http://127.0.0.1:11434",
    model = "codegemma",
    temperature = 0,
    max_tokens = 4096,
    ["local"] = true,
  },
},
```

I still get prompted to enter the API key when I start NeoVim, and I get errors:
I've tried:
But none of the above works. Am I supposed to create a custom vendor/provider instead for Ollama?
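(On the custom vendor question: below is a minimal sketch of what a dedicated Ollama entry could look like. It assumes a plugin version that supports user-defined vendors via a `vendors` table with `__inherited_from` and `api_key_name`, as in later avante.nvim documentation; treat those field names as assumptions for the version discussed in this thread.)

```lua
-- Sketch only: a custom "ollama" vendor that reuses the openai provider logic.
-- Assumptions: the `vendors` table, `__inherited_from`, and `api_key_name`
-- fields exist in your avante.nvim version (they come from later docs).
opts = {
  provider = "ollama",
  vendors = {
    ollama = {
      __inherited_from = "openai",
      api_key_name = "",                       -- empty: no API key prompt
      endpoint = "http://127.0.0.1:11434/v1",  -- Ollama's OpenAI-compatible route
      model = "codegemma",
    },
  },
},
```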
The endpoint has to be
provider = "openai",
openai = {
endpoint = "http://localhost:11434",
-- endpoint = "http://localhost:11434/v1", -- doesn't work as well
model = "codegemma:2b",
temperature = 0,
max_tokens = 4096,
["local"] = true,
},
-- ... rest Still not working for me, it asks for api key EDIT: It works, I just put dummy api token and works.. no idea how to disable the prompt for token completely |
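(A hedged note on the dummy-token workaround: assuming the openai provider reads its key from the `OPENAI_API_KEY` environment variable, you can pre-set a throwaway value so the prompt never appears.)

```lua
-- Sketch: set a throwaway key before avante.nvim loads, e.g. early in init.lua.
-- Assumption: the openai provider looks up OPENAI_API_KEY; a local Ollama
-- server never validates the value.
vim.env.OPENAI_API_KEY = "dummy-ollama-key"
-- Shell equivalent: export OPENAI_API_KEY=dummy-ollama-key
```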
Can you try with #286?
Now it gives an error when I send a message in chat, instead of prompting when opening nvim.
Aight, I will add a case for local LLMs to test this thoroughly, but #295 should address this.
It would be great if it were possible to use local LLMs with Ollama, and not only Claude/OpenAI models.