features: ollama support #135

Closed
SlayVict opened this issue Aug 21, 2024 · 8 comments

@SlayVict

It would be great if it were possible to use local LLMs with Ollama, and not only Claude/OpenAI models.

@makyinmars
Contributor

You can. See the wiki: https://github.com/yetone/avante.nvim/wiki and scroll to the bottom of the page.

@aarnphm aarnphm closed this as completed Aug 21, 2024
@aarnphm aarnphm reopened this Aug 21, 2024
@theesfeld

theesfeld commented Aug 22, 2024

openai = {
  endpoint = "http://127.0.0.1:3000",
  model = "code-gemma",
  temperature = 0,
  max_tokens = 4096,
  ["local"] = true,
},

Ollama uses the same calls as the OpenAI API, so this part of the installation instructions should work with Ollama, LM Studio, or any other local LLM engine that follows the OpenAI API.
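
For context, here is a sketch of where that openai table sits in a full lazy.nvim plugin spec (the plugin-spec wrapper is an assumption, not part of the original comment; the endpoint and model are copied from the fragment above):

    {
      "yetone/avante.nvim",
      opts = {
        provider = "openai",
        openai = {
          endpoint = "http://127.0.0.1:3000", -- base URL of the local OpenAI-compatible server
          model = "code-gemma",               -- whichever model the local engine serves
          temperature = 0,
          max_tokens = 4096,
          ["local"] = true,
        },
      },
    },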

@aarnphm aarnphm closed this as completed Aug 22, 2024
@jackplus-xyz

jackplus-xyz commented Aug 26, 2024

Hi, after I made the following changes:

    opts = {
      provider = "openai",
      openai = {
        endpoint = "http://127.0.0.1:11434",
        model = "codegemma",
        temperature = 0,
        max_tokens = 4096,
        ["local"] = true,
      },
    },

I still get prompted to enter the API key when I start Neovim, and I get these errors:

  • Failed to setup openai. Avante won't work as expected
  • API request failed with status 404

I've tried:

  1. Changing the endpoint to http://127.0.0.1:11434/api/chat
  2. Using a different model
  3. Ensuring Ollama is running with:
    curl http://localhost:11434/api/chat -d '{                                             
        "model": "codegemma",
        "messages": [
          { "role": "user", "content": "why is the sky blue?" }
        ]
    }'

But none of the above works. Am I supposed to create a custom vendor/provider instead for Ollama?

@aarnphm
Collaborator

aarnphm commented Aug 26, 2024

The endpoint has to end with /v1 (avante will append /chat/completions to it).
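
Applied to the config from the earlier comment, that advice would look like this (a sketch; the port and model name follow the earlier comments, and only the /v1 suffix changes):

    opts = {
      provider = "openai",
      openai = {
        endpoint = "http://127.0.0.1:11434/v1", -- avante appends /chat/completions, giving /v1/chat/completions
        model = "codegemma",
        temperature = 0,
        max_tokens = 4096,
        ["local"] = true,
      },
    },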

@0xwal

0xwal commented Aug 27, 2024

provider = "openai",

openai = {
	endpoint = "http://localhost:11434",
	-- endpoint = "http://localhost:11434/v1", -- doesn't work as well
	model = "codegemma:2b",
	temperature = 0,
	max_tokens = 4096,
	["local"] = true,
},
-- ... rest

Still not working for me; it asks for an API key.

EDIT:

It works; I just put in a dummy API token. No idea how to disable the token prompt completely, though.
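
One way to avoid the interactive prompt (a sketch, assuming avante reads the key from the OPENAI_API_KEY environment variable; the placeholder value is never used for anything when talking to a local endpoint) is to set a dummy value before the plugin is set up:

    -- early in init.lua, before avante.nvim loads (the OPENAI_API_KEY name is an assumption)
    vim.env.OPENAI_API_KEY = "dummy"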

@aarnphm
Collaborator

aarnphm commented Aug 27, 2024

can you try with #286?

@0xwal

0xwal commented Aug 27, 2024

can you try with #286?

Now it gives an error when I send a message in chat, instead of prompting when opening nvim:
(screenshot of the error attached)

aarnphm added a commit that referenced this issue Aug 27, 2024
Signed-off-by: Aaron Pham <contact@aarnphm.xyz>
aarnphm added a commit that referenced this issue Aug 27, 2024
Signed-off-by: Aaron Pham <contact@aarnphm.xyz>
@aarnphm
Collaborator

aarnphm commented Aug 27, 2024

aight I will add a case for local llm to test this thoroughly, but #295 should address this
