Issues: ollama/ollama
#7272 multi-part model+safetensors (feature request), opened Oct 19, 2024 by werruww
#7268 fail to run ollama run hf-mirror.com/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2-GGUF:Q8 (bug), opened Oct 19, 2024 by taozhiyuai
#7267 Running out of memory when allocating to second GPU (bug), opened Oct 19, 2024 by joshuakoh1
#7266 Windows ARM64 fails when loading model, error code 0xc000001d (bug), opened Oct 19, 2024 by mikechambers84
#7260 Migrate off centos 7 for intermediate build layers in container image builds (feature request), opened Oct 18, 2024 by cazlo (2 comments)
#7257 Return Triggered Stop Sequence (feature request), opened Oct 18, 2024 by someone13574
#7256 Last character being truncated by stop sequence (bug), opened Oct 18, 2024 by someone13574
#7254 Support directly running GGUF files without importing (feature request), opened Oct 18, 2024 by ahizap
#7253 The issue regarding concurrent processing with multiple GPU cards (bug), opened Oct 18, 2024 by SDAIer
#7252 add h2ovl-mississippi-800m and h2ovl-mississippi-2b (model request), opened Oct 18, 2024 by a-ghorbani
#7251 debug modelfile on create (feature request), opened Oct 18, 2024 by belfie13
#7248 It is hoped that it can be compatible with Intel ultra1 and 2 generation chip core display (feature request), opened Oct 18, 2024 by brownplayer
#7244 Pulling models from private OCI Registries (feature request), opened Oct 17, 2024 by mitja
#7243 add module/plug-in system to ollama (feature request), opened Oct 17, 2024 by malv-c
#7240 Pull Private Huggingface Model (feature request), opened Oct 17, 2024 by DaddyCodesAlot
#7239 Add Tab-Enabled Autocomplete for Local Model Parameters in Ollama CLI (feature request), opened Oct 17, 2024 by lucianoayres
#7238 Ollama document intelligence engine (feature request), opened Oct 17, 2024 by dcasota
#7237 Suggest adding shibing624/text2vec model (feature request), opened Oct 17, 2024 by smileyboy2019
#7235 OpenAI AI Compatiable (feature request), opened Oct 17, 2024 by tobegit3hub
#7233 Support for Whisper-family models (model request), opened Oct 17, 2024 by gileneusz
#7232 Basic AI test result inconsistent compared to llama.cpp (bug), opened Oct 17, 2024 by brauliobo