Download progress bar always stops, can't download models #375
Hi, does it stop at the end? You might want to give it a minute or two.
That's strange. Could you give me the output of running the app from a terminal?
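For reference, the Flatpak can be launched from a terminal so its log output is visible; the application ID below is taken from the data paths that appear in the log later in this thread:

```bash
# Run the Alpaca Flatpak from a terminal to capture its log output
flatpak run com.jeffser.Alpaca
```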
I just cleared Alpaca again and tried to download a model. I also have the logs to share; I hope there is nothing too sensitive in them (I just put three X's on the SSH part). My download speed is really bad, but I have a stable connection via LAN, and I can download smaller models just fine. I hope that's enough info. This time it stopped at 51.89% and I cannot continue, and when I close Alpaca I have to download everything again. But the user data is at 7.2 GB, so it downloaded something, it just can't finish, I guess.

INFO [main.py | main] Alpaca version: 2.7.0
ssh-xxx
2024/11/20 01:15:01 routes.go:1153: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11435 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/bp/.var/app/com.jeffser.Alpaca/data/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
It seems like the Ollama instance is not downloading the model correctly. I updated the Ollama instance in Alpaca 2.8.0; could you check if it works for you? Thanks.
I seem to be having this issue as well. It doesn't get past the "pulling manifest" stage.
That's strange, because it looks like an Ollama problem, but if that were the case more people would report it. It might have something to do with your connection, but I doubt it.
@Jeffser Other downloads in various other applications work, so I believe my connection is fine. It has been a while since I launched Alpaca, but it worked fine a few months ago; the getting-stuck behaviour seems to be a recent development after not having launched the application in a while. I tried nuking the Flatpak and all associated files, but that did not resolve the issue either.
@Jeffser I ran it from the terminal and got some interesting output.
Well, now that does look like an error I made.
No, no, it actually just says that https://registry.ollama.ai/v2/library/llama3.2-vision/manifests/11b doesn't work, which it actually doesn't. That's strange.
Now it does, omg.
Weird. Clicking the link that you provided returned some JSON output the first time I tried.
It returned a 404 one time for me.
It seems to be working again. So in the end maybe it really was some kind of shenanigans with connecting to the download links.
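A quick way to reproduce that check from a terminal is to print only the HTTP status code the registry returns for the manifest URL mentioned above (standard curl options; 200 means reachable, 404 matches the intermittent error seen here):

```bash
# Print just the HTTP status code for the manifest URL
curl -s -o /dev/null -w '%{http_code}\n' \
  https://registry.ollama.ai/v2/library/llama3.2-vision/manifests/11b
```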
Sorry... I got a little bit tired. I had multiple issues with my system and went back to Windows, only to realize that AMD on Windows is much worse (LLM and image generation related). I love Alpaca and I will test it again soon! But I'm currently facing an issue where my 6950 XT is not used; I even installed the extension, but it still won't use my GPU.
Yeah, ROCm is not working currently; Alpaca 3.0.0 should fix that.
Hey, I just tried it again and my GPU is used! ROCm works, but the downloading issue is still there... this time it stopped at 74.55%. Here is the log:
Is it possible to download the models somewhere else myself and just add them to Alpaca?
Hi, I'm sorry you are still having issues with Ollama. You can try using this command while Alpaca is running to start a model download:

curl http://localhost:11435/api/pull -d '{
  "model": "llama3.2"
}'

If it still gives you trouble, then we at least know it's an Ollama issue. If that's the case, you could download a model in the GGUF format and import it into Alpaca instead.
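A quick way to verify that a pull actually completed is to list the models the local instance knows about; /api/tags is Ollama's standard listing endpoint, assuming the instance is still listening on port 11435 as in the log above:

```bash
# List the models the local Ollama instance has fully downloaded
curl http://localhost:11435/api/tags
```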
Hi, I just wanted to report back. The curl method worked! Is there something else I can do? Do you think it has something to do with my low download speed? Because Alpaca said "Looking for a faster connection" in the log before. And by the way, I can just modify your command and replace it with other models, right? For example, if I want llama3.2-vision:11b:

curl http://localhost:11435/api/pull -d '{
  "model": "llama3.2-vision:11b"
}'
I'm not sure why Ollama works when using curl and not with Alpaca, because Alpaca basically does the same thing internally when interacting with the instance. It's probably due to your internet speed, but I'm not sure why it gives up with Alpaca and not with curl. And yes, you can download any model by changing the name in the command!
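This is not Alpaca's internal code, just a sketch of the same pull request made by hand so the streamed progress is visible; it assumes jq is installed and that the instance is still listening on port 11435:

```bash
# Stream the pull progress as JSON status lines and print completed/total bytes,
# which makes a stalled download easy to spot.
curl -sN http://localhost:11435/api/pull -d '{"model": "llama3.2-vision:11b"}' \
  | jq -r 'if .total != null then "\(.status): \(.completed // 0)/\(.total) bytes" else .status end'
```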
Hey, I just tried the Alpaca Flatpak; it works perfectly fine with small models.
But whenever I try to download models bigger than 6 GB, the progress bar always stops.
This happens with Llama 3.1 models, Gemma 2 models, and also LLaVA models for image recognition.
I just tried it again, to be completely sure, and yes, I can download a 700 MB model without any issues.