Offline environment, how to run with local Ollama models? #830
Comments
[1] The error above is trying to access https://openrouter.ai/api/v1/models
[4] Now I can open the localhost:5173 web page after npm run dev (Docker will be considered once the local run works).
[7] I've tried adding "http://localhost:11434" to Settings → Providers → Ollama → Base URL, or to the .env.local file, but it didn't work.
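For reference, a local-only Ollama setup is usually configured through the base URL alone; a minimal `.env.local` sketch is below. The variable name `OLLAMA_API_BASE_URL` is taken from the project's `.env.example` (worth verifying against your checkout), and `host.docker.internal` only applies when bolt runs inside Docker while Ollama runs on the host machine.

```env
# Base URL of the local Ollama server (default port 11434).
# If bolt runs inside Docker and Ollama runs on the host, use
# http://host.docker.internal:11434 instead of localhost.
OLLAMA_API_BASE_URL=http://127.0.0.1:11434
```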
[1] I changed the constants.ts PROVIDER_LIST as follows:
It's fine, these errors won't cause bolt to stop working; they are just alerts to let you know that it's unable to load the dynamic models. The only concern about going fully offline is that you won't be able to load the WebContainer, because that requires us to pull some WebAssembly from the StackBlitz servers, which is not available offline. Once it's loaded you can actually work offline, but the initial load requires access to the StackBlitz assets.
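To illustrate why the model-list errors are only alerts, the dynamic fetch can be treated as best-effort with a static fallback. The sketch below uses hypothetical names (`loadModels`, `STATIC_OLLAMA_MODELS`) and is not bolt's actual code:

```ts
// Sketch: try the dynamic model list, fall back to a static one when offline.
interface ModelInfo {
  name: string;
  provider: string;
}

// Hypothetical static fallback list.
const STATIC_OLLAMA_MODELS: ModelInfo[] = [{ name: 'qwen2.5-coder:7b', provider: 'Ollama' }];

async function loadModels(): Promise<ModelInfo[]> {
  try {
    const res = await fetch('https://openrouter.ai/api/v1/models');
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    const { data } = (await res.json()) as { data: Array<{ id: string }> };
    return data.map((m) => ({ name: m.id, provider: 'OpenRouter' }));
  } catch (err) {
    // Offline or blocked network: surface an alert but keep working.
    console.warn('Unable to load dynamic models, using static list:', err);
    return STATIC_OLLAMA_MODELS;
  }
}
```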
Yes, the WebContainer won't start without loading the WebAssembly from StackBlitz.
So I managed to make this work now, but it's not using my 4090; it uses the CPU. When I use Open WebUI it uses 100% GPU. I tried to modify .env but it doesn't help. Does anyone have a clue? I'm using Windows as the operating system, installed Ollama on my desktop, and use Docker for bolt, accessing bolt remotely from the desktop to the laptop. I tried installing bolt both on the desktop and the laptop and have the same issue: for some reason it won't use the GPU, and when I install it on the laptop it uses the CPU there even when it isn't needed. Please help, am I missing something here? For your information, I'm a total noob at this, so please have patience.
I need to understand the full setup you are using.
@TRYOKETHEPEN Bro, I use bolt.new in a similar environment as you. After interacting with the LLM on the left, can the file system on the right generate files?
Yes, but that was not the problem. The problem was that it was not utilizing the GPU 100%: when I ran the command `ollama ps` it gave me `NAME  ID  SIZE  PROCESSOR  UNTIL`. So I had to modify the .env file and change vite.config.ts like so:

```ts
import { defineConfig } from 'vite';
import { cloudflareDevProxyVitePlugin as remixCloudflareDevProxy, vitePlugin as remixVitePlugin } from '@remix-run/dev';

// Load .env variables
console.log('Loaded environment variables:');

export default defineConfig((config) => {
  function chrome129IssuePlugin() {
    // ...
  }
  // ...
});
```

And now when I run `ollama ps` from `C:\Users\12334>` it's fast :)))
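The snippet above is truncated; a minimal sketch of the kind of change it seems to describe, assuming the dotenv package is used to load the variables (the commenter's exact approach is unknown), would be:

```ts
// vite.config.ts -- sketch only, assuming dotenv; not the commenter's exact file.
import { defineConfig } from 'vite';
import {
  cloudflareDevProxyVitePlugin as remixCloudflareDevProxy,
  vitePlugin as remixVitePlugin,
} from '@remix-run/dev';
import dotenv from 'dotenv';

// Load .env and .env.local into process.env before the config is evaluated.
dotenv.config();
dotenv.config({ path: '.env.local' });
console.log(
  'Loaded environment variables:',
  Object.keys(process.env).filter((key) => key.startsWith('OLLAMA')),
);

export default defineConfig((config) => ({
  // ...keep the rest of the project's existing Vite config here
  plugins: [remixCloudflareDevProxy(), remixVitePlugin()],
}));
```

Loading the file this way makes values such as the Ollama base URL visible to the dev server process before Vite builds its config.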
Hello, it keeps getting stuck at the file-generation step.
Later I found out it was a proxy problem: the proxy caused my pull of the webcontainer-related dependencies to fail, so that file system couldn't start. After sorting out the proxy, the file system works fine on my side, but interaction with the terminal still doesn't work properly.
Hello, I am experiencing a similar issue. When utilizing Bolt.diy in conjunction with Ollama, it operates on the GPU, CPU, and RAM simultaneously, rather than solely on the GPU as desired. Do you have any insights on how to configure Bolt.diy to exclusively utilize GPU resources? For your information, the OpenWeb UI interface appears to successfully employ Ollama in GPU-only mode.
Like I said earlier, it's not an Ollama issue, it's the WebContainer. You cannot start the WebContainer without loading it from the StackBlitz servers, and bolt waits for the WebContainer to boot up before processing any LLM responses. I am closing this issue as there is no available solution for this right now and it is considered a limitation.
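To make the boot dependency concrete, here is a minimal sketch of the gating pattern described, using the public @webcontainer/api package; it illustrates the constraint and is not bolt's actual code:

```ts
// Sketch: the UI blocks on WebContainer.boot(), which downloads WebAssembly
// from StackBlitz's servers. Offline, this promise never resolves, so no
// LLM response is ever written to the virtual file system.
import { WebContainer } from '@webcontainer/api';

let webcontainer: Promise<WebContainer> | undefined;

function getWebContainer(): Promise<WebContainer> {
  // boot() may only be called once per page and needs network access
  // to fetch the runtime from StackBlitz.
  webcontainer ??= WebContainer.boot({ workdirName: 'project' });
  return webcontainer;
}

async function applyLlmResponse(files: Record<string, string>): Promise<void> {
  const container = await getWebContainer(); // blocks until the runtime is loaded
  for (const [path, contents] of Object.entries(files)) {
    await container.fs.writeFile(path, contents);
  }
}
```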
Describe the bug
Running with Docker raises the error: read ECONNRESET.
The error relates to a network problem while accessing the OpenRouter models, but I didn't set its API key.
I only want to use local Ollama; what should I do?
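A quick way to check whether the Ollama server is reachable from wherever bolt runs (on the host, or inside the Docker container via host.docker.internal) is to query Ollama's /api/tags endpoint, which lists the locally installed models. The environment variable name and default URL below are assumptions; adjust them to your setup:

```ts
// connectivity-check.ts -- sketch: verify bolt's environment can reach Ollama.
const OLLAMA_BASE_URL = process.env.OLLAMA_API_BASE_URL ?? 'http://127.0.0.1:11434';

async function checkOllama(): Promise<void> {
  const res = await fetch(`${OLLAMA_BASE_URL}/api/tags`);
  if (!res.ok) {
    throw new Error(`Ollama responded with HTTP ${res.status}`);
  }
  const { models } = (await res.json()) as { models: Array<{ name: string }> };
  console.log('Reachable. Installed models:', models.map((m) => m.name).join(', '));
}

checkOllama().catch((err) => {
  // ECONNREFUSED/ECONNRESET here means the URL or Docker networking is wrong,
  // independent of the OpenRouter alert described above.
  console.error('Cannot reach Ollama:', err);
});
```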
Link to the Bolt URL that caused the error
\
Steps to reproduce
Expected behavior
\
Screen Recording / Screenshot
as above
Platform
as above
Provider Used
Ollama
Model Used
\
Additional context
No response