Merge pull request stackblitz#1 from dustinwloring1988/main
dustinwloring1988 authored Dec 1, 2024
2 parents f20173f + 4250fa9 commit dd25bef
Showing 16 changed files with 402 additions and 311 deletions.
6 changes: 6 additions & 0 deletions .env.example
@@ -38,12 +38,18 @@ OLLAMA_API_BASE_URL=
# You only need this environment variable set if you want to use OpenAI Like models
OPENAI_LIKE_API_BASE_URL=

# You only need this environment variable set if you want to use Together AI models
TOGETHER_API_BASE_URL=

# You only need this environment variable set if you want to use DeepSeek models through their API
DEEPSEEK_API_KEY=

# Get your OpenAI Like API Key
OPENAI_LIKE_API_KEY=

# Get your Together API Key
TOGETHER_API_KEY=

# Get your Mistral API Key by following these instructions -
# https://console.mistral.ai/api-keys/
# You only need this environment variable set if you want to use Mistral models
8 changes: 8 additions & 0 deletions Dockerfile
@@ -25,6 +25,8 @@ ARG ANTHROPIC_API_KEY
ARG OPEN_ROUTER_API_KEY
ARG GOOGLE_GENERATIVE_AI_API_KEY
ARG OLLAMA_API_BASE_URL
ARG TOGETHER_API_KEY
ARG TOGETHER_API_BASE_URL
ARG VITE_LOG_LEVEL=debug
ARG DEFAULT_NUM_CTX

@@ -36,6 +38,8 @@ ENV WRANGLER_SEND_METRICS=false \
OPEN_ROUTER_API_KEY=${OPEN_ROUTER_API_KEY} \
GOOGLE_GENERATIVE_AI_API_KEY=${GOOGLE_GENERATIVE_AI_API_KEY} \
OLLAMA_API_BASE_URL=${OLLAMA_API_BASE_URL} \
TOGETHER_API_KEY=${TOGETHER_API_KEY} \
TOGETHER_API_BASE_URL=${TOGETHER_API_BASE_URL} \
VITE_LOG_LEVEL=${VITE_LOG_LEVEL} \
DEFAULT_NUM_CTX=${DEFAULT_NUM_CTX}

@@ -58,6 +62,8 @@ ARG ANTHROPIC_API_KEY
ARG OPEN_ROUTER_API_KEY
ARG GOOGLE_GENERATIVE_AI_API_KEY
ARG OLLAMA_API_BASE_URL
ARG TOGETHER_API_KEY
ARG TOGETHER_API_BASE_URL
ARG VITE_LOG_LEVEL=debug
ARG DEFAULT_NUM_CTX

@@ -68,6 +74,8 @@ ENV GROQ_API_KEY=${GROQ_API_KEY} \
OPEN_ROUTER_API_KEY=${OPEN_ROUTER_API_KEY} \
GOOGLE_GENERATIVE_AI_API_KEY=${GOOGLE_GENERATIVE_AI_API_KEY} \
OLLAMA_API_BASE_URL=${OLLAMA_API_BASE_URL} \
TOGETHER_API_KEY=${TOGETHER_API_KEY} \
TOGETHER_API_BASE_URL=${TOGETHER_API_BASE_URL} \
VITE_LOG_LEVEL=${VITE_LOG_LEVEL} \
DEFAULT_NUM_CTX=${DEFAULT_NUM_CTX}

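For context, here is a minimal sketch of how server code could consume the two new Together variables. It is an illustration under stated assumptions, not the repo's actual wiring: the fallback base URL and the `listTogetherModels` helper are hypothetical, and the call relies on Together's OpenAI-compatible REST API.

```ts
// Minimal sketch (assumptions, not this repo's actual code): read the new
// Together variables and call the OpenAI-compatible API with plain fetch.
const baseURL = process.env.TOGETHER_API_BASE_URL ?? 'https://api.together.xyz/v1';
const apiKey = process.env.TOGETHER_API_KEY;

export async function listTogetherModels(): Promise<unknown> {
  // Fail fast with a clear message if the key was never set.
  if (!apiKey) {
    throw new Error('TOGETHER_API_KEY is not set; add it to your .env file first.');
  }

  const res = await fetch(`${baseURL}/models`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });

  if (!res.ok) {
    throw new Error(`Together API error: ${res.status} ${res.statusText}`);
  }

  return res.json();
}
```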
54 changes: 54 additions & 0 deletions FAQ.md
@@ -0,0 +1,54 @@
[![Bolt.new: AI-Powered Full-Stack Web Development in the Browser](./public/social_preview_index.jpg)](https://bolt.new)

# Bolt.new Fork by Cole Medin - oTToDev

## FAQ

### How do I get the best results with oTToDev?

- **Be specific about your stack**: If you want to use specific frameworks or libraries (like Astro, Tailwind, ShadCN, or any other popular JavaScript framework), mention them in your initial prompt to ensure Bolt scaffolds the project accordingly.

- **Use the enhance prompt icon**: Before sending your prompt, try clicking the 'enhance' icon to have the AI model help you refine your prompt, then edit the results before submitting.

- **Scaffold the basics first, then add features**: Make sure the basic structure of your application is in place before diving into advanced features. This helps oTToDev understand the foundation of your project and ensures everything is wired up correctly before you build out more advanced functionality.

- **Batch simple instructions**: Combine simple instructions into one message. For example, you can ask oTToDev to change the color scheme, add mobile responsiveness, and restart the dev server, all in one go, saving you time and significantly reducing API credit consumption.

### Do you plan on merging oTToDev back into the official Bolt.new repo?

More news on this is coming early next month - stay tuned!

### Why are there so many open issues/pull requests?

oTToDev was started simply to showcase how to edit an open-source project and to do something cool with local LLMs on my (@ColeMedin) YouTube channel! However, it quickly grew into a massive community project, and I am working hard to keep up with demand by forming a team of maintainers and getting as many people involved as I can. That effort is going well, and all of our maintainers are ABSOLUTE rockstars, but it still takes time to organize everything so we can efficiently get through all the issues and PRs. Rest assured, we are working hard, and we are even working on some partnerships behind the scenes to really help this project take off!

### How do local LLMs fare compared to larger models like Claude 3.5 Sonnet for oTToDev/Bolt.new?

Although the gap between open-source and massive closed-source models is closing quickly, you're still going to get the best results with very large models like GPT-4o, Claude 3.5 Sonnet, and DeepSeek Coder V2 236b. This is one of the big tasks at hand: figuring out how to prompt better, use agents, and improve the platform as a whole so it works well even with the smaller local LLMs!

### I'm getting the error: "There was an error processing this request"

If you see this error within oTToDev, the application is telling you there is a problem at a high level, which could mean a number of different things. To find the actual error, check BOTH the terminal where you started the application (with Docker or pnpm) and the developer console in the browser. In most browsers, you can open the developer console by pressing F12 or by right-clicking anywhere in the page and selecting "Inspect", then switching to the "Console" tab.

### I'm getting the error: "x-api-key header missing"

We have seen this error a couple of times, and for some reason just restarting the Docker container has fixed it. It seems to be Ollama-specific. Another thing to try is running oTToDev with Docker or pnpm, whichever you didn't run first. We are still on the hunt for why this happens once in a while!

### I'm getting a blank preview when oTToDev runs my app!

We promise we are constantly testing new PRs coming into oTToDev, and the preview is core functionality, so the application is not broken! A blank or missing preview generally means the LLM hallucinated bad code or incorrect commands. We are working on making this more transparent so it is obvious. Sometimes the error will appear in the developer console too, so check there as well.

### How do I add an LLM?

To make new LLMs available in this version of Bolt.new, head over to `app/utils/constants.ts` and find the constant MODEL_LIST. Each element in this array is an object with the model ID as its name (taken from the provider's API documentation), a label for the frontend model dropdown, and the provider.

By default, Anthropic, OpenAI, Groq, and Ollama are implemented as providers, but the YouTube video for this repo covers how to extend this to work with more providers if you wish!

When you add a new model to the MODEL_LIST array, it will immediately be available to use when you run the app locally or reload it. For Ollama models, make sure you have the model installed already before trying to use it here!
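As a concrete illustration, here is a hedged sketch of what a new entry might look like. The field names (`name`, `label`, `provider`) are assumptions based on the description above, and the model ID is a placeholder; check the actual constant in `app/utils/constants.ts` for the exact shape.

```ts
// Hedged sketch of adding an entry to MODEL_LIST in app/utils/constants.ts.
// Field names and the example model ID are assumptions; verify them against
// the actual constant in the file.
export const MODEL_LIST = [
  // ...existing entries...
  {
    name: 'Qwen/Qwen2.5-Coder-32B-Instruct', // model ID from the provider's API docs
    label: 'Qwen 2.5 Coder 32B (Together)', // text shown in the frontend model dropdown
    provider: 'Together', // must match an implemented provider name
  },
];
```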

### Everything works but the results are bad

This goes back to the point above: local LLMs are getting very powerful, but you will still see better (sometimes much better) results with the largest LLMs like GPT-4o, Claude 3.5 Sonnet, and DeepSeek Coder V2 236b. If you are using smaller LLMs like Qwen-2.5-Coder, consider it more experimental and educational at this point. It can build smaller applications really well, which is super impressive for a local LLM, but for larger-scale applications you still want to use the larger LLMs!
64 changes: 8 additions & 56 deletions README.md
@@ -8,7 +8,7 @@ This fork of Bolt.new (oTToDev) allows you to choose the LLM that you use for ea

https://thinktank.ottomator.ai

## Requested Additions to this Fork - Feel Free to Contribute!!
## Requested Additions - Feel Free to Contribute!

- ✅ OpenRouter Integration (@coleam00)
- ✅ Gemini Integration (@jonathands)
@@ -32,13 +32,13 @@ https://thinktank.ottomator.ai
- ✅ Cohere Integration (@hasanraiyan)
- ✅ Dynamic model max token length (@hasanraiyan)
- ✅ Prompt caching (@SujalXplores)
- ⬜ **HIGH PRIORITY** - Load local projects into the app (@wonderwhy-er)
- ✅ Load local projects into the app (@wonderwhy-er)
- ✅ Together Integration (@mouimet-infinisoft)
- ✅ Mobile friendly (@qwikode)
- ⬜ **HIGH PRIORITY** - ALMOST DONE - Attach images to prompts (@atrokhym)
- ⬜ **HIGH PRIORITY** - Prevent Bolt from rewriting files as often (file locking and diffs)
- ⬜ **HIGH PRIORITY** - Better prompting for smaller LLMs (code window sometimes doesn't start)
- ⬜ **HIGH PRIORITY** - Run agents in the backend as opposed to a single model call
- ⬜ Mobile friendly
- ⬜ Together Integration
- ⬜ Azure Open AI API Integration
- ⬜ Perplexity Integration
- ⬜ Vertex AI Integration
@@ -198,15 +198,6 @@ sudo npm install -g pnpm
```bash
pnpm run dev
```

## Adding New LLMs:

To make new LLMs available in this version of Bolt.new, head over to `app/utils/constants.ts` and find the constant MODEL_LIST. Each element in this array is an object with the model ID as its name (taken from the provider's API documentation), a label for the frontend model dropdown, and the provider.

By default, Anthropic, OpenAI, Groq, and Ollama are implemented as providers, but the YouTube video for this repo covers how to extend this to work with more providers if you wish!

When you add a new model to the MODEL_LIST array, it will immediately be available to use when you run the app locally or reload it. For Ollama models, make sure you have the model installed already before trying to use it here!

## Available Scripts

- `pnpm run dev`: Starts the development server.
@@ -228,55 +219,16 @@ pnpm run dev

This will start the Remix Vite development server. You will need Google Chrome Canary to run this locally if you use Chrome! It's an easy install and a good browser for web development anyway.

## FAQ

### How do I get the best results with oTToDev?

- **Be specific about your stack**: If you want to use specific frameworks or libraries (like Astro, Tailwind, ShadCN, or any other popular JavaScript framework), mention them in your initial prompt to ensure Bolt scaffolds the project accordingly.

- **Use the enhance prompt icon**: Before sending your prompt, try clicking the 'enhance' icon to have the AI model help you refine your prompt, then edit the results before submitting.

- **Scaffold the basics first, then add features**: Make sure the basic structure of your application is in place before diving into advanced features. This helps oTToDev understand the foundation of your project and ensures everything is wired up correctly before you build out more advanced functionality.

- **Batch simple instructions**: Combine simple instructions into one message. For example, you can ask oTToDev to change the color scheme, add mobile responsiveness, and restart the dev server, all in one go, saving you time and significantly reducing API credit consumption.

### How do I contribute to oTToDev?
## How do I contribute to oTToDev?

[Please check out our dedicated page for contributing to oTToDev here!](CONTRIBUTING.md)

### Do you plan on merging oTToDev back into the official Bolt.new repo?

More news on this is coming early next month - stay tuned!

### What are the future plans for oTToDev?
## What are the future plans for oTToDev?

[Check out our Roadmap here!](https://roadmap.sh/r/ottodev-roadmap-2ovzo)

Lots more updates to this roadmap are coming soon!

### Why are there so many open issues/pull requests?

oTToDev was started simply to showcase how to edit an open-source project and to do something cool with local LLMs on my (@ColeMedin) YouTube channel! However, it quickly grew into a massive community project, and I am working hard to keep up with demand by forming a team of maintainers and getting as many people involved as I can. That effort is going well, and all of our maintainers are ABSOLUTE rockstars, but it still takes time to organize everything so we can efficiently get through all the issues and PRs. Rest assured, we are working hard, and we are even working on some partnerships behind the scenes to really help this project take off!

### How do local LLMs fare compared to larger models like Claude 3.5 Sonnet for oTToDev/Bolt.new?

Although the gap between open-source and massive closed-source models is closing quickly, you're still going to get the best results with very large models like GPT-4o, Claude 3.5 Sonnet, and DeepSeek Coder V2 236b. This is one of the big tasks at hand: figuring out how to prompt better, use agents, and improve the platform as a whole so it works well even with the smaller local LLMs!

### I'm getting the error: "There was an error processing this request"

If you see this error within oTToDev, the application is telling you there is a problem at a high level, which could mean a number of different things. To find the actual error, check BOTH the terminal where you started the application (with Docker or pnpm) and the developer console in the browser. In most browsers, you can open the developer console by pressing F12 or by right-clicking anywhere in the page and selecting "Inspect", then switching to the "Console" tab.

### I'm getting the error: "x-api-key header missing"

We have seen this error a couple of times, and for some reason just restarting the Docker container has fixed it. It seems to be Ollama-specific. Another thing to try is running oTToDev with Docker or pnpm, whichever you didn't run first. We are still on the hunt for why this happens once in a while!

### I'm getting a blank preview when oTToDev runs my app!

We promise we are constantly testing new PRs coming into oTToDev, and the preview is core functionality, so the application is not broken! A blank or missing preview generally means the LLM hallucinated bad code or incorrect commands. We are working on making this more transparent so it is obvious. Sometimes the error will appear in the developer console too, so check there as well.

### Everything works but the results are bad

This goes back to the point above: local LLMs are getting very powerful, but you will still see better (sometimes much better) results with the largest LLMs like GPT-4o, Claude 3.5 Sonnet, and DeepSeek Coder V2 236b. If you are using smaller LLMs like Qwen-2.5-Coder, consider it more experimental and educational at this point. It can build smaller applications really well, which is super impressive for a local LLM, but for larger-scale applications you still want to use the larger LLMs!

## FAQ

[Please check out our dedicated page for FAQs related to oTToDev here!](FAQ.md)
12 changes: 8 additions & 4 deletions app/components/chat/ExamplePrompts.tsx
@@ -10,19 +10,23 @@ const EXAMPLE_PROMPTS = [

export function ExamplePrompts(sendMessage?: { (event: React.UIEvent, messageInput?: string): void | undefined }) {
return (
<div id="examples" className="relative w-full max-w-xl mx-auto mt-8 flex justify-center">
<div className="flex flex-col space-y-2 [mask-image:linear-gradient(to_bottom,black_0%,transparent_180%)] hover:[mask-image:none]">
<div id="examples" className="relative flex flex-col gap-9 w-full max-w-3xl mx-auto flex justify-center mt-6">
<div
className="flex flex-wrap justify-center gap-2"
style={{
animation: '.25s ease-out 0s 1 _fade-and-move-in_g2ptj_1 forwards',
}}
>
{EXAMPLE_PROMPTS.map((examplePrompt, index: number) => {
return (
<button
key={index}
onClick={(event) => {
sendMessage?.(event, examplePrompt.text);
}}
className="group flex items-center w-full gap-2 justify-center bg-transparent text-bolt-elements-textTertiary hover:text-bolt-elements-textPrimary transition-theme"
className="border border-bolt-elements-borderColor rounded-full bg-gray-50 hover:bg-gray-100 dark:bg-gray-950 dark:hover:bg-gray-900 text-bolt-elements-textSecondary hover:text-bolt-elements-textPrimary px-3 py-1 text-xs transition-theme"
>
{examplePrompt.text}
<div className="i-ph:arrow-bend-down-left" />
</button>
);
})}
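For context, here is a hedged usage sketch showing how the refactored component might be consumed. The call style is inferred from the signature above (ExamplePrompts takes the handler as a plain function argument rather than JSX props); the parent component, import path, and styling are hypothetical.

```tsx
import type { UIEvent } from 'react';

import { ExamplePrompts } from '~/components/chat/ExamplePrompts';

// Hypothetical parent component; the ChatIntro name, the `~/` alias, and the
// wrapper styling are illustrative, not taken from the repo.
export function ChatIntro(props: { sendMessage: (event: UIEvent, messageInput?: string) => void }) {
  // ExamplePrompts is declared as a plain function taking sendMessage as its
  // argument, so it is invoked directly rather than rendered as <ExamplePrompts />.
  return <div className="mt-6">{ExamplePrompts(props.sendMessage)}</div>;
}
```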
