
feat: redact file contents from chat and put latest files into system prompt #904

Merged

Conversation

@thecodacus (Collaborator) commented Dec 26, 2024

Re-enable Context Optimization with Toggle Feature

Overview

This PR re-enables the previously implemented context optimization feature (from PR #578) and adds a user-configurable toggle in the settings panel's Features tab. This lets users choose whether to use context optimization based on their needs and the LLM model they're using.

Key Changes

1. Context Optimization Toggle

  • Added a new switch in the Features tab for enabling/disabling context optimization
  • Implemented persistent storage of the setting using cookies
  • Added new state management through enableContextOptimizationStore
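For reference, a minimal sketch of what the store and its cookie persistence could look like, assuming a nanostores atom plus js-cookie (both used elsewhere in the settings code); the cookie key and setter name are illustrative, not the exact implementation:

```ts
// app/lib/stores/settings.ts (sketch, not the exact implementation)
import { atom } from 'nanostores';
import Cookies from 'js-cookie';

const COOKIE_KEY = 'contextOptimizationEnabled'; // hypothetical cookie name

// Initialize from the persisted cookie so the choice survives reloads.
export const enableContextOptimizationStore = atom<boolean>(
  Cookies.get(COOKIE_KEY) === 'true',
);

// Flip the toggle and persist the new value.
export function setContextOptimization(enabled: boolean) {
  enableContextOptimizationStore.set(enabled);
  Cookies.set(COOKIE_KEY, String(enabled));
}
```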

2. Integration with Chat System

  • Modified chat implementation to pass context optimization flag through the API
  • Updated stream-text functionality to conditionally apply optimization
  • Re-enabled previously commented-out code for file content handling
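On the client side this mostly means sending the flag along with each chat request. A hedged sketch, assuming the Vercel AI SDK's useChat and the useSettings hook sketched under Settings Management below (option and file names are illustrative):

```ts
// app/components/chat/Chat.client.tsx (sketch)
import { useChat } from 'ai/react';
import { useSettings } from '~/lib/hooks/useSettings';

export function ChatImpl() {
  const { contextOptimizationEnabled } = useSettings();

  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: '/api/chat',
    // Forward the toggle so the server can decide whether to redact file
    // contents from the message history and rely on the system prompt instead.
    body: { contextOptimization: contextOptimizationEnabled },
  });

  // ...existing chat UI rendering
  return null;
}
```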

Technical Details

Settings Management

  • Added new contextOptimizationEnabled state in useSettings hook
  • Implemented cookie persistence for the setting
  • Added enableContextOptimization callback for toggle handling
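Sketched under the same assumptions, the hook can expose the state and callback roughly like this (the repo's actual useSettings covers more flags, so this is only the relevant slice):

```ts
// app/lib/hooks/useSettings.ts (sketch of the relevant slice)
import { useCallback } from 'react';
import { useStore } from '@nanostores/react';
import {
  enableContextOptimizationStore,
  setContextOptimization,
} from '~/lib/stores/settings';

export function useSettings() {
  const contextOptimizationEnabled = useStore(enableContextOptimizationStore);

  // Handler wired to the switch in the Features tab.
  const enableContextOptimization = useCallback((enabled: boolean) => {
    setContextOptimization(enabled); // updates the store and persists the cookie
  }, []);

  return { contextOptimizationEnabled, enableContextOptimization };
}
```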

System Changes

  • Modified the chat API endpoint to accept contextOptimization parameter
  • Updated streamText to conditionally apply optimization based on the toggle
  • Re-enabled file context integration with system prompts when optimization is active
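Put together, the server-side behavior roughly amounts to: when the flag is on, strip file contents out of the chat history and serialize the latest file map into the system prompt instead. A self-contained sketch of that step (the real stream-text code is more involved; the artifact markup and function name here are hypothetical):

```ts
// Sketch of the conditional optimization applied before calling the LLM.
interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

type FileMap = Record<string, string>; // path -> latest file content

// Assumed markup for file-bearing artifacts embedded in messages.
const FILE_ARTIFACT_REGEX = /<boltArtifact[\s\S]*?<\/boltArtifact>/g;

export function prepareContext(
  systemPrompt: string,
  messages: ChatMessage[],
  files: FileMap,
  contextOptimization: boolean,
): { system: string; messages: ChatMessage[] } {
  if (!contextOptimization) {
    // Toggle off: send the conversation through unchanged.
    return { system: systemPrompt, messages };
  }

  // Redact file contents embedded in earlier messages...
  const redacted = messages.map((m) => ({
    ...m,
    content: m.content.replace(FILE_ARTIFACT_REGEX, '[file contents redacted]'),
  }));

  // ...and put the latest files into the system prompt instead.
  const fileContext = Object.entries(files)
    .map(([path, content]) => `<file path="${path}">\n${content}\n</file>`)
    .join('\n');

  return {
    system: `${systemPrompt}\n\nCurrent project files:\n${fileContext}`,
    messages: redacted,
  };
}
```

The contextOptimization flag itself arrives as part of the /api/chat request body, as wired on the client above.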

Testing

  • Verified toggle functionality persists across sessions
  • Tested chat behavior with optimization enabled and disabled
  • Verified file context handling with various LLM models
  • Confirmed proper state management and API integration

Migration Impact

  • No breaking changes
  • Feature is opt-in through UI toggle
  • Maintains backward compatibility with existing chat functionality
  • No migration steps required for existing installations

Future Improvements

  • Consolidate feature flags into a single configuration map (noted in TODO)
  • Add analytics for feature usage
  • Implement model-specific optimization defaults
  • Add documentation about optimal use cases for different models

Preview

Context.Optimization.demo.mp4

@thecodacus added this to the v0.0.4 milestone Dec 26, 2024
@thecodacus changed the title from "feat: redact file contents from chat and placed current files into system prompt" to "feat: redact file contents from chat and put files into system prompt" Dec 26, 2024
@thecodacus changed the title from "feat: redact file contents from chat and put files into system prompt" to "feat: redact file contents from chat and put latest files into system prompt" Dec 26, 2024
@traflagar

Very important feature!

@thecodacus merged commit 3a36a44 into stackblitz-labs:main Dec 29, 2024
6 checks passed
@thecodacus deleted the context-optimization-settings branch December 29, 2024 10:06
Tony-BDS pushed a commit to Tony-BDS/bolt.diy that referenced this pull request Jan 1, 2025
przbadu added a commit to przbadu/bolt.diy that referenced this pull request Jan 6, 2025
* Fixed console error for SettingsWindow & removed ts-nocheck where not needed

* fix: added wait till terminal prompt for bolt shell execution

* removed logs

* add/toc-for-readme

added a TOC for the README file, renamed some headings to better suit the TOC

* Update README.md

* feat: added terminal error capturing and automated fix prompt

* add: xAI grok-2-1212 model

* feat: Data Tab

Removed Chat History Tab
Added Data Tab
Data tab can export and delete chat history, import API keys, import and export settings

* Update DataTab.tsx

* feat: improved providers list style

made the list 2 columns wide and separated out the experimental providers

* fixed API Key import

* updated styling, wording, and animation icons

* updated title header

* docs: updated setup guide to have more detailed instructions

* updated some text

* chore: update commit hash to 95dbcf1

* chore: update commit hash to de64007

* chore: update commit hash to 381d490

* chore: update commit hash to a53b10f

* Update ProvidersTab.tsx

* Update ProvidersTab.tsx

* chore: update commit hash to 75ec49b

* docs: updated style in faq

updated style in FAQ docs to be an accordion-like style
added a TOC to the index page in the docs

* chore: update commit hash to 636f87f

* docs: updated Contributing

updated Contributing in the docs
updated Contributing and FAQ in the GitHub part as well

* docs: added info on updating using docker

Added docker-compose --profile development up --build to the update section

* docs: added info on the Releases Page

Added the option to download from the Releases Page instead of git clone in the README

* docs: added info on both ways to set api keys

* chore: update commit hash to ab5cde3

* refactor: updated vite config to inject version metadata into the app on build (stackblitz-labs#841)

* refactor: removes commit.json and used vite.config to load these variables

* updated precommit hook

* updated the pre start script

* updated the workflows

* ci: updated the docs ci to only trigger if any files changed in the docs folder (stackblitz-labs#849)

* docs: updated download link (stackblitz-labs#850)

* fix: add Message Processing Throttling to Prevent Browser Crashes (stackblitz-labs#848)

* fix hotfix for version metadata issue (stackblitz-labs#853)

* refactor: refactored LLM Providers: Adapting Modular Approach (stackblitz-labs#832)

* refactor: Refactoring Providers to have providers as modules

* updated package and lock file

* added grok model back

* updated registry system

* ignored alert on project reload

* updated read me

* fix: provider menu dropdown fix (ghost providers) (stackblitz-labs#862)

* better osc code cleanup

* fix: ollama provider module base url hotfix for docker (stackblitz-labs#863)

* fix: ollama base url hotfix

* cleanup logic

* docs: updated env.example of OLLAMA & LMSTUDIO base url (stackblitz-labs#877)

* correct OLLAMA_API_BASE_URL

* correct OLLAMA_API_BASE_URL

* correct OLLAMA_API_BASE_URL

* fix: check for updates does not look for commit.json now (stackblitz-labs#861)

* feat: add Starter template menu in homepage (stackblitz-labs#884)

* added icons and component

* updated unocss to add dynamic icons

* removed temp logs

* updated readme

* feat: catch errors from web container preview and show in actionable alert so user can send them to AI for fixing (stackblitz-labs#856)

* Catch errors from web container

* Show fix error popup on errors in preview

* Remove unneeded action type

* PR comments

* Cleanup urls in stacktrace

---------

Co-authored-by: Anirban Kar <thecodacus@gmail.com>

* ci: improved change-log generation script and cleaner release ci action (stackblitz-labs#896)

* build: improved-changelog

* added a better change log script

* improved changelog script

* improved change log script

* fix: detect and remove markdown block syntax that llms sometimes hallucinate for file actions (stackblitz-labs#886)

* Clean out markdown syntax

* Remove indentation removal

* Improve for streaming

* feat: redact file contents from chat and put latest files into system prompt  (stackblitz-labs#904)

* feat: added Automatic Code Template Detection And Import (stackblitz-labs#867)

* initial setup

* updated template list

* added optional switch to control this feature

* removed some logs

* fix: import folder filtering

* fix: add defaults for LMStudio to work out of the box (stackblitz-labs#928)

* feat: added hyperbolic llm models (stackblitz-labs#943)

* Added Hyperbolic Models

* Fix: Fixed problem in connecting with hyperbolic models

* added dynamic models for hyperbolic

* removed logs

* fix: refresh model list after api key changes (stackblitz-labs#944)

* fix: better model loading ui feedback and model list update (stackblitz-labs#954)

* fix: better model loading feedback and model list update

* added load on provider settings update

* fix: updated logger and model caching minor bugfix #release (stackblitz-labs#895)

* fix: updated logger and model caching

* usage token stream issue fix

* minor changes

* updated starter template change to fix the app title

* starter template bugfix

* fixed hydration errors and raw logs

* removed raw log

* made auto select template false by default

* cleaner logs and updated logic to call dynamicModels only if not found in static models

* updated starter template instructions

* browser console log improved for firefox

* fixed provider icons

* chore: release version 0.0.4

* fix: hotfix auto select starter template works without github token #release (stackblitz-labs#959)

* fix: hotfix starter template fix, updated header link to use navigate

* template auth fix

* updated changelog script

* chore: release version 0.0.5

* fix: show warning on starter template failure and continue (stackblitz-labs#960)

* Update hyperbolic.ts

Updated the Hyperbolic Settings link

* fix: introduce our own cors proxy for git import to fix 403 errors on isometric git cors proxy (stackblitz-labs#924)

* Exploration of improving git import

* Fix our own git proxy

* Clean out file counting for progress, does not seem to work well anyways

* fix: git private clone with custom proxy (stackblitz-labs#1010)

* cookie fix

* fix: git private clone with custom proxy

* list -fix

* docs: updating copyright in LICENSE (stackblitz-labs#796)

* fix: added XAI to docker config (stackblitz-labs#274)

* commit

* Create .env.example

* Update docker-compose.yaml

---------

Co-authored-by: Anirban Kar <thecodacus@gmail.com>

* ci: docker Image creation pipeline (stackblitz-labs#1011)

* Create docker.yaml

* Add build target

* Use build target var

* Use github token instead

---------

Co-authored-by: kris1803 <kristiansstraume17@gmail.com>
Co-authored-by: Anirban Kar <thecodacus@gmail.com>
Co-authored-by: Dustin Loring <dustinwloring1988@gmail.com>
Co-authored-by: GK <gokul@aospa.co>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Cole Medin <cole@dynamous.ai>
Co-authored-by: Eduard Ruzga <wonderwhy.er@gmail.com>
Co-authored-by: Alex Parker <128879861+Soumyaranjan-17@users.noreply.github.com>
Co-authored-by: Juan Manuel Campos Olvera <juan4106@hotmail.com>
Co-authored-by: Arsalaan Ahmed <147995884+Ahmed-Rahil@users.noreply.github.com>
Co-authored-by: Gaurav-Wankhede <73575353+Gaurav-Wankhede@users.noreply.github.com>
Co-authored-by: Siddarth <pullakhandam.siddartha@gmail.com>
Co-authored-by: twsl <45483159+twsl@users.noreply.github.com>