Adding a devcontainer with Redis and a Ollama server #217
Conversation
Codecov Report
Attention: Patch coverage is
@@ Coverage Diff @@
## main #217 +/- ##
==========================================
- Coverage 71.82% 71.49% -0.33%
==========================================
Files 55 55
Lines 2875 3021 +146
==========================================
+ Hits 2065 2160 +95
- Misses 810 861 +51
why remove this?
@@ -777,7 +765,7 @@ def process_history(
async def agenerate_init_profile(
    model_name: str,
    basic_info: dict[str, str],
-   bad_output_process_model: str = DEFAULT_BAD_OUTPUT_PROCESS_MODEL,
why remove the default?
To use a custom model without OpenAI keys, the original way doesn't work. Now we use the model itself as the default reformat model, but allow the user to choose other models for reformatting.
Would this break existing running scripts that do not specify a reformatting model? Can we still have a default one?
yeah, the default one would be the one used for generating the output for the first time.
This PR introduces an experimental dev container that lets developers easily set up a clean environment with a Redis server and an LLM server running locally with a minimal memory footprint (<4GB), so it can run easily on laptops and in Codespaces.
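For orientation, a setup like this is commonly wired up with a compose file that the dev container attaches to. The sketch below is an assumption about the shape of such a file, not the PR's actual configuration (image tags, service names, and the workspace mount are all illustrative; `11434` is Ollama's default port):

```yaml
# docker-compose.yml sketch (illustrative only)
services:
  app:
    image: mcr.microsoft.com/devcontainers/python:3.11
    volumes:
      - ..:/workspace
    command: sleep infinity   # keep the dev container alive
  redis:
    image: redis:7-alpine     # small-footprint Redis server
  ollama:
    image: ollama/ollama      # local LLM server
    ports:
      - "11434:11434"         # Ollama's default API port
```

Inside the container, the app service would then reach Redis at `redis:6379` and the LLM server at `http://ollama:11434`.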
📑 Description
Uses `model_name` as the default model for reformatting.
Out of scope:
✅ Checks
- Branch name follows `type/descript` (e.g. `feature/add-llm-agents`)
ℹ Additional Information