Let's add an open-source Llama model. Can you give me a hint so I can try it out?
Add Llama 2 LLM Support to Core LLM Module
Description:
Extend the LLM module to support Llama 2 models through LangChain's integration, following the existing pattern for other LLM providers.
Tasks:
- In `gpt_all_star/core/llm.py`:
  - Add `LLAMA` to the `LLM_TYPE` enum
  - Create a new `_create_chat_llama` helper function
  - Update `create_llm` to handle the Llama case
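The enum and factory changes above might look roughly like the sketch below. The member names, signatures, and the `LLAMA_MODEL_PATH` variable are assumptions for illustration; the real helper would instantiate LangChain's Llama integration (e.g. `LlamaCpp` from `langchain_community.llms`) rather than return a plain dict, which is used here only to keep the sketch dependency-free:

```python
import os
from enum import Enum


class LLM_TYPE(str, Enum):
    # Existing members elided; only OPENAI is shown for context.
    # LLAMA is the hypothetical new addition.
    OPENAI = "OPENAI"
    LLAMA = "LLAMA"


def _create_chat_llama(temperature: float = 0.1, streaming: bool = True) -> dict:
    # Hypothetical helper: a real implementation would pass these
    # arguments to LangChain's LlamaCpp; a dict stands in here so
    # the sketch runs without langchain installed.
    return {
        "model_path": os.getenv("LLAMA_MODEL_PATH", "models/llama-2-7b-chat.gguf"),
        "temperature": temperature,
        "streaming": streaming,
    }


def create_llm(llm_type: LLM_TYPE, temperature: float = 0.1):
    # Dispatch mirrors the existing per-provider cases; only the
    # new Llama branch is shown.
    if llm_type is LLM_TYPE.LLAMA:
        return _create_chat_llama(temperature=temperature)
    raise ValueError(f"Unsupported LLM type: {llm_type}")
```

Keeping the branch inside `create_llm` thin and pushing provider details into `_create_chat_llama` matches the pattern the other providers already follow.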
- In `.env.sample`:
  - Add a Llama-specific environment variables section
  - Include model path and parameters
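The `.env.sample` section could look like the fragment below; the variable names and default values are assumptions, not existing project conventions:

```
# --- Llama 2 (local model) ---
LLAMA_MODEL_PATH=models/llama-2-7b-chat.gguf
LLAMA_TEMPERATURE=0.1
LLAMA_N_CTX=4096
```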
Test:
- In `tests/core/test_llm.py`:
  - Add a test case for Llama LLM creation
  - Mock Llama model initialization
  - Test configuration parameters
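A test along these lines could cover the environment-variable handling. It assumes the hypothetical dict-returning `_create_chat_llama` helper sketched earlier (a stand-in is defined inline so the example runs without the real package); a test against the actual implementation would instead patch the LangChain model class and assert on its constructor arguments:

```python
import os
from unittest import mock


def _create_chat_llama(temperature: float = 0.1, streaming: bool = True) -> dict:
    # Inline stand-in for the hypothetical helper, so this sketch
    # is self-contained; the real test would import it from
    # gpt_all_star.core.llm.
    return {
        "model_path": os.getenv("LLAMA_MODEL_PATH", "models/llama-2-7b-chat.gguf"),
        "temperature": temperature,
        "streaming": streaming,
    }


def test_create_chat_llama_reads_model_path_from_env():
    # patch.dict restores os.environ when the block exits, so the
    # test leaves no state behind.
    with mock.patch.dict(os.environ, {"LLAMA_MODEL_PATH": "/tmp/llama.gguf"}):
        llm = _create_chat_llama()
    assert llm["model_path"] == "/tmp/llama.gguf"
    assert llm["temperature"] == 0.1
    assert llm["streaming"] is True
```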
Implementation Notes:
- Follow existing pattern of other LLM implementations
- Use LangChain's Llama integration
- Maintain consistent temperature and streaming settings
- Support model path configuration via environment variables