- Large language models (LLMs) have brought about a transformative shift in the world of natural language processing, demonstrating remarkable capabilities across a diverse spectrum of tasks, from basic text generation to intricate problem-solving.
- As the potential of LLMs continues to unfold, there is a growing need to tailor these models to distinct domains and industries, aligning their broad knowledge with specific, niche demands.
- This repository curates domain-specific LLMs meticulously fine-tuned for various sectors, spanning healthcare, legal, finance, entertainment, and more.
- Its primary objective is to bridge the gap between the versatile power of LLMs and the unique requirements of specialized fields, facilitating their seamless integration into industry-specific applications.
Name | Type | Description | Demo | Paper | Repo | Site |
---|---|---|---|---|---|---|
BloombergGPT | Pre-trained | 50-billion-parameter LLM trained on a wide range of financial data (363-billion-token dataset) | - | 🔗 | - | - |
FinChat | ? | Generative AI tool for investment research that greatly reduces the time needed for data aggregation, visualization, and summaries | 🔗 | - | - | 🔗 |
FinGPT | Fine-tuned | Series of LLMs fine-tuned on open base models (e.g., Llama-2) with open finance data (see the loading sketch below the table) | - | 🔗 | 🔗 | 🔗 |
FinMA | Fine-tuned | Financial LLM fine-tuned from LLaMA on finance-based instruction data (136K samples) | 🔗 | 🔗 | 🔗 | - |
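FinGPT-style releases typically ship as LoRA adapters meant to be applied on top of an open base model rather than as full checkpoints. Below is a minimal, hedged sketch of that loading pattern with Hugging Face `transformers` and `peft`; the base-model ID is an assumption and the adapter ID is a placeholder, not an actual FinGPT release name.

```python
# Sketch: load an open base model and apply a finance LoRA adapter on top of it.
# NOTE: "meta-llama/Llama-2-7b-hf" is gated on the Hub, and
# "your-org/finance-lora-adapter" is a placeholder ID, not a real release.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"          # assumed base model
adapter_id = "your-org/finance-lora-adapter"  # placeholder LoRA adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "What is the sentiment of this headline: 'Company X beats Q3 earnings estimates'?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```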
Name | Type | Description | Repo | Paper | Demo | Site |
---|---|---|---|---|---|---|
Med-PaLM | Fine-tuned | Google's LLM (fine-tuned with PaLM as the base model) designed to provide high-quality answers to medical questions | - | 🔗 | - | 🔗 |
Med-PaLM 2 | Fine-tuned | Enhanced version of Med-PaLM released in March 2023 by Google with improved performance | 🔗 | 🔗 | 🔗 | 🔗 |
PharmacyGPT | In-context Learning | GPT-4 coupled with in-context learning (a dynamic prompting approach) using domain-specific data (see the sketch below the table) | - | 🔗 | - | - |
RUSSELL-GPT | Fine-tuned | LLM developed by the National University Health System in Singapore to enhance clinicians' productivity (e.g., medical Q&A, case note summarization) | - | - | - | 🔗 |
ClinicalBERT | Fine-tuned | BERT-based model trained on clinical notes for modeling clinical text and predicting hospital readmission | - | 🔗 | - | - |
DrugGPT | Fine-tuned | GPT-based strategy for designing potential ligands targeting specific proteins | 🔗 | 🔗 | - | - |
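PharmacyGPT's approach, as described above, keeps a general-purpose model frozen and supplies domain knowledge through examples selected at query time (dynamic prompting). The sketch below illustrates that general pattern only; the example store, retrieval step, and prompts are illustrative placeholders, not PharmacyGPT's actual pipeline.

```python
# Sketch of dynamic prompting: pick the most relevant labeled cases for the
# incoming query and prepend them as in-context examples before calling the model.
# The example store and retrieval function are placeholders for illustration only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

example_store = [  # placeholder domain examples: (case, expected answer)
    ("Patient on warfarin starts amiodarone.",
     "Monitor INR closely; consider reducing the warfarin dose."),
    ("Patient with CrCl 25 mL/min prescribed metformin.",
     "Metformin is generally avoided at this level of renal function."),
]

def retrieve_examples(query: str, k: int = 2):
    # Placeholder retrieval: a real system would rank by embedding similarity.
    return example_store[:k]

def answer(query: str) -> str:
    messages = [{"role": "system", "content": "You are a clinical pharmacy assistant."}]
    for case, reply in retrieve_examples(query):
        messages.append({"role": "user", "content": case})
        messages.append({"role": "assistant", "content": reply})
    messages.append({"role": "user", "content": query})
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    return response.choices[0].message.content

print(answer("Patient on simvastatin 40 mg starts clarithromycin."))
```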
Name | Type | Description | Demo | Paper | Repo | Site |
---|---|---|---|---|---|---|
Roomgpt | ControlNet | Upload a photo of your room to generate your dream room with AI. | 🔗 | - | 🔗 | - |
These LLMs are all licensed for commercial use (e.g., Apache 2.0, MIT, OpenRAIL-M). Contributions welcome!
Name | Release Date | Paper/Blog | Dataset | Tokens (T) | License |
---|---|---|---|---|---|
starcoderdata | 2023/05 | StarCoder: A State-of-the-Art LLM for Code | starcoderdata | 0.25 | Apache 2.0 |
RedPajama | 2023/04 | RedPajama, a project to create leading open-source models, starts by reproducing LLaMA training dataset of over 1.2 trillion tokens | RedPajama-Data | 1.2 | Apache 2.0 |
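Both pre-training corpora above are hosted on the Hugging Face Hub, where streaming avoids downloading hundreds of gigabytes up front. A minimal sketch using the `datasets` library follows; the Hub ID and the `text` field are assumptions based on the dataset cards, so verify them (and any gating or extra load arguments) before use.

```python
# Sketch: stream records from a (sampled) pre-training corpus on the Hugging Face Hub
# instead of downloading it in full. The Hub ID and "text" field are assumptions
# taken from the dataset card; some corpora are gated or need extra arguments.
from datasets import load_dataset

ds = load_dataset(
    "togethercomputer/RedPajama-Data-1T-Sample",  # assumed Hub ID for the sampled corpus
    split="train",
    streaming=True,  # iterate without materializing the whole dataset on disk
)

for i, record in enumerate(ds):
    print(record["text"][:200])  # first 200 characters of each document
    if i >= 2:
        break
```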
Name | Release Date | Paper/Blog | Dataset | Samples (K) | License |
---|---|---|---|---|---|
MPT-7B-Instruct | 2023/05 | Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs | dolly_hhrlhf | 59 | CC BY-SA-3.0 |
databricks-dolly-15k | 2023/04 | Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM | databricks-dolly-15k | 15 | CC BY-SA-3.0 |
OIG (Open Instruction Generalist) | 2023/03 | THE OIG DATASET | OIG | 44,000 | Apache 2.0 |
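Instruction datasets like these are usually flattened into single prompt/response training strings before fine-tuning. A short sketch using databricks-dolly-15k follows; the field names (`instruction`, `context`, `response`) reflect that dataset's published schema, and the prompt template is just one common convention, not a requirement.

```python
# Sketch: turn databricks-dolly-15k records into prompt/response training text.
# Field names follow the dolly-15k dataset card; adapt for other schemas.
from datasets import load_dataset

dolly = load_dataset("databricks/databricks-dolly-15k", split="train")

def to_training_text(record: dict) -> str:
    # Optional grounding context is appended only when the record provides one.
    context = f"\n\nContext:\n{record['context']}" if record.get("context") else ""
    return (
        f"### Instruction:\n{record['instruction']}{context}\n\n"
        f"### Response:\n{record['response']}"
    )

print(to_training_text(dolly[0]))
```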
Name | Release Date | Paper/Blog | Dataset | Samples (K) | License |
---|---|---|---|---|---|
OpenAssistant Conversations Dataset | 2023/04 | OpenAssistant Conversations - Democratizing Large Language Model Alignment | oasst1 | 161 | Apache 2.0 |
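Unlike the instruction datasets above, oasst1 is distributed as flat message rows linked by parent IDs, so dialogue pairs have to be reconstructed before training. A rough sketch follows; the field names (`message_id`, `parent_id`, `role`, `text`) are taken from the oasst1 dataset card as published and should be verified.

```python
# Sketch: rebuild (prompt, reply) pairs from oasst1's flat message rows by
# joining each assistant message to its parent prompter message.
# Field names are assumptions based on the oasst1 dataset card.
from datasets import load_dataset

oasst = load_dataset("OpenAssistant/oasst1", split="train")

messages_by_id = {row["message_id"]: row for row in oasst}

pairs = []
for row in oasst:
    parent = messages_by_id.get(row["parent_id"])
    if row["role"] == "assistant" and parent is not None and parent["role"] == "prompter":
        pairs.append((parent["text"], row["text"]))

print(len(pairs), "prompt/reply pairs")
print(pairs[0])
```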
- Leaderboard by lmsys.org
- Evals by MosaicML
- Holistic Evaluation of Language Models (HELM)
- LLM-Leaderboard
- TextSynth Server Benchmarks
- Open LLM Leaderboard by Hugging Face
- Apache 2.0: Allows users to use the software for any purpose, to distribute it, to modify it, and to distribute modified versions of the software under the terms of the license, without concern for royalties.
- MIT: Similar to Apache 2.0 but shorter and simpler. Also, in contrast to Apache 2.0, does not require stating any significant changes to the original code.
- CC BY-SA-4.0: Allows (i) copying and redistributing the material and (ii) remixing, transforming, and building upon the material for any purpose, even commercially. But if you do the latter, you must distribute your contributions under the same license as the original. (Thus, may not be viable for internal teams.)
- OpenRAIL-M v1: Allows royalty-free access and flexible downstream use and sharing of the model and modifications of it, and comes with a set of use restrictions (see Attachment A)
- BSD-3-Clause: This version allows unlimited redistribution for any purpose as long as its copyright notices and the license's disclaimers of warranty are maintained.
Disclaimer: The information provided in this repo does not, and is not intended to, constitute legal advice. Maintainers of this repo are not responsible for the actions of third parties who use the models. Please consult an attorney before using models for commercial purposes.
- Complete entries for context length, and check entries with `?`
- Add number of tokens trained? (see considerations)
- Add (links to) training code?
- Add (links to) eval benchmarks?
- Include examples from the range of other domains/industries listed in Contributing
- Include non-LLM GenAI examples (expand scope of repo)