
# pre-training

Here are 157 public repositories matching this topic...

Use PEFT or full-parameter training to fine-tune 350+ LLMs or 90+ MLLMs. (LLM: Qwen2.5, Llama3.2, GLM4, Internlm2.5, Yi1.5, Mistral, Baichuan2, DeepSeek, Gemma2, ...; MLLM: Qwen2-VL, Qwen2-Audio, Llama3.2-Vision, Llava, InternVL2, MiniCPM-V-2.6, GLM4v, Xcomposer2.5, Yi-VL, DeepSeek-VL, Phi3.5-Vision, ...)

  • Updated Sep 30, 2024
  • Python
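
The entry above centers on PEFT fine-tuning. As a hedged illustration of the general technique (not this repository's own API), here is a minimal LoRA sketch using the Hugging Face `transformers` and `peft` libraries; the model name and hyperparameters are placeholders.

```python
# Minimal PEFT/LoRA sketch: wrap a base model with low-rank adapters so that
# only the small adapter matrices are trained while the original weights stay
# frozen. Model name and hyperparameters are illustrative, not prescriptive.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B")

# Attach LoRA adapters to the attention projections.
config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, config)

# Reports how few parameters are actually trainable (typically well under 1%).
model.print_trainable_parameters()
```

Raising the rank `r` or widening `target_modules` trades adapter capacity against memory; full-parameter training skips the wrapper entirely and updates every weight.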

A one-stop data processing system to make data higher-quality, juicier, and more digestible for (multimodal) LLMs! 🍎 🍋 🌽 ➡️ ➡️ 🍸 🍹 🍷

  • Updated Sep 27, 2024
  • Python
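
To make the kind of quality filtering such a pipeline performs concrete, here is a minimal rule-based sketch. The helper name and thresholds are hypothetical and not this project's API; real pipelines chain many such operators.

```python
# A hypothetical quality filter in the spirit of LLM data-cleaning pipelines:
# drop samples that are too short or too repetitive before pre-training.
def keep_sample(text: str, min_words: int = 10, max_repeat_ratio: float = 0.3) -> bool:
    words = text.split()
    if len(words) < min_words:  # drop fragments that are too short
        return False
    unique_ratio = len(set(words)) / len(words)
    if 1 - unique_ratio > max_repeat_ratio:  # drop highly repetitive text
        return False
    return True

corpus = [
    "a tiny fragment",
    "A longer, varied sentence that passes both simple checks without trouble.",
]
cleaned = [t for t in corpus if keep_sample(t)]  # keeps only the second sample
```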

Awesome resources for in-context learning and prompt engineering: mastering LLMs such as ChatGPT, GPT-3, and FlanT5, with up-to-date, cutting-edge content.

  • Updated Sep 24, 2024
  • Jupyter Notebook
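
For readers new to the topic above, here is a minimal few-shot in-context learning sketch: the prompt itself carries worked examples, so the model infers the task without any weight updates. The prompt format is illustrative and not tied to any listed repository.

```python
# Few-shot prompt: demonstrations in the context teach the task at inference
# time; no fine-tuning is involved.
few_shot_prompt = """Classify the sentiment as positive or negative.

Review: The plot was dull and the acting worse.
Sentiment: negative

Review: A delightful film from start to finish.
Sentiment: positive

Review: I would happily watch it again.
Sentiment:"""

# Sending this prompt to an instruction-following LLM should complete
# the final line with "positive".
```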
