LLM Insights

Insights gleaned from work on each stage of Large Language Model (LLM) development.

  1. Infrastructure/Platform
  2. Data Engineering
  3. Pre-Training Stage
  4. Fine-Tuning Stage
  5. Alignment Stage
  6. Inference Stage
  7. Applications

Acknowledgements

I'm grateful to my employers for trusting me to lead the team that built the GPU supercomputing platform and infrastructure, and to co-lead the team doing LLM pre-training. This allowed me to work on large on-premise GPU compute clusters, first with A100s and then H100s, which is certainly a privilege. Hopefully, sharing some of these notes and insights helps the community.
