- Berkeley, CA
- https://sea-snell.github.io
- @sea_snell
Stars
Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
aider is AI pair programming in your terminal
Minimal transformer for arbitrary data (i.e. bio stuff!)
An extremely fast Python package and project manager, written in Rust.
A plotting tool that outputs Line Rider maps, so you can watch a man on a sled scoot down your loss curves. 🎿
Official inference repo for FLUX.1 models
A set of Python scripts that makes your experience on TPU better
TPU pod commander is a package for managing and launching jobs on Google Cloud TPU pods.
Minimal but scalable implementation of large language models in JAX
Read Google Cloud Storage, Azure Blobs, and local paths with the same interface
Turn jitted jax functions back into python source code
[ICLR 2024] SWE-Bench: Can Language Models Resolve Real-world Github Issues?
Lightweight, standalone C++ inference engine for Google's Gemma models.
Code for Paper: Autonomous Evaluation and Refinement of Digital Agents
Schedule-Free Optimization in PyTorch
Reaching LLaMA2 Performance with 0.1M Dollars
SWE-agent takes a GitHub issue and tries to automatically fix it, using GPT-4, or your LM of choice. It can also be employed for offensive cybersecurity or competitive coding challenges.
HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal
Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax
Language models scale reliably with over-training and on downstream tasks
GPQA: A Graduate-Level Google-Proof Q&A Benchmark
JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs welcome).
Accelerate and optimize performance with streamlined training and serving options in JAX.
SGLang is a fast serving framework for large language models and vision language models.