You can find here a list of the official notebooks provided by Hugging Face.
We would also like to list interesting content created by the community here. If you wrote a notebook leveraging 🤗 Transformers and would like it to be listed here, please open a Pull Request so it can be included under the Community notebooks.
| Notebook | Description |
|---|---|
| Getting Started Tokenizers | How to train and use your very own tokenizer |
| Getting Started Transformers | How to easily start using transformers |
| How to use Pipelines | Simple and efficient way to use State-of-the-Art models on downstream tasks through transformers (a minimal usage sketch follows this table) |
| How to train a language model | Highlight all the steps to effectively train a Transformer model on custom data |
| How to generate text | How to use different decoding methods for language generation with transformers |
| How to export model to ONNX | Highlight how to export and run inference workloads through ONNX |
| How to use Benchmarks | How to benchmark models with transformers |
| Reformer | How Reformer pushes the limits of language modeling |
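For quick orientation, here is a minimal sketch of the pipelines API referenced in the table above. The task names, the `gpt2` checkpoint, and the example inputs are illustrative choices, not taken from the linked notebooks:

```python
from transformers import pipeline

# Sentiment analysis with the default model for the task
classifier = pipeline("sentiment-analysis")
print(classifier("Transformers notebooks make it easy to get started."))

# Text generation, as covered in the "How to generate text" notebook
generator = pipeline("text-generation", model="gpt2")
print(generator("Once upon a time,", max_length=30, num_return_sequences=1))
```

Each pipeline downloads its default or named pretrained model on first use; the linked notebooks cover the details and additional tasks.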
The following notebooks were contributed by the community:

| Notebook | Description | Author |
|---|---|---|
| Train T5 on TPU | How to train T5 on SQUAD with Transformers and nlp | Suraj Patil |
| Fine-tune T5 for Classification and Multiple Choice | How to fine-tune T5 for classification and multiple choice tasks using a text-to-text format with PyTorch Lightning | Suraj Patil |
| Fine-tune DialoGPT on New Datasets and Languages | How to fine-tune the DialoGPT model on a new dataset for open-dialog conversational chatbots | Nathan Cooper |
| Long Sequence Modeling with Reformer | How to train on sequences as long as 500,000 tokens with Reformer | Patrick von Platen |
| Fine-tune BART for Summarization | How to fine-tune BART for summarization with fastai using blurr | Wayde Gilliam |
| Fine-tune a pre-trained Transformer on anyone's tweets | How to generate tweets in the style of your favorite Twitter account by fine-tuning a GPT-2 model | Boris Dayma |
| A Step by Step Guide to Tracking Hugging Face Model Performance | A quick tutorial for training NLP models with HuggingFace and visualizing their performance with Weights & Biases | Jack Morris |
| Pretrain Longformer | How to build a "long" version of existing pretrained models | Iz Beltagy |
| Fine-tune Longformer for QA | How to fine-tune a Longformer model for the QA task | Suraj Patil |
| Evaluate Model with 🤗nlp | How to evaluate Longformer on TriviaQA with nlp | Patrick von Platen |
| Fine-tune T5 for Sentiment Span Extraction | How to fine-tune T5 for sentiment span extraction using a text-to-text format with PyTorch Lightning | Lorenzo Ampil |
| Fine-tune DistilBert for Multiclass Classification | How to fine-tune DistilBert for multiclass classification with PyTorch (a minimal sketch follows this table) | Abhishek Kumar Mishra |
| Fine-tune BERT for Multi-label Classification | How to fine-tune BERT for multi-label classification using PyTorch | Abhishek Kumar Mishra |
| Fine-tune T5 for Summarization | How to fine-tune T5 for summarization in PyTorch and track experiments with WandB | Abhishek Kumar Mishra |
| Speed up Fine-Tuning in Transformers with Dynamic Padding / Bucketing | How to speed up fine-tuning by a factor of 2 using dynamic padding / bucketing | Michael Benesty |
| Pretrain Reformer for Masked Language Modeling | How to train a Reformer model with bi-directional self-attention layers | Patrick von Platen |
| Expand and Fine Tune Sci-BERT | How to increase the vocabulary of a pretrained SciBERT model from AllenAI on the CORD dataset and pipeline it | Tanmay Thakur |
| Fine-tune Electra and interpret with Integrated Gradients | How to fine-tune Electra for sentiment analysis and interpret predictions with Captum Integrated Gradients | Eliza Szczechla |
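Several of the community notebooks above fine-tune a pretrained encoder for classification in plain PyTorch. The sketch below shows the general shape of such a training step, assuming the `distilbert-base-uncased` checkpoint, four labels, and a toy two-example batch; these are illustrative assumptions, not the notebooks' actual data or hyperparameters:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=4  # illustrative label count
)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# One toy training step on a hand-made batch
batch = tokenizer(
    ["example text one", "example text two"],
    padding=True, truncation=True, return_tensors="pt"
)
labels = torch.tensor([0, 2])

model.train()
outputs = model(**batch, labels=labels)
loss = outputs[0]  # the first output is the loss when labels are passed
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

In practice the notebooks iterate over a DataLoader for several epochs; this single step only illustrates how the tokenizer, model, and optimizer are wired together.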