multitask-learning-transformers

A simple recipe for training and running inference with Transformer architectures for multi-task learning on custom datasets. You can find two approaches for achieving this in this repo.

Colab Notebook

Colab Notebook

Trained Hugging Face Model

HF Model
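As a rough sketch, the published model can be loaded with the standard transformers API; the "<hub-repo-id>" placeholder below is hypothetical and stands for the repository id behind the link above.

# Minimal sketch of loading the published model from the Hugging Face Hub.
# "<hub-repo-id>" is a placeholder for the actual repository id linked above.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("<hub-repo-id>")
model = AutoModel.from_pretrained("<hub-repo-id>")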

Install dependencies

pip install -r requirements.txt

Run training

python3 main.py \
        --model_name_or_path='roberta-base' \
        --per_device_train_batch_size=8 \
        --output_dir=output --num_train_epochs=1

Single Encoder Multiple Output Heads

A multi-task model in the age of BERT works by having a shared BERT-style encoder transformer and a separate output head for each task, as sketched below.

mt1
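A minimal sketch of this setup (not the repo's exact code): one shared encoder feeds several task-specific heads. The task names and label counts are hypothetical, and the backbone is assumed to be roberta-base as in the training command above.

import torch.nn as nn
from transformers import AutoModel

class MultiHeadModel(nn.Module):
    def __init__(self, model_name="roberta-base", task_num_labels=None):
        super().__init__()
        # One shared BERT-style encoder, loaded once and reused by every task.
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden_size = self.encoder.config.hidden_size
        # One lightweight classification head per task,
        # e.g. {"sentiment": 2, "topic": 4} (hypothetical tasks).
        self.heads = nn.ModuleDict({
            task: nn.Linear(hidden_size, n_labels)
            for task, n_labels in (task_num_labels or {}).items()
        })

    def forward(self, task, input_ids, attention_mask=None):
        # Encode once, then route the first-token representation to the task's head.
        outputs = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls_repr = outputs.last_hidden_state[:, 0]
        return self.heads[task](cls_repr)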

Shared Encoder

Separate models are built for each task, but they all share the same encoder, as sketched below.

mt2
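A minimal sketch of this idea, assuming the roberta-base backbone used above (whose encoder sits on the .roberta attribute) and two hypothetical tasks: each task gets its own full model, but every model is pointed at the same encoder instance, so the encoder weights are updated by all tasks.

from transformers import AutoModelForSequenceClassification

model_name = "roberta-base"
task_labels = {"task_a": 2, "task_b": 3}  # hypothetical tasks and label counts

task_models = {}
shared_encoder = None
for task, n_labels in task_labels.items():
    model = AutoModelForSequenceClassification.from_pretrained(
        model_name, num_labels=n_labels
    )
    if shared_encoder is None:
        shared_encoder = model.roberta  # keep the first model's encoder
    else:
        model.roberta = shared_encoder  # later models reuse the same encoder instance
    task_models[task] = model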

References: Multi-task Training with Transformers+NLP
