
Deprecate models #24787

Merged (16 commits) on Jul 13, 2023
9 changes: 9 additions & 0 deletions docs/source/en/model_doc/bort.md
@@ -16,6 +16,15 @@ rendered properly in your Markdown viewer.

# BORT

<Tip warning={true}>

This model is in maintenance mode only, so we won't accept any new PRs changing its code.

If you run into any issues running this model, please reinstall the last version that supported it: v4.30.0.
You can do so by running the following command: `pip install -U transformers==4.30.0`.

</Tip>
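Since the tip above pins the last supported release, a quick check along these lines can confirm the installed version before loading the model. This is a minimal sketch, not part of transformers itself: `LAST_SUPPORTED`, `parse_version`, and `bort_supported` are hypothetical helpers written for illustration.

```python
from importlib.metadata import version, PackageNotFoundError

# Last transformers release that shipped BORT, per the tip above.
LAST_SUPPORTED = (4, 30, 0)

def parse_version(v: str) -> tuple:
    """Parse a 'major.minor.patch' string into a comparable tuple of ints."""
    return tuple(int(part) for part in v.split(".")[:3] if part.isdigit())

def bort_supported(installed: str) -> bool:
    """Return True if the installed transformers version still ships BORT."""
    return parse_version(installed) <= LAST_SUPPORTED

try:
    current = version("transformers")
    if not bort_supported(current):
        print(f"transformers {current} no longer supports BORT; "
              "run `pip install -U transformers==4.30.0`.")
except PackageNotFoundError:
    print("transformers is not installed.")
```

The same check applies to every model deprecated in this PR; only the pinned version and the model name change.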

## Overview

The BORT model was proposed in [Optimal Subarchitecture Extraction for BERT](https://arxiv.org/abs/2010.10499) by
9 changes: 9 additions & 0 deletions docs/source/en/model_doc/mctct.md
@@ -16,6 +16,15 @@ rendered properly in your Markdown viewer.

# M-CTC-T

<Tip warning={true}>

This model is in maintenance mode only, so we won't accept any new PRs changing its code.

If you run into any issues running this model, please reinstall the last version that supported it: v4.30.0.
You can do so by running the following command: `pip install -U transformers==4.30.0`.

</Tip>

## Overview

The M-CTC-T model was proposed in [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert. The model is a 1B-param transformer encoder, with a CTC head over 8065 character labels and a language identification head over 60 language ID labels. It is first trained on Common Voice (version 6.1, December 2020 release) and VoxPopuli together, then trained further on Common Voice only. The labels are unnormalized character-level transcripts (punctuation and capitalization are not removed). The model takes as input Mel filterbank features from a 16 kHz audio signal.
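To illustrate how a CTC head's per-frame label predictions are turned into text, here is a sketch of greedy CTC decoding: collapse consecutive repeats, then drop the blank token. This is not the model's actual decoder, and the toy vocabulary is made up (the real model has 8065 character labels).

```python
def ctc_greedy_decode(label_ids, blank_id=0):
    """Standard greedy CTC decoding: collapse consecutive repeats, then drop blanks."""
    collapsed = []
    prev = None
    for i in label_ids:
        if i != prev:
            collapsed.append(i)
        prev = i
    return [i for i in collapsed if i != blank_id]

# Toy vocabulary for illustration only.
vocab = {1: "h", 2: "e", 3: "l", 4: "o"}
ids = [1, 1, 0, 2, 0, 3, 3, 0, 3, 4, 4]  # per-frame argmax ids, 0 = blank
print("".join(vocab[i] for i in ctc_greedy_decode(ids)))  # prints "hello"
```

Note that the blank between the two `l` runs is what lets the decoder keep a genuine doubled letter while still collapsing repeated frames of the same character.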
9 changes: 9 additions & 0 deletions docs/source/en/model_doc/retribert.md
@@ -16,6 +16,15 @@ rendered properly in your Markdown viewer.

# RetriBERT

<Tip warning={true}>

This model is in maintenance mode only, so we won't accept any new PRs changing its code.

If you run into any issues running this model, please reinstall the last version that supported it: v4.30.0.
You can do so by running the following command: `pip install -U transformers==4.30.0`.

</Tip>

## Overview

The RetriBERT model was proposed in the blog post [Explain Anything Like I'm Five: A Model for Open Domain Long Form
9 changes: 9 additions & 0 deletions docs/source/en/model_doc/tapex.md
@@ -16,6 +16,15 @@ rendered properly in your Markdown viewer.

# TAPEX

<Tip warning={true}>

This model is in maintenance mode only, so we won't accept any new PRs changing its code.

If you run into any issues running this model, please reinstall the last version that supported it: v4.30.0.
You can do so by running the following command: `pip install -U transformers==4.30.0`.

</Tip>

## Overview

The TAPEX model was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu,
9 changes: 9 additions & 0 deletions docs/source/en/model_doc/trajectory_transformer.md
@@ -16,6 +16,15 @@ rendered properly in your Markdown viewer.

# Trajectory Transformer

<Tip warning={true}>

This model is in maintenance mode only, so we won't accept any new PRs changing its code.

If you run into any issues running this model, please reinstall the last version that supported it: v4.30.0.
You can do so by running the following command: `pip install -U transformers==4.30.0`.

</Tip>

## Overview

The Trajectory Transformer model was proposed in [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine.
9 changes: 9 additions & 0 deletions docs/source/en/model_doc/van.md
@@ -16,6 +16,15 @@ rendered properly in your Markdown viewer.

# VAN

<Tip warning={true}>

This model is in maintenance mode only, so we won't accept any new PRs changing its code.

If you run into any issues running this model, please reinstall the last version that supported it: v4.30.0.
You can do so by running the following command: `pip install -U transformers==4.30.0`.

</Tip>

## Overview

The VAN model was proposed in [Visual Attention Network](https://arxiv.org/abs/2202.09741) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu.