add centralized data preparation for OWSM #5478

Merged Dec 5, 2023 (31 commits; viewing changes from 1 commit)
3be1f40
add whisper data.sh for v1 and v2
jctian98 Oct 17, 2023
37ab173
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Oct 17, 2023
92bf631
add OWSM v3 data recipe
jctian98 Oct 17, 2023
ac8e423
Merge commit 'FETCH_HEAD' into owsm_data
jctian98 Oct 17, 2023
a3c24bd
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Oct 17, 2023
063dc3f
fix ci issues
jctian98 Oct 18, 2023
5e14a62
update with ci issues
jctian98 Oct 18, 2023
7b707cd
change egs name from mixed_v* to owsm_v*
jctian98 Oct 23, 2023
14204e2
v3 shuold be ready except wsj
jctian98 Oct 30, 2023
ae05a6c
add wsj
jctian98 Oct 30, 2023
c515f76
update db.sh
jctian98 Oct 30, 2023
b53ce47
Merge branch 'master' into owsm_data
jctian98 Oct 30, 2023
ec109e2
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Oct 30, 2023
31ad173
almost finish all scripts
jctian98 Nov 10, 2023
8a09625
fix small problems
jctian98 Nov 10, 2023
952acf6
Merge commit 'FETCH_HEAD' into owsm_data
jctian98 Nov 10, 2023
2fd2668
merge master
jctian98 Nov 10, 2023
c53afd0
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Nov 10, 2023
bdaf344
update the langauge mapping
jctian98 Nov 11, 2023
d379fd0
Merge commit 'FETCH_HEAD' into owsm_data
jctian98 Nov 11, 2023
b2cb427
Merge branch 'master' into owsm_data
jctian98 Nov 11, 2023
f5e5414
Merge commit 'FETCH_HEAD' into owsm_data
jctian98 Nov 11, 2023
51e3691
fix CI issue
jctian98 Nov 11, 2023
7f75d15
Merge commit 'FETCH_HEAD' into owsm_data
jctian98 Nov 11, 2023
66176bc
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Nov 11, 2023
3d89d78
update wsj and commonvoice
jctian98 Nov 26, 2023
8f1e0fa
Merge commit 'FETCH_HEAD' into owsm_data
jctian98 Nov 26, 2023
77fe14b
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Nov 26, 2023
c391765
update wsj text norm script
jctian98 Nov 26, 2023
642fd22
update wsj text norm 2
jctian98 Nov 26, 2023
ee00c6c
revise voxpopuli
jctian98 Nov 29, 2023
Merge branch 'master' into owsm_data
jctian98 authored Oct 30, 2023
commit b53ce473f83791ef4c5421571aa051bc8ee7a5ac
1 change: 0 additions & 1 deletion .mergify.yml
Original file line number Diff line number Diff line change
@@ -12,7 +12,6 @@ pull_request_rules:
- "check-success=unit_test_espnet2_and_integration_test_espnet2 (ubuntu-latest, 3.7, 1.11.0, 6.0.0, false)"
- "check-success=unit_test_espnet2_and_integration_test_espnet2 (ubuntu-latest, 3.7, 1.12.1, 6.0.0, false)"
- "check-success=unit_test_espnet2_and_integration_test_espnet2 (ubuntu-latest, 3.7, 1.13.1, 6.0.0, false)"
- "check-success=unit_test_espnet2_and_integration_test_espnet2 (ubuntu-latest, 3.7, 2.0.1, 6.0.0, false)"
- "check-success=unit_test_espnet2_and_integration_test_espnet2 (ubuntu-latest, 3.8, 2.0.1, false, 6.0.0)"
- "check-success=unit_test_espnet2_and_integration_test_espnet2 (ubuntu-latest, 3.9, 2.0.1, false, 6.0.0)"
- "check-success=unit_test_espnet2_and_integration_test_espnet2 (ubuntu-latest, 3.10, 2.0.1, false, 6.0.0)"
4 changes: 2 additions & 2 deletions .pre-commit-config.yaml
@@ -2,7 +2,7 @@
# See https://pre-commit.com/hooks.html for more hooks
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
rev: v4.5.0
hooks:
- id: trailing-whitespace
exclude: ^(egs2/TEMPLATE/asr1/utils|egs2/TEMPLATE/asr1/steps|egs2/TEMPLATE/tts1/sid|tools/installers/patch_mwerSegmenter)
@@ -14,7 +14,7 @@ repos:
exclude: ^(egs2/TEMPLATE/asr1/utils|egs2/TEMPLATE/asr1/steps|egs2/TEMPLATE/tts1/sid|tools/installers/patch_mwerSegmenter)

- repo: https://github.com/psf/black
rev: 23.9.1
rev: 23.10.1
hooks:
- id: black
exclude: ^(egs2/TEMPLATE/asr1/utils|egs2/TEMPLATE/asr1/steps|egs2/TEMPLATE/tts1/sid|doc)
10 changes: 10 additions & 0 deletions ci/test_configuration_espnet2.sh
@@ -31,6 +31,16 @@ if python3 -c 'import torch as t; from packaging.version import parse as L; asse
continue
fi
fi
if [ "$f" == "egs2/stop/asr1/conf/train_asr_whisper_full_correct.yaml" ]; then
if ! python3 -c "import whisper" > /dev/null; then
continue
fi
fi
if [ "$f" == "egs2/uslu14/asr1/conf/train_asr_whisper_full_correct_specaug.yaml" ]; then
if ! python3 -c "import whisper" > /dev/null; then
continue
fi
fi
${python} -m espnet2.bin.asr_train --config "${f}" --iterator_type none --dry_run true --output_dir out --token_list dummy_token_list
done

6 changes: 6 additions & 0 deletions ci/test_integration_espnet2.sh
@@ -202,8 +202,14 @@ if python -c 'import torch as t; from packaging.version import parse as L; asser
for t in ${feats_types}; do
echo "==== feats_type=${t} ==="
./run.sh --ngpu 0 --stage 1 --stop-stage 10 --skip-upload false --feats-type "${t}" --ref-num 1 --python "${python}" --enh-args "--num_workers 0"
./run.sh --ngpu 0 --stage 3 --stop-stage 6 --skip-upload false --feats-type "${t}" --ref-num 1 --python "${python}" \
--train_set train_nodev_unk_nspk --valid_set test_unk_nspk --test_sets "train_dev_unk_nspk" \
--enh_config ./conf/train_variable_nspk_debug.yaml --enh-args "--num_workers 0" --variable_num_refs true
./run.sh --ngpu 0 --stage 1 --stop-stage 10 --skip-upload false --feats-type "${t}" --ref-num 1 --python "${python}" \
--local_data_opts "--random-enrollment true" --enh_config ./conf/train_random_enrollment_debug.yaml --enh-args "--num_workers 0"
./run.sh --ngpu 0 --stage 3 --stop-stage 6 --skip-upload false --feats-type "${t}" --ref-num 1 --python "${python}" \
--train_set train_nodev_unk_nspk --valid_set test_unk_nspk --test_sets "train_dev_unk_nspk" \
--enh_config ./conf/train_variable_nspk_random_enrollment_debug.yaml --enh-args "--num_workers 0" --variable_num_refs true
done
# Remove generated files in order to reduce the disk usage
rm -rf exp dump data
10 changes: 10 additions & 0 deletions egs2/README.md

Large diffs are not rendered by default.

208 changes: 208 additions & 0 deletions egs2/TEMPLATE/asr1/README_multitask.md
@@ -0,0 +1,208 @@
# ESPnet2 ASR1 Multi-tasking Recipe TEMPLATE

This is a template of ASR1 Multi-tasking recipe for ESPnet2.
This README provides comprehensive instructions on how to enhance ASR1 for prompt-based multi-task learning.

## Table of Contents

* [ESPnet2 ASR1 Multi-tasking Recipe TEMPLATE](#espnet2-asr1-multi-tasking-recipe-template)
* [Table of Contents](#table-of-contents)
* [Recipe flow](#recipe-flow)
* [1\. Data preparation](#1-data-preparation)
* [2\. Speed perturbation](#2-speed-perturbation)
* [3\. Generate dump folder](#3-generate-dump-folder)
* [4\. Removal of long / short data](#4-removal-of-long--short-data)
* [5\. Input / Output Token list generation](#5-input--output-token-list-generation)
* [6\. LM statistics collection](#6-lm-statistics-collection)
* [7\. LM training](#7-lm-training)
* [8\. LM perplexity](#8-lm-perplexity)
* [9\. Ngram-LM training](#9-n-gram-lm-training)
* [10\. ASR statistics collection](#10-asr-statistics-collection)
* [11\. ASR training](#11-asr-training)
* [12\. ASR inference](#12-asr-inference)
* [13\. ASR scoring](#13-asr-scoring)
* [14\-16\. (Optional) Pack results for upload](#14-16-optional-pack-results-for-upload)
* [How to run](#how-to-run)
* [SLU Multi-task training](#slu-multi-task-training)
* [Related works](#related-works)

## Recipe flow

The ASR1 multi-tasking recipe consists of 16 stages.

### 1. Data preparation

Data preparation stage.

#### ESPnet format:

It calls `local/data.sh` to create Kaldi-style data directories in `data/` for the training, validation, and evaluation sets. In addition to the files in the `asr1` recipe, it generates an additional file called `prompt` that specifies the task to be performed for each utterance.

- `prompt` format
```
uttidA <prompt>
uttidB <prompt>
...
```

See also:
- [About Kaldi-style data directory](https://github.com/espnet/espnet/tree/master/egs2/TEMPLATE#about-kaldi-style-data-directory)
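As an illustration, a `prompt` file can be written from a mapping of utterance IDs to prompts with a few lines of Python (a minimal sketch; the utterance IDs and prompt tokens below are hypothetical, not fixed by the recipe):

```python
import os
import tempfile

def write_prompt_file(utt2prompt, path):
    # Kaldi-style files hold one "uttid <prompt>" entry per line, sorted by utterance ID.
    with open(path, "w", encoding="utf-8") as f:
        for uttid in sorted(utt2prompt):
            f.write(f"{uttid} {utt2prompt[uttid]}\n")

# Hypothetical utterances and task specifiers, for illustration only.
utt2prompt = {"uttidA": "<|transcribe|>", "uttidB": "<|translate|>"}
path = os.path.join(tempfile.gettempdir(), "prompt")
write_prompt_file(utt2prompt, path)
```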

### 2. Speed perturbation

Augment training data with speed perturbation. `data/${train_set}_spXX` will be generated (`XX` is the speed factor). This step is optional.

### 3. Generate dump folder

Dumping stage.
This stage moves the files required for training from the `data` folder to the `dump` folder.

### 4. Removal of long / short data

This stage is the same as that in ASR recipes. At this stage, the dump directories for all datasets on which multi-tasking is to be performed are merged by simple concatenation.

### 5. Input / Output Token list generation

Token list (BPE / Char / etc) generation for both input and targets. Additionally, for Whisper tokenization, you have the option to incorporate special tokens into the Whisper vocabulary using the `--nlsyms_txt` flag. If you are utilizing task specifiers for prompt-based multi-tasking, similar to the original Whisper formulation, it is necessary to include these task specifiers in the Whisper vocabulary.
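As a sketch of what incorporating special tokens amounts to, appending entries from an `--nlsyms_txt`-style list to an existing vocabulary could look like the following (a simplified illustration, not the actual `whisper_export_vocabulary` implementation; the token names are hypothetical):

```python
def extend_vocabulary(base_tokens, special_tokens):
    """Append special tokens that are not already in the base vocabulary."""
    extended = list(base_tokens)
    for tok in special_tokens:
        if tok not in extended:
            extended.append(tok)  # new task specifiers go after the base tokens
    return extended

base = ["<|startoftranscript|>", "<|en|>", "<|transcribe|>"]
specials = ["<|er|>", "<|scr|>", "<|transcribe|>"]  # hypothetical task specifiers
vocab = extend_vocabulary(base, specials)  # "<|transcribe|>" is not duplicated
```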

### 6. LM statistics collection

Statistics calculation stage.
A neural-network (NN) based language model (LM) is optional for the ASR task; you can skip stages 6-9 by setting `--use_lm false`.
It collects the shape information of the LM texts and calculates statistics for LM training.

### 7. LM training

NN-based LM model training stage.
You can change the training setting via `--lm_config` and `--lm_args` options.

See also:
- [Supported models](#supported-models).
- [Change the configuration for training](https://espnet.github.io/espnet/espnet2_training_option.html)
- [Distributed training](https://espnet.github.io/espnet/espnet2_distributed.html)

### 8. LM perplexity

NN-based LM evaluation stage. Perplexity (PPL) is computed with the trained model.

See also:
- [Change the configuration for training](https://espnet.github.io/espnet/espnet2_training_option.html)

### 9. N-gram LM training

N-gram-based LM model training stage.


### 10. ASR statistics collection

Statistics calculation stage.
It collects the shape information of input and output texts for ASR training.
#### Prompt based multi-tasking

- Instructions:
  1. To enable prompt-based multi-task learning across multiple tasks in English, ensure that `--use_prompt` is set to true. By default, this setting replaces the task specifier in the Whisper formulation with the one specified in the prompt file. Please refer to stage 5 for instructions on adding task specifiers to the Whisper vocabulary.
  2. To perform prompt-based multi-task learning across multiple tasks in multiple languages, additionally set `--use_lang_prompt` to true. This replaces both the language and task specifiers in the Whisper formulation with those specified in the prompt file and can also introduce a new dataset specifier. Make sure that the task, dataset, and language specifiers are all included in the Whisper vocabulary for this option to work.
3. (Optional) To use natural language phrases for prompt-based multi-tasking, set `--use_nlp_prompt` to true. In this case, you do not need to make any modifications to the Whisper vocabulary.
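To make the three options concrete, a single utterance's `prompt` entry could look as follows in each mode (all specifier names here are hypothetical, chosen only to illustrate the format):

```python
# Hypothetical prompt-file entries for one utterance under the three modes.
prompt_entries = {
    "use_prompt": "uttidA <|er|>",                          # task specifier only
    "use_lang_prompt": "uttidA <|iemocap|> <|en|> <|er|>",  # dataset, language, task
    "use_nlp_prompt": "uttidA classify the emotion of the speaker",  # free-form phrase
}
```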

### 11. ASR training

ASR model training stage.
You can change the training settings via the `--asr_config` and `--asr_args` options. To perform prompt-based multi-task learning, follow steps similar to those described in stage 10.

See also:
- [Supported models](#supported-models).
- [Change the configuration for training](https://espnet.github.io/espnet/espnet2_training_option.html)
- [Distributed training](https://espnet.github.io/espnet/espnet2_distributed.html)

### 12. ASR inference

ASR inference stage.

#### Prompt based multi-tasking

- Instructions:
  1. If you have incorporated any special tokens into the Whisper vocabulary, make sure to specify the file containing these special tokens as `prompt_token_file` in the decoder config.
  2. If you are utilizing task, language, and dataset specifiers, specify them as `lang_prompt_token` in the decoder config.
  3. If you are employing a natural language phrase as a prompt, specify the phrase as `nlp_prompt_token` in the decoder config.
  4. To perform language identification and voice activity detection, we follow Whisper's pre-training setup, where the ``language id`` and ``no speech`` tags are predicted immediately after the start-of-transcript tag. Hence, for these tasks, set ``lid prompt`` to true.

### 13. ASR scoring

ASR scoring stage: error rates (char / word / token) are computed.

### 14-16. (Optional) Pack results for upload

Packing stage.
It packs the trained model files and uploads them to [Zenodo](https://zenodo.org/) (Zenodo upload will be deprecated).
If you want to run this stage, you need to register an account on Zenodo.

See also:
- [ESPnet Model Zoo](https://github.com/espnet/espnet_model_zoo)

#### Stage 16-18: Upload model

Upload the trained model to Hugging Face for sharing. Additional information at [Docs](https://espnet.github.io/espnet/espnet2_tutorial.html#packing-and-sharing-your-trained-model).

## How to run

### SLU Multi-task training
Here, we show the procedure for running multi-task learning across 14 speech classification tasks.


Create a dump directory using the following recipes: ``asvspoof, speechcommands, grabo, lt_speech_commands, arabic_sc, fsc, voxforge/lid1, iemocap, accentdb, mustard, mustard_plus_plus, voxceleb1, freesound, and esc50``. You can do this by running the following command in each of these recipes:
```sh
$ ./run.sh --stop_stage 4
```
Note: Download all the dataset zip files before creating the dump directories. Please refer to ``https://github.com/ga642381/SpeechPrompt-v2/blob/main/docs/dataset.md`` for instructions on downloading all datasets.


Move to the `egs2/uslu14/asr1` recipe directory. Generate the `prompt` file by running
```sh
$ python local/create_*_prompt.py
```


Concatenate ``wav.scp, prompt, text, utt2spk, spk2utt, utt2num_samples`` from all train and valid dump folders and create two new directories, ``dump/raw/train_combined`` and ``dump/raw/valid``, to hold the combined data. Start training using:
```sh
$ ./run.sh --stage 5 --stop_stage 11
```


Run decoding for each of the datasets, i.e., ``test_<dataset>``, with the corresponding inference config, e.g., ``conf/decode_asr_<task>.yaml``:
```sh
$ ./run.sh --stage 12 --stop_stage 12 --inference_config conf/decode_asr_<task>.yaml --test_sets test_<dataset>
```


For some tasks, you need to clean the prediction files using ``python local/clean_emotion_pred.py``, ``python local/check_lid_results.py``, or ``python local/check_vad_results.py``. To get accuracy, run
```sh
$ ./run.sh --stage 13 --stop_stage 13 --inference_config conf/decode_asr_<task>.yaml --test_sets test_<dataset>
```
For tasks that require F1 or weighted F1, run ``python local/compute_f1.py`` or ``python local/compute_weighted_f1.py``.
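For reference, weighted F1 over parallel lists of reference and predicted labels reduces to per-class F1 averaged by class support. A minimal self-contained sketch (not the actual `local/compute_weighted_f1.py`):

```python
from collections import Counter

def f1_per_class(refs, hyps):
    """Per-class F1 from parallel lists of reference and predicted labels."""
    scores = {}
    for lab in sorted(set(refs) | set(hyps)):
        tp = sum(r == lab and h == lab for r, h in zip(refs, hyps))
        fp = sum(r != lab and h == lab for r, h in zip(refs, hyps))
        fn = sum(r == lab and h != lab for r, h in zip(refs, hyps))
        denom = 2 * tp + fp + fn
        scores[lab] = 2 * tp / denom if denom else 0.0
    return scores

def weighted_f1(refs, hyps):
    """Average per-class F1, weighted by each class's reference count."""
    support = Counter(refs)
    scores = f1_per_class(refs, hyps)
    return sum(scores[lab] * n for lab, n in support.items()) / len(refs)
```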


## Related works
```

@misc{arora2023universlu,
title={UniverSLU: Universal Spoken Language Understanding for Diverse Classification and Sequence Generation Tasks with a Single Network},
author={Siddhant Arora and Hayato Futami and Jee-weon Jung and Yifan Peng and Roshan Sharma and Yosuke Kashiwagi and Emiru Tsunoo and Shinji Watanabe},
year={2023},
eprint={2310.02973},
archivePrefix={arXiv},
primaryClass={cs.CL}
}

@InProceedings{pmlr-v202-radford23a,
title = {Robust Speech Recognition via Large-Scale Weak Supervision},
author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and Mcleavey, Christine and Sutskever, Ilya},
booktitle = {Proceedings of the 40th International Conference on Machine Learning},
pages = {28492--28518},
year = {2023},
editor = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
volume = {202},
series = {Proceedings of Machine Learning Research},
month = {23--29 Jul},
publisher = {PMLR},
}
```
28 changes: 26 additions & 2 deletions egs2/TEMPLATE/asr1/asr.sh
@@ -89,6 +89,9 @@ use_word_lm=false # Whether to use word language model.
num_splits_lm=1 # Number of splitting for lm corpus.
# shellcheck disable=SC2034
word_vocab_size=10000 # Size of word vocabulary.
use_prompt=false # Use prompt ids for multi tasking
use_lang_prompt=false # Use language prompt ids for multi lingual multi tasking
use_nlp_prompt=false # Use text prompt ids for multi lingual multi tasking

# ASR model related
asr_task=asr # ASR task mode. Either 'asr' or 'asr_transducer'.
@@ -218,6 +221,9 @@ Options:
# e.g., --lm_args "--max_epoch 10"
# Note that it will overwrite args in lm config.
--use_word_lm # Whether to use word language model (default="${use_word_lm}").
--use_prompt # Whether to use prompt for multi tasking (default="${use_prompt}").
--use_lang_prompt # Whether to use language prompt for multi tasking (default="${use_lang_prompt}").
--use_nlp_prompt # Whether to use nlp prompt for multi tasking (default="${use_nlp_prompt}").
--word_vocab_size # Size of word vocabulary (default="${word_vocab_size}").
--num_splits_lm # Number of splitting for lm corpus (default="${num_splits_lm}").

@@ -953,6 +959,7 @@ if [ ${stage} -le 5 ] && [ ${stop_stage} -ge 5 ] && ! [[ " ${skip_stages} " =~ [
echo ${token_list}
${python} -m espnet2.bin.whisper_export_vocabulary \
--whisper_model "${token_type}" \
--add_token_file_name "${nlsyms_txt}" \
--whisper_language "${lang}" \
--whisper_task "transcribe" \
--sot_asr "${sot_asr}" \
@@ -1247,6 +1254,12 @@ if [ ${stage} -le 10 ] && [ ${stop_stage} -ge 10 ] && ! [[ " ${skip_stages} " =~
_opts+="--train_data_path_and_name_and_type ${_asr_train_dir}/${ref_text_files[$i]},${ref_text_names[$i]},text "
_opts+="--valid_data_path_and_name_and_type ${_asr_valid_dir}/${ref_text_files[$i]},${ref_text_names[$i]},text "
done
if ${use_prompt}; then
_opts+="--train_data_path_and_name_and_type ${_asr_train_dir}/prompt,prompt,text "
_opts+="--valid_data_path_and_name_and_type ${_asr_valid_dir}/prompt,prompt,text "
_opts+="--use_lang_prompt ${use_lang_prompt} "
_opts+="--use_nlp_prompt ${use_nlp_prompt} "
fi

# shellcheck disable=SC2046,SC2086
${train_cmd} JOB=1:"${_nj}" "${_logdir}"/stats.JOB.log \
@@ -1384,7 +1397,14 @@ if [ ${stage} -le 11 ] && [ ${stop_stage} -ge 11 ] && ! [[ " ${skip_stages} " =~
_opts+="--valid_data_path_and_name_and_type ${_asr_valid_dir}/${ref_text_files[$i]},${ref_text_names[$i]},text "
_opts+="--valid_shape_file ${asr_stats_dir}/valid/${ref_text_names[$i]}_shape.${token_type} "
done

if ${use_prompt}; then
_opts+="--train_data_path_and_name_and_type ${_asr_train_dir}/prompt,prompt,text "
_opts+="--train_shape_file ${asr_stats_dir}/train/prompt_shape "
_opts+="--valid_data_path_and_name_and_type ${_asr_valid_dir}/prompt,prompt,text "
_opts+="--valid_shape_file ${asr_stats_dir}/valid/prompt_shape "
_opts+="--use_lang_prompt ${use_lang_prompt} "
_opts+="--use_nlp_prompt ${use_nlp_prompt} "
fi
log "Generate '${asr_exp}/run.sh'. You can resume the process from stage 11 using this script"
mkdir -p "${asr_exp}"; echo "${run_args} --stage 11 \"\$@\"; exit \$?" > "${asr_exp}/run.sh"; chmod +x "${asr_exp}/run.sh"

@@ -1621,7 +1641,11 @@ if [ ${stage} -le 13 ] && [ ${stop_stage} -ge 13 ] && ! [[ " ${skip_stages} " =~
if [ "${_tok_type}" = "char" ] || [ "${_tok_type}" = "word" ]; then
_type="${_tok_type:0:1}er"
_opts+="--non_linguistic_symbols ${nlsyms_txt} "
_opts+="--remove_non_linguistic_symbols true "
if grep -q "whisper" <<< ${token_type}; then
log "Non linguistic_symbols used for prompting"
else
_opts+="--remove_non_linguistic_symbols true "
fi

elif [ "${_tok_type}" = "bpe" ]; then
_type="ter"
10 changes: 10 additions & 0 deletions egs2/TEMPLATE/asr1/db.sh
@@ -2,6 +2,7 @@
# "downloads" means the corpus can be downloaded by the recipe automatically

ACCENTED_FR=downloads
ACCENT_DB=
ACESINGER=downloads
AIDATATANG_200ZH=downloads
AISHELL=downloads
@@ -13,7 +14,9 @@ AMERICASNLP22=downloads
AN4=downloads
ASVTutorial=espnet_tutorial_asvspoof
APHASIABANK=
AR_SC=
AUDIOSET=
ASVSpoof_CMD=
BIBLETTS=downloads
COVOST2=
DIRHA_ENGLISH_PHDEV=
@@ -24,6 +27,7 @@ DNS2=
DNS3=
DNS4=downloads
DSING=downloads
ESC50=
WSJ0=
WSJ1=
WSJCAM0=
@@ -57,6 +61,7 @@ TEDXJP=
LIBRISPEECH=downloads
LIBRILIGHT_LIMITED=
FSC=
FREESOUND=
MELD=downloads
SLURP=
SLURP_S= # Output file path
@@ -72,9 +77,12 @@ LIBRIMIX=downloads
LIBRITTS=downloads
LIBRITTS_R=downloads
LJSPEECH=downloads
LT_SPEECH_CMD=
MUSAN=
MUSDB18=downloads
MUST_C=downloads
MUSTARD=
MUSTARD_PLUS=
NSC=
NIT_SONG070=
JMD=downloads
@@ -87,6 +95,8 @@ KSS=
QASR_TTS=downloads
SNIPS= # smart-light-en-closed-field data path
SPGISPEECH=
SPEECH_PROMPT_v2=
STOP=
SWBD=
FISHER_CALLHOME_SPANISH=
SWBD_NXT=
File renamed without changes.