
Incorrect check on concat original start index #2717

Merged

Conversation

gfdb
Contributor

@gfdb gfdb commented Oct 10, 2024

What does this PR do?

In the `Augmenter` class, we allow users to concatenate the original signals (the input batch) after augmentation. Furthermore, they can select a subset of the original batch to concatenate via `concat_start_index` and `concat_end_index`. This PR fixes a bug in the concatenation validation logic, where we check whether `concat_start_index` exceeds the size of the input batch before performing the concatenation. Instead of checking the batch size of the original signal, `x_original.shape[0]`, we are currently checking the batch size of the subset of the input that the user has selected for augmentation, `x.shape[0]`, which was changed by `x = x[self.augment_start_index : self.augment_end_index_batch]`. As a result, the concatenation of the original signal is silently skipped whenever the batch dimension of the slice selected for augmentation is less than or equal to `concat_start_index`. These two quantities have nothing to do with each other, so this is a mistake.
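The bug can be sketched in isolation. The following is a minimal toy reproduction (plain Python lists stand in for tensor batches; the function and variable names are hypothetical, not SpeechBrain's actual code): the buggy variant validates `concat_start_index` against the augmentation slice, the fixed variant validates it against the original batch, as this PR does.

```python
def concat_buggy(x_original, concat_start_index, concat_end_index,
                 augment_start_index, augment_end_index):
    # Slice of the batch selected for augmentation (shrinks the batch dim).
    x = x_original[augment_start_index:augment_end_index]
    # Bug: compares against the slice's batch size (x.shape[0] in the
    # real code), so the concat is skipped whenever the slice is small.
    if concat_start_index >= len(x):
        return x
    return x_original[concat_start_index:concat_end_index] + x

def concat_fixed(x_original, concat_start_index, concat_end_index,
                 augment_start_index, augment_end_index):
    x = x_original[augment_start_index:augment_end_index]
    # Fix: compare against the original batch size (x_original.shape[0]).
    if concat_start_index >= len(x_original):
        return x
    return x_original[concat_start_index:concat_end_index] + x

batch = ["sig0", "sig1", "sig2", "sig3"]
# Augment the first two signals, then ask to concatenate originals 2..4.
# Buggy check: concat_start_index (2) >= slice size (2), concat skipped.
# Fixed check: 2 < 4, so the selected originals are concatenated.
print(concat_buggy(batch, 2, 4, 0, 2))   # only the augmented slice
print(concat_fixed(batch, 2, 4, 0, 2))   # originals 2..4 + augmented slice
```

In the toy run above, the buggy path drops `sig2` and `sig3` entirely even though the user explicitly selected them for concatenation, which is exactly the silent skip described in the PR.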

Before submitting
  • Did you read the contributor guideline?
  • Did you make sure your PR does only one thing, instead of bundling different changes together?
  • Did you make sure to update the documentation with your changes? (if necessary)
  • Did you write any new necessary tests? (not for typos and docs)
  • Did you verify new and existing tests pass locally with your changes?
  • Did you list all the breaking changes introduced by this pull request?
  • Does your code adhere to project-specific code style and conventions?

PR review

Reviewer checklist
  • Is this pull request ready for review? (if not, please submit in draft mode)
  • Check that all items from Before submitting are resolved
  • Make sure the title is self-explanatory and the description concisely explains the PR
  • Add labels and milestones (and optionally projects) to the PR so it can be classified
  • Confirm that the changes adhere to compatibility requirements (e.g., Python version, platform)
  • Review the self-review checklist to ensure the code is ready for review

@mravanelli mravanelli self-requested a review October 11, 2024 12:47
@mravanelli mravanelli added the bug Something isn't working label Oct 11, 2024
@mravanelli
Collaborator

Hi @gfdb, thank you for the fix.

@mravanelli mravanelli merged commit 0d77a46 into speechbrain:develop Oct 11, 2024
5 checks passed
naspert added a commit to naspert/speechbrain that referenced this pull request Oct 29, 2024
* data prep scripts update

* iterate over utterances

* without parallel map

* parallel map -> so fast omfg

* gigaspeech data prep done

* speechcolab extra dep if one must download gigaspeech

* create ASR CTC folder

* base yaml + update data prep to better reflect potential different naming for csvs

* update recipe

* update recipe to be compliant with gigaspeech csv

* add transformers dep

* convert opus to wav

* recipe --debug mode works.

* typo GRABAGE_UTTERANCE_TAGS -> GARBAGE_UTTERANCE_TAGS

* tmp DL file

* update DL FILE

* add DL file in ASR/CTC

* update extra_requirements.txt

* add support of savedir within Pretrained subclasses

* add wbs requirements

* webdataset

* remove print

* tmp files webdataset

* verbosity + metada.json

* letzo now label_encoder can actually train + the recipe seems to work.

* remove wbs

* DL info

* HF DL support

* remove webdataset as it sucks :p

* name

* ngram commands

* whisper baseline

* fix HF

* pre-commit + sentencepiece char

* remove csv

* Add quirks.py, move global PyTorch config and GPU workarounds there

* Add support for SB_DISABLE_QUIRKS environment variable

* Fetch rework: make savedir optional

* bunch of updates to make it run

* no download script

* fix precommit

* fix precommit

* readmes

* readmes

* readmes

* readmes

* doc update

* CI god not happy, make CI god happy

* why you here little encoder

* adding a tranduscer streaming recipe, because why not

* add test for transducer

* works better when me not stupid

* fix yaml

* update req

* add warning for cache dir

* add warning for cache dir

* enable multiprocessing

* Minor cleanups to fetching

* Change default behavior of inference to not create savedir if not specified

* allow data prep without ddp

* fix tests

* smoll readme update

* fix review comments

* fixed concat_start_index check (speechbrain#2717)

* Ensure adapted models save their parameters (speechbrain#2716)

Co-authored-by: Parcollet Titouan <parcollet.titouan@gmail.com>

* wtf

* update doc

* more documentation on storage

* missing arg

* a bit of logs

* new schedulers

* new schedulers

* Fixes speechbrain#2656: Remove EOS from SoundChoice

* fix my stupidity

* Update non-HF code path for new preprocessing code in GigaSpeech

* Fix CSV path for non-HF Gigaspeech

* Fix formatting

* Kmeans fix (speechbrain#2642)

* fix kmeans bug

* fix final batch

* fix chuncksize

* fix

* fix

* fix precommit

* fix doxstrin inconsistency

* fix precommit

* fix doc string

---------

Co-authored-by: Mirco Ravanelli <mirco.ravanelli@gmail.com>

* add call on start of fit_batch fn

* Update core.py

Fix old commit

* Update core.py

* Fix preprocess_text example

* Fix guess_source docstring with up-to-date info

* Also remove default savedir from Pretrained

* Fix function name for log_applied_quirks

* wip audiomnist+gt

* Revert "fix normalization for LFB"

This reverts commit 3fd0330.

* audiomnist classification setup

* fix config

* add missing file

* update dataset load/training

* remove unnecessary params

* remove sort

* remove unnecessary code

* fix paths

* fix loss computation

* add missing flatten

* print summary

* Explain quirks in docs/experiment.md

* ok stupid linter check that hates intentional leading spaces in markdown

* add citing in README

* add code to pad all wavs to the same length

* fix pad call

* fix error computation

* fix error computation

* Make `collect_in` optional for `Pretrainer`, disable it by default

* Change more defaults to `savedir=None` and `fetch_strategy=SYMLINK`

Since the SYMLINK strategy falls back to NO_LINK whenever `savedir is None`, it makes sense to switch more things to default to `savedir=None`.

Should the `savedir` explicitly be set by the user, past behavior is preserved (defaulting to symlinks).

* move flatten in audionet

* Fix GS transducer test prediction decoding?

* fix data prep logic and paths

* Actually fix GS transducer test prediction decoding

* Remove punctuation filtering that is handled elsewhere

* HuggingFance

* fix skip data prep logic

* add original audionet feature extraction

* fix pooling for audionet feature extraction

* fix audionet shape + remove input norm

* try data augmentation

* add missing refs

* - rework AudioNet to have optional pooling
- use official AudioMNIST train/test/valid splits

* fix typo in url

* update audionet hparams

* update audionet custom hparams

* update audionet custom hparams

* Updated warning for load_collected

* Add results and notices for results for GigaSpeech transducer & wavlm

* english hard

* update audionet custom hparams

* fix doc + pre-commit clean

* fix code examples

* fix consistency tests

* fix pre commit

* remove config

* fix docstring for LFB

* fix docstring for GammatoneConv1D

---------

Co-authored-by: Adel Moumen <adelmoumen.pro@gmail.com>
Co-authored-by: Adel Moumen <88119391+Adel-Moumen@users.noreply.github.com>
Co-authored-by: asu <sdelang@sdelang.fr>
Co-authored-by: TParcollet <parcollet.titouan@gmail.com>
Co-authored-by: Peter Plantinga <plantinga.peter@proton.me>
Co-authored-by: gianfranco <62777451+gfdb@users.noreply.github.com>
Co-authored-by: Peter Plantinga <plantinga.peter@protonmail.com>
Co-authored-by: Titouan Parcollet/Embedded AI /SRUK/Engineer/Samsung Electronics <t.parcollet@sruk-ccn4.eu.corp.samsungelectronics.net>
Co-authored-by: flexthink <flexthink@users.noreply.github.com>
Co-authored-by: Pooneh Mousavi <moosavi.pooneh@gmail.com>
Co-authored-by: Mirco Ravanelli <mirco.ravanelli@gmail.com>