Some questions about the loss function #123
Comments
As long as the spectrogram format matches what the WaveNet vocoder expects, the audio quality shouldn't be too bad; the problem is more likely in the autoencoder itself. What is your batch size?

My batch size is 2. I kept the hyperparameters from the source code without any changes: dim_neck=32, freq=32.

Which dataset are you using?

As in the paper, I used the VCTK corpus; I took the mic1 recordings and downsampled them to 16 kHz with Audition.

Then you could try tuning the bottleneck parameters.
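To make the suggestion above concrete, here is a small sketch of how the two knobs mentioned in the thread, dim_neck and freq, jointly set the size of the content bottleneck. The 2 * dim_neck factor reflects the bidirectional content encoder described in the AutoVC paper (forward and backward code halves); the helper function itself is hypothetical, not code from this repository.

```python
# dim_neck is the channel width of the content code; freq is the
# temporal downsampling factor applied to it. For a num_frames-long
# mel segment, the content code has roughly num_frames / freq time
# steps of 2 * dim_neck values (forward + backward encoder halves),
# so shrinking either knob tightens the bottleneck and forces more
# speaker information out of the content embedding.
def bottleneck_size(num_frames, dim_neck=32, freq=32):
    return (num_frames // freq) * 2 * dim_neck

# With the defaults from this thread, a 128-frame mel segment yields
# 4 code steps of 64 values each:
print(bottleneck_size(128))  # 256
```

If conversion sounds intelligible but carries no target-speaker style, a common direction is to reduce dim_neck or increase freq so the content code cannot also carry source-speaker identity.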
Hello, author!

While training I found that the loss drops quickly but the results are poor. Looking through earlier issues, I found that the problem is the input dimensions of the loss function: x_real has torch size [2, 128, 80], while x_identic_psnt has torch size [2, 1, 128, 80]. After removing the second dimension of x_identic_psnt before passing it to the loss function, the loss stops decreasing at around 0.0007, and the synthesized result is somewhat better than before (intelligible), but there is hardly any voice style transfer effect. Is there anything else I could modify? In an earlier issue I found an answer suggesting that the WaveNet vocoder may need to be retrained. Did you retrain the WaveNet yourself?
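The shape fix described above can be sketched as follows. This is a minimal reproduction of the mismatch, assuming the reconstruction loss is a mean-squared error over mel-spectrograms (as in AutoVC); the tensors here are random stand-ins, not the repository's actual variables.

```python
import torch
import torch.nn.functional as F

# Shapes reported in the issue: the decoder/postnet output carries an
# extra singleton dimension that the target mel-spectrogram lacks.
x_real = torch.randn(2, 128, 80)            # [batch, frames, mel bins]
x_identic_psnt = torch.randn(2, 1, 128, 80)  # [batch, 1, frames, mel bins]

# Squeezing out dimension 1 aligns the shapes before computing the loss.
# Without this, broadcasting silently expands both tensors to
# [2, 2, 128, 80], which distorts the loss value.
loss = F.mse_loss(x_identic_psnt.squeeze(1), x_real)
```

The squeeze matters because F.mse_loss does not error on broadcastable but unequal shapes; it quietly averages over the broadcast result, which is consistent with the "loss drops fast but results are poor" symptom.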