Hi 树哥, I swapped the backbone for ResNet-50 and found that only RMSprop gives even halfway-reasonable results; Adam and SGD don't seem to update properly at all. Any insight on this? Just a discussion, haha, not a real issue.
In my experiments, whenever the loss contains a term of the form $S_{ij}\Theta_{ij} - \log(1+e^{\Theta_{ij}})$, only RMSprop gives usable results in PyTorch; the other optimizers perform very poorly. That said, when I was reproducing the DPSH paper I found SGD could actually optimize it, but momentum had to be set to 0, and the initialization of the U and B variables (which store the CNN outputs and the optimized binary codes, respectively) mattered a lot. I forget the details, but I think one of them had to be zero-initialized and the other randn-initialized, possibly with a sign() applied, before SGD would work at all. Very mysterious. I also read the Matlab code the authors provide, and they optimize directly with plain SGD, so could it be something about the PyTorch framework? My theory isn't strong enough to analyze it, hahaha. These days, whenever I run into a loss of this form I just reach for RMSprop. = =
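One plausible culprit with this loss term, worth checking before blaming the optimizer: a literal translation of $\log(1+e^{\Theta_{ij}})$ overflows once $\Theta_{ij}$ gets large, which a ResNet-50 producing un-normalized similarities can easily cause. This is a minimal NumPy sketch (NumPy standing in for PyTorch; the function names are mine, not from the DPSH code) comparing the naive form with the standard stable softplus rewrite:

```python
import numpy as np

def softplus_naive(x):
    # Literal log(1 + e^x): e^x overflows to inf for x > ~709,
    # so the loss becomes inf and gradients become nan.
    return np.log(1.0 + np.exp(x))

def softplus_stable(x):
    # Equivalent rewrite log(1 + e^x) = max(x, 0) + log(1 + e^(-|x|));
    # the exponent is never positive, so nothing overflows.
    return np.maximum(x, 0.0) + np.log1p(np.exp(-np.abs(x)))

theta = np.array([-1000.0, 0.0, 1000.0])
with np.errstate(over="ignore"):
    print(softplus_naive(theta))   # inf shows up at the large theta
print(softplus_stable(theta))      # finite everywhere
```

In PyTorch itself, `F.softplus(theta)` (or `-F.logsigmoid(-theta)`) already does this rewrite internally, so swapping it in for a hand-written `torch.log(1 + torch.exp(theta))` is an easy experiment.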
I give up... The problem I'm hitting now is that with RMSprop the mAP improves normally for roughly the first 50 epochs, but after epoch 50 the mAP starts dropping even though the loss keeps going down, which is baffling. I still haven't managed to reproduce DPSH's reported performance. (Just realized this issue is filed under DSDH; what I'm running is DPSH.) From other methods I've tried before, my gut feeling is that the gradient of log() may be problematic in PyTorch, but I can't pin it down either.
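On the "gradient of log() might be problematic" hunch: the gradient itself can go bad even when the forward value looks merely large rather than broken. The derivative of $\log(1+e^{\Theta})$ is the sigmoid $\sigma(\Theta)$, but computed in the literal form it becomes $\infty/\infty = \text{nan}$ for large $\Theta$. A small NumPy sketch of this mechanism (my own illustration, not the repo's code):

```python
import numpy as np

def grad_naive(x):
    # d/dx log(1 + e^x) written out literally: e^x / (1 + e^x).
    # For large x both numerator and denominator overflow to inf,
    # and inf/inf evaluates to nan, silently poisoning the update.
    with np.errstate(over="ignore", invalid="ignore"):
        return np.exp(x) / (1.0 + np.exp(x))

def grad_stable(x):
    # Same derivative rewritten as sigmoid(x) = 1 / (1 + e^(-x)):
    # only a non-positive number is ever exponentiated, so the
    # result saturates cleanly at 1.0 instead of producing nan.
    return 1.0 / (1.0 + np.exp(-np.minimum(x, 0.0) * 0.0 - np.abs(np.minimum(x, x))))

def grad_stable_simple(x):
    # Clearer equivalent of the stable sigmoid for x of any sign.
    with np.errstate(over="ignore"):
        return np.where(x >= 0,
                        1.0 / (1.0 + np.exp(-x)),
                        np.exp(x) / (1.0 + np.exp(x)))

theta = np.array([1000.0])
print(grad_naive(theta))          # nan
print(grad_stable_simple(theta))  # 1.0
```

If the autograd path in the training code goes through a hand-rolled `log(1 + exp(...))`, a single pair with a large $\Theta_{ij}$ can inject nan into the whole batch gradient, which would look exactly like "Adam/SGD can't update". RMSprop's per-parameter scaling can mask milder versions of this, which might explain why it alone seems to work.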