I was able to replicate the results on the provided MNIST dataset, but I could not reproduce the reported numbers on the other datasets with default parameters. Specifically, I used the default AutoEncoder architecture from the code at https://github.com/shahsohil/DCC and followed the training process described in the GitHub tutorial, running the same pretraining commands for the other datasets as for MNIST (the exact sequence is shown below). The results fell well short of the paper's. Here are the AMI scores I obtained versus the ones reported in the paper:

| Dataset | My AMI | Paper AMI |
|---------|--------|-----------|
| YTF     | 0.69   | 0.88      |
| Yale    | 0.11   | 0.96      |
| Reuters | 0.02   | 0.57      |
| RCV1    | 0.04   | 0.50      |
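For concreteness, this is the pipeline I ran for each dataset, shown here for YTF. It mirrors the repo's MNIST walkthrough with only the `--data` argument changed; the learning rate, iteration counts, and checkpoint filename are the MNIST defaults from the tutorial, which I did not tune per dataset (and which may be the source of the gap):

```shell
# SDAE pretraining -- same hyperparameters as the MNIST tutorial
python pretraining.py --data ytf --tensorboard --id 1 --niter 50000 --lr 10 --step 20000

# Extract features from the pretrained SDAE
python extract_feature.py --data ytf --net checkpoint_4.pth.tar --features pretrained

# Merge the mkNN graph with the pretrained features
python copyGraph.py --data ytf --graph pretrained.mat --features pretrained.pkl --out pretrained

# Run the DCC optimization
python DCC.py --data ytf --net checkpoint_4.pth.tar --tensorboard --id 1
```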
What training parameters did you use in your experiments?