Question about the results on the coat dataset. #3
Referenced code: unbiased-pairwise-rec/src/trainer.py, lines 224-225 at commit 562005a.
In addition, the paper reports ranking metrics averaged over 10 different initializations, but the seed in the lines referenced above is never used inside the for-loop: every run uses the seed 12345.
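To illustrate the failure mode, here is a minimal Python sketch; the `run_experiment` helper and the exact loop shape are hypothetical, not the repository's actual code. If the hard-coded seed is passed on every iteration, all ten "different initializations" are identical, and averaging over them averages nothing.

```python
import numpy as np

def run_experiment(seed: int) -> float:
    """Hypothetical stand-in for one training run; returns a ranking metric."""
    rng = np.random.default_rng(seed)
    return rng.normal()

# Buggy pattern (as described above): the loop variable is never used,
# so every one of the 10 "different initializations" uses seed 12345.
buggy = [run_experiment(12345) for seed in range(10)]
assert len(set(buggy)) == 1  # all runs are identical

# Fixed pattern: forward a distinct seed to each run.
fixed = [run_experiment(12345 + seed) for seed in range(10)]
assert len(set(fixed)) > 1  # runs now actually vary with initialization
```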
I also ran the experiments on the Yahoo dataset and cannot reproduce the results reported in your paper.
We are experiencing the same problem: none of the results in the paper seem reproducible. Running the code as-is, we get results almost identical to those posted in this issue, with ExpoMF always performing best. The same holds for the cold-start and rare-item tables. We also tried fixing the seed issue mentioned above so that each run is actually seeded differently. This produces much more variation per run, but the averaged results are again similar to those posted in this issue and do not support the conclusions in the paper. @usaito Could you explain how we can approximately reproduce the results from the paper?
Thanks for sharing the code. I followed your instructions, ran the code directly (without modification), and got the following results in the ranking_all.csv file:
| Model | DCG@3 | DCG@5 | DCG@8 | Recall@3 | Recall@5 | Recall@8 | MAP@3 | MAP@5 | MAP@8 |
|---|---|---|---|---|---|---|---|---|---|
| wmf | 0.0578918 | 0.0786351 | 0.1034409 | 0.0652348 | 0.1101706 | 0.1790738 | 0.0422196 | 0.054651 | 0.0690561 |
| expomf | 0.0909645 | 0.1102964 | 0.1276119 | 0.0959223 | 0.1364149 | 0.1836412 | 0.0848237 | 0.097459 | 0.1076237 |
| relmf | 0.0573253 | 0.0784824 | 0.1036326 | 0.064685 | 0.1102209 | 0.1801171 | 0.0422954 | 0.0555759 | 0.0704563 |
| bpr | 0.0664874 | 0.0864072 | 0.1110801 | 0.0748996 | 0.1178387 | 0.1868158 | 0.0483589 | 0.0607375 | 0.0754249 |
| ubpr | 0.0586047 | 0.0793928 | 0.1034167 | 0.0665724 | 0.1117508 | 0.1782271 | 0.0409541 | 0.0537214 | 0.0678179 |
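For anyone comparing runs, a minimal snippet to check which method wins each metric in this file; the layout is assumed from the paste above, with the first (unnamed) column holding the model name:

```python
import pandas as pd

# Load the metrics file produced by the run; the first column becomes the index.
df = pd.read_csv("ranking_all.csv", index_col=0)
df.index.name = "model"

# For each metric column, report the model with the highest score.
print(df.idxmax())  # with the numbers above, ExpoMF tops every metric except Recall@8
```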
These results differ from the Coat results reported in Table 2 of your paper. Could you tell me how to reproduce the paper's results?