Question about different results when trying to reproduce #1
Hello, thanks for your interest in our work. I guess you used this repository (https://github.com/gmberton/VPR-datasets-downloader) to format the MSLS dataset and used all query images (about 11k) in MSLS-val for testing. However, the official version of MSLS-val (https://github.com/mapillary/mapillary_sls) only contains 740 query images (i.e., a subset). The vast majority of VPR works use the official version of MSLS-val for testing. You can get these 740 query images through the official repository, or get the keys (names) of these images here.
Yes, I indeed used that repository to format the dataset and used all the validation query images for testing. I did not know about the different subset of MSLS that is typically used for MSLS-val. This will most likely resolve the differences that I encountered. I'll use the official MSLS-val subset for testing as advised. Thank you so much for the help!
The keys (names) of the official MSLS-val images don't follow the form "path/to/file/@utm_easting@utm_northing@...@.jpg", which is used in the code. Could you kindly provide the names in that form?
@HUSTNO1WXY Hello, you can directly run the code in VPR-datasets-downloader to format the image names.
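For anyone landing here later, below is a minimal Python sketch of the matching step being discussed. It is not code from either repository: the key-list file name (`official_msls_val_keys.txt`) and the dataset path are hypothetical, and it assumes the formatted filenames embed the original Mapillary key as one of their "@"-separated fields, which you should verify against your own formatted data.

```python
# Hypothetical sketch (not from either repo): keep only the official 740
# MSLS-val query images among the ~11k produced by VPR-datasets-downloader.
# Assumptions:
#   - "official_msls_val_keys.txt" holds one original Mapillary key per line
#     (e.g. exported from mapillary/mapillary_sls); the file name is made up.
#   - the formatted filenames embed the original key as one of their
#     "@"-separated fields, so we match on any field rather than a fixed index.
from pathlib import Path

keys_file = Path("official_msls_val_keys.txt")          # hypothetical path
queries_dir = Path("datasets/msls/images/val/queries")  # hypothetical path

official_keys = set(keys_file.read_text().split())

# Map each original key to its formatted "@utm_easting@utm_northing@...@.jpg" name.
key_to_name = {}
for image_path in queries_dir.glob("*.jpg"):
    for key in official_keys.intersection(image_path.name.split("@")):
        key_to_name[key] = image_path.name

print(f"Matched {len(key_to_name)} of {len(official_keys)} official keys (expected 740).")
```

With the resulting mapping you can both restrict evaluation to the official 740 queries and recover the formatted name for each official key, which is what the question above asks for.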
Thank you for sharing your work; your paper was very interesting and the results are also very impressive!
I had a question regarding the evaluation on MSLS-val. I attempted to reproduce your results by following the repository, downloading the data, and training the model as described in the README. Initially, I trained the model solely on the MSLS dataset. I then evaluated on MSLS by executing the following command for both my trained model and the provided trained model:
However, these were the results that I obtained:
Further fine-tuning the model on Pitts30k and evaluating it gave the same results as reported in your README for Pitts30k. Therefore, I'm wondering if you could help me understand why there's a difference on MSLS-val. Am I evaluating with the wrong data, or is there something else I might be missing?