Open
Description
Thank you for your excellent work, but I have one question about the evaluation.
When I run zero-shot evaluation on the COCO dataset with DATASET.TEST set to "coco_2017_test_stuff_sem_seg", I get:
"miou-base": 37.7, "miou-unbase": 36.8
These look normal.
But when I set DATASET.TEST to "coco_2017_test_stuff_base_sem_seg" and "coco_2017_test_stuff_novel_sem_seg" respectively, I get:
"miou": 30.8 (for base), "miou": 66.0 (for novel)
I wonder why this odd discrepancy exists. Thanks!
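One way to see why the two protocols need not agree (a minimal sketch, not the repository's code; the confusion-matrix values are made up for illustration): joint evaluation averages per-class IoU computed on the full confusion matrix, where mispredictions involving the other split still count as false positives/negatives, while a subset-only run drops the other split's pixels and classes entirely. The two aggregations generally produce different mIoU for the same split:

```python
import numpy as np

# Hypothetical 3-class confusion matrix C[gt, pred], in pixels.
# Classes 0 and 1 play the role of "base", class 2 of "novel".
C = np.array([
    [80, 10, 10],   # gt class 0
    [ 5, 85, 10],   # gt class 1
    [20, 20, 60],   # gt class 2
])

def iou_per_class(conf):
    """Per-class IoU = TP / (TP + FP + FN) from a confusion matrix."""
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp   # predicted as c but gt is another class
    fn = conf.sum(axis=1) - tp   # gt is c but predicted as another class
    return tp / (tp + fp + fn)

# (a) Joint evaluation: IoU from the full matrix, averaged over the
# base classes. Predictions of base classes on novel-class pixels
# (and vice versa) still enter as FP/FN.
miou_base_joint = iou_per_class(C)[[0, 1]].mean()

# (b) Base-only evaluation: novel-class pixels and the novel class
# itself are removed before computing IoU.
C_base = C[[0, 1]][:, [0, 1]]
miou_base_only = iou_per_class(C_base).mean()

print(miou_base_joint, miou_base_only)  # the two values differ
```

Whether the subset-only number comes out higher or lower than the joint one depends on how the dataset registration remaps ground-truth labels and the ignore label, so a mismatch in that remapping is a plausible place to look for the reported gap.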
Activity
NanAlbert commented on Mar 27, 2023
Hello, have you solved this problem? I have the same question. In general, the result on "coco_2017_test_stuff_base_sem_seg" should be higher than "miou-base". Do you know the reason behind this strange phenomenon? Thank you so much.
Harry-zzh commented on Apr 2, 2023
Sorry, I haven't solved it yet.
MendelXu commented on May 9, 2023
Sorry for the late reply. I have forgotten the details of the code, so I have no idea why this happened. If you find any possible error that may cause the problem, please tell me or open a pull request. Thanks very much.