Thank you for your excellent work! I have one question about the evaluation.
When I ran evaluation on the COCO dataset in the zero-shot setting, I found that setting DATASETS.TEST to "coco_2017_test_stuff_sem_seg" gives:
"miou-base": 37.7 "miou-unbase" 36.8
These results seem normal.
But when I set DATASETS.TEST to "coco_2017_test_stuff_base_sem_seg" and "coco_2017_test_stuff_novel_sem_seg" respectively, I got:
"miou": 30.8 (for base) "miou": 66.0 (for novel)
I wonder why this discrepancy exists. Thanks!
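For reference, this is roughly how I run each evaluation. It is only a sketch of my setup: the config path, the checkpoint path, and the stock detectron2 `DefaultTrainer` / `DATASETS.TEST` usage are assumptions on my side, and the repo's own entry point may differ.

```python
# Sketch of my evaluation loop (assumes a stock detectron2-style setup;
# the config/checkpoint paths below are placeholders, not the repo's real files).
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
cfg.merge_from_file("configs/zero_shot_coco_stuff.yaml")  # placeholder config
cfg.MODEL.WEIGHTS = "output/model_final.pth"              # placeholder checkpoint

# The only thing I change between runs is DATASETS.TEST.
splits = [
    ("coco_2017_test_stuff_sem_seg",),       # full split  -> miou-base / miou-unbase
    ("coco_2017_test_stuff_base_sem_seg",),  # base classes only  -> miou
    ("coco_2017_test_stuff_novel_sem_seg",), # novel classes only -> miou
]

for split in splits:
    cfg.DATASETS.TEST = split
    model = DefaultTrainer.build_model(cfg)
    DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS)
    print(split, DefaultTrainer.test(cfg, model))
```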