Evaluation problem on zero-shot COCO dataset #21

Open
@Harry-zzh

Description

Thank you for your excellent work, but I have a question about evaluation.

When I run evaluation on the COCO dataset in the zero-shot setting, I find that setting DATASET.TEST to "coco_2017_test_stuff_sem_seg" gives:
"miou-base": 37.7, "miou-unbase": 36.8
These look normal.

But when I set DATASET.TEST to "coco_2017_test_stuff_base_sem_seg" and "coco_2017_test_stuff_novel_sem_seg" respectively, I get:
"miou": 30.8 (base), "miou": 66.0 (novel)

I wonder why this discrepancy exists. Thanks!
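For reference, one way to sanity-check the split runs is to recompute the base and novel means from the per-class IoUs of the single full-label-space evaluation, instead of evaluating each split dataset separately. This is only a sketch: the helper name, class IDs, and IoU values below are all hypothetical, not taken from this repo.

```python
# Hypothetical sketch: recompute base/novel mIoU from the per-class IoUs
# of one full evaluation, so the separate split runs can be compared
# against it. All class IDs and IoU values here are dummy placeholders.

def subset_miou(per_class_iou, class_ids):
    """Mean IoU over a subset of class IDs, skipping missing classes."""
    vals = [per_class_iou[c] for c in class_ids if c in per_class_iou]
    return sum(vals) / len(vals) if vals else float("nan")

# Dummy per-class IoUs from a single run over the full label space.
per_class_iou = {0: 0.42, 1: 0.35, 2: 0.31, 3: 0.70, 4: 0.58}
base_ids = [0, 1, 2]   # hypothetical base-class IDs
novel_ids = [3, 4]     # hypothetical novel-class IDs

print(f"base mIoU:  {subset_miou(per_class_iou, base_ids):.3f}")
print(f"novel mIoU: {subset_miou(per_class_iou, novel_ids):.3f}")
```

If the numbers recomputed this way differ from the base-only/novel-only dataset runs, a likely cause is that the split datasets restrict the label space at prediction time, so novel classes no longer compete with base classes and their IoU can jump.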
