
Matcher: Segment Anything with One Shot Using All-Purpose Feature Matching

Yang Liu¹*,   Muzhi Zhu¹*,   Hengtao Li¹*,   Hao Chen¹,   Xinlong Wang²,   Chunhua Shen¹

¹Zhejiang University,   ²Beijing Academy of Artificial Intelligence

ICLR 2024

🚀 Overview

[Overview figure]

📖 Description

Powered by large-scale pre-training, vision foundation models exhibit significant potential in open-world image understanding. However, unlike large language models that excel at directly tackling various language tasks, vision foundation models require a task-specific model structure followed by fine-tuning on specific tasks. In this work, we present Matcher, a novel perception paradigm that utilizes off-the-shelf vision foundation models to address various perception tasks. Matcher can segment anything by using an in-context example without training. Additionally, we design three effective components within the Matcher framework to collaborate with these foundation models and unleash their full potential in diverse perception tasks. Matcher demonstrates impressive generalization performance across various segmentation tasks, all without training. Our visualization results further showcase the open-world generality and flexibility of Matcher when applied to images in the wild.

Paper
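
Below is a high-level sketch of how such a training-free pipeline can be assembled. It is an illustrative, hypothetical example, not the repository's actual API: the function name one_shot_segment, its arguments, and the feature shapes are assumptions. It assumes (N, C) patch features from a frozen encoder such as DINOv2 and a SamPredictor-style wrapper around a frozen SAM; the reference mask selects the in-context example's foreground patches, the best-matching target patches become point prompts, and SAM turns those prompts into a mask.

import numpy as np
import torch
import torch.nn.functional as F

def one_shot_segment(ref_feats, ref_mask, tgt_feats, grid_hw, tgt_image, sam_predictor, k=8):
    """Illustrative one-shot segmentation via feature matching plus a frozen SAM.

    ref_feats, tgt_feats: (N, C) patch features from a frozen encoder (e.g. DINOv2).
    ref_mask: (N,) boolean tensor marking the reference object's foreground patches.
    grid_hw: (H, W) patch-grid shape used to map patch indices back to pixels.
    tgt_image: HxWx3 uint8 array of the target image.
    sam_predictor: a SamPredictor-style wrapper exposing set_image() and predict().
    """
    # 1) Correspondence: cosine similarity between every reference foreground
    #    patch and every target patch.
    ref_fg = F.normalize(ref_feats[ref_mask], dim=-1)        # (Nf, C)
    tgt = F.normalize(tgt_feats, dim=-1)                     # (N, C)
    sim = ref_fg @ tgt.t()                                   # (Nf, N)

    # 2) Prompt generation: keep the k target patches that best match the
    #    reference foreground and convert them to pixel coordinates.
    best_per_patch, _ = sim.max(dim=0)                       # (N,)
    top_idx = best_per_patch.topk(k).indices
    H, W = grid_hw
    ph, pw = tgt_image.shape[0] / H, tgt_image.shape[1] / W  # patch size in pixels
    xs = ((top_idx % W).float() + 0.5) * pw
    ys = ((top_idx // W).float() + 0.5) * ph
    points = torch.stack([xs, ys], dim=-1).cpu().numpy()     # (k, 2) in (x, y)

    # 3) Mask generation: prompt the frozen SAM with the matched points and
    #    keep its highest-scoring mask.
    sam_predictor.set_image(tgt_image)
    masks, scores, _ = sam_predictor.predict(
        point_coords=points,
        point_labels=np.ones(len(points), dtype=np.int64),
        multimask_output=True,
    )
    return masks[np.argmax(scores)]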

ℹ️ News

  • 2024.1 Matcher has been accepted to ICLR 2024!
  • 2024.1 Matcher supports Semantic-SAM for better part segmentation.
  • 2024.1 We provide a Gradio Demo.
  • 2024.1 Released code for the one-shot semantic segmentation and one-shot part segmentation tasks.

📖 Recommended Works

  • SINE: A Simple Image Segmentation Framework via In-Context Examples. GitHub.
  • DiffewS: Unleashing the Potential of the Diffusion Model in Few-shot Semantic Segmentation. GitHub.

🗓️ TODO

  • Gradio Demo
  • Release code of one-shot semantic segmentation and one-shot part segmentation tasks
  • Release code and models for VOS

🏗️ Installation

See installation instructions.

👻 Getting Started

See Preparing Datasets for Matcher.

See Getting Started with Matcher.

🖼️ Demo

One-Shot Semantic Segmentation

[figure]

One-Shot Object Part Segmentation

[figure]

Cross-Style Object and Object Part Segmentation

[figure]

Controllable Mask Output

[figure]

Video Object Segmentation

[Video: vos_demo.mp4]

🎫 License

For academic use, this project is licensed under the 2-clause BSD License. For commercial use, please contact Chunhua Shen.

🖊️ Citation

If you find this project useful in your research, please consider citing:

@article{liu2023matcher,
  title={Matcher: Segment Anything with One Shot Using All-Purpose Feature Matching},
  author={Liu, Yang and Zhu, Muzhi and Li, Hengtao and Chen, Hao and Wang, Xinlong and Shen, Chunhua},
  journal={arXiv preprint arXiv:2305.13310},
  year={2023}
}

Acknowledgement

This project builds upon SAM, DINOv2, SegGPT, HSNet, Semantic-SAM, and detectron2.
