
Set-of-Mark Visual Prompting for GPT-4V

πŸ‡ [Read our arXiv Paper] Β  🍎 [Project Page]

Jianwei Yang*⚡, Hao Zhang*, Feng Li*, Xueyan Zou*, Chunyuan Li, Jianfeng Gao

* Core Contributors     ⚡ Project Lead

Introduction

We present Set-of-Mark (SoM) prompting: simply overlaying a set of spatial and speakable marks on an image unleashes the visual grounding abilities of the strongest LMM -- GPT-4V. Let's use visual prompting for vision!


GPT-4V + SoM Demo

som_gpt4v_demo.mp4

🔥 News

  • [11/21] Thanks to Roboflow and @SkalskiP, a Hugging Face demo for SoM + GPT-4V is online! Try it out!

  • [11/07] We released the vision benchmark we used to evaluate GPT-4V with SoM prompting! Check out the benchmark page!

  • [11/07] Now that GPT-4V API has been released, we are releasing a demo integrating SoM into GPT-4V!

export OPENAI_API_KEY=YOUR_API_KEY
python demo_gpt4v_som.py
  • [10/23] We released the SoM toolbox code for generating set-of-mark prompts for GPT-4V. Try it out!

🔗 Fascinating Applications

Fascinating applications of SoM in GPT-4V:

🔗 Related Works

Our method combines the following models to generate the set of marks:

  • Mask DINO: State-of-the-art closed-set image segmentation model
  • OpenSeeD: State-of-the-art open-vocabulary image segmentation model
  • GroundingDINO: State-of-the-art open-vocabulary object detection model
  • SEEM: Versatile, promptable, interactive and semantic-aware segmentation model
  • Semantic-SAM: Segment and recognize anything at any granularity
  • Segment Anything: Segment anything

We are standing on the shoulders of the giant GPT-4V (playground)!

🚀 Quick Start

  • Install segmentation packages
# install SEEM
pip install git+https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once.git@package
# install SAM
pip install git+https://github.com/facebookresearch/segment-anything.git
# install Semantic-SAM
pip install git+https://github.com/UX-Decoder/Semantic-SAM.git@package
# install Deformable Convolution for Semantic-SAM
cd ops && sh make.sh && cd ..

# common error fix:
python -m pip install 'git+https://github.com/MaureenZOU/detectron2-xyz.git'
  • Download the pretrained models
sh download_ckpt.sh
  • Run the demo
python demo_som.py

And you will see this interface:

(SoM toolbox interface screenshot)
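For a programmatic flavor of what the toolbox produces, here is a minimal sketch that numbers the regions found by SAM's automatic mask generator. It only illustrates the set-of-mark idea and is not the repo's demo_som.py pipeline (which also wires in SEEM and Semantic-SAM); the image path is a placeholder, and the checkpoint name assumes the standard SAM ViT-H weights.

# Sketch only: number the regions SAM finds (the core of set-of-mark prompting).
# Assumes `segment-anything` and OpenCV are installed and a SAM ViT-H checkpoint
# (e.g. sam_vit_h_4b8939.pth) has been downloaded; "your_image.png" is a placeholder.
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

image = cv2.cvtColor(cv2.imread("your_image.png"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # list of dicts; "segmentation" is an HxW bool array

marked = image.copy()
for idx, m in enumerate(sorted(masks, key=lambda m: m["area"], reverse=True), start=1):
    seg = m["segmentation"].astype(np.uint8)
    contours, _ = cv2.findContours(seg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours(marked, contours, -1, (255, 255, 255), 1)       # outline the region
    ys, xs = np.nonzero(seg)
    cv2.putText(marked, str(idx), (int(xs.mean()), int(ys.mean())),  # speakable numeric mark
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 255, 255), 2)

cv2.imwrite("marked.png", cv2.cvtColor(marked, cv2.COLOR_RGB2BGR))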

Deploy to AWS

To deploy SoM to EC2 on AWS via GitHub Actions:

  1. Fork this repository and clone your fork to your local machine.
  2. Follow the instructions at the top of deploy.py.

👉 Comparing standard GPT-4V and its combination with SoM Prompting


πŸ“ SoM Toolbox for image partition

Users can select which granularity of masks to generate and which mode to use, automatic (top) or interactive (bottom). A higher alpha blending value (0.4) is used for better visualization.
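For reference, the alpha blending mentioned above can be expressed in a few lines of NumPy. This is a generic sketch, not the toolbox's own rendering code, and blend_mask is a hypothetical helper name.

import numpy as np

def blend_mask(image: np.ndarray, mask: np.ndarray, color, alpha: float = 0.4) -> np.ndarray:
    # Tint `image` with `color` wherever `mask` is True; alpha=0.4 matches the value above.
    # e.g. blend_mask(image, masks[0]["segmentation"], color=(255, 0, 0))
    out = image.astype(np.float32).copy()
    out[mask] = (1.0 - alpha) * out[mask] + alpha * np.asarray(color, dtype=np.float32)
    return out.astype(np.uint8)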

🦄 Interleaved Prompt

SoM enables interleaved prompts that mix textual and visual content, where the visual content can be referenced by its region indices.
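As a concrete illustration, an interleaved request to GPT-4V could look like the sketch below, sending a marked image together with text that refers to the numeric region indices. This mirrors but is not identical to demo_gpt4v_som.py; the model name gpt-4-vision-preview, the prompt wording, and the file marked.png are assumptions for the example, and OPENAI_API_KEY must be set as in the News section above.

import base64
from openai import OpenAI  # openai>=1.0; reads OPENAI_API_KEY from the environment

client = OpenAI()
with open("marked.png", "rb") as f:          # an image annotated with numeric marks
    image_b64 = base64.b64encode(f.read()).decode()

# Interleaved prompt: the text refers to regions by the indices drawn on the image.
response = client.chat.completions.create(
    model="gpt-4-vision-preview",            # assumed model name for this sketch
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Each object in the image carries a numeric mark. What is the object "
                     "marked 3, and how does it relate to the object marked 7?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
    max_tokens=300,
)
print(response.choices[0].message.content)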

πŸŽ–οΈ Mark types used in SoM


🌋 Evaluation tasks examples


Use cases

🌷 Grounded Reasoning and Cross-Image Reference


In comparison to GPT-4V without SoM, adding marks enables GPT-4V to ground its reasoning in detailed contents of the image (left). Clear cross-image object references are observed on the right.

πŸ•οΈ Problem Solving


Case study on solving a CAPTCHA. GPT-4V gives a wrong answer with an incorrect number of squares, while after SoM prompting it finds the correct squares via their corresponding marks.

πŸ”οΈ Knowledge Sharing


Case study on an image of a dish for GPT-4V. GPT-4V does not produce a grounded answer with the original image. With SoM prompting, GPT-4V not only names the ingredients but also ties each one to its marked region.

🕌 Personalized Suggestion


SoM-prompted GPT-4V gives very precise suggestions, while the original one fails and even hallucinates foods, e.g., soft drinks.

🌼 Tool Usage Instruction

Likewise, GPT-4V with SoM can provide thorough tool usage instructions, teaching users the function of each button on a controller. Note that this image is not fully labeled, yet GPT-4V can also provide information about the unlabeled buttons.

🌻 2D Game Planning


GPT-4V with SoM gives a reasonable suggestion on how to achieve a goal in a gaming scenario.

🕌 Simulated Navigation


🌳 Results

We conduct experiments on various vision tasks to verify the effectiveness of our SoM. Results show that GPT-4V+SoM outperforms specialist models on most vision tasks and is comparable to MaskDINO on COCO panoptic segmentation.

βœ’οΈ Citation

If you find our work helpful for your research, please consider citing the following BibTeX entry.

@article{yang2023setofmark,
      title={Set-of-Mark Prompting Unleashes Extraordinary Visual Grounding in GPT-4V}, 
      author={Jianwei Yang and Hao Zhang and Feng Li and Xueyan Zou and Chunyuan Li and Jianfeng Gao},
      journal={arXiv preprint arXiv:2310.11441},
      year={2023},
}
