Object tracking using Roboflow Inference API and Zero-Shot (CLIP) Deep SORT. Read more in our Zero-Shot Object Tracking announcement post.
Example object tracking courtesy of the Roboflow Universe public Aquarium model and dataset. You can adapt this to your own dataset hosted on Roboflow or to any pre-trained model from Roboflow Universe.
Object tracking involves following individual objects of interest across frames. It combines the output of an object detection model with a secondary algorithm that determines which detections identify "the same" object over time.
Previously, this required training a special classification model to differentiate the instances of each class. In this repository, we use OpenAI's CLIP zero-shot image classifier to create a universal object tracker: all you need is a trained object detection model, and CLIP handles the instance identification for the tracking algorithm.
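To make this concrete, here is a minimal sketch (illustrative only, not the repository's exact code) of the core idea: CLIP embeddings of each detection crop serve as the appearance features that a Deep SORT-style matcher compares across frames. The embed_detections helper and its (x1, y1, x2, y2) box format are assumptions for illustration.

import clip
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def embed_detections(frame, boxes):
    # frame: a PIL image; boxes: a list of (x1, y1, x2, y2) pixel tuples.
    crops = [preprocess(frame.crop(box)) for box in boxes]
    with torch.no_grad():
        features = model.encode_image(torch.stack(crops).to(device))
    # Normalize so cosine similarity reduces to a dot product, which is
    # what a Deep SORT-style appearance-matching step consumes.
    return features / features.norm(dim=-1, keepdim=True)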
Upload, annotate, and train your model on Roboflow with Roboflow Train. Your model will be hosted on an inference URL.
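Before running the tracker, you can sanity-check your hosted endpoint directly. This is a minimal sketch assuming the standard Roboflow hosted-inference conventions (a base64-encoded image POSTed with your API key); your-model/1 and example.jpg are placeholders.

import base64
import requests

with open("example.jpg", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

response = requests.post(
    "https://detect.roboflow.com/your-model/1",
    params={"api_key": "ROBOFLOW_API_KEY"},
    data=encoded,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
print(response.json())  # predictions with class, confidence, and box coordinates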
In the future we will update this repository to support using other custom models (like YOLOv5) as the object detector (PRs welcome).
Clone repositories
git clone https://github.com/roboflow-ai/zero-shot-object-tracking
cd zero-shot-object-tracking
git clone https://github.com/openai/CLIP.git CLIP-repo
cp -r ./CLIP-repo/clip ./clip # Unix based
robocopy CLIP-repo/clip clip\ # Windows
Install requirements (Python 3.7+)
pip install --upgrade pip
pip install -r requirements.txt
Install requirements (Anaconda, Python 3.8)
conda install pytorch torchvision torchaudio -c pytorch
conda install -c conda-forge ftfy regex
pip install opencv-python pycocotools tensorflow
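Whichever install path you choose, a quick import check confirms the setup (a small sketch; run it from the repository root so the copied clip package is found):

import clip, torch
print(torch.__version__)
print(clip.available_models())  # e.g. ['RN50', ..., 'ViT-B/32']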
Run
python clip_object_tracker.py --source data/video/cards.mp4 --url https://detect.roboflow.com/playing-cards-ow27d/1 --api_key ROBOFLOW_API_KEY
(by default, output will be in runs/detect/exp[num])
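To eyeball the result, you can play the saved video back with OpenCV (a quick sketch; the assumption here is that the output file keeps the input video's name, cards.mp4, and exp[num] is the run directory just created):

import cv2

cap = cv2.VideoCapture("runs/detect/exp/cards.mp4")  # adjust exp[num] to your run
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc quits playback
        break
cap.release()
cv2.destroyAllWindows()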
Help
python clip_object_tracker.py -h
Huge thanks to The AI Guys' yolov4-deepsort repository, which this project adapts for its tracking implementation.