This repo contains the tools for training, running, and evaluating detectors and classifiers for images collected from motion-triggered camera traps. The core functionality provided is:
- Data parsing from frequently-used camera trap metadata formats into a common format
- Training and evaluation of detectors, particularly our "MegaDetector", which does a pretty good job finding terrestrial animals in a variety of ecosystems
- Training and evaluation of species-level classifiers for specific data sets
- A Web-based demo that runs our models through a REST API hosted on a Web endpoint
- Miscellaneous useful tools for manipulating camera trap data
- Research experiments we're doing around camera trap data (i.e., some directories are highly experimental and you should take them with a grain of salt)
Classifiers and detectors are trained using TensorFlow.
This repo is maintained by folks in the Microsoft AI for Earth program who like looking at pictures of animals. I mean, we want to use machine learning to support conservation too, but we also really like looking at pictures of animals.
This repo does not directly host camera trap data, but we work with our collaborators to make data and annotations available whenever possible on lila.science.
This repo does not extensively host models, though we will release models when they are general enough to be useful to other people.
Speaking of models that might be useful to other people, we have trained a one-class animal detector on several hundred thousand bounding boxes from a variety of ecosystems. The model is trained with the TensorFlow Object Detection API and can be downloaded here (.pb). We use this as the first stage for classifier training and inference. An example script for invoking this detector on new images can be found here. Let us know how it works on your images!
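To give a sense of what invoking the detector involves, here is a minimal sketch of running a frozen TensorFlow Object Detection API model on a single image. This is not the repo's example script: the file paths and confidence threshold are placeholders, and the tensor names assume the standard Object Detection API export.

```python
# Minimal sketch: run a frozen TF Object Detection API model (.pb) on one image.
# Paths and the confidence threshold below are illustrative placeholders.
import numpy as np
import tensorflow as tf
from PIL import Image

MODEL_PATH = 'megadetector.pb'          # hypothetical path to the downloaded model
IMAGE_PATH = 'camera_trap_image.jpg'    # hypothetical test image
CONFIDENCE_THRESHOLD = 0.9              # assumed cutoff; tune for your data

# Load the frozen inference graph
detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(MODEL_PATH, 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

# Load the image as a uint8 array of shape [1, height, width, 3]
image = np.array(Image.open(IMAGE_PATH).convert('RGB'))

# Run the detector; tensor names follow the standard Object Detection API export
with tf.Session(graph=detection_graph) as sess:
    image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
    box_tensor = detection_graph.get_tensor_by_name('detection_boxes:0')
    score_tensor = detection_graph.get_tensor_by_name('detection_scores:0')
    boxes, scores = sess.run([box_tensor, score_tensor],
                             feed_dict={image_tensor: np.expand_dims(image, 0)})

# Report detections above the confidence threshold
for box, score in zip(boxes[0], scores[0]):
    if score >= CONFIDENCE_THRESHOLD:
        # Boxes are [ymin, xmin, ymax, xmax], normalized to [0, 1]
        print('Detection at {} with confidence {:.3f}'.format(box, score))
```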
Here's a "teaser" image of what detector output looks like:
Image credit University of Washington.
For questions about this repo, contact cameratraps@microsoft.com.
This repo is organized into the following folders...
Code for hosting our models as an API, either for synchronous operation (e.g. for real-time inference or for our Web-based demo) or as a batch process (for large biodiversity surveys).
Code for training species classifiers on new data sets; classifiers are generally trained on crops generated by an existing detector.
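As an illustration of that crop-generation step, the following sketch turns one detector box into a classifier training crop. It assumes boxes follow the TensorFlow Object Detection API convention of normalized [ymin, xmin, ymax, xmax] coordinates; the paths and box values are made up.

```python
# Minimal sketch: crop a detected animal out of an image for classifier training.
# The box value and file paths are illustrative assumptions.
from PIL import Image

IMAGE_PATH = 'camera_trap_image.jpg'        # hypothetical path
detection_box = [0.25, 0.40, 0.75, 0.65]    # example normalized [ymin, xmin, ymax, xmax]

image = Image.open(IMAGE_PATH)
width, height = image.size
ymin, xmin, ymax, xmax = detection_box

# Convert normalized coordinates to pixels and crop the animal out of the frame
crop = image.crop((int(xmin * width), int(ymin * height),
                   int(xmax * width), int(ymax * height)))
crop.save('classifier_training_crop.jpg')   # this crop becomes a classifier training sample
```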
Code for:
- Converting frequently-used metadata formats to COCO Camera Traps format
- Creating, visualizing, and editing COCO Camera Traps .json databases
- Generating tfrecords
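For readers unfamiliar with the format, here is an illustrative, heavily simplified COCO Camera Traps database written out as .json from Python. The field values are invented, and the real specification (documented on lila.science) includes additional fields.

```python
# Sketch of a (simplified) COCO Camera Traps .json database; values are made up.
import json

database = {
    'info': {'version': '1.0', 'description': 'Example camera trap database'},
    'categories': [{'id': 0, 'name': 'empty'}, {'id': 1, 'name': 'deer'}],
    'images': [{
        'id': 'seq0001_img001',
        'file_name': 'site01/seq0001_img001.jpg',
        'width': 2048,
        'height': 1536,
        'location': 'site01',
        'datetime': '2019-06-01 14:35:02'
    }],
    'annotations': [{
        'id': 'anno_0001',
        'image_id': 'seq0001_img001',
        'category_id': 1
    }]
}

# Write the database to disk
with open('example_coco_camera_traps.json', 'w') as f:
    json.dump(database, f, indent=1)
```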
Source for the Web-based demo of our MegaDetector model (which we'll release soon!).
Code for training and evaluating detectors.
Ongoing research projects that use this repository in one way or another; as of the time I'm editing this README, there are projects in this folder around active learning and the use of simulated environments for training data augmentation.
Random things that don't fit in any other directory. Currently contains a single file, a not-super-useful but super-duper-satisfying and mostly-successful attempt to use OCR to pull metadata out of image pixels in a fairly generic way, to handle those pesky cases when image metadata is lost.
Image credit USDA, from the NACTI data set.
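For the curious, the general idea behind that OCR experiment looks something like the sketch below. This is not the actual file in this folder: it assumes the camera stamps metadata along the bottom of the frame and uses pytesseract for OCR; the crop fraction and path are placeholders.

```python
# Minimal sketch: OCR the metadata bar burned into a camera trap image.
# The crop fraction and path are illustrative assumptions.
import pytesseract
from PIL import Image

IMAGE_PATH = 'camera_trap_image.jpg'    # hypothetical path

image = Image.open(IMAGE_PATH)
width, height = image.size

# Crop the bottom ~8% of the frame, where many cameras stamp date/time/temperature
metadata_strip = image.crop((0, int(height * 0.92), width, height))

# Run OCR on the cropped strip to recover the stamped text
text = pytesseract.image_to_string(metadata_strip)
print('Recovered metadata text:', text)
```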
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
This repository is licensed under the MIT license.