Vision Detector 4+

Run your Vision CoreML model

Kazufumi Suzuki

    • Free

Description

Unlock the power of CoreML on video streams with Vision Detector, which simplifies model execution without the need for Xcode previews or application builds.

Vision Detector performs real-time image processing on the Mac using a CoreML model.
To use the app, first prepare a machine learning model in CoreML format with CreateML or coremltools (a conversion sketch follows this paragraph).
When you launch Vision Detector, it searches for input devices in the following order: external video inputs connected to your Mac, the MacBook's FaceTime camera, and nearby iPhones, and then displays the video.
You can switch input devices via the Camera menu.
You can select your CoreML model from the app's Open menu or the control panel buttons, or drag and drop it onto the Vision Detector app icon in the Finder or Dock.
Once the model is loaded, you can start or stop processing by pressing the Play button or hitting the space bar.
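
The listing does not show how such a model is produced. As one possibility, here is a minimal coremltools sketch, not part of the app, for exporting a PyTorch image classifier in CoreML format; the model choice, input name, and file names are assumptions.

    # Hypothetical example: export a torchvision classifier to CoreML.
    # The model, input shape, and file names are placeholders.
    import torch
    import torchvision
    import coremltools as ct

    torch_model = torchvision.models.mobilenet_v3_small(weights="DEFAULT").eval()
    example_input = torch.rand(1, 3, 224, 224)
    traced = torch.jit.trace(torch_model, example_input)

    # Declaring the input as an image lets the app feed camera frames directly
    # instead of MultiArray data, which Vision Detector does not accept.
    mlmodel = ct.convert(
        traced,
        inputs=[ct.ImageType(name="image", shape=(1, 3, 224, 224))],
        convert_to="mlprogram",
    )
    mlmodel.save("MyClassifier.mlpackage")

A classifier exported this way still needs class labels (for example via coremltools' ClassifierConfig) before it behaves as the kind of image classification model described above.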

The supported types of machine learning models include:
- Image classification
- Object detection
- Style transfer
Object detection models that lack a non-maximum suppression layer, and models that use MultiArray for input or output data, are not supported.
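
Because of the MultiArray restriction, it can help to inspect a model's interface before loading it. The following is a small, hypothetical coremltools check (the file name is a placeholder), not something provided by Vision Detector:

    # Hypothetical check: list a CoreML model's input/output types and flag
    # any MultiArray features, which Vision Detector does not support.
    import coremltools as ct

    model = ct.models.MLModel("MyModel.mlpackage")
    spec = model.get_spec()

    for feature in list(spec.description.input) + list(spec.description.output):
        kind = feature.type.WhichOneof("Type")
        marker = "  <-- unsupported MultiArray" if kind == "multiArrayType" else ""
        print(f"{feature.name}: {kind}{marker}")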

In the iCloud documents folder (/Libraries/Containers/VisionDetector/Data/Documents/), you'll find an empty tab-separated values (TSV) file named 'customMessage.tsv'. This file defines custom messages to be displayed while running an object detection model. The data should be organized as a table with two columns, one row per label:
(Label output by YOLO, etc.) (tab) (Message) (return)
(Label output by YOLO, etc.) (tab) (Message) (return)
(Label output by YOLO, etc.) (tab) (Message) (return)
This is experimental.
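
Since the file is plain tab-separated text, it can also be generated with a script. Here is a minimal sketch, with placeholder labels and messages, that writes the two-column format described above:

    # Hypothetical generator for customMessage.tsv: one "<label>\t<message>"
    # row per detection label. Labels and messages here are placeholders.
    import csv

    rows = [
        ("person", "Person detected"),
        ("dog", "Dog detected"),
    ]

    with open("customMessage.tsv", "w", newline="") as f:
        csv.writer(f, delimiter="\t").writerows(rows)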

Note: This application does not include a machine learning model.

What’s New

Version 1.6

Video capture is automatically initiated on startup.
The items in the Camera menu now update dynamically.
Pressing the green window button while holding the Option key resizes the window to match the size of the input signal.

App Privacy

The developer, Kazufumi Suzuki, indicated that the app’s privacy practices may include handling of data as described below. For more information, see the developer's privacy policy.

Data Not Collected

The developer does not collect any data from this app.

Privacy practices may vary, for example, based on the features you use or your age.

More By This Developer

You Might Also Like

ML Trainer: Make Training Data
Developer Tools
True Scanner
Developer Tools
ML Annotator
Developer Tools
Neural Object Detector
Developer Tools
JSON Dashboard
Developer Tools
TensorFlow TFLite Debugger
Developer Tools