Train models from MMClassification toolbox on your custom data (Supervisely format is supported).
- Configure Train / Validation splits, model architecture and training hyperparameters
- Visualize and validate training data
- App automatically generates training py configs in MMClassification format
- Run on any computer with GPU (agent) connected to your team
- Monitor progress, metrics, logs and other visualizations within a single dashboard
Watch how-to video for more details:
📊 Application supports multi-label classification. The trained multi-label model predicts, for every image, all labels with a confidence score > 0.5. You can choose multi-label mode at the end of step 3:
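A toy sketch of this decision rule (the class names and scores below are invented for illustration, not produced by the app):

```python
def predict_tags(scores, threshold=0.5):
    """Keep every label whose confidence exceeds the threshold (multi-label mode)."""
    return [label for label, score in scores.items() if score > threshold]

# Invented per-class confidence scores for one image:
scores = {"action": 0.91, "comedy": 0.07, "drama": 0.62, "horror": 0.30}
print(predict_tags(scores))  # ['action', 'drama']
```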
You can try training a multi-label classification model on a project from:
- Movie genre from its poster - the application imports a Kaggle dataset to Supervisely. The movie posters are obtained from the IMDB website, and every image in the dataset is labeled with multiple tags.
By default, the classification model trains on tagged images. There are cases when a user needs to use tagged objects as training examples. To cover this scenario, we added an additional mode to the training dashboard: training can now run on either images or object crops. If the training on objects mode is selected, additional settings with a preview become available. This means you no longer need to run the Crop objects on images app before training to prepare a temporary project with object crops; it is now done automatically in the training dashboard.
Here is the UI screenshot with the settings and preview when the training on objects mode is selected:
Other features, like saving image examples for every class of the trained classification model, also support the new mode. Technically, this is achieved by saving the images together with the other training artifacts (like checkpoints and metrics) in the resulting directory in Team Files. The Serve MMClassification app can then use them correctly with other inference applications from the Ecosystem:
- Add app to your team from Ecosystem
- Make sure you have connected a computer with a GPU to your team by running the Supervisely Agent on it (how-to video)
- Run the app from the context menu of a project with tagged images
- App downloads the input project from the Supervisely Instance to a local directory
- Define train / validation splits
  - Randomly
  - Based on image tags (for example, "train" and "val"; you can assign them yourself)
- Preview all available tags with corresponding image examples. Select the training tags (the model will be trained to predict them).
- App validates data consistency and correctness and produces a report.
- Select how to augment data. All augmentations are performed on the fly during training.
  - Use one of the predefined pipelines
- Select the model and how its weights should be initialized
  - Pretrained on ImageNet
- App generates py configs for the MMClassification toolbox automatically. Just press the Generate button and move forward. You can modify the configuration manually if you are an advanced user and understand the MMClassification toolbox.
- All training artifacts (metrics, visualizations, weights, ...) are uploaded to Team Files. A link to the directory is generated in the UI after training.
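As an aside, the tag-based split option from the steps above can be illustrated with a toy sketch (the image names and tag sets here are invented, and this is not the app's actual implementation):

```python
def split_by_tags(images, train_tag="train", val_tag="val"):
    """Assign each (name, tags) record to the train or val split by its tags."""
    train, val = [], []
    for name, tags in images:
        if train_tag in tags:
            train.append(name)
        elif val_tag in tags:
            val.append(name)
    return train, val

images = [("01.jpg", {"train"}), ("02.jpg", {"val"}), ("03.jpg", {"train"})]
train_split, val_split = split_by_tags(images)
print(train_split, val_split)  # ['01.jpg', '03.jpg'] ['02.jpg']
```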
The save path is the following: /mmclassification/<task id>_<input project name>/
For example: /mmclassification/5886_synthetic products v2/
Structure is the following:
```
.
├── checkpoints
│   ├── 20210701_113427.log
│   ├── 20210701_113427.log.json
│   ├── best_accuracy_top-1_epoch_44.pth
│   ├── epoch_48.pth
│   ├── epoch_49.pth
│   ├── epoch_50.pth
│   └── latest.pth
├── configs
│   ├── augs_config.json
│   ├── augs_preview.py
│   ├── dataset_config.py
│   ├── model_config.py
│   ├── runtime_config.py
│   ├── schedule_config.py
│   └── train_config.py
├── info
│   ├── gt_labels.json
│   ├── tag2urls.json
│   └── ui_state.json
└── open_app.lnk
```
- checkpoints directory contains MMClassification logs and weights
- configs directory contains all the configs the app generated for the MMClassification toolbox; they may be useful for advanced users who would like to export models and use them outside Supervisely
- info directory contains basic information about the training:
- gt_labels.json - mapping between class names and their indices; this file allows you to interpret NN predictions. For example:

```json
{
  "cat": 0,
  "dog": 1,
  "bird": 2,
  "frog": 3
}
```
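For instance, to turn a prediction index back into a class name, you can invert this mapping. A minimal sketch (the per-class scores below are invented; the real file normally lives in the info directory in Team Files):

```python
import json

# Inlined contents of gt_labels.json (normally downloaded from Team Files):
gt_labels = json.loads('{"cat": 0, "dog": 1, "bird": 2, "frog": 3}')

# Invert the mapping: index -> class name.
idx_to_name = {idx: name for name, idx in gt_labels.items()}

scores = [0.05, 0.80, 0.10, 0.05]  # invented per-class scores from the model
predicted = max(range(len(scores)), key=lambda i: scores[i])
print(idx_to_name[predicted])  # dog
```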
- tag2urls.json - for every tag, several image examples are saved; this file is used when the model is integrated into the labeling interface:

```json
{
  "cat": ["http://supervisely.private/a/b/c/01.jpg", "http://supervisely.private/a/b/c/02.jpg"],
  "dog": ["http://supervisely.private/d/d/d/01.jpg", "http://supervisely.private/d/d/d/02.jpg"],
  "bird": ["http://supervisely.private/c/c/c/01.jpg", "http://supervisely.private/c/c/c/02.jpg"],
  "frog": ["http://supervisely.private/c/c/c/01.jpg", "http://supervisely.private/c/c/c/02.jpg"]
}
```
- ui_state.json - file with all values defined in the UI:

```json
{
  "...": "...",
  "epochs": 50,
  "gpusId": "0",
  "imgSize": 256,
  "batchSizePerGPU": 64,
  "workersPerGPU": 3,
  "valInterval": 1,
  "metricsPeriod": 10,
  "checkpointInterval": 1,
  "maxKeepCkptsEnabled": true,
  "maxKeepCkpts": 3,
  "saveLast": true,
  "saveBest": true,
  "optimizer": "SGD",
  "lr": 0.001,
  "momentum": 0.9,
  "weightDecay": 0.0001,
  "....": "..."
}
```
- open_app.lnk - link to the finished session; you can open the finished training dashboard at any time and review all settings and visualizations
You can use your trained models outside the Supervisely platform without any dependency on the Supervisely SDK. Just download the config files and model weights (.pth) from Team Files; then you can build and use the model as a normal model in mmcls/mmpretrain. See this Jupyter Notebook for details.
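For instance, a minimal inference sketch with mmpretrain (the file names follow the artifact layout above and must be downloaded locally first; treat the exact API and paths as assumptions to verify against the linked notebook):

```python
# Hypothetical sketch: running a trained checkpoint with mmpretrain outside
# Supervisely. train_config.py and latest.pth come from the Team Files
# artifact directory shown above.
from mmpretrain.apis import init_model, inference_model

model = init_model(
    "configs/train_config.py",   # config generated by the app
    "checkpoints/latest.pth",    # trained weights
    device="cpu",                # or "cuda:0" if a GPU is available
)
result = inference_model(model, "demo_image.jpg")
print(result["pred_class"], result["pred_score"])
```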