
Commit a8028ae: updated docs
dusty-nv committed Jul 15, 2020 (1 parent: 32e96b3)
Showing 2 changed files with 19 additions and 17 deletions.
32 changes: 17 additions & 15 deletions docs/pytorch-collect-detection.md
@@ -9,7 +9,7 @@
The previously-used `camera-capture` tool can also label object detection datasets:

<img src="https://github.com/dusty-nv/jetson-inference/raw/dev/docs/images/pytorch-collection-detect.jpg" >

- When the `Dataset Type` drop-down is in Detection mode, the tool creates datasets in [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC/) format (which is supported by the training code).
+ When the `Dataset Type` drop-down is in Detection mode, the tool creates datasets in [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC/) format (which is supported during training).

> **note:** if you wish to label a set of images that you already have (as opposed to capturing them from camera), try using a tool like [LabelImg](https://github.com/tzutalin/labelImg) that also saves in Pascal VOC format.
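
For reference, Pascal VOC keeps one XML annotation file per image. A minimal sketch of such a file follows; the filename, dimensions, class name, and box coordinates are all illustrative:

```xml
<annotation>
  <filename>20200715-140201.jpg</filename>
  <size>
    <width>1280</width>
    <height>720</height>
    <depth>3</depth>
  </size>
  <object>
    <name>apple</name>
    <truncated>0</truncated>
    <difficult>0</difficult>
    <bndbox>
      <xmin>420</xmin>
      <ymin>180</ymin>
      <xmax>760</xmax>
      <ymax>510</ymax>
    </bndbox>
  </object>
</annotation>
```
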
@@ -49,25 +49,25 @@
Below is the `Data Capture Control` window after the `Dataset Type` drop-down has been set to Detection mode:

<img src="https://github.com/dusty-nv/jetson-inference/raw/dev/docs/images/pytorch-collection-detection-widget.jpg" >

- Then, open the dataset path and class labels. The `Freeze/Edit` and `Save` buttons will then become active.
+ Then, open the dataset path and class labels that you created. The `Freeze/Edit` and `Save` buttons will then become active.

Position the camera at the object(s) in your scene, and click the `Freeze/Edit` button (or press the spacebar). The live camera view will then be 'frozen' and you will be able to draw bounding boxes over the objects. You can then select the appropriate object class for each bounding box in the grid table in the control window. When you are done labelling the image, click the depressed `Freeze/Edit` button again to save the data and unfreeze the camera view for the next image.

- It's important that your data is collected from varying object orientations, camera viewpoints, lighting conditions, and ideally with different backgrounds to create a model that is robust to noise and changes in environment. If you find that you're model isn't performing as well as you'd like, try adding more training data and playing around with the conditions.

Other widgets in the control window include:

- * `Current Set` drop-down: select from train/val/test
+ * `Save on Unfreeze` - automatically save the data when `Freeze/Edit` is unfrozen
+ * `Clear on Unfreeze` - automatically remove the previous bounding boxes on unfreeze
+ * `Merge Sets` - save the same data across the train, val, and test sets
+ * `Current Set` - select from train/val/test
+    * for object detection, you need at least train and test sets (see the dataset layout sketched below)
+    * although if you check `Merge Sets`, the data will be replicated as train, val, **and** test
- * `JPEG Quality` slider: control the encoding quality and size of the saved images
- * `Save on Unfreeze` checkbox: automatically save the data when `Freeze/Edit` is unfreezed
- * `Clear on Unfreeze` checkbox: automatically remove the previous bounding boxes on unfreeze
- * `Merge Sets` checkbox: save the same data across the train, val, and test sets
+ * `JPEG Quality` - control the encoding quality and size of the saved images
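
To make the train/val/test sets concrete, the datasets the tool saves follow the standard Pascal VOC layout, roughly like this (the directory names here assume the usual VOC conventions):

```
<PATH-TO-YOUR-DATASET>/
├── Annotations/        # one Pascal VOC .xml annotation per labeled image
├── ImageSets/
│   └── Main/           # train.txt, val.txt, test.txt image lists
├── JPEGImages/         # the captured .jpg images
└── labels.txt          # your class names, one per line
```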

+ It's important that your data is collected from varying object orientations, camera viewpoints, lighting conditions, and ideally with different backgrounds to create a model that is robust to noise and changes in environment. If you find that your model isn't performing as well as you'd like, try adding more training data and playing around with the conditions.

## Training your Model

- When you've collected a bunch of data, then you can try training a model on it using the same `train_ssd.py` script. The training process is the same as the previous examples, with the exception that the `--dataset-type=voc` flag should be set and `--data` should be set to the location of your dataset:
+ When you've collected a bunch of data, you can try training a model on it using the same `train_ssd.py` script. The training process is the same as in the previous examples, except that the `--dataset-type=voc` and `--data=<PATH>` arguments should be set:

```bash
$ cd jetson-inference/python/training/detection/ssd
```
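
As a rough sketch of the full training run, it would look something like the following; the dataset and model paths are placeholders, `--dataset-type` and `--data` follow the text above, and `--model-dir` mirrors its usage elsewhere in this tutorial:

```bash
# train SSD-Mobilenet on your collected VOC dataset (paths are placeholders)
$ python3 train_ssd.py --dataset-type=voc \
                       --data=<PATH-TO-YOUR-DATASET> \
                       --model-dir=models/<YOUR-MODEL>
```
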
@@ -87,16 +87,18 @@

```bash
DATASET=<PATH-TO-YOUR-DATASET>

detectnet --model=<YOUR-MODEL>/ssd-mobilenet.onnx --labels=<YOUR-MODEL>/labels.txt \
          --input-blob=input_0 --output-cvg=scores --output-bbox=boxes \
            csi://0
```

- > **note:** it's important to run inference with the labels file that gets generated to your model directory, and not the one that you originally created from your dataset. This is because a `BACKGROUND` class gets added to the class labels by `train_ssd.py` and saved to the model directory, which the trained model expects.
+ > **note:** it's important to run inference with the labels file that gets generated to your model directory, and not the one that you originally created from your dataset. This is because a `BACKGROUND` class gets added to the class labels by `train_ssd.py` and saved to the model directory (which the trained model expects to use).
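
For example, if the labels file you created for the dataset listed three classes (the names here are illustrative), the generated copy in the model directory would simply have `BACKGROUND` prepended:

```
BACKGROUND
apple
orange
banana
```
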
- If you need to, go back and collect more training data and re-train your model again. You can restart the again and pick up where you left off using the `--resume` argument (run `python3 train_ssd.py --help` for more info). Remember to re-export the model to ONNX after re-training.
+ If you need to, go back and collect more training data and re-train your model. You can restart training and pick up where you left off using the `--resume` argument (run `python3 train_ssd.py --help` for more info). Remember to re-export the model to ONNX after re-training, as sketched below.
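
A sketch of that loop, assuming checkpoints land in your model directory (the checkpoint filename is a placeholder) and using the same `onnx_export.py` script from the SSD re-training steps:

```bash
# resume training from a previously-saved checkpoint
$ python3 train_ssd.py --dataset-type=voc --data=<PATH-TO-YOUR-DATASET> \
                       --model-dir=models/<YOUR-MODEL> \
                       --resume=models/<YOUR-MODEL>/<CHECKPOINT>.pth

# then re-export the updated weights to ONNX for detectnet
$ python3 onnx_export.py --model-dir=models/<YOUR-MODEL>
```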

## What's Next

- This is the last step of the *Hello AI World* tutorial, which covers inferencing and transfer learning on Jetson with TensorRT and PyTorch. To recap, together we've covered:
+ This is the last step of the *Hello AI World* tutorial, which covers inferencing and transfer learning on Jetson with TensorRT and PyTorch.
+
+ To recap, together we've covered:

* Using image recognition networks to classify images
* Coding your own image recognition programs in Python and C++
@@ -113,7 +113,7 @@
Next we encourage you to experiment and apply what you've learned to other projects:
* an interactive toy or treat dispenser for your pet
* a smart doorbell camera that greets your guests

- For more examples to inspire your creativity, see the **[Jetson Projects](https://developer.nvidia.com/embedded/community/jetson-projects)** page. Have fun and good luck!
+ For more examples to inspire your creativity, see the **[Jetson Projects](https://developer.nvidia.com/embedded/community/jetson-projects)** page. Good luck and have fun!

<p align="right">Back | <b><a href="pytorch-ssd.md">Re-training SSD-Mobilenet</a></p>
</b><p align="center"><sup>© 2016-2020 NVIDIA | </sup><a href="../README.md#hello-ai-world"><sup>Table of Contents</sup></a></p>
4 changes: 2 additions & 2 deletions docs/pytorch-ssd.md
@@ -166,7 +166,7 @@

```bash
mkdir test_fruit

detectnet --model=models/fruit/ssd-mobilenet.onnx --labels=models/fruit/labels.txt \
          --input-blob=input_0 --output-cvg=scores --output-bbox=boxes \
            "images/fruit_*.jpg" test_fruit
```

> **note:** `detectnet.py` can be substituted above to run the Python version of the program
@@ -182,7 +182,7 @@
You can also try running your re-trained plant model on a camera or video stream:
```bash
detectnet --model=models/fruit/ssd-mobilenet.onnx --labels=models/fruit/labels.txt \
          --input-blob=input_0 --output-cvg=scores --output-bbox=boxes \
            csi://0
```

For more details about other camera/video sources, please see [Camera Streaming and Multimedia](aux-streaming.md).
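
Other input sources follow the same pattern; for instance, a video file on disk can stand in for the camera argument (the file path here is a placeholder):

```bash
detectnet --model=models/fruit/ssd-mobilenet.onnx --labels=models/fruit/labels.txt \
          --input-blob=input_0 --output-cvg=scores --output-bbox=boxes \
            /path/to/your/video.mp4
```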
