diff --git a/CHANGELOG.md b/CHANGELOG.md
index 373305b55..3a842bac5 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,12 +1,55 @@
-
Back | Next | Contents
-
-Appendix
# Change Log
Major updates and new features to this project will be listed in this document.
+## July 15, 2020
+
+> **note:** API changes from this update are intended to be backwards-compatible, so previous code should still run.
+
+* New [Re-training SSD-Mobilenet](docs/pytorch-ssd.md) object detection tutorial with PyTorch
+* Support for collecting object detection datasets and bounding-box labeling in the `camera-capture` tool
+* [`videoSource`](docs/aux-streaming.md) and [`videoOutput`](docs/aux-streaming.md) APIs for C++/Python that support multiple types of video streams (see the streaming sketch at the end of this list):
+ * MIPI CSI cameras
+ * V4L2 cameras
+ * RTP / RTSP
+ * Video & Images
+ * Image sequences
+ * OpenGL windows
+* Unified the `-console` and `-camera` samples to process both images and video streams
+ * [`imagenet.cpp`](examples/imagenet/imagenet.cpp) / [`imagenet.py`](python/examples/imagenet.py)
+ * [`detectnet.cpp`](examples/detectnet/detectnet.cpp) / [`detectnet.py`](python/examples/detectnet.py)
+ * [`segnet.cpp`](examples/segnet/segnet.cpp) / [`segnet.py`](python/examples/segnet-console.py)
+* Support for `uchar3/uchar4/float3/float4` images (default is now `uchar3` as opposed to `float4`)
+* Replaced opaque Python image capsule with [`jetson.utils.cudaImage`](docs/aux-image.md#image-capsules-in-python) object
+ * See [Image Capsules in Python](docs/aux-image.md#image-capsules-in-python) for more info
+ * Images are now subscriptable/indexable from Python to directly access the pixel data
+ * Numpy ndarray conversion now supports `uchar3/uchar4/float3/float4` formats
+* [`cudaConvertColor()`](https://github.com/dusty-nv/jetson-utils/blob/a587c20ad95d71efd47f9c91e3fbf703ad48644d/cuda/cudaColorspace.h#L31) automated colorspace conversion function (RGB, BGR, YUV, Bayer, grayscale, etc.)
+* Python CUDA bindings for `cudaResize()`, `cudaCrop()`, `cudaNormalize()`, `cudaOverlay()`
+ * See [Image Manipulation with CUDA](docs/aux-image.md) and [`cuda-examples.py`](https://github.com/dusty-nv/jetson-utils/blob/master/python/examples/cuda-examples.py) for examples of using these functions, plus the image sketch at the end of this list
+* DIGITS tutorial is now marked as deprecated (replaced by PyTorch transfer learning tutorial)
+* Logging can now be controlled/disabled from the command line (e.g. `--log-level=verbose`)
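+
+> **note:** a minimal Python sketch of the new streaming API; the `csi://0` camera and `display://0` output below are placeholder URIs, and any of the stream types listed above can be substituted:
+
+``` python
+import jetson.utils
+
+# open an input stream (e.g. MIPI CSI camera) and an output stream (OpenGL display)
+input = jetson.utils.videoSource("csi://0")        # or "/dev/video0", "rtsp://...", "file.mp4", ...
+output = jetson.utils.videoOutput("display://0")   # or "rtp://<remote-ip>:1234", "file.mp4", ...
+
+while True:
+    img = input.Capture()      # returns a jetson.utils.cudaImage
+    output.Render(img)
+    if not input.IsStreaming() or not output.IsStreaming():
+        break
+```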
+
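+> **note:** a short sketch of the new image routines from the Python bindings; the input filename `my_image.jpg` is a placeholder:
+
+``` python
+import jetson.utils
+
+img = jetson.utils.loadImage("my_image.jpg")    # returns a jetson.utils.cudaImage (uchar3 / rgb8 by default)
+print(img.width, img.height, img.format)        # cudaImage metadata
+print(img[0,0])                                 # direct pixel access via subscripting
+
+# resize to half resolution
+resized = jetson.utils.cudaAllocMapped(width=img.width // 2, height=img.height // 2, format=img.format)
+jetson.utils.cudaResize(img, resized)
+
+# convert colorspace (here rgb8 -> gray8)
+gray = jetson.utils.cudaAllocMapped(width=resized.width, height=resized.height, format="gray8")
+jetson.utils.cudaConvertColor(resized, gray)
+
+# map to a numpy ndarray (shares the same memory)
+array = jetson.utils.cudaToNumpy(gray)
+```
+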
+## October 3, 2019
+
+* Added new FCN-ResNet18 semantic segmentation models (see the usage sketch after the table):
+
+| Dataset | Resolution | CLI Argument | Accuracy | Jetson Nano | Jetson Xavier |
+|:------------:|:----------:|--------------|:--------:|:-----------:|:-------------:|
+| [Cityscapes](https://www.cityscapes-dataset.com/) | 512x256 | `fcn-resnet18-cityscapes-512x256` | 83.3% | 48 FPS | 480 FPS |
+| [Cityscapes](https://www.cityscapes-dataset.com/) | 1024x512 | `fcn-resnet18-cityscapes-1024x512` | 87.3% | 12 FPS | 175 FPS |
+| [Cityscapes](https://www.cityscapes-dataset.com/) | 2048x1024 | `fcn-resnet18-cityscapes-2048x1024` | 89.6% | 3 FPS | 47 FPS |
+| [DeepScene](http://deepscene.cs.uni-freiburg.de/) | 576x320 | `fcn-resnet18-deepscene-576x320` | 96.4% | 26 FPS | 360 FPS |
+| [DeepScene](http://deepscene.cs.uni-freiburg.de/) | 864x480 | `fcn-resnet18-deepscene-864x480` | 96.9% | 14 FPS | 190 FPS |
+| [Multi-Human](https://lv-mhp.github.io/) | 512x320 | `fcn-resnet18-mhp-512x320` | 86.5% | 34 FPS | 370 FPS |
+| [Multi-Human](https://lv-mhp.github.io/) | 640x360 | `fcn-resnet18-mhp-640x360` | 87.1% | 23 FPS | 325 FPS |
+| [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC/) | 320x320 | `fcn-resnet18-voc-320x320` | 85.9% | 45 FPS | 508 FPS |
+| [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC/) | 512x320 | `fcn-resnet18-voc-512x320` | 88.5% | 34 FPS | 375 FPS |
+| [SUN RGB-D](http://rgbd.cs.princeton.edu/) | 512x400 | `fcn-resnet18-sun-512x400` | 64.3% | 28 FPS | 340 FPS |
+| [SUN RGB-D](http://rgbd.cs.princeton.edu/) | 640x512 | `fcn-resnet18-sun-640x512` | 65.1% | 17 FPS | 224 FPS |
+
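+> **note:** the `CLI Argument` value also serves as the model name accepted by the `segNet` API; a minimal Python sketch (the input image `street.jpg` is a placeholder):
+
+``` python
+import jetson.inference
+import jetson.utils
+
+net = jetson.inference.segNet("fcn-resnet18-cityscapes-1024x512")
+img = jetson.utils.loadImage("street.jpg")   # placeholder input image
+net.Process(img)                             # run segmentation; retrieve results with net.Overlay() / net.Mask()
+```
+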
## July 19, 2019
* Python API support for imageNet, detectNet, and camera/display utilities
@@ -20,21 +63,15 @@ Major updates and new features to this project will be listed in this document.
* ResNet-18, ResNet-50, ResNet-101, ResNet-152
* VGG-16, VGG-19
* Inception-v4
-
-
-
* New pre-trained object detection models (on 90-class MS-COCO)
* SSD-Mobilenet-v1
* SSD-Mobilenet-v2
* SSD-Inception-v2
-
-
-
* API Reference documentation for C++ and Python
* Command line usage info for all examples, run with `--help`
* Output of network profiler times, including pre/post-processing
* Improved font rasterization using system TTF fonts
-Back | Image Manipulation with CUDA
+##
© 2016-2020 NVIDIA | Table of Contents
diff --git a/README.md b/README.md
index 467c1b4b7..aa7d3ceaa 100644
--- a/README.md
+++ b/README.md
@@ -20,10 +20,11 @@ It's recommended to walk through the Hello AI World module first to familiarize
* [Code Examples](#code-examples)
* [Pre-Trained Models](#pre-trained-models)
* [System Requirements](#recommended-system-requirements)
-* [Extra Resources](#extra-resources)
+* [Change Log](CHANGELOG.md)
> > Jetson Xavier NX and JetPack 4.4 are now supported in the repo.
-> > See our latest technical blog on the [`NVIDIA Jetson Xavier NX Developer Kit`](https://devblogs.nvidia.com/bringing-cloud-native-agility-to-edge-ai-with-jetson-xavier-nxBringing).
+> > Try the new [Re-training SSD-Mobilenet](docs/pytorch-ssd.md) object detection tutorial.
+> > See the [Change Log](CHANGELOG.md) for the latest updates and new features.
## Hello AI World
@@ -93,18 +94,13 @@ Introductory code walkthroughs of using the library are covered during these ste
Additional C++ and Python samples for running the networks on static images and live camera streams can be found here:
-| | Images | Camera |
+| | C++ | Python |
|-------------------|---------------------|---------------------|
-| **C++ ([`examples`](examples/))** | |
-| Image Recognition | [`imagenet-console`](examples/imagenet-console/imagenet-console.cpp) | [`imagenet-camera`](examples/imagenet-camera/imagenet-camera.cpp) |
-| Object Detection | [`detectnet-console`](examples/detectnet-console/detectnet-console.cpp) | [`detectnet-camera`](examples/detectnet-camera/detectnet-camera.cpp)
-| Segmentation | [`segnet-console`](examples/segnet-console/segnet-console.cpp) | [`segnet-camera`](examples/segnet-camera/segnet-camera.cpp) |
-| **Python ([`python/examples`](python/examples/))** | | |
-| Image Recognition | [`imagenet-console.py`](python/examples/imagenet-console.py) | [`imagenet-camera.py`](python/examples/imagenet-camera.py) |
-| Object Detection | [`detectnet-console.py`](python/examples/detectnet-console.py) | [`detectnet-camera.py`](python/examples/detectnet-camera.py) |
-| Segmentation | [`segnet-console.py`](python/examples/segnet-console.py) | [`segnet-camera.py`](python/examples/segnet-camera.py) |
-
-> **note**: for working with numpy arrays, see [`cuda-from-numpy.py`](https://github.com/dusty-nv/jetson-utils/blob/master/python/examples/cuda-from-numpy.py) and [`cuda-to-numpy.py`](https://github.com/dusty-nv/jetson-utils/blob/master/python/examples/cuda-to-numpy.py)
+| Image Recognition | [`imagenet.cpp`](examples/imagenet/imagenet.cpp) | [`imagenet.py`](python/examples/imagenet.py) |
+| Object Detection | [`detectnet.cpp`](examples/detectnet/detectnet.cpp) | [`detectnet.py`](python/examples/detectnet.py) |
+| Segmentation | [`segnet.cpp`](examples/segnet/segnet.cpp) | [`segnet.py`](python/examples/segnet-console.py) |
+
+> **note**: for working with numpy arrays, see [Converting to Numpy Arrays](docs/aux-image.md#converting-to-numpy-arrays) and [Converting from Numpy Arrays](docs/aux-image.md#converting-from-numpy-arrays)
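+
+For example, a quick sketch of the round trip (the array shape below is arbitrary):
+
+``` python
+import numpy as np
+import jetson.utils
+
+array = np.zeros((480, 640, 3), dtype=np.float32)   # arbitrary example array
+cuda_img = jetson.utils.cudaFromNumpy(array)        # numpy ndarray -> CUDA image
+array_2 = jetson.utils.cudaToNumpy(cuda_img)        # CUDA image -> mapped numpy ndarray
+```
+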
These examples will automatically be compiled while [Building the Project from Source](docs/building-repo-2.md), and are able to run the pre-trained models listed below in addition to custom models provided by the user. Launch each example with `--help` for usage info.
diff --git a/docs/aux-image.md b/docs/aux-image.md
index 221c738df..644dea6af 100644
--- a/docs/aux-image.md
+++ b/docs/aux-image.md
@@ -1,5 +1,5 @@
-Back | Next | Contents
+
Back | Next | Contents
Appendix
@@ -449,9 +449,7 @@ CUDA(cudaOverlay(imgInputA, dimsA, imgOutput, dimsOutput, 0, 0));
CUDA(cudaOverlay(imgInputB, dimsB, imgOutput, dimsOutput, dimsA.x, 0));
```
-Next | Change Log
-
-Back | Camera Streaming and Multimedia
-Back |
+##
+
Back | Camera Streaming and Multimedia
© 2016-2020 NVIDIA | Table of Contents
diff --git a/docs/pytorch-collect-detection.md b/docs/pytorch-collect-detection.md
index 109a98f4b..0def0f616 100644
--- a/docs/pytorch-collect-detection.md
+++ b/docs/pytorch-collect-detection.md
@@ -51,7 +51,7 @@ Below is the `Data Capture Control` window after the `Dataset Type` drop-down ha
Then, open the dataset path and class labels that you created. The `Freeze/Edit` and `Save` buttons will then become active.
-Position the camera at the object(s) in your scene, and click the `Freeze/Edit` button (or press the spacebar). The live camera view will then be 'frozen' and you will be able to draw bounding boxes over the objects. You can then select the appropriate object class for each bounding box in the grid table in the control window. When you are done labelling the image, click the depressed `Freeze/Edit` button again to save the data and unfreeze the camera view for the next image.
+Position the camera at the object(s) in your scene, and click the `Freeze/Edit` button (or press the spacebar). The live camera view will then be 'frozen' and you will be able to draw bounding boxes over the objects. You can then select the appropriate object class for each bounding box in the grid table in the control window. When you are done labeling the image, click the depressed `Freeze/Edit` button again to save the data and unfreeze the camera view for the next image.
Other widgets in the control window include:
diff --git a/docs/pytorch-collect.md b/docs/pytorch-collect.md
index 9129e9ab7..f01b23456 100644
--- a/docs/pytorch-collect.md
+++ b/docs/pytorch-collect.md
@@ -5,7 +5,7 @@
# Collecting your own Classification Datasets
-In order to collect your own datasets for training customized models to classify objects or scenes of your choosing, we've created an easy-to-use tool called `camera-capture` for capturing and labelling images on your Jetson from live video:
+In order to collect your own datasets for training customized models to classify objects or scenes of your choosing, we've created an easy-to-use tool called `camera-capture` for capturing and labeling images on your Jetson from live video:
diff --git a/docs/pytorch-plants.md b/docs/pytorch-plants.md
index 3adc768fa..2e60d15b9 100644
--- a/docs/pytorch-plants.md
+++ b/docs/pytorch-plants.md
@@ -181,7 +181,7 @@ imagenet.py --model=plants/resnet18.onnx --input_blob=input_0 --output_blob=outp
Looks like I should be watching out for poison ivy!
-Next, we're going to cover a camera-based tool for collecting and labelling your own datasets captured from live video.
+Next, we're going to cover a camera-based tool for collecting and labeling your own datasets captured from live video.
Next | Collecting your own Classification Datasets
diff --git a/docs/pytorch-transfer-learning.md b/docs/pytorch-transfer-learning.md
index 88bb31370..4e68dfd29 100644
--- a/docs/pytorch-transfer-learning.md
+++ b/docs/pytorch-transfer-learning.md
@@ -11,7 +11,7 @@ Transfer learning is a technique for re-training a DNN model on a new dataset, w
Although training is typically performed on a PC, server, or cloud instance with discrete GPU(s) due to the often large datasets used and the associated computational demands, by using transfer learning we're able to re-train various networks onboard Jetson to get started with training and deploying our own DNN models.
-PyTorch is the machine learning framework that we'll be using, and example datasets along with training scripts are provided to use below, in addition to a camera-based tool for collecting and labelling your own training datasets.
+PyTorch is the machine learning framework that we'll be using, and example datasets along with training scripts are provided to use below, in addition to a camera-based tool for collecting and labeling your own training datasets.
## Installing PyTorch
diff --git a/docs/segnet-console-2.md b/docs/segnet-console-2.md
index 33c27dd37..5ed7c2969 100644
--- a/docs/segnet-console-2.md
+++ b/docs/segnet-console-2.md
@@ -4,7 +4,7 @@
Semantic Segmentation
# Semantic Segmentation with SegNet
-The next deep learning capability we'll cover in this tutorial is **semantic segmentation**. Semantic segmentation is based on image recognition, except the classifications occur at the pixel level as opposed to the entire image. This is accomplished by *convolutionalizing* a pre-trained image recognition backbone, which transforms the model into a [Fully Convolutional Network (FCN)](https://arxiv.org/abs/1605.06211) capable of per-pixel labelling. Especially useful for environmental perception, segmentation yields dense per-pixel classifications of many different potential objects per scene, including scene foregrounds and backgrounds.
+The next deep learning capability we'll cover in this tutorial is **semantic segmentation**. Semantic segmentation is based on image recognition, except the classifications occur at the pixel level as opposed to the entire image. This is accomplished by *convolutionalizing* a pre-trained image recognition backbone, which transforms the model into a [Fully Convolutional Network (FCN)](https://arxiv.org/abs/1605.06211) capable of per-pixel labeling. Especially useful for environmental perception, segmentation yields dense per-pixel classifications of many different potential objects per scene, including scene foregrounds and backgrounds.