updated docs
dusty-nv committed Aug 3, 2021
1 parent c1010d8 commit 171763d
Showing 4 changed files with 4 additions and 4 deletions.
docs/depthnet.md (1 addition, 1 deletion)

@@ -8,7 +8,7 @@ Depth sensing is useful for tasks such as mapping, navigation and obstacle detec

<img src="https://github.com/dusty-nv/jetson-inference/raw/dev/docs/images/depthnet-0.jpg">

-The [`depthNet`]((../c/depthNet.h) object accepts a single color image as input, and outputs the depth map. The depth map is colorized for visualization, but the raw [depth field](#getting-the-raw-depth-field) is also accessible for directly accessing the depths. [`depthNet`]((../c/depthNet.h) is available to use from [Python](https://rawgit.com/dusty-nv/jetson-inference/python/docs/html/python/jetson.inference.html#depthNet) and [C++](../c/depthNet.h).
+The [`depthNet`](../c/depthNet.h) object accepts a single color image as input, and outputs the depth map. The depth map is colorized for visualization, but the raw [depth field](#getting-the-raw-depth-field) is also accessible for directly accessing the depths. [`depthNet`](../c/depthNet.h) is available to use from [Python](https://rawgit.com/dusty-nv/jetson-inference/python/docs/html/python/jetson.inference.html#depthNet) and [C++](../c/depthNet.h).

As examples of using the `depthNet` class, we provide sample programs for C++ and Python:

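For context, a minimal Python sketch of the `depthNet` workflow the corrected paragraph describes (illustrative only, not part of this commit; the model name and input file are assumptions):

```python
import jetson.inference
import jetson.utils

# load a monocular depth network ("fcn-mobilenet" assumed as the model name)
net = jetson.inference.depthNet("fcn-mobilenet")

img = jetson.utils.loadImage("input.jpg")   # hypothetical input image
net.Process(img)                            # run inference on the color image

# the raw depth field is accessible alongside the colorized visualization
depth = net.GetDepthField()
print("raw depth field:", depth.width, "x", depth.height)
```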
docs/detectnet-console-2.md (1 addition, 1 deletion)

@@ -8,7 +8,7 @@ The previous recognition examples output class probabilities representing the en

<img src="https://github.com/dusty-nv/jetson-inference/raw/dev/docs/images/detectnet.jpg" >

-The [`detectNet`]((../c/detectNet.h) object accepts an image as input, and outputs a list of coordinates of the detected bounding boxes along with their classes and confidence values. [`detectNet`]((../c/detectNet.h) is available to use from [Python](https://rawgit.com/dusty-nv/jetson-inference/python/docs/html/python/jetson.inference.html#detectNet) and [C++](../c/detectNet.h). See below for various [pre-trained detection models](#pre-trained-detection-models-available) available for download. The default model used is a [91-class](../data/networks/ssd_coco_labels.txt) SSD-Mobilenet-v2 model trained on the MS COCO dataset, which achieves realtime inferencing performance on Jetson with TensorRT.
+The [`detectNet`](../c/detectNet.h) object accepts an image as input, and outputs a list of coordinates of the detected bounding boxes along with their classes and confidence values. [`detectNet`](../c/detectNet.h) is available to use from [Python](https://rawgit.com/dusty-nv/jetson-inference/python/docs/html/python/jetson.inference.html#detectNet) and [C++](../c/detectNet.h). See below for various [pre-trained detection models](#pre-trained-detection-models-available) available for download. The default model used is a [91-class](../data/networks/ssd_coco_labels.txt) SSD-Mobilenet-v2 model trained on the MS COCO dataset, which achieves realtime inferencing performance on Jetson with TensorRT.

As examples of using the `detectNet` class, we provide sample programs for C++ and Python:

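Likewise, a minimal sketch of the `detectNet` usage described above (a sketch assuming the jetson.inference Python bindings are installed; the input file name is illustrative):

```python
import jetson.inference
import jetson.utils

# load the default 91-class SSD-Mobilenet-v2 model mentioned above
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

img = jetson.utils.loadImage("input.jpg")   # hypothetical input image
detections = net.Detect(img)                # bounding boxes + classes + confidences

for det in detections:
    print(net.GetClassDesc(det.ClassID), det.Confidence,
          det.Left, det.Top, det.Right, det.Bottom)
```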
docs/posenet.md (1 addition, 1 deletion)

@@ -8,7 +8,7 @@ Pose estimation consists of locating various body parts (aka keypoints) that for

<img src="https://github.com/dusty-nv/jetson-inference/raw/dev/docs/images/posenet-0.jpg">

-The [`poseNet`]((../c/poseNet.h) object accepts an image as input, and outputs a list of object poses. Each object pose contains a list of detected keypoints, along with their locations and links between keypoints. You can query these to find particular features. [`poseNet`]((../c/poseNet.h) can be used from [Python](https://rawgit.com/dusty-nv/jetson-inference/python/docs/html/python/jetson.inference.html#poseNet) and [C++](../c/poseNet.h).
+The [`poseNet`](../c/poseNet.h) object accepts an image as input, and outputs a list of object poses. Each object pose contains a list of detected keypoints, along with their locations and links between keypoints. You can query these to find particular features. [`poseNet`](../c/poseNet.h) can be used from [Python](https://rawgit.com/dusty-nv/jetson-inference/python/docs/html/python/jetson.inference.html#poseNet) and [C++](../c/poseNet.h).

As examples of using the `poseNet` class, we provide sample programs for C++ and Python:

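A minimal sketch of querying pose keypoints as described above (illustrative; the model name, input file, and keypoint choice are assumptions):

```python
import jetson.inference
import jetson.utils

net = jetson.inference.poseNet("resnet18-body")   # body keypoint model
img = jetson.utils.loadImage("input.jpg")         # hypothetical input image

poses = net.Process(img)                          # list of detected object poses

for pose in poses:
    # query a particular feature by name; returns -1 if it wasn't detected
    idx = pose.FindKeypoint("left_wrist")
    if idx >= 0:
        kp = pose.Keypoints[idx]
        print("left wrist at", kp.x, kp.y)
```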
docs/segnet-console-2.md (1 addition, 1 deletion)

@@ -8,7 +8,7 @@ The next deep learning capability we'll cover in this tutorial is **semantic seg

<img src="https://github.com/dusty-nv/jetson-inference/raw/pytorch/docs/images/segmentation.jpg">

-[`segNet`]((../c/segNet.h) accepts as input the 2D image, and outputs a second image with the per-pixel classification mask overlay. Each pixel of the mask corresponds to the class of object that was classified. [`segNet`]((../c/segNet.h) is available to use from [Python](https://rawgit.com/dusty-nv/jetson-inference/pytorch/docs/html/python/jetson.inference.html#segNet) and [C++](../c/segNet.h).
+[`segNet`](../c/segNet.h) accepts as input the 2D image, and outputs a second image with the per-pixel classification mask overlay. Each pixel of the mask corresponds to the class of object that was classified. [`segNet`](../c/segNet.h) is available to use from [Python](https://rawgit.com/dusty-nv/jetson-inference/pytorch/docs/html/python/jetson.inference.html#segNet) and [C++](../c/segNet.h).

As examples of using the `segNet` class, we provide sample programs for C++ and Python:

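And a minimal sketch of the `segNet` overlay workflow described above (a sketch, not part of this commit; the model and file names are assumptions):

```python
import jetson.inference
import jetson.utils

net = jetson.inference.segNet("fcn-resnet18-voc")   # assumed model name
img = jetson.utils.loadImage("input.jpg")           # hypothetical input image

net.Process(img)   # classify each pixel of the input

# allocate a second image and render the per-pixel classification mask overlay
overlay = jetson.utils.cudaAllocMapped(width=img.width, height=img.height,
                                       format=img.format)
net.Overlay(overlay)
jetson.utils.saveImage("overlay.jpg", overlay)
```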
