Introducing the Ascend Eco-Platform for Intelligent UAVs, powered by the Atlas 200 DK and the DJI Tello, bringing real-time deep learning and fast prototyping to developers. This project was designed with modularity and rapid prototype development in mind. We hope to build ready-to-use, modularized capabilities for developers interested in taking deep learning to the realm of UAVs.
Official Atlas 200 DK Developer Kit
Ascend Samples
Ascend ModelZoo
HiFly Introduction YouTube Video
HiFly PID Tracker Video
- Atlas 200 DK
- DJI Tello Ryze
- Wireless Router (TP-Link TL-WR902AC)
- RoadMap
- How to Contribute
- Project Ideas
- TP Link Wireless Router Setup
- Project: Parallel Inference
- Project: PID Tracker
The following covers the plain Python implementation. For the ROS implementation, please refer to this guide.
Installation
How to run the project
Code Implementation and how to extend
Available Modules
- Log in to the Atlas 200 DK from your PC (refer to this guide on how to set it up and access it). **Note:** it is required to use VSCode with the `Remote-SSH` extension to log in remotely, otherwise you might not get the video stream to display on your PC.
- On the Atlas 200 DK, git clone this repo. (No internet access? Try connecting the Atlas 200 DK to a router with an Ethernet cable. For details, check the official document.)

      git clone https://github.com/Ascend-Huawei/HiFly_Drone.git
- Navigate to the project directory:

      cd HiFly_Drone
- Create and activate a Python virtual environment:

      python3 -m venv hifly && source hifly/bin/activate
- Install the required dependencies to run this project:

      pip3 install -r requirements.txt
This section covers how to run inference on the UAV's camera live feed.
Knowing how to build an AI inference application on the Ascend AI processor is required for the following steps. Basically, you first need to get an offline inference model (`.om` file) by converting a pretrained deep learning model (TensorFlow/PyTorch/MindSpore/Caffe) with the Ascend Tensor Compiler (ATC) tool. The APIs to run the om model are provided by AscendCL. A quick guide and experiment can be found here.
- Get the model: refer to the list of supported modules below and download the model you wish to run. Once downloaded, rename the `.om` file and store it inside the `models` subdirectory.
- Turn on the DJI Tello and connect it to the 200 DK via a wireless router. For more details, please refer to the TP-Link Wireless Router Setup Guide. Note that you are not limited to TP-Link wireless routers.
- Once the offline model is in place and the UAV is connected to the 200 DK, activate the virtual environment and navigate to the project's `src` subdirectory.
- Prepare the presenter server to view live video footage:

      bash lib/server/run_presenter_server.sh uav_presenter_server.conf
- Run real-time inference:

      python3 main.py
- Running the above will display a menu in the terminal. Select the task and model you wish to use. The inference result can be viewed at http://127.0.0.1:7007.
Click here to learn more about what the program is doing behind the scenes.
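As a rough sketch of what happens behind the scenes (function and parameter names here are illustrative, not the project's actual API): `main.py` repeatedly grabs a frame from the Tello camera, runs it through the selected `Processor`'s preprocess/inference/postprocess pipeline, and forwards the result to the presenter server.

```python
# Illustrative sketch of the main inference loop; the real logic lives in
# src/main.py and uses AscendCL for the model-execute step.
def run_loop(get_frame, processor, send_to_server, max_frames=None):
    """Grab frames, run them through the Processor pipeline, forward results."""
    count = 0
    while max_frames is None or count < max_frames:
        frame = get_frame()                        # raw frame from the UAV camera
        inputs = processor.preprocess(frame)       # resize/normalize for the .om model
        outputs = processor.model_execute(inputs)  # offline-model inference
        result = processor.postprocess(outputs, frame)  # e.g. draw detection boxes
        send_to_server(result)                     # stream to the presenter server
        count += 1
```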
- Download the YOLO Face Detector (`yolo_model.om`), rename it to `face_detection.om`, and store the file inside the `models` subdirectory.
- Follow this guide to run the PID tracker. (Set `--use_ps=True` for a live stream on your PC.)
The modularity of this project is enabled by the `ModuleSelector`. It takes the user's specification and dynamically loads the corresponding `Processor` to handle real-time inference.
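As a minimal sketch of this pattern (the actual `ModuleSelector` in `src/ModuleSelector.py` may differ), a task/model lookup table can drive dynamic `Processor` loading by class name:

```python
# Minimal sketch of selector-style dynamic dispatch; the table entries and
# class below are illustrative stand-ins, not the project's real definitions.
TASKS = {
    "object_detection": {
        "face_detection": {
            "model_path": "models/face_detection.om",
            "model_processor": "FaceDetectionProcessor",
        },
    },
}

class FaceDetectionProcessor:
    """Stand-in Processor; the real one wraps an AscendCL offline model."""
    def __init__(self, cfg):
        self.model_path = cfg["model_path"]

def select_module(task, model):
    """Look up the task/model entry and instantiate its Processor by name."""
    cfg = TASKS[task][model]
    processor_cls = globals()[cfg["model_processor"]]  # resolve class from its name
    return processor_cls(cfg)
```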
| File | Description |
| --- | --- |
| `main.py` | Main script that invokes the interface and prompts the user to select a model for inference |
| `ModuleSelector.py` | `ModuleSelector` Python class that takes the user's specifications and decides which task-specific `Processor` to load for inference |
| `BaseProcessor.py` | Parent `Processor` class responsible for initializing ACL resources and model parameters. Each inference model has its own `Processor` (child of `BaseProcessor`) for pre- and post-processing |
| `params.py` | Hash table storing information for each task (which offline model to use and its corresponding `Processor`) |
To add your own inference module to this project, you need:
- An offline model stored in the `models` subdirectory (you can view the projects in Ascend Samples for more ideas, or use the Ascend Tensor Compiler [ATC] to convert .caffe or .pb models into .om models).
- A custom `Processor` class (inherited from `BaseProcessor`) that takes care of the offline model's inputs and outputs by overriding the preprocess and postprocess methods.
- The model's info registered in the dictionary in `src/utils/params.py`, so that the `ModuleSelector` knows which `Processor` and model to load during selection.

For example, to add a YOLO face detector module for object detection, you would add another dictionary item inside `object_detection`. The dictionary is required to have the following keys to run the `ModuleSelector` and `Processor`: `model_width`, `model_height`, `model_path`, `model_processor`
    "object_detection": {
        "face_detection": {
            "model_width": 416,
            "model_height": 416,
            "model_path": os.path.join(paths["MODEL_PATH"], "face_detection.om"),
            "model_processor": "FaceDetectionProcessor",
            "model_info": "<model description>",
            "camera_width": 960,    # <optional parameters>
            "camera_height": 720,
        }
    }
Note that you may also pass other parameters in the dictionary for later use by deconstructing them from the `params` argument in your `Processor` class.
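To make the steps above concrete, here is a hedged sketch of a custom `Processor`. `BaseProcessor` below is a minimal stand-in (the real `src/BaseProcessor.py` additionally initializes ACL resources and loads the `.om` model), and the method bodies are illustrative, not the project's actual pre/post-processing.

```python
import numpy as np

class BaseProcessor:
    """Stand-in for src/BaseProcessor.py; the real class also sets up ACL
    resources and loads the offline (.om) model."""
    def __init__(self, params):
        self.model_width = params["model_width"]
        self.model_height = params["model_height"]

class FaceDetectionProcessor(BaseProcessor):
    """Hypothetical Processor for a YOLO face-detection .om model."""

    def preprocess(self, frame):
        # Fit the frame to the model's input resolution and scale to [0, 1].
        # (np.resize is only a shape-fitting placeholder here; real code would
        # use an image resize such as cv2.resize to preserve content.)
        resized = np.resize(frame, (self.model_height, self.model_width, 3))
        return (resized / 255.0).astype(np.float32)

    def postprocess(self, outputs, frame):
        # Keep only detections above a confidence threshold (illustrative).
        return [det for det in outputs if det["score"] > 0.5]
```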
A list of integrated modules.
| Model | Rename the OM | Terminal Manual |
| --- | --- | --- |
| YOLO Face Detector | `face_detection.om` | `object_detection` -> `face_detection` |
| YOLOv3 Object Detection | `yolov3.om` | `object_detection` -> `object_detection` |
| Hand Detection | `hand_detection.om` | `object_detection` -> `hand_detection` |
| Indoor Depth Estimation | `indoor_depth_estimation.om` | `depth_estimation` -> `indoor_depth_estimation` |