cobalt12345/DeeplensMoneyCounter

This repository shows how to work with machine learning on the AWS DeepLens edge device. It is based on the example project provided by Amazon: Amazon SageMaker Object Detection from Scratch.

The main goal of this repository is to show how to prepare your own training data set with AWS DeepLens, train a model with SageMaker, and avoid the expenses caused by pricey AWS resources such as SageMaker Studio and AWS Ground Truth. It's rather easy to forget to release resources or to underestimate the price of a labeling job, which may cost you a few hundred dollars. I assume that you have already set up your AWS account, registered your DeepLens device, and played with the example projects, so I won't explain the basics.
In this example, I'm focusing on the most interesting part (imho): collecting and preparing training data. All the examples I have seen ignore this important step.

Our Goal

Let's make our AWS DeepLens recognize banknotes. As an example, I used Russian rubles.

Steps

  1. Collect a number of images by capturing frames from AWS DeepLens. It makes sense to use the same camera for preparing and for processing data.
  2. Label the prepared images. During the labeling job you will mark objects with bounding boxes.
  3. Transform the labels into the format consumed by the model.
  4. Train the model and deploy it to the DeepLens device.

Collect Images

The simplest way to do that is to capture frames one by one from the project stream.
Deploy a pretrained model and a Lambda function to the device. It's necessary to increase the 'detection_threshold' in the Lambda function so that no bounding boxes appear, as we don't need them on training images (see the sketch below).
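
For reference, the detection gate in a typical DeepLens inference Lambda looks roughly like the sketch below. It follows the structure of the standard awscam samples; the model path, input dimensions, and variable names are assumptions that may differ in your function.

    # Sketch of the detection gate in a typical DeepLens inference Lambda,
    # following the standard awscam samples; paths and sizes are assumptions.
    import awscam
    import cv2

    model_path = '/opt/awscam/artifacts/your_model.xml'  # hypothetical artifact name
    model = awscam.Model(model_path, {'GPU': 1})
    model_type = 'ssd'
    input_height, input_width = 300, 300
    detection_threshold = 0.99  # raised high while collecting training frames,
                                # so no bounding boxes get drawn on the images

    while True:
        ret, frame = awscam.getLastFrame()
        if not ret:
            raise Exception('Failed to get a frame from the video stream')
        frame_resized = cv2.resize(frame, (input_height, input_width))
        results = model.parseResult(model_type, model.doInference(frame_resized))
        yscale = float(frame.shape[0]) / float(input_height)
        xscale = float(frame.shape[1]) / float(input_width)
        for obj in results[model_type]:
            # A box is drawn only when confidence exceeds the threshold.
            if obj['prob'] > detection_threshold:
                cv2.rectangle(frame,
                              (int(xscale * obj['xmin']), int(yscale * obj['ymin'])),
                              (int(xscale * obj['xmax']), int(yscale * obj['ymax'])),
                              (255, 165, 20), 10)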

The bash_scripts folder contains helpful scripts that make working with the device more convenient.
Connect to your device using a Micro HDMI cable and copy the scripts folder.

Use 'capture_single_frame.sh' to save frames from the device output (project stream) to the file system. The script is not perfect, and sometimes it's necessary to run it several times because of artifacts in the images.
Generally, the more training images you collect, the more precise your model's predictions will be.
Download the images from your device to the local file system when you're ready.
The dataset folder contains my images, collected for both sides of two banknotes.
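
If the shell script keeps producing broken frames, the same capture can be done directly in Python with OpenCV. This is only a sketch: it assumes the project stream is exposed on the device as /tmp/results.mjpeg (the path commonly used when viewing the stream locally); adjust it if your setup differs.

    # Grab a single frame from the DeepLens project stream and save it to disk.
    # /tmp/results.mjpeg is an assumption about where the stream is exposed.
    import time
    import cv2

    cap = cv2.VideoCapture('/tmp/results.mjpeg')
    ret, frame = cap.read()
    cap.release()

    if ret:
        filename = 'frame_%d.jpg' % int(time.time())
        cv2.imwrite(filename, frame)
        print('Saved', filename)
    else:
        # Same caveat as with the script: re-run if the read fails.
        print('Could not read a frame - try again')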

Data Labeling

Amazon provides the AWS Ground Truth service for image labeling. It delivers the result in a format that can be consumed by the training job without any transformation. Unfortunately, it's pretty expensive and not very usable. I recommend signing up for Labelbox instead, as it offers a free trial, and its export can be transformed using resize_dataset_300x300.py. This script runs locally, resizes the images (not needed for this example), and creates a manifest file that will be used by the training job (a sketch of this conversion follows the sequence below).

The whole sequence looks as follows:

  • create a new project on Labelbox for each subset of images (1000_front, 1000_back, 2000_front, 2000_back), upload your images, and complete the labeling
  • modify the script with your data so that it downloads the export from Labelbox (the export can be found in the 'Export' section of the Labelbox UI)
  • execute the script for each project and manually merge the resulting data into a single manifest file; see sg-manifest.manifest
    ℹ️ More about the manifest file format can be found here
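
For orientation, a stripped-down sketch of such a conversion is shown below. The input field names ('External ID', 'Label', 'objects', 'bbox', 'title') are assumptions based on the classic Labelbox JSON export, and the output lines follow the Ground Truth bounding-box manifest layout; adapt both sides to your actual export and to resize_dataset_300x300.py.

    # Hedged sketch: convert a (classic) Labelbox JSON export into
    # SageMaker augmented-manifest lines. All Labelbox field names below
    # are assumptions - verify them against your own export.
    import json

    CLASS_MAP = {'1000_front': 0, '1000_back': 1, '2000_front': 2, '2000_back': 3}
    S3_PREFIX = 's3://your-bucket/dataset/'  # hypothetical bucket; use your own

    def to_manifest_line(row, width=300, height=300):
        annotations = []
        for obj in row['Label']['objects']:
            box = obj['bbox']
            annotations.append({
                'class_id': CLASS_MAP[obj['title']],
                'left': box['left'], 'top': box['top'],
                'width': box['width'], 'height': box['height'],
            })
        return json.dumps({
            'source-ref': S3_PREFIX + row['External ID'],
            'bounding-box': {
                'image_size': [{'width': width, 'height': height, 'depth': 3}],
                'annotations': annotations,
            },
            'bounding-box-metadata': {
                'class-map': {str(v): k for k, v in CLASS_MAP.items()},
                'type': 'groundtruth/object-detection',
                'human-annotated': 'yes',
            },
        })

    with open('export.json') as f, open('sg-manifest.manifest', 'w') as out:
        for row in json.load(f):
            out.write(to_manifest_line(row) + '\n')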

Model Training

Instead of using SageMaker Studio, it's more reasonable to start experimenting in SageMaker Studio Lab. It's free of charge and has everything we need. Don't hesitate to ask Amazon to provide you with an account.

  • create and run a new environment in SageMaker Studio Lab
  • upload the notebook and incubator.tar.gz.part*
  • open a new terminal: File -> New -> Terminal
  • join the archive parts: cat incubator.tar.gz.parta* > incubator.tar.gz
  • extract the incubator: tar -xzf incubator.tar.gz
  • run the notebook step by step. Read the provided descriptions carefully and modify the code if necessary!
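
Before running the training cells, it can save a failed run to sanity-check the manifest first. A minimal sketch, assuming the JSON Lines layout described in the labeling section:

    # Quick sanity check of sg-manifest.manifest before training:
    # every line must be valid JSON with a source-ref and at least one box.
    import json

    with open('sg-manifest.manifest') as f:
        for i, line in enumerate(f, start=1):
            record = json.loads(line)  # raises on malformed JSON
            assert 'source-ref' in record, 'line %d: missing source-ref' % i
            assert record['bounding-box']['annotations'], 'line %d: no boxes' % i
    print('Manifest looks consistent')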

Deploy Trained Model to the Device

Update and re-deploy your DeepLens project.
Don't forget to decrease the 'detection_threshold' value in your Lambda function. Otherwise, you won't see any bounding boxes, because the 'confidence_score' will always be lower than the detection threshold.

Troubleshooting

Useful information can be found in the AWS article Logging and Troubleshooting Your AWS DeepLens Project.
⚠️ Every time you change your Lambda function or re-train the model, don't forget to publish a new version of the function, re-import the model, and update the AWS DeepLens project before re-deploying it to the device.
⚠️ Sometimes the device demonstrates weird behavior: you re-deploy the model, but it predicts the same way as before.
Try backing up the device folder '/opt/awscam/artifacts', removing everything from it, and then re-deploying your project.

Feel free to ask questions. You may contact me via the e-mail address exposed in my profile.
