
AI-on-Intel

AI from Data Center to the Edge - An optimized path using Intel Architecture

This GitHub repository accompanies the Intel AI Developer Program course "AI from Data Center to the Edge". The course is available for registration at https://software.seek.intel.com/DataCenter_to_Edge_REG

Objective

Demonstrate the data science process using Intel's hardware and software portfolio, with a hands-on understanding of building a deep learning model and deploying it to the edge.

  • Using transfer learning for an image classification problem
  • Exploratory data analysis on the Vehicle Make and Model Recognition (VMMR) dataset
  • Training three different network architectures (Inception V3, VGG and MobileNet) on the Intel AI DevCloud
  • Obtaining the graphs and weights of the trained networks
  • Evaluating the models and freezing the resulting graphs
  • Deploying to the edge using the Intel OpenVINO Toolkit

For setting up the environment on your workstation or on the Intel AI DevCloud, refer to the Installation Instructions doc

Exploratory Data Analysis on Vehicle Make and Model Recognition (VMMR) dataset

The project starts with the VMMRdb dataset, which contains 9,170 classes covering 76 identified car manufacturers and 291,752 samples in total. The dataset spans models manufactured between 1950 and 2016. VMMRdb contains images taken by different users, with different imaging devices and from multiple view angles, ensuring a wide range of variations that account for the scenarios encountered in real life.

For data creation, preprocessing and data augmentation, see Data_analysis
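The Data_analysis notebook holds the actual pipeline; as a rough illustration of the kind of on-the-fly augmentation used to add variation to an image dataset like VMMRdb, here is a minimal NumPy sketch (the function name and crop fraction are illustrative, not taken from the repo):

```python
import numpy as np

def augment(image, crop_frac=0.9, rng=None):
    """Random horizontal flip followed by a random crop of an H x W x C image."""
    rng = np.random.default_rng() if rng is None else rng
    if rng.random() < 0.5:
        image = image[:, ::-1, :]  # horizontal flip
    h, w = image.shape[:2]
    ch, cw = int(h * crop_frac), int(w * crop_frac)
    top = rng.integers(0, h - ch + 1)   # random crop origin
    left = rng.integers(0, w - cw + 1)
    return image[top:top + ch, left:left + cw, :]

# Example: augment a dummy 100x100 RGB image
img = np.zeros((100, 100, 3), dtype=np.uint8)
out = augment(img)  # shape becomes (90, 90, 3)
```

In practice the crop would be resized back to the network's input resolution (299x299 for Inception V3) before training.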

Training three different network architectures (Inception V3, VGG and MobileNet) on Intel DevCloud

Training a model involves multiple steps: choosing a framework (TensorFlow), applying transfer learning (InceptionV3, VGG16, MobileNet), training the models and tuning them for better performance, hyperparameter tuning, and generating a frozen graph (a trained model) for inference at the edge.

For training and a comparison of performance and results, see training
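The training notebooks define the actual models; the transfer-learning pattern they describe (a pre-trained feature extractor with a new classification head) can be sketched with tf.keras roughly as follows. The head size and hyperparameters are illustrative assumptions, not the repo's settings:

```python
import tensorflow as tf

def build_transfer_model(num_classes, weights="imagenet"):
    """InceptionV3 feature extractor topped with a fresh classification head."""
    base = tf.keras.applications.InceptionV3(
        include_top=False, weights=weights, input_shape=(299, 299, 3))
    base.trainable = False  # freeze the pre-trained convolutional layers

    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    x = tf.keras.layers.Dense(1024, activation="relu")(x)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)

    model = tf.keras.Model(base.input, outputs)
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Only the new head is trained initially; unfreezing some top layers of the base for fine-tuning at a lower learning rate is a common follow-up step.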

Evaluation of the models

Interpreting the results of the training by analyzing the models with different metrics and plots, see here

  • Confusion Matrix
  • Classification Report
  • Precision-Recall Plot
  • ROC Plot
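The evaluation notebook produces these artifacts; the first two can be computed directly from predicted labels with scikit-learn, for example (toy labels shown, not VMMR results):

```python
from sklearn.metrics import classification_report, confusion_matrix

# Toy ground-truth and predicted class labels (illustrative only)
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]

cm = confusion_matrix(y_true, y_pred)
print(cm)
# [[1 1 0]
#  [0 2 0]
#  [1 0 1]]

# Per-class precision, recall and F1-score in one table
print(classification_report(y_true, y_pred, digits=3))
```

Rows of the confusion matrix are true classes and columns are predicted classes, so off-diagonal entries reveal which makes and models get confused with each other.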

Deployment to the Edge using Intel OpenVINO Toolkit

Inference at the edge is real-time evaluation of a model subject to constraints on power, latency and memory. Intel's OpenVINO toolkit optimizes pre-trained models for inference on Intel processors.
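Programmatically, running an optimized IR model follows a load/compile/infer pattern. The sketch below uses the OpenVINO Runtime Python API (2022+ releases; earlier toolkit versions shipped the IECore API instead), and the file names are placeholders rather than files from this repo:

```python
import numpy as np

def classify_on_edge(model_xml, image, device="CPU"):
    """Load an OpenVINO IR model and return the top class index for one image."""
    # Imported inside the function so the sketch stays self-contained;
    # requires the `openvino` package from the OpenVINO toolkit.
    from openvino.runtime import Core

    core = Core()
    model = core.read_model(model_xml)            # reads the .xml/.bin IR pair
    compiled = core.compile_model(model, device)  # "CPU", "GPU", "MYRIAD", ...
    probs = compiled([image])[compiled.output(0)]
    return int(np.argmax(probs))

# Usage (placeholder path; `image` must match the model's input shape):
# top = classify_on_edge("frozen_inception_v3.xml", image, device="CPU")
```

Selecting "MYRIAD" as the device targets Movidius hardware, mirroring the device choice exposed in the GUI described below.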

  • Identification of stolen cars

  • In the same initialized environment, run the Inference_GUI.py script to see a working solution to the problem of building a car theft classification application using Intel hardware and software tools.

  • In the GUI, select the XML file generated by the Model Optimizer, the labels file and the test samples; you can also configure the inference to run on CPU, GPU or Movidius hardware.


For deployment of the frozen model using the OpenVINO Toolkit, see deployment
