Program Introduction

Leverage a pre-trained model for computer vision inference. You will convert pre-trained models into a framework-agnostic intermediate representation (IR) with the Model Optimizer, and perform efficient inference on deep learning models through the hardware-agnostic Inference Engine. Finally, you will deploy an app at the edge, including sending information through MQTT, and analyze model performance and use cases.
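
For instance, converting a frozen TensorFlow model to IR with the Model Optimizer and then running it through the Inference Engine looks roughly like the sketch below (a minimal sketch assuming the OpenVINO 2020.x Python API; the file names and input shape are hypothetical):

```python
# One-time conversion from a shell (Model Optimizer, hypothetical paths):
#   python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo.py \
#       --input_model frozen_model.pb --input_shape [1,224,224,3]

import cv2
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="frozen_model.xml", weights="frozen_model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

input_blob = next(iter(net.inputs))
n, c, h, w = net.inputs[input_blob].shape

# Preprocess: resize to the network's input size and reorder HWC -> CHW
frame = cv2.imread("image.jpg")
blob = cv2.resize(frame, (w, h)).transpose((2, 0, 1))[np.newaxis, ...]

result = exec_net.infer({input_blob: blob})
```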

Introduction to AI at the Edge

Leveraging Pre-Trained Models

The Model Optimizer

The Inference Engine

Deploying an Edge App

Project: Deploy a People Counter App at the Edge

Introduction to Hardware at the Edge

Grow your expertise in choosing the right hardware. Identify key specifications of various hardware types (CPU, VPU, FPGA, and integrated GPU). Utilize the Intel® DevCloud for the Edge to test model performance and deploy power-efficient deep neural network inference on the various hardware types. Finally, you will distribute workloads across available compute devices in order to improve model performance.
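
To give a feel for targeting the different device types, here is a minimal sketch (again assuming the OpenVINO 2020.x Python API, with hypothetical IR file names) that lists the devices visible to the Inference Engine and spreads inference requests across several of them with the MULTI plugin:

```python
from openvino.inference_engine import IECore

ie = IECore()
# Devices the Inference Engine can see on this machine, e.g. ['CPU', 'GPU', 'MYRIAD']
print("Available devices:", ie.available_devices)

net = ie.read_network(model="model.xml", weights="model.bin")

# The MULTI plugin distributes inference requests across the listed devices;
# num_requests > 1 keeps all devices busy in parallel.
exec_net = ie.load_network(network=net, device_name="MULTI:CPU,GPU", num_requests=4)
```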

CPUs and Integrated GPUs

VPUs

FPGAs

Project: Smart Queuing System

Introduction to Software Optimization

Learn how to optimize your model and application code to reduce inference time when running your model at the edge. Use different software optimization techniques to improve your model's inference time. Calculate how computationally expensive your model is. Use the DL Workbench to optimize your model and benchmark its performance. Use the VTune Amplifier to find and fix hotspots in your application code. Finally, package your application code and data so that they can be easily deployed to multiple devices.
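
As a concrete illustration of estimating computational cost, the sketch below counts the multiply-accumulate operations (MACs) of a single convolutional layer using the standard formula; the layer dimensions are hypothetical:

```python
def conv2d_macs(h_out, w_out, c_in, c_out, k_h, k_w):
    """MACs for one 2-D convolution: each of the h_out * w_out * c_out
    output values needs k_h * k_w * c_in multiply-accumulates."""
    return h_out * w_out * c_out * k_h * k_w * c_in

# Hypothetical layer: 224x224 output, 3 input channels, 64 filters, 3x3 kernel
macs = conv2d_macs(224, 224, 3, 64, 3, 3)
print(f"{macs / 1e6:.1f} MMACs (~{2 * macs / 1e9:.2f} GFLOPs)")
```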

Reducing Model Operations

Reducing Model Size

Other Optimization Tools and Techniques

Project: Computer Pointer Controller

TensorRT – NVIDIA

Learn about TensorRT, an advanced software development kit (SDK) developed by NVIDIA for high-speed deep learning inference.
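
As a taste of the workflow, here is a minimal sketch of building a TensorRT engine from an ONNX file (assuming the TensorRT 7.x Python API — newer releases deprecate some of these calls — and a hypothetical model path):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path):
    """Parse an ONNX model and build an optimized TensorRT engine."""
    builder = trt.Builder(TRT_LOGGER)
    # Explicit-batch networks are required when parsing ONNX models
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            return None

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30  # 1 GiB of builder scratch space
    return builder.build_engine(network, config)

engine = build_engine("resnet18.onnx")  # hypothetical ONNX export
```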

ONNX, TensorRT, and Docker Overview

NVIDIA Drivers

NVIDIA Hardware and Software, CUDA Programming API Levels

Docker Installation and Configuration

Installation of Docker CUDA Toolkit & Setup of a Dockerfile with Required Packages

TensorRT & ONNX AI Frameworks

ResNet-18 with ONNX-TensorRT

ResNet-18 TensorRT Inference

YOLOv4 ONNX DNN

YOLOv4 ONNX DNN Video

YOLOv5 ONNX Inference – OpenCV

YOLOv5 TensorRT Inference on Images

Prerequisites & Other Requirements

Prerequisites

Before you begin, please review the following prerequisites to make sure you have the skills to succeed in this Nanodegree Program.

To succeed in this program, students should have the following:

  • Intermediate knowledge of programming in Python
  • Experience with training and deploying deep learning models
  • Familiarity with different DL layers and architectures (CNN based)
  • Familiarity with the command line (bash terminal)
  • Experience using OpenCV

Hardware & Software Requirements

Please review these requirements to make sure you have what you need to complete this Nanodegree Program:

  • A 64-bit operating system with a 6th-generation or newer Intel processor, running Windows 10, Ubuntu 18.04.3 LTS, or macOS 10.13 or higher.
  • Installing OpenVINO (version 2020.1) on your local environment. OpenVINO and the software listed below only need to run locally to complete the project and exercises in Course 3. All other projects and exercises can be completed within Udacity’s classroom using workspaces.
  • Installing Intel’s Deep Learning Workbench (version 2020.1). Please note that DL Workbench does not currently support Windows 10 Home Edition; we recommend students either upgrade to Windows 10 Professional or use a Linux-based system.
  • Installing Intel’s VTune Amplifier.