Program Introduction

Leverage pre-trained models for computer vision inference. You will convert pre-trained models into the framework-agnostic Intermediate Representation with the Model Optimizer, and perform efficient inference on deep learning models through the hardware-agnostic Inference Engine. Finally, you will deploy an app at the edge, including sending information through MQTT, and analyze model performance and use cases.
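As a rough preview of that workflow, the sketch below uses the 2020-era Inference Engine Python API that ships with the Intel® OpenVINO™ toolkit (it will not run without the toolkit installed); the model file names and device string are placeholders, and exact API details vary between toolkit releases:

```python
import numpy as np
from openvino.inference_engine import IECore  # requires the Intel OpenVINO toolkit

# Load the Intermediate Representation (.xml/.bin) produced by the Model Optimizer
ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")  # placeholder paths

# Compile the network for a target device ("CPU", "GPU", "MYRIAD", ...)
exec_net = ie.load_network(network=net, device_name="CPU")

# Run synchronous inference on a dummy input shaped like the model's input
input_name = next(iter(net.input_info))
input_shape = net.input_info[input_name].input_data.shape
result = exec_net.infer(inputs={input_name: np.zeros(input_shape, dtype=np.float32)})
```

The same compiled-once, infer-many pattern carries through the whole course; only the `device_name` changes when you target different hardware.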

Introduction to AI at the Edge

Leveraging Pre-Trained Models

The Model Optimizer

The Inference Engine

Deploying an Edge App

Project Deploy a People Counter App at the Edge

Introduction to Hardware at the Edge

Grow your expertise in choosing the right hardware. Identify key hardware specifications of various hardware types (CPU, VPU, FPGA, and Integrated GPU). Utilize the Intel® DevCloud for the Edge to test model performance and deploy power-efficient deep neural network inference on the various hardware types. Finally, you will distribute workloads across available compute devices in order to improve model performance.

CPUs and Integrated GPUs

VPUs

FPGAs

Project Smart Queuing System

Introduction to Software Optimization

Learn how to optimize your model and application code to reduce inference time when running your model at the edge. Use different software optimization techniques to improve the inference time of your model. Calculate how computationally expensive your model is. Use the DL Workbench to optimize your model and benchmark its performance. Use Intel® VTune™ Amplifier to find and fix hotspots in your application code. Finally, package your application code and data so that they can be easily deployed to multiple devices.

Reducing Model Operations

Reducing Model Size

Other Optimization Tools and Techniques

Project Computer Pointer Controller

01. Notebooks and Workspaces

Welcome to Intel® Edge AI for IoT Developers

Hi and welcome to the program! Chances are, if you’re taking this program, you already have a working knowledge of deep learning models and how to train and deploy them. But what if you need to deploy these models locally, on devices that must process data in real time without sending it to the cloud? That’s where edge AI becomes a critical skill set for any developer tasked with solving such use cases.

Edge AI applications are revolutionizing the IoT industry by bringing fast, intelligent behavior to the locations where it is needed. In this Nanodegree program, you will learn how to develop and optimize Edge AI systems using the Intel® OpenVINO™ toolkit, and how to utilize the Intel® DevCloud platform to test different model configurations before you purchase any hardware. We’re going to take you through how to leverage pre-trained models in existing apps. Then we’ll take it further, showing you how to choose the right hardware given a set of requirements. And finally, we’ll take you through some more advanced optimization techniques to improve the performance of your system.

By the time you complete this Nanodegree program, you’ll be ready to design, test, and deploy an edge AI application in any number of settings for a broad array of different companies, industries, and devices. As a graduate of this program, you will be able to:

  • Leverage the Intel® OpenVINO™ toolkit to fast-track the development of high-performance computer vision and deep learning inference applications.
  • Run pre-trained deep learning models for computer vision on-prem.
  • Identify key hardware specifications of various hardware types (CPU, VPU, FPGA, and Integrated GPU).
  • Utilize the Intel® DevCloud to test model performance on various hardware types (CPU, VPU, FPGA, and Integrated GPU).

This program consists of 3 courses and 3 projects. Each project you build is an opportunity to apply what you’ve learned in the course and to demonstrate to potential employers that you have skills in these areas.

We’re excited to have you with us on this journey and wish you all the best as you get started!