Program Introduction
Leverage a pre-trained model for computer vision inferencing. You will convert pre-trained models into the framework-agnostic Intermediate Representation with the Model Optimizer, and perform efficient inference on deep learning models through the hardware-agnostic Inference Engine. Finally, you will deploy an app at the edge, including sending information through MQTT, and analyze model performance and use cases.
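For orientation, the "Pre-processing Inputs" step in this workflow usually amounts to reshaping a camera frame into the layout an IR model expects. Below is a minimal sketch using NumPy only; the function name is illustrative, and it assumes the frame has already been resized to the network's input resolution (in a real app you would first resize, e.g. with `cv2.resize`):

```python
import numpy as np

def preprocess(frame, height, width):
    """Reshape an HxWx3 uint8 frame into the 1x3xHxW float batch
    that OpenVINO IR models conventionally expect (NCHW layout).

    Assumes `frame` is already resized to (height, width).
    """
    image = frame.astype(np.float32)            # uint8 -> float32
    image = image.transpose((2, 0, 1))          # HWC -> CHW
    image = image.reshape(1, 3, height, width)  # add batch dimension
    return image

# Example: a dummy 64x64 RGB frame
frame = np.zeros((64, 64, 3), dtype=np.uint8)
batch = preprocess(frame, 64, 64)
print(batch.shape)  # (1, 3, 64, 64)
```

The resulting batch is what you would pass to the Inference Engine in the exercises later in this course.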
03. Notebooks and Workspaces
Introduction to AI at the Edge
02. What is AI at the Edge?
03. Why is AI at the Edge Important? (1:26)
04. Applications of AI at the Edge (1:09)
04. Applications of AI at the Edge Quiz
05. Historical Context (1:11)
06. Course Structure (1:26)
07. Why Are the Topics Distinct? (1:01)
08. Relevant Tools and Prerequisites
09. What You Will Build (0:40)
09.1 What You Will Build (0:15)
10. Recap (0:22)
Leveraging Pre-Trained Models
01. Introduction (0:19)
02. The OpenVINO™ Toolkit (1:41)
03. Pre-Trained Models in OpenVINO™ (1:04)
04. Types of Computer Vision Models (3:24)
05. Case Studies in Computer Vision (2:28)
06. Available Pre-Trained Models in OpenVINO™
07. Exercise: Loading Pre-Trained Models
08. Solution: Loading Pre-Trained Models (4:44)
09. Optimizations on the Pre-Trained Models (0:52)
10. Choosing the Right Model for Your App (2:25)
11. Pre-processing Inputs (3:15)
12. Exercise: Pre-processing Inputs
13. Solution: Pre-processing Inputs (5:33)
14. Handling Network Outputs (2:22)
15. Running Your First Edge App (4:23)
16. Exercise: Deploy an App at the Edge
17. Solution: Deploy an App at the Edge (7:38)
17.1 Solution: Deploy an App at the Edge
17.2 Solution: Deploy an App at the Edge
18. Recap
19. Lesson Glossary
The Model Optimizer
01. Introduction
02. The Model Optimizer
03. Optimization Techniques
04. Supported Frameworks
05. Intermediate Representations
06. Using the Model Optimizer with TensorFlow Models (4:11)
07. Exercise: Convert a TF Model
08. Solution: Convert a TF Model (2:54)
09. Using the Model Optimizer with Caffe Models
10. Exercise: Convert a Caffe Model
11. Solution: Convert a Caffe Model
12. Using the Model Optimizer with ONNX Models
13. Exercise: Convert an ONNX Model
14. Solution: Convert an ONNX Model
15. Cutting Parts of a Model
16. Supported Layers
17. Custom Layers
18. Exercise: Custom Layers
19. Recap
20. Lesson Glossary
The Inference Engine
01. Introduction
02. The Inference Engine
03. Supported Devices
04. Using the Inference Engine with an IR
05. Exercise: Feed an IR to the Inference Engine
06. Solution: Feed an IR to the Inference Engine
07. Sending Inference Requests to the IE
08. Asynchronous Requests
09. Exercise: Inference Requests
10. Solution: Inference Requests
11. Handling Results
12. Integrating into Your App
13. Exercise: Integrate into an App
14. Solution: Integrate into an App
15. Behind the Scenes of the Inference Engine
16. Recap
17. Lesson Glossary
Deploying an Edge App
01. Introduction
02. OpenCV Basics
03. Handling Input Streams
04. Exercise: Handling Input Streams
05. Solution: Handling Input Streams
06. Gathering Useful Information from Model Outputs
07. Exercise: Process Model Outputs
08. Solution: Process Model Outputs
09. Intro to MQTT
10. Communicating with MQTT
11. Streaming Images to a Server
12. Handling Statistics and Images from a Node Server
13. Exercise: Server Communications
14. Solution: Server Communications
15. Analyzing Performance Basics
16. Model Use Cases
17. Concerning End User Needs
18. Recap
19. Lesson Glossary
20. Course Recap
21. Partner with Intel
Project: Deploy a People Counter App at the Edge
01. Project Introduction
02. Project Set-Up
03. Project Instructions: Code
04. Running the App
05. Project Instructions: Write-Up
06. Minimum Viable Project
07. Project Workspace
Project Description – Deploy a People Counter App at the Edge
Project Rubric – Deploy a People Counter App at the Edge
Introduction to Hardware at the Edge
Grow your expertise in choosing the right hardware. Identify key hardware specifications of various hardware types (CPU, VPU, FPGA, and Integrated GPU). Utilize the Intel® DevCloud for the Edge to test model performance and deploy power-efficient deep neural network inference on the various hardware types. Finally, you will distribute workloads across available compute devices in order to improve model performance.
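As a taste of the multi-device material later in this course, OpenVINO's MULTI plugin is selected with a device string such as `MULTI:MYRIAD,GPU,CPU`, where order expresses priority. The helper below is hypothetical (not from the course materials) and only sketches how such a string might be assembled from the devices a machine actually reports (e.g. via `IECore().available_devices`):

```python
def multi_device_string(available):
    """Build a device string for OpenVINO's MULTI plugin from the
    devices present on this machine, in a preferred priority order.

    `available` is assumed to be a list like IECore().available_devices
    returns; the priority order here is illustrative only.
    """
    priority = ["MYRIAD", "GPU", "CPU"]
    chosen = [d for d in priority if d in available]
    if not chosen:
        raise ValueError("No supported inference device found")
    return "MULTI:" + ",".join(chosen)

# A machine with an integrated GPU but no Neural Compute Stick:
print(multi_device_string(["CPU", "GPU"]))  # MULTI:GPU,CPU
```

The resulting string would then be passed as the device name when loading a network, letting the plugin spread inference requests across the listed devices.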
01. Instructor Introduction
01.2 Instructor Introduction
02. Course Overview
03. Changes in OpenVINO 2020.1
04. Lesson Overview
05. Why is Choosing the Right Hardware Important?
06. Design of Edge AI Systems
06.2 Design of Edge AI Systems
07. Analyze
08. Design
09. Develop
10. Test and Deploy
11. Basic Terminology
12. Intel DevCloud
13. Updating Your Workspace
14. Walkthrough: Using Intel DevCloud
15. Exercise: Using Intel DevCloud
16. Lesson Review
CPUs and Integrated GPUs
01. Lesson Overview
02. CPU Basics
03. Threads and Processes
04. Multithreading and Multiprocessing
05. Introduction to Intel Processors
06. Intel CPU Architecture
07. CPU Specifications (Part 1)
08. CPU Specifications (Part 2)
09. Exercise: CPU Scenario
10. Updating Your Workspace
11. Walkthrough: CPU and the DevCloud
12. Exercise: CPU and the DevCloud
13. Integrated GPU (IGPU)
14. Walkthrough: IGPU and the DevCloud
15. IGPU and Batch Processing
16. Exercise: IGPU Scenario
17. Exercise: IGPU and the DevCloud
18. Lesson Review
VPUs
01. Lesson Overview
02. Introduction to VPUs
03. Architecture of VPUs
04. Myriad X Characteristics
05. Intel Neural Compute Stick 2
06. Exercise: VPU Scenario
07. Updating Your Workspace
08. Walkthrough: VPU and the DevCloud
09. Exercise: VPU and the DevCloud
10. Multi-Device Plugin
11. Walkthrough: Multi-Device Plugin and the DevCloud
12. Exercise: Multi-Device Plugin on DevCloud
13. Lesson Review
FPGAs
01. Lesson Overview
02. Introduction to FPGAs
03. Architecture of FPGAs
04. Programming FPGAs
04.2 Programming FPGAs
05. FPGA Specifications
06. Intel Vision Accelerator Design
07. Exercise: FPGA Scenario
08. Updating Your Workspace
09. Walkthrough: FPGA and the DevCloud
10. Exercise: FPGA and the DevCloud
11. Heterogeneous Plugin
12. Exercise: Heterogeneous Plugin on DevCloud
13. Lesson Review
14. Course Review
Project: Smart Queuing System
01. Project Overview
02. Part 1: Hardware Proposal
03. Scenario 1: Manufacturing
04. Scenario 2: Retail
05. Scenario 3: Transportation
06. Part 2: Testing Your Hardware
07. Step 1: Create the Python Script
08. Step 2: Create the Job Submission Script
09. Step 3: Manufacturing Scenario
10. Step 4: Retail Scenario
11. Step 5: Transportation Scenario
12. Step 6: Submit Your Project
Project Description – Smart Queuing System
Project Rubric – Smart Queuing System
Introduction to Software Optimization
Learn how to optimize your model and application code to reduce inference time when running your model at the edge. Use different software optimization techniques to improve the inference time of your model. Calculate how computationally expensive your model is. Use the DL Workbench to optimize your model and benchmark its performance. Use Intel VTune Amplifier to find and fix hotspots in your application code. Finally, package your application code and data so that it can be easily deployed to multiple devices.
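As context for the "Calculating Model FLOPs" lessons in this course, the usual back-of-the-envelope cost formulas fit in a few lines. The sketch below uses the common convention that one multiply-accumulate counts as 2 FLOPs (conventions differ; some sources count MACs instead), and the function names are illustrative:

```python
def dense_flops(n_in, n_out):
    """FLOPs for a fully-connected layer: each of the n_out output
    neurons performs n_in multiplications and n_in additions."""
    return 2 * n_in * n_out

def conv_flops(h_out, w_out, c_out, k_h, k_w, c_in):
    """FLOPs for a standard convolution: every element of the
    h_out x w_out x c_out output costs one multiply-accumulate per
    weight in its k_h x k_w x c_in kernel."""
    return 2 * h_out * w_out * c_out * k_h * k_w * c_in

# e.g. a dense layer mapping 1000 features to 10 classes:
print(dense_flops(1000, 10))          # 20000
# e.g. a 3x3 convolution, 32 -> 64 channels, 28x28 output map:
print(conv_flops(28, 28, 64, 3, 3, 32))  # 28901376
```

Summing these per-layer counts over a whole network gives the rough operation count you will compare across architectures in the exercises.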
01. Instructor Introduction
02. Course Overview
03. Installing OpenVINO
04. Lesson Overview
05. What is Software Optimization and Why Does it Matter?
05.2 What is Software Optimization and Why Does it Matter?
06. Types of Software Optimization
07. Performance Metrics
07.2 Performance Metrics
08. Some Other Performance Metrics
09. When Do We Do Software Optimization?
10. Lesson Review
Reducing Model Operations
01. Lesson Overview
02. Calculating Model FLOPs: Dense Layers
03. Calculating Model FLOPs: Convolutional Layers
04. Calculate the FLOPs in a Model
05. Using Efficient Layers: Pooling Layers
06. Exercise: Pooling Performance
07. Using Efficient Layers: Separable Convolutions
08. Exercise: Separable Convolutions Performance
09. Measuring Layerwise Performance
10. Exercise: Measuring Layerwise Performance
11. Model Pruning
12. Lesson Review
Reducing Model Size
01. Lesson Overview
02. Introduction to Quantization
03. Benchmarking Model Performance
04. Exercise: Benchmarking Model Performance
05. Advanced Benchmarking
06. Exercise: Advanced Benchmarking
07. How Quantization is Done
08. Quantizing a Model Using DL Workbench
09. Exercise: Quantizing a Model Using DL Workbench
10. Model Compression
11. Knowledge Distillation
12. Lesson Review
Other Optimization Tools and Techniques
01. Lesson Overview
02. Introduction to Intel VTune
03. Exercise: Profiling Using VTune
04. Advanced Concepts in Intel VTune
05. Exercise: Advanced Profiling Using VTune Amplifier
06. Packaging Your Application
07. Exercise: Packaging Your Application
08. Exercise: Deploying Runtime Package
09. Lesson Review
10. Course Review
Project: Computer Pointer Controller
01. Overview
02. Part 1: Project Setup
03. Part 2: Build the Inference Pipeline
04. Part 3: Complete the README
05. Part 4: Standout Suggestions
06. Part 5: Check Your Work
Project Description – Computer Pointer Controller
Project Rubric – Computer Pointer Controller
02. Prerequisites & Other Requirements
Prerequisites
Before you begin, please check the following Nanodegree Program prerequisite requirements to make sure you have the skills to succeed in this program.
To succeed in this program, students should have the following:
- Intermediate knowledge of programming in Python
- Experience with training and deploying deep learning models
- Familiarity with different DL layers and architectures (CNN based)
- Familiarity with the command line (bash terminal)
- Experience using OpenCV
Hardware & Software Requirements
Please review these requirements to make sure you have what you need to complete this Nanodegree Program:
- A 64-bit operating system with a 6th-generation or newer Intel processor, running Windows 10, Ubuntu 18.04.3 LTS, or macOS 10.13 or higher.
- OpenVINO (version 2020.1) installed on your local environment. OpenVINO and the software listed below only need to run locally to complete the projects and exercises in Course 3; all other projects and exercises can be completed within Udacity's classroom using workspaces.
- Intel's Deep Learning Workbench (version 2020.1) installed. Please note that DL Workbench does not currently support Windows 10 Home Edition; we recommend students either upgrade to Windows 10 Professional or use a Linux-based system.
- Installing Intel’s VTune Amplifier.