Welcome to the Nanodegree program

Introduction to Software Engineering

In this lesson, you’ll write production-level code and practice object-oriented programming, which you can integrate into machine learning projects.

Software Engineering Practices Pt I

Software Engineering Practices Pt II

OOP

Portfolio Exercise: Upload a Package to PyPI

Web Development

Portfolio Exercise: Deploy a Data Dashboard

Introduction to Data Engineering

ETL Pipelines

Introduction to NLP

Learn Natural Language Processing, one of the fields with the most real-world applications of Deep Learning.

Machine Learning Pipelines

Disaster Response Pipeline

Project 1: Disaster Response Pipeline

Concepts in Experiment Design

Statistical Considerations in Testing

A/B Testing Case Study

Portfolio Exercise: Starbucks

Introduction to Recommendation Engines

Matrix Factorization for Recommendations

Recommendation Engines

Upcoming Lesson

Sentiment Prediction RNN

Convolutional Neural Networks

Transfer Learning

Weight Initialization

Autoencoders

Job Search

Find your dream job with continuous learning and constant effort.

Refine Your Entry-Level Resume

Craft Your Cover Letter

Optimize Your GitHub Profile

Develop Your Personal Brand

01. Introduction

In this lesson, we are going to take a look at how we can improve our models using one of SageMaker’s features. In particular, we are going to explore how we can use SageMaker to perform hyperparameter tuning.

Many machine learning models have parameters, called hyperparameters, that must be specified by the model creator and can’t be determined directly from the data itself. The usual approach to finding the best values is to train several models with different hyperparameter settings and then choose the model that performs best.
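As a rough illustration of that manual approach, the sketch below loops over a few candidate learning rates, "trains" a model for each, and keeps the best one. The `train_and_evaluate` function is a hypothetical stand-in for whatever training and validation code your project uses.

```python
# A minimal sketch of manual hyperparameter search: train one model per
# candidate value and keep whichever scores best on validation data.
# `train_and_evaluate` is a hypothetical stand-in that just simulates a
# validation score; in practice it would fit and score a real model.

def train_and_evaluate(learning_rate):
    # Pretend that learning rates near 0.01 work best for this problem.
    return -abs(learning_rate - 0.01)

candidate_learning_rates = [0.001, 0.01, 0.1]

best_lr = max(candidate_learning_rates, key=train_and_evaluate)
print(f"Best learning rate: {best_lr}")
```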

SageMaker automates this search, and it does so intelligently using Bayesian optimization. We specify ranges for our hyperparameters, and SageMaker then explores different choices within those ranges, improving the performance of our model over time.
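As a rough sketch of how this looks with the SageMaker Python SDK (exact APIs vary between SDK versions, and the estimator, objective metric name, and S3 input variables below are placeholder assumptions rather than values from this lesson), the tuner is given a base estimator, an objective metric, and the ranges to search:

```python
# A minimal sketch using the SageMaker Python SDK. It assumes `estimator` is an
# already-configured SageMaker estimator (e.g., a built-in algorithm) and that
# `s3_input_train` / `s3_input_validation` point to data already uploaded to S3.
from sagemaker.tuner import HyperparameterTuner, ContinuousParameter, IntegerParameter

# Ranges that SageMaker is allowed to explore for each hyperparameter.
hyperparameter_ranges = {
    "eta": ContinuousParameter(0.05, 0.5),
    "max_depth": IntegerParameter(3, 12),
}

tuner = HyperparameterTuner(
    estimator=estimator,                      # base estimator (assumed defined earlier)
    objective_metric_name="validation:rmse",  # metric to optimize (placeholder)
    objective_type="Minimize",
    hyperparameter_ranges=hyperparameter_ranges,
    max_jobs=9,                               # total training jobs to run
    max_parallel_jobs=3,                      # jobs run at the same time
)

# Kick off the tuning job; SageMaker chooses new hyperparameter values for each
# training job using Bayesian optimization, guided by the earlier jobs' results.
tuner.fit({"train": s3_input_train, "validation": s3_input_validation})
```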

In addition to learning how to use hyperparameter tuning, we will look at Amazon’s CloudWatch service. For our purposes, CloudWatch provides a user interface through which we can examine various logs generated during training. This can be especially useful when diagnosing errors.
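Although the lesson works through the CloudWatch console, the same training logs can also be read programmatically. Below is a minimal sketch using boto3 under the assumption that the training job wrote its logs to SageMaker’s standard training log group; the job name shown is a placeholder.

```python
# A minimal sketch of reading SageMaker training logs from CloudWatch with boto3.
# The training job name below is a placeholder; substitute your own job's name.
import boto3

logs_client = boto3.client("logs")
log_group = "/aws/sagemaker/TrainingJobs"   # log group SageMaker uses for training jobs
job_name = "my-training-job-2024-01-01"     # placeholder training job name

# Each training job writes one or more log streams whose names start with the job name.
streams = logs_client.describe_log_streams(
    logGroupName=log_group,
    logStreamNamePrefix=job_name,
)

for stream in streams["logStreams"]:
    events = logs_client.get_log_events(
        logGroupName=log_group,
        logStreamName=stream["logStreamName"],
    )
    for event in events["events"]:
        print(event["message"])
```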