Welcome to the Data Engineering Nanodegree Program

Introduction to Data Engineering

Working with Source Systems

In this lesson you will explore the source systems that data engineers typically interact with. Then, in Lesson 2, you will learn how to connect to various source systems and troubleshoot common connectivity issues.

Introduction to Data Modeling

In this course, you’ll learn to create relational and NoSQL data models to fit the diverse needs of data consumers. You’ll understand how these data models differ and how to choose the appropriate model for a given situation. You’ll also build fluency in PostgreSQL and Apache Cassandra.

Relational Data Models

Project: Data Modeling with Postgres

NoSQL Data Models

Project: Data Modeling with Apache Cassandra

Introduction to Data Warehouses

In this course, you’ll learn to create cloud-based data warehouses. You’ll sharpen your data warehousing skills, deepen your understanding of data infrastructure, and be introduced to data engineering on the cloud using Amazon Web Services (AWS).

Introduction to Cloud Computing and AWS

Implementing Data Warehouses on AWS

Project: Data Warehouse

Data Ingestion

This week you'll delve deeper into batch and streaming ingestion patterns. You'll identify use cases and considerations for each, and then create a batch ingestion pipeline and a streaming pipeline. When examining batch ingestion, you'll compare and contrast the ETL and ELT paradigms. You'll also explore various AWS services for batch and streaming ingestion.
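To make the streaming side concrete, here is a minimal sketch of pushing a single event into Amazon Kinesis with boto3. The stream name, region, and event payload are hypothetical placeholders, not part of the course material:

```python
import json

import boto3

# Kinesis client for a hypothetical stream in us-east-1.
kinesis = boto3.client("kinesis", region_name="us-east-1")

# A hypothetical listening event, serialized as JSON bytes.
event = {"user_id": 42, "song_id": "SOABC12", "ts": 1700000000}

# Kinesis routes each record to a shard by its partition key, so keying
# by user keeps one user's events in order within a shard.
kinesis.put_record(
    StreamName="listen-events",  # hypothetical stream name
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=str(event["user_id"]),
)
```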

The Power of Spark

In this course, you will learn more about the big data ecosystem and how to use Spark to work with massive datasets. You’ll also learn how to store big data in a data lake and query it with Spark.

Data Wrangling with Spark

Debugging and Optimization

Introduction to Data Lakes

Project: Data Lake

DataOps

In the first lesson, you'll explore DataOps automation practices, including applying CI/CD to both data and code and using infrastructure-as-code tools, such as Terraform, to automate the provisioning and management of your resources. Then, in Lesson 2, you'll explore DataOps observability and monitoring practices, including using tools like Great Expectations to monitor data quality and Amazon CloudWatch to monitor infrastructure.
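As a taste of the data quality side, here is a minimal sketch using Great Expectations’ classic pandas-style API (newer releases reorganize this API, so treat it as illustrative). The file path, column names, and thresholds are hypothetical:

```python
import great_expectations as ge

# Load a hypothetical CSV as a Great Expectations dataset.
df = ge.read_csv("user_sessions.csv")

# Declare expectations about the data; each call returns a validation result.
checks = [
    df.expect_column_values_to_not_be_null("user_id"),
    df.expect_column_values_to_be_between("session_length", min_value=0, max_value=86400),
]

# In a pipeline you would fail the run (and alert, e.g. via CloudWatch)
# when any expectation is not met.
assert all(check["success"] for check in checks), "data quality check failed"
```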

Data Pipelines

In this course, you’ll learn to schedule, automate, and monitor data pipelines using Apache Airflow. You’ll learn to run data quality checks, track data lineage, and work with data pipelines in production.

Data Quality

Production Data Pipelines

Orchestration, Monitoring, and Automation of Your Data Pipelines

This week, you'll learn all about orchestrating your data pipeline tasks. You'll survey various orchestration tools, then focus on Airflow, one of the most popular and widely used tools today. You'll explore Airflow's core components, the Airflow UI, and how to create and manage DAGs using Airflow's various features.
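As a preview of those core components, a minimal Airflow 2.x DAG looks roughly like this; the DAG id, schedule, and task logic are illustrative placeholders:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("extracting source data")


def load():
    print("loading into the warehouse")


# One daily pipeline with two tasks; names are hypothetical.
with DAG(
    dag_id="sparkify_example",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # The bitshift operator declares the dependency: extract runs before load.
    extract_task >> load_task
```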

Project: Data Pipelines

Capstone Project

Take 30 Min to Improve Your LinkedIn

Job Search

You’re in this Nanodegree program to take the next big step in your career. Maybe you’re looking for a new job, maybe you’re learning new skills for your current role, or maybe you’re not sure what to do yet but know you need to make a career change.

Refine Your Entry-Level Resume

Craft Your Cover Letter

Optimize Your GitHub Profile

Develop Your Personal Brand

Project Portfolio

Real-world projects are integral to every Bootcamp AI Nanodegree program. They become the foundation for a job-ready portfolio to help learners advance their careers in their chosen field. The projects in the Data Engineer Nanodegree program were designed in collaboration with a group of highly talented industry professionals to ensure you develop the most in-demand skills. Every project in a Nanodegree program is human-graded by a member of Bootcamp AI’s mentor and reviewer network. These project reviews include detailed, personalized feedback on how you can improve your work. Bootcamp AI graduates consistently rate projects and project reviews as one of the best parts of their experience with Bootcamp AI.

The Project Journey

The projects will take you on a journey where you’ll assume the role of a Data Engineer at a fictional music streaming company called “Sparkify” as it scales its data engineering in both size and sophistication. You’ll work with simulated data on user listening behavior, as well as a wealth of metadata related to songs and artists. You’ll start with a small amount of low-complexity data, processed and stored on a single machine. By the end, you’ll develop a sophisticated set of data pipelines that work with massive amounts of data processed and stored on the cloud. There are five projects in the program. Below is a description of each.

Project 1 – Data Modeling

In this project, you’ll model user activity data for a music streaming app called Sparkify. The project is done in two parts. You’ll create a database, import data stored in CSV and JSON files, and model the data: first with a relational model in Postgres, then with a NoSQL model in Apache Cassandra. You’ll design the data models to optimize queries for understanding which songs users are listening to. For PostgreSQL, you will also define fact and dimension tables and insert data into your new tables. For Apache Cassandra, you will model your data to help the data team at Sparkify answer queries about app usage, setting up your database tables in ways that optimize writes of transactional data on user sessions.
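To illustrate the contrast between the two parts, here is a sketch of a relational fact table in Postgres next to a query-first table in Cassandra. The connection details, table names, and columns are hypothetical, not the project’s actual schema:

```python
import psycopg2
from cassandra.cluster import Cluster

# --- Relational model (Postgres): a fact table in a star schema. ---
conn = psycopg2.connect("dbname=sparkify user=student password=student")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE IF NOT EXISTS songplays (
        songplay_id SERIAL PRIMARY KEY,
        start_time  TIMESTAMP NOT NULL,
        user_id     INT NOT NULL,
        song_id     TEXT,
        artist_id   TEXT
    );
""")
conn.commit()

# --- NoSQL model (Cassandra): one table per query, partitioned for it. ---
# Assumes a keyspace named "sparkify" already exists.
session = Cluster(["127.0.0.1"]).connect("sparkify")
# Optimized for "what did this session play, in order?": the partition
# key groups a session's rows together and the clustering column sorts them.
session.execute("""
    CREATE TABLE IF NOT EXISTS session_plays (
        session_id      INT,
        item_in_session INT,
        user_id         INT,
        song_title      TEXT,
        PRIMARY KEY ((session_id), item_in_session)
    );
""")
```

The design difference is the point: the relational model normalizes around entities and joins at query time, while the Cassandra model is denormalized around the one query it must answer fast.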

Project 2 – Cloud Data Warehousing

In this project, you’ll move to the cloud as you work with larger amounts of data. You are tasked with building an ELT pipeline that extracts Sparkify’s data from S3, Amazon’s popular storage system. From there, you’ll stage the data in Amazon Redshift and transform it into a set of fact and dimension tables for the Sparkify analytics team to continue finding insights into what songs their users are listening to.
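The heart of the extract-and-load step is Redshift’s COPY command, which pulls files from S3 into a staging table in parallel. A sketch, with a hypothetical cluster endpoint, bucket, and IAM role:

```python
import psycopg2

# Redshift speaks the Postgres wire protocol, so psycopg2 can connect.
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="sparkify", user="awsuser", password="example",
)
cur = conn.cursor()

# COPY loads raw JSON from S3 straight into a staging table.
cur.execute("""
    COPY staging_events
    FROM 's3://example-bucket/log_data'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-role'
    FORMAT AS JSON 'auto';
""")
conn.commit()
```

The transform then happens inside the warehouse, with INSERT INTO … SELECT statements reshaping the staged rows into fact and dimension tables; doing the transform after loading is what makes the pipeline ELT rather than ETL.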

Project 3 – Data Lakes with Apache Spark

In this project, you’ll build an ETL pipeline for a data lake. The data resides in S3, in a directory of JSON logs of user activity on the app, as well as a directory of JSON metadata on the songs in the app. You will load the data from S3, process it into analytics tables using Spark, and write those tables back to S3. You’ll deploy this Spark process on a cluster using AWS.
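The read-process-write pattern at the core of this project looks roughly like the following PySpark sketch; the bucket paths and column names are illustrative placeholders:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sparkify_lake").getOrCreate()

# Read raw user-activity logs from the lake (hypothetical bucket).
logs = spark.read.json("s3a://example-bucket/log_data/")

# Derive a small analytics table: play counts per song, keeping only
# rows that represent an actual song play (hypothetical column/value).
song_plays = (
    logs.filter(F.col("page") == "NextSong")
        .groupBy("song")
        .count()
)

# Write back to the lake as Parquet; columnar storage makes downstream
# queries far cheaper than re-scanning raw JSON.
song_plays.write.mode("overwrite").parquet("s3a://example-bucket/analytics/song_plays/")
```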

Project 4 – Data Pipelines with Apache Airflow

In this project, you’ll continue your work on Sparkify’s data infrastructure by creating and automating a set of data pipelines. You’ll use Apache Airflow, a workflow orchestration tool originally developed and open-sourced by Airbnb and now maintained by the Apache Software Foundation. You’ll configure and schedule data pipelines with Airflow, setting dependencies, triggers, and quality checks as you would in a production setting.
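As a sketch of those production patterns (the connection id, table name, and the check itself are hypothetical), a dependency chain ending in a data quality task might look like this in Airflow 2.x:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.postgres.hooks.postgres import PostgresHook


def load_facts():
    print("loading fact tables")  # placeholder for the real load step


def check_not_empty(table):
    # Hypothetical connection id configured in the Airflow UI.
    hook = PostgresHook(postgres_conn_id="redshift")
    records = hook.get_records(f"SELECT COUNT(*) FROM {table}")
    if not records or records[0][0] == 0:
        raise ValueError(f"Data quality check failed: {table} is empty")


with DAG(
    dag_id="sparkify_quality",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    load = PythonOperator(task_id="load_facts", python_callable=load_facts)
    quality = PythonOperator(
        task_id="quality_check",
        python_callable=check_not_empty,
        op_kwargs={"table": "songplays"},
    )

    # The check only runs after the load succeeds; a raised exception
    # marks the task failed, which can trigger retries or alerts.
    load >> quality
```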

Project 5 – Data Engineering Capstone

The capstone project is an opportunity for you to combine what you’ve learned throughout the program into a more self-driven project. In this project, you’ll define the scope of the project and the data you’ll be working with. We’ll provide guidelines, suggestions, tips, and resources to help you be successful, but your project will be unique to you. You’ll gather data from several different data sources; transform, combine, and summarize it; and create a clean database for others to analyze.

We’re excited to see what you build!