Day 2: Building a Streaming Data Pipeline
On Day 1, we made solid progress on the core concepts of DynamoDB: how it partitions data, the data types it supports, and secondary indexes. We worked through some simple table-management commands and basic CRUD operations to see those concepts in action.
We were exposed to some features that set DynamoDB apart from other, stricter key-value stores, but those features are mostly garnishes. The main dish is DynamoDB’s unique blend of extreme scalability, predictably solid performance as you scale out, and freedom from operational burdens.
On Day 2, we’ll build something that can take full advantage of those features in a way that we couldn’t ...