This chapter covers
- Using a database for a more efficient data-wrangling process
- Getting a huge data file into MongoDB
- Working effectively with a large database
- Optimizing your code for improved data throughput
This chapter addresses the question: How can we be more efficient and effective when we’re working with a massive data set?
In the last chapter, we worked with several extremely large files that were originally downloaded from the National Oceanic and Atmospheric Administration (NOAA). Chapter 7 showed that it's possible to work with CSV and JSON files that are this large! However, files of this magnitude are too unwieldy for effective data analysis. To be productive now, we must move our large data ...