Chapter 1. Analyzing Big Data
When people say that we live in an age of big data they mean that we have tools for collecting, storing, and processing information at a scale previously unheard of. The following tasks simply could not have been accomplished 10 or 15 years ago:
- Build a model to detect credit card fraud using thousands of features and billions of transactions
- Intelligently recommend millions of products to millions of users
- Estimate financial risk through simulations of portfolios that include millions of instruments
- Easily manipulate genomic data from thousands of people to detect genetic associations with disease
- Assess agricultural land use and crop yield for improved policymaking by periodically processing millions of satellite images
Sitting behind these capabilities is an ecosystem of open source software that can leverage clusters of servers to process massive amounts of data. The release of Apache Hadoop in 2006 led to widespread adoption of distributed computing, and the big data ecosystem and tooling have evolved at a rapid pace since then. The past five years have also seen the introduction and adoption of many open source machine learning (ML) and deep learning libraries, which aim to leverage the vast amounts of data we now collect and store.
But just as a chisel and a block of stone do not make a statue, there is a gap between having access to these tools and all this data and doing something useful with it. Often, “doing ...