Chapter 1. Analyzing Big Data
[Data applications] are like sausages. It is better not to see them being made.
Otto von Bismarck
- Build a model to detect credit card fraud using thousands of features and billions of transactions.
- Intelligently recommend millions of products to millions of users.
- Estimate financial risk through simulations of portfolios that include millions of instruments.
- Easily manipulate data from thousands of human genomes to detect genetic associations with disease.
These are tasks that simply could not have been accomplished 5 or 10 years ago. When people say that we live in an age of "big data," they mean that we have tools for collecting, storing, and processing information at a scale previously unheard of. Sitting behind these capabilities is an ecosystem of open source software that can leverage clusters of commodity computers to chug through massive amounts of data. Distributed systems like Apache Hadoop have found their way into the mainstream and have seen widespread deployment at organizations in nearly every field.
But just as a chisel and a block of stone do not make a statue, there is a gap between having access to these tools and all this data, and doing something useful with it. This is where “data science” comes in. As sculpture is the practice of turning tools and raw material into something relevant to nonsculptors, data science is the practice of turning tools and raw data into something that nondata scientists might care about. ...