The oil and gas industry was one of the first aggregators of what we now call “big data,” but the amount of information these companies currently collect is truly unprecedented. In 1990, one square kilometer yielded 300 megabytes of seismic data; in 2015, it was 10 petabytes—roughly 33 million times more. This report features highlights from recent Strata+Hadoop World conferences to demonstrate how the petroleum industry uses data science in its operations today.
Oil companies use machine learning to mitigate short-term operational risk and to optimize long-term reservoir management. But, as author Naveen Viswanath explains, machine learning models alone can’t distinguish between good and bad data or reasonable and unreasonable results. Human intelligence—including a deep understanding of how data sources fit into business use cases—is crucial for making these distinctions.
With this report, you’ll learn about the challenges these companies face when collecting a wide variety of data for seismic research, drilling, mechanical maintenance, worldwide logistics, and even gas station retail.
- Title: Reducing Risk in the Petroleum Industry
- Release date: August 2016
- Publisher(s): O'Reilly Media, Inc.
- ISBN: 9781491964705