Chapter 7. Data Lake Management Service

Now that we have discovered and collected the data required to develop insights, we enter the next phase: preparing the data. The data is aggregated in the data lake, which has become the central repository for petabytes of structured, semi-structured, and unstructured data. Consider the example of developing a model to forecast revenue. Data scientists will often explore hundreds of different models over a period of weeks and months. When they revisit their experiments, they need a way to reproduce the models. Typically, however, the source data has been modified by upstream pipelines, making the experiments nontrivial to reproduce. In this example, the data lake needs to support versioning and rollback of data. Similarly, there are other data life cycle management tasks, such as ensuring consistency across replicas, handling schema evolution of the underlying data, supporting partial updates, providing ACID consistency for updates to existing data, and so on.
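To make the reproducibility requirement concrete, here is a minimal sketch of data versioning and time travel using Delta Lake on Spark; the table path, column names, and version number are hypothetical, and other transactional table formats (such as Apache Hudi or Apache Iceberg) offer similar capabilities.

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

# Spark session configured for Delta Lake (requires the delta-spark package).
spark = (
    SparkSession.builder
    .appName("lake-versioning-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Hypothetical table that upstream pipelines keep rewriting.
revenue_path = "/data/lake/revenue"

# Each write produces a new table version; older versions remain readable.
new_batch = spark.createDataFrame(
    [("2021-03", "EMEA", 1_250_000.0)], ["month", "region", "revenue"]
)
new_batch.write.format("delta").mode("append").save(revenue_path)

# Inspect the commit history to find the version a past experiment used.
DeltaTable.forPath(spark, revenue_path).history().select(
    "version", "timestamp", "operation"
).show()

# Time travel: rerun the experiment against the exact snapshot it trained on.
training_snapshot = (
    spark.read.format("delta")
    .option("versionAsOf", 3)          # hypothetical version number
    .load(revenue_path)
)
```

Rollback can then be handled by rewriting the table from an older snapshot (or, in newer Delta Lake releases, with a restore operation), which is the kind of primitive this chapter argues the lake management service should expose.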

While data lakes have become popular as central data warehouses, they lack support for traditional data life cycle management tasks. Today, multiple workarounds have to be built, leading to several pain points. First, primitive data life cycle tasks have no automated APIs, so reproducibility and rollback, provisioning data-serving layers, and so on require engineering expertise. Second, application workarounds are required to accommodate the lack of consistency in the lake ...
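As an illustration of the consistency support such workarounds try to emulate, the sketch below applies a partial update (an upsert) as a single ACID transaction using Delta Lake's merge API; it assumes the Delta-enabled Spark session and `revenue_path` from the earlier sketch, and the join keys are hypothetical.

```python
from delta.tables import DeltaTable

# Hypothetical late-arriving correction to an existing row.
corrections = spark.createDataFrame(
    [("2021-03", "EMEA", 1_300_000.0)], ["month", "region", "revenue"]
)

revenue = DeltaTable.forPath(spark, revenue_path)

# Upsert: matching rows are updated, new rows are inserted. The merge commits
# atomically, so concurrent readers never observe a half-applied correction.
(
    revenue.alias("t")
    .merge(corrections.alias("c"),
           "t.month = c.month AND t.region = c.region")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```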
