Case Study: How California State University used DataOps Principles to Build Data Pipelines for Rapid Deployment and Scalability

Video description

Organizations have long depended on traditional data warehouses to derive business intelligence from data, but these systems, built around relational databases and structured data formats, have proved insufficient for the challenges of today's data-driven world. Enter data lakes, which are optimized for unstructured and semistructured data, scale easily to petabytes, and integrate with a wide range of tools to help businesses get the most out of their data. Data and software professionals are now moving toward a disaggregated architecture, which makes each layer far more independent and lets them use the best tool for each job.

California State University took a different approach, housing most of their logic within a programming framework that allowed them to build a testable platform. By applying DataOps techniques, they were able to create reusable, scalable, and extensible data architectures.

Recorded on November 2, 2021. See the original event page for further learning resources or to watch recordings of other past events.

O'Reilly Case Studies explore how organizations have overcome common challenges in business and technology through a series of one-hour interactive events. You’ll engage in a live conversation with experts, sharing your questions and challenges while hearing their unique perspectives, insights, and lessons learned.

Product information

  • Title: Case Study: How California State University used DataOps Principles to Build Data Pipelines for Rapid Deployment and Scalability
  • Author(s): Subash D'Souza
  • Release date: November 2021
  • Publisher(s): O'Reilly Media, Inc.
  • ISBN: 0636920623632