Preface
Data mesh is the nudge that puts us on a new trajectory in how we approach data: how we imagine data, how we capture and share it, and how we create value from it, at scale and in the field of analytics and AI. This new trajectory moves us away from the centralization of data and its ownership toward a decentralized model. This new path embraces the messiness of our organizations, their rapid change, and their continuous growth. It aims to enable organizations to get value from data at scale, despite that messiness and complexity.
Looking back through the history of our industry, we have been nudged before. The birth of Unix and its philosophy of “Write programs that do one thing and do it well. Write programs to work together …” was perhaps the butterfly flapping its wings that set the conditions for us to tackle complexity at the heart of software, decades later, through distributed architecture, service-oriented design, communication through standard APIs, and autonomous domain teams. I hope that data mesh sets the conditions for a new path to tackle complexity at the heart of data in the field that needs it most, analytics and AI.
I formulated the thesis of data mesh in 2018, after observing common failure patterns in getting value from data in large, technologically forward companies that had made substantial investments in their data technologies. As I watched them struggle to scale their data management solutions and organizations to meet their ...