When Zhamak Dehghani introduced data mesh in 2019, she was acknowledging both the unmet expectations of business leaders and the major frustrations of technologists in the data warehousing world. The talk channeled a decades-long groundswell of sentiment in the field but, most importantly, described a better approach to analytical data management. Data mesh surrenders to data's naturally distributed state, breaking down the monolithic thinking that has hung on in the data world even as the advent of cloud and microservices has transformed application development.
The data warehousing dream has become a nightmare
The dream that Teradata spun up more than 40 years ago with its purpose-built data warehouse has turned into a nightmare: data became subject to centralized, often proprietary management and vendor lock-in. Pipelines and technical implementations took center stage over business concerns. Siloed data engineering teams bore the brunt of moving and copying data, transforming it, and delivering useful datasets to every corner of the enterprise. Those engineers have often been swamped with impossible backlogs of data requests, while business units have waited in vain for data that quickly grows stale. Even though data management tools have improved rapidly in the last five to ten years, many of these same problems have simply been imported to the cloud.
And the crux of the matter? Businesses have, in truth, used only a small fraction of their vast, centralized stores of data to produce new products and offer customers value, because existing systems don't let them operate on all of their data.
Now, the data mesh concept advocates a decentralized architecture in which data is owned and treated as a product by the domain teams that know it most intimately: those creating, consuming, and resharing it. That spurs more widespread use of data. With a data mesh, complexity is abstracted away into a self-serve, easy-to-use infrastructure layer, supported by a platform offering both freedom and federated governance.
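To make the data-as-product idea concrete, here is a minimal, purely illustrative Python sketch of what a domain-owned data product contract might capture. The field names and values are hypothetical and not part of any data mesh specification or vendor API:

```python
from dataclasses import dataclass, field

# Purely illustrative: a domain-owned data product descriptor. The field names
# are hypothetical, not any platform's or specification's schema.
@dataclass
class DataProduct:
    name: str                        # discoverable, business-facing name
    owner_domain: str                # domain team accountable for the data
    storage_path: str                # where the product lives in object storage
    schema: dict = field(default_factory=dict)      # column name -> type
    freshness_sla_hours: int = 24    # how stale the data is allowed to become
    consumers: list = field(default_factory=list)   # domains that consume it

orders_daily = DataProduct(
    name="orders_daily",
    owner_domain="sales",
    storage_path="s3://sales-domain/orders_daily/",
    schema={"order_date": "date", "revenue": "decimal(18,2)"},
    freshness_sla_hours=6,
    consumers=["finance", "marketing"],
)
```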
But how is this concept of a business-first, interoperable distributed system for data actually made real?
The open data lakehouse answers data mesh’s call
An important strength of the open data lakehouse is that it can serve as the technical foundation for data mesh. Data mesh aims to enable domains (often manifesting as business units in an enterprise) to use best-of-breed technologies to support their use cases. So the lakehouse, which allows domains to use all of their preferred tools directly on data as it lives in object storage, is a natural fit. For example, a domain can use an engine like Spark to transform data, then a purpose-built tool to run interactive dashboards on that same data once it's ready for consumption. The lakehouse's inherent no-copy nature also answers objections that have been leveled against some implementations of data mesh, which unfortunately resulted in a proliferation of data pipelines and copies.
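As a rough illustration of that Spark-to-Iceberg step, the sketch below assumes a Spark session that has already been configured with an Iceberg catalog (here called lakehouse) pointing at the organization's object storage; the bucket, catalog, and table names are placeholders:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Assumes a Spark session already configured with an Iceberg catalog named
# "lakehouse" that points at the organization's object storage; the bucket,
# catalog, and table names are hypothetical.
spark = SparkSession.builder.appName("orders-transform").getOrCreate()

# Transform raw domain data with Spark...
raw = spark.read.parquet("s3://sales-domain/raw/orders/")
daily = (
    raw.withColumn("order_date", F.to_date("order_ts"))
       .groupBy("order_date")
       .agg(F.sum("amount").alias("revenue"))
)

# ...and publish the result as an Iceberg table on object storage, where any
# other engine (dashboards, ad hoc SQL, and so on) can query it without copies.
daily.writeTo("lakehouse.sales.orders_daily").using("iceberg").createOrReplace()
```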
This flexibility persists as the organization evolves. Because data in an open lakehouse is stored in open formats on object storage, when a new engine emerges it's easy for domains to evaluate and use it directly on their lakehouse data. Open table formats like Apache Iceberg make the data accessible to any engine while avoiding vendor lock-in.
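To illustrate that engine portability, the sketch below reads the same hypothetical table with PyIceberg rather than Spark, assuming a catalog named lakehouse has been configured for the library (for example in a .pyiceberg.yaml file):

```python
from pyiceberg.catalog import load_catalog

# Assumes a catalog named "lakehouse" is configured for PyIceberg (for example
# in a .pyiceberg.yaml file); the table name matches the hypothetical example above.
catalog = load_catalog("lakehouse")
table = catalog.load_table("sales.orders_daily")

# A different library reads the very same Iceberg data directly from object
# storage; no export, pipeline, or copy is required.
df = table.scan().to_pandas()
print(df.head())
```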
Aside from providing openness and flexibility, lakehouses eliminate the need for data teams to build and maintain convoluted pipelines into data warehouses, as they provide data warehouse functionality and performance directly on object storage.
When looking to implement the technical platform for a data mesh, companies should seek, in addition to the fundamental lakehouse attributes described above, a platform that enables self-service for data consumers. This is a business-first approach. Different platforms enable this at different levels of the architecture. For example, a company can provide a self-service UI for domain users to explore, curate, and share datasets in their semantic layer, and can create dedicated compute resources for each domain so that workloads are never bottlenecked by other domains' workloads.
And while not every data lakehouse can connect to external sources across clouds and on-premises environments, the best implementations do, enabling data consumers to analyze and combine datasets regardless of location. For data mesh, it is also advantageous for business units to be able to manage data products like code, which streamlines testing, improves workflows, and helps them meet stringent availability, quality, and freshness SLAs, as sketched below.
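One way "managing data products like code" can look in practice is a set of version-controlled tests that assert the product's quality and freshness guarantees. The pytest-style sketch below is hypothetical; the loader is a stand-in for however the domain actually reads the table:

```python
import pandas as pd

# Placeholder standing in for however the domain reads the hypothetical
# orders_daily product (PyIceberg, Spark, a SQL engine, and so on).
def load_orders_daily() -> pd.DataFrame:
    return pd.DataFrame({"order_date": [pd.Timestamp.now()], "revenue": [1234.56]})

def test_orders_daily_schema():
    # Quality: the columns promised in the product's contract must be present.
    df = load_orders_daily()
    assert {"order_date", "revenue"}.issubset(df.columns)

def test_orders_daily_freshness():
    # Freshness SLA: the newest data should be no more than six hours old.
    df = load_orders_daily()
    latest = pd.to_datetime(df["order_date"]).max()
    assert pd.Timestamp.now() - latest <= pd.Timedelta(hours=6)
```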
Freeing IT from bottlenecks, empowering governance
When business units have a self-service experience at their fingertips to create, manage, document, and share data products, and discover and consume other domains’ data products, IT can step back and focus on delivering a reliable and performant self-service platform to support analytics workloads in the company. That data mesh-enabling platform makes implementation details like pipelines secondary to business needs. With the lakehouse, IT zeros in on establishing common taxonomy, naming conventions and SLAs for data products, applying fine-grained global access policies, and deploying the best compute engines for each domain directly on object storage without worrying about rogue data copying.
Implementing data mesh may not be necessary for every company. But if an enterprise has many business units that benefit from sharing and combining one another's data, yet is bottlenecked by engineering whenever those units try to share data or build their own datasets because self-service capabilities are lacking, the data mesh approach is probably a good fit.
Engaging with data, analyzing it, and crafting data products should not only delight users and, above all, serve business goals; it should also empower cross-functional teams and open up a company's volumes of data, often gathering dust in object stores, to vigorous use.
Dehghani opined that the paradigm shift is from ingesting, extracting, and loading data, pushing it to and fro through centralized pipelines and monolithic data lakes, to a distributed architecture that serves data, makes it discoverable and consumable, publishes output through data ports, and supports a true ecosystem of data products. That is what the open data lakehouse makes concrete, putting the concept into practice.