Adam Bellemare
Adam Bellemare is a staff technologist at Confluent and formerly a data platform engineer at Shopify, Flipp and BlackBerry. He has been working in the data space for over a decade, with a successful history in big-data architecture, event-driven microservices and bringing streaming data into organizations. He is also the author of the O’Reilly title ‘Building Event-Driven Microservices.’

This is the third in a four-part series. Here are parts one and two.

Getting reliable data to the right people when they need it is one of the never-ending challenges any growing organization faces. Part 2 of this series on data mesh addressed the first factor in that essential equation: the need for domain control and a cultural mindset in which data is made available as a stand-alone, elegant product, to ensure the best-quality information.

Here in Part 3, let’s deal with the next variable, the timeliness of data delivery. The most efficient data architectures allow data consumers everywhere within the company to access real-time and historical data when they need it. But there’s more: We must ensure that producers have guidelines and standards for creating and managing their data products such that consumers can discover, use and rely on them. The way to achieve greater ease of use is through a carefully designed system of Self-Service Data and Federated Governance.

Building a Self-Service Data Platform

A self-service data platform should serve the producers, consumers and maintainers of data products. Each of these roles has a different, though overlapping, set of needs. The self-serve platform must provide the tools and interfaces that simplify the lifecycle of creating, finding, using and possibly deleting a data product. Some of the essential self-serve features include (a brief catalog-entry sketch in code follows the list):

  • Discovery: Users should be able to browse, search and filter for the available data products in their organization, identifying which they need to access for their jobs and enabling them to seek direct assistance from domain experts (such as by email, direct message or telephone) to help their decision-making.
  • Data Product Management: Data product owners and producers should be able to publish their data products for others to discover and access. Publishing permission should be gated on compliance with federated governance requirements.
  • Access Control: Users should be able to request access to sensitive data products. Access can be automatically granted to lower-security data products, while internal agents review access requests to higher-security or more restricted data products prior to granting permission. A complete tree of read and write permissions to each data product can be used to draft dependency graphs and track data lineage across the organization.
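To make these features concrete, here is a minimal sketch, in Python, of what a catalog entry for a data product might look like on a self-serve platform. Every name in it (DataProduct, AccessTier, the example topic and contact address) is a hypothetical illustration, not part of any particular data mesh tooling:

    from dataclasses import dataclass, field
    from enum import Enum
    from typing import List


    class AccessTier(Enum):
        """Illustrative security tiers; higher tiers require a manual review."""
        PUBLIC = "public"          # access granted automatically
        RESTRICTED = "restricted"  # access reviewed by the owning domain
        SENSITIVE = "sensitive"    # access reviewed by a central governance agent


    @dataclass
    class DataProduct:
        """A hypothetical catalog record the self-serve platform could index for
        discovery, ownership lookups and access-control decisions."""
        name: str                  # e.g. "orders.completed"
        domain: str                # owning domain, e.g. "checkout"
        owner_contact: str         # email or chat handle for direct assistance
        description: str           # searchable, human-readable summary
        product_format: str        # e.g. "event-stream" or "cloud-storage-parquet"
        schema_type: str           # e.g. "avro", as mandated by governance
        access_tier: AccessTier = AccessTier.RESTRICTED
        consumers: List[str] = field(default_factory=list)  # for lineage graphs


    # A consumer browsing the catalog might filter by domain, then contact the owner.
    catalog = [
        DataProduct(
            name="orders.completed",
            domain="checkout",
            owner_contact="checkout-team@example.com",
            description="One event per completed customer order.",
            product_format="event-stream",
            schema_type="avro",
        )
    ]
    checkout_products = [p for p in catalog if p.domain == "checkout"]

A real platform would back a record like this with a searchable store and tie the access tier into its approval workflow, but the shape is the important part: enough metadata to discover the product, contact its owner and decide how access is granted.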

The main point of having a self-serve data platform is to reduce, and ideally eliminate, the overhead work in accessing and using data products. How you do this will vary, and there are several ways of structuring a self-serve data platform. For instance, if your company uses microservices, you may also integrate your self-serve data platform into your self-serve microservice platform, allowing users to request access to data products alongside compute, storage, and monitoring resources.

But what about analytical purposes? A major benefit of using a data product in the form of an event stream is that you can easily use it as a single source of truth for both operational and analytical needs. Event streams can power analytical data sets by using a tool like Kafka Connect to source data from the event stream and write it to a set of files in cloud storage for batch analytics. Data platform teams often go one step further, registering the derived batch data set as its own data product, suitable for reuse for other batch analytics in the organization.
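As a rough illustration of that pattern, a sink connector is typically registered with a Kafka Connect worker through its REST API. The sketch below assumes Confluent’s S3 sink connector and uses hypothetical endpoint, topic and bucket names; the exact connector, format and settings will depend on your cloud storage and schema choices:

    import json
    import requests  # assumes the 'requests' package is installed

    # Hypothetical Connect worker endpoint.
    CONNECT_URL = "http://localhost:8083/connectors"

    connector = {
        "name": "orders-completed-s3-sink",
        "config": {
            # Confluent's S3 sink connector; use whichever sink your platform sanctions.
            "connector.class": "io.confluent.connect.s3.S3SinkConnector",
            "tasks.max": "1",
            "topics": "orders.completed",              # the event-stream data product
            "s3.bucket.name": "analytics-data-products",
            "s3.region": "us-east-1",
            "storage.class": "io.confluent.connect.s3.storage.S3Storage",
            "format.class": "io.confluent.connect.s3.format.parquet.ParquetFormat",
            "flush.size": "10000",                     # records per object written to S3
        },
    }

    # Register the connector with the Connect worker.
    response = requests.post(CONNECT_URL, json=connector)
    response.raise_for_status()
    print(json.dumps(response.json(), indent=2))

Once a connector like this is running, the files landing in the bucket can themselves be registered as a derived, batch-oriented data product.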

A related point, particularly if your service or batch analytics needs access to the entire data set: Consuming data from the data mesh means potentially engaging real-time as well as historical data. There are two ways to do this:

  1. The first is called the Lambda Architecture. It requires you to build two separate systems, one for historical data and one for real-time data, and to resolve the two at runtime.
  2. A simpler, and often better, solution is the Kappa Architecture. It begins with an event streaming platform that stores the streams indefinitely, which gives the consumers of the data products the choice of the data they need for their own use cases: They can start with the latest events from the platform, or consume from the beginning of time and build up their own model of state specific to their business use cases (a brief sketch follows this list).
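A rough sketch of the Kappa approach in Python, using the confluent-kafka client (broker address, group and topic names here are hypothetical), shows how a new consumer can rebuild its own state simply by starting from the earliest retained offset:

    from confluent_kafka import Consumer

    # A brand-new consumer group with no committed offsets starts from the
    # beginning of the stream because of 'auto.offset.reset': 'earliest'.
    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",   # hypothetical broker address
        "group.id": "order-count-projection",    # hypothetical consumer group
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["orders.completed"])     # hypothetical data product topic

    state = {}  # the consumer's own model of state, e.g. order counts per customer

    try:
        while True:
            msg = consumer.poll(1.0)
            if msg is None:
                continue                         # no new events yet; keep polling
            if msg.error():
                raise RuntimeError(msg.error())
            if msg.key() is None:
                continue                         # skip unkeyed events in this sketch
            customer_id = msg.key().decode("utf-8")
            state[customer_id] = state.get(customer_id, 0) + 1
    finally:
        consumer.close()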

It is important that individual domains retain a degree of autonomy and that each data product is accessible in a way that is well suited to its domain. A team must have sufficient autonomy to properly define and publish its data products, while consumers must have well-defined standards and expectations so they can harness those products’ value in a reliable way. The result is an equilibrium where producers and consumers find a common middle ground and work toward improving the experience for all members. That leads us to federated governance.

The Need for Federated Governance

Federated governance is a balancing act. While a producer of a data product should have full autonomy to build, populate and publish in any way they see fit, they must also ensure that the product is in a form that is easy and reasonable for consumers to access and use. There are many parallels between the microservices domain and the data mesh domain: Both empower users to select the tools and technologies best suited to their use cases, while simultaneously resisting technological sprawl, confusing implementations and difficulty of use.

For example, a microservice platform may restrict the languages that developers may use to a specific subset. In the data mesh, a similar analogy would be to restrict the format of data products such that only one or two mechanisms (such as event streams and their derived cloud storage data products) are the usable standards. In both cases, the goal isn’t to make life more difficult for the creators, but rather to limit the technological sprawl and implementation complexity, particularly if existing technologies and standards are more than sufficient to meet the product needs. While exploring new technologies and formats is exciting, we must balance this against the need for reliability, maintainability and full support of our existing toolsets.
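As a small illustration of such a restriction in practice, the publishing gate mentioned earlier could check a candidate data product against the sanctioned mechanisms and schema types before it is admitted to the catalog. The sanctioned lists and the function below are hypothetical, not drawn from any specific platform:

    # Hypothetical governance policy: only these mechanisms and schema types are sanctioned.
    SANCTIONED_FORMATS = {"event-stream", "cloud-storage-parquet"}
    SANCTIONED_SCHEMA_TYPES = {"avro", "protobuf"}


    def validate_data_product(product_format: str, schema_type: str) -> list:
        """Return a list of governance violations; an empty list means the
        product may be published to the self-serve catalog."""
        violations = []
        if product_format not in SANCTIONED_FORMATS:
            violations.append(f"'{product_format}' is not a sanctioned data product mechanism")
        if schema_type not in SANCTIONED_SCHEMA_TYPES:
            violations.append(f"'{schema_type}' is not a sanctioned schema format")
        return violations


    # Example: a producer tries to publish a CSV dump with an ad hoc JSON schema.
    print(validate_data_product("csv-dump", "json"))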

Getting Started: Creating a Standards and Practices Group

The best way to get started with federated governance is to create a standards and practices group that helps define policies for an organization’s data products and facilitates cross-domain interoperability. This group may establish standards such as data product types, schema types, change management requirements, quality levels and service-level objectives. These standards should also cover deletion and cleanup policies, as well as topic retention time, compaction and any other properties governing how the data may change over time.
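Many of these standards end up expressed directly as topic configuration. The sketch below uses the confluent-kafka AdminClient with hypothetical topic names; the specific retention, partition and replication values are illustrative, not recommendations:

    from confluent_kafka.admin import AdminClient, NewTopic

    admin = AdminClient({"bootstrap.servers": "localhost:9092"})  # hypothetical broker

    # An event-stream data product kept indefinitely, in line with a Kappa-style standard.
    orders_topic = NewTopic(
        "orders.completed",
        num_partitions=12,
        replication_factor=3,
        config={
            "retention.ms": "-1",         # retain events indefinitely
            "cleanup.policy": "delete",
        },
    )

    # A changelog-style data product where only the latest record per key matters.
    customers_topic = NewTopic(
        "customers.profile",
        num_partitions=12,
        replication_factor=3,
        config={
            "cleanup.policy": "compact",  # compaction keeps the latest value per key
        },
    )

    futures = admin.create_topics([orders_topic, customers_topic])
    for topic, future in futures.items():
        future.result()                   # raises if topic creation failed
        print(f"created {topic}")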

This group should be composed of technical experts and domain experts from across the organization, including both producers and consumers of data products. Through dialogue, argument, and principle-based discussion, this team must come to a consensus on streamlining the lifecycle and usage of data products. Additionally, this team must be responsible for the inevitable incremental changes necessary to keep the data mesh current, such as the addition of new “sanctioned” technologies, deprecation of older technologies, and revisiting standards, formats, and operational requirements as both the business and the technology landscape change.

Federated governance is more of an ongoing organizational process than a strictly technological concern. It ensures that independent and autonomous teams, which own all the data products within the mesh, can work together. If you want to correlate disparate data products, you don’t want to deal with three different types of schema formats (Avro, Protobuf and JSON) and access semantics, right? You want to make sure that the various constituents of the mesh actually “mesh” so that you can generate real network effects and create value.

Establishing a governing body for the data mesh is an extremely important part of building a solution that works for all participants. Keep in mind that even the form of governance will vary among organizations, with some opting for stricter top-down regulation and others being less centralized and minimally involved.

The data mesh design calls for changing many activities and processes in your organization — ideally, for the better. In our next, and final article — Part 4 — we’ll look at how these changes will affect your organizational structure.

InApps is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Flipp.

Featured image via Pixabay.