Data Mesh Requires a Change in Organizational Mindsets

Adam Bellemare

Adam Bellemare is a staff technologist at Confluent and formerly a data platform engineer at Shopify, Flipp and BlackBerry. He has been working in the data space for over a decade, with a successful history in big data architecture, event-driven microservices and building streaming data capabilities in organizations. He is also the author of the O’Reilly title ‘Building Event-Driven Microservices.’

This is the fourth in a four-part series. Here are parts one, two and three.

The data demands of an organization have changed substantially over the years. While initially it may have been simple to serve all use cases from a single monolith, the explosion of data over the past decades has made this no longer tenable. Though big data tooling and service-oriented architecture both emerged as ways to handle large amounts of data, both fall short of the ultimate goal: providing data as first-class building blocks to any service that needs it. Ad hoc copying of data on a per-system basis has utterly failed as a way of communicating data.

I’m not casting blame. It’s a challenging problem, and it requires both technological and cultural shifts to prioritize making it easy to produce and consume important data sets. But that’s why it’s important to take the next step — to embrace the data mesh.

Data mesh overcomes the constrictions and slowdowns caused by data lake and data warehouse designs and instead connects data in a decentralized, peer-to-peer fashion — meshing it. The idea was introduced by Zhamak Dehghani, Director of Emerging Technologies at Thoughtworks, and is built on four foundational legs: domain-oriented data ownership, data as a product, self-serve data and proper governance. I believe these are essential to help us scale data analytics and machine learning for the next decade.

Because the data mesh distributes data ownership across product domains, it also means rethinking how teams produce and use that data. This is a substantial inversion from the traditional “reach in and grab the data you need” model familiar to many data engineers, and is part of the cultural shift necessary for a successful data mesh.

The first three parts of this series described the need for data mesh and the changes that must occur for you to build one. Here, in Part Four, we’ll explore some of the organizational and mindset evolutions that must occur to ensure a successful data mesh. I’ll also sketch out a typical journey you might go through when implementing a data mesh. It won’t cover the technical nitty-gritty details (code, schemas, etc.), but will give you some organization and management guidance.

A Changing Mindset for a New Design

While the data mesh has big technological implications, it is at its foundation a new management approach. Like any organizational shift, it requires buy-in and commitment to succeed.

The biggest change is that the original owners of the data, along with prospective consumers like the analytics team, must adjust some of their approaches to facilitate data decentralization. Let’s start by considering the original data owners, often represented by the role of the application developer. Data must be viewed as a first-class product — the data product — by its generating team. It must be clean, clear and easy to use. That generating team is responsible for its quality, representation and cohesiveness. This team may need to maintain several different data products, depending on the domain and the needs of the business. Each data product can be formatted and served in its own manner, though event streams form the optimum solution.
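
To make this concrete, here is a minimal sketch of what publishing a data product to an event stream might look like, using the confluent-kafka Python client. The broker address, the items topic and the record fields are illustrative assumptions, not details from the text above.

    import json

    from confluent_kafka import Producer

    # Hypothetical data product: the owning team publishes every item
    # change as a self-describing event on a dedicated stream.
    producer = Producer({"bootstrap.servers": "localhost:9092"})  # assumed broker

    def publish_item(item_id: str, name: str, price_cents: int) -> None:
        event = {"item_id": item_id, "name": name, "price_cents": price_cents}
        # Keying by item_id keeps all versions of an item in one partition,
        # which also enables log compaction downstream.
        producer.produce("items", key=item_id, value=json.dumps(event))

    publish_item("item-42", "garden hose", 1999)
    producer.flush()  # block until delivery completes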

This mindset helps put accountability in the right place — right at the source, where the data is produced. The domain teams are the experts on their data, and it is on them to make sure good quality information is available to the data consumers, such as analysts and data scientists, when they need it.

Turning to the analytics teams, it’s important to recognize that data analytics is no longer a specialized and siloed activity.

The mesh approach to data analytics is in line with product development as a whole. Historically, there was only one centralized database, and it was owned by the application engineers. Analysts would have to beg for time, usually after hours, to run their queries because their work was not deemed a priority, being seen more as a back-office activity. As the importance of data grew, though, we split ourselves into transactional and analytical realms, created centralized data teams, and delegated all of that complexity and responsibility to analytics teams. It worked for a while, but it no longer serves.

Now, analytics teams can pull the information they need on demand from the groups that generated it. (Note: Analytics teams may also treat their data lakes and warehouses as data products, with an emphasis on lineage.) In this mesh design, everyone is a consumer of data.

In some ways, this parallels the mobile technology journey. Just a few years back, it was common to see a central team where all mobile app development happened. Today, mobile development is an integral part of every product development team. Similarly, with the data mesh, data ownership shifts from a centralized development team to the domain owner.

Proof of Concept: Build Your First Data Mesh

For many years, engineering leaders have been reluctant, or unable, to change their traditional organizational structure around data, data engineering and data science. There are many reasons for this, but it’s safe to say that legacy systems, complex yet fragile workflows and difficulty in acquiring new infrastructure are part of the problem. Infrastructure as a service, along with cloud services, helps to solve this issue by making it easier to push the necessary responsibilities to the domain teams.

Once you have the necessary management commitment to data mesh, start with some solid initial use cases where you can show how this idea can work successfully. A proof of concept, if you will. Ideally, these will be projects that are contained, simple and owned by high-capability, forward-looking teams. You also want them to be visible — you want results that you can show to the business.

What does it all look like in real life? One organization used the principles of data mesh to create a set of event streams providing oft-used business data. These building blocks were originally powered by Kafka Connect connectors, but over time were migrated to direct production by the owning team. They then integrated this system with their microservices management platform. Creating a new service was as simple as selecting the language (Ruby, Python, Go, Java), the compute, memory and disk resources required, the desired state store (DynamoDB, Aurora, RocksDB) and, finally, the event streams to consume (Items, Merchants, Inventory). Offering these first-class data products alongside the other application building blocks greatly simplified the creation of new data-intensive applications.
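
As an illustration only (the platform in this story is unnamed, so every field name and value below is a guess), such a self-serve service descriptor might boil down to something like this:

    # Hypothetical service descriptor for the self-serve platform described
    # above. All field names and values are illustrative assumptions.
    new_service = {
        "name": "inventory-enricher",
        "language": "Python",  # Ruby, Python, Go or Java
        "resources": {"cpu_cores": 2, "memory_gib": 4, "disk_gib": 20},
        "state_store": "RocksDB",  # DynamoDB, Aurora or RocksDB
        "input_streams": ["Items", "Merchants", "Inventory"],
    }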

Managing, creating and evolving data products is another important part of the life cycle. Read permissions were automatically assigned to the consumers of the event streams, such that a complete picture of lineage was readily available. Additionally, this let data product owners notify consumers when they were going to make a change. For example, the team responsible for the item data product was able to send out a notification of an impending change to the way items were modeled, to ensure that downstream consumers would not be adversely affected. This simple mechanism gave consumers sufficient forewarning, eliminating the vast majority of unexpected breaking changes and substantially improving uptime, customer satisfaction and developer happiness.
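
A minimal sketch of that notification mechanism, assuming read grants are recorded somewhere queryable; the registry contents and the notify_team helper here are hypothetical:

    # Hypothetical access registry: stream name -> teams granted read access.
    # In the real system this would be derived from the platform's ACLs.
    read_grants = {
        "Items": ["analytics", "search", "recommendations"],
    }

    def notify_team(team: str, message: str) -> None:
        # Placeholder: a real implementation would send email or chat messages.
        print(f"to {team}: {message}")

    def announce_change(stream: str, message: str) -> None:
        """Notify every registered consumer of an upcoming change."""
        for team in read_grants.get(stream, []):
            notify_team(team, f"[{stream}] {message}")

    announce_change("Items", "Item model is changing; see the migration notes.")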

Implementing a Data Mesh

Here are a few ideas for how you can implement a data mesh in the field:

  1. Centralize data in motion: Introduce a central event-streaming platform; Apache Kafka and Confluent Cloud are good solutions, as they reduce the toil and let you focus on actually using your mesh.
  2. Assign data owners: Have designated owners for the key datasets in your organization. You want everyone to know who owns which dataset.
  3. Write data to Kafka topics: You can store events in Kafka indefinitely and use compaction to keep data at a manageable size. Consumers can read from any point in the topic, as many times as they need (see the first sketch after this list).
  4. Handle schema changes: Owners will publish schema information to the mesh (perhaps in the form of a wiki, or data extracted from the Confluent Cloud Schema Registry and rendered as an HTML document), and you need a process for approving schema changes (see the second sketch after this list).
  5. Secure event streams: You need a central authority to grant access to individual event streams. There may be internal regulations and rules you’ll need to sort out here (see the third sketch after this list).
  6. Connect from any database: There are source and sink connectors available for many supported database types. Make sure that your desired connectors exist so you can easily provision production and consumption.
  7. Make a central user interface for the discovery and registration of new event streams: This can be an application or even a wiki. It must support several key activities, including searching for data of interest, previewing event streams, requesting access to new event streams and data lineage views.
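
For step 3, here is a minimal sketch using the confluent-kafka Python client; the broker address, topic name, sizing and group name are assumptions:

    from confluent_kafka import Consumer
    from confluent_kafka.admin import AdminClient, NewTopic

    # Create a compacted topic: Kafka retains the latest record per key
    # indefinitely, keeping the stream at a manageable size.
    admin = AdminClient({"bootstrap.servers": "localhost:9092"})  # assumed broker
    admin.create_topics([
        NewTopic("items", num_partitions=6, replication_factor=3,
                 config={"cleanup.policy": "compact"})
    ])

    # Any consumer can replay the stream from the beginning, as often as needed.
    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",
        "group.id": "analytics-team",     # hypothetical consumer group
        "auto.offset.reset": "earliest",  # start from the oldest record
    })
    consumer.subscribe(["items"])
    msg = consumer.poll(10.0)
    if msg is not None and msg.error() is None:
        print(msg.key(), msg.value())
    consumer.close()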
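
For step 4, a sketch of pulling schema information out of the Schema Registry so it can be rendered into a wiki or HTML document. The registry URL is an assumption, and a hosted registry such as Confluent Cloud would also require credentials in the client configuration:

    from confluent_kafka.schema_registry import SchemaRegistryClient

    # Connect to the (assumed) Schema Registry endpoint.
    registry = SchemaRegistryClient({"url": "http://localhost:8081"})

    # Dump the latest schema for every subject; feed this into a wiki or
    # an HTML generator to publish the mesh's data product contracts.
    for subject in registry.get_subjects():
        latest = registry.get_latest_version(subject)
        print(f"{subject} (version {latest.version}):")
        print(latest.schema.schema_str)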
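
And for step 5, a sketch of granting read access to a single event stream through Kafka ACLs; the principal and topic are assumptions, and a central authority would wrap this call in its own approval workflow:

    from confluent_kafka.admin import (AclBinding, AclOperation,
                                       AclPermissionType, AdminClient,
                                       ResourcePatternType, ResourceType)

    admin = AdminClient({"bootstrap.servers": "localhost:9092"})  # assumed broker

    # Allow the analytics team's (hypothetical) principal to read "items".
    grant = AclBinding(
        ResourceType.TOPIC, "items", ResourcePatternType.LITERAL,
        "User:analytics", "*",  # principal, any host
        AclOperation.READ, AclPermissionType.ALLOW,
    )
    for binding, future in admin.create_acls([grant]).items():
        future.result()  # raises if the grant failed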

While the data mesh involves a valuable rethink of your current data architecture design, it’s not going to solve every problem and address every concern in your organization. It works in conjunction with other important strategies, such as cloud computing, microservices and domain-driven design. Those other methods are most likely going to need to be a part of your work, alongside and sometimes even orthogonal to data mesh. Apply the data mesh concepts as you see fit to gain the maximum benefit for your company.

Data mesh as a concept is still nascent. There’s no right or wrong way of building one, as long as the fundamental principles of data mesh are intact. Good luck with your journey.

InApps is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Flipp.

Featured image via Pixabay


