Teresa Wingfield
As the Director of Product Marketing at Actian, Teresa Wingfield focuses on the company’s leading hybrid cloud data solutions. Prior to joining Actian, Teresa managed cloud and security product marketing at industry leaders such as Cisco, VMware, and McAfee. She was also Datameer’s first Vice President of Marketing, where she led all marketing functions for the company’s big data analytics solution built on Hadoop. Before this, Teresa was Vice President of Research at Giga Information Group (acquired by Forrester), providing strategic advisory services for data warehousing and analytics. Teresa holds graduate degrees in management from MIT’s Sloan School and software engineering from Harvard University.

Lewis Carr
Lewis Carr is Senior Director of Product Marketing at Actian. In his role, Lewis leads product management, marketing, and solutions strategies and execution. Lewis has extensive experience in cloud, big data analytics, IoT, mobility, and security, as well as a background in original content development and diverse team management. He has been an individual contributor and manager in engineering, pre-sales, business development, and most areas of marketing targeted at enterprise, government, OEM, and embedded marketplaces. Prior to his time at Actian, Lewis developed his career at HPE, Oracle, BEA, Sun Microsystems, Motorola, and SRI International, and founded Prism Technology Marketing.

In Gartner’s 2021 list of top 10 data and analytics trends, engineered decision intelligence is identified as a rising trend with the potential to drive more successful business decision-making. Companies regularly face complex ecosystems of data in motion that delay data assembly, dragging out and potentially tainting the decision-making process. When organizations pair engineered decision intelligence tools with a common data fabric and composability support, they pave the way for more accurate, repeatable, and traceable decisions.

Before diving into the benefits of engineered decision intelligence, we must first understand what it is. Conveniently, its definition is found in the name: it is a deliberate and structured process of deriving business decision-making intelligence from data. Businesses need to make well-informed decisions as a situation unfolds in real time; deciding after the fact is usually too late and results in suboptimal or, worse, unsuccessful outcomes. That’s one of the main challenges that engineered decision intelligence is trying to overcome.

But there’s an even more fundamental problem to overcome first. Right now, there’s a huge arsenal of decision intelligence tools one can use: from basic query and reporting to advanced analytics, including artificial intelligence and adaptive system applications. However, the insights these provide are only as good as the data that powers them. 

According to Gartner, that means engineered decision intelligence is about pairing these tools with a common data fabric and composability support — which enables the use of components from multiple data, analytics, and AI solutions — thus paving the way for decisions that are more accurate, repeatable, traceable, and timely. 

The Need for a Data Fabric

The data fabric is essential for engineered decision intelligence. It is an architecture that provides a consistent set of data services and capabilities across your critical on-premises and cloud environments. It acts as a foundation that enables you to abstract data from systems that are physically and logically distinct, creating a common set of data objects that you can treat as a unified enterprise data set. 
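
To make the idea concrete, here is a minimal Python sketch of the abstraction a data fabric provides. The connector names and record shapes are invented for illustration, and in-memory lists stand in for real on-premises and cloud systems.

```python
# Minimal illustration of the data-fabric idea: a thin layer that
# presents physically separate sources as one logical data set.
# The "connectors" are hypothetical in-memory stand-ins for real
# on-premises and cloud systems.

from typing import Callable, Dict, Iterable, List


class DataFabric:
    """Registers per-system connectors and exposes them as one logical set."""

    def __init__(self) -> None:
        self._connectors: Dict[str, Callable[[], Iterable[dict]]] = {}

    def register(self, system: str, connector: Callable[[], Iterable[dict]]) -> None:
        self._connectors[system] = connector

    def records(self, entity: str) -> List[dict]:
        # Pull the same logical entity from every registered system and
        # tag each record with its origin, so consumers see one data set.
        out: List[dict] = []
        for system, connector in self._connectors.items():
            for rec in connector():
                if rec.get("entity") == entity:
                    out.append({**rec, "_source": system})
        return out


fabric = DataFabric()
fabric.register("on_prem_erp", lambda: [{"entity": "customer", "id": 1, "region": "EMEA"}])
fabric.register("cloud_crm", lambda: [{"entity": "customer", "id": 2, "region": "APAC"}])

print(fabric.records("customer"))  # one unified view across both systems
```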

Because engineered decision intelligence needs to work with data from systems that may be on-premises, in the cloud, spread across multiple clouds, and even deployed remotely at the network’s edge, the data fabric provides a way to weave these sources into a network of information to power your decision intelligence tools. 

By utilizing a data fabric, you can realize the full potential of your decision intelligence tools. Since data fabrics can access data across the enterprise faster and more efficiently, you’ll gain more integrated and accurate business insights and increased business agility. And, as decisions become more operationalized and standardized by the data fabric, they become more repeatable and traceable. Plus, as decision intelligence tools are able to execute more iterations on new data exposed by the data fabric, they can learn from previous outcomes to produce more reliable and repeatable results. 

Building a Modern Data Fabric

Implementing a modern data fabric and unlocking the value of your data for engineered decision intelligence is roughly a three-step process (a toy code walk-through follows the list): 

  1. The first step is to build a metadata catalog of contextual information about the data you intend to access — such as where the data came from, how the data is defined, and when it was last updated. Metadata makes the data more easily searchable and provides insight into the data profiles used in decision intelligence. This metadata should not be seen as static. Instead, expect the metadata to change as additional data sources are joined, algorithms are tuned and modified, and changes to outcomes generate subsequent changes to the business process.
  2. Next, use the metadata catalog to create a knowledge graph. This provides a semantic layer that represents each entity (things such as person, location, organization, product, etc.) and its relationships with other entities. Artificial intelligence and machine learning can enrich the metadata, which further enhances data interpretation and contextualization. This helps users get more relevant and faster query responses. The knowledge graph also makes it possible to view the data from multiple dimensions and to access the data using a variety of decision intelligence tools — without modifying the source data on the underlying systems.
  3. Lastly, integration services use the knowledge graph to bring together the requisite enterprise data sources and reconcile them into a common data set. Integration services should be able to connect with Google Cloud Storage, Amazon S3, and Azure Data Lake Storage, as well as common applications, web-service APIs, JSON data, and even spreadsheets. Once your data sources are integrated, the data fabric drives data flow orchestration and automation to deliver information to users and decision intelligence tools.
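
Below is a toy Python walk-through of the three steps. The catalog entries, graph triples, and join are simplified stand-ins for what a real metadata catalog, knowledge graph, and integration service would manage.

```python
# Toy walk-through: metadata catalog -> knowledge graph -> integrated
# data set. All structures are simplified stand-ins, not a real catalog
# or graph product.

from datetime import date

# Step 1: catalog entries holding contextual metadata about each source.
catalog = {
    "orders": {
        "source": "on_prem_erp",
        "definition": "one row per customer order",
        "last_updated": date(2022, 1, 15),
    },
    "customers": {
        "source": "cloud_crm",
        "definition": "one row per customer account",
        "last_updated": date(2022, 1, 14),
    },
}

# Step 2: a knowledge graph as (subject, relation, object) triples that
# capture how the cataloged entities relate to one another.
knowledge_graph = [
    ("orders", "placed_by", "customers"),
]

# Step 3: integration services use those relationships to reconcile the
# sources into a common data set (this join stands in for that work).
orders = [{"order_id": 10, "customer_id": 1, "total": 250.0}]
customers = [{"customer_id": 1, "name": "Acme Corp"}]

integrated = [
    {**o, **c}
    for o in orders
    for c in customers
    if o["customer_id"] == c["customer_id"]
]
print(integrated)
```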

Pairing the Data Fabric with Composable Data and Analytics

While a data fabric gives you access to data across various systems, composability is all about using components that work together even though they come from a variety of data, analytics, and AI solutions. Combining components can help you to create a flexible, user-friendly, and user-tailored experience. Many types of analytical tools exist, the purpose and value delivered by each varying greatly. Composability enables you to assemble more sophisticated and complex solution stacks to help you gain new, powerful insights.
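
As an illustration of composability, the following Python sketch assembles small analytic components, each of which could in practice come from a different data, analytics, or AI product, into one pipeline. The component names and logic are hypothetical.

```python
# Composability in miniature: independent analytic components combined
# into a single pipeline. Each function could come from a different
# vendor or tool; composition is what makes the stack flexible.

from typing import Callable, List

Component = Callable[[List[dict]], List[dict]]


def clean(rows: List[dict]) -> List[dict]:
    # Drop records with missing amounts.
    return [r for r in rows if r.get("amount") is not None]


def enrich(rows: List[dict]) -> List[dict]:
    # Add a derived attribute, as an AI or rules component might.
    return [{**r, "tier": "high" if r["amount"] > 100 else "standard"} for r in rows]


def summarize(rows: List[dict]) -> List[dict]:
    # Reduce to an aggregate, as a reporting component might.
    return [{"rows": len(rows), "total": sum(r["amount"] for r in rows)}]


def compose(*components: Component) -> Component:
    def pipeline(rows: List[dict]) -> List[dict]:
        for component in components:
            rows = component(rows)
        return rows
    return pipeline


stack = compose(clean, enrich, summarize)
print(stack([{"amount": 150}, {"amount": 40}, {"amount": None}]))
```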

There are several ways a data warehouse can better enable and support composable data and analytics:

1. Extend real-time analytics capabilities through transactional and edge data processing

Decision-makers are looking for ways to act faster using data from their billions of connected mobile and Internet of Things (IoT) devices. Predictive maintenance, real-time inventory management, production efficiency, and service delivery are just a few of the many areas where real-time analytics on IoT data can help a company cut costs and drive additional revenues.

Real-time transactional analytics and artificial intelligence-enabled insights from IoT data are likely to play increasingly important roles in many organizations. What we’re seeing today is just the beginning of benefit streams to come. Realizing greater benefits will depend upon an organization’s ability to deliver varied data to decision intelligence solutions.

That said, historically there was a clear distinction between a transactional database and a data warehouse: a transactional database tracked and processed business transactions, while a data warehouse analyzed historical data. However, modern needs for real-time insights have brought these formerly distinct worlds ever closer together, to the point where, today, there is strong demand for mixed workloads that combine transactional processing and analytics. You see this in a range of use cases, from automated personalized e-commerce offers and real-time insurance quotes to credit approval and portfolio management, to name just a few. Most transactions are not executed in a vacuum: profitability and process optimization depend on multiple transactions combined with in-line analytics and real-time queries of disparate datasets, as in cross-selling and up-selling, complex system configuration, cargo loading systems, and more. These analytics drive both the automated decisions and the human decision support that are essential to profitability in modern enterprises.
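
The pattern is easier to see in code. The Python sketch below uses SQLite purely as a stand-in for a platform that serves mixed workloads: it records transactions and then immediately runs an analytical query over the live data in the same system.

```python
# Mixed transactional/analytical workload in miniature, with SQLite as a
# stand-in for a database that serves both. A real platform would do this
# at scale; the point is the pattern: commit a transaction, then run
# analytics over the freshly written data in the same system.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, total REAL)")

# Transactional side: record new orders as they arrive.
with con:  # commits atomically
    con.executemany(
        "INSERT INTO orders (region, total) VALUES (?, ?)",
        [("EMEA", 120.0), ("EMEA", 80.0), ("APAC", 300.0)],
    )

# Analytical side: an in-line aggregate over the data just written, the
# kind of query that could drive a cross-sell offer or credit decision.
for region, revenue in con.execute(
    "SELECT region, SUM(total) FROM orders GROUP BY region ORDER BY 2 DESC"
):
    print(region, revenue)
```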

2. Bring in any data source, anytime

The real-time needs of engineered decision intelligence mean that analytic tools can no longer rely solely on historical data for insights. Decision-makers still want on-demand access to data from traditional batch processing sources, but they also want the ability to act on current trends and real-time behaviors. This requires seamless orchestration, scheduling, and management of the real-time streaming data that systems throughout the organization and across the internet are continuously generating.
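
Here is a small Python sketch of that blend. A generator stands in for a real streaming feed and a list stands in for a batch extract; the point is merging historical and live data in one rolling analysis, not the transport.

```python
# Sketch of blending a historical batch source with a live stream. The
# stream is a simple generator standing in for a real feed such as a
# message queue; the merge logic is the point.

from typing import Iterator


def batch_history() -> list:
    # Earlier totals, as a traditional batch extract would supply them.
    return [{"minute": "09:58", "events": 40}, {"minute": "09:59", "events": 52}]


def live_stream() -> Iterator[dict]:
    # Stand-in for a continuously generated real-time feed.
    yield {"minute": "10:00", "events": 61}
    yield {"minute": "10:01", "events": 58}


timeline = batch_history()
for event in live_stream():
    timeline.append(event)  # act on current behavior as it arrives
    recent = timeline[-3:]
    avg = sum(e["events"] for e in recent) / len(recent)
    print(event["minute"], "rolling avg:", round(avg, 1))
```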

In this evolving world, data must be available for analysis regardless of where it lives, yet only to the systems and people with a need to know and use it. Business decision-makers who can gain insights from the real-time analysis of both semi-structured and unstructured data, for example, may be able to seize opportunities more efficiently and increase the probability that strategic initiatives will succeed.

3. Take advantage of the efficiencies enabled by containerization and microservices

A containerized approach makes analytics capabilities more composable so that they can be more flexibly combined into applications. However, this is only fully advantageous if the data warehouse architecture itself supports containers, a fundamental building block of microservices infrastructure. That support is key to enabling an organization to meet the resource demands associated with artificial intelligence, machine learning, streaming analytics, and other resource-intensive decision intelligence processing.

Container deployment represents a more portable and resource-efficient way to virtualize compute infrastructure versus virtualized deployment. Because containers virtualize the operating system rather than the underlying hardware, applications require fewer virtual machines and operating systems to run them. Architecting around a combination of containers and microservices delivers the most value with fewer resources and provides a superior means of iterating the services delivered.

4. Accommodate any tool

Look for the flexibility to integrate decision intelligence tools easily with the data warehouse. Or, if you have unique requirements that call for building custom applications, look at the development tools the platform supports so that you can achieve the composability that a modern analytics environment requires.
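
As a simple illustration of tool flexibility, the Python sketch below again uses SQLite as a stand-in warehouse and shows two "tools", a plain SQL query such as a BI tool would issue and a custom application function, sharing one standard connection path.

```python
# Illustration of "any tool" access through a standard interface. SQLite
# stands in for the warehouse; the idea is that BI tools, notebooks, and
# custom applications can all share one standard connection path (here
# Python's DB-API) rather than bespoke integrations.

import sqlite3

warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE sales (region TEXT, total REAL)")
warehouse.executemany(
    "INSERT INTO sales VALUES (?, ?)", [("EMEA", 200.0), ("APAC", 350.0)]
)

# A BI tool's view of the data: plain SQL over the standard connection.
print(warehouse.execute("SELECT region, total FROM sales").fetchall())


# A custom application's view: the same connection, wrapped in app logic.
def top_region(con: sqlite3.Connection) -> str:
    row = con.execute("SELECT region FROM sales ORDER BY total DESC").fetchone()
    return row[0]


print(top_region(warehouse))
```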

The purpose of and value delivered by different types of analytical tools vary greatly, and different users, including data engineers, data scientists, business analysts, and business users, need different tools. At the same time, there is a pressing need for real-time insights and for mixed workloads that combine transactional processing and analytics. A modern data warehouse capable of operating across both on-premises and multiple cloud environments is one option for empowering all of these platform users in the enterprise to analyze anything, anywhere, anytime, allowing businesses to act on crucial real-time data and make well-informed decisions. Anyone can reap the benefits of engineered decision intelligence; combining a data fabric with composability will set you on the path to success.
