Adam Frank
Adam Frank is a product and technology leader with more than 15 years of AI and IT Development and Operations experience. His imagination and passion for creating AIOps solutions are helping DevOps teams and SREs around the world. As Moogsoft’s VP of Product Management and UX Design, he’s focused on delivering products and strategies that help businesses digitally transform, carry out organizational change, and attain continuous service assurance.

As artificial intelligence (AI) takes on increasingly complex decision-making, the humans behind it are losing touch with how it derives its conclusions. Many algorithms can no longer be examined to understand a decision path, to know why the AI arrived at a particular answer. When trust is exceptionally important, as when AI produces technical diagnoses, a simple rationale for why the AI decided what it did helps users solve issues more quickly, fix them before they affect the business, and gain more value from the system.

To build trust in AI and simplify how humans interact with it, teams should invest in increasing the transparency of AI operations. That begins with developing explainable and trainable AI. And as you talk with vendors, pay attention: some will try to pass off regular-expression tagging as AI, machine learning or continuous learning. Those are only rules, and rules won’t take you far in today’s IT environments.

Explainable and Trainable AI Improves Decision Visibility

AI systems’ decisions caused less concern when the systems themselves were simpler. Straightforward techniques like semantic reasoning, and inherently explainable ones like decision trees, allowed humans to track what the AI did and clearly follow its logic. But as AI use cases multiplied and the amounts of data they required grew, AI became more complex, leaving humans struggling to understand the decisions it makes. Complexity makes decisions harder to explain, and lack of explanation breeds distrust.
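To make the contrast concrete, here is a minimal sketch of an inherently explainable model, assuming scikit-learn and a hypothetical two-feature incident dataset: a small decision tree whose entire decision logic can be printed as human-readable rules.

```python
# A minimal sketch of an inherently explainable model, assuming
# scikit-learn. The features and data are hypothetical examples.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical incident observations: [cpu_load, error_rate]
X = [[0.20, 0.01], [0.90, 0.02], [0.30, 0.30], [0.95, 0.40]]
y = ["healthy", "cpu_alert", "error_alert", "outage"]

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders every branch of the tree, so a human can follow
# exactly which thresholds lead to which conclusion.
print(export_text(clf, feature_names=["cpu_load", "error_rate"]))
```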

This distrust has led to a recent push to simplify how humans can follow the steps and models involved in AI’s decisions, to better understand how it works. Ideally, this kind of explainable AI lets users discover what led a system to its eventual answer, why it didn’t choose a different answer, and where it did or didn’t produce the desired outcome. Transparent access to this information through easy-to-interpret graphs, charts and visualizations shows teams what’s happening under the hood and how they can make adjustments, if needed.
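The same idea can be pushed down to a single decision. As an illustration, assuming scikit-learn and reusing the hypothetical model above, this sketch traces the exact path one input takes through the tree, answering "why this conclusion and not another":

```python
# A minimal sketch of per-decision explainability, assuming scikit-learn.
# The model and feature names repeat the hypothetical example above.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

X = np.array([[0.20, 0.01], [0.90, 0.02], [0.30, 0.30], [0.95, 0.40]])
y = ["healthy", "cpu_alert", "error_alert", "outage"]
names = ["cpu_load", "error_rate"]
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

sample = np.array([[0.85, 0.05]])          # one new observation
path = clf.decision_path(sample).indices   # node ids visited, root to leaf
leaf = clf.apply(sample)[0]

for node in path:
    if node == leaf:
        print("-> prediction:", clf.predict(sample)[0])
        break
    feat, thresh = clf.tree_.feature[node], clf.tree_.threshold[node]
    side = "<=" if sample[0, feat] <= thresh else ">"
    print(f"{names[feat]} = {sample[0, feat]} {side} {thresh:.2f}")
```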

Teams can also use the information to influence and train the AI’s decision-making. AI training is usually a complex process that requires humans with advanced degrees in data science to manage it. But by making information and decisions more transparent, a complex training process can be reduced to a simple UX, where anyone can train an AI with a couple of clicks. That builds trust in the AI’s process and offers humans real visibility into more complex decision-making.
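What could "training with a couple of clicks" look like under the hood? Here is one hedged sketch, using a hypothetical alert-correlation model (not any vendor’s actual API) where a "yes/no" click becomes a labeled example that nudges the model’s grouping threshold toward the operator’s judgment:

```python
# A minimal sketch of click-based training. The CorrelationModel class
# and its threshold mechanics are hypothetical, not a real product API.
class CorrelationModel:
    def __init__(self, threshold: float = 0.75, step: float = 0.02):
        self.threshold = threshold   # similarity needed to group two alerts
        self.step = step             # how far one click moves the threshold

    def correlated(self, similarity: float) -> bool:
        return similarity >= self.threshold

    def feedback(self, similarity: float, operator_says_yes: bool) -> None:
        """One click of training: shift the threshold so the model's
        verdict drifts toward the operator's judgment."""
        if operator_says_yes and not self.correlated(similarity):
            self.threshold -= self.step   # become more willing to group
        elif not operator_says_yes and self.correlated(similarity):
            self.threshold += self.step   # become more conservative

model = CorrelationModel()
model.feedback(similarity=0.70, operator_says_yes=True)   # a "yes" click
print(round(model.threshold, 2))  # 0.73: the model learned from one click
```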

Rules Struggle with Scale

IT teams have traditionally relied on rules-based templates (e.g. regex), but modern complexity has made those rules untenable for humans to keep up with. Rules start out looking like simple solutions, with references to exact values, or sometimes partial values (e.g. “contains”), that produce known outputs. But each rule has exceptions, which means building more rules and regular expressions to address them, to the point where just 10 interacting rules can have over 3 million possible orderings and exception combinations (10 rules alone can be ordered 10! = 3,628,800 ways). Not good.
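A small sketch makes the trap visible. The rules below are hypothetical, but the pattern, first-match-wins regex tagging where ordering silently decides the outcome, is the standard one:

```python
# A minimal sketch of why rules stop scaling. These regex rules are
# hypothetical; the point is that once rules can shadow one another,
# the orderings to reason about grow factorially.
import math
import re

rules = [
    (re.compile(r"disk.*full"), "storage"),
    (re.compile(r"timeout|timed out"), "network"),
    (re.compile(r"OOM|out of memory"), "memory"),
    # ...imagine seven more, each patched with its own exceptions
]

def tag(message: str) -> str:
    for pattern, label in rules:   # first match wins, so order matters
        if pattern.search(message):
            return label
    return "unknown"

print(tag("request timed out after 30s"))  # network
print(math.factorial(10))                  # 3628800 orderings of 10 rules
```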

The teams managing these rules soon get trapped in an endless process of creating, checking and revising every single rule, not to mention that they often first have to “tag” the data with the exact reference they want to use. Humans end up chasing errors and issues instead of truly innovating, and a once-simple, inexpensive process transforms into a morass that consumes your team’s time and attention.

We’ve seen this flawed rules approach to AI operations crumble time and time again, which is why we invented AIOps: applying AI to IT Development and Operations (DevOps) to manage the complexity of today’s IT systems and help assure the customer experience. It gives AI an opportunity to take over, but in a way humans can still manage. Instead of getting buried in rules, teams can use AIOps to determine where to focus their most experienced members’ efforts. Apply that to observability data, the river of metrics, traces and log events your systems gather, and you get true intelligent observability that creates actionability through context. Users can distill mountains of data in minutes and easily spot and resolve issues, saving time and resources overall.
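As a rough illustration of the correlation step, here is a hedged sketch, with hypothetical alert texts and a toy Jaccard-similarity measure standing in for a real AIOps engine, that collapses a flood of raw events into a handful of groups:

```python
# A minimal sketch of correlating observability events: group raw
# alerts whose wording is similar, so many messages collapse into a
# few incidents. Real AIOps engines are far more sophisticated; this
# token-overlap (Jaccard) grouping is only a toy stand-in.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

alerts = [
    "db-1 connection pool exhausted",
    "db-2 connection pool exhausted",
    "api latency above SLO on checkout",
    "api latency above SLO on search",
]

groups: list[list[str]] = []
for alert in alerts:
    tokens = set(alert.split())
    for group in groups:
        # join the first group whose representative alert is similar enough
        if jaccard(tokens, set(group[0].split())) >= 0.5:
            group.append(alert)
            break
    else:
        groups.append([alert])

for group in groups:
    print(len(group), "event(s):", group[0])
```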

Build Trust Through Transparent, Trainable AI

A movement is now underway to equip humans with tools that extract clear understanding from increasing complexity, through simple UIs and features that explain and train AI’s decisions. Transparent AI helps users solve issues faster, make more informed decisions and trust their AI system, so vendors have a responsibility to include these tools as part of the experience they deliver. A truly simple process increases a user’s trust in AI and makes the AI trainable by people who don’t hold degrees in data science and mathematics.

Intelligent observability solutions can help achieve this goal. For example, they can deploy simple visualization techniques that plot relationships, similarities and inferences on graphs to clearly highlight the AI’s decisions. Coupled with simple “yes/no” training buttons, users can interact with the data, draw meaningful insights about how the AI reached its conclusions, and train the system on improvements as they go. Easily graphed data and simple training options help humans gain more value from the AI they’re training and leveraging, unlocking continuous learning opportunities for your team and improving the system’s functions in the process.
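One hedged example of such a visualization, assuming matplotlib and reusing the toy similarity measure from above: a heatmap of pairwise alert similarity, where bright cells flag the relationships the AI is proposing and a reviewer can confirm or reject.

```python
# A minimal sketch of surfacing the AI's grouping decisions visually,
# assuming matplotlib. The alerts and similarity measure are the same
# toy examples used earlier.
import matplotlib.pyplot as plt

alerts = [
    "db-1 connection pool exhausted",
    "db-2 connection pool exhausted",
    "api latency above SLO on checkout",
]

def jaccard(a: str, b: str) -> float:
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

matrix = [[jaccard(a, b) for b in alerts] for a in alerts]

fig, ax = plt.subplots()
im = ax.imshow(matrix, vmin=0, vmax=1)   # bright cells = likely related
ax.set_xticks(range(len(alerts)))
ax.set_yticks(range(len(alerts)))
ax.set_xticklabels([a.split()[0] for a in alerts], rotation=45)
ax.set_yticklabels([a.split()[0] for a in alerts])
fig.colorbar(im, label="similarity")
plt.tight_layout()
plt.show()
```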

With explainable and trainable AI, humans spend a fraction of the time doing a fraction of the work and come away with a much better understanding. The result is more trust in AI, higher productivity and a better experience. Tools offering intelligent observability can generate useful results in minutes while making AI more transparent and simpler to use for your team and business.

Feature image via Pixabay.