Explainable AI is inherently more transparent than the typical application of AI or ML and will help establish more trust in the technology. Source: Shutterstock

Why explainable AI should be the next big thing in finance

THE VIRTUALLY unlimited computing power made available by the cloud in recent years has led to breakthroughs in other emerging technologies. The most transformative among them, in the realm of enterprise technology, is arguably artificial intelligence (AI).

AI aims to provide human-like thinking and capabilities at machine-like speed and efficiency, and could be deployed to perform repetitive but critical tasks, freeing up the human workforce to focus on the strategic aspects of the business.

Accordingly, many AI use cases in the industry involve recreating natural intelligence and logic through algorithms and scripts.

In the accounting and finance sector, for example, an AI application is expected to detect and flag spending variances, or trends that deviate from the norm.

While humans could perform the same task and identify anomalies in a set of hundreds of transactions, AI could do the same across billions of records, and analyze the flagged records for possible false positives, all within the same time frame.
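To make that concrete, here is a minimal sketch of how such flagging might be implemented with an off-the-shelf outlier detector from scikit-learn. The column names, sample values, and contamination rate are illustrative assumptions, not a description of any particular product.

```python
# A minimal sketch of variance flagging with an unsupervised outlier detector.
# Columns, sample values, and contamination rate are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical ledger: one row per transaction.
transactions = pd.DataFrame({
    "amount": [120.0, 95.5, 130.2, 110.8, 9800.0, 101.3],
    "days_since_prev": [30, 31, 29, 30, 2, 31],
})

# Score how isolated each record is from the bulk of the data;
# fit_predict returns -1 for the records it considers outliers.
model = IsolationForest(contamination=0.2, random_state=42)
transactions["flagged"] = model.fit_predict(transactions) == -1

print(transactions[transactions["flagged"]])
```

In a real deployment the same approach would run over far larger datasets and a richer set of features, but the principle is the same: the model assigns each record a score and flags the ones that sit furthest from normal behaviour.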

However, there is a concern about how exactly this additional analysis is carried out to rule out flagged deviations as false positives.

Should the industry just trust the “judgment” of the AI application? This element of trust, or lack thereof, could be addressed by explainable AI.

Explainable AI brings transparency to applications

Explainable AI, or XAI, refers to methods and mechanisms in AI applications whose results can be understood by human experts.

This approach differs from traditional AI deployments, which focus primarily on the outcome and where even the designers of the solution struggle to explain how a result was derived, a problem commonly known as the “black box”.

With the added requirement of being able to explain how a particular outcome was achieved, XAI is inherently more transparent than the typical application of AI or ML.

The truth is, this “need to explain” is already enforced in many industries. In healthcare, for example, physicians control all the decision parameters when using AI-powered medical diagnostics software, enabling them to oversee the diagnostic process and to explain and trust the outcome.

Similarly, an AI application should be able to explain and justify why certain initially flagged transactions were eliminated as false positives, and back this explanation with results.
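To illustrate what such a justification could look like in practice, the sketch below scores each feature of a flagged transaction by how far it deviates from historical behaviour, giving a reviewer a readable reason for dismissing the record. The feature names, sample values, the three-standard-deviation cutoff, and the explain_flag helper are all hypothetical, not drawn from any specific XAI toolkit.

```python
# A minimal sketch of a human-readable justification for dismissing a flag:
# report how far each feature of the record deviates from history, in
# standard deviations. Features, values, and cutoff are illustrative.
import pandas as pd

def explain_flag(record: pd.Series, history: pd.DataFrame) -> pd.Series:
    """Per-feature deviation from historical behaviour (in standard
    deviations), sorted so the strongest driver of the flag comes first."""
    z = (record - history.mean()) / history.std()
    return z.abs().sort_values(ascending=False)

# Hypothetical history of normal transactions and one flagged record.
history = pd.DataFrame({
    "amount": [120.0, 95.5, 130.2, 110.8, 101.3],
    "days_since_prev": [30, 31, 29, 30, 31],
})
flagged = pd.Series({"amount": 145.0, "days_since_prev": 29.0})

contributions = explain_flag(flagged, history)
print(contributions)

# If the largest contribution stays below, say, 3 standard deviations, the
# record can be dismissed as a false positive, with these per-feature scores
# serving as the explanation a human reviewer can read and challenge.
```

The point is not the particular scoring method but that the verdict comes with evidence a person can inspect, rather than an unexplained yes or no.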

Placing this added requirement on AI should not, however, be seen as an effort to impede the technology or its adoption. Instead, it is quite the opposite.

Practically all human professionals and decision-makers are held to similar standards in every industry. Engineers are expected to explain how they troubleshoot machines, just as dentists are expected to back up their diagnoses and prognoses.

While it may place an additional burden on developers and solution providers, the requirement will help establish more trust in AI technology.

The future of AI in the accounting and financial services space already looks bright, and with the added assurance that XAI provides, we can expect broader adoption for more mission-critical tasks in the coming years.