Tackling Black-Box Challenge To Unlock AI’s Potential

Nuno Godinho 20 February 2024

In setting out what "explainable AI" means and how it fits into the private banking and wealth management world, the author points out that investors want more visibility into what happens behind the scenes – and that applies to the sort of tech tools firms use.

The author of this article examines the details of terms such as “explainable AI” and “regularisation” to set out ways in which AI can and should be used in the wealth management sector. Nuno Godinho, group chief executive at Industrial Thought Group, cuts through some of the jargon and illustrates the terms that one suspects will be a part of our vocabulary for some time to come. The editors are pleased to share this content; the usual editorial disclaimers apply. Email tom.burroughes@wealthbriefing.com

Wealth managers are under intense pressure to increase client value. They are not blind to the changing, fintech-driven world they and their customers now live in. Today’s investors want connectivity, accessibility, and personalisation, with investment solutions tailored to their individual values and preferences. Harnessing the latest technology is essential to satisfy these demands. Firms that fail to embrace innovation won’t survive against competitors using digital capabilities to enhance decision-making and streamline customer interaction.

Against this backdrop, artificial intelligence (AI) is a powerful technology being used in new and exciting ways to transform wealth management. AI offers added dimensions to data analysis, market prediction, and portfolio management, easing the burden for professionals and improving performance for investors. And we’re not even close to seeing its full potential.

However, a significant challenge persists that could severely limit these possibilities: a lack of transparency and, as a consequence, a growing lack of trust. With the all-too-common, inaccessible and closed-off black-box model, AI algorithms make decisions without clear reasoning or traceability. It is impossible for humans to understand what happens between the data going in and the decision coming out. As a result, it is difficult for clients and regulators to trust AI-driven decisions without knowing their provenance – posing a risk to the industry’s evolution.

There are also liability and accountability issues if errors or adverse outcomes occur. Problems are difficult to fix if we can’t see how they have arisen. Moreover, assigning responsibility to either human or machine agents is difficult, creating legal and ethical dilemmas. 

We are at a juncture where investors are demanding more visibility into what happens behind the scenes, not less. And advisors are being called to answer questions at a deeper level. Data holds the key to change in so many ways, but we must balance technological advancement with rational thought and clarity to create strong foundations for the future. 

Role of explainable AI
Explainable AI (XAI) is a set of processes with the power to turn opaque black-box models into transparent glass-box models by making clear how they work. Explainable models and explainable interfaces should be incorporated into existing AI algorithms to track prediction accuracy and illuminate the decision-making process. XAI falls under responsible AI, which focuses on keeping fairness, freedom from bias, accountability, privacy, and ethics at the forefront. Some of the main XAI strategies include:

1. Using interpretable models: Many machine learning models are inherently interpretable, such as linear regression, logistic regression, decision trees, and rule-based systems. These models provide clear insights into how they make decisions. While they might not be as powerful as more complex models, they can be a good choice for applications where interpretability is crucial (see the first sketch after this list).

2. Explainability techniques: For complex deep learning models such as neural networks, which process information in ways loosely modelled on the human brain, explainability techniques can be used to shed light on the decision-making process. These include LIME (Local Interpretable Model-agnostic Explanations), SHAP (SHapley Additive exPlanations), and saliency maps, among others. Such techniques help identify the features that most strongly drive a model's predictions (see the SHAP sketch after this list).

3. Hybrid models: Hybrid models combine interpretable models with black-box models to leverage the strengths of both. For example, a decision tree can be used to create rules that guide the overall decision-making process, while a deep learning model can be used for specific tasks within the tree.

4. Regularisation and simplification: Regularisation techniques may reduce the complexity of a model and make it easier to interpret. For example, L1 regularisation can be applied to encourage sparsity, shrinking unimportant coefficients to zero so that only the features that matter remain (see the regularisation sketch after this list).

5. Feature engineering: By carefully engineering and selecting features, it may be possible to improve the interpretability of a model. This involves removing irrelevant or redundant features, combining features, or creating new features that are more meaningful.

6. Human-in-the-loop (HITL): Incorporating human expertise into the decision-making process can help mitigate the black-box problem. Humans are able to provide oversight, intuition, and domain knowledge throughout the process to complement machine learning models.

7. Documentation and communication: Providing clear documentation of the algorithms, their assumptions, limitations, and validation process will help build trust and understanding among stakeholders. It is also important to communicate the rationale behind the use of a specific model and its expected outcomes.

8. Model validation and testing: Regularly testing and validating models helps ensure their accuracy and fairness. This includes using different datasets, cross-validation techniques, and performance metrics to assess the robustness and generalisation of the model (see the cross-validation sketch after this list).
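
To make the first strategy concrete, the sketch below (Python, assuming scikit-learn is available) fits a shallow decision tree on purely synthetic, hypothetical portfolio features and prints its rules as plain if/then statements, which is the glass-box idea in practice.

```python
# A minimal sketch of an inherently interpretable model, assuming scikit-learn.
# The feature names and the "rebalance" label are hypothetical, synthetic data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                    # volatility, drawdown, cash_ratio (illustrative)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # synthetic "rebalance portfolio?" label

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The fitted tree reads as plain if/then rules that a human can audit.
print(export_text(tree, feature_names=["volatility", "drawdown", "cash_ratio"]))
```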
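
For models that are not interpretable by construction, a post-hoc technique such as SHAP can attribute each prediction to individual features. The sketch below assumes the open-source shap and scikit-learn packages and, again, uses purely synthetic data; it illustrates the approach rather than any particular firm's implementation.

```python
# A sketch of post-hoc explainability with SHAP, assuming the shap package is installed.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 4))                              # hypothetical risk/return features
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + rng.normal(size=400)   # synthetic target

model = GradientBoostingRegressor(random_state=0).fit(X, y)

explainer = shap.Explainer(model, X)   # model-agnostic entry point; background data passed in
explanation = explainer(X)

# Per-feature contribution to the first prediction: which inputs pushed it up or down.
print(explanation.values[0])
```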
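
The effect of L1 regularisation on interpretability can be seen directly in a model's coefficients. In the sketch below (scikit-learn's Lasso, with synthetic data in which only two of ten features matter), the penalty drives the irrelevant coefficients to exactly zero, leaving a sparser and more readable model.

```python
# A minimal sketch of L1 regularisation (Lasso) producing a sparse, interpretable model.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                                       # ten candidate features
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.1, size=200)  # only two actually matter

lasso = Lasso(alpha=0.1).fit(X, y)

# Most coefficients are shrunk to exactly zero, so the surviving features explain the model.
print(np.round(lasso.coef_, 3))
```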
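
Finally, validation and testing are routine to automate. The sketch below (scikit-learn, synthetic data) scores a model with five-fold cross-validation; a stable score across folds is one piece of evidence that the model generalises rather than memorises.

```python
# A simple sketch of k-fold cross-validation to check robustness and generalisation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # synthetic binary target

model = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")

# The spread across folds matters as much as the mean: large variance flags instability.
print(f"mean accuracy {scores.mean():.3f} +/- {scores.std():.3f}")
```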

Embracing responsible AI and ensuring we use the glass-box approach (XAI) will help in several fundamental areas. First, fairness and de-biasing: any biases or unfair advantages inherent in the models can be tracked more easily. From a risk perspective, analysing AI recommendations against logical outcomes, and quantifying and mitigating the model’s risks, ensures that any deviations are identified and handled correctly. A more in-depth understanding will also improve confidence in the underlying code and models and accelerate compliance as regulation increases. Overall, it will facilitate greater accountability, transparency and trust.

In today's ever-evolving financial landscape, the fintech revolution has brought forth sophisticated algorithms and technological developments that promise more efficient wealth management. Now we need to work together to find ways of overcoming opacity, whilst revolutionising the wealth management experience for firms and investors.
