Encouraging Responsible Decision-Making With Explainable AI

Nuno Godinho 14 May 2024

The author of this article argues that there is still a general lack of accountability across the board when it comes to educating people on the responsible use of AI technology.

The wealth management and wider financial services sector continues to digest what AI means for business practices – and also what it does not mean. Definitions are important. In this article, Nuno Godinho, group chief executive of Industrial Thought, gives an outline of what is meant by “explainable AI.” (More on the author below.)

As AI develops, the editorial team will continue to follow developments. We welcome feedback: email tom.burroughes@wealthbriefing.com. The usual editorial disclaimers apply. 


Moving from a black box to a glass box model is imperative if we want to unlock AI’s potential and create a sustainable future built on transparency and trust. The development and deployment of finance-related AI systems is moving fast. Still, when AI algorithms make decisions without providing clear reasoning and traceability, it’s impossible to understand what’s happening beneath the surface. Consequently, it’s difficult for stakeholders to embrace digital innovations in the high-risk world of finance without knowing how complex algorithms make their decisions.

Explainable AI (XAI) is seen as the answer by many. But what does that involve, and how can different parties harness the technology ethically and responsibly?

Broadly speaking, XAI relies on four key principles:

-- Explanation – AI systems should be able to provide clear explanations for their actions; 
-- Meaningful – the explanations provided need to be understandable and meaningful to humans; 
-- Explanation accuracy – systems need to state clearly how accurate each result is and which actions were taken; and
-- Knowledge limits – a system should clearly state the knowledge limitations and uncertainties inherited from its training dataset, indicating that results were produced under certain conditions, based on data accurate up to a certain date and relevant to a specific market. 
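
To make these four principles concrete, here is a minimal, hypothetical sketch in Python of how a system's output could carry an explanation, an accuracy estimate, and its knowledge limits alongside the recommendation itself. The class and field names are illustrative assumptions, not any particular vendor's API.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class ExplainedRecommendation:
        """Illustrative container mirroring the four XAI principles."""
        recommendation: str        # the action the system proposes
        explanation: str           # Explanation: the evidence and reasoning used
        meaning_for_user: str      # Meaningful: context a human can act on
        accuracy_estimate: float   # Explanation accuracy: confidence from 0.0 to 1.0
        data_cutoff: date          # Knowledge limits: how fresh the underlying data is
        market_scope: str          # Knowledge limits: the market the result applies to
        caveats: list[str] = field(default_factory=list)

        def within_knowledge_limits(self, market: str, as_of: date) -> bool:
            """Flag requests that fall outside the system's stated capabilities."""
            return market == self.market_scope and as_of <= self.data_cutoff

Under this sketch, a recommendation built on UK data accurate to 10 May 2024 would fail the within_knowledge_limits check for a query about another market or a later date, signalling that the user should treat the result, and its explanation, as unreliable.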

If investment professionals are giving advice using AI tools, there is an inherent duty of care. Yet due diligence is only possible if an explanation of the system's recommendations is available to them. 

To ensure that this happens, developers must, first, build ways of generating an explanation into the algorithms and tools they create. This enables advisors to understand the evidence and see what information was used to generate the output. However, the buck doesn't stop with developers: if an explanation exists, the user has to ask for it. 

If we ask ChatGPT an open question without establishing clear parameters, we are voluntarily choosing to work within a black box setting by placing our trust in the unknown. Users must want an explanation, not least because large language models will adapt based on whether we prioritise speed or accuracy. While establishing reasoning may take longer, the results will be better for everyone.

An explanation also needs to be meaningful. When we use AI to generate a job description, for example, it’s not enough to receive a list of necessary skills; we should also seek to understand why these skills are important for the role. The same goes for investment recommendations; we can’t act on system-generated decisions with confidence if we don’t have context.

Then there is explanation accuracy. Every response from an AI system has an accuracy level; unfortunately, users rarely request it. Just as we should ask for explanation and context, we should also check how the tool rates the accuracy of its recommendations in order to manage expectations.

Lastly, it’s crucial to ascertain a system’s knowledge limits and know whether a request falls outside of its operating capabilities or pushes its knowledge boundaries to such an extent that the explanation is unreliable.

Owning our involvement
Wealthtech companies creating algorithms must ensure that they adhere to the four principles outlined above: explanation, meaning, accuracy, and knowledge limits. Currently, not all of them do. With regard to knowledge limits, are they being sufficiently clear about what data their investment recommendations are based on? Can users easily tell whether the market data is real-time, from today, or from the previous day? Do they know how accurate the recommendations are and why they should trust them? 

Advisors need to ask questions. That also means knowing how to ask the right questions so that they can establish a comprehensive picture. They should be very clear about the parameters they are working within. It's not enough to ask for a list of the top ten shares for consideration. They need to say who they are, what their role is, what context they want, and what their goals are. They should even go as far as telling the system to take its time and give sources for each of its recommendations. 
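
As a purely illustrative sketch, the kind of parameterised request described above might be composed like this; the helper function and its wording are hypothetical, not a prescribed formula.

    # Hypothetical helper that bakes role, context, and goals into one prompt,
    # and explicitly asks for reasoning, confidence, data freshness, and sources.
    def build_advisory_prompt(role: str, client_context: str, goal: str) -> str:
        return (
            f"I am a {role}. {client_context} "
            f"My goal is to {goal}. "
            "Take your time. For each recommendation, explain your reasoning, "
            "state how confident you are, note the date your data runs up to, "
            "and cite the source you relied on."
        )

    print(build_advisory_prompt(
        role="UK-based wealth advisor",
        client_context="My client is risk-averse and investing for retirement income.",
        goal="shortlist ten shares for further human review",
    ))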

The more precise the input, the more relevant the response will be. In turn, advisors will be better equipped to use the information responsibly and explain its provenance to their clients.

To reiterate, when finance professionals are using AI technology, they must be deliberate, defining their requirements step-by-step. AI was positioned at the Peak of Inflated Expectations on Gartner Inc's Hype Cycle for Emerging Technologies 2023. Appetite for its ability to help satisfy client demands is growing – AI certainly offers many advantages for our industry and its customers. Still, it shouldn't be seen as a silver bullet. 

AI tools aren’t a quick fix or suitable for every purpose. They are there to automate and augment the intelligence and accuracy of existing or new processes when based on best practices, and not as a “magic trick.” The role of prompt engineering is an area ripe for development so that more people in our industry are trained on structuring their questions and optimising prompts for the most effective answers.

Right now, there is still a general lack of accountability across the board when it comes to educating people on the responsible use of AI technology. The main thing we can do is acknowledge that AI-driven decision-making needs very careful thought, and encourage collaboration between fintechs and wealth management providers as digital transformation continues to evolve as an undisputed force in modern finance.

About the author

Nuno Godinho is the group CEO of Industrial Thought Ltd. Originally from Portugal, and with more than 25 years of experience in business and digital transformation, he has founded several strategic businesses, been instrumental in launching multi-billion-dollar businesses, and held senior leadership roles in world-leading organisations, including Microsoft, GE, SAGE, and EMIS Group Plc. He has long-standing expertise in managing the complexities of creating and growing global businesses spanning industries such as finance, healthcare, pharmaceuticals, and sports.
