Compliance
The EU Artificial Intelligence Act: Effects On Wealth Management And Beyond – Part 2
This is the second in a three-part series examining how new European Union rules on artificial intelligence will affect the development and use of AI, and what this means for the private banking and wealth management community.
(See part one, here.)
Content summary
Artificial intelligence (AI) continues to be the focus of
increasing public attention; board members at wealth management
firms and financial institutions around the world need to take AI
technologies seriously. This article – from compliance,
regulatory, and legal experts – discusses
AI’s impact on financial services and wealth management firms,
outlines what firms in the industry can do, and suggests a potential path
forward. In light of recent stories and euphoria concerning AI
and its reported capability to drive the biggest labour market
shift since the industrial revolution, we are writing three
articles on the topic; a trilogy from AI & Partners personnel
Sean Musch and Michael Charles Borrelli.
In this second article we cover an introduction to the European
Commission’s proposed Harmonised Rules on Artificial Intelligence
(the EU AI Act) and its impact on the wealth management sector.
(Editor's note: to comment, email tom.burroughes@wealthbriefing.com;
the usual editorial disclaimers apply to views of outside
contributors.)
Compliance – the competitive differentiator
Practical guidance for compliance
Significantly, the EU AI Act (and AI regulation generally) is
expected to improve the business of financial institutions
and wealth management firms. Not only does the EU AI Act seek to
protect the rights of EU citizens globally, but in doing so it
also creates a framework for the safe deployment of AI systems,
thereby encouraging their confident adoption, roll-out and use to
improve business productivity and services, while managing client
perception, expectation and acceptance of these new
technologies.
Successful development, deployment and use of AI never depends on
one person; it rests on collaboration between different teams
and roles within a business. The EU AI Act functions in much the
same way, drawing on the strengths of a variety of roles to
implement its requirements across the business and achieve
compliance.
As they embark on their “EU AI Act journey,” banks and
wealth management firms will discover specific strengths,
weaknesses, and limitations in what they can do to promote and
foster safe, secure and trustworthy AI systems, all depending on
a person's position and the responsibilities they hold within
their business. Firms should establish and operate an EU AI Act
Framework comprising five phases to help achieve readiness, as
shown in Figure 1: Assess, Design, Transform, Operate and
Conform. The goal of the framework is to translate EU AI Act
obligations into actions and outcomes that help clients
effectively manage safety, security, and trustworthiness,
reducing risk and avoiding incidents.
Figure 1: AI & Partners’ EU AI Act Framework
Implementing transparency measures
Transparency is at the core of fostering client trust. Firms
should adopt AI systems that can be explained and interpreted,
enabling clients to understand how AI-driven decisions are made.
By providing clear insights into how AI algorithms function,
firms can address potential biases and avoid opaque
decision-making.
The need or desire to access information about a given AI system
can arise for many reasons, and many concerns may be addressed
through transparency measures. One important function
of transparency is to demonstrate trustworthiness which, in turn,
is a key factor for the adoption and public acceptance of AI
systems. Providing information may, for instance, address
concerns about a particular AI system’s performance, reliability
and robustness; discrimination and unfair treatment; data
management and privacy; or user competence and
accountability.
In an article from the Financial Conduct Authority (1), the task
of developing an organisation’s approach to AI transparency is
described as identifying a wide range of potentially relevant
types of information and deciding how to deal with each of them.
Three salient considerations arise when reflecting on the
‘why’, ‘who’ and ‘when’ of transparency:
Figure 2: The "Transparency Matrix"
Ensuring human oversight in AI systems
While AI enhances decision-making, human expertise remains
irreplaceable. Organisations should remain aware of the need for
robust human oversight mechanisms that allow people to review,
validate, and intervene in AI systems where necessary (the
“Hybrid Approach”). This combination of AI and human judgment
ensures a balanced approach to investment strategies, reducing
the risk of AI-driven errors.
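The hybrid approach can be illustrated in code. The following is a minimal, illustrative Python sketch, not any firm's actual system: the function names and the confidence threshold are hypothetical assumptions. A recommendation is executed automatically only when model confidence is high; otherwise it is routed to a human reviewer, who may approve, amend, or reject it.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Recommendation:
    """An AI-generated investment recommendation (illustrative)."""
    action: str          # e.g. "rebalance", "buy", "hold"
    confidence: float    # model confidence in [0, 1]
    rationale: str       # explanation shown to the human reviewer

def hybrid_decision(
    rec: Recommendation,
    human_review: Callable[[Recommendation], Optional[str]],
    auto_threshold: float = 0.95,  # hypothetical risk-appetite setting
) -> str:
    """Route a recommendation through a hybrid (human + machine) gate.

    High-confidence recommendations proceed automatically; everything
    else requires explicit human sign-off before execution.
    """
    if rec.confidence >= auto_threshold:
        # Automatic path: still auditable, never silent.
        return f"executed:{rec.action}"
    # Oversight path: a human reviews, and may override or reject.
    decision = human_review(rec)
    if decision is None:
        return "rejected"
    return f"executed:{decision}"

# Usage: a reviewer who approves the AI's proposed action as-is.
approve = lambda rec: rec.action
print(hybrid_decision(Recommendation("rebalance", 0.80, "drift > 5%"), approve))
```

The design point is that the machine never holds sole decision-making authority below the threshold; the human reviewer's judgment, informed by the recorded rationale, is the final step.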
Data from Accenture (2) show that as companies learn from their
initial use cases and the potential of AI technologies becomes
clearer, AI quickly rises to a “C-suite” priority.
Nevertheless, truly realising that potential is much harder:
in Accenture's survey, 76 per cent of executives said they
struggle with how to scale AI across the enterprise (2).
One of the key roadblocks to scaling AI is governance and risk
management. Specifically, AI decisions in wealth management have
a real impact on people's lives. Placing decision-making
capability in the hands of a machine gives rise to large
questions around ethics, trust, legality and responsibility. In
this respect, both explainability and oversight are paramount,
with the hybrid approach (human and machine) needed. Figure 3
shows how this can be embedded in an organisational context.
Figure 3: Intersection of organisational elements
Collaboration counts in the long-term
Advantages of striking partnerships
Collaboration with trusted service providers specialising in AI
and compliance brings immense advantages to firms entering a new
technological realm. Specialist service providers possess deep
expertise in navigating complex AI regulations, allowing senior
managers to focus on their core competencies.
For example, Project Infinitech (3), a consortium-led initiative
comprising 48 participants across 16 EU member states, has piloted
AI-driven products and services with use cases spanning
Know Your Customer (KYC), customer analytics, personalised
portfolio management, credit risk assessment, financial crime and
fraud prevention, insurance, and regulatory technology (RegTech)
tools that incorporate data governance capabilities and facilitate
compliance with regulations. Collaboration, in this sense, was core
to its success.
A multitude of opportunities exist to streamline operations
Embracing AI in wealth management presents a multitude of
opportunities to streamline operations, improve client
experiences, and deliver superior investment strategies. However,
compliance with the EU AI Act is of paramount importance to avoid
large fines and litigation risks, and to maintain integrity and
trust in the industry.
Wealth managers should prioritise transparency and human
oversight while proactively complying with AI regulations. By
partnering with professional services firms like AI & Partners,
wealth managers can confidently navigate complex legalities,
ensuring that their AI systems remain compliant and, most
importantly, build trust with their clients through responsible
AI usage.
Footnotes:
1. https://www.fca.org.uk/insight/ai-transparency-financial-services-why-what-who-and-when
2. https://www.accenture.com/gb-en/insights/capital-markets/wealth-management-artificial-intelligence