Compliance
Europe Gets Closer To AI Regulation

Policymakers in Europe, the US, Asia and elsewhere are trying to figure out how to regulate AI so that they capture its financial and non-financial benefits while guarding against accidents and threats to society.
At the weekend, the European Commission applauded the
political agreement reached between the European Parliament and
the Council on legislation designed to regulate artificial
intelligence.
It comes as policymakers around the world juggle the promise of
AI, such as boosting productivity, with the need to safeguard
against potential risks to privacy, society and economic welfare.
The Commission commented on moves to ensure that the Artificial
Intelligence Act comes into force. The organisation – the
executive arm of the European Union – had proposed the Act in
April 2021.
Enforcement under the legislation takes effect two years after
the Act is formally published in the Official Journal of the
European Union. From this publication’s understanding, the Act
could take full effect from the start of 2026, with compliance
obligations starting from the first quarter of 2024.
Companies that do not comply with the rules face fines of €35
million ($37.6 million) or 7 per cent of global annual turnover
(whichever is higher) for violations of banned AI applications,
€15 million or 3 per cent for violations of “other obligations,”
and €7.5 million or 1.5 per cent for “supplying incorrect
information.” More proportionate caps on administrative fines are
foreseen for small and medium-sized enterprises and startups that
infringe the Act.
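The headline figures reduce to a simple rule: the fine in each tier is the higher of a fixed amount and a share of global annual turnover. A minimal Python sketch of that arithmetic, using the tiers reported above (the function name and tier labels are illustrative, not drawn from the Act):

```python
def max_fine(turnover_eur: float, violation: str) -> float:
    """Illustrative fine amount: the higher of a fixed sum and a
    percentage of global annual turnover, per the tiers reported
    for the political agreement."""
    tiers = {
        "banned_ai_application": (35_000_000, 0.07),   # €35m or 7%
        "other_obligations":     (15_000_000, 0.03),   # €15m or 3%
        "incorrect_information": (7_500_000,  0.015),  # €7.5m or 1.5%
    }
    fixed_amount, turnover_share = tiers[violation]
    return max(fixed_amount, turnover_share * turnover_eur)

# A firm with €2 billion global turnover deploying a banned application:
print(max_fine(2_000_000_000, "banned_ai_application"))  # 140000000.0
```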
The salience of AI, and the attention paid to it by politicians
and business leaders, have shot up this year, particularly with
the prominence of ChatGPT and similar large language model tools.
Although the technology has been developing for some time (AI has
sometimes been conflated with, amongst other things, machine
learning and deep learning), it grabbed attention in the wealth
management industry during 2023.
The political agreement in Europe is subject to formal approval
by the European Parliament and the Council; it will come into
force 20 days after publication in the Official Journal [the
record of EU legislation]. The Act then applies two years
after its entry into force, except for some specific
provisions.
“The EU's AI Act is the first-ever comprehensive legal framework
on artificial intelligence worldwide. So, this is a historic
moment. The AI Act transposes European values to a new era. By
focusing regulation on identifiable risks, today's agreement will
foster responsible innovation in Europe. By guaranteeing the
safety and fundamental rights of people and businesses, it will
support the development, deployment and take-up of trustworthy AI
in the EU,” Ursula von der Leyen, President of the European
Commission, said.
There are various approaches around the world. In the US,
there are bipartisan moves in Congress to clarify laws about AI:
in November, Senators from the Republican and Democratic parties
introduced the Artificial Intelligence Research, Innovation, and
Accountability Act of 2023 (AIRIA). The AIRIA follows President
Joe Biden’s recent Executive Order on Safe, Secure, and
Trustworthy AI and the Blueprint for an AI Bill of Rights.
In Switzerland, on 9 November, the Federal Data Protection and
Information Commissioner issued a statement on AI-supported data
processing. It said the Swiss Data Protection Law is formulated
in a technology-neutral manner and is, therefore, also directly
applicable to AI-supported data processing. Turning to the UK,
the government has issued a white paper on regulatory proposals.
Singapore, meanwhile, issued proposed guidelines on the use of
personal data in AI recommendation and decision systems.
Risk levels
The EU is taking what it calls a “risk-based approach.”
The “vast majority of AI systems fall into the category of
minimal risk,” the Commission said in a statement at the
weekend. Minimal risk applications such as AI-enabled recommender
systems or spam filters will benefit from a free pass and the
absence of obligations, as these systems present only minimal or
no risk for citizens' rights or safety. On a voluntary basis,
companies may nevertheless commit to additional codes of conduct
for these AI systems, the Commission said.
AI systems identified as “high-risk” must comply with strict
requirements, including “risk-mitigation systems, high quality of
data sets, logging of activity, detailed documentation, clear
user information, human oversight, and a high level of
robustness, accuracy and cybersecurity.”
Examples of such high-risk AI systems include certain critical
infrastructure, for instance in the fields of water, gas and
electricity; medical devices; systems to determine access to
educational institutions or for recruiting people; or certain
systems used in the fields of law enforcement, border control,
administration of justice and democratic processes. Moreover,
biometric identification, categorisation and emotion recognition
systems are also considered high-risk.
The Act also identifies a category of unacceptable risk. These
are AI systems considered a “clear threat to the fundamental
rights of people,” and they will be banned.
This category includes AI systems or applications that
“manipulate human behaviour to circumvent users' free will, such
as toys using voice assistance encouraging dangerous behaviour of
minors or systems that allow ‘social scoring’ by governments or
companies, and certain applications of predictive policing.”
Some uses of biometric systems will also be banned, for example
emotion recognition systems used in the workplace, some systems
for categorising people, and real-time remote biometric
identification for law enforcement purposes in publicly
accessible spaces (with narrow exceptions), the Commission
said.
Finally, there is what the Act calls “specific transparency
risk.”
“When employing AI systems such as chatbots, users should be
aware that they are interacting with a machine. Deepfakes and
other AI generated content will have to be labelled as such, and
users need to be informed when biometric categorisation or
emotion recognition systems are being used,” the Commission said
in its statement. “In addition, providers will have to design
systems in a way that synthetic audio, video, text and images
content is marked in a machine-readable format, and detectable as
artificially generated or manipulated,” it said.
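The Act does not prescribe a particular marking technique. As a purely illustrative sketch, a machine-readable label could be as simple as a metadata flag embedded in the generated file; here, Python’s Pillow library tags a PNG (the “ai-generated” key is a hypothetical label, not a scheme defined by the Act):

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Stand-in for an AI-generated image.
img = Image.new("RGB", (64, 64), color=(128, 128, 128))

# Embed a machine-readable flag as a PNG text chunk.
# "ai-generated" is a hypothetical key; the Act defines no specific scheme.
meta = PngInfo()
meta.add_text("ai-generated", "true")
img.save("output.png", pnginfo=meta)

# A downstream tool can detect the label when reading the file.
reloaded = Image.open("output.png")
print(reloaded.text.get("ai-generated"))  # -> "true"
```

Real-world provenance schemes are more elaborate and tamper-resistant, but the principle is the same: the label travels with the content and can be read by software.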
The Commission will convene AI developers from Europe and around
the world who commit voluntarily to implement key obligations of
the AI Act ahead of the legal deadlines.