Technology
OPINION OF THE WEEK: Beware The Hype Around AI

We have seen "greenwashing" – might there be a risk, at some point, of a similar problem with how the term "artificial intelligence" is bandied about these days by almost everybody?
Like quite a few journalists assailed by comments about how we’re
going to be side-lined by artificial intelligence, I tested
ChatGPT the other day. I put in search terms about myself to see
what it might write (my narcissism was on full blast). It was
mostly accurate, but got a few points badly wrong. I almost
cried with relief. ChatGPT can’t come up with new ideas,
understand full context, or understand why one story matters but
another might be a dead end. I am still gainfully employed.
Artificial intelligence, or AI, is all the rage. It seems to be
everywhere. By my reckoning, the AI acronym is catching up fast
with “ESG” in its ubiquity in my email inbox. It is
“hot.” Whether in Silicon Valley, London, Zurich or
Singapore, venture capital money is pouring in (well, it was
until recently, given financial market shenanigans).
To give some idea of the buzz, HSBC
Global Private Banking and EquBot have launched the
Artificial Intelligence Powered Global Opportunities Index. This
is a rules-based investment strategy featuring IBM Watson’s AI
engine and other patented technologies to turn data into
investment insights. In the US, WisdomTree has partnered
with NASDAQ and the Consumer Technology Association to develop a
bespoke index that identifies and classifies AI-focussed
companies, providing a “pure AI investment exposure,” to
quote the firm’s website. In early March, Salesforce Ventures,
the company’s global investment arm, launched a new $250 million
generative AI fund to bolster the startup ecosystem and spark the
development of responsible generative AI. This news service has
carried research and commentary on how different parts of the
wealth sector will benefit, such as in
crunching data for KYC checks and to flag potential issues,
for example.
The financial sums are large. The US Department of Commerce has
reported that global AI funding reached an estimated $66.8
billion in 2021, roughly double the 2020 figure.
Wealth managers get through a lot of data to do their jobs. This
means new computer-based tools aren't luxuries or science
fiction, but hard necessity. For example, the field known as
behavioural finance relies heavily on data to spot patterns
and flag potential warning signs, such as a person showing
excessive confidence or undue pessimism.
ESG investing, to be viable, needs large amounts of data when
screening for carbon "footprints", poor labour standards,
questionable boardroom moves, and other issues. Without the sort
of computing power now taken for granted, these ideas cannot
fly.
But there are concerns about what AI really amounts to. When OpenAI
launched GPT-4, an upgrade to the ChatGPT platform, it did not
meet with universal applause. Over at Bloomberg, for
example, columnist Parmy Olson was scathing: “Artificial
intelligence in particular conjures the notion of thinking
machines. But no machine can think, and no software is truly
intelligent. The phrase alone may be one of the most successful
marketing terms of all time.”
Her argument is that when people use the term AI, in nearly all
cases what is meant is machine learning, not some sort of
“intelligence” as she defined it. All that is going on is that
these systems are mirroring huge amounts of text. This is an
impressive feat, but it is not intelligence. (It is worth
reminding ourselves of the old "Turing Test" named after the
famous computer scientist and Enigma codebreaker Alan
Turing. Turing proposed that a computer can be said to
possess artificial intelligence if it can mimic human responses
under specific conditions.)
Beyond the Turing Test, what counts as “intelligence”
here? It is more than just being able to remember facts or
"sound" like a person having a conversation. Depending on one’s
view and philosophy, it touches on qualities such as volitional
consciousness (the idea that to think requires effort, a sense of
“taking charge” of the path your mind is on), introspection
(thinking about the process of how one thinks about something to
work out if a process is rational or not), and the ability to
imagine scenarios and come up with original ideas. These
notions – such as whether free will exists –
have divided philosophers since before Aristotle.
Why does all this matter? It matters – and Ms Olson made a
similar point – because there’s a risk that AI gets oversold,
degrades into a mere marketing term, and causes problems
when cynicism kicks in. I see this happening with ESG investing.
Also, if people make costly investment mistakes because they
entrust their retirement fund to a set of algorithms, it is
unclear whether a firm can dodge lawsuits or regulatory frowns
because “the machine made me do it”. Either way, using the term
inappropriately can lead people astray. That’s not
good.
As a fashionable area, AI is drawing in a lot of money. Wealth
managers are no strangers to fashion, and as we have seen with
the
“greenwashing” phenomenon, definitions matter. Framing
expectations around what this or that technology can do is
important.
For the time being, my prediction is that AI, however defined, will augment humans’ capabilities, but won’t supplant them. That can be wonderful – think of how we don't need to remember things like phone numbers or street maps as much as we once did because of the internet. But even here, technology is double-edged. I’d go so far as to say that there may be a danger that as our lives become ever more comfortable and we spend so much time on social media and using gadgets, our minds will become flabbier. We need to be uncomfortable, learn new skills and subjects, and develop critical reasoning skills. (How, for example, can a computer develop “healthy scepticism”?)
There are a few worries. A report earlier in March said that a group of psychologists, two from Northwestern University and the third from the University of Oregon, found via online testing that IQ scores in the US may be dropping for the first time in nearly a century (source: Phys Org, 10 March). A paper was published in the suitably named journal Intelligence by Elizabeth Dworak, William Revelle and David Condon. They analysed results of online IQ tests taken by volunteers over the years 2006 to 2018. One could assume that a similar trend might be in place in other countries. It is not entirely clear what is going on. One theory might be that in the late 20th century and early 21st, our lives became easier. Information is easier to find. The sharpness we had to cultivate to learn how to "figure things out" is not as urgent as it was for people, say, 50 years before. Admittedly, this is highly speculative.
The rapid rise of AI has got some people worried about the impact
on society. For example, Elon Musk – Tesla CEO and OpenAI
co-founder – has called for a six-month pause in developing
systems more powerful than OpenAI's newly launched GPT-4. That's
not the first time that the spacefaring entrepreneur and Twitter
owner has flagged his concerns.
Whatever the data say, though, what is clear from the dramas we
have seen at Credit Suisse and Silicon Valley Bank, and from the
positive achievements of entrepreneurs and innovators, is that at
base they all depended on human judgement, and on a grasp of
reality or an evasion of it. A computer cannot "evade" reality,
but nor, as far as I know, can it "take full responsibility" for
something.
For the time being, you are going to have to put up with
me.