If AI raises the standards expected of advisors of all kinds, then failing to incorporate it into business practices might itself come to be seen as negligent, the authors of this article argue.
The ways in which AI can or cannot be used to help guide decisions, or even make them, are bound to be a growing area of interest across many business sectors. That includes the world of legal and wealth management advice. The capabilities of artificial intelligence are developing rapidly and this news service is keen to share different perspectives on the topic.
The authors are Rosie Wild, partner, and Anna-Rose Davies, associate at legal firm Cooke, Young & Keidan. The editors are pleased to share these views; the usual disclaimers apply to views of guest writers. Jump into the debate if you have a view. Email firstname.lastname@example.org
2023 has been a landmark year for artificial intelligence's entry into the national conversation. Take Google searches, for example – AI-related searches are registering approximately tenfold the interest levels seen in 2004. In the last decade, we've witnessed ambitious attempts to change the way we live and work using AI tools, from autonomous vehicles to investment decision-making. With no signs of imminent regulatory intervention on the horizon, and with AI tools steadily becoming more accessible to users, the scope and diversity of these applications are only set to grow.
While AI tools can excel within the controlled boundaries of a rule-based game, their performance in the unconstrained real world is far more uncertain. What is certain is that in any situation where judgment calls are debated, the same issues will arise when AI takes on the role of decision-maker.
Courts have already considered scenarios where a trading platform autonomously enters into contracts as a product of its coding. Legally, there's no issue with human operators programming their systems to enable this, and there's no inherent reason why AI systems couldn't be granted the same authority. However, humans often disagree over whether they have entered into a contract, and these disagreements will no doubt continue to occur when AI serves as the decision-maker, especially in light of regular news stories of AI systems behaving unexpectedly.
If an AI program were to engage in what a reasonable person might deem an objectively unfavourable agreement, the party on the "losing" side might argue that the deal was so evidently one-sided that both parties must have recognised a mistake had been made. However, it is not always an easy task to convince a court that a contract should be invalidated due to a mistake, and this is likely to remain the case (and potentially be even more challenging) when it comes to use of AI software in investments.
The issues at play escalate when AI is working for you, rather than against you. A wide variety of investment platforms now offer advice through AI tools, employing algorithmic software to provide investment recommendations based on customer input. Taking it a step further, there have been instances where fully autonomous AI systems reportedly outperform seasoned stockbrokers and, already, we understand that the largest “robo” advisors have hundreds of billions of dollars in assets under management. Even if anecdotes like these don’t necessarily reflect the average performance and may simply be particularly fortunate instances, this practice is undoubtedly here to stay.
An advisor who is in the business of suggesting (or claiming to suggest) investment opportunities for individuals must ensure that they are meeting the standards expected of a reasonably competent investment advisor. Failing to do so exposes them to the risk of being found negligent and therefore liable to compensate individuals for their losses. One clear downside of using AI systems (not to mention more basic algorithmic investment advisors) is their lack of common sense – AI may not respond to a typo or a clearly erroneous user input in the same manner as a human advisor would.
Even more interesting is what we could call the "Google Maps effect" – the digital version of the "Lonely Planet effect." If the shortcuts Google Maps worked out were ever a well-kept secret among a select group using them to navigate quickly through towns along unconventional routes, it did not take long for those same routes to contribute to traffic congestion as the user base expanded.
Now, consider the scenario where the investment world's equivalent of Google Maps triggers simultaneous sell orders for the same stock, potentially resulting in flash crashes even more dramatic than those we have seen before. It might require some imaginative thinking to put together a legal course of action for such a situation, but certainly not a lot.
If this is the path we are heading down, and if the instances where AI outperforms seasoned stockbrokers are not just standalone incidents, then it would certainly take a lot for advisors to disregard the benefits of AI software in their line of work. In addition, if AI raises the standards for advisors across the board, there may come a point where it would be considered negligent not to incorporate AI into one's practices.