Should trustees be worried about the impact of artificial intelligence on Master Trusts?

Donna Walsh

September 18, 2023

4 mins read

Artificial intelligence is set to reshape all our lives – and this could include our pensions. Trustees therefore need to understand its limitations, ensure members’ information is appropriately protected, and keep pace as this technology evolves.

Picture the scene. A well-known and widely trusted financial expert appears in a video promoting an Elon Musk investment opportunity. The video claims that Musk’s new project “opens up great investment opportunities for British citizens”, and is shared repeatedly across multiple social media platforms, including Facebook, Instagram and X.

Interested? A lot of us might be. Particularly nowadays, when many of us feel poorer. 

The trouble is, the promotion was completely fake. An AI-generated deepfake. 

The financial expert supposedly promoting this opportunity was Martin Lewis, the MoneySavingExpert.com founder.

Except that Martin Lewis had nothing to do with it. Neither did Elon Musk.

Yet the video was terrifyingly convincing, due to the computer-generated impersonation of Martin Lewis’ face and voice. 

Martin Lewis soon posted his own message on social media, warning anyone who saw the video that it was an attempt by criminals to steal their money. 

 

Robots have arrived

Sadly, such scams could be the tip of the iceberg in terms of what people may face in future. And they represent part of a much larger trend that seems likely to revolutionise our lives: artificial intelligence (AI).

Recently, AI has been used for everything from writing songs and essays to powering driverless cars, chatbot therapists and the development of new medicines.

And it can be increasingly difficult to tell AI-generated work from human work. For instance, the winner of a major photography award in 2023 later revealed that his image had actually been created using AI. Meanwhile, a song using AI to clone the voices of Drake and The Weeknd recently went viral on social media.

So what do these changes mean for how people might make financial decisions? And how can people stay safe online when there are so many competing sources of information?

And what is the role of financial providers in this space, when some people might trust their social media platforms more than traditional financial services companies, and be more inclined to invest in cryptocurrency than a pension?

 

It’s not all bad

ChatGPT itself was recently asked what AI could mean for DB pension schemes. Its generated article was generally very supportive of the impact of AI (perhaps unsurprisingly), and suggested AI could add value in the following areas:

1. Enhancing operational efficiency and accuracy
2. Risk management and predictive analytics
3. Improved member engagement

Where the data is good enough, AI might assist human administrators or investment managers with repetitive tasks such as data processing, calculations and member communications.

With respect to risk management, AI could potentially help by analysing vast amounts of data to identify patterns. This could provide trustees with more accurate risk assessments, for example with respect to investment outcomes, data protection and cyber security.

And in terms of member engagement, pension member queries could potentially be dealt with in a similar manner to many online retailers, where chatbots and virtual assistants are becoming a common feature. This might allow staff to focus on more complex tasks.

All of these opportunities are accompanied by risks, however.

 

Unconscious bias

In pensions, neutrality is vital and conflicts of interest must be thoroughly managed. For example, communications with members must be carefully drafted so that they cannot be perceived as advising on, or influencing, members’ decisions.

And there may be some susceptibility to bias in AI tools, which would have to be carefully controlled. Recently, the EU's competition chief said AI's potential to amplify bias or discrimination was a pressing concern.

Such bias was alleged when the Department for Work and Pensions (DWP) widened its use of AI to assess universal credit applications and tackle fraud. 

Any such use of AI would therefore need to be accompanied by tight governance and controls, with all final decisions made by a human.

Then there are environmental considerations. Training large language models like ChatGPT consumes significant energy and water, and trustees need to be mindful of this as part of their overall environmental, social and governance (ESG) strategy.

 

Balancing act 

For Master Trusts, it will fall to trustees to strike the right balance between harnessing AI for the benefit of members and not exposing those members to undue risk.

This means understanding its limitations, ensuring members’ information is protected and keeping pace as this technology evolves.

For the foreseeable future, at least, AI is unlikely to take over the pensions industry.

Many of us will probably continue to want to interact with other people when making significant financial decisions.

For instance, when deciding how to use our pension pots, many of us would still want to speak to a human expert, even if most of the calculations and recommendations up to that point had been produced by a computer. 

AI is also likely to have a tougher time empathising with people – particularly those with pronounced vulnerabilities, whether physical, mental or financial.

Often, it is only by living an experience first-hand, even just for a moment, that we can start to understand what people are going through. 

Imagine, for example, someone trying to access information online and make serious financial decisions while suffering from arthritis. Or an eye condition such as cataracts or tunnel vision.

AI may be able to understand the physical consequences of these conditions, but what about the feelings of vulnerability and isolation that someone in this situation might experience?

 

Somebody to listen 

Many of our customers experience poor mental health, and the increased cost of living has only made things harder. 

Simulating the pressures of surviving on a tight budget is near-impossible. But Standard Life has created a virtual reality tool to allow our staff at least a glimpse. When you put on the VR headset, you find yourself witness to a simulated call between a customer and a Standard Life employee.

To your left you can see the customer sitting at her kitchen table on the phone, explaining how she is struggling with her finances and mental health. 

She speaks at times with her head in her hands, tears rolling down her cheeks. It’s as if you’re in the same room as her. You can see her kitchen counter in the background, the ironing board and a pile of laundry on the other side of the room, her houseplant and lino flooring. 

To your right, you can see the Standard Life employee she is speaking to, sitting in an office complete with phone headset, mug of tea on her desk and colleagues working in the background. She listens sympathetically and tries to help.

Of course, this whole experience is triggered by technology. But it is felt by humans, all of whom crave a sense that what they feel is understood by someone else. And perhaps this is where the real opportunity lies: human emotion and engagement aided by technology to provide better service and financial outcomes for more of us.

For the avoidance of doubt, this article was written by a human!
