16 August 2024
Beyond the Code: Exploiting the human element for cybercrime in asset management
First, there is the research. Then there is the hook, the play, and finally, the exit.
While this may sound like the playbook from a bad crime movie, it is actually a high-level overview of the four stages of a social engineering attack.
Social engineering's human element distinguishes it from other types of cybercrime. Rather than exploiting technology flaws, social engineering relies on tricking a person into providing confidential or sensitive information or transferring funds.
It makes sense to leverage the human element in cybercrime. After all, human error has been cited as a major factor in approximately 88% of all data breach incidents.
Now, artificial intelligence (AI) has upped social engineering’s game. Of particular concern is AI’s ability to create deepfakes that impersonate individuals using audio and video. Even the most cyber-savvy asset management professional could easily fall for today’s highly sophisticated social engineering scams.
A prime target
In addition to trillions of dollars in assets, asset managers have responsibility for a wealth of high-value data. This often includes sensitive client and employee information (personally identifiable information, or PII), as well as intellectual property, such as a trading algorithm or other proprietary technology.
Another reason why asset managers have become so popular with fraudsters is that the structure of these firms provides multiple social engineering entry points. Even the largest firms rely on a range of third-party service providers, such as accounting firms, consultants, custodians, data providers, distributors, transfer agents, and recordkeepers.
For smaller firms, such as hedge funds, “Vendor security is a fundamental risk concern,” notes Alex Burton Brown, an Executive Director at Gallagher Specialty.
Because they tend to have a relatively small staff headcount, hedge funds typically outsource a significant amount of non-core business activities. This is particularly true for back-office operations such as investment compliance and fund administration. This makes them especially vulnerable to a phishing attack via a trusted third party.
Of all cyber threats, ransomware remains a top concern and a significant driver of claims. Tactics are becoming increasingly sophisticated, with double extortion now commonplace and cyber criminals typically tailoring the size of the ransom demand to the organisation.
The new face of phishing
While exploiting a company’s technology is still at the core of social engineering, sometimes cyber fraud has a surprisingly low-tech component. For example, tailgating involves a fraudster gaining physical access by following someone into a building, often by posing as a delivery person or claiming that they left their work ID at home.
SIM swapping is a form of phishing that allows a fraudster to take over an employee’s phone number. After obtaining personal information about the employee, the fraudster calls the phone company, claims the employee’s phone was stolen, and has the victim’s number transferred to a SIM card the fraudster controls.
Enter AI
Generative AI uses algorithms to develop a wide range of content, including audio, video, images, and text. It can also be used to create new and sophisticated types of malware.
Estimates suggest that AI can generate a deceptive phish in just 5 minutes, while a human might take around 16 hours to craft one.
“There’s a good reason it’s been called social engineering on steroids. Generative AI has given social engineering a big boost, making it stronger and more complex than ever before,” notes Jonathan Drinkwater, Divisional Director with Gallagher Specialty.
Particularly worrying for asset management firms is generative AI’s ability to create fraudulent documentation. This can include bank statements, board minutes, and other documents routinely reviewed during due diligence.
Deepfakes
The ability to impersonate a human is perhaps the most disturbing aspect of generative AI-powered social engineering. Earlier this year, a deepfake posed as a multinational company’s CFO in a video call, tricking an employee into remitting USD25.6 million to fraudsters. Using such technology, criminals can pose as a client, a potential investor, a counterparty, or even a regulator. Unfortunately, the possibilities are endless.
How insurance is responding
Social engineering is a comparatively new exposure, and Gen AI-driven technology has only recently begun reshaping the threat landscape. As such, those looking for robust claims data specific to this exposure will need to wait.
As a report from Lloyd’s of London noted on the impact of AI on cybercrime: “Overall, it is likely that the frequency, severity, and diversity of smaller-scale cyber losses will grow over the next 12-24 months, followed by a plateauing as security and defensive technologies catch up to counterbalance.”
The key question from an insurance perspective is how insurance can respond to the consequences of social engineering. Broadly speaking, those consequences are:
- Legal and regulatory actions resulting from the inadvertent disclosure of PII
- Ransom payments in response to ransomware threats
- The transfer of funds to fraudsters in reliance upon deepfakes or other forms of impersonation fraud utilising information obtained through phishing or other electronic fraudulent means
- The costs of third-party professionals to rectify damaged computer systems following a social engineering attack
Which insurance policy responds?
The two classes of insurance that provide the best protection against social engineering risk are Cyber insurance and Crime insurance.
Cyber insurance covers many computer-based risks, primarily: data breaches resulting in the loss of sensitive information; the impairment of computer systems; and interruption to business resulting from cyber incidents and ransomware attacks. However, it does not respond to all computer-based risks. Most notably it does not usually cover the misappropriation of funds via computer-based means, e.g. impersonation fraud via email. That risk typically sits under Crime insurance.
Crime insurance includes coverage for the misappropriation of funds via certain fraudulent electronic methods, including funds transfer fraud. For example, email communications where the fraudster poses as a customer of the insured and requests the transfer of funds. Most policies will refer to fraudulent electronic communications in this respect. As deepfakes are a relatively new development, many policies do not make specific reference to fraudulent video communications. Whilst we think the reference to electronic communications could incorporate video, we are working on updating our policies to make specific reference to such communications.
So, the best approach to insuring against social engineering risk is to purchase both Cyber insurance and Crime insurance. Traditionally, the insurances are provided under separate policies (with separate limits of liability), with different insurers. There has been a recent move by some insurers to offer both insurances under one policy. We do not believe this development is necessarily advantageous as there are some notable benefits of buying separate policies. For example, specific Cyber policies include additional services and vendor panels that are often missed from combined policies.
The long cybercrime
Even the largest and most sophisticated investors can be the target of an attack. In 2020, cybercriminals stole USD10 million from the Norwegian Investment Fund, one of the world’s largest sovereign wealth funds.
The spoofed email the fraudsters sent contained false payment details, diverting a loan that had been authorised for a microfinance organisation in Asia to the fraudsters’ bank account.
The attack succeeded because the scammers had spent months collecting information about the fund’s employees, financial operations, processes, and procedures. During this time, they learned about the loan, which they then diverted via wire transfer fraud.
Variations of a scam
Business email compromise (BEC) may seem old school compared with newer types of phishing, such as:
- Vishing — phone calls and voicemails either made by an individual or a robocall.
- Smishing — messages sent via text to a phone or tablet.
- Quishing — QR codes that direct the recipient to a malicious site.
As is the case with BEC, these fraudulent communications often appear to come from a well-known company, which the recipient might routinely hear from.
Arthur J. Gallagher (UK) Limited is authorised and regulated by the Financial Conduct Authority. Registered Office: The Walbrook Building, 25 Walbrook, London EC4N 8AW. Registered in England and Wales. Company Number: 119013.