15 April 2025
The Rise of ‘AI-Washing’: How PE firms can protect themselves from false claims
As concerns around corporate environmental, social and governance impacts have come to the fore, so have accusations of ‘greenwashing’, where a company exaggerates its green credentials for commercial benefit.
Now, private equity investors, regulators and other stakeholders must contend with a new phenomenon — AI-washing. Artificial intelligence (AI) has made significant strides in recent years and ever more advanced AI tools are now in reach of anyone with a smartphone. This has led to an explosion in the use of AI within goods and services, from kitchen appliances to credit decisions.
In 2022 — the year ChatGPT launched — only 10% of tech start-ups mentioned AI in their funding pitches. One year later, this had jumped to over a quarter. Today, everyone has an AI adoption strategy and ‘use cases’ they are trialling, with studies suggesting start-ups can attract 15% to 50% more investment if they mention ‘AI’.
How can prospective investors really be sure such claims are accurate — that start-ups are not merely cashing in on the hype? Examples of the latter are mounting: firms have claimed the technology can do everything from creating new flavour sensations and checking out supermarket shopping trolleys to managing investment decisions.
The problem is not particularly new. A 2019 study by MMC Ventures found that as many as 40% of firms claiming to be AI start-ups used virtually no AI at all. So, how can private equity firms protect themselves against AI promises that either don’t deliver or lead to unintended consequences?
“Companies understandably feel the need to implement AI in some way, or go under, because the expectation is that it will have transformative effects on any business,” says Steve Bear, Executive Director for Financial and Professional Risks at Gallagher. “But, precisely how will it have an effect? How does it fit into the existing business structure? Who is using it? How does it affect the customer? How are you policing it? If you're relying on a technology that isn't yet stable, because it's too new, then it's very easy to make mistakes, which can result in litigation.”
The development of AI
Developing robust, reliable AI remains extremely complex. What looks like sophisticated AI software may be nothing more than automation of pre-set behaviours or rules. Real AI must be able to perceive its environment using external data, reason through different courses of action, decide upon one and act on it. The real-world difference between intelligence and mere automation, when applied to, say, self-driving cars, could be stark.
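The distinction can be sketched in code. The example below is purely illustrative and hypothetical (it borrows the article's earlier mention of credit decisions; the thresholds and data are invented): "automation" applies a fixed, hand-written rule, while even a minimal "learned" system derives its rule from data — which is also why its behaviour can shift when the data does.

```python
# Hypothetical illustration: fixed-rule automation vs. a rule learned from data.

def automated_rule(income: float) -> bool:
    """Pre-set behaviour: the approval threshold is hard-coded by a developer."""
    return income > 30_000

def learn_threshold(examples: list[tuple[float, bool]]) -> float:
    """Derive a decision threshold from labelled (income, approved) history:
    the midpoint between the highest rejected and lowest approved income.
    Assumes the data is cleanly separable -- real systems are far messier."""
    approved = [inc for inc, ok in examples if ok]
    rejected = [inc for inc, ok in examples if not ok]
    return (max(rejected) + min(approved)) / 2

history = [(20_000, False), (28_000, False), (35_000, True), (50_000, True)]
threshold = learn_threshold(history)   # 31_500.0, inferred from the data

print(automated_rule(32_000))          # True -- the fixed rule approves
print(32_000 > threshold)             # True -- the learned rule approves too
```

Both systems give the same answer here, but only the second one changes if the training history changes — the root of both the adaptability that makes AI valuable and the instability and bias risks discussed below.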
Then there’s the risk of real AI delivering perverse outcomes. Facial recognition software has used ever more sophisticated AI for years, applying it to everything from opening your smartphone to crime and policing. However, multiple studies and investigations have found that, because this software is often trained using a small range of ethnicities, it can result in real-world bias. This potentially further marginalises specific groups of people and can result in litigation around discrimination, which may lead to costly insurance claims for businesses.
Poorly designed AI has already led to significant problems for some firms. In November 2021, US real estate firm Zillow announced hundreds of millions of dollars in operating losses after an AI model it had developed was found to have consistently overvalued property prices. The mistake knocked billions of dollars off its market capitalisation and forced it to make staff cuts. Professional Indemnity insurance could provide some protection for the resulting claims and reputational damage.
“There’s always a risk that AI does a great job one day and then misreads some data the next day and comes out with a load of rubbish,” says Bear. “Depending on where in the business you’re using it, that might have implications for customer service, human resources and ultimately the bottom line. Errors and Omissions (E&O) insurance can be an important tool in helping companies defend against claims arising from those mistakes.”
Growing concerns
Concerns around AI-washing and other pitfalls associated with the technology have grown to such levels that some jurisdictions are putting legislation in place to limit downside risk. The EU was first out of the gate, introducing the AI Act in mid-2024. This sets out risk categories for different types of AI applications. Under these new rules, misuse of AI can result in fines of up to €35 million or 7% of global annual revenue, whichever is higher.
Other regulators are already issuing charges. In March 2024, the US Securities and Exchange Commission settled its first AI-washing charges against two investment advisors, who were found to have exaggerated the use of AI in their promotional materials. The penalties were small — a couple of hundred thousand dollars each — but set a precedent for future action against other rulebreakers. Regulatory fines like these typically cannot be insured against, so firms must take proactive steps to comply with emerging AI regulations and avoid such penalties altogether.
The role of insurance
Professional liability insurance — including D&O and E&O covers — is a useful backstop for litigation; however, regulatory fines and penalties are typically not covered by insurance. The first line of defence for private equity firms should be sound governance and due diligence processes for initial investment decisions.
“Building a control function that has the right level of expertise that can interrogate specific claims is absolutely essential. But, ultimately, the buck stops at the board level. We’re very keen to work with all levels of investment organisations as a trusted partner to advise on governance and help clients pick their way through a changing landscape,” says Bear.
Insurance can play a pivotal role in protecting firms from potential risks, but solid governance frameworks are essential to ensuring long-term stability and compliance.
Let's talk

Arthur J. Gallagher (UK) Limited is authorised and regulated by the Financial Conduct Authority. Registered Office: The Walbrook Building, 25 Walbrook, London EC4N 8AW. Registered in England and Wales. Company Number: 119013.