Artificial authenticity: are NGOs risking their reputation when using AI-generated imagery?
New research from the University of East Anglia studied 171 AI-generated images used by 17 major organisations, including Amnesty International, Plan International and the World Wildlife Fund, along with more than 400 public comments those images generated.
Fewer than one in five of these comments actually engaged with the humanitarian issue the campaign was trying to highlight. In the rest, people were debating whether the images were real, or picking apart their technical quality. The cause had been lost in the noise.
The core problem is trust
Trust is the lifeblood of the NGO sector. It is the thread connecting us to our donors, our communities and our funders. AI-generated imagery is fast, flexible and increasingly affordable. However, this research suggests it can quietly erode the very thing NGOs are trying to build.
What is particularly striking is that labelling images as AI-generated did not help much. Even when organisations were transparent — 85% of images in the study were properly disclosed — audiences still shifted into a critical, sceptical mode. Rather than being moved by a cause, they became investigators, scrutinising images for flaws and questioning the ethics of the technology itself.
When the medium undermines the message
Another risk, one that often goes unnoticed, is the disconnect between how you communicate and what you actually stand for.
WWF Denmark learned this the hard way when it faced public backlash for using energy-intensive AI tools in a sustainability campaign. Supporters felt the approach clashed with the organisation’s values.
This kind of “message–medium misalignment” is something every communications team should be thinking about before reaching for AI tools. Critics have also highlighted how AI-generated imagery threatens the livelihoods of local photographers and filmmakers, and the environmental costs scale quickly: generating an AI film can consume up to 1,000 times more energy than generating a single image, amplifying both ethical and environmental concerns.
Can AI genuinely help?
It is not all bad news. There are situations where AI-generated imagery makes sense ethically, especially when working with survivors of conflict, abuse or displacement, where photography or filming could cause harm or re-traumatisation.
The challenge is that even in these cases, audiences do not always accept the trade-off. Many donors still prioritise seeing “authentic” imagery over respecting a programme participant’s right to privacy.
That tension is worth an honest conversation in your organisation. Ask yourself: if your audience knew exactly how this image was made, would it strengthen or weaken their connection to your cause? Maybe you will reach new audiences, or maybe you will lose existing supporters.
What are the solutions?
The research does not call for AI to be banned from NGO communications. It is a call to use AI more thoughtfully.
Here are five practical steps your organisation can take to do just that:
- Write a clear AI imagery policy. Every organisation using AI visuals should have a published policy which makes it clear when AI imagery is appropriate, when it is not and how outputs are reviewed and disclosed.
- Train your communications teams. Decisions about how people are depicted (e.g., their skin tone, clothing, setting and cultural markers) are not just technical choices. They shape how communities are seen and understood. Staff need practical training to make these decisions responsibly.
- Move away from photorealism. The research shows that images designed to look like photographs invite the most scrutiny and generate the most backlash. Illustrative, diagrammatic or clearly stylised visuals tend to be better received.
- Bring communities into the process. If AI is being used to depict a community, that community should have a say. Involve the people being represented in shaping prompts, reviewing outputs and approving final images. Otherwise you risk producing imagery that reflects the assumptions of a distant office rather than the reality of people’s lives.
- Tell broader, richer stories. AI-generated charity imagery tends to default to familiar tropes, such as children, poverty and crisis. Push back against this. The communities NGOs work with are diverse, resilient and complex. Your imagery should reflect that.
Public trust in institutions is already fragile. Audiences today are quicker than ever to spot synthetic images. AI is not the enemy of good humanitarian storytelling, but using it as a cheap shortcut to emotional connection is a risk the evidence shows we cannot afford to take.
The full Artificial Authenticity report is available at www.charity-advertising.co.uk.