Recent weeks have seen a flurry of media coverage about increasingly convincing Artificial Intelligence (AI) image generation. Should the sector be worried?
Hamish Crooks, Chair of the Association of Photographers, shares his perspective with Rachel Erskine, co-chair of Bond’s People in the Pictures working group.
To someone not working in the photography industry, the growth of AI image generation can seem inevitable and irreversible. How do you see things?
To be brutally honest, it’s exciting stuff! When you see what’s possible in other disciplines such as health, medicine and science, of course it’s exciting, from an inventive and creative point of view. But what I’ve learned is that most AI systems only do one thing, or at least only do one thing well. No one has yet made a multi-functioning AI that’s interconnected across a range of disciplines – the keyword being yet! A lot of what’s out there is very early days. But AI will improve and find its niche. We’re never going to put the genie back in the bottle, and if we look at the way social media companies lobbied governments globally and went almost unregulated for 20 years, I think there’s reason to be wary. Regulation will be needed. When the companies themselves ask for regulation, that does make me worry. There is a certain style of photography that NGOs like, and AI will learn that: it’s not there yet, but it will be.
Join Bond’s People in the Pictures working group!
This community is an advisory group on ethical approaches to gathering and using images. The group shares knowledge and best practice to move the debate on NGO imagery forward, and provides space for peer-to-peer discussion and support.
What are the potential implications of AI for the way photography, and the people who produce it, are valued?
Computer-generated imagery isn’t new. Before AI, we had CGI – but with CGI, you’re building a picture from absolutely nothing; you’re starting from scratch, and that makes it very expensive. AI image generation uses what already exists: it combs through a dataset that’s in the public domain. That makes it a lot cheaper – and this is where some of the controversy comes from. Any platform that is open to the public can be scraped, with obvious implications for copyright. Photographers never had the chance to opt out, and they weren’t paid for the images that were picked up.
I don’t think all photography’s suddenly going to die. News photography will never be replaced, because it has to be real, and there are very strict rules around that. But we’re already seeing the two leading photo agencies, Getty and Shutterstock, link up with AI image generators to cut expenses. Those expenses might be the people who, to date, have created photos for them. Getting photographers paid fairly is an ongoing battle, and this is only going to make it harder. I’m not saying we shouldn’t be using AI; I’m just saying we should be thoughtful about it.
It has been suggested that because AI imagery doesn’t show “real” people, it offers an opportunity to circumvent some of the ethical issues that a charity fundraising campaign can pose. From this point of view, AI can shield us from questions of fair or accurate representation. What do you think?
We must remember that although these images aren’t real, they are based on real images. As a result, the end product will carry the same biases as the datasets that have been used to create it. If the AI is scraping poverty porn, then the result will be poverty porn, so it doesn’t solve the problem. The number one rule in photography still applies here: you only read a picture through your cultural leanings.
What steps can NGO communicators take to protect our existing pictures and the people in them?
I think it’s more important than ever to have a sound metadata policy. It might seem boring, but if you don’t have metadata in your images, you don’t have ownership. No one knows they’re yours. The metadata might get stripped out by the AI, but it needs to be there in the first place if you want to find out what datasets your images have ended up in. You might also want to update your consent policy to make it clear that AI scraping is among the potential secondary uses of your imagery.
One thing AI isn’t very good at yet is context. I once used image recognition to search for a picture of a bus and it showed me a bus that had been blown up by a bomb. For now, because AI only reads pictures from vast publicly available datasets that were, in most cases, scraped without permission, those datasets lack the enriched metadata, such as keywords, that would enable AI to make better value judgements based on a human’s cultural reading of images. But it is catching up: Getty and Shutterstock have these resources and will refine today’s cruder AI output into a vastly better product, aligned with their skills in licensing imagery. And for this sector, if your NGO has a particular aesthetic, then that’s what the AI will start to look out for and deliver specifically on request.
Do you think AI represents a point of no return for NGO communications?
Someone at our event wrote in the chat that “real people with real stories will always be important”, and I think that’s the thing: you can’t use AI to write the kind of human stories INGOs produce. AI will keep growing and improving, but I think INGOs will continue to tell stories and use real photographers. AI might just be another tool in your toolbox.
In the past, I saw NGO campaigns, such as one produced by Shelter in the 1990s, using real stories accompanied by footage posed by models as it would have been unethical to show real people in those situations. Will the NGO sector use AI in a similar way? I guess that depends on how successful those campaigns are.
Your industry has a very strong ethical foundation, and I’d be really interested to know how you end up using the technology. How are you navigating those ethical issues? Are you getting the things you wanted from it? Those are questions I am desperate to know the answers to!
The above is an edited version of a conversation held on Thursday, 16th March between Hamish Crooks and People in the Pictures co-chair Rachel Erskine; the full recording is available to members of the People in the Pictures working group, which as a member of Bond you can join here. To find out more about Bond membership, contact Rachel Phillips, [email protected]