Two friendly robots work at a desk. Image generated by AI using Midjourney

The machines are coming for written applications and written reports. Are we ready?

It is almost impossible to escape all the articles, blogs and think-pieces about ChatGPT. But how might ChatGPT and generative AI revolutionise funding in the international development sector?

Could it be the game-changing moment that those of us advocating for more participative, human, decolonised and trust-based funding have been waiting for? Or will it make us double down on and lock in many of our current ways of working that are in desperate need of change?

The machines have already come for written applications and reports…

I wrote a blog a few months ago on how scarily good ChatGPT was at writing grant applications. Since then, generative AI tools have become even better, and we will all soon be using them every day once Microsoft Co-pilot and Google’s ‘Help me write’ launch. Many time-pressured grant writers have already turned to ChatGPT or Google’s Bard to write grant applications and funding reports, and it’s easy to see why. Faced with another overly onerous grant application or report to fill in, why not ask a machine to answer the generic questions you’ve been asked by so many other funders before?

On the other side of the funding coin, the potential uses for funders are almost endless. AI could be used to scan initial applications; streamline due diligence; find and distil patterns and lessons learnt from a batch of reports; and synthesise information for governance more consistently and intentionally. AI could even host conversations with applicants online to check eligibility and to filter applications or give feedback on their quality.

The critique of long, written application forms and reports is well-documented. Forms have always favoured the white voice, the voice of organisations big enough to invest in specialist grant writers and those with MEL teams who know how to ‘write the best’. They keep funders trapped in linear, outcome-based project funding, maintaining a hierarchy in which the funder knows best and ‘marks their pupils’ homework’.

Will AI mean we finally break our addiction to written tools? I have outlined two potential scenarios (and there will be loads more).

Scenario 1: AI makes us double down on the current system

This scenario is very likely and very scary. We end up in a kind of arms race of more and more AI-generated writing. I am certain most funders will already have received ChatGPT-written (or ChatGPT-aided) applications. Grant-writing tools are springing up everywhere (examples here, here and here). The fundraisers in this blog already say it will enable them to write lots more applications. More people will be applying to more funds, using AI tools to write better applications more quickly. In turn, funders become inundated with applications and use more AI tools to filter, assess and judge the ever-increasing pile.

Rather than prompting us to question whether the system works, AI just makes it easier for us to work within and play the current system, entrenching everything that is wrong with it at the moment. Some funders, filled with horror at the possibility of people ‘gaming’ their system, invest in AI to prevent this, doubling down on new systems along the funding application journey rather than stepping back and questioning how the system works.

The speed with which generative AI is being mainstreamed makes this a scenario that is not very far down the road at all.


Scenario 2: AI makes us reimagine grant-making with human interaction, trust and relationship at the heart

The second scenario is much less dystopian, but no less radical. It is one where AI spells the end of the written application and report form. Anyone who wants funders to decolonise their ways of working, shift power, be participative and become more progressive should be over the moon.

Rather than getting defensive about everyone ‘cheating’ in their applications and reports, funders step back and see it as a chance to reimagine a better funding system. They ditch traditional written applications that can be easily gamed by AI tools and instead rely on things that can’t yet be gamed – human interaction.

To select potential grantees and to assess the impact they are having, funders visit the people involved in programmes and the communities they work in. Funders develop a more relational and human way of assessing and reporting.

Suddenly, having community members and people with lived experience of the issues discuss applications and make decisions is no longer seen as a risky way to make grants. Getting those you fund together and genuinely listening to them talk about the impact they are having, what is working and what is not becomes a much more reliable way to get a ‘report’. It becomes easier for everyone to learn about the impact of unrestricted and non-project-based funding, as project-based and linear reports become redundant. Devolving money to local and national grant-makers in the South now makes perfect sense. Everything that many people argue is too ‘risky’ has become the perfect mitigation for everyone using ChatGPT to write applications and reports.

Any moral panic about AI and ChatGPT forces funders, organisations and communities to connect on a human level and move away from paper – which can only be a good thing.

What can I do as a funder?

The key thing is to talk about it. Make sure it is on the agenda of staff and trustee meetings. You don’t need to be a technical expert in AI. Spend some time playing around with ChatGPT, Microsoft Co-pilot, Bard and other tools, perhaps using them to fill in some of your own reports or to apply for your own grants.

Think through potential scenarios for how AI will impact you and those you work with. There are so many potential ones. The two outlined above are a good starting point but you can come up with others. What about the racial bias of AI? Will it open up and democratise funding processes? Can it decolonise language? How can CSOs use AI in their own work? What is the role of civil society in regulating AI?

There are no ‘right’ scenarios, but ask yourselves which scenario you prefer. What can you do now to make it more likely to happen, and what can you do to make the scenarios you don’t like less likely? That gives you a simple but important task list for what you need to do next; but do it quickly… The machines are coming for our written applications and reports quicker than we know it.