Navigating AI and good governance
Type: Webinar
Themes: Futures and innovation

Artificial Intelligence (AI) has significant potential to transform the work of international development and humanitarian organisations, enabling them to achieve greater impact through their programmes and in response to humanitarian crises. While many INGOs are already using AI tools across various operational tasks, including content creation, grant management and data collection, they also face significant risks such as bias, cyber threats and concerns about job displacement. With individuals utilising it for everything from streamlining content creation to sorting data or creating images, how can you tell where it is being used? And if you can’t tell, how can you govern it?
This webinar explored:
- AI and its potential to benefit your organisation
- What AI is and the context for INGOs
- The opportunities and risks and where to begin
- Practical tips and advice for AI governance
Speakers
- Chair: Hugh Swainson, partner, NFP and charities, Buzzacott
- Guy Marshall, AI specialist, Fuza
- Zoe Amar, charity digital expert
Introduction
AI and digital governance – Hugh Swainson
When considering AI and governance, several key questions and themes come to mind. First, it’s essential to reflect on what aspects of AI and governance are most relevant, especially in terms of how they impact organisational decision-making and ethical practices. This involves unpacking the complexities of AI, understanding what it entails and exploring how different stakeholders within the organisation might interact with it.
The Charity Digital Code of Practice remains highly relevant when thinking about AI as it emphasises a governance perspective that extends beyond traditional IT concerns. This code encourages comprehensive organisational engagement with digital issues, focusing on leadership, governance, skills development, and ethical considerations—each of which is crucial when addressing AI’s implications in a charity context.
The potential for AI and the context for INGOs – Guy Marshall
An AI strategy can appear sophisticated and forward-thinking; however, it’s crucial to recognise AI’s limitations: it’s not as “intelligent” as it might seem. Generative AI models, for instance, don’t actually “know” anything in the traditional sense; they lack true knowledge representation and simply generate outputs based on statistical patterns. This means that trust should be carefully tempered, as these systems are essentially text processors that convert inputs to data, manipulate them, and then produce outputs. This is why generative AI, or “GenAI”, is aptly named: it generates rather than knows.
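To make the “statistical patterns” point concrete, here is a toy, hypothetical Python sketch (our illustration, not anything shown in the webinar): it “generates” the next word purely from observed word frequencies. Real LLMs are vastly more sophisticated, but the underlying principle, that outputs mirror the statistics of the training data, biases included, is the same.

```python
import random

# Toy illustration only (nothing like a real LLM's scale): generation is
# sampling from statistical patterns in training data, not recalling facts.
bigram_counts = {
    "the": {"cat": 3, "dog": 2, "report": 5},
    "cat": {"sat": 4, "ran": 1},
}

def next_token(prev: str) -> str:
    # Turn observed counts into a weighted random draw: whatever biases
    # exist in the counts flow straight through to the output.
    options = bigram_counts.get(prev, {"<end>": 1})
    tokens, weights = zip(*options.items())
    return random.choices(tokens, weights=weights)[0]

print(next_token("the"))  # a plausible continuation, not a "known" truth
```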
An example of GenAI’s limitations is seen in the biases that can emerge in generated content, such as images or text. For instance, AI-generated images depicting homelessness often carry inherent biases, reflecting the biased data on which the models were trained. This raises critical questions about whether AI, particularly large language models (LLMs), is robust enough for complex applications. Some strengths of LLMs are clear, such as reducing language barriers, which can enhance accessibility and broaden reach. However, there are risks to relying solely on these models. Human review is required as errors or misunderstandings can occur. Tools like Dataro, which claim to offer good ROI, illustrate this nuance; while they provide valuable insights, their focus is more on data analytics than on language generation, underscoring that not all data-related challenges require an LLM to solve.
Most organisations recognise AI’s transformative potential, especially in areas like fundraising and grant assessment. AI models like ChatGPT are already being widely used in these contexts, though as they become more complex, new challenges arise, particularly around bias. Managing rather than completely eliminating bias remains a priority, as fully eradicating it is complex. Ethical concerns also loom large, as AI systems are difficult to audit, explain, or fully unpack, which complicates accountability and transparency in their use.
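One concrete way to manage, rather than eliminate, bias is to audit outcomes. The hypothetical sketch below (our illustration, with made-up data, not the webinar’s) compares approval rates across groups in an imagined AI-assisted grant-assessment log; a large gap does not prove bias on its own, but it flags where human scrutiny is needed.

```python
from collections import Counter

# Hypothetical audit data: (applicant group, AI-assisted decision).
decisions = [
    ("group_a", "approve"), ("group_a", "approve"), ("group_a", "reject"),
    ("group_b", "reject"), ("group_b", "reject"), ("group_b", "approve"),
]

totals, approvals = Counter(), Counter()
for group, outcome in decisions:
    totals[group] += 1
    approvals[group] += outcome == "approve"  # bool counts as 0 or 1

for group in sorted(totals):
    print(f"{group}: approval rate {approvals[group] / totals[group]:.0%}")
# group_a: approval rate 67%
# group_b: approval rate 33%
# The gap is a prompt for human investigation, not proof of bias by itself.
```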
Suggested actions and takeaways:
- Consider creating a strategy for digital and data, including AI.
- Aim to get the benefits of AI without investing in complicated, expensive, “waterfall” R&D. Identify key opportunities in your organisation for automation, most likely in customer service or prediction; analysis and LLMs can deliver quick wins if applied ethically (see the sketch after this list).
- Support existing technical team in using off-the-shelf AI tools, probably integrated with your existing tech stack.
- Review data governance and data structures, and obtain consent to use the data you might wish to utilise in the long term for a step-change (10x) improvement with machine learning.
- Focus areas: Data and governance, Cyber, Culture and Skills
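As a concrete example of the “quick win” automation suggested above, here is a minimal, hypothetical sketch of routing inbound supporter emails with an off-the-shelf hosted model. It assumes the openai Python package and an API key in the environment; the model name and categories are placeholders we have chosen for illustration, and a fallback keeps a human in the loop.

```python
from openai import OpenAI  # off-the-shelf SDK: pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CATEGORIES = ["donation query", "volunteering", "complaint", "other"]

def triage(email: str) -> str:
    """Sort an inbound email into a category, deferring to humans if unsure."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name; choose after review
        messages=[
            {"role": "system",
             "content": "Classify the email into exactly one of: "
                        f"{', '.join(CATEGORIES)}. Reply with the category only."},
            {"role": "user", "content": email},
        ],
    )
    label = response.choices[0].message.content.strip().lower()
    # Human in the loop: anything the model mislabels goes to manual review.
    return label if label in CATEGORIES else "needs human review"

print(triage("Hi, how do I set up a monthly donation?"))
```

Even a simple pipeline like this should be piloted on a small scale first, with its outputs checked against human judgement before it touches real supporter communications.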
AI and governance for charities – Zoe Amar
There has been a shift in the charity sector’s AI discussions. Initially, questions centred on understanding AI itself—what it is, why it matters, and what changes it may bring. Now, the conversation has moved to more practical questions: how to create effective governance structures, make the case for investment, and responsibly support AI initiatives. Trustees, who are on a learning journey, are increasingly focused on how to integrate AI thoughtfully and ethically within their organisations.
To ensure responsible AI in charities, there is a strong need for skills development and continuous learning at the board level. Charities have an obligation to uphold high ethical standards, going beyond corporate norms to align AI practices with organisational values, inclusivity, and ethical commitments.
As charities develop AI strategies, establishing governance structures that include data security and clear policies is essential. Transparent communication ensures that staff understand AI’s role within the organisation, with leadership facilitating open discussions to prioritise goals and investments effectively.
AI also impacts business models, which leaders must address as part of their AI planning. Skills development and learning are key themes; charity staff, often stretched thin, need time and support to stay informed on AI advances. Executives are encouraged to support staff participation in AI forums, fostering a culture of ongoing learning and awareness.
Horizon-scanning is critical in a rapidly evolving AI landscape. Staff should engage with charity networks to anticipate AI trends and plan accordingly. As tools like ChatGPT grow, charities face questions about maintaining trustworthiness as information providers. Boards should engage in scenario planning to assess AI’s future impact thoughtfully. The AI Checklist for Charity Leaders and Trustees, created by Zoe Amar Digital, supports trustees in integrating AI responsibly and strategically.
Key questions for trustees:
- Have we scenario-planned for how AI could affect our charity?
- How might we avoid knee-jerk reactions to automating roles?
- Could AI create new competitors to charities?
- Have we given staff the space and time to learn about AI?
- How might we ensure an inclusive approach to AI?
- Can we run small pilot projects to test out its impact?
- Do we need to develop an AI policy? Have we updated our data policy?
- Is our board skilled up in AI? Do they know enough to provide scrutiny and make informed decisions?
More information and advice can be found in the presentation shared in the webinar.
How Bond and Buzzacott support INGO trustees
- Join quarterly meetings in 2024-25 – details will be on the Bond events page.
- Read the Buzzacott and Bond guide. Governance: A guide for international NGOs is an up-to-date, relevant resource and reference point for practical support and guidance.
- Join the online Governance Forum for Trustees of INGOs, a private online forum for trustees of international NGOs to share learnings, discuss concerns and hear from governance experts. Curated by Buzzacott and Bond. Contact Jemma Ashman for more details and to sign up.