NGOs need to share and use evaluations to create change

27 June 2017
Author: André Clarke

In an ideal world, all evaluations would influence decision-making, improve practice and contribute to wider sector knowledge. I don’t for a second doubt the commitment of NGOs to better understand what works and what doesn’t, for whom, in what ways and under what circumstances. But we need to remember that the reason we do evaluations is to learn and change – and we must commit to doing so.

I recently attended the UK Evaluation Society Conference, where Bond was part of a session on utilisation-focused evaluation (UFE) alongside TripleLine, INTRAC and Islamic Relief. The conference explored the theme “Use and Usability of Evaluation: Demonstrating and improving the usefulness of evaluation”. The focus on use and usability shifted the discussion from a technical and methodological debate to one bound up in the politics, priorities and values of both the project evaluator and the commissioner.

4 ways sharing information helps organisations

For Bond, the idea of using and sharing evaluations brings together the debates around transparency, accountability and the overall effectiveness of development organisations. Sharing timely and relevant information about organisations’ activities and results in an accessible way helps in the following ways:

  • Stakeholders are more able to hold organisations to account for their activities and results.
  • Organisations have a basis for learning more from each other.
  • Development actors can better coordinate activities with each other (particularly when information is shared in a common format).
  • Greater openness can help build greater public trust in NGOs.

For the past few years, Bond assessments (The Health Check Big Picture, Transparency Review) have signalled that using and sharing evaluations has been one of the lowest-scoring areas for members. The conference is a timely reminder of the need for a collective effort to improve.

Various presenters spoke of the many challenges around the sharing and use of evaluations, including “publication bias” – only publishing positive results – and the consequent silence of INGOs on failure, the quality of evaluation design and writing, declining trust in experts, and results fundamentalism. Michael Cooke discusses the barriers and drivers to publishing and using evaluations in Making Evaluations Work Harder for International Development. The conference also highlighted several examples of how organisations can overcome these barriers and make use of evidence, such as DFID’s Research Uptake Guidance and Evaluation Decision Framework, and a UCL collaboration looking at the use of research evidence in decision-making.

3 considerations for NGOs

  1. Transparency 
    Public trust in NGOs, particularly those working in the international development sector, is at an all-time low. Increasing questions around legitimacy and accountability mean that transparency is crucial. Sharing evaluations and evidence can counter this mistrust by building credibility, closing knowledge gaps and improving thinking within the sector. The drive for greater transparency means NGOs must examine their internal information systems and knowledge management processes to ensure that they raise standards of project reporting.
  2. Learning and Change
    If evaluations are to be used successfully, it is critical that INGOs reflect, learn and adapt. This requires the skills, interest and processes to use evaluation data to inform continuous improvement. NGOs must consider what signals their values and behaviours send about their openness, willingness to learn and attitude to failure.
  3. The politics of using and sharing evaluations 
    If we want to see greater use and usability of evaluations, then we need to go back to basics. Firstly, what is the role of evaluations in our organisations – accountability (often to donors), learning, or a bit of both? The answer will have profound implications for what we choose to evaluate and whether evaluations will result in learning and organisational change. Secondly, who are the key decision-makers in our organisations that evaluations need to influence, and how do we manage the process to ensure ownership and buy-in of the findings? And thirdly, how do we engage and manage evaluators to ensure that they go beyond methods and approaches, appreciate the context and can navigate the internal politics of the organisation?

Evaluations are one of the tools that support the sector to answer these critical questions. Using and sharing evaluations to strengthen our impact will only happen where there is a deliberate effort to change. We must consistently ask ourselves the “so what [does this evaluation mean]?” question and consider whether less is more when it comes to evaluation. The more profound question, though, is whether, after all the rhetoric, organisational learning remains a “nice to have” rather than a mission-critical component.

Look out for a blog next week from Hamayoon Sultan, senior impact evaluation officer at Islamic Relief Worldwide, to hear about their efforts to improve how they use and share evaluations.

For more information, get in touch with André Clarke, effectiveness and learning adviser at Bond. 

Read INTRAC's blog on the UKES Conference.

Bond runs a variety of training courses on monitoring, evaluation and learning throughout the year. The next course is 13-14 September - Planning and practice in monitoring, evaluation and learning, which introduces key MEL systems, processes, methods and tools.

About the author

André Clarke
Bond

André Clarke joined Bond in March 2017 having previously worked for Plan International and Save the Children UK. His role focuses on supporting members to tackle some of the challenges they are facing in monitoring, evaluation, research and learning.