4 recommendations for better evaluations
5 July 2017
Evaluations are a vital tool to promote effective programming but the way that they are carried out requires change. There are many challenges to be overcome but as Albert Einstein said: “No problem can be solved from the same level of consciousness that created it.”
Along with Bond’s André Clarke (whose excellent blog about fostering the conditions to share and use evaluations you should also read), I recently spoke at the UK Evaluation Society conference on Islamic Relief Worldwide’s work in the use and usability of evaluations. This is a summary of what we shared, and also learned, at the conference about how to better utilise evaluations.
Problems, challenges and weaknesses
Evaluators currently face a lack of trust in, and support for, evaluations. Evaluations are often planned only after a project has been designed, with their purpose not always fully thought through. They are frequently done merely to tick donor boxes, and are assigned limited time and resources.
Publication bias is also an issue: sometimes only positive results are shared, to the detriment of the areas that could have been improved. NGOs are often afraid of admitting failure, particularly when it opens them up to donor scrutiny. Rather than being acknowledged so that organisations can improve, weaknesses are ignored, meaning learning is not taken forward and the evaluation becomes worthless. Evaluation methodologies have in many ways remained static for a long time, limiting their use and scope. Innovation often arises from failure, and it is by learning to accept failure that evaluation practice can move forward.
Evaluators can also become territorial about their own work and forget that their role is part of a wider effort. These silos mean that each part of an organisation often does not know what the others are doing, limiting joint learning, which over time exacerbates the lack of trust.
Solutions and recommendations
1. Break down the silos and share evaluations
Islamic Relief Worldwide piloted a new approach in which evaluators from the Pakistan and Bangladesh offices were asked to conduct centrally-commissioned evaluations outside their own countries, rather than relying solely on the evaluation function at headquarters. Colleagues in the field got opportunities to visit and learn from operations in other countries, develop their own skills, take learning back to their own countries and, ultimately, foster a culture where learning is not only captured but also used.
2. Improve the quality of project reports
Evaluators are experts in the capture, analysis and reporting of data, and steps must be taken to use this data to persuade cynical policymakers that an intervention was effective. Using data to back up findings rather than simply making claims; making professional judgements; quoting the voices of ordinary people within reports; and using photos, graphs, video clips or dashboards would all help to convey the authenticity of the local voice within evaluation reports.
By improving the quality of evaluation reports, people are more likely to trust them, thereby fostering a culture of using and learning from them.
3. Be innovative in your use of technology
Islamic Relief Worldwide is piloting and using technological solutions like EpiCollect and KoBo Toolbox to improve data collection, analysis, and reporting for evaluations.
Evaluations need increased scientific rigour, but we must also accept how limited scientifically rigorous studies carried out by NGOs can be without the appropriate resources. Randomised controlled trials (RCTs) may be the gold standard, but NGOs' capacity to do them properly is often so limited that the end result can be neither representative nor valid.
4. Engage with cutting edge academic research
There is a lot of good work being carried out within academia on evaluations and research techniques which Islamic Relief has started to engage with. A team of academics at University College London (UCL) recently carried out research on the wisdom and madness of crowds, where they demonstrated how the aggregation of many independent estimates can outperform the most accurate individual judgment (Joaquin Navajas, 2017 (PDF)).
While this research was intended to inform and improve political forecasting, it could also be applied to the way that focus group discussions are run, and Islamic Relief is currently exploring ways to include such techniques in our evaluation methodologies.
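As a minimal illustration of the aggregation effect the UCL study describes, the sketch below (a hypothetical Python example with invented numbers, not drawn from the study itself) shows how the average of many independent, noisy estimates tends to land closer to the true value than most individual estimates do:

```python
# Toy "wisdom of crowds" demo: averaging many independent estimates
# usually beats the typical individual estimate.
import random
import statistics

random.seed(42)  # reproducible demo

true_value = 250     # e.g. the true number of households served (invented)
n_estimators = 100   # independent focus-group participants
noise_std = 50       # spread of individual guesses

# Each participant makes an independent, unbiased but noisy estimate.
estimates = [random.gauss(true_value, noise_std) for _ in range(n_estimators)]

aggregate = statistics.mean(estimates)
aggregate_error = abs(aggregate - true_value)
median_individual_error = statistics.median(
    abs(e - true_value) for e in estimates
)

print(f"Aggregate error:         {aggregate_error:.1f}")
print(f"Median individual error: {median_individual_error:.1f}")
```

With a hundred independent estimates, the error of the group average is typically several times smaller than the error of the median participant. The key assumption, flagged in the research, is independence: if participants influence each other (as they can in a focus group), the benefit shrinks.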
There is a great deal that we can learn from academic fields and we should be open to these opportunities. Yes, we will be challenged and we may even find that the way we are currently operating is not as effective as it could be, but these lessons are ultimately how we will grow and improve.
Bond runs a variety of training courses on monitoring, evaluation and learning throughout the year. The next course is 13-14 September - Planning and practice in monitoring, evaluation and learning, which introduces key MEL systems, processes, methods and tools.