Bond tips: evaluation terms of reference


Michael O'Donnell

Thursday, October 1, 2015

At Bond, we have a keen interest in helping organisations improve the quality of the evidence from their work to enable better learning about what works and what doesn’t. Evaluations are a key learning product, but all too often we hear stories of disappointment about the quality of evaluation reports.

Good terms of reference are a vital resource for a quality evaluation. Bond recently reviewed 23 examples of evaluation terms of reference issued by NGOs, either publicly on the Pelican Initiative between May and July 2015, or sent directly to us by members of Bond’s Monitoring, Evaluation and Learning Group. We saw some great stuff, and some common problems that would limit the likelihood of getting a good quality final product from the consultant. Much of what we would recommend is already in our Guide to Getting the Best Out of a Consultancy, but some is specific to evaluation terms of reference.

Things to consider

Before you get to the point of issuing terms of reference for an evaluation, there are some important things you need to have considered:

Why do you need an evaluation at all and who is it for?

It may be a donor requirement, but primarily evaluations should be for your own and/or others’ learning purposes: is this intervention innovative or different in ways that will add to the body of learning already out there, or is it a well-proven approach that is simply being replicated? DFID’s Evaluation Strategy (p7-8) provides a useful example of rationales for evaluating or not.

Consider your evaluation design at the proposal development stage

The "evaluability" of an intervention depends on its design. Some evaluation designs (e.g. RCTs) need to be incorporated into your intervention right from the start of an intervention, and the monitoring data that will be gathered over the intervention lifetime has a great bearing on the quality of your final evaluation. Linked to this, consider advertising for your evaluator at the start of the intervention, and work with them at key points through the project, not just towards the end. Only one out of 23 terms of reference we reviewed did this.

Secure an appropriate evaluation budget

There are evaluations which cost tens or hundreds of thousands of pounds, and there are "bog standard" NGO project evaluations where £3k-£10k is put into the budget almost as an afterthought. It is very hard to get a really good evaluation out of a tiny budget: if it’s going to be worth learning about what was achieved, lobby the donor to put enough budget towards it, or add to the budget using other funding sources. Push back on donor requirements that make no sense. For example, small grants with tight limits on the percentage of the budget that can be spent on monitoring and evaluation are not conducive to generating evaluations with "robust evidence".

Drafting evaluation terms of reference

1. Advertise well in advance

This will give you a chance of getting a good pool of applicants. Good consultants get booked up in advance, and bids for evaluations usually need to be tailored and thus take time to prepare. So (a) give at least a fortnight between advertising a consultancy and the closing date for applications; and (b) don’t expect a consultant to start until at least six weeks after that closing date (more if you’re not going to be fast at making your selection). Indicate flexibility in starting dates if possible.

2. Give a sense of the budget available

Given the range of evaluation budgets and costs, you need to give a consultant a sense of your expectations so that they can give realistic proposals. This could be by quoting an indicative range for your budget, while still clearly indicating that value-for-money will be taken into consideration in appraising bids. Quoting a maximum number of days that the work is expected to take is a common proxy, but with consultancy fees varying so much, it may still result in bids that you can’t afford.

3. Avoid specifying preferred tools

Far too many terms of reference don’t say anything about what an evaluator would recognise as a design, method or approach to evaluation, but do specify the tools they should use e.g. desk review, focus group discussions, key informant interviews and surveys. That’s like telling a carpenter that you want them to do a job using a hammer, saw and nails. At best it’s unnecessary; at worst it either tells a consultant that you don’t really know about evaluation or that you want to micro-manage the assignment.

4. Focus on telling the evaluator what they need to know to determine the most appropriate method

What are your key evaluation questions? What are the key characteristics of the intervention being evaluated? Bond is currently working with Barbara Befani on developing a tool for choosing appropriate evaluation methods, based on the idea of a "design triangle" in Stern et al. 2012 and Stern 2015. Things to consider include:

  • Is there a theory of change already developed? If so, make it available and ask prospective consultants to comment on it.
  • Is the intervention carried out in relatively homogenous or diverse locations?
  • Is there a standard set of activities in each location, or are they diverse and tailored to each location?
  • Have interventions followed a set plan and been consistent, or have they been adapted and changed over the project lifetime?
  • What are the key approaches used in the intervention, e.g. behaviour change, advocacy, capacity-building, service delivery?
  • How many people/actors does the intervention target in each location for each activity?
  • Is there much disagreement among stakeholders about the relevance of the intervention’s goals, and thus about what will constitute success?

5. Don’t make your evaluation questions a shopping list

Focus on the essentials for your learning and accountability; put everything else as "desirable", and be prepared to discuss what is realistic and feasible with your evaluator.

6. Provide access to project documentation

Give prospective evaluators access to project documentation. The project proposal, theory of change, and any existing monitoring data or evidence are all particularly helpful.

7. Make informed use of guidance and principles

Asking evaluators to cover the OECD DAC evaluation criteria and to score highly on the Bond Evidence Principles is very demanding, and can be impossible to achieve on the most limited evaluation budgets.

8. Consider a two-step application process for large or complex evaluations

In the first stage, look at applicants’ track record and experience, and ask for a very brief outline of the approach they think would be appropriate. Then filter down to a shortlist who can prepare more detailed proposals. Bids are a big investment for consultants.

9. Ask for references and examples of work

Check references and look at examples of work from prospective consultants, in addition to an outline of the proposed evaluation approach and the budget.

10. Get a second pair of eyes

Ask someone else not familiar with the work to review your terms of reference before publishing them. They can help you strip out jargon, acronyms, internal-facing language, etc., can sense-check whether you’ve included necessary information on the project, and potentially whether you have unrealistic expectations of what can be delivered. Try not to ask someone who will just insert more "interesting stuff" to look at in the evaluation!

11. Give feedback to rejected applicants

It’s courteous, the feedback is useful, and it prevents them being too discouraged to apply again next time.

Contribute

We want this resource to be as useful as possible, so let us know any other top tips you may have, or if there's anything here you disagree with.
