The impact of advocacy: monitoring change or changing monitoring?
We are all seeking to demonstrate the impact of our work. How many lives have we saved? How many communities have we helped transform?
But demonstrating the impact of our advocacy and campaigns work is something that many organisations wrestle with, as the well-attended Bond Conference session on claiming the impact of our advocacy showed.
At Tearfund, this discussion has come to the fore as we roll out a new online monitoring and evaluation (M&E) system. Our advocacy and campaigns teams have been asked to represent their work in the same way as our country programmes. In the new system, we input our work through a standard logframe with indicators, baselines and targets.
When it comes to advocacy and influencing, you often hear people say “it's all too complex and too difficult to measure impact and define indicators, it can't be done, so just trust us”.
At Tearfund we are refusing to accept this. In all our work, we need to know what is working and what is not, so that we can invest in the things that produce results. This is just as true for advocacy and campaigns work as it is for food security and WASH projects. If we don't know, or aren't sure, whether our work is having an impact, how can we continue spending donor or supporters' money on it?
Advocacy is a long game
As we seek to put our advocacy work into logframes, these are some of the questions we have been wrestling with:
- Is the impact we are seeking to demonstrate the number of lives transformed through policy and practice change?
- If so, how do we measure that impact when such changes may occur over many years and with the contributions of many other actors?
- Should we only count those policy changes to which we have made a significant contribution? If so, how do we define a significant contribution towards policy change?
There should be an understanding and an expectation that advocacy is a long game, especially when it comes to major international policy changes. There won't be many international policy and practice changes per year, let alone per quarter, so we need to be looking at our impact over many years.
But we also need interim indicators at impact, outcome and output level. Tearfund’s Advocacy Toolkit, when talking about M&E, says:
“Advocacy…is more subtle and uncertain, less linear, and because it is fundamentally about politics, it depends on the outcomes of fights in which good ideas and sound evidence don’t always prevail.” – Teles and Schmitt
Our advocacy work is multi-faceted and operates at different levels, as we try to achieve complex systems change. This makes our M&E tricky: however carefully we plan what to measure, when to measure it and how to analyse it, the real world is complex and dynamic, and it produces outcomes we didn’t plan for or expect.
As a result of this dynamism in our advocacy work, we spread our bets: we invest in different initiatives, alliances and movements, some of which will fail and some of which will succeed. We need to take the same approach with our M&E.
We need to be trying new approaches, new indicators and new ways to measure them. And when something doesn’t work, we need to spot it quickly, redeploy resources, re-focus and re-write indicators and logframes. This way, both our monitoring, evaluation and learning (MEL) and our advocacy will improve.
What we are trying now
Our advocacy work currently draws on various sources of data to build up a picture of the impact we are having. We write case studies and stories of change to document advocacy successes. This can allow us to track incremental change and is a good way to encourage and motivate supporters and campaigners.
Each year our teams set targets for a number of indicators. These are linked to their theory of change and are a way to measure if our planned outputs and outcomes are being realised. We use these to report progress to our senior management and board, but also to manage our teams and projects.
Depending on what we achieve each quarter, in relation to our indicator targets, we will tweak our activities and plans for the quarter ahead. Indicators for our advocacy work include these examples:
| Level in logframe | Indicator | Means of verification |
| --- | --- | --- |
| Outcome | Number of changes in policy or practice as a result of the project | Counted by the team and backed up by quotes or stories from policy makers |
| Outcome | Number of actions taken by people who have been mobilised by campaign organisers | Counted in Tearfund’s supporter database and by team members |
| Output | Total number of campaign organisers | Team members’ records feed into quarterly reporting |
| Output | Number of times policy makers are met or engaged with | Team members’ records feed into quarterly reporting |
| Output | Number of elite opinion formers reached | Team members’ records feed into quarterly reporting |
This has resulted in robust discussions and reflection on what we have been measuring in the past and how that relates to a logframe structure.
Although somewhat painful, applying this more rigid structure to our advocacy and influencing work has encouraged staff to look at their work differently, ask new questions and explore new ways of measuring impact.
What we have learnt so far...
We have practised and got better at measuring indicators at output and outcome level. Some of our indicators are still somewhat subjective, and this makes it critical to carefully document the assumptions made in their measurement and to carefully define the terms used, e.g. what do we mean by a “policy maker” or a “campaign organiser”? A good indicator should be one that is externally verifiable.
However, at impact level the indicators become difficult to measure, and attribution is even more difficult to track. We need to use proxy indicators or find ways of making estimates for these numbers, backed up with sound assumptions. And whilst we are measuring some indicators of traction, “measurement should not stop with the ‘win’”.
We’d love to hear from other agencies about your challenges but also where you have had successes in your advocacy M&E.
You can join the discussion on how to measure the impact of advocacy by joining our MEL group.