There is currently an increasing focus on how NGOs measure their results and demonstrate impact, in order to ensure that we are being as effective as possible. This has many benefits, but it runs into particular difficulties when NGOs are trying to bring about change by influencing policy structures.
Ironically, this area of work can have some of the biggest impacts – in Bolivia we have seen communities’ incomes rise by 250%, partly through work to persuade local authorities to immunise domestic animals. However, the impacts an advocacy project will achieve within its lifetime often cannot be definitively defined in a logframe in advance. Changes are often unpredictable, take a long time and are not entirely within the control of the organisations pushing for them.
How to increase the measurability of advocacy work is an issue that CAFOD has been looking at for some time. Recognising that change through policy influence does not always happen in a simple or predictable way, our method of monitoring the effectiveness of our work has to reflect that reality.
Glouberman and Zimmerman provide us with a helpful model for thinking about this (cited in ‘Purposeful Program Theory’ by Funnell and Rogers). They give us three examples of processes: baking a cake, launching a rocket and raising a child. When baking a cake you can bring together the ingredients, follow the instructions and know you will end up with a cake. When launching a rocket the process is more complex, but high levels of expertise will bring results. However, when raising a child, following a prescribed formula won’t work – you need to be able to adapt your approach. Complexity is one of the new buzzwords of development (the Aid on the Edge blog is a good place to start if you want to know more).
This thinking should influence our monitoring. For the cake you can check what ingredients went in and what came out; measuring child-rearing in the same way is not going to work. The contexts in which CAFOD’s partners work are complex environments, influenced by much more than our partners’ work alone. This is why asking them to monitor their advocacy work as if they were baking a cake does not make sense.
A few years ago I was seconded by CAFOD to work with an advocacy network in an east African country. A consortium of bilateral donors had funded the network, based on a logframe which highlighted outputs around civil society consultation in particular policy processes of the government and donors. However, events overtook us: there were large-scale protests after disputed elections. Donor disapproval of the government’s handling of these protests meant that the established policy environment changed overnight. Donors rushed to create new mechanisms operating at sub-national levels. The network responded with flexibility and political nous. It engaged in the development of the new formats and was elected onto the board of one element to represent civil society. The network reacted as we hope all good advocates would – with flexibility, creativity and one eye on the wider political context.
However, what really surprised me was that, when it came time to report to the donors, the organisation did not know how to bring all this work and initiative into the reporting process. Looking back to the original project design, they saw the project as a failure because it had not met the original objectives. They had no way to reflect on their work and learn from it. Monitoring, in their eyes, was about saying whether you had followed the recipe rather than about learning and reflecting, because this is what donors had asked of them. The critical faculties they showed throughout the rest of the process deserted them when it came to monitoring, because of their perception of what the donors wanted and what monitoring was for.
If donors want to encourage learning behaviour they need to get away from a simplistic measuring of results to something that challenges people to integrate results, reflect and learn from their practice. This will mean, in practice, getting away from the logframe.
A few years later, when I was working in CAFOD headquarters again, we got the chance to pick up on some of these reflections as we redesigned the organisation’s monitoring work for an institutional block grant. What we came up with was the idea of a scale against which organisations could measure themselves – bringing their own evidence to demonstrate the level they have reached.
This resulted in the development of a monitoring tool, and we have just delivered a paper on the process behind it for the recent INTRAC conference on monitoring and evaluation. The paper is still in development, so any thoughts or questions would be welcome.
At that conference we found many people struggling with the challenge of monitoring advocacy. Challenging our assumptions about how we expect change to happen, and how we can monitor it, can help us address this. Theories of Change came up as an important concept again and again, with many agencies beginning to work with them more.
The discussions will continue, at least partly through the BOND Effectiveness Programme. We hope they will lead us to a more thoughtful examination of what it takes to succeed, rather than just a focus on the end product.