Monitoring and evaluating the impact and reach of advocacy and governance programmes is difficult at the best of times. However, trying to disaggregate the data that is available by sex can be even more challenging.
A growing number of INGOs and donors are asking their local partners to provide sex-disaggregated data. And seeing as it’s International Women’s Day, now seems as good a time as any to examine the value of breaking data down in this way. What does it actually tell us? Or, more to the point, what are we hoping to find out by doing this?
Personally, I think the value of collecting sex-disaggregated data about a project is that it gives us a way to check that there are no barriers to people participating in, or benefiting from, the project on the basis of their gender. That shouldn’t require equal participation of men and women in every project, of course; that depends on the project and what it is trying to achieve.
In the context of governance programmes this is particularly relevant because women and girls are a commonly marginalised social group, often excluded from the decision-making processes that affect their lives. On this basis, I think there are two main questions we would generally want to answer about a governance/advocacy programme:
- How are women and men participating in, and included in, the programme?
- How are women and men affected differently by the programme (positively and negatively)?
So, how difficult is this?
Well, feedback from programme staff at CAFOD is that the primary challenge is developing appropriate indicators to answer those questions (and then identifying what data to collect to support those indicators). Through the DFID PPA learning group on inclusion, we’ve been hearing that this is a common problem across the sector, and that there is a need to better support programme teams and partners to analyse and interpret the data they collect.
I think most people would agree that, while counting is important, looking exclusively at numbers doesn’t really tell us anything. How many women came to the meeting? How many women are represented on the local council? Without further analysis (and further data), these figures say very little.
As well as counting the number of women who came to the meeting, we need to ask whether anything has changed as a result of that increased female participation and whether partners are, for example, creating structures for increased women’s participation, or analysing how government policies might impact differently on men and women. Collecting data against a range of indicators and taking the time to analyse the results will give us a much fuller picture of the actual situation on the ground.
That said, as with any monitoring and evaluation process, I’ve found that you can only blame the indicator so much. M&E is what you make of it. You can make it a tick-box exercise if you like, or you can take the time to interpret what your findings tell you and invest in learning from (and acting on!) those findings. So, perhaps the question to consider is how we can integrate this analysis more fully into programme learning processes and support staff/partners to understand and use the data effectively.
However, there are also practical challenges to consider here. Even with the best will in the world to learn and develop, we have to be realistic about the limits to the quantity and quality of data we can collect or ask our partners to collect.
In one of our programmes in Brazil, partners have found it difficult to disaggregate data about the indirect beneficiaries of their advocacy work for better housing and access to basic services. While they can fairly accurately count how many men and women participate in their community-based work and campaigns directly, it’s much more difficult to establish the number of men and women, girls and boys within each family who benefit from improved housing/services as a result.
The main challenge is that they do not define these beneficiaries as individuals; they monitor at the household level. Although they can fairly accurately assess how many of those households are headed by women or men, there is very limited public data available about the gender balance in the rest of the household. Furthermore, given the number of households that fall within the scope of the programme, partners have neither the time nor the resources to conduct this level of research themselves. (Not to mention that household composition isn’t static, with families often taking in other relatives in need.)
In the end, the partners based their impact assessments on qualitative methods and sample surveys of the target households. The time and resources needed to collect data on the sex composition of individual households weren’t commensurate with the value of having that information. I think this is a good example of developing a data collection process that is appropriate to the programme context, and of striking a balance between what we need to know and what we might like to know.
Over the last few months CAFOD has been making a concerted effort to try to pull together lessons across all programme areas about what works and what doesn’t when it comes to disaggregating data by sex. It’s a learning process for us and we’d be interested in hearing from other agencies about their experiences in this area too.
Ultimately, if we’re trying to achieve better quality programmes that lead to meaningful change in the lives of women and men, we need to get better at asking the right questions and making sure that we’re asking those questions for the right reasons.