By Osvaldo Néstor Feinstein
Participatory Evaluation (PE) gives voice to stakeholder perspectives on the processes and results of policies, programmes or projects, thereby limiting or avoiding technocratic bias. It also promotes ownership of the evaluative process and its results, which makes the evaluation more widely accepted. These are two of the main arguments in support of PE.
On the other hand, PE has been criticised on the grounds that it is not a rigorous approach, because its qualitative methods capture “impressions” and anecdotes rather than providing rigorous quantitative procedures. Randomised controlled trials (RCTs) are sometimes held up as the example of rigour in evaluation.
But using RCTs does not guarantee that an evaluation is rigorous. Nor do participatory evaluations have to be qualitative and lack rigour. RCTs are useful for obtaining valid conclusions in very specific contexts and for certain types of interventions that are compatible with this kind of approach.
However, generalising these results to other contexts is problematic. Often this generalisation, or extrapolation, of results is carried out flippantly, with no rigour whatsoever. In these situations, RCTs generate only the illusion of rigour: they concentrate their efforts on internal validity, presume that external validity follows, and thereby encourage the non-rigorous generalisation of results that really correspond only to a specific context.
Returning to PE, here is a real example of how a participatory evaluation can be made more rigorous and turned into a valuable instrument for both accountability and learning.
A few years ago, I had the privilege of evaluating a rural development programme in India. In this programme, an NGO played an important role in training groups of women and awarding them credits.
In the first few meetings with the NGO, its director, a charismatic leader known both in India and internationally, argued that the project’s main problem was the government’s insistence on working with goals: this pressure meant the groups formed were not solid or consolidated enough, and so they failed. As a consequence, he recommended doing away with the goals.
But this recommendation was not viable, because working with goals is how the public sector operates in India. Furthermore, as talks with the NGO continued, the evaluation team observed that the organisation had identified the conditions a group had to meet to count as “consolidated”.
The evaluation team could therefore construct an indicator of “consolidated groups” and analyse the relationship between the programme’s expansion and its effect on group consolidation. This analysis provided evidence of a trade-off between progress measured by the number of groups formed and the consolidation of those groups, measured by the number of consolidated groups. It then became possible to set goals in terms of “consolidated groups” and to open a dialogue with the government, using the evaluation results, which showed the negative consequences of managing the programme exclusively by goals for the number of groups formed.
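The logic of turning a set of qualifying conditions into a countable indicator can be sketched in code. The consolidation criteria below (regular meetings, collective savings, on-time repayment) are hypothetical stand-ins for the conditions the NGO actually identified, chosen only to illustrate the mechanics:

```python
from dataclasses import dataclass

@dataclass
class Group:
    # Hypothetical consolidation criteria, for illustration only
    meets_regularly: bool
    saves_collectively: bool
    repays_on_time: bool

def is_consolidated(g: Group) -> bool:
    """A group counts as consolidated only if it meets every criterion."""
    return g.meets_regularly and g.saves_collectively and g.repays_on_time

def consolidation_rate(groups: list[Group]) -> float:
    """Share of the groups formed that qualify as consolidated.

    Comparing this rate across periods with different rates of group
    formation is one simple way to make the trade-off between expansion
    and consolidation visible.
    """
    if not groups:
        return 0.0
    return sum(is_consolidated(g) for g in groups) / len(groups)
```

The point is not the specific criteria but the move itself: once each condition is recorded as yes/no data, “consolidation” stops being an impression and becomes a goal one can set and track.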
In this way, whilst at the beginning of the participatory evaluation group consolidation appeared to be an exclusively qualitative matter, a “consolidated groups” indicator, based on the information generated through the evaluative process, added rigour to the approach. This not only made the participatory evaluation more rigorous but also improved future activity planning. It is important to distinguish between the “unquantifiable” and the merely “unquantified”. The line between them can be grey, as drawing it depends on a deep understanding of the specific reality, on creativity, and on the determination to evaluate in a participatory manner as rigorously as possible.
Osvaldo Feinstein is a professor in the Master’s programme in Evaluation of Programs and Public Policies at the Complutense University of Madrid. He is a member of the International Evaluation Advisory Panel of the UNDP Independent Evaluation Office and a senior advisor to the independent evaluation offices of IFAD and the African Development Bank. He is also a member of the Editorial Board of the journal Evaluation and Program Planning and editor of the World Bank Series on Evaluation and Development. He has extensive international experience as an evaluation consultant and evaluation unit manager.