The programme Servicio País, run by the Poverty Alleviation Foundation, started working on Participatory Evaluation (PE) several years ago in a somewhat desperate attempt to conduct evaluations that were more coherent with the programme’s internal logic. The programme promotes a collective, promotional model which puts local organisations, rather than individuals, at the heart of development.
Until that point, the evaluation model included only quantitative instruments, which treated people as objects of analysis and drew the evaluation criteria from a logical framework. This system was used to calculate achievement results. When set against the programme’s aims, however, it was insufficient to capture “other” results which could only be observed through collective evaluations that gave a leading role to local stakeholders and their expectations.
As Servicio País carries out social action activities in all regions of Chile, it needed to conduct PE in diverse contexts separated by geography and facilitated by teams based in different regions. We therefore needed to design a general methodology flexible enough to be adapted to local contexts, and to that end we formed a Central Evaluation Team. Our aim was to transmit this PE method to the local teams so they could develop it in their areas with the central team’s support, remotely or, in some cases, in person.
Over time, with the benefit of hindsight, it is clear that one key difficulty affected both the design and the execution of PE. Here I am referring to a general resistance on the part of the stakeholders to “go with” the participatory evaluation process, because we all had preconceived ideas about what evaluation is and how it is carried out. I noticed that these preconceived ideas amounted to understanding evaluation as something intimately connected to satisfaction ratings, individual questionnaires and consultations defined by an external evaluator. They prevented us all from keeping an open mind about different ways to evaluate.
This could seem irrelevant and easy to remedy. In reality, however, this evaluative “inertia” has been a constant challenge. Overcoming it means changing deeply ingrained paradigms, instilled by our training and by traditional evaluative sources. In our experience, this inertia was expressed by various stakeholders. The professionals who made up the evaluator teams fell into the temptation of designing evaluation matrices that fulfilled externally defined evaluation criteria.
Furthermore, we constantly reproduced traditional evaluative discourse, referring to ourselves as “surveyors” or “moderators” instead of “facilitators”. We tended to design evaluative spaces that prioritised a “consultation and response” logic, placing the evaluator in a higher hierarchical position, which evidently did not sit well with participatory logic. Even as we tried to move beyond these preconceived ideas about PE, we continually found ourselves acting out the very traditional inertia we were trying to leave behind.
These preconceived ideas were also observed among other stakeholders, such as members of the community organisations that participated in the evaluative workshops. For example, in the first workshops we saw that they found it hard to grasp the idea of collective evaluation. Having been invited to an evaluation space, they came prepared to answer questions. At times, especially at the beginning, they seemed somewhat taken aback by the playful, participatory nature of the workshop. Something similar happened with the municipal council members who attended evaluative sessions.
Last but not least, as a civil society organisation that relies on public financing, we have wrestled continuously with government partners who demand that results reporting demonstrate achievement in a positive light and who are not prepared to accept the logic of PE as a valid evaluation method.
Taking all this into account, advancing towards better ways of carrying out PE requires a change of perspective: installing newer, more open and flexible evaluation approaches which operate from a “bottom-up” logic at all levels, from evaluators and local stakeholder participants to government agency workers and policy and programme designers. This requires continual effort to build capacity, prepare our evaluator-facilitator teams and promote evaluation in government institutions, investing in a process of “conversion” from preconceived ideas about evaluation to more integral forms of evaluation.
Carmen Luz Sánchez | Coordinator, Programme and Evaluation Management. Servicio País Programme, Fundación para la Superación de la Pobreza (Chile)