Many valuable contributions have been made in the ongoing debate around Participatory Evaluation in times of COVID 19, and today we would like to share another contribution on meaningful lessons ‘in’ and ‘from’ Participatory Evaluation.
This time, we would like to take an example from Costa Rica and listen to Karla Salazar Sánchez, a social researcher and participatory evaluation facilitator. Remember, the invitation to participate remains open and we are interested in hearing from voices on the ground; those who have practice-based knowledge. All contributions are welcome. The many journeys made and times of stumbling and of getting back up; the great discoveries and joys can be made accessible to everyone if you share them with a brief post in this community.
The Valle de la Estrella Case Study: a Participatory Evaluation from Beginning to End
The Costa Rican Caribbean is known for its lush green landscapes, beautiful beaches and friendly people. Unfortunately, it also stands out for its high incidence rates for various kinds of cancer. This prompted the region’s Health Boards (local bodies that monitor the quality of health services) to request an evaluation of cancer care and prevention services, taking advantage of an open tender organised at the time by the Costa Rican Ministry of Planning (MIDEPLAN) and the German cooperation programme FOCEVAL to support capacity strengthening in evaluation.
And so, the evaluation was carried out from July 2016 to February 2017 in Valle de la Estrella, a rural Caribbean town. The evaluation began with incredibly high aspirations for citizen participation: rather than merely consulting stakeholders or asking for their opinions, it sought to provide the board members with evaluation tools so they could evaluate the cancer services themselves. Their role was that of protagonists; they were the evaluators.
Seven people from different areas of the Caribbean joined the evaluation group. They were chosen because they stood out for their contributions to community participation and their strong commitment to the communities they worked with in a voluntary capacity. A technical team supported the process, made up of a main evaluator, a local facilitator, and representatives of FOCEVAL, MIDEPLAN and the Ombudsman. The project was thus designed and executed by the evaluation team, while the technical team contributed methodological consultancy and close support during each of the activities carried out in the field.
The experience taught us many lessons. This was one of the first times that the evaluation of a public service had been delegated to a group of community leaders who had not previously been involved in formal evaluations.
I cannot emphasise the word “formal” enough: their knowledge of the context and strong connection with community activities mean they are constantly assessing what has gone well and what could be improved in various services. However, they had never followed a rigorous methodology that would enable them to systematise results and take them to the authorities to support decision-making. In this sense, the evaluation was innovative because it valued contextual knowledge and developed technical skills that could be used in day-to-day work.
Without a doubt, this method requires close technical support, and this is where facilitation at the local level is key to the evaluation. The facilitation body is the bridge between technical know-how and contextual knowledge. As its name suggests, it facilitates the process by translating formal knowledge into local knowledge and vice versa. In other words, it is not centred exclusively on rigorous procedures, but also promotes the use of local experience to generate an evaluation in line with the context’s needs.
However, the methodological side could not be neglected, because a report with valid data for decision-making was expected. One of the most challenging aspects for the group was putting themselves in the shoes of the evaluator. Their role was that of evaluators, and so they participated from identifying the object of evaluation through to defining the recommendations, including designing the evaluation questions, collecting data, and all the other stages found in a traditional evaluation. At first, however, the role they naturally took on was closer to that of key informants, as the team had never been through a similar process before and did not initially feel they had the skills to do so.
To address this, the local facilitation had to make a continual effort to accompany them at every stage so that they would take on the correct role. How? By giving them the technical tools and reminding them, day by day, what their role was. To do this, they held weekly sessions over several months, in which they studied basic concepts and defined key elements such as the evaluation questions and the data collection instruments. The wider technical team also accompanied this process, holding workshops with everyone involved to create and fine-tune the evaluation design.
This difficulty was also reflected in the conclusions and recommendations stage of the evaluation. Here, the high levels of contextual knowledge versus the basic knowledge of evaluation meant that the dividing line between the findings and the team’s perceptions was vague.
The data collection stage, with interviews and focus groups to elaborate conclusions, was complicated because once again the team lost sight of its role and acted as informants, seeking to reflect their own thinking in the results rather than what stood out in the data. This led to several discussion sessions aimed at understanding the methodological process, so that they would not feel their knowledge was undervalued but rather understand that evaluation logic requires evaluators to follow a more abstract and rigorous route that goes beyond individual opinions.
Furthermore, it is important to keep in mind that those facilitating the process need to support the evaluation team in the most empathetic and approachable way. We must remember that participatory evaluation does not seek simply to produce a results report, but also to equip a group of people with a stronger capacity to evaluate, and therefore more tools at their disposal when working in citizen participation contexts. In other words, it is necessary to break down the barrier between academic and popular knowledge, because in these processes both are essential for reaching the desired results.
Taking this stance into account, those who facilitate the process must be clear that this is one of their aims. Sometimes time pressures, budgets and rigorous methods make the process difficult for the facilitator. Going back to the idea of being a ‘bridge’, the local facilitation body should respond not only to the evaluation team but also to the technical team, the different institutions involved and the financing organisation. This generates additional pressure, which should again be addressed through communication, transparently reporting how the evaluation is progressing against the plan drawn up at the outset. Furthermore, the abilities and limitations of the evaluation team should be recognised so that additional support can be requested whenever required.
To conclude, the process demonstrates that “YES, IT WORKS” in capital letters, because despite all the doubts and fears, an evaluation was carried out and the results were enlightening in many ways. Local capacities were strengthened through dissemination, not only of the results but also of the participatory process itself, which was shared in various parts of the country to motivate other community organisations. It also served to strengthen and empower the health boards, as well as the other institutions involved, leaving behind many lessons learnt. One of these lessons was that participatory evaluation is possible as long as there is a greater investment of resources in terms of time and money.
And that is where the objectives come in: Are you seeking a quick external evaluation by experts to get fast results? Or do you want to use the evaluation to develop local capacity in the search for greater transparency and improvements in public policies? Are you looking to empower stakeholders who have traditionally been left out of decision-making processes and to strengthen citizen participation, supporting decisions with empirical contextual evidence? If you seek the latter, one of the best routes to follow is participatory evaluation, from the very beginning right to the very end.
Karla Salazar Sánchez | Social researcher and facilitator of participatory evaluation processes