The benefits of a participatory approach in a complex context: The interim evaluation of the CRIA Program (Guatemala)

by Joaquín Navas, Claudia Calderón and Ricardo Ramírez.

Context

CRIA stands for Consorcios Regionales de Investigación Agropecuaria (Regional Consortia for Agricultural and Livestock Research). The CRIA program began in 2016 with the goal of improving agricultural and livestock research capacity across inter-organizational consortia in Guatemala. The program has been funded by the United States Department of Agriculture (USDA) and implemented by the Ministry of Agriculture, Livestock and Food (MAGA) and the Inter-American Institute for Cooperation on Agriculture (IICA).

Two Canadian-based independent consultants – Joaquín Navas and Ricardo Ramírez – completed the mid-term evaluation of CRIA between November 2018 and February 2019. They drew on their experience with the DECI Project [1] in Latin America, which builds on Michael Quinn Patton’s Utilization-Focused Evaluation (U-FE) (2008) [2]. This approach calls for engaging a team of primary intended users who take on the design of the evaluation and commit to making use of its findings. The participatory aspect of U-FE lies in having the primary users articulate the purpose and intended uses of the evaluation, which in turn enables them to take ownership of its design.

In the case of CRIA, the group of primary users included representatives of IICA, USDA (the funder), MAGA, the Institute of Agricultural Science and Technology (ICTA), and various universities associated with the program. The evaluation uses agreed upon for the mid-term evaluation of the program included:

  • Program improvement
  • Updating the program strategy
  • Accountability (financial and results based)

These three intended uses helped organize and prioritize the key evaluation questions.

Challenges and innovations

A major challenge in the mid-term evaluation of CRIA was that its stakeholders had neither grasped nor shared an understanding of the level of complexity involved in the program. Without this shared awareness, program delivery had led to uncertainty and discouragement. A further challenge was the lack of a baseline against which to compare outcomes. CRIA was supported by international development funding, where a conventional evaluation would have been the norm. A significant innovation, therefore, was the introduction of a participatory approach that demonstrated rigor while documenting program achievements.

The direct engagement of the primary evaluation users allowed them to appreciate the source of the evidence that was collected. It also gave them the opportunity to discuss the findings in a space for reflection, where they came to see the program from a new perspective.

A third, particularly enriching innovation was a workshop to facilitate the use of the findings and to involve the primary users in designing Theories of Change for the main program components. This participatory design contrasted with cases where Theories of Change are developed by external consultants rather than by the program stakeholders, who are the ones able to follow up on the recommendations.

Evaluation outcomes and utilization

The main outcome of the mid-term evaluation of CRIA was that it encouraged the people in charge of implementing the Program, as well as some of the other stakeholders involved. Another important result was the identification and documentation of unexpected outcomes that were absent from the original logical framework. A third result worth mentioning was the reallocation of resources and activities based on the findings.

A central premise of the U-FE approach is that the success of an evaluation is measured by the extent to which its findings are used. This prompts the question: Was the mid-term evaluation of the CRIA Program useful? The answer is that the evidence provided by the evaluation contributed to the following:

  1. The funder granted two extensions, each of two years in duration, so that instead of ending in 2020, the Program will continue until June 2024 in order to achieve 100% of the expected objectives, increase the number of participants, and expand the dissemination of validated technology. Moreover, the extension came with an additional US$14M, surpassing the original US$12M allocated during the initial phase.
  2. There was a relaxing of tensions between the Program decision-makers and the representatives of the different institutions, as prior to the evaluation they had contrasting opinions about outcomes, achievements and challenges. The improved harmony in turn increased the trust among the Technical Committee, decision-makers, and the Regional Research Consortia.
  3. The Program decision-makers were able to update the overall strategy. The workshop held to facilitate the use of the findings allowed the users to flag and eliminate several activities and outputs from the logical framework that were no longer relevant, and would have taken up time and resources without contributing to the overall goal.
  4. The Program decision-makers became aware that the major challenge facing the Program was its administration. Following the recommendations, they simplified the procedures to allow for more efficient disbursements. In addition, some gaps were addressed, including the question of sustainability.
  5. The Program became more adaptive in its response to the COVID-19 pandemic. It broadened the use of ICTs to reach program participants and share those technologies that had been validated through research.

Conclusions

There are an increasing number of programs that are complex, with some outcomes that are difficult to predict. This happens when it is necessary to coordinate work among multiple organizations, each with their unique commitments and governing systems. The nature of the challenges faced by the agricultural sector in Central America calls for systemic approaches. A property of complex systems is emergent change, which is difficult to accommodate with conventional planning tools.

The CRIA Program exhibited such conditions, which require approaches to monitoring, evaluation and learning that accommodate that reality. The participation by primary evaluation users, as owners of the process, allowed them to witness the nature of the Program collectively. This then allowed the evaluation to be the means of updating its strategy and streamlining its implementation.


[1] DECI is a project funded by the International Development Research Centre (IDRC) that provides capacity building in evaluation and communication to support research projects. The projects develop evaluation and communication strategies that allow them to gather data systematically to track their performance, and improve impact at the policy level.

[2] Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Sage.