EVALUACIÓN PARTICIPATIVA

Inclusive Rigor as a Transformative Compass: An Experience from Colombia

The challenge of evaluating what is already participatory

by Ángela María Báez-Silva Arias

When we set out to evaluate Co-Inspira, a peacebuilding initiative in Colombia, we knew we were facing an unconventional challenge. This was not about assessing a traditional intervention, but something far more complex: a Systemic Action Research (SAR) process that was, in itself, already a collective exercise in knowledge generation.

How do we evaluate a process that is, by nature, an exercise in Participatory Research? How do we avoid overburdening participants with an additional layer of evaluation work when they are already leading their own knowledge-generation process? In the chapter “Lessons from practicing Inclusive Rigor in the evaluation of a peace initiative in Colombia”, published in the book “Evaluation, Democracy and Transformation”, we share how we approached this methodological challenge through an evaluative learning process guided by the Inclusive Rigor framework.

Colombia has a long history of armed conflict, but also a rich tradition of citizen participation that intensified after the 2016 Peace Agreement. In this context, Co-Inspira —the process we evaluated— worked with three Territorial Councils for Peace, Reconciliation and Coexistence, comparing two methodological approaches: a traditional capacity-strengthening model and an innovative one, SAR. The latter invited co-researchers to draw on their own life stories, and those of others in their communities, to develop a systemic understanding of peacebuilding dynamics in their territories.

Inclusive Rigor in practice

Throughout the chapter, we show how we applied the principles of the Inclusive Rigor Framework, collaboratively developed by a group of evaluators, practitioners and researchers working in peacebuilding. This framework proposes that participation is not in tension with methodological rigor; rather, it strengthens it. We provide concrete evidence of how the genuine participation of co-researchers not only enhanced the quality of the evaluation, but also enabled the emergence of new insights and unanticipated uses.

An adaptive evaluation: from theory to lived experience

One particularly interesting aspect of our experience was how the evaluation itself became an adaptive process. We began with a design informed by external theories on peacebuilding and causal hypotheses grounded in specialized literature on relational fabric, power and agency—an approach that could have led to a more conventional evaluation. However, as participants embraced their role as co-researchers, their involvement in evaluation spaces expanded, and the process evolved in response to their reflections.

In the chapter, we document how initial hypotheses had to be adjusted in light of power dynamics. For example, we realized that public officials were not participating as equals in these spaces; instead, they assumed controlling roles or were physically present but not meaningfully engaged. This led us to reframe our research questions, focusing more directly on power relations and on the characteristics of peacebuilding spaces that are genuinely transformative.

Bricolage as a tool for rigorous inclusion

A key element of our experience was what we describe as “methodological bricolage.” This was not about mixing methods for convenience or merely seeking consistency, but about intentionally responding to a central question: how to ensure that different voices are heard in ways that are meaningful for the diverse groups involved.

We combined elements of systematization of experiences, outcome harvesting, semi-structured interviews and reflective spaces, always guided by a focus on useful, participatory learning. This bricolage allowed us to do something we consider essential: present initial theories and hypotheses transparently to co-researchers —not as immutable truths, but as expectations and assumptions held by some of the actors who made the project possible. Through this act, these theories became shared learning objects: assumptions we could collectively explore, test, adjust and, when necessary, challenge —beyond the evaluators’ own analyses.

Navigating tensions and learning from failure

Not everything worked as intended, and we are candid in the chapter about our limitations and mistakes. One of the most significant was the inability of government institutions to use the evaluation findings. Despite producing rigorous and well-crafted technical outputs, staff turnover during implementation meant that these results followed the fate of many traditional evaluations: they were archived without influencing future action.

Even so, this apparent failure taught us something fundamental about the transformative power of democratizing evaluation. While technical reports had little institutional impact, the participatory evaluation process itself became a valuable and transformative space —for both co-researchers and the facilitation team— consistent with the methodological principles of participation, dialogue and collective knowledge construction.

At the end of the chapter, we organize our reflections around the three domains of the Inclusive Rigor Framework: participation and inclusion, bricolage, and use within an enabling environment. We chose this structure because each domain offered specific lessons for iteratively adapting the evaluation and addressing key questions: how to intentionally engage different levels and moments of participation; how to articulate diverse methods guided by use; and how to navigate complex institutional dynamics that shape the utility of findings. We hope these domains can also guide readers’ own reflections and decisions.

An open invitation to collective reflection

Our chapter does not offer recipes. Instead, it shares a complex experience, with its strengths and shortcomings, its achievements and lessons learned through difficulty. We believe this level of honesty is necessary to build a more participatory and transformative evaluation culture in Latin America —one that is nourished by difficult conversations. Our regional context is fertile ground for the continued adaptation of these frameworks, and we have much to contribute and to learn collectively.

How can we reimagine evaluation as a space for collective construction and continuous learning? What power structures must be transformed for evaluation to become truly participatory? How do we navigate tensions between different learning interests without losing the centrality of local voices? These are some of the questions we explore in depth, and we hope they will spark meaningful dialogue with colleagues facing similar challenges in their own contexts.

In 2025, we concluded the evaluation process of Co-Inspira, the initiative at the heart of this chapter, and we would very much like to continue the reflection and share longer-term learning. We warmly invite our peers, and the entire EvalParticipativa community in the region, to read the full chapter and join the ongoing conversation in the Inclusive Rigor Co-Lab.
