Exploring Participation in Evaluation: Scope, Limits, and Lessons from Chile

by Rodrigo Quiroz Saavedra

When I wrote this chapter (available only in Spanish), my intention was not merely to present a case of participatory evaluation, but to invite the reader to delve into real evaluative practice and to look closely at what it truly means to open up spaces for participation in contexts where diverse actors are involved and significant asymmetries exist among them.

I chose as a case study the “Transition to Independent Living Programme” (PTVI) in Chile, implemented by Fundación Eres, because it brings together two key features: it works with young people with intellectual disabilities — a historically marginalised group — and it proposes a social and labour inclusion approach that places self-determination and the rights of persons with disabilities at its core.

From the outset, I wanted to explore to what extent an evaluation could genuinely be participatory when applied to a programme of this nature. For this purpose, I used the tool developed by Daigneault and Jacob (2009), which measures participation across three dimensions: extent of involvement, diversity of actors, and control over the process. The exercise was both technical and deeply reflective. On the one hand, it enabled me to assign clear scores; on the other, it compelled me to examine the power relations, conflicts, and interpretations that emerged during the process.

Over the course of a year, we worked with an Evaluation Committee that brought together representatives of all stakeholders: young people with disabilities, caregiving mothers, support professionals, foundation managers, public officials, and members of civil society. It was not simply a matter of “listening to their voices”, but of ensuring they took part in substantive tasks within the evaluative process: defining evaluation questions, co-designing tools, interpreting results, and formulating recommendations. This breadth and diversity of involvement resulted in high scores for those dimensions, but also revealed tensions that no single indicator could capture.

One of the most significant moments was a discussion about the role of families. While some saw mothers as an indispensable support for the young people’s independence, the mothers themselves spoke of the overload and lack of support they experienced. It was then that I understood that the evaluator cannot be a mere neutral observer: their role includes making asymmetries visible and, when necessary, supporting the position of those with less power so that their perspectives are not rendered invisible.

The Daigneault and Jacob tool proved useful for systematising and scoring the experience, but I also recognised its limitations. It measures who participates and in which tasks, but it does not always capture how power is negotiated, nor whose interpretations ultimately prevail and with what consequences for those involved. This is why, in the chapter, I argue that incorporating the dimension of power relations is essential if participatory evaluation is to remain consistent with its transformative principles.

Personally, this experience confirmed to me that participatory evaluation is both a methodological exercise and a political and ethical act. Political, because it concerns who is able to speak, have a voice, and exert influence in a process that can affect policies, programmes, and people’s lives. Ethical, because it confronts us with the responsibility not to reproduce, through evaluation, the very forms of oppression we claim to want to transform.

The chapter moves between narrating the case and reflecting on the evaluator’s role. I propose that this role is not only to facilitate, but also to mediate, to open spaces for dissent and conflict resolution, and to create the conditions for participants’ knowledge and lived experience to carry the same weight as technical expertise. I also show how, by the end of the process, my subjective perception — that the evaluation had been moderately participatory — was surpassed by the data: the indicators revealed broader and deeper participation than I had imagined.

Ultimately, I invite the reader to accompany me on this reflective journey, not to learn a “recipe” for participatory evaluation, but to glimpse the complexities, dilemmas, and learning that arise when participation is taken seriously. I hope that, by the end of the chapter, it is clear that evaluating in a participatory way is not simply a matter of applying a method: it is a commitment to a process of democratising knowledge and decision-making, one capable of challenging the structures that sustain exclusion.


Daigneault, P. M., & Jacob, S. (2009). Toward accurate measurement of participation: Rethinking the conceptualization and operationalization of participatory evaluation. American Journal of Evaluation, 30(3), 330–348.
