PARTICIPATION, A KEY FOCUS IN OUTCOME HARVESTING: LESSONS FROM CHILE

by Andrea Peroni Fiscarelli

Homogeneous evaluations vs. differentiated evaluations

To build a solid framework and evidence base, state-led evaluations have become more standardised over time and are thus gaining credibility. The problem that remains is the need to recognise that not all public programmes share the same characteristics, and that differentiated types of evaluation should therefore be considered.

This is evident when comparing programmes that centre on the delivery of goods with no interaction with the beneficiaries (such as subsidies, plans and vouchers) with those that seek to provide tools and/or develop competencies and skills aimed at increasing the social inclusion of individuals, especially those from vulnerable sectors.

For the latter type of programme, traditional evaluation methodologies have proved insufficient: they seek only to assess effectiveness or efficiency and fail to capture the complex and diverse realities, practices and results involved. Furthermore, because these programmes deal with human and social behaviours, the complexity of their contexts should also be recognised.

Outcome Harvesting is a viable, innovative and useful methodology for evaluating capacity-building public sector programmes, as it enables behavioural changes in skills-development and social-inclusion projects to be explored in a highly participatory manner.

In their book, Ricardo Wilson-Grau and Heather Britt describe Outcome Harvesting as “a method that enables evaluators, grant makers, and managers to identify, formulate, verify, and make sense of outcomes. The method was inspired by the definition of outcome as a change in the behaviour, relationships, actions, activities, policies, or practices of an individual, group, community, organisation, or institution” (Wilson-Grau & Britt, 2012).

Outcome Harvesting, created by Ricardo Wilson-Grau[1], is a suitable methodology for complex programme contexts in which cause-and-effect relationships are not clear, so that the aims and the ways of achieving them remain fairly unpredictable, and predefined objectives and theories of change must be modified over time in order to respond to contextual changes. It is particularly useful when the outcomes (and even the inputs, activities and outputs) are not specific, are not defined quantitatively at the planning stage, and/or are not sufficiently measurable in the evaluation.

Harvesting Outcomes in Evaluation

In early 2016, a team of professionals and academics, members of the Chilean Evaluation Network, on behalf of Isónoma Consultorías Sociales Ltda., used this methodology to evaluate the programme “Yo Emprendo en Comunidad” (I am a Community Entrepreneur), designed and implemented by the Solidarity and Social Investment Fund (hereafter, FOSIS) of the Chilean Ministry of Social Development.

This programme seeks to enable workers in worker-owned cooperatives to develop economic activities that increase their income and improve both their living standards and the context in which they carry out their activities, thereby promoting cooperative work. Given the flexibility required and the uncertainty present, it is a programme with complex characteristics.

The evaluation included:

      • A characterisation of the beneficiary organisations and their productive activities.
      • The evaluation of the “Yo Emprendo en Comunidad” programme based on a traditional quantitative approach. This was carried out by comparing the baseline with the output line (endline) on variables such as income, sales and production costs, with the aim of evaluating to what extent the programme objectives had been achieved (see the illustrative sketch after this list).
      • The qualitative evaluation of processes, based on individual and group interviews with teams and regional managers.
      • The evaluation of a selection of ten projects in northern, central and southern Chile using the “Outcome Harvesting” methodology, through which we sought to highlight the participation of the members of the beneficiary organisations and to identify possible unexpected effects that the baseline and output-line indicators had not considered; these were mainly results linked to worker-owned business models and to changes in the quality of life experienced by the beneficiaries.
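For readers less familiar with baseline/output-line comparisons, the sketch below illustrates the logic of that quantitative step. It is a minimal illustration in Python using entirely hypothetical data and assumed column names, not the actual dataset or analysis carried out for FOSIS.

    # Illustrative sketch only: hypothetical data and column names, not the
    # actual "Yo Emprendo en Comunidad" dataset or analysis.
    import pandas as pd

    # Each row is one beneficiary organisation, measured at baseline and at
    # the output line (endline) on the same variables (e.g. income, sales).
    data = pd.DataFrame({
        "organisation": ["A", "B", "C"],
        "income_baseline": [120_000, 95_000, 150_000],
        "income_endline": [150_000, 90_000, 210_000],
        "sales_baseline": [300_000, 210_000, 400_000],
        "sales_endline": [360_000, 220_000, 520_000],
    })

    # Simple before/after change for each variable of interest.
    for var in ["income", "sales"]:
        data[f"{var}_change"] = data[f"{var}_endline"] - data[f"{var}_baseline"]
        data[f"{var}_change_pct"] = 100 * data[f"{var}_change"] / data[f"{var}_baseline"]

    # Average change across organisations: one rough indicator of whether the
    # programme objective of higher income and sales was met on average.
    print(data[["income_change_pct", "sales_change_pct"]].mean())

In the actual evaluation, this comparison was of course made on the programme’s own baseline and output-line indicators; the sketch only makes the pre/post logic concrete.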

Harvesting Lessons

The first lesson we learned was to dare to propose a methodology that had not been requested by FOSIS and that was unknown to the state counterpart, making it a risky proposal “outside the Requirements”. This did not hinder us, as we were prepared to implement the requested evaluation and also to include a participatory methodology, in this case Outcome Harvesting.

We were determined to show that social programmes are diverse and are developed in complex contexts, and that programmes focused on developing individual and community skills and capacities require a different approach to evaluation.

Thus, we carried out a “homogeneous evaluation” and a “differentiated evaluation” in parallel. This meant that the evaluation could generate two types of results: on the one hand, the traditional use of findings (effectiveness, efficiency, results) and, on the other, a dialogue-based learning exercise for programme staff and beneficiaries, which allowed them to learn to reflect and engage with each other from a participatory evaluation perspective. This perspective seeks to involve the actors in the same process so that, together, they reflect on the use of the evaluation, given that they designed and participated in it themselves. This represents an opportunity to strengthen the deliberate exercise of democracy at local and national level.

The second lesson we learned was that most innovative methodologies come from other contexts, which means that several adjustments need to be made when they are applied to the national context. All in all, despite the considerable differences among the implemented projects, both in the projects themselves and in the territorial contexts where they were based, the defined methodology worked well. However, a series of difficulties arose:

      • There were times when the key institutional actors did not have the required depth of knowledge concerning the outcomes of the evaluated projects and so were not ideal informants. This made defining the outcomes at the beginning of the process considerably difficult for some of the projects.
      • Detecting the third key actor[2] varied from project to project, especially as some projects could not name a key actor external to the organisation with knowledge of the projects and their outcomes. This figure was therefore omitted in several projects, making the process of defining and validating the outcomes more difficult.
      • The time restriction for harvesting the outcomes on the ground made this process difficult and complicated describing and validating the outcomes with the required depth. This was not a cross-cutting difficulty across all the projects, but it was relevant where it proved difficult to gain open access to participants from the organisation in question.
      • The process of validating the outcomes was highly complicated in all the projects. Even though it had been established that the outcomes would be validated by the key institutional actors and the key informant identified on the ground, it was not possible to implement this stage adequately because of the selected informants’ limited knowledge of the projects.

An important lesson learned concerning the workshop methodology was the use of didactic material to represent abstract concepts[3], which proved to be a valuable facilitator when working with vulnerable groups of people with low educational levels and of varied ages and genders.

The material was adapted by the research team and was used successfully with groups such as female potters in geographically isolated rural areas, groups of men in prisons, groups of artisanal fishermen and fisherwomen, and groups of beekeepers in rural areas. However, it was difficult for the representatives of the participating organisations to work independently in groups and to respond in writing to the information requested, owing to language difficulties and the fact that a significant number of people were not able to read or write.

As seen above, the Outcome Harvesting methodology could not be fully applied due to some difficulties encountered in the fieldwork. Nevertheless, it contributed to evaluating and understanding the processes related to each of the implemented projects, providing significant results for the “Yo Emprendo en Comunidad” programme decision-makers. It also demonstrated that it is possible to carry out this type of participatory evaluation even when the state does not (initially) contemplate it in its evaluation requests.


Evaluation team: Andrea Peroni F., Patricia Varela, Cecilia Robayo, Claudia Olavarria. For more background and detail, see the paper in Spanish presented at the 2016 CLAD Conference in Santiago de Chile, El Uso de Metodologías Innovadoras en la Evaluación de Programas Sociales: El Caso de la “Cosecha de Alcances” (The Use of Innovative Methodologies in the Evaluation of Social Programmes: The Case of “Outcome Harvesting”).


[1] The method known as Outcome Harvesting was developed by Ricardo Wilson-Grau with the help of his colleagues Barbara Klugman, Claudia Fontes, David Wilson-Sánchez, Fe Briones Garcia, Gabriela Sánchez, Goele Scheers, Heather Britt, Jennifer Vincent, Julie Lafreniere, Juliette Majot, Marcie Mersky, Martha Nuñez, Mary Jane Real, Natalia Ortiz and Wolfgang Richert. Over the past eight years, Outcome Harvesting has been used to monitor and evaluate the achievements of hundreds of networks, non-governmental organisations, research centres, think tanks and grassroots organisations around the world (Wilson-Grau & Britt, 2013).

[2] The first type of actor was the public official/worker, the second type was the beneficiaries (or vice versa), and the third type refers to actors not directly involved in the programme but who can testify to the outcomes achieved or not achieved, for example a local municipal authority.

[3] In this case, the didactic material used was: (1) Star cut-outs to represent the dreams or expectations of the project beneficiaries; (2) Cloud cut-outs to represent difficulties encountered and negative outcomes; (3) Cut-outs of fruit to represent positive outcomes from the project; (4) Cut-outs of nuts and bolts to represent the main contributions to the processes; and (5) Cut-outs of lightbulbs, added to represent contributions from the actors that made the programme successful.