Today we would like to share with this community of practice and learning, EvalParticipativa, the experience and perspective of the international organisation Fe y Alegría regarding its connection to participatory evaluation.
The article highlights a recent institutional document produced by this organisation, created by the members of the Commission responsible for its collective construction: Beatriz Borjas, Antonio Pérez Esclarín and Vicente Palop. In this post, they first introduce aspects of Fe y Alegría’s identity and work as context for its central commitment to popular education, a key element closely related to participatory evaluation, as we discussed in our post When cousins meet and in the handbook Sowing and Harvesting.
When reviewing the objectives of the EvalParticipativa community, I read: ‘[…] our aim is that this space can be used for the social and collective construction of knowledge on participatory evaluation without the need for an “expert” or “trainer”…’ I also read ‘[…] horizontal and collective learning’. These phrases immediately bring to mind ideas from Popular Education (Freire), Justice against Epistemicide (de Sousa Santos) and Interculturality.
Our region has spent years developing alternative ways of thinking about evaluative processes, deconstructing established ways of thinking and doing. This has sparked interesting debates that have led us to reflect not only on how to evaluate but also on who should evaluate (and from what knowledge and what feelings).
The Network for Monitoring, Evaluation and Systematization of Latin America and the Caribbean (ReLAC) promotes the development of these topics through its working groups. Two of these groups, Evaluate from Latin America and Evaluation Standards for Latin America and the Caribbean, develop and exchange ideas and generate theoretical and practical knowledge that prompts us to think and reflect along these lines, in keeping with the network’s mission and objectives.
Since 2019, the EvalParticipativa community of practice and learning has been working to document and deepen understanding of participatory evaluation experiences carried out in Latin America and the Caribbean. Our virtual platform bears witness to the cases and experiences we have presented, to the reflections on the meaningful lessons they generated, and to the growing collection of resources (videos, manuals, tools) on participatory evaluation that have been added.
In order to strengthen the institutionalisation of participatory evaluation in the region, we plan to carry out a range of activities during 2021 and 2022 in conjunction with multiple actors. A very special one is the call for the EvalParticipativa Award for Academic Production, which aims to deepen analytical approaches that reflect on this type of evaluation in general or on some of its dimensions, potential, characteristics, etc.
My participatory evaluation research journey was sparked by a motivation to involve young children, as the beneficiaries of an educational program, in evaluating their pre-school program. I wanted children to have the opportunity to participate in designing and using evaluation processes, to have their voices heard, and to influence the programs and activities they were involved in.
Subsequently, this motivation broadened to include beneficiaries of any age and led to my master’s and PhD research, in which I sought to understand beneficiaries’ perspectives on participating in evaluating nonprofit organisations. After all, if beneficiaries don’t want to be involved in evaluation processes, requiring them to be is unlikely to be a useful, empowering or voice-giving experience.
Hello, I’m Emma Rotondo (*), member of the Peruvian Evaluation Network. I would like to share some thoughts on the importance of soft skills and facilitation skills for evaluators in evaluation processes.
One of the definitions of evaluation that I like most holds that, by carrying out an evaluation, society learns more about itself. Another important element in defining evaluation is the importance of enhancing pluralism: the different perspectives held by the stakeholders participating in an initiative. This means empowering those stakeholder groups so they can make decisions, think critically, and choose and develop activities guided by the evaluation’s recommendations.
As outlined in the planned activities for EvalParticipativa’s second stage, we are keen to keep adding to the RESOURCES section of our community of practice and learning.
As our colleagues and friends already know, the section already hosts a wide variety of testimonial videos, guides and manuals, tools, case studies and significant lessons. We highlight new and updated material that we add to the repository on our social networks, Facebook, Instagram and LinkedIn: another reason to follow us online!
In this post, we want to highlight four recent additions to the Guides and Manuals section. They all share a clear theme: the empowerment evaluation approach. Although participatory evaluation is the general or umbrella term that refers to stakeholder involvement in evaluation processes in Latin America, the same is not true in the Anglo-Saxon context, where nuances between different evaluation approaches that include or involve stakeholders are more commonly accentuated.
After studying business and spending my early career working in the private sector, discovering what “participation” meant, albeit theoretically, was key. I came to understand that “participation” meant that several stakeholders had a voice (and a vote) in decision making during the four phases of an evaluation: design, data collection, analysis, and interpretation… it seemed simple enough!
From that moment (back in 2012), I began to notice that most evaluation reports included in their summary or methodology something along the lines of: “this is a (highly) participatory evaluation”, and some even mentioned it in the title itself (“participatory evaluation of…”), when what they really meant was that they had consulted many people or groups, but only as informants…! I wondered why they called it participation when what they actually meant was that they had consulted a wide sample.
The deadline to deliver the ambitious 2030 Agenda for Sustainable Development is approaching fast. The world is struggling to regain its footing while the COVID-19 crisis still looms large. Bold, ambitious and inclusive actions can turn things around for people and the planet. At the same time, the solutions with the highest transformative power to change people’s lives must be scaled up. With a world population younger than ever before, engaging with youth in development processes, including in evaluation, can provide the impetus and the multiplier effect to get the Sustainable Development Goals back on course.
Inclusive and meaningful engagement of youth in evaluation provides an unparalleled opportunity to make development programmes responsive to the needs and demands of young people. It amplifies youth voices and agency, and recognizes young people as active leaders and contributors in building a sustainable world. When the power of youth is harnessed in evaluation in meaningful ways, it can bring innovation, increase evaluation quality, and enhance the relevance and transformational power of evaluation.
Costa Rica’s Caribbean coast is known for its lush green landscapes, beautiful beaches and friendly people. Unfortunately, it also stands out for its high incidence rates of several kinds of cancer. This prompted the region’s Health Boards (local bodies that monitor the quality of health services) to request an evaluation of cancer care and prevention services, taking advantage of an open tender, organised at the time by the Costa Rican Ministry of Planning (MIDEPLAN) and the German cooperation programme FOCEVAL, to support capacity strengthening in evaluation.
One of the challenges of participatory evaluation is that of including a broad diversity of stakeholders, many of whom have no training or previous knowledge of evaluation.
In this testimony, Olga Nirenberg highlights the importance of the tools and techniques employed: they must be both effective in tackling the topic and simple enough to be within the reach of all participants. Including the voices of a broad and diverse array of stakeholders in the process is the best path towards a useful, transformational and high-quality evaluation.
Olga Nirenberg has a PhD in Social Sciences (UBA, 2005) and a diploma in Public Health (UBA, 1976). A founding member of the Local Development Support Centre – CEADEL – (https://ceadel.org.ar, 1986–2020), she developed the Self-Assessment Tool for Education Quality project – IACE – (UNICEF/CEADEL, 2007–2017). She has worked as a consultant/evaluator for UNICEF Argentina, the ARCOR Foundation, the Pan American Health Organization (PAHO), the World Health Organization (WHO) and the W.K. Kellogg Foundation. She has also worked in different social areas of the national government in Argentina. She has been an evaluator of extension projects and a teacher in both public and private universities and has published books and articles on social planning and evaluation. She is a member of the Argentine Network of Evaluation (EvaluAR) and of the Latin American and Caribbean Monitoring, Evaluation and Systematization Network (ReLAC). She is currently collaborating with the EvalParticipativa initiative.