ANSWERS TO THE QUESTIONS RAISED IN THE WEBINAR “EXPERIENCES OF PARTICIPATORY EVALUATION IN LATIN AMERICA”

Dear colleagues and friends, we hope you are well! We want to share with you the answers to the questions that were not answered in our webinar held during Evaluation Week.

We hope these responses will be useful as we continue to share and learn about this exciting topic and add new participants to this community of practice and learning.

Question 1 (Susana Menéndez).  Stakeholder mapping: how does it take into account a gender perspective and intersectionality?

Answer from Olga Nirenberg

First of all, I need to state that I am not an expert on gender issues, so my thoughts on this should not be taken as the final word.

The stakeholder mapping tool provides us with an initial view of who is considered significant in the context where the evaluation will take place. As its introduction states, it is flexible and adaptable to specific contexts according to the area or topic that the evaluation will address. It is therefore possible to account for and specify other dimensions, for example the gender perspective, even though this is not explicitly accounted for. (It was, however, applied to initiatives whose target groups were mainly female.) The tool’s introductory text states that the following features of the stakeholders are of interest when planning action:

    • the positions they hold and the roles they play
    • their specific behavioural logic and reasoning
    • their specific perception of reality
    • their interests
    • the type of resources they manage and/or control in relation to the topic in question
    • their relative power to overcome identified problems and needs

The gender perspective could be incorporated into each, or at least most, of these. However, it would be the task of the evaluation itself to include a focus that identifies and explains the structural causes of prevailing inequalities between men and women in any given context. Action should also be included in the initiative under evaluation to mitigate identified situations of gender inequality and their effects. This requires indicators, techniques and instruments that ask appropriate questions in order to identify differences between women and men regarding participation, benefits and results related to equality within the context of the initiative under evaluation.

Stakeholder mapping is a valuable starting tool which provides a broad, necessarily superficial description of the relationships between the stakeholders in the specific context of the initiative. But this is obviously far from being the evaluation itself. Perhaps it would be advisable to include at least a gender breakdown in the map’s summary matrix, with two additional columns identifying whether stakeholders are male or female (a minimal sketch of what such a matrix might look like follows below). But it would be even more advisable to create or adapt a map that deals exclusively with the inclusion of the gender perspective in the initiative under evaluation, with appropriate indicators and questions.
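
Purely by way of illustration, and not as part of the original tool, here is a minimal sketch (in Python) of how such a gender-disaggregated summary matrix might be represented and quickly tallied. The stakeholder names, column labels, figures and the 30% balance threshold are all hypothetical choices made for this example only.

    # Hypothetical sketch of a stakeholder-mapping summary matrix with a
    # gender breakdown added; all names and figures are illustrative.
    stakeholder_matrix = [
        {"stakeholder": "Municipal health office", "type": "government",
         "roles": "service provision", "resources": "budget, staff",
         "women": 12, "men": 8},
        {"stakeholder": "Neighbourhood association", "type": "civil society",
         "roles": "community outreach", "resources": "volunteers, local knowledge",
         "women": 25, "men": 10},
        {"stakeholder": "Local chamber of commerce", "type": "business",
         "roles": "funding partner", "resources": "donations",
         "women": 3, "men": 9},
    ]

    # Overall gender breakdown across the mapped stakeholders.
    total_women = sum(row["women"] for row in stakeholder_matrix)
    total_men = sum(row["men"] for row in stakeholder_matrix)
    print(f"Participants mapped: {total_women} women, {total_men} men")

    # Flag groups where one gender is strongly under-represented, as a prompt
    # for the gender-focused indicators and questions discussed above.
    for row in stakeholder_matrix:
        share_women = row["women"] / (row["women"] + row["men"])
        if share_women < 0.3 or share_women > 0.7:
            print(f"Check gender balance in: {row['stakeholder']} ({share_women:.0%} women)")

In practice the matrix would of course stay in whatever format the evaluation team already uses (a spreadsheet or a flipchart grid); the point is simply that adding the two columns makes gender gaps visible at a glance.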

The mapping includes intersectoral and multisectoral action when it specifically refers to the relevance of the links established between the different governmental areas, civil society organisations and those from the business world. You can also observe intersectoral action in the summary matrix, where it describes the different types of stakeholders, their functions and roles, and the resources that they manage and/or control.

To respond to the question about how the mapping integrates intersectionality, I must reiterate my initial statement about my limited knowledge of gender issues. I know even less about theories of how methodologies can incorporate an intersectional analysis of specific situations. My understanding is that it involves a complex analysis to detect multiple discriminations, or dimensions of discrimination, that interact with and affect each other; in other words, it identifies the coexistence of several grounds for discrimination that operate together in specific contexts. The tool that I propose does not explicitly address this; rather, it describes the types of stakeholders, their functions and roles, and the resources that they manage and/or control in terms of the specific topic covered by the initiative under evaluation in that specific context. Those who apply the tool may include such viewpoints in the particular situations where they work; in other words, they can create “context-specific” maps. It should be reiterated that it would be advisable to adapt the mapping to focus exclusively on the gender perspective in the initiative under evaluation, with appropriate indicators and questions which include intersectionality.

Question 2 (Gabriela López).  What would you suggest to independent evaluators who carry out external evaluations (mid-term and final) that come with predefined criteria and indicators in addition to generally tight deadlines for the entire process? How can we influence the agencies and donors so the evaluation corresponds with the methodologies under consideration?

Answer from Olga Nirenberg

In most evaluations, the independent evaluators are commissioned by government and/or donor agencies. It is true that their tender requests tend to predetermine the evaluative methodologies or approaches (most emphasize quantitative methods), the dimensions and indicators, and the timescales (usually very tight). This leaves little room for creativity, especially for the inclusion of qualitative and participatory methods, which require longer periods of fieldwork and perhaps more resources (not only financial). In this situation, in my experience, it is advisable to pursue early negotiation processes with those who contract us or request our services, focusing on what qualitative tools, participatory methods and relevant indicators would add to, rather than substitute for, the technical evaluative design. In other words, rather than approach them from a fundamentalist standpoint, we need to negotiate added indicators and an evaluative component that incorporates the “stakeholder perspective”, drawing special attention to including people from the target groups at various stages and emphasizing the fact that they are almost certainly the ones who can best comment on the quality of the goods and services that the project under evaluation offers. The ideal is to adopt qualitative-quantitative approaches and evaluative models that have enough flexibility to allow for continual adjustments throughout the process. It is important to note that if the initiative under evaluation does not, or did not, include participatory methods when it was set up, it will be extremely difficult, if not impossible, to carry out a participatory evaluation.

In summary: before embarking upon an evaluation process, you should make those who request the evaluation aware of the importance of including mixed methodological approaches and participatory methods, through explanations and examples of prior evaluation experiences, as part of what are often arduous negotiation processes. When the evaluation comes to an end and the conclusions and recommendations are being drawn up, the added value provided by mixed and participatory approaches becomes more obvious, and this should be emphasized. In this way, our contributions can help make those involved, especially decision makers from governmental bodies and cooperation agencies, aware of their importance.

I can cite three examples of evaluations that used such methodologies in the context of working with important cooperation agencies:

  1. With the W. K. Kellogg Foundation, in evaluations (ex ante, ongoing and ex post) of a training initiative for health professionals in eleven Latin American countries (the cluster of UNI Projects, 1990 to 1996), an approach that was then adapted for use in several subsequent initiatives.
  2. With the Pan American Health Organisation (PAHO), in the final evaluations of two initiatives in Central American and Caribbean countries (2014 to 2015).
  3. With UNICEF Argentina (in cooperation with CEADEL, 2007 to 2017), in the IACE initiative, consisting of self-assessments of educational quality in public institutions at the three levels of compulsory education in seven Argentine provinces. A summary can be found at https://evalparticipativa.net/ in the section: RESOURCES, Participatory Evaluation Experiences and Case Studies: A participatory method for School Evaluations: IACE.

I could give other examples of government agencies (national or regional) in Argentina that opted for similar evaluative methodologies with mixed and participatory approaches.

Answer from Pablo Rodríguez Bilella

The issue you raise speaks to the reality of most participatory evaluation, which requires longer timescales and often different types of resources. If the evaluation is set out with strict guidelines, the wiggle room for including participatory moments will be narrower than if the evaluator or team commissioned to carry out the evaluation were free to adopt a participatory approach from the start. With reference to the situation you describe (the most difficult!), there is little you can do if you cannot persuade the people requesting the evaluation of the advantages of stakeholder participation in the process. On the other hand, if you have their approval and endorsement, everything is simpler and more legitimate. For this reason, it is essential for us to know how to “sell” the potential and possibilities that a participatory approach can bring to evaluation: its relevance to achieving results validated by the project’s local stakeholders, the medium-term impact that comes from the evaluation’s findings and suggestions being adopted and taken up, etc. In other words, everything that demonstrates (and encourages) the benefits that the participatory evaluation process will bring as more stakeholders are incorporated.

Question 3 (Ericka Valerio).  Are all stakeholders involved in all stages of the evaluation process? What participatory methodology is used, for example, when the evaluation report is being written?

Answer from Carmen Luz Sánchez

To give an example, at SERVICIO PAÍS we have incorporated the participatory evaluation approach in a progressive manner, both in terms of covering a wider range of projects under evaluation and in terms of the level of participation of those involved. At present, the various stakeholders are involved in the processes of designing, planning and implementing evaluative moments, but we have not yet incorporated them into the stages of systematisation, writing results reports, etc. This year, we aim to extend participation to those stages as well.

Question 4 (Pablo).  How is the pandemic affecting (or not) participatory methodology?

Answer from Fernanda Arriaza

Without a doubt, the majority of evaluations have been affected by this pandemic, both the more traditional ones and the more participatory ones. It is having a clear impact on their execution at this particular moment. However, it also provides us with a good opportunity to rethink the ways we do things and to assess the feasibility of incorporating other (technological) tools that allow us to continue working despite physical distancing and/or the difficulty of meeting in person.

There is another side to this when you think about what comes after the pandemic. Elements of participatory evaluation, such as its contribution to redistributing power and its focus on accountability and informed decision-making, mean that it is well placed to contribute to local recovery and empowerment processes, which confirms its value and the need to implement it. It is therefore important to remain alert to developments both within and outside the field of evaluation regarding ways of increasing participation in virtual spaces. The educational field will no doubt have much to contribute in this sense, as will the specific planning and implementation moments of social programs and projects that place an emphasis on civil society. Given the cross-disciplinary nature of evaluation, various virtual spaces will provide suggestions, clues and inspiration for our field of interest.

Question 5 (Francisco Bernales).  What do you think about using parliamentary assessment as a tool for participatory evaluation and for overcoming the technocratic bias of public authorities? Would parliaments provide the best context for facilitating participatory evaluation? (With external methodological support, and if they exercise a direct evaluative function, as per Article 24 of the French Constitution.)

Answer from Pablo Rodríguez Bilella

The role of parliaments in evaluation is a subject largely addressed by the EvalPartners initiative through its offshoot EvalParliament, as can be seen on its website. The information is not up to date, but the people who appear there have almost certainly continued this work. Without a doubt it has been a very relevant space in the field of evaluation and participatory evaluation, but in my experience it has been very difficult to work in a coordinated manner when people come from different settings.

Question 6 (Jorge Carbajal).  From the perspective of public sector democratic control, can participatory evaluation be considered a tool for citizens to promote social accountability?

Answer from Pablo Rodríguez Bilella

We can see in the literature and in the reality of participatory evaluation in the region that it strongly and clearly emphasizes one side of the evaluation coin: learning. The other side of the coin is accountability, which has been more prevalent and treated as more of a priority in many evaluative efforts. This is easy enough to see when we think of the images that come to mind when we speak of “evaluation”: control, accountability, weighing up, etc. As a way of redressing this imbalance, participatory evaluation presents an open and explicit way of emphasizing the learning component, which is both the starting point and the end point of its proposal. However, if we want to do an “evaluation”, we cannot leave behind either side of the coin, and so the accountability dimension should not be left out of any participatory evaluation practice. The challenge, then, is to think about what instruments and processes would be of use when setting up participatory experiences of democratic control of the public sector. As evaluative instances, they include the two evaluative components mentioned, and without a doubt they provide inputs and lessons learned as the quality of participation grows, all within the high levels of legitimacy expected from such processes. Evaluative research, as a field of inquiry into evaluation theory and practice, could address specific experiences of participatory social control and try to clarify to what extent they relate to evaluation. From there, it could also contribute the accumulated wisdom of participatory evaluation.

—–

Thanks again for your interest and participation!

Greetings from Carmen Luz Sánchez (Servicio País Program), Fernanda Arriaza (TECHO International), Olga Nirenberg (CEADEL), Pablo Rodríguez-Bilella (PETAS Program) and Juan Carlos Sanz (DEval), webinar moderator.
