SIMILARITIES BETWEEN SYSTEMATISATION OF EXPERIENCES AND EVALUATION

by Olga Nirenberg (*)

Introduction

As I mentioned in my brief comment on the article published on 22 March on EvalParticipativa, “Systematisation of Experiences and Evaluation: Similarities and Differences”, by Oscar Jara Holliday, my first reaction upon reading it was one of surprise. I felt it was paradoxical that certain arguments regarding evaluation should appear on a blog devoted to participatory evaluation; it seemed like an oxymoron, a contradiction in terms.

I am familiar with the works published by the author of the article, OJH, which are widely known and have been of great influence on those of us who work in evaluation in Hispanic American countries. I have even had the opportunity to debate directly with him on the resemblance between systematisation and evaluation. Our conversation was very important to me, so much so that in the book I published shortly after (Nirenberg, 2013) I included a chapter (chapter 8) dedicated almost entirely to systematisation of experiences in which I highlighted its points of contact with non-traditional evaluation approaches.

Although I have acquainted myself with the approach and methodologies of the systematisation of experiences, and have even put them into practice on many occasions, here I will discuss mainly the references to evaluation in the cited article, since evaluation has been, for more years than I would like to admit, the central focus of my professional career.

I am very grateful to my colleagues and coordinators of EvalParticipativa, Pablo Rodriguez-Bilella and Esteban Tapella (PETAS/National University of San Juan, Argentina) and Carlos Sanz (DEval), for giving me the opportunity to expand and enrich this debate.

Evaluation, its Purpose and its Usefulness

Evaluation is a systematic and programmed reflection carried out using quality information (trustworthy, relevant, up to date) in order to generate, through comparisons and reasoned evaluative judgements, useful and viable recommendations to improve the action and thus its results. Such reflection may refer to action already taken, action in progress, or action yet to be taken (Nirenberg, 2013).

I would like to reiterate something I have mentioned in various publications, and with the backing of several colleagues, which is that evaluation does not have one purpose alone but at least three main ones:

    • to learn from practice,
    • to improve intervention programming and management, and
    • to achieve transparency in intervention management.

I therefore disagree with the article’s claim that the goal of learning belongs to systematisation and that this is what differentiates it from evaluation. On the contrary, my view is that this aim is shared by both systematisation and evaluation. This is where I stand on the relation between the usefulness of evaluation and that of systematisation: both contribute to the goal of learning in order to transform real-world situations that are problematic or unsatisfactory. That is, both lay the groundwork of knowledge for transformative action.

Evaluation according to stages, focuses and who carries it out

We should remember that evaluation deals with different stages and focuses. These can sometimes be recognised as such, in the form of a time sequence, but more frequently they present themselves as conceptual moments, in the form of a logical sequence, which means that focuses may overlap: during ex-post evaluation, for example, attention is also placed on other moments and focuses. In short, they are:

    • Diagnostic evaluation or baseline, with a focus on the initial problematic situation and its context.
    • Ex-ante evaluation, with a focus on the formulation or design of the intervention.
    • Mid-term evaluation, during the intervention and with a focus on the processes (activities).
    • Ex-post or final evaluation, with a focus on the results or effects (both expected and unexpected).

Those moments interact, affect each other and, as mentioned before, can overlap, in many cases acting as inputs to one another. The referenced article mentions only evaluation focused on results, and the only results it takes into consideration are the programmed ones, as if evaluation were not also required to investigate effects that were neither programmed nor expected, whether positive or negative. I insist: even if the only evaluation is carried out at the end of an intervention, the other moments (diagnosis, design, implementation records) are taken into account in order to give greater explanatory power to the conclusions (judgements) and a firmer foundation to the evaluative recommendations.

The article also fails to differentiate between types of evaluation according to who conducts it: external, internal, self-assessment, participatory evaluation. Each of these types or modalities has advantages and disadvantages, which is why I always recommend synergetic combinations. This introduces a fifth type of evaluation, one brought to my attention by my late colleague and friend Mario Robirosa, who called it “mixed evaluation”. On many occasions I have suggested combining moments of self-evaluation with others of external evaluation, so that the findings yielded by the former can serve as relevant input for the latter. The same can happen with systematisation.

Regarding self-assessment (a modality I have worked on for a decade), it is often criticised for requiring stakeholders to be judge and defendant at the same time. However, I want to emphasise that its advantages reside precisely in the fact that it is the protagonists of the action themselves who carry out the methodical and systematic reflection on their own performance and results, thus learning from their achievements and failures. This in turn allows them to propose improvements, which will most likely be more viable and effective than improvements requested or imposed by headquarters or external evaluators.

To me, all of this testifies to the relation between self-evaluation and participatory evaluation, in which the protagonists of the different stages of the evaluative process are those who have taken part, in one way or another, in the formulation and implementation. And I dare say self-evaluation and participatory evaluation are the modalities that share the most points of contact with systematisation. It is because of this that I see the claims in Jara’s article as an invitation to debate what evaluation is and what separates it from systematisation. In this spirit, I would like to analyse other polemical points in his article.

Judgements, comparisons, recommendations and turbulence

I agree that evaluation always formulates reasoned evaluative judgements, and that this may well be a difference with systematisation, but I do not agree with the remark that those judgements relate only to the results. Although evaluation is very much concerned with effectiveness (the degree to which the programmed results are achieved), it also deals with non-programmed effects and takes other dimensions into consideration, especially the processes that took place during the intervention, since these allow the results achieved (whether programmed or not) to be understood and explained.

Those judgements are formulated as answers to evaluative questions. In that sense, evaluation follows the Socratic method, which dates back to the fifth century BC and consists of asking the right questions about something and then debating the answers through argumentation, evidence and reasoning. It is through debate (nowadays we would call it intersubjectivity) that we build new knowledge with which to answer the questions posed and thus perform better. This does not mean I am unaware that knowledge is necessary but not sufficient for performing well, contrary to what Socrates believed. Today it is clear that motivation and an axiological framework are also needed.

Oscar Jara makes a valid point when he states that in order to pass reasoned evaluative judgements it is necessary to make comparisons, and that this characterises evaluation. To this I would add that these comparisons are not drawn exclusively against the objectives and goals set out in the design of the project; they are drawn mainly against the diagnosis or baseline and the problematic situations that were detected and prioritised, all the while leaning on a theory of change (theoretical framework).

Comparisons of the outcomes are drawn with different parameters in mind:

    • The initial situation or baseline, yielding a “before-after” comparison without counterfactual analysis.
    • Objectives and goals that were set.
    • Norms and standards in other contexts (national average, protocols).
    • Other similar interventions in the same context or a comparable one.
    • A similar (control) group in which the intervention was not carried out (the sketch after this list contrasts this with the simple “before-after” comparison).
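To make the difference between the first and last of these parameters concrete, here is a minimal sketch with invented figures: the indicator, the schools and all values are hypothetical and are not drawn from the article. It contrasts a simple “before-after” comparison within the intervention group with the same change measured against a control group where the intervention was not carried out.

```python
# Illustrative sketch only: hypothetical values for a single indicator
# (e.g. the share of students completing the school year), used to contrast
# a simple before-after comparison with one that adds a control group.
from statistics import mean

# Hypothetical baseline and ex-post measurements in intervention schools
intervention_before = [0.62, 0.58, 0.65, 0.60]
intervention_after = [0.74, 0.70, 0.76, 0.71]

# Hypothetical measurements in comparable schools without the intervention
control_before = [0.61, 0.59, 0.63, 0.60]
control_after = [0.66, 0.63, 0.67, 0.64]

# Before-after comparison (no counterfactual): the change observed in the
# intervention group, measured only against its own baseline.
before_after_change = mean(intervention_after) - mean(intervention_before)

# Comparison against a control group: the same change, net of what happened
# anyway in similar schools (a simple difference-in-differences).
control_change = mean(control_after) - mean(control_before)
net_change = before_after_change - control_change

print(f"Before-after change: {before_after_change:+.2f}")
print(f"Change net of the control group: {net_change:+.2f}")
```

The point of the contrast is that the second figure attributes to the intervention only the change that exceeds what occurred anyway in comparable schools without it.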

In evaluation focused on processes, the main consideration is the implementation of the project activities: their modalities, the allotted time, the sequences, the inputs used, the outputs and scope obtained, as well as the advancement (institutionalisation and expansion) of the processes that took place. Both the intrinsic and contextual factors that favour management and those that hamper it and call for reprogramming are analysed. Although these are the emphases, enquiry into the effects that arise during the course of the action is not set aside. At this evaluative stage, the prime comparisons are drawn against:

    • The theory of change or formulated strategy.
    • Present theoretical currents about the specific topic.
    • The consideration of ethical and axiological matters both general and specific to the topic.
    • The project design (activities, project leaders, schedule, expected scope and outputs, assigned inputs).
    • The initial situation and its context as reflected in the diagnosis, including the mapping of stakeholders and resources.
    • The changes to the initial situation and its context that occurred during the implementation and that brought about reprogramming.
    • Other similar interventions, or interventions of the same “family” (e.g. national clusters, or policies and programmes implemented in different locations).

I also disagree with the statement that evaluation finishes with the reasoned evaluative judgements. Although they are indeed a crucial element, I have always insisted that every evaluation should finish with recommendations for future action based on those judgements.

Evaluative judgements are similar to conclusions in the field of research, and I reiterate that they are usually either answers to previously posed questions (or confirmations of hypotheses) or unexpected findings encountered along the line of enquiry (serendipity). If evaluation encompassed only judgement, it would resemble control or auditing for accountability. But evaluation moved on from that at least three decades ago.

I believe recommendations are an important intersection of evaluation and planning. They reflect the learning achieved during the evaluative process and they facilitate reprogramming, which in turn helps to improve management. The design of the intervention, its initial formulation, is a road map that guides the actions, and I agree with OJH that it is contrasted with the situation in the field.

But to think that what was programmed at the beginning is static or linear is to be unacquainted with the flexible modalities of strategic and participatory planning. Formulation does not happen once and for all for the entire duration of the implementation; rather, planning must be ongoing, since it is known that throughout the different stages there is a need for operational programming or reprogramming in response to changing contexts. Once again I think of Robirosa, who, for at least thirty years, used the term “turbulence” to characterise the contexts in which interventions are carried out (Robirosa et al., 1990; Robirosa, 2014).

Methodological rigour and triangulation

There is consensus among evaluators on the importance of obtaining quality information (trustworthy, relevant, up to date) to back the evaluative judgements about the action and its outcomes. This includes the testimonies of the stakeholders involved, of those who took part at different moments of the intervention and particularly of the beneficiaries; at the same time, the records kept during implementation and other relevant quantitative and qualitative data should also be taken into consideration.

At this point it is worth highlighting that an important procedure, although not the only one, to achieve both the trustworthiness of the information and the methodological rigour of the evaluation is triangulation. With it, the main aim is to reduce individual subjectivity through intersubjectivity by including diverse subjects or different points of view. Evaluation usually triangulates:

    • Types of evaluation (based on who evaluates and on moments or focus).
    • Quantitative and qualitative approaches.
    • Different techniques.
    • Different sources of information.
    • Different indicators for the same phenomena.
    • Different comparative parameters.
    • Multiple stakeholders.
    • Different disciplines.

Methodological rigour is where the greatest similarity between evaluation and research lies. The use of qualitative techniques, such as group dynamics and interviews, and of appropriate play-based tools is another link connecting evaluation with systematisation.

Participatory evaluation and systematisation of experiences

To highlight the main points of the conceptualisation of participatory evaluation and to show, without the need for further argument, its points of contact with the systematisation of experiences, a very useful resource is the recent publication “Sowing and Harvest. A Handbook for Participatory Evaluation” (Tapella et al., 2021). Below I cite the seven principles on which that modality is based.

    1. The relevant stakeholders of the intervention or situation to be evaluated are actively and consciously incorporated into the evaluative process as rights-holders.
    2. Local knowledge is recognised as valid and necessary knowledge for management.
    3. The institutional representatives work in association with the local stakeholders in the design, implementation and interpretation of the evaluative findings.
    4. The use of didactic techniques and materials facilitates dialogue by generating spaces and procedures for the collection, analysis and use of information.
    5. The participating stakeholders or interested parties take ownership of both the processes and the results of the evaluation.
    6. The evaluative process strengthens the local competences in planning and participatory decision making.
    7. The external agents act as facilitators of the processes of management and evaluation.

Without a doubt, there are great challenges in building multi-stakeholder spaces where genuine participation can take place and true conditions of equality are generated, so as to achieve deeper knowledge oriented towards action that improves or overcomes problematic situations. It is well worth reading this handbook to better understand the connections between participatory evaluation and the systematisation of experiences.

Evaluation and systematisation: an example

At the beginning I stated that I would not go into the procedures of systematisation. However, I want to finish this article with an example from an intervention that I coordinated myself, in which systematisation was a relevant step and input for the evaluation. With this I do not mean to say that systematisation is always a stage of, or input for, evaluation.

I am referring to the IACE method for school self-assessment, developed and implemented from 2007 to 2017 in 3,245 schools of the three compulsory levels of education in seven provinces in the north-east and north-west of Argentina. The development and implementation of this self-assessment method was made possible by the cooperation of UNICEF Argentina, through agreements with the Local Development Support Centre (CEADEL), of which I am a member, and the ministries of those provinces (Nirenberg, 2018 and 2019).

This method promotes the central role of head teachers, teachers, students and their families. By jointly carrying out a series of evaluative exercises, they identify and prioritise the prevailing problems in their respective schools and then formulate an action plan to overcome the prioritised problems, which is implemented and monitored during the following year.

One of the steps in the method consists of systematising the processes carried out in each school during self-assessment and during the implementation of the action plan. To do this, the schools process and analyse the records established for monitoring, which are triangulated with group dynamics involving the stakeholders and with interviews with the head teachers.

This product, apart from serving as a record of the memory and learning of the members of each institution, is consolidated and made available to each jurisdiction. This consolidation is one of the prime inputs for evaluating the evaluative interventions and the implementation of the action plans. Thus, the aim is not only to improve the management of each school but also to have an effect on the education policies of the provinces, which changes the direction of the processes of policy formulation: from the bottom up. The first two links appearing below lead to additional information on this case.

References

