How to ensure that all voices are heard: predefined criteria vs. stakeholder questions in evaluation

by Laura Porrini

For a while now, I have been pondering some key aspects that, in my experience, shape Monitoring, Evaluation and Learning (MEL) practice in the Global South. In that context, I have decided to set out some ideas that could inform practice criteria. One issue I have focused on is the tension between the increasingly felt need to ensure that all voices are heard in the evaluation process and the use of predefined evaluation criteria, both in terms of their content and their application.

Today, the importance of participation in the evaluation process is no longer under debate. It is now rare to find a Terms of Reference (ToR) document that does not call for a participatory process. Yet while participation has become a standard requirement in ToRs, the Development Assistance Committee (DAC) evaluation criteria (relevance, effectiveness, efficiency, sustainability and impact) also remain present in many evaluation processes, and the coexistence of these two requirements can lead to contradictions that need to be discussed.

Caroline Heider, former Director General and Vice President of the Independent Evaluation Group at the World Bank, has analysed how the DAC evaluation criteria came about in the early 1990s. She states that ‘the underlying assumption at the time was that “aid” should help “recipient countries” achieve positive development results. To do so, aid needed to be relevant, effective, efficient, impactful and sustainable’[1]. Today, the DAC mandate focuses on promoting development with the aim of achieving ‘sustained, inclusive and sustainable economic growth, poverty eradication, improvement of living standards in developing countries, and to a future in which no country will depend on aid’[2]. The world has radically changed, and so should evaluation criteria, in order ‘to ensure evaluation incentivizes development practices (…) for partner countries’[3].

With these issues in mind, and irrespective of which evaluation approach is chosen, using unilaterally predefined criteria reduces the possibility of incorporating the concerns and questions of the different stakeholders involved in the intervention, which in turn limits the quality of the responses. In this context, the best-case scenario is participation within clearly defined boundaries: boundaries set according to the interests of those who draw up the questions.

Heider explains that the five DAC criteria have underpinned and legitimised most evaluation systems in the field of international development. However, readers are often left with ‘unanswered questions’ when evaluations stick rigidly to these criteria[4]. This is even more significant when we consider how the world has changed over the last three decades: today, in any evaluation it would be odd, to say the least, not to consider issues such as coordination and empowerment. These are just two examples of the many factors left out of the five traditional criteria.

The challenge for us today goes beyond reconsidering which criteria should be used, although this is a good first step. We also need to rethink how criteria are selected, and to think through how evaluation processes can be designed to respond to specific stakeholder questions rather than (just) to predetermined criteria.

There are undoubtedly advantages to using predefined criteria, namely that they can make the process easier. A consultation process conducted in 2018-2019 on the need to adapt the DAC criteria revealed that most survey responses ‘highlighted the value of the criteria in bringing standardisation and consistency to the evaluation profession and evaluative practice. It was also clear that there was a need for continued simplicity, by retaining a limited set of evaluation criteria and keeping the definitions coherent’ (DAC, 2019).

However, opening up evaluation design to real participation (understood as putting forward the questions rather than just answering them) depends on more than improving the original criteria definitions, adding a new criterion (coherence, as proposed in the consultation process) or defining guiding principles. It requires that the questions precede the criteria. If the evaluation questions that will guide the process are drafted collectively before any evaluation criterion is specified, the process is more likely to result in enhanced participation and a contextualised response. Accordingly, a collectively constructed, question-based evaluation matrix could help open up the evaluation process to more stakeholders, promoting real participation and laying the foundations for transformative evaluation.

The logic underpinning the question-based evaluation matrix is that designs sensitive to each specific case and context become possible when the questions are defined collectively first and the criteria are established afterwards. Prioritising questions over criteria or methods can significantly improve how well evaluation procedures adapt to each context and can discourage rigid adherence to predefined requirements. Once this matrix is designed, the two principles defined during the DAC criteria consultation process (2018-2019) can be applied to the context in a simpler, more useful and more sensitive manner.

Principle One. The criteria should be applied thoughtfully to support high quality, useful evaluation. They should be contextualised – understood in the context of the individual evaluation, the intervention being evaluated, and the stakeholders involved. The evaluation questions (what you are trying to find out), and what you intend to do with the answers, should inform how the criteria are specifically interpreted and analysed.

Principle Two. Use of the criteria depends on the purpose of the evaluation. The criteria should not be applied mechanistically. Instead, they should be covered according to the needs of the relevant stakeholders and the context of the evaluation. More or less time and resources may be devoted to the evaluative analysis for each criterion depending on the evaluation purpose. Data availability, resource constraints, timing, and methodological considerations may also influence how (and whether) a particular criterion is covered.

[1] Caroline Heider (2018). Rethinking Evaluation – Tracing the Origins of the DAC Evaluation Criteria. Retrieved from:

[2] The DAC mandate. Retrieved from:

[3] Caroline Heider (2018). Rethinking Evaluation – Tracing the Origins of the DAC Evaluation Criteria. Retrieved from:

[4] Caroline Heider (2017). Rethinking Evaluation – Have we had enough of R/E/E/I/S? Retrieved from:
