by Thomas Aston
Nowadays Most Significant Change (MSC) is used so loosely that it’s worth reflecting on the method’s original purpose. Rick Davies, who designed the method, maintains an online list of MSC publications comprising over 200 documents. Yet, I suspect, the quality of application varies widely.
The method emerged from an experience in Bangladesh where partners couldn’t agree upon indicators, so instead, the evaluation relied on people retelling their stories of significant change they had witnessed as a result of the project and explaining why they thought their story was significant. In a nutshell, programme participants became the storytellers.
MSC is regularly, but wrongly, conflated with promotional stories of success; these are generally the faces of change but rarely the voices of change. This phenomenon reminds me of a few years ago when I was training CARE’s Ghana Strengthening Accountability Mechanisms (GSAM) team on Contribution Tracing. In the conference room in Accra was a huge poster of “Honourable Deborah,” who, it was argued, became a congresswoman due to CARE’s efforts in the Pathways to Women’s Empowerment programme. Someone was narrating her story from a particular perspective amid a confection of project activities, many of which seemed tenuously related to her election success. One of the participants in the training, Nurudeen Salifu, from CARE’s partner Ibis, also knew Deborah and had his own perspective on how Ibis had contributed to her election. What was lacking in the poster, of course, was Deborah’s own testimony. I thus used it as a study aid in how not to talk about contribution for the rest of the training.
My very first steps in the international development industry were actually collecting MSC stories from community members in a USAID-funded food security project implemented by CARE in Bolivia. Like many organisations, CARE had an adapted format for MSC stories. The production of the MSC guide itself was partly funded by CARE UK only two years before I went out gathering stories in 2007. According to my ex-colleague, Jay Goulden, CARE did so because MSC matched the rights-based approach CARE was promoting at the time; putting the power for explaining what changes are significant and why in the hands of programme participants, and ideally involving them in making those choices overall, too.
What are the essentials of MSC?
Most Significant Change is a form of participatory Monitoring, Evaluation and Learning (MEL) that involves generating and analysing personal accounts of change and deciding which of these accounts is the most significant, and why. As Better Evaluation points out, the three main steps are:
- Deciding the types of stories that should be collected;
- Collecting the stories and determining which are the most significant;
- Sharing the stories and discussing values with stakeholders and contributors.
MSC is not just about collecting and reporting stories, but about having processes to systematically select, appraise, and learn from these stories. Full implementation breaks this down into 10 more specific steps, though you may not end up doing all of them.
One particular goal of MSC is to learn about the similarities and differences in what different groups and individuals value. As in Realist Evaluation, rather than assuming change will be identical across a population, we assume that the significance (and meaning) of change will vary widely for different groups.
Before selecting stories, different stakeholders (usually project teams) define broad domains of change. Unlike indicators, these are broad and deliberately “fuzzy,” giving scope for variety.
In terms of eliciting stories from programme participants (or staff), you tend to start by asking open questions such as “Looking back over the last month, what do you think was the most significant change in [particular domain of change]?” After these stories are collected and examined, you’re supposed to bring different stakeholders together to discuss the stories elicited and ask, “From all these significant changes, what do you think was the most significant change of all?” You then verify the selected stories, often by visiting the sites where the described events took place, checking that the accounts are honest and accurate, and gathering more detailed information about especially significant events and any downstream effects since the story was first told.
According to Rick Davies, “the worst omission is not having a structured selection process (Step 5). Selection is key to MSC, if it is not present, [Rick and Dart]… would say ‘this is not MSC.’”
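MSC is a deliberative, human process rather than an algorithm, but the structured selection step Davies insists on can be made concrete with a toy sketch. Everything here is hypothetical and for illustration only: the `Story` fields, the `select_most_significant` helper, and the `panel_choice` function (a stand-in for a facilitated group discussion) are my inventions, not part of the MSC guide.

```python
# Illustrative sketch only: MSC selection is a facilitated human dialogue,
# not code. This just shows one way to organise stories by domain and keep
# an audit trail of *why* each story was selected. All names are invented.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Story:
    storyteller: str    # the programme participant telling the story
    domain: str         # broad, deliberately "fuzzy" domain of change
    text: str           # the account of change, in the teller's own words
    significance: str   # why the teller considers this change significant
    selection_reasons: list = field(default_factory=list)  # panel's reasons

def select_most_significant(stories, panel_choice):
    """Group stories by domain, then record which story the panel deemed
    most significant in each domain and why. `panel_choice` stands in for
    a structured group discussion: it takes a list of stories and returns
    (chosen_story, reason)."""
    by_domain = defaultdict(list)
    for story in stories:
        by_domain[story.domain].append(story)
    selected = {}
    for domain, group in by_domain.items():
        chosen, reason = panel_choice(group)
        chosen.selection_reasons.append(reason)  # transparency: keep the "why"
        selected[domain] = chosen
    return selected
```

The point of the sketch is the audit trail: in MSC, the recorded reasons for selecting a story are as much the data as the story itself, which is what makes the process “structured and transparent” rather than a simple success-story filter.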
Davies underscores that MSC is chiefly about “discovering what different people value, through a structured and transparent dialogue process.” As such, our focus isn’t on causal analysis, whether it is the “most common change or the most average change,” but rather on what is valued as the “most significant change.” This goes to the heart of evaluation — a judgement about value.
In my first experience, I remember the selection process. Regional coordinators in the food security programme were also technical leads, and my colleague Ximena Echeverría and I suspect that the civil engineering background of two-thirds of them helps explain why I spent so much time talking to communities about water and sanitation systems: it allowed them to make their technical areas look good. Little was prescribed from above; the centre of gravity was in the middle as much as with the storytellers, as per the figure from the guide below:

Common complaints I’ve heard about MSC over time are that it’s too slow and too difficult. Going through all ten steps is time-consuming and a big effort. As a result, I’ve seen a number of teams, clients, and even illustrious evaluation organisations cut corners. Stories often lacked adequate context, weren’t told in sufficient detail, and weren’t coded beyond broad domains. The data were therefore often difficult to verify, interpret, or even link to the intervention. Unsurprisingly, the stories weren’t necessarily used, and were thus ultimately worthless and an irresponsible waste of time.
From my own experience, if you aren’t using MSC to reflect on trends and variation across geography or for different populations, you’re not making the most of the method. The process of filtering and sorting stories is one of the most valuable parts for learning. Simply flying in once and collecting a few stories through interviews rather misses the point.
How light is too light?
Given the aforementioned concerns, I’ve often agonised over how light-touch you can be with MSC, or how you can take parts of it to support other methods (nowadays, Mariana Apgar and I refer to this as bricolage). On even a relatively superficial level, MSC can help organisations to highlight what they consider to be the most significant changes, and to compare the types of changes across general domains. It can allow teams with modest progress over a short period to celebrate success, but also to reflect on how changes differ across partners and places.
For a learning review of the Affiliated Network for Social Accountability in the Arab World (ANSA-Arab World), I got 11 CSOs across 7 countries and 7 sectors to harvest what they considered to be the most significant changes from facilitating a community scorecard process. In a workshop bringing teams together we were able to define domains of change together and compare and contrast the strategies each CSO employed and the actors they engaged to achieve these significant changes. This was a very fruitful learning moment, but as it was a one-time snapshot, it lacked the same phased and structured process of deliberation and also the stories of programme participants themselves, so I resisted calling it MSC.
Another recent application in the Strengthening Advocacy and Civic Engagement (SACE) programme in Nigeria also didn’t use the method in full, but instead used MSC to collect and winnow stories of policy outcomes for analysis. Rather than looking at community-level effects, SACE’s most significant change protocol was used to capture changes in the relational capacities of cluster (partner) organisations and helped to define eight change domains.
Stories of change were captured at later cluster gatherings, where colleagues and peers provided feedback to strengthen stories. This cluster sharing process continued throughout the duration of the project (i.e., what my ANSA example was missing). The MSC process, in turn, provided a foundation for the team to conduct Outcome Harvesting. As MSC doesn’t prescribe indicators, it seems a good fit with an inductive method like Outcome Harvesting.
An adapted form of MSC stories was also used by World Vision in the Citizen Voice and Action for Government Accountability and Improved Services: Maternal, Newborn, Infant and Child Health Services project in Indonesia as the basis to develop and refine mechanisms in a Realist Evaluation. So, MSC can provide a qualitative platform for various other methods.
The Root Change and World Vision examples seem like good uses of MSC. However, perhaps the broader questions here are when our use of a method is so light touch that we forget its original purpose, and where that method (or parts of that method) can add the most value. In the case of MSC, it seems that the essence is about letting project participants define what they think is significant, listening carefully to why they think that is, and explaining how they think change happened from their perspective.
Thus, there is a question of where the centre of gravity should be and the politics of who defines what’s significant and what’s valued. After all, whose story of change is it?
