Localising Evaluation: Rethinking What We Measure In Humanitarian Impact
By Lilly Sannikova, Junior Expert, INCAS
Monitoring and evaluation (M&E) has become a cornerstone of humanitarian and development practice. These processes serve not only as mechanisms for accountability but also as frameworks that shape how change is defined, delivered, and understood. What began as a technical tool for tracking results has evolved into a powerful influence on programme design, funding decisions, and sector-wide priorities.
Yet despite this central role, monitoring and evaluation continues to face challenges in connecting to the realities it is meant to reflect.
Some critiques of power dynamics within the international humanitarian and development sectors go further than the concern I have just described. Those voices argue that monitoring and evaluation models were designed by, and remain dominated by, the methods, theories and value systems of the Global North, so that dominant power structures are embedded within even well-intentioned but critically unexamined frameworks.
University of Botswana Professor Bagele Chilisa, who has written extensively on indigenous knowledge and research methodologies, has argued that “There is, for instance, emphasis on translating evaluation instruments to local languages and indigenising techniques of gathering data without addressing fundamental questions on worldviews that can inform evaluation theory and practice.”
Calls to localise M&E therefore form part of a broader reimagining of how we approach aid, development and humanitarian work, spanning the conception, design, rollout and measurement of interventions. For M&E specifically, the aim is to move away from the long-held assumption that the policies and processes typically used offer a universal, apolitical way to measure effectiveness.
My own lived experience has shown me how even carefully designed monitoring and evaluation systems can fall short in what they capture. In many conflict-affected contexts where I have worked, I have seen how externally driven frameworks can overlook the nuances of how communities experience progress. While indicators may document services delivered or targets met, they frequently miss the emotional, social, and relational dimensions of recovery—such as safety regained, dignity restored, or the return of hope. These are not peripheral outcomes; they are essential to resilience. And yet, they often remain invisible within the very systems designed to document impact.
In the countries where I have conducted evaluations and assessments, including Syria, Iraq, Libya, and Ukraine, I have often found that many of the most meaningful insights emerged outside what is formally measured.
In Iraq, participants in livelihood programmes spoke of the recognition and sense of purpose they regained. These outcomes, while deeply impactful, rarely appear in standard indicators. In Libya, informal safety practices—like neighbours sharing knowledge about mine-contaminated areas—proved vital in shaping behaviour and protecting lives, even though they were undocumented in formal reports. In Northeast Syria, while I was leading an inter-agency assessment of the education sector, it became clear that the barriers to learning extended far beyond infrastructure. Parents and teachers spoke about trauma, displacement, and a deep sense of uncertainty—all of which shaped decisions around school engagement and educational continuity.
I would argue that these experiences did not necessarily reflect flaws in the evaluations themselves, but rather revealed how difficult it is for conventional frameworks to capture the complexity and texture of local realities. Unless we build in the space to recognise these layers, we risk overlooking the very dynamics that determine whether a programme is truly relevant or sustainable.
Emerging approaches aim to localise the focus of M&E, capture nuance and contextualise results. Among other things, they might include: local leadership of evaluations; drawing on indigenous knowledge systems where possible; bringing multiple viewpoints into data analysis and openly discussing the role of possible bias within an evaluation; communicating impact through a range of media and bottom-up approaches; gathering qualitative and quantitative data through mixed-methods designs; and considering a range of evaluation outcomes, not only those that demonstrate growth.
The transformative impact of this kind of approach became especially clear to me during a recent evaluation I conducted with INCAS. As part of the process, we interviewed mediation teams operating in politically sensitive environments. They spoke candidly about the importance of trust, timing, and contextual awareness—factors that often mattered far more than predetermined milestones. Progress was frequently built quietly—through relationships, discretion, and cultural fluency—rather than through formal deliverables. This underlined the importance of allowing sufficient time in M&E processes for that kind of progress to become visible.
What made this evaluation stand out was that it created space for those complexities to be seen and understood. Rather than flattening nuance, it allowed meaningful insights to emerge from it. It reaffirmed the idea that when evaluations are grounded in local realities and informed by those closest to the work, they produce knowledge that is both more meaningful and more pertinent to the change that humanitarian and development work seeks to achieve.
The humanitarian sector is shifting towards more locally led and adaptive approaches, and recent developments—such as the effective dismantling of USAID—could well accelerate this process. M&E must evolve accordingly.
Localising evaluation is not about lowering standards. It is about questioning whether the standards we use reflect the realities we aim to understand. This requires frameworks that are context-sensitive, inclusive, and designed to capture the complexity of how change truly unfolds. When M&E becomes more than a reporting requirement—when it supports learning, reflection, and local ownership—it enhances both the credibility of the evidence and the impact of the work.
At INCAS, we approach evaluations and research with a focus on methodological rigour, contextual intelligence, and adaptive design. In politically sensitive and complex environments, we aim to develop frameworks that reflect local perspectives and generate insights that are both relevant and actionable. Our first step is to listen and try to understand. Whether working on mediation, humanitarian programming, or systems-level analysis, we prioritise community knowledge, support collaborative learning, and ensure evidence informs real-world decisions—including when they are hard to make. This blend of technical depth, humility and contextual understanding is central to making evaluation a tool for transformation—not just accountability.