Towards an impact evaluation agenda in the humanitarian space — three takeaways

By Kristen McCollum

A conversation with ALNAP, the IRC, and the World Bank’s DIME division, held during the Humanitarian Networks and Partnerships Weeks (HNPW) in May 2021, highlighted three considerations for an impact evaluation agenda in humanitarian contexts.

Women at work at a Food for Assets project in Niger. WFP/Mariama Ali

WFP’s Office of Evaluation (OEV) is ramping up efforts to use impact evaluations in humanitarian response. As the world’s largest humanitarian agency, WFP is well-placed to be a leader in this area.

The recently launched Impact Evaluation Strategy emphasizes investment in new capacities and partnerships dedicated to generating rigorous evidence in crisis- and conflict-affected settings. A USAID-funded workstream on humanitarian impact evaluation is helping OEV’s impact team to define priority questions around optimization and to explore how experimental evidence can improve the work that WFP does on the ground.

Internal consultations at WFP show a strong appetite for impact evaluation evidence, particularly around configurations of cash-based transfers, the precision and speed of targeting communities and households in emergencies, and the testing of novel programmatic approaches such as forecast-based financing. Of course, we also have a lot to learn from other organizations engaged in this area. UNICEF-Innocenti, for example, recently published a report on the challenges and opportunities of impact evaluation in fragile settings. Another great example is Innovations for Poverty Action (IPA), whose Humanitarian and Forced Displacement Initiative supports researchers interested in improving the lives of people in protracted crises.

Recently, we were joined by colleagues from the Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP), the International Rescue Committee (IRC), and the Development Impact Evaluation (DIME) unit of the World Bank, to hear their takes on priority questions for impact evaluation in humanitarian settings. Here are our top three takeaways from the event:

1. Large evidence gaps remain

A systematic review conducted by the World Bank’s DIME unit (by researchers Dahyeon Jeong and Iva Trako, forthcoming) found only 20 studies of different types of social protection transfers that met the criteria of quasi-experimental or experimental evaluation in conflict or crisis settings. Beyond the general scarcity of evidence, gender outcomes, particularly programme impacts on gender-based violence, were among the least-explored outcomes in the humanitarian impact evaluation literature. Half of the studies included unconditional cash transfers as a treatment arm; far fewer studied other popular interventions in the humanitarian sector, such as cash/food assistance for assets (two studies) or school feeding (only one study).

Beyond the direct programmatic impacts, evidence is also lacking for decisions across the timeline of an emergency response: from predicting shocks and improving targeting, to the size, frequency, and duration of aid and the transition to development programming.

2. Impact evaluation designs must adapt to urgency and uncertainty

Traditional impact evaluation approaches can be perceived as too risky or too costly for emergency contexts. Alice Obrecht, Head of Research and Impact at ALNAP, described how decision-making in emergencies carries an added sense of urgency and uncertainty.

To conduct impact evaluations in these conditions, it is crucial to have ready-to-deploy, flexible designs with outcomes that can be tested quickly and results that are easily actionable. At WFP, we are calling this approach “Lean IE” and are creating an impact evaluation design library, with evaluation questions inspired by our conversations with colleagues.

As Alice pointed out, there are still some misperceptions among practitioners about what research methods are appropriate for answering causal questions, since impact evaluation is relatively new to the humanitarian space. Strategies to build impact evaluation evidence must be linked to a wider effort to increase the familiarity with different types of evidence among humanitarian decision-makers.

3. Cost-effectiveness matters as much as effectiveness

In humanitarian emergencies it is important to find out not only what works best but also which interventions are most cost-effective in reaching the most people in need.

Jeannie Annan, Chief Research and Innovation Officer at IRC’s Airbel Lab, offered insight into how impact evaluation has been used successfully in the past. One example comes from an impact evaluation in South Sudan, which tested a simplified protocol for treating acute malnutrition against status-quo programming. The results showed no significant difference in recovery rates between the new approach and treatment as usual. That was great news: the simplified protocol was just as effective, saved the IRC money, and allowed it to expand treatment for acute malnutrition to more people.

This type of ‘A/B’ testing (experimentally comparing two different interventions, without necessarily using a control group) can be applied across the entire timeline of an emergency response, and flexible IE methods like this can also fill evidence gaps around cost-effectiveness. For example, at OEV we are exploring the Lean IE concept to measure the impact of different household targeting strategies in El Salvador, aiming to find out whether community self-targeting or leader selection of vulnerable households yields more precise results (benchmarked against a full survey and alternative data sources such as satellite and drone imagery). Country Offices in Bangladesh and Nepal are considering Lean IEs to test whether providing assistance some days before a flood is expected (versus assisting after the shock) helps to boost resilience and, ultimately, food security.
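To make the logic of a two-arm comparison concrete, here is a minimal, hypothetical sketch in Python. This is not WFP’s or IRC’s actual analysis; the recovery rates, sample sizes, and function name are made up for illustration. It compares a binary outcome (e.g. recovery) between two intervention arms using a standard two-proportion z-test, the kind of quick, easily interpretable test a Lean IE might rely on:

```python
import math
import random

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for the difference between two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

random.seed(42)
# Simulated outcomes: arm A (e.g. simplified protocol) vs arm B (status quo),
# with made-up underlying recovery rates of 80% and 79%
arm_a = [1 if random.random() < 0.80 else 0 for _ in range(500)]
arm_b = [1 if random.random() < 0.79 else 0 for _ in range(500)]

z, p = two_proportion_ztest(sum(arm_a), len(arm_a), sum(arm_b), len(arm_b))
print(f"z = {z:.2f}, p = {p:.3f}")
```

As the South Sudan example shows, a result of “no significant difference” can itself be good news when the cheaper or simpler arm performs just as well, though a formal non-inferiority design would also pre-specify how large a difference would be tolerable.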

What’s next?

With the support of USAID, we are connecting with communities of practice to learn from others in the space and support evidence generation and use in humanitarian operations. If you are interested in developing a humanitarian impact evaluation agenda, let us know at wfp.evaluation@wfp.org so we can include you in the conversation.

Delivering evidence critical to saving lives & changing lives. The Independent Office of Evaluation of the UN World Food Programme works for #ZeroHunger