Five key questions before starting an impact evaluation

WFP Evaluation

Felipe Alexander Dunsch and Simone Lombardini provide a five-point checklist to guide anyone considering an impact evaluation.

Impact evaluations can discover cause-and-effect relationships and allow researchers to quantify impacts. An example question would be:

By how much does indicator “X” (e.g. food security) go up if we implement a basic project version “A” compared to a more comprehensive project variant “A+B”?

But impact evaluations are like marathons: they require endurance, as they can take considerable time from start to finish. It is therefore important to be clear about why you are starting one. In a previous blog, we shared examples of how WFP impact evaluations can inform programme operations, provide evidence and technical assistance to governments’ national programmes, and contribute to global evidence debates.

As a next step, here we identify five simple but key questions one needs to consider when deciding whether or not to embark on an impact evaluation.

Illustration: Muhammad Faisal

1. Are there “open” questions that need to be answered?

Once the purpose is clear, the next step is to identify the question that the evaluation will answer.

A key step is to determine whether the question at hand has already been sufficiently answered. A literature review or an evidence gap map (see, for example, 3ie) can help to discern this.

Whether or not there is enough evidence available for a certain question is not easy to decide, and there can be conflicting views. There is an active debate about how well results from one context extrapolate to another. While there might not be an easy answer, the question is worth spending some time on.

Similarly, some questions are so obvious that there is no genuine interest in engaging with the results. For example, you don’t need an impact evaluation to ascertain whether it makes sense to take an extra bottle of water when hiking through the desert, or whether a parachute is useful when jumping out of an airplane.

2. Is the problem the “right” size?

Experimental impact evaluations cannot answer all questions.

They often cannot answer the “largest” questions, such as:

  • What was the impact of Germany’s development aid over the last 20 years on the economic growth of country X?
  • What was the impact of an interest rate hike of 0.5 percentage points on country Y’s economy?

These are interesting and important questions, but one cannot credibly answer them with an (experimental) impact evaluation.

Instead, experimental impact evaluations often focus on “smaller” questions, such as testing which assistance modality (e.g. cash or in-kind support) shows a larger impact on food security, or whether offering women work opportunities outside the household and channelling transfers directly to them can improve intrahousehold agency.

3. Is the project well implemented?

Impact evaluations usually test the mechanism that a project uses to bring about change, and therefore require stable implementation quality.

Typical questions include whether beneficiaries use cash transfers to improve their food security, whether home-grown school feeding programmes increase farmers’ income, or whether anticipatory action can boost resilience for flood-affected households.

Thus, impact evaluations are not meant to test whether or not the project is well implemented.

If a programme is going through a learning phase, it is usually advisable to wait a few implementation cycles, or to start with a pilot, before embarking on a full-fledged impact evaluation.

Read more about two WFP impact evaluation pilots in Burundi and Guatemala.

4. Is it possible to implement project variants (to create a credible “counterfactual”)?

The bread and butter of impact evaluations is to test variants and quantify the differences, shedding light on the causal mechanisms driving change.

To ensure credible results, an impact evaluation needs to be based on a credible counterfactual. This includes testing the impacts of a project vis-à-vis a control group, or testing project variants or tweaks against each other, for example in the form of a “lean” impact evaluation.

An impact evaluation is feasible if the programme team can introduce such variation and monitor its implementation sufficiently. Obtaining ethical clearance before introducing this variation is essential.
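To make the logic of comparing variants concrete, here is a minimal, purely illustrative sketch in Python. The arm labels (“A” and “A+B”), the simulated food security scores, and the sample size are assumptions made up for the example, not WFP data or methodology; the point is simply that random assignment to the variants is what lets a simple difference in means estimate the added impact of “B”.

```python
import random
import statistics

# Illustrative only: simulated food security scores for 200 households.
random.seed(42)
households = list(range(200))

# Randomly assign each household to the basic package "A" or the variant "A+B".
assignment = {hh: random.choice(["A", "A+B"]) for hh in households}

# Placeholder outcomes; in a real evaluation these would come from survey data.
outcomes = {hh: random.gauss(55 if assignment[hh] == "A" else 60, 10) for hh in households}

mean_a = statistics.mean(outcomes[hh] for hh in households if assignment[hh] == "A")
mean_ab = statistics.mean(outcomes[hh] for hh in households if assignment[hh] == "A+B")

# Because assignment was random, the two groups are comparable on average,
# so the difference in means estimates the additional impact of "B" on top of "A".
print(f"Estimated additional impact of B: {mean_ab - mean_a:.1f} points")
```

In practice this comparison would of course involve proper sampling, monitoring of implementation, and ethical clearance, as described above; the sketch only illustrates why a credible counterfactual makes the comparison meaningful.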

5. Can you collect high-quality data?

Lastly, impact evaluations require data.

Evaluators need to ensure that good-quality data can be collected. This requires sufficient funding, as well as equipment, access to the households to be interviewed, and a qualified pool of enumerators.

There is growing momentum to use alternative data sources (such as sensors, administrative data, and GIS data) and to rely less on surveys. Despite this, face-to-face surveys remain the preferred option for accurately capturing detailed, contextual household information.

If you answer a resounding “yes” to all five questions, you may be ready for an impact evaluation!

