Evaluation turns a page: A Q&A with Andrea Cook
In February, Director of Evaluation Andrea Cook closed a remarkable chapter at WFP’s Office of Evaluation (OEV). When she joined WFP on 6 February 2017, the office was a team of 21 employees. Today the function is almost 70 strong, with a talented and diverse team at headquarters and across the six regional bureaus. She leaves the office on firm footing, with a new Corporate Evaluation Strategy that puts into action the vision of the updated Evaluation Policy and Charter. In a recent interview, she reflected on some of the highlights, milestones and opportunities of her tenure at OEV.
How do the new evaluation policy and strategy depart from those of 2016? What are the novelties?
The new strategy carries forward what was set out in the 2016 evaluation policy. It defines three types of evaluation: centralized, decentralized and impact, with impact evaluation formally recognized in WFP as a third category for the first time. It also raises our ambition for evidence use and national evaluation capacity development (NECD).
The new policy and strategy allow us to build on previous learning, setting out realistic but ambitious expectations within a longer timeframe, running to 2030.
Why until 2030? How do you see the evaluation trajectory, and is there scope to adjust should circumstances change?
It runs to 2030 to align with Agenda 2030, allowing us to span several WFP strategic plan cycles. Given that we had an independent peer review in 2020, we are confident that setting a longer-term time horizon makes sense. In 2025/6 we are planning another OECD-UNEG peer review and view this as an appropriate point to adjust to changes in circumstances if the need arises.
We foresee continued development of the function. In 2023, a new thematic window on nutrition will open under the impact evaluation work plan. We also see continued growth in decentralized evaluations, specifically an increased interest in regional bureaus commissioning evaluations on thematic or multi-country issues. And then we anticipate a greater focus on use, with more demand for evaluation summaries and commissioning of evaluation syntheses.
This will enable us to make better use of existing evidence, transferring it across geographies and contexts, and to embed evidence more firmly into policy and programme cycles to support users.
How has the external and internal environment changed since 2016 and how do the policy and strategy respond?
The 2016 policy mandated some key changes, such as the establishment of the decentralized function.
The evolution of decentralized and impact evaluations is important because both are demand-led functions. It is about putting decision-making and evidence from evaluation particularly in the hands of country offices, and it has led to a shift in how people understand evaluation and its value in WFP. This in turn contributed to realizing the previous policy’s vision that evaluation becomes “everyone’s business”.
We came full circle in that we are looking not only at the commissioning and conduct of evaluations, but also at the use of evaluation to drive forward improvements in WFP’s performance. This was accompanied by increased and diversified funding and human resource capacity for evaluation, leading to a more diverse and vibrant evaluation function.
If we look externally, Agenda 2030 tilted the evaluation function towards the country level, particularly for decentralized, impact and country strategic plan evaluations, and it also brought a shift towards country-led evaluation.
I’ve been delighted to see that the investment in decentralized evaluation has led to more evaluations being done jointly with governments. This is the paradigm shift that ultimately comes through in Agenda 2030.
Another major external shift was Covid-19. It changed many things for evaluation: even when conducting evaluations was very challenging, demand did not diminish, which meant we had to find new ways of doing evaluations.
Covid-19 itself has put a premium on evidence-informed decision-making. This may have been uneven from country to country, but it did bring to the fore the use of evidence to drive public decisions, with evaluation being part of that.
How do the policy and strategy tie into the bigger picture of improving WFP performance and supporting countries to achieve Zero Hunger?
It’s captured in the inclusion of evidence as an organizational enabler in the new WFP Strategic Plan 2022–2025, and in embedding evaluation as one among other sources of evidence informing improvements in WFP’s performance and the relevance of our work.
But also, when you look at the Zero Hunger reviews, the Common Country Assessments (CCAs) or the UN cooperation frameworks, which aim to put evidence assessment in the hands of national stakeholders, it’s about evaluation making WFP’s work more transparent and accountable at the country level.
As countries themselves work to achieve Zero Hunger, evaluation gives them a bigger toolbox of evidence and greater resources, and, by building skills and capacity, the potential to do more evaluations themselves, making the trajectory to Zero Hunger more evidence-informed.
How does the strategy strike a balance between delivering high-quality evaluations and not overwhelming the organization during a period of crisis?
When we designed the new policy, we looked at the coverage norms for all types of evaluations very carefully and calibrated that around the level of capacity and interest in the organization. We consulted extensively with the Board, and across the organization, to make sure that the coverage norms were revalidated.
Looking back on your six years as Director of Evaluation, what do you consider major highlights for the Office of Evaluation?
The Covid-19 evaluation was a landmark for WFP because it used a developmental evaluation approach to support organizational learning. Most organizations have done retrospective evaluations of their Covid-19 response; we evaluated our response while WFP was still responding to the pandemic, which allowed us to use the evaluation to support organizational adaptation at a point of real crisis.
Then there is impact evaluation, which is playing an increasingly important role in the organization. I’m proud of our partnership with the World Bank, and of the gradual appreciation and recognition of the impact evaluation work, both internally and externally.
There was also the success in building a well-respected decentralized evaluation function, producing credible, useful and timely evidence that’s also of high quality. We set the bar high, and we are meeting it.
Another highlight is the overall shift in the culture. Evaluation is now everybody’s business. People understand what it is, how to use it, and how to get the best out of it. People also embrace evaluation learning.
There was significant growth of the function. We have a very strong talent pool, and since 2017 we have worked hard to improve both its gender and geographic diversity. I’m incredibly proud of the talented and increasingly diverse group we have working in evaluation.
What do you perceive as some of the key challenges, and how did these challenges result in learning?
The big challenge was the Covid-19 pandemic, because it brought the value of evaluation into question. At a time when humanitarian needs and demands on the organization were very high, important questions were raised: why carry on investing in evaluation? What is it going to offer? And obviously, we had to completely change how we were doing evaluations, using almost fully remote methods.
This required us to think very deeply, consult widely and adjust our ways of doing evaluations in order to maintain coverage levels that ensure continued learning and accountability across the organization. It pushed us to get better value out of data, and to go further in bringing national evaluators into evaluation teams. It also stimulated innovation and more creative methods and approaches.
What do you consider the major opportunities that evaluation presents for WFP and partners?
Evaluation provides real-world evidence on what is working, what is not, and how. As we gear up to 2030, the big opportunity lies in taking stock of the progress made since 2015, drawing increasingly on evaluative evidence, whether at the country level, as captured in the Voluntary National Reviews, or at the system level.
As national governments develop their evaluation functions, there’s an opportunity for WFP to support their work and do more evaluation in partnership. This is very important in the context of WFP moving from a deliverer to an enabler. It opens avenues for learning and reflection in partnership that can strengthen development and humanitarian responses.
We’re also interested in how artificial intelligence is increasingly becoming a tool to enable us to tailor and extract evaluation evidence and learning in real-time to meet diverse needs. This is an exciting initiative that we will share more about soon.
We have a balanced function with strong experience in the three main types of evaluation. We have excellent quality systems and strong resources to contribute.
As one of the biggest humanitarian evaluation functions, working both in crises and on evaluations of crises, we have a unique opportunity to evaluate, and to bring accountability and learning, in challenging contexts.