The power (and challenges) of evaluation to inform policy: Insights from the UK Evaluation Society Conference 2025

Categories: Research using linked data, Blogs, Events, ADR UK Partnership

5 June 2025 Written by Holly Greenland, Head of Communications and Engagement at ADR UK

The presentations, posters and stands at this year’s UK Evaluation Society Conference offered a mix of inspiring case studies, new thinking and challenging concepts to get the evaluation juices going.  

And, lucky for me, the theme was ‘Data in Focus’ - so there were lots of interesting data collection and analysis insights too.  

Here are just a few of my personal takeaways from two days spent up in sunny Glasgow at the Glasgow Caledonian University campus. 

The strength of data from longitudinal studies  

I was pleased to take my stand next to the team from Understanding Society. This longitudinal study aims to capture life in the UK in the 21st century. It is the largest longitudinal household panel study of its kind, with the whole household contributing to the regular data collection. 

Over our coffees, we discussed how survey-based data and admin data can complement one another for evaluation purposes, with the former offering specific data on nuanced or topical questions, while the latter offers objective, quantitative data in high numbers (and can be linked!).  

Our chat got me thinking: the dataset that’s perfect for one study might not be such a good fit for another, and vice versa. That’s why cross-promotion across data owners and facilitators is critical to matching researchers with the right data.  

But there will also be times when combining the insights from two or more data programmes will provide outputs that are more impactful than the sum of their parts. Researchers can do some of this connecting, but perhaps data programmes can play a more active role here too? 

Evaluation as a violent act?  

The keynote that I won’t forget in a hurry was given by Dr Luke Roberts. The talk covered topics ranging from the responsibility of evaluators to recognise the risk of our data being misused (even to a violent end), to whether we could use ‘play’ to create new evaluation methods.  

The biggest gasps came when Dr Roberts raised the question of whether we should abandon the traditional concept of the ‘Theory of Change’ altogether. 

Perhaps, he suggested, the theory of change as a linear process reflected the mechanical era of the Victorians and is now out of date. Today, we need to think in a far more nuanced way: for example, viewing change as sitting within an ecosystem, much as we might view a rainforest.  

Whatever you think of the concept, the very act of questioning accepted methodology got everyone talking and thinking critically, on the day and beyond; always a positive outcome for a conference event.  

Finding the very hardest to reach 

One of the global data talks that got me thinking was ‘Ensuring Inclusion in Data: Strategies for Enumerating Hard-to-Count Populations in Somalia and Somaliland’ from Josh Shelley, Tetra Tech International Development.  

Here, the team used both satellite data and (perhaps even more importantly) local on-the-ground expertise to find and count nomadic groups in rural Somalia and Somaliland.  

This really put the concept of hard-to-reach groups in the UK into perspective, but it also reminded me once again of the vital importance of engaging with local knowledge and people to find and listen to specific communities we may otherwise miss in data collection. In this era of emerging AI prominence, reminders like this - of the important role of humans in understanding, interpreting and connecting our complex society - are vital.  

Main takeaway: Explore evaluation from multiple directions for nuanced findings 

I could have shared many more talks, but overall I took one main thought away from this year’s event…  

Evaluation is a complex process. Exploring at the outset all the ways we can gather data from different viewpoints, sources, and people is critical to building a solid methodology.  

Administrative data provides a rich resource for evaluation and research, with objective information from large groups, often collected over time, and linkable to reveal otherwise hidden insights. On its own it is powerful for understanding policy impact and identifying future change, but combined with other methods – longitudinal surveys, qualitative interviews, targeted outreach to otherwise missing groups – it can provide an even richer picture of society, helping us make better-informed decisions for change.  

One more thought… with two days of jam-packed talks, keeping our energy up was so important. The coffee, lunches, cakes and cups of tea provided by the team at the venue were incredible! Thank you to the conference team and venue! 

Case studies of evaluations using admin data 

On our stand, it was great to speak to participants from across evaluation teams about how admin data can play a part in evaluation.

Here are just a few of the studies we discussed: 

ADR Wales: Long term outcomes of people treated for substance misuse in Wales

This project looked at the effectiveness of services for those with drug and/or alcohol issues to help inform best practice in meeting the needs of those affected.  

ADR Scotland: Evaluating the impact of alcohol minimum unit pricing on deaths and hospitalisations

Researchers used administrative data to evaluate a policy designed to reduce deaths attributable to alcohol, providing important insights into its effectiveness for policy makers.  

ADR UK and NatCen collaboration: Research insights from administrative data: Care experienced children and young people 

This project explored children and young people’s experiences of social care and differences across the four UK nations, showcasing valuable long-term data analysis across a range of themes.  
