Finding out ‘what works and what doesn’t’ with administrative data
This blog from Giulia Tagliaferri and James Farrington follows newly-published work linking archived trials data from the Education Endowment Foundation to administrative data – and reflects on the benefits of this kind of analysis. Giulia is Head of Quantitative Research for the Behavioural Insights Team; James is the Research Advisor leading the analysis.
In a newly-published report, the Behavioural Insights Team (BIT) presents results from new analyses of completed randomised experiments that tested educational interventions. Our findings show that interventions effective in improving attainment may run the risk of increasing exclusions or reducing attendance. Equally, some interventions that did not seem to affect attainment may have had other impacts on attendance or exclusion. Reassuringly, BIT found no adverse impacts on other outcomes such as reconviction in the criminal justice system, or pupils being NEET (Not in Education, Employment or Training).
While for some, the focus of the work will be on the results from individual interventions, the main purpose of this project was to demonstrate that it is possible to link completed experiments to administrative data to gain new insights – and to actually do that work. As such, this wasn’t so much about the specific linkages themselves as about forging the way for similar linkages in the future – specifically, shortening timelines and simplifying processes.
Reanalysing completed trials in this way is extremely cost-effective. For every £1 spent on reanalysing an existing RCT, we are saving nearly £300 by not having to re-run the trial with a new outcome. This means we can leverage the stock of experiments coming out of the What Works Network and the burgeoning experimental evaluations in the UK Government, getting more ‘bang for our buck’ by looking at multiple outcomes. If we took the same approach to policy evaluation more broadly – thinking through and anticipating outcomes in other policy domains – then we would be able to identify more quickly the policies that have the potential for positive spillovers or harmful backfires, rather than focusing on a single outcome.
We cannot build knowledge of policy effectiveness without data, and policy cannot wait years for outcomes and analysis – particularly at a time when public spending is under such pressure and needs greater scrutiny. Using administrative data in this way brings both short- and long-term benefits. In the short term, it may allow trial teams to understand if and how trial participants progress through the systems they interact with (for example, do they remain in education and, if so, how consistent is their attendance?). This means that rather than waiting to look at educational attainment several years post-intervention, we can look at whether attendance is affected in the short term, including negatively, and so we can stop policies early if they appear harmful.
In the long term, we can better understand whether policies designed for one specific outcome generate additional benefits, involve trade-offs between outcomes, or have no impact on other outcomes of interest. Working across policy silos has been an explicit aim of recent initiatives in UK Government such as the Shared Outcomes Fund – but for this to be realised, data must be made available and shared in a timely fashion within Government and outside it.