Research Briefs

CfEE Research Briefs aim to make complex economic research in education transparent and accessible. Their preparation and publication are generally undertaken at clients' behest.

The briefs are produced by re-versioning academic papers into short-format briefings that highlight policy-relevant outcomes and recommendations.

In the context of our 'Impact package', publication is supported by a full media campaign and a Chatham House roundtable event for key policy influencers, with conclusions presented in summary Notes for the Record for the benefit of a confidential list of key stakeholders. 


A digital divide? Randomised evidence on the impact of computer-based assessment in PISA

A CfEE Research Brief by Professor John Jerrim

Since the Programme for International Student Assessment (PISA) was first carried out in 2000, it has increasingly come to dominate education-policy discussions worldwide. Educationalists and policymakers eagerly await the triennial results, with particular interest in whether their country has moved up or slid down the rankings. Yet there are many challenges to measuring trends using large-scale international assessments such as PISA. This is because administration and analysis procedures may change between survey rounds, potentially influencing results.

This paper looks at one of the most important alterations: the move to computer-based assessment in 2015. Between 2000 and 2012, PISA was carried out as a regular paper-based assessment. In 2015, however, pupils in the great majority of countries instead took the test on a computer. Since the change to computer-based assessment could itself affect pupil performance – in ways that differ between countries – it has the potential to reduce the comparability of PISA test scores across countries and over time.

This issue is investigated using data from the OECD field trial, which was carried out in the spring of 2014 in all countries making the switch from paper-based to computer-based assessment. Since pupils taking part in the field trial were randomly assigned to complete the same PISA questions either on a computer or using pen and paper, we are able to draw causal inferences.

The results show that pupils completing the computer-based test performed substantially worse than pupils completing the paper-based test. Once the method used to account for mode effects in PISA 2015 is applied, the difference decreases. The key conclusion is that the adjustment made in PISA 2015 does not overcome all the potential challenges of switching to computer-based tests, but that it represents an improvement over making no adjustment at all. The results show that policymakers should take great care when comparing results, across and within countries, that were obtained through different assessment modes.

You can download the paper here.