
Kenya’s reading revolution

Commentary by Barbara Bruns on

‘Identifying the essential ingredients to literacy and numeracy improvement: teacher professional development and coaching, student textbooks, and structured teachers’ guides’

by Benjamin Piper, Stephanie Simmons Zuilkowski, Margaret Dubeck, Evelyn Jepkemei, and Simon J. King


and

‘Scaling up successfully: lessons from Kenya’s Tusome national literacy program’

by Benjamin Piper, Joseph Destefano, Esther M. Kinyanjui, and Salome Ong’ele



Three years ago, the world woke up to the fact that “schooling ain’t learning” and set education SDGs (Sustainable Development Goals) that for the first time focused on what children actually learn in school. Since then, the news has been all bad. In Indonesia, the average math skills of adolescents have not improved in 14 years (as Pritchett notes earlier in this volume). In rural India, after three years in school, 68% of students cannot read a word of English (ASER 2017). In Chad and Niger, over 85% of children finishing primary school cannot read and understand a text (PASEC 2015). Across Africa, the share of grade 2 students who cannot read a single word is 64% in Uganda (English), 56% in Zambia (Chitonga), and 90% in Malawi (Chichewa) (USAID Early Grade Reading Barometer). UNESCO’s official estimate is that 56% of all primary-school-age children worldwide are not achieving basic literacy, and in two-thirds of cases this is despite actually completing primary school.

In this depressing context, there is important news coming out of Kenya. Two new publications by Ben Piper and co-authors document big improvements in early grade reading over the past few years and their research findings on how Kenya has done it. The first paper provides the most systematic evidence to date on how to improve teachers’ ability to teach reading in the first two grades of primary school. The second paper provides evidence on how to take pilot programmes to national scale. This is a question of huge policy relevance but near-zero research base. Together, the two papers are an object lesson in how to do research with real-world impact.

Kenya’s education ministry deserves kudos for both research advances. Since 2012, the government has engaged in an all-out effort to transform the way basic reading and numeracy skills are taught. Importantly, part of the strategy is serious evaluation of these efforts. The first stage was several randomised evaluations of a pilot Primary Mathematics and Reading (PRIMR) programme. The first of the papers I reference here, by Benjamin Piper, Stephanie Simmons Zuilkowski, Margaret Dubeck, Evelyn Jepkemei, and Simon J. King, for World Development journal (2018), reports on one of these – an evaluation of PRIMR in a sample of 847 schools, a large enough sample to compare three different programme designs with a control group of schools.

Next, the ministry used evaluation results to design a new national literacy programme, “Tusome” (Swahili for “Let’s Read”). Finally, and perhaps most unusually, in the national scale-up of Tusome the government opened all levels of the education system to classroom observation, data collection and external analysis of the quality of the implementation process.

So, what works to help teachers impart literacy?

The past 15 years have seen a big increase in experimental evidence on improving teachers’ effectiveness. Evidence supports matching the curriculum to the pace of students’ mastery (“teaching at the right level”), remedial education (sometimes using computers) for students falling behind, providing a book to every child, some forms of teacher training, individualised coaching of teachers, and, somewhat controversially, scripted lesson plans which guide teachers through each day of the curriculum.

Several meta-studies have synthesised this research, but limitations in the underlying studies make it hard to draw clear conclusions. Few studies test alternative programme designs, so there is little systematic evidence about different programme elements. Studies typically evaluate one intervention design in a single country, so it is rarely clear whether results would hold in other settings. Finally, many evaluations fail to analyse costs, making it impossible to generate the evidence policymakers need most – not just the effects of different interventions, but their comparative cost-effectiveness.

The PRIMR programme impact evaluation was designed to give the Kenyan ministry clear answers to these questions:

  1. Which programme elements are most essential?
  2. Are there important complementarities among elements?
  3. And, if so, which combination is most cost-effective?

PRIMR was designed in a context where: less than 5% of first- and second-grade children met the government literacy benchmarks; 80% of teachers reported no professional development support during the prior year; and a network of curriculum support officers existed, but lacked the time and transport to visit classrooms.

The government devised three different strategies for improving reading instruction and tested them in a systematic way. Each variant systematically added intensity (and costs) to the prior one, so a randomised trial would reveal whether the additional ingredients (and costs) were really worth it.

Strategy 1: Teachers each received ten days per year of professional development. Curriculum support officers received 15 days of training on tablet-based teacher observation, feedback, and coaching tools. Training and support were focused on the existing curriculum and available reading materials (where less than half of students typically had textbooks).

Strategy 2: Teachers received the same training and coaching time, but were additionally given a new set of textbooks. These new books were based on the latest research on how to teach literacy and were provided to students at a one-to-one ratio. Teacher and curriculum support officer training focused on using the new books and teaching techniques. Teachers were encouraged to develop their own lesson plans for using the new books.

Strategy 3: Teachers received the same training and coaching time and new textbooks as in Strategy 2, with the addition of new teachers’ guides matched to the new textbooks. The guides contained 150 days of partially scripted lesson plans.

The three-level PRIMR evaluation was implemented in grade 1 and 2 classrooms in 847 government schools in rural Kenya between March 2013 and October 2014. As curriculum support officers are organised by zone, 44 schooling zones were randomly assigned to one of the three treatments or the control group. Baseline and endline reading skills against the government benchmarks were measured using the EGRA (Early Grade Reading Assessment), which generates scores on letter recognition, oral reading fluency, and reading comprehension.
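The zone-level (cluster) randomisation described above can be sketched in a few lines. This is a toy illustration, not the study's actual procedure: the zone names are placeholders, and the equal allocation of 11 zones per arm is an assumption made for the sketch.

```python
# Sketch of cluster (zone-level) random assignment, as in the PRIMR design:
# whole zones, not individual schools, are assigned to study arms.
# Zone labels are placeholders; equal allocation across arms is assumed.
import random
from collections import Counter

random.seed(0)  # fixed seed so the assignment is reproducible

zones = [f"zone_{i:02d}" for i in range(1, 45)]  # 44 zones, as in the study
arms = ["control", "treatment_1", "treatment_2", "treatment_3"]

random.shuffle(zones)
assignment = {zone: arms[i % len(arms)] for i, zone in enumerate(zones)}

# Under this scheme each arm receives exactly 11 of the 44 zones:
print(Counter(assignment.values()))
```

Because every school in a zone shares the same curriculum support officer, randomising at the zone level avoids contaminating control schools with treated coaching practices.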

Key results

As shown in the chart below, the study found very limited impact from professional development and coaching using existing materials (treatment 1). Adding the new books led to statistically significant improvements in English and Kiswahili (treatment 2). Adding the teachers’ guides led to very large improvements in student literacy (treatment 3). For comparison, the average effect size of education RCTs in Africa reported by Conn (2017) is of the order of 0.23 standard deviations; PRIMR produced effects above one standard deviation. There are some reasons to quibble with these results: the evaluation had to rely on a difference-in-differences estimation overlaid on the randomisation because the treatment groups were not perfectly balanced at baseline. But it is highly unlikely that effects this large and consistent are a fluke.

Figure 1: PRIMR evaluation: key results

Source: Piper, Zuilkowski, Dubeck, Jepkemei, and King (2018), Figure 1.
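The difference-in-differences adjustment mentioned above works by comparing gains rather than levels, which nets out baseline imbalance between groups. A minimal sketch with invented numbers (not the study's data):

```python
# Toy difference-in-differences calculation. All numbers are invented for
# illustration; they are not from the PRIMR evaluation. The estimator nets
# out baseline imbalance by comparing gains rather than endline levels.

def diff_in_diff(treat_baseline, treat_endline, control_baseline, control_endline):
    """Treatment-group gain net of the control-group gain."""
    return (treat_endline - treat_baseline) - (control_endline - control_baseline)

# Hypothetical mean oral reading fluency scores (words per minute):
effect = diff_in_diff(treat_baseline=12.0, treat_endline=30.0,
                      control_baseline=10.0, control_endline=16.0)
print(effect)  # treatment gain of 18 wpm minus control gain of 6 wpm = 12.0 wpm
```

Even though the treatment group here starts 2 wpm ahead at baseline, the estimate reflects only the extra growth, which is why the method is a reasonable fallback when randomisation leaves groups imperfectly balanced.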

Value for money?

It is great that the intensified treatments produced stronger impacts, but what was their marginal cost? The cost of the core treatment (training and coaching) was kept constant across the three variants at US$5.63 per pupil. The addition of revised textbooks for all students cost an extra US$2.38 per pupil. The teachers’ guides added just US$0.16 per pupil. Although it’s unclear whether all the costs of content development were included, the guides were a very low-cost addition. Piper and co-authors estimate, for each treatment, the number of additional students reaching the government’s oral reading fluency benchmarks per additional US$100 spent.

Treatment 1 (training and coaching only): 2 extra students able to meet the oral reading fluency benchmarks (65 words per minute for English, 45 for Kiswahili).

Treatment 2 (training + books): 6 to 8 more students reaching the benchmarks.

Treatment 3 (training + books + guides): 15 more students reaching the benchmarks.

The large learning gains produced by the teachers’ guides doubled the cost-effectiveness of the programme.
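The cost-effectiveness arithmetic can be laid out explicitly. The per-pupil costs and benchmark-reader counts below are the figures reported above (using 7 as the midpoint of treatment 2's 6–8 range); the cost-per-additional-reader line is derived arithmetic, not a figure from the paper.

```python
# Cost-effectiveness comparison of the three PRIMR treatments.
# Per-pupil costs and readers-per-$100 counts are from the text; 7 is
# the midpoint of treatment 2's reported 6-8 range. The derived
# cost-per-additional-reader figures are illustrative arithmetic.

treatments = {
    "T1: training + coaching":       {"cost_per_pupil": 5.63,               "readers_per_100usd": 2},
    "T2: + one-to-one textbooks":    {"cost_per_pupil": 5.63 + 2.38,        "readers_per_100usd": 7},
    "T3: + scripted teacher guides": {"cost_per_pupil": 5.63 + 2.38 + 0.16, "readers_per_100usd": 15},
}

for name, t in treatments.items():
    # US$ needed to move one additional child over the fluency benchmark:
    cost_per_reader = 100 / t["readers_per_100usd"]
    print(f"{name}: ${t['cost_per_pupil']:.2f}/pupil, "
          f"${cost_per_reader:.2f} per additional benchmark reader")
```

The striking feature is that the guides raise per-pupil cost by only two per cent while roughly doubling the number of benchmark readers per dollar spent.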

Because of its careful design, the PRIMR impact evaluation has made an outsized contribution to our understanding of how different elements combine to raise the effectiveness of teachers. This study demonstrates the payoff to randomised trials that explore the marginal impact, and costs, of complementary programme elements. This is more expensive research – three treatment arms require a sample of 800+ schools rather than the 100-200 school sample seen in most RCTs. But if the knowledge yield from this single study influences other governments and research teams to follow suit, we can expect bigger dividends from future research on education quality.

What does it take to scale up a successful pilot programme?

Given PRIMR’s positive results, in 2015 the Kenyan government decided to scale up the third variant of the programme to all 23,000 public primary schools (and 1,500 low-cost private schools). After two years, an independent external evaluation documented Tusome’s sizeable impacts on students’ reading fluency (Freudenberger and Davis 2017).

The second of the papers I reference here, by Benjamin Piper, Joseph Destefano, Esther M. Kinyanjui, and Salome Ong’ele for the Journal of Educational Change (2018), is unusual in that it focuses on the implementation processes underlying the results. This topic is central to government policy but rarely gets research attention.

The authors compare Tusome to the conceptual framework developed by Crouch and de Stefano (2017) on why it’s so hard to drive system-wide change in education. Crouch and de Stefano observe that system-level change requires getting decentralised schools and teachers to adopt new behaviours.

They posit that an education ministry’s ability to do this rests on its institutional capacity for three key functions:

  1. setting and communicating expectations for the outcomes of education;
  2. monitoring and holding schools accountable for meeting those expectations; and
  3. intervening to support the students and schools that don’t meet expectations.

Piper and co-authors document how Tusome’s design addressed, and strengthened, some of these core functions.

First, on expectations, the programme is organised around two clear outcome goals: national benchmarks for oral reading fluency (words per minute) in Kiswahili and English. Piper and co-authors find these have been communicated broadly and extensively through training and coaching programmes. These outcome goals are also hardwired in the design of the textbooks and teacher guides, which present a sequence of lessons geared towards achieving “emergent” and “fluent” reading at appropriate points. Classroom observations in 2017 found that 99% of classrooms had one book per student, and 95% of teachers were using the guides. Moreover, teachers reported that they believe the guides help their students’ progress.

Second, school monitoring is a key element of Tusome. Curriculum support officers make regular classroom visits using tablets with instructional support tools. They are reimbursed for travel to schools, where they spot check students’ reading and observe teaching. They record where teachers are in the curriculum and whether teachers are using the new techniques. They meet with teachers to provide one-on-one feedback. In 2016, curriculum support officers averaged 90 visits each and recorded and uploaded 113,604 classroom observations. More than 80% of Class 1 and 2 teachers reported being observed by a support officer at least once per term. While this was short of the targeted three times per term, it is a degree of classroom-level monitoring and data collection that is unprecedented in Kenya, and rarely seen anywhere.

Third, on targeted support, Piper and co-authors find this was the weakest part of the implementation. While the school system is for the first time generating real-time data that clearly expose variations in performance at the school and district level, the researchers document little action thus far to target resources or support interventions to those that are struggling.

The 2017 external evaluation measured student reading in a national sample of schools after two years of implementation. In both English and Kiswahili, in both Class 1 and Class 2, there were large gains on a wide range of reading tasks. The percentage of Class 2 children meeting the national benchmark approximately doubled for both English (34% to 65%) and Kiswahili (37% to 66%). These are impressive gains for a programme scaled up nationally in just two years.

Piper and co-authors believe the main driver of success is the Ministry of Education’s effectiveness in the first two core functions identified by Crouch and de Stefano (2017) – setting and communicating expectations and monitoring implementation. The national programme achieved high implementation fidelity in materials provision, teachers’ professional development and, to some extent, instructional support.

More profoundly, they believe Tusome has transformed the “instructional core” of the first years of schooling. The programme moves teachers to engage with their students in a new way, with new teaching techniques, new materials, and new expectations for learning outcomes.

They also note that Tusome has reoriented the education system to focus on the classroom. Regular visits, structured observations, and feedback replace the isolation and performance vacuum found in most school systems at the classroom level. While there are no incentives or sanctions for teachers associated with the observations and feedback, the simple fact of classroom-level monitoring has reshaped the system’s norms and teachers’ felt accountability for performance.

Tusome is still only a few years old, and it will be important to watch whether it is sustained, deepened, or fades over time. Big improvements in early grade reading should produce lower grade repetition and better learning outcomes as children move up through primary school; administrative data will help confirm that Tusome’s early effects are sustained. But through careful research on what works to improve reading, and how programmes can be scaled successfully, Kenya and an enterprising group of researchers have made big contributions to our knowledge base.

Continuing evidence from Kenya hopefully will expand that base further – and inspire other countries and research teams to do similarly important work.

References

ASER Centre (2017), Annual Status of Education Report (Rural) 2016, New Delhi, http://img.asercentre.org/docs/Publications/ASER%20Reports/ASER%202016/aser_2016.pdf

Conn, K.M. (2017). ‘Identifying effective education interventions in sub-Saharan Africa: A meta-analysis of impact evaluations.’ Review of Educational Research, 87(5), 863-898.

Crouch, L., and J. de Stefano. (2017). ‘Doing Reform Differently: Combining Rigor and Practicality in Implementation and Evaluation of System Reforms.’ International Development Working Paper No. 2017-01, RTI International.

Freudenberger, E., and J. Davis. (2017). ‘Tusome External Evaluation – Midline Report.’ https://pdf.usaid.gov/pdf_docs/PA00MS6J.pdf

PASEC (2015). ‘PASEC2014 Education System Performance in Francophone Sub-Saharan Africa: Competencies and learning factors in primary education.’ Conférence des ministres de l’Éducation des États et gouvernements de la Francophonie, Dakar http://www.pasec.confemen.org/wp-content/uploads/2015/12/Rapport_Pasec2014_GB_webv2.pdf

Piper, B., de Stefano, J., Kinyanjui, E.M., and S. Ong’ele. (2018). ‘Scaling up successfully: Lessons from Kenya’s Tusome national literacy program.’ Journal of Educational Change, 19(3), 293-321.

Piper, B., Zuilkowski, S.S., Dubeck, M., Jepkemei, E., and S.J. King. (2018). ‘Identifying the essential ingredients to literacy and numeracy improvement: Teacher professional development and coaching, student textbooks, and structured teachers’ guides.’ World Development, 106, 324-336.



This commentary was originally published as part of the CfEE Annual Research Digest 2017-2018, in September 2018.

The volume is edited by Lee Crawfurd, Strategic Advisor with the Ministry of Education in Rwanda and the Tony Blair Institute for Global Change, and a CfEE Fellow.



Barbara Bruns (@barbarabruns) is a Visiting Fellow at the Center for Global Development and former lead education economist at the World Bank, where she specialised in Latin American education and rigorous evaluation of education programmes. As the first manager of the $14 million Spanish Impact Evaluation Fund (SIEF) at the World Bank from 2007 to 2009, she oversaw the launch of more than fifty rigorous impact evaluations of health, education, and social protection programmes. Barbara also headed the EFA Fast Track Initiative (now Global Partnership for Education) from 2002-2004 and served on the 2003 Education Task Force appointed by the UN Secretary General.
