What does the data tell us about local schools?
The comments below relate to Tom Bewick’s article ‘Let’s make sure every young person achieves success’, published in the Brighton and Hove Independent, 28 August 2015.
The article (together with Greg Hadfield’s article in the same issue) provides data concerning the pass rates at GCSE for the nine state secondary schools in Brighton and Hove. In particular, pass rates are given for students who qualify for the pupil premium and for those who don’t (PP and NPP students).
The PP-NPP gap – what does it tell us?
Cllr Bewick notes that the three schools with the highest NPP pass rates have big gaps in attainment between PP and NPP students. He wants this explained. Well, a school may have a high pass rate for NPP students, but there is no reason to suppose its PP students will automatically do better than the PP average. So, suppose a school has a large cohort of able NPP students who achieve a pass rate of 80%. If its PP students achieve the average PP pass rate of 33%, there is a gap of 47 percentage points. Is the school to blame for this gap? There is an obvious danger that the three schools identified by Cllr Bewick will be criticised simply because they have a large number of able students.
One might hope that the performance of PP students would be raised by being in a school which achieves a high pass rate among NPP students. There is some evidence for this – two of the three schools whose results he queries are in the top three for PP pass rates. Schools with fewer PP pupils generally have better PP pass rates. One explanation for this might be that these schools have few of the most deprived students – we can’t assume that all PP students are identical.
Cllr Bewick comments that the size of the PP-NPP gap differs greatly between these schools. This really is dangerous statistical territory. He is identifying three schools for their good exam results and then looking at the difference between their PP and NPP pass rates. These are two quantities, each of which may vary markedly from one year to the next, so the gap between them may vary even more.
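To see why the gap is the more volatile quantity, consider a quick simulation. The cohort sizes and pass probabilities below are invented for illustration, not taken from the published Brighton and Hove figures; the principle is simply that the year-to-year variation in a difference combines the variation in both of the quantities being subtracted.

```python
# Illustrative only: cohort sizes and pass probabilities are invented,
# not taken from the Brighton and Hove data.
import random
import statistics

random.seed(1)

def simulated_pass_rate(n_students, p_pass):
    """Observed pass rate (%) for one year's cohort, where each
    student independently passes with probability p_pass."""
    passes = sum(random.random() < p_pass for _ in range(n_students))
    return 100.0 * passes / n_students

# Simulate many "years" for one hypothetical school:
# a large NPP cohort and a much smaller PP cohort.
npp_rates, pp_rates, gaps = [], [], []
for _ in range(10_000):
    npp = simulated_pass_rate(150, 0.80)  # hypothetical NPP cohort
    pp = simulated_pass_rate(30, 0.33)    # hypothetical PP cohort
    npp_rates.append(npp)
    pp_rates.append(pp)
    gaps.append(npp - pp)

print(f"NPP rate, year-to-year std dev: {statistics.stdev(npp_rates):.1f} points")
print(f"PP rate, year-to-year std dev:  {statistics.stdev(pp_rates):.1f} points")
print(f"Gap, year-to-year std dev:      {statistics.stdev(gaps):.1f} points")
```

The gap bounces around more than either pass rate on its own, so ranking schools by the size of one year's gap largely ranks noise.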
To see why looking at the attainment gaps for these three schools is a distraction, consider the results at BACA. Here, the pass rate is 35% for NPP and 25% for PP students – a gap of only 10 percentage points, but one produced by low NPP results rather than outstanding PP ones. So a more useful question might be: why is there such a small attainment gap at BACA?
Education statistics – approach with care
In looking at education statistics, there are two useful rules to bear in mind. Firstly, national statistics vary slowly over time. (This shows that it takes a long time to achieve major changes in educational outcomes.) Statistics for an LEA can vary more rapidly, simply because the numbers of schools and students are smaller – in the case of Brighton and Hove, there are only nine state secondary schools. When it comes to comparing individual schools in league tables, the scores can vary markedly from one year to the next, for all sorts of reasons. Policy shouldn’t be made on the basis of a crude analysis of such data.
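The effect of small numbers is easy to demonstrate. The sketch below uses invented figures – a hypothetical school with 30 PP students a year, each of whom genuinely has a 33% chance of passing – and shows how widely the observed pass rate can swing purely by chance, with no change at all in the school’s performance.

```python
# A sketch with invented numbers, not the real Brighton and Hove data:
# a cohort of 30 PP students, each with a genuine 33% chance of passing.
import random

random.seed(42)

TRUE_PASS_PROB = 0.33
COHORT_SIZE = 30

yearly_rates = []
for _ in range(1000):  # a thousand simulated "years"
    passes = sum(random.random() < TRUE_PASS_PROB for _ in range(COHORT_SIZE))
    yearly_rates.append(100 * passes / COHORT_SIZE)

print(f"Lowest simulated year:  {min(yearly_rates):.0f}%")
print(f"Highest simulated year: {max(yearly_rates):.0f}%")
```

An unchanged school can look like a failing one in a bad year and a star performer in a good one – which is exactly why single-year league-table comparisons between small cohorts are so unreliable.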
Secondly, statistical data can only show patterns; it doesn’t give explanations. It may rule out some explanations, but there are often many possible explanations for a particular pattern. The correct approach is to start by deciding what hypothesis you want to test. Then decide what data you need to test it. Collect the data and use it to test your hypothesis. Starting from tables of data and hoping to extract meaning from them is back-to-front.
In particular, it is very easy to be misled by looking at extreme values from a table of data. GCSE maths students would be expected to draw scatter graphs to see the patterns in data. However, to explain the data, you would really need to have detailed information about the individual students involved. Have all students who might attract the PP been identified? Do some schools have admissions procedures which help them to avoid taking on too many ‘difficult’ students? Do some groups of parents avoid certain schools because they think they are ‘not for people like us’? And so on. We would also need to know whether schools spend the PP funding in different ways – it isn’t attached to individual students. These sorts of factors can lead to big differences in outcomes for schools which appear rather similar and which may even share a catchment area.
So let’s acknowledge that improving the prospects of PP students is something that will take time. There will be many factors involved, and many of these will not be in the hands of either the schools or the LEA. The pupil premium is not a bad thing, but I’m sure no-one imagines that additional funding, in itself, can remove inequalities.