Following on from the last post, we're unlikely any time soon to have funding to dose every school kid in Britain with radioactive markers and fMRI-scan them a term later to see how their neurons are getting on, even if you could get that past the ethics committee and the Nuclear Dread. So unless someone comes up with a field-expedient diagnostic test, we'll need some other way of assessing the problem. Which means that this annoyed me.
So some firm decided to try analysing the primary school SAT results better. They broke down the UK into much smaller units than Local Education Authorities or even schools - neighbourhoods of 300 people on average. They then classified them into 24 groups based on demographic and socio-economic indicators, looked at the average results for each group, and arrived at an expected score for each school based on the distribution of those groups in the school's intake. They then compared the actual results to see which schools were really doing better or worse.
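The arithmetic behind that "expected score" is simple enough to sketch. Everything below is invented for illustration - the group names, the national means, and the example school's intake mix are mine, and three groups stand in for the firm's 24:

```python
# Hypothetical sketch of the expected-score calculation described above.
# Group labels, mean scores, and the intake mix are invented for illustration.

# National average SAT score for each demographic group
# (the firm used 24 groups; three stand in for them here).
group_means = {"A": 28.0, "B": 26.5, "C": 24.0}

def expected_score(intake_shares):
    """Weight each group's national mean by its share of the school's intake."""
    return sum(share * group_means[g] for g, share in intake_shares.items())

school_intake = {"A": 0.5, "B": 0.3, "C": 0.2}  # shares sum to 1
expected = expected_score(school_intake)         # 26.75 on these made-up numbers
actual = 27.1                                    # the school's real average
value_added = actual - expected  # positive => beating its demographic expectation
```

The comparison at the end is the whole point of the exercise: a school with a disadvantaged intake can look bad on raw scores and still come out well ahead of its expected score.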
And they got quite a lot of criticism for not using a database of pupils that...wait for it...the government won't let them use. This is a pity. Ever since Pierre Bourdieu, we've been well aware that there is much more to class than money. With all that data, we could do a lot of interesting things; we could, for example, use principal components analysis to establish objectively defined groups and see how well schools are doing that way. We could benchmark them against the Flynn effect, and I suspect quite a lot of schools would turn out just to be tracking the gradual uplift overall. But if we can't see the data we can't do anything.
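For what a PCA-based version of the grouping might look like: the sketch below is entirely my assumption, with random numbers standing in for the pupil-level indicators we can't actually see, and a crude quartile split standing in for a proper clustering step.

```python
import numpy as np

# Hypothetical sketch: deriving groups from raw indicators rather than
# hand-picking 24 categories. The data are invented; each row is a
# neighbourhood, each column a socio-economic indicator (income, tenure,
# qualifications, and so on).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))

# Centre the data and take principal components via the covariance
# eigendecomposition.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]         # largest-variance components first
components = eigvecs[:, order]
scores = Xc @ components[:, :2]           # each neighbourhood in 2-D

# Crude grouping: quartiles of the first component stand in for a real
# clustering step in the reduced space.
quartiles = np.quantile(scores[:, 0], [0.25, 0.5, 0.75])
groups = np.digitize(scores[:, 0], quartiles)  # four objectively defined groups
```

With real pupil data you would cluster properly in the reduced space rather than slicing one component into quartiles, but the principle is the same: let the data define the classes instead of asserting them up front.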