I knew that if I had enough patience, the corporate reformers would eventually let slip some data that would prove, once and for all, just how unscientific the metrics they've been using to shut down schools really are.
That day came earlier this week. I encourage anyone to recheck my calculations, just in case, but if I've found what I think I've found, it will be the 'death blow' to the New York City 'value-added' model they use to rate and close down schools.
Schools are shut down for getting multiple years of poor progress reports. The progress reports are what give schools their letter grades of A, B, C, D, or F. The way the progress reports are calculated is as follows: 15% is based on school environment, 25% is based on student performance, and the remaining 60% is based on something called student progress.
This is defined by the DOE in the guidebook as follows:
I. Student Progress (60 points): measures how individual students’ proficiency on state ELA and math exams has changed in the past year, as they move from one grade to the next. The Progress Report measures individual students’ growth on state English and Math tests using growth percentiles, which compare a student’s growth to the growth of all students in the City who started at the same level of proficiency the year before. A student’s growth percentile is a number between 0 and 100, which represents the percentage of students with the same score on last year’s test who scored the same or lower than the student on this year’s test. To evaluate the school, the Progress Report uses the median adjusted growth percentile. The metric is calculated for all students and for students in each school’s lowest third, in both ELA and mathematics. Each of these four metrics counts for 15 points.
The premise is that since it is unfair to blame a school for getting kids with low starting scores, they want to measure the 'growth,' or how much the school 'moves,' its students. So for each student they take the starting score and the ending score. Then they look at all the other students in the city who had the same starting score and calculate what percent of those students this student did the same as or better than on the test a year later. Then they take the median over all the students in the school, and that becomes the school's student progress score. (They do this separately for math and English, and again for math and English for each school's lowest third of students.) Those four numbers make up the student progress component, which is 60% of the progress report, which determines whether the school gets an A, B, C, D, or F, and which can lead to the school being shut down.
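For concreteness, here is a little Python sketch of that calculation as I understand it. The column names ('start_score', 'end_score', and so on) are placeholders I made up, not the DOE's actual field names:

```python
import pandas as pd

# Toy data with made-up column names, just to show the mechanics.
students = pd.DataFrame({
    'school':      ['A', 'A', 'B', 'B'],
    'start_score': [650, 650, 650, 700],
    'end_score':   [660, 640, 670, 705],
})

# For each student: among all students citywide who had the same
# starting score, what percent scored the same or lower this year?
students['growth_pct'] = (
    students.groupby('start_score')['end_score']
            .rank(method='max', pct=True) * 100
)

# A school's student progress score is the median growth percentile
# of its students (computed separately for ELA and math, and again
# for each school's lowest third).
print(students.groupby('school')['growth_pct'].median())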
New York City just released the progress report database for the 2010-2011 school year. To see how good a statistic this 'progress' metric is, I thought I'd compare how each elementary and middle school did against how it was scored in the 2009-2010 school year. Both files, if you want to re-check my calculations, are available here.
Now, I always suspected that this number didn't really measure much. When I got the two databases, I sorted the roughly 1,100 schools by this progress score from lowest to highest for both years. Then I combined the databases to see how the schools had changed relative position in one year's time. If this metric were at all reliable, there would be some kind of correlation between the two numbers. So a school that was 100th from the bottom in 2009-2010 would probably be pretty close to that position in 2010-2011. After all, it has mostly the same students and mostly the same teachers, so there shouldn't be a major difference.
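Here is roughly what that ranking and merging looks like in code. This is only a sketch under my own assumptions: I'm pretending each year's database has been saved as a CSV with a 'DBN' school identifier and a 'progress_score' column, which may not match the actual spreadsheet headers:

```python
import pandas as pd

# Hypothetical filenames and column names; adjust to the real files.
y1 = pd.read_csv('progress_2009_2010.csv')
y2 = pd.read_csv('progress_2010_2011.csv')

# Rank each year's schools by progress score, lowest = 1.
y1['rank_0910'] = y1['progress_score'].rank(method='first')
y2['rank_1011'] = y2['progress_score'].rank(method='first')

# Match schools across the two years and see how far each one moved.
merged = y1.merge(y2, on='DBN')
merged['moved'] = (merged['rank_1011'] - merged['rank_0910']).abs()

# If the metric were reliable, the two years' ranks would correlate.
print(merged[['rank_0910', 'rank_1011']].corr(method='spearman'))
```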
So after I got all my data sorted out, I made my scatter plot and instead of getting the linear correlation that one would expect, I got this:
As anyone can see, there is at best a very weak correlation between the two years.
A summary of some of the main results:
Out of 1,100 schools:
266 moved under 100 spots.
218 moved between 100 and 200 spots.
164 moved between 200 and 300 spots.
127 moved between 300 and 400 spots.
96 moved between 400 and 500 spots.
84 moved between 500 and 600 spots.
75 moved between 600 and 700 spots.
40 moved between 700 and 800 spots.
24 moved between 800 and 900 spots.
8 moved between 900 and 1000 spots.
6 moved between 1000 and 1100 spots.
So well over half of the schools, more than 600 of them, moved over 200 spots in one year!
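If you want to reproduce that summary yourself, bucketing the rank movement into 100-spot bins from the merged table above takes just a couple of lines (again assuming my made-up column names):

```python
# Continuing from the 'merged' DataFrame in the earlier sketch.
import pandas as pd

bins = range(0, 1200, 100)  # [0, 100), [100, 200), ..., [1000, 1100)
counts = pd.cut(merged['moved'], bins=bins, right=False).value_counts()
print(counts.sort_index())

# Share of schools that moved 200 or more spots.
print('moved over 200 spots:', (merged['moved'] >= 200).mean())
```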
To me, this is the most rock-solid proof that this metric is completely unreliable. Schools just don’t get that much better or that much worse in one school year.
My hope is that some people will independently confirm my calculations. I checked the individual school progress reports for some of the outliers, just to make sure that I hadn’t made some horrible error that you can only make with a computer. All the data is right there on their website.
If I'm correct in all my calculations, this would mean that the entire progress report system is a farce: many schools have been shut down unnecessarily, and communities have had to suffer the undeserved shame that comes with a school being shut down.
In my next post, which you can read here, I discuss what sorts of conclusions about charter schools in NYC can be derived from the progress report database for 2010-2011 under the assumption that the progress metric is valid.