Open Letters to Reformers I DON’T Know. Part IV: Arne Duncan

This is the sixteenth in a series of open letters I have written over the past two years to reformers I know and, more recently, to reformers I don’t know.  Go here for links to the other letters and responses.


Dear Secretary Duncan,

Race To The Top was intended to improve education in this country by finally holding the schools, and the ‘adults’ who work in those schools (meaning the teachers), accountable for their failure to produce adequate academic growth in students.  ‘Ineffective’ teachers need to be identified and fired, and ‘failing’ schools need to be identified and closed.  Unfortunately, the entire program collapses without reliable metrics to judge which schools are truly ‘failing’ and which adults are truly ‘ineffective.’

To illustrate the issues with the accountability metrics that have been the trademark of your tenure, I’ve applied them to something you know intimately: your senior-year Harvard basketball team, the 1986-1987 Harvard Cagers.  Were the 1986-1987 Cagers a ‘failing’ team?  Was Coach Peter Roby an ‘ineffective’ coach?  Were you and Keith Webster ‘ineffective’ co-captains?  It all depends on which metrics you use.


Your last-place finish with a 9 and 17 record is just one way to judge your efforts.  Some would use it as the sole metric and declare this a ‘losing’ season.  But if you just look at points scored, you didn’t do so badly with 2152, which at the time was pretty close to the 1972 Harvard record of 2221 points.  So if we look at just offense, the team was not failing.  But you also gave up 2169 points, which is not so good defensively, though only 17 more than you scored.  In the ‘average’ game that season, you lost 82.8 to 83.4.  Doesn’t sound so bad when measured that way.

But what if Coach Roby were judged on your performance on just one day?  Well, it would depend on which day.  The ‘86-‘87 Cagers were streaky.  You started off 0 and 3, all away games.  Then over the next ten games you went 7 and 3, bringing your record to 7 and 6.  The last two of those wins came against Penn and Princeton, who finished 1st and 2nd respectively in the Ivy League that year, on January 9th and January 10th, 1987.

The Penn game is still considered one of the greatest comeback upsets in the history of Harvard basketball.  With 11:50 remaining you were down by a seemingly insurmountable 19 points.  With 4:21 left you had chipped away at the lead but were still down by 10.  Then Harvard’s top scorer, you, went on an amazing run, scoring 14 points in just 3 minutes, to force an eventual overtime.  Then, you remember, the legendary finish.  Down by 2 with 33 seconds to go in overtime.  Phillips ties it up with a jumper with 9 seconds left.  Then, with the Harvard home crowd going crazy, Webster steals the ball from Elzey and hits the winning shot at the buzzer.

Here’s the footage; in good faith, I found it for you.

Crimson sports writer Jonathan Putnam started his article about these two wins with “This past weekend will long remain one of the greatest in Harvard men’s basketball history.”

After that, the Cagers’ season went downhill.  Over the next thirteen games you went 2 and 11, finishing tied for last place in the 1987 Ivy League with the Brown Bears.  You had beaten Brown convincingly on February 6th, 108 to 90, but in the rematch on February 21st, your last home game at Briggs Athletic Center, you lost a heartbreaker.  But who on that team could guard you?  Who could guard Webster?  Definitely not Lynch.  No way Murray could either.  Even your career-high 32 points, 24 of them coming in the second half, weren’t enough, and Webster had a cool 21.  You still lost 90 to 87.  That was your game to get out of the cellar.  A major missed opportunity.  And was that failure one of Coach Peter Roby?  Or of the co-captains, you and Webster?  Should Roby have been fired?  Should you and Webster have been replaced as co-captains?

Maybe instead of wins and losses, the team could be judged on ‘growth’ or ‘value-added.’  If before the season a computer had predicted the Cagers only had the talent to go 4 and 22, then the 9 and 17 record would credit Coach Roby with adding some value to the team.  But if the computer had instead predicted you would go 13 and 13, well, then the team did not meet its growth targets.  How would you like it if your hard work were declared a failure by a computer?

Would the same team really have done so much differently had you still had Coach McLaughlin?  By 1989 the Cagers were 4th in the Ivy League under Roby, and they were 3rd in 1990 and 1991.  Of course, Coach Roby went on to have a legendary career, and in 2007 he was named one of the 100 most influential sports educators in America by the Institute for International Sports.

You and Webster were celebrated at the end of the season with various accolades, and deservedly so.  At that time, only twelve players in Harvard basketball history had ever scored 1,000 points, and you and Webster were two of them.

If the point of playing college basketball isn’t just to win games or score points, but to develop citizens who understand how to be on a team, how to work together, and how to become future leaders, then maybe Coach Roby deserves a coach of the year award for 1987.  That season influenced your future.  You went on to play some professional ball, first in Rhode Island and then in Australia.  Besides Jeremy Lin, there aren’t many other Harvard players who played any kind of pro ball.  And as far as leadership goes, you went on to become Secretary of Education.  Did anyone from the ‘87 Brown team land any cabinet positions?  Lynch?  Murray?  No way.

You know a lot about sports, so let me ask you something:  What do you think is in worse shape, our country’s education system or our country’s sports program?  I can apply the same arguments you and others use to declare that our education system is ‘broken’ to sports.  You might think this is ludicrous:  we do well in the Olympics.  Our baseball, football, and basketball professional teams consistently trounce teams from other countries.  But I can say many good things about our education system too:  Our top students, just like our top athletes, can go head to head with the best in any country.  Our universities are the best in the world.  Our education system has produced some of the most innovative thinkers in the world.  We’ve fostered creativity and have also produced some of the greatest musicians and entertainers in the world.

But our ability to produce the top football and basketball players in the world is not proof that our country has a high-performing sports program.  With the obesity rate in this country, I’d say that our ‘average’ athlete is not very good at all.  To compare countries on an even playing field, the sports equivalent of the PISA tests is certainly the other football, or soccer.  In soccer, we are very mediocre.  Watching the World Cup game against Belgium last summer opened my eyes to how far behind we are in soccer compared to much, much smaller countries.  Yet we have just as much, if not more, opportunity to field a competitive soccer team.

How would you react if the President appointed a Secretary of Physical Education who had never played sports or coached sports?  And what if this person declared that our lackluster performance in the World Cup soccer tournament is evidence that our physical education system in this country is horribly broken?  And what if he made the argument that he has identified the problem as the weakness of one of our most popular games, your beloved basketball?

Here’s the argument why:  In a soccer game, it is very hard to score a goal.  Often, entire games end 0 to 0.  Yet in basketball, teams regularly score over 100 points a game.  What kind of point inflation is this?  With basketball, we’ve been lying to ourselves, patting ourselves on the back for being such great athletes, when the reality is that we have not been challenging ourselves with this sport.  For one thing, the hoop is way too low.  Maybe 10 feet was OK sixty years ago, but not anymore in this global economy.  The first thing we need to do to fix basketball is to raise the hoop to about 15 feet.  Dunking wouldn’t be quite so easy anymore, nor should it be.  The next thing we need to do is cut back on the inflated scores.  Why two points for each basket?  It should be just one point.  And the three-point line is way too close to the basket.  It should be moved back to about 40 feet.

I do realize that the scores in basketball games will, at least at first, drop drastically.  But that’s just at the beginning, until players get used to the new rigorous standards.  By holding the teams, and especially the coaches, accountable, eventually teams will be scoring 100 points a game again, and even dunking, with the 15-foot-high hoop.  How awesome will that be?

You know a lot about sports, and basketball in particular, so you immediately know in your gut that these suggestions about the 15-foot hoop, and the entire premise that basketball is ‘failing,’ are nonsense.

But this is how I, as someone who has been involved in education for almost my whole life, feel about some of the things you have said and done with regard to education in this country.  Here is an example of something you said in a TV interview last year:  “The vast majority who drop out of high school drop out not because it’s too hard but because it’s too easy.”

I suppose that there are a few kids here and there who are bored by school because it isn’t challenging enough.  Most of those kids don’t drop out, though; they stick it out and deal with their boredom, I know.  No, the “vast majority” don’t drop out for that reason.  Kids drop out of school for a lot of reasons, but school being too easy is a pretty rare one, and I’m concerned that the Secretary of Education is not aware of this.  It would be like the Secretary of Physical Education saying that most kids can’t dunk because the basket is too low.

Secretary Duncan, time is running out.  It’s like that game against Penn on January 9th, 1987.  There are only a few minutes left and the team is down big.  Teachers are fleeing the profession, and I believe there will soon be a teacher shortage, as new candidates avoid the profession for the same reasons that the older teachers are leaving.  Standardized testing is out of control.  How much time, energy, and resources are being spent on testing?  Your legacy is not looking good right now.  But it is not too late.  Can you please rise to the occasion as you did that night against Penn, when you scored 14 points in three minutes to force overtime?  Please, Captain Duncan, would you muster up the will to lead a final charge and again turn an almost hopeless situation into one of the great comeback finishes of all time?


Gary Rubinstein

Math Teacher

Arne Duncan playing basketball

Me playing basketball

This entry was posted in Open Letters Series, Uncategorized.

26 Responses to Open Letters to Reformers I DON’T Know. Part IV: Arne Duncan

  1. Gary, this is brilliant! It’s entertaining and funny, yet makes the point very cogently about the difficulty of measuring performance in a complex system. What a thought you leave us with — Duncan leading a comeback against the ongoing disaster of the high-stakes testing, blame-the-teacher juggernaut.

  2. Pingback: Gary Rubinstein Writes a Letter to Arne Duncan | Diane Ravitch's blog

  3. Brilliant. Just brilliant.

  4. Leigh Campbell-Hale says:

    best and favorite post ever…

  5. Jonathan says:

    Absolutely wonderful. Thank you.

  6. meghank10 says:

    You always give me hope. Maybe we will get through to him. I spoke to him in person myself, and told him to end the testing.

  7. skg917 says:

    Reblogged this on skg917.

  8. Brian Davison says:

    This is silly. Rubinstein must not be familiar with basketball since there basically are VAMs for it. They are called Player Efficiency Ratings (PER) and are used by many professional teams to evaluate players. In fact, many of the GMs now in the league don’t have high-level basketball experience but have utilized PER to effectively rate and acquire the best talent relative to their cost.

    Rubinstein doesn’t know what he doesn’t know. But I guess a lot of teachers don’t really understand VAMs even though they feel qualified to make false statements, such as that “achievement” is the same as “growth”.

    I understand this is a blog for the far-left extreme teacher activists who want no accountability. Another teacher pointed me in this direction. I sued VDOE and forced them to give me the VAM/SGP data so we could evaluate districts/schools. My case is still ongoing so I may get teacher names here soon. I have two main points:

    1. Schools do not exist to employ teachers; schools exist to effectively evaluate students

    2. By resisting efforts to use VAMs to measure effectiveness, unions are literally taking future income out of the pockets of disadvantaged kids and placing it in the paychecks of ineffective teachers. That is EVIL.

    • meghank10 says:

      “Schools exist to effectively evaluate students”?
      Really? Not educate them? Who is the evil one?

      • Brian Davison says:

        Good catch. That was a typo. It should be: Schools don’t exist to employ teachers; schools exist to effectively educate students.

        The evaluation is for teachers to gauge whether their instruction is effective.

        Thanks for the correction.

      • meghank10 says:

        Freudian slip, I think.

    • mjpledger says:

      You do realise that the American Statistical Association have put all sorts of caveats on the use of VAM for evaluating teachers – see
      Specifically they say …
      “VAMs typically measure correlation, not causation: Effects – positive or negative – attributed to a teacher may actually be caused by other factors that are not captured in the model.”
      “Ranking teachers by their VAM scores can have unintended consequences that reduce quality”

      One of these unintended consequences is that the number of people entering teacher training has dropped substantially – it’s likely that there are going to be problems with getting enough teachers, and whether they are good essentially won’t matter, because there is no alternative or the alternative is worse.

      Teachers are happy to be accountable (as they always have been to teachers and schools) but accountability has to be fair and it has to have little effect on the students. All the testing that students are subjected to under this regime so that “teachers can be held accountable” has profound consequences in what they are taught and the way they are taught and very little of that is good.

      • mjpledger says:

        And I am probably left-wing but I am not a teacher … I say this as a parent.

      • Brian Davison says:

        mjpledger, have you bothered to read the source research to which the ASA was responding? Did you realize those researchers provided a response that completely destroyed that ASA “statement”?

        You seem very concerned about whether any teacher is ever rated slightly inaccurately. But you do not seem very concerned about whether it’s actually probable that disadvantaged kids have ineffective teachers in their classrooms. Let me repeat the core issues:

        1. Schools do not exist to employ teachers. Schools exist to effectively educate students.

        2. By protecting ineffective teachers, the unions are literally taking future income from the pockets of disadvantaged youth (who have no parents to fall back on) and placing it in the paychecks of ineffective teachers.

        The question is basically whether your concern lies with the large number of disadvantaged students who desperately need an effective education or the small number of ineffective teachers who might be slightly inaccurately rated. Simple choice for me.

      • mpledger says:

      @Brian Davison
      The ASA would hardly weigh in on this if a teacher here and there were rated slightly inaccurately. They are weighing in because there is room for huge inaccuracies, and for the consequences to do the exact opposite of improving education.

      I don’t have much time for Chetty because his main result (see Figure 2, graph 1 in ) depends entirely on fitting a linear model to data that is so obviously non-linear. That’s a pretty fundamental misrepresentation of what’s really going on.

      Chetty et al don’t “destroy” the ASA statement – they just don’t get it – their response to the VAM not being able to deal with the things that affect a child’s performance outside of a teacher’s control (e.g. a divorce in the family, a parent goes to jail/gets laid off/moves overseas, a child gets tutoring/has a growth spurt/gets glasses) is just to come back with “The role of luck is inescapable in any measure of job performance.”

        It’s not “luck”, it’s a major imperfection in the model.

  9. Brian Davison says:

    mpledger, I realize all of the teacher groups try to show that the VAMs/SGPs are unstable and unreliable. That is simply false. I got the exact same results as every other major study when I looked at the results of teachers’ SGPs in Virginia.

    1. A teacher in the bottom 20% in year 1 was 10x more likely to stay in the bottom 20% than to rise to the top 20% in year 2.

    2. A teacher in the bottom 20% in year 1 was more likely to stay in the bottom 20% than to rise to any of the other 4 quintiles in year 2.

    3. The same applies to teachers in the top 20%.

    I realize why you want to muddy the waters with claims of uncertainty. But if the above results are so stable, how can you knowingly sentence any students, much less disadvantaged students, to sit in the class of an ineffective teacher?

    If you meet 3 NBA players, you will almost certainly come to the conclusion that NBA players are very tall. You won’t be able to say with 95% certainty (the normal statistical significance band) that NBA players are much taller than the general population, but anyone with a clue would believe they were. So don’t tell us, when it’s clearly obvious that some teachers are ineffective, that we must prove it with 95% certainty before we protect the students’ rights.

    • mpledger says:

      You realise from 1) that 1 in 10 teachers in the lowest quintile one year are in the top quintile the next. That 10%, who were measured as the least “effective” teachers, have become the most “effective” teachers.

      That must put you in a quandary … Either you believe ineffective teachers must be fired because there is no room for improvement, and so the 1-in-10 change means the model must be bogus, *OR* you believe that teachers are legitimately able to go from being the least effective to being the most effective … in which case we should really be trying to find out how that happened, so we can show the other ineffective teachers how to make such a momentous change … and then no one needs to be fired; the teachers just need to be educated.

      So which is it – is your model correct or are ineffective teachers only worth firing?

  10. Brian Davison says:

    mpledger, I believe we should follow the guidance of all the researchers and the state boards who say that a single year of data should NEVER be used to make hiring/firing decisions. Our VDOE recommends using 2+ years of data and 40+ student scores to gain more stability.

    If a teacher consistently remains in the bottom 20%, then he/she should be

    1. Moved out of the core classes that greatly affect every student’s future income (math and Reading)

    2. Provided professional development to improve (I wish all teachers got real PD but unfortunately my district – Loudoun – believes PD should be “voluntary”)

    3. If they still do not improve, we should consider removing them from teaching altogether (maybe an admin job).

    Since teacher pensions are not portable (if you fire them, they lose the $100K+ that has built up), I do not support firing teachers for low performance at this time. If pensions were portable (they could take the pension contributions with them), then I probably would for consistently low performers.

    But I do not accept the notion that we have to be 100% absolutely positive that a teacher is ineffective before we rescue the kids in his/her class from ineffective instruction. We are playing with the lives of young children here. Maybe we get a few cases wrong with the teachers every now and then. So be it. I would rather get a few cases wrong with the teachers than subject thousands upon thousands of kids to ineffective teaching. Which side are you on?

    • mpledger says:

      I am on the side of statistics, the side of accurate measurement and the side of reducing bias. I am on the side that uses the scientific method to get to the truth.

      I am also on the side of students who are subjected to endless test prep, subjected to appalling tests that measure only a sliver of their learning, and deprived of education resources, which are diverted to testing and hardware companies that do an appalling job of making, deploying, and grading tests.

      The VAM model may save some kids from an ineffective teacher (or maybe from a competent but unlucky one), but you’ve destroyed a meaningful and engaging education for the vast majority in the process.

      • garyrubinstein says:

        Nicely stated. This talk about the ‘bottom 20%’ is interesting, since most people agree that far less than 20% of teachers are truly ‘ineffective.’ I’m also always interested in how far apart, in absolute terms, the bottom 20% are from the top 20%. Suppose there were a standardized test, and the bottom-20%-by-VAM teacher and the top-20%-by-VAM teacher both had classes whose students did about the same the year before. Then there’s a 50-question multiple choice test, and the students of the top 20% get an average of 34 questions right. What, would you say, Brian, is the average number of questions the students of the bottom-20% teacher would get right? 31? 28? Do you have any idea? I’d bet that the scores would actually be pretty close, since VAM scores are usually about the same for everyone. It is only after the ranking and sorting that some teachers end up in the bottom 20%, despite having nearly the same VAM scores as the top-20% teachers. Could you do a FOIA to get more specific information about how the VAM is calculated? I think if you saw the raw numbers, you wouldn’t be such a proponent of VAM.
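        To make this concrete, here is a toy simulation (every number in it is a made-up assumption: a 50-question test, a baseline class average of 32 questions, and equal spreads for the true teacher effect and the year-to-year noise). It shows how both the raw-score gap between the top and bottom 20% and the year-to-year quintile ‘stability’ that gets cited in this thread fall out of whatever signal-to-noise ratio you assume, so neither number on its own settles how different the teachers really are:

```python
import random

random.seed(1)

N = 10_000          # hypothetical teachers
BASELINE = 32.0     # class-average questions right (of 50) -- assumption
SIGNAL = 1.0        # sd of true teacher effect, in questions -- assumption
NOISE = 1.0         # sd of year-to-year measurement noise -- assumption

# Each teacher has a fixed true effect; each year's observed score adds fresh noise.
true_eff = [random.gauss(0, SIGNAL) for _ in range(N)]
year1 = [BASELINE + t + random.gauss(0, NOISE) for t in true_eff]
year2 = [BASELINE + t + random.gauss(0, NOISE) for t in true_eff]

def quintiles(scores):
    """Rank each teacher into quintiles 0 (bottom) .. 4 (top) by observed score."""
    order = sorted(range(len(scores)), key=scores.__getitem__)
    q = [0] * len(scores)
    for rank, i in enumerate(order):
        q[i] = rank * 5 // len(scores)
    return q

q1 = quintiles(year1)
q2 = quintiles(year2)

bottom = [i for i in range(N) if q1[i] == 0]
top = [i for i in range(N) if q1[i] == 4]
stay = sum(q2[i] == 0 for i in bottom) / len(bottom)   # bottom stays bottom
jump = sum(q2[i] == 4 for i in bottom) / len(bottom)   # bottom jumps to top
gap = (sum(year1[i] for i in top) / len(top)
       - sum(year1[i] for i in bottom) / len(bottom))  # raw-score gap, year 1

print(f"top vs bottom 20% raw gap: {gap:.1f} of 50 questions")
print(f"bottom stays bottom: {stay:.0%}; bottom jumps to top: {jump:.0%}")
```

        Crank NOISE up relative to SIGNAL and the persistence shrinks toward the 20% you would get by chance; crank it down and the raw gap widens. The sketch doesn’t say which regime real VAMs are in; it only shows that the quoted stability figures and the absolute score gaps have to be interpreted together.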

      • Brian Davison says:

        Gary, first off, I think we have the same goals. I have no intention of randomly firing teachers. I think the real travesty is that so few teachers are given quality professional development by our school administrators. In my district – Loudoun County, outside DC – professional development (PD) is “voluntary”. First-year teachers are not given best-of-breed lesson plans to start off the year. They are expected to tailor lesson plans themselves from the beginning (how crazy is that?). And we have no district-wide teacher networks that provide best practices or useful lessons for others to share. Teachers are on their own… in a silo. That is the administration’s fault.

        Second, I have a really bad administration that thwarts any attempts at transparency (see below for more). So expecting to get a FOIA answer out of them is foolish (in court now on their non-responsiveness to 8 FOIA requests). But luckily I did obtain the entire state’s SGP data via an earlier court case against VDOE. So I can run those numbers you discuss. In these graphs on slides 9 and 10, you can see the distribution for SOL (achievement score) and SGP (growth measure) scores in Virginia by teacher (anonymous teacher ids). You’ll notice it resembles a Bell curve (Gaussian) and has small but significant tails. That means there is a relatively small but core group of highly effective and highly ineffective teachers. Those ineffective ones bother me. You’ll also note that the math scores are more spread out than Reading. The research shows that math teachers have a bigger impact positive or negative. One of my ideas is to move the less effective math teachers into Reading (at least at the lower grades). The ineffective teachers can’t hurt so much in Reading but they sure can in math. But again, the ineffective teachers should receive quality PD first before any actions are taken. You can see all the materials here on Google drive or on my YouTube channel where I recorded briefs to explain the SGP data.

        Finally, back to my district. The chairman of the school board didn’t like some of my skeptical questions on his facebook page. See the Jan 17, 12:28am timestamp on this one toward the bottom and the Jan 17, 10:34am timestamp on this one as well. He subsequently hid my Facebook post, slandered me to his board colleagues (I never used the word gr***, I simply sent him a snarky email about stop making his PR hole deeper once he got caught hiding Facebook posts and he tried to tell all the school officials that I was some kind of “threat”) and tried to get school security to follow me ever since when I spoke out at board meetings. It has become so bad after I noted Loudoun’s abysmal results on the PISA tests, that the LCPS administrators have tried to make me look like I am a threat (see the post at end of doc that was sent by school officials) even though all of the other commenters on the local paper clearly knew I was just handing out SGP info cards to STEM parents (look for same 8:44 Mar 14 post in context) at the local science fair (which my daughter competed in). Read the comments on March 14 and you can tell I am just referring to a target-rich environment for STEM parents. Note that I am a Navy veteran with an active TS clearance who would never use any kind of force against anyone. Pretty scary tactics by LCPS, eh? My friends were afraid of being targeted as well but I got a colleague who works in the Army HQ to submit the same FOIA requests as me so they can no longer shut me down. We’ll see how this turns out but I am simply asking for the LCPS admins to be open with the public.

      • Brian Davison says:

        mpledger, your last reply was full of nonsense. First, only about 0.5% of all educational spending goes to testing. There may be prep time that teachers spend of their own choosing, but an effective teacher can prepare her kids while teaching normal lessons. The only reason some teachers try to “prep” kids is that they might lack the ability to get kids to retain the material in the first place. Plus, tests are not just to measure what kids know. Tests give us a natural stopping point to review the material we have learned thus far. Most folks need 5-7 repetitions to retain info long term. Without tests and their required reviews, it would be much harder to achieve those 5+ repetitions.

        I dare say you have no clue what the scientific method means. I just showed you data indicating that the VAMs are fairly stable, or at least are measuring something (more than 1/2 of teachers in the bottom 20% in year 1 remain there in year 2). But let’s pull the string on the scientific method (as a STEM major, I think most don’t really understand the concept of a falsifiable hypothesis). Let’s say we could devise a falsifiable hypothesis about whether VAMs are reliable.

        In our hypothesis, we’ll take teachers who have taught at one school for several years and have demonstrated a high VAM over those years. We’ll transfer each of those teachers to another school. If the old school’s average VAMs go down, the new school’s average VAMs go up, and the transferred teacher’s VAM stays high, then that would mean the teacher’s individual VAM score is pretty accurate (at least not random). But we might not have a high “confidence level” for just one teacher. So let’s say we did the same with both high-VAM and low-VAM teachers and included a large number of teachers. If we could conduct this experiment and the results were as expected from the hypothesis, would you agree that the VAMs are somewhat reliable indicators of teacher effectiveness at that point? If not, what is your falsifiable hypothesis that we can test?

  11. mpledger says:

    United States federal, state, and local government spending on K-12 education is 545 billion dollars, so 0.45% of that is around 2.5 billion dollars. That’s not chump change.
    And that is just the direct cost – there are huge indirect costs as well.

    My brother-in-law did the PIRLS exam. Although his class was the top stream in one of the top schools in my country, once some of the kids had knocked off the easy questions they started horsing around – the outcome of the tests didn’t count for anything, so they saw no reason to put any effort in. It’s the same for the tests VAM is based on – the outcomes for kids don’t matter, so there is no incentive for them to try.

    So we shouldn’t be asking whether VAM is reliable, because that’s too far along in the process. We should be starting at the beginning with these questions …
    1) Do exams measure what we want to get out of them? Do they measure the type of student learning that we value?
    2) If exams are able to measure what we want, are they a valid and consistent measure of it? And how do we know that?
    3) If they accurately and precisely measure student achievement, do they accurately and precisely measure teacher input?
    4) Is the type of teacher input they measure the type of teacher input we want students to get?

    and there are a lot more really crucial decisions before we even get around to VAM modelling. Getting the data right is about 80% of a statistician’s time; modelling is only 20%. And from the look of things, I don’t think anyone has spent 1% of their time on this (but then, I think that is a fundamental difference between data science and statistics).

  12. Pingback: A teacher’s unusual evaluation of Arne Duncan — through the prism of Harvard basketball - The Washington Post
