New TFA alumni magazine has ‘some’ balance

I received my latest issue of ‘One Day,’ the TFA alumni magazine.  No, I have not asked TFA to remove me from their mailing list.  I have great hopes that TFA will ‘one day’ again become an organization that I can be proud of being a part of.

I’ve been writing recently about how TFA is too intertwined with the ‘reform’ mindset.  I know that not every staffer has the same views on the complex issue of education reform, but TFA, the organization, in my view, portrays the ‘reform’ ideas as, at least, the ‘majority’ opinion.

My issue with the ‘reformers’ is that they lie to make it look like they have evidence that the strategies of charter schools and evaluating (and firing) teachers by standardized test scores actually work.  My concern is that these ‘reforms’ will actually make things worse.  I truly believe this and also believe that with my limited time I have forced my way into the conversation and am making an impact.

The cover story was called ‘The Measure of a Teacher’ and had the subtitle on the cover “Spurred by Race to the Top, states grapple with the dicey question of how to evaluate the complex art of teaching.”  I was happy to see this as the focus of the issue.  It is, without doubt, the biggest issue in ed reform today.  A related article had op-eds about whether or not teacher ratings should be made public.

First, I’ll let you know that the ‘balance’ I mentioned in the title of this post was in the section about making teacher ratings public.  They reprinted Wendy Kopp’s Wall Street Journal op-ed, where she said that we should not be humiliating teachers — though she never mentions anything about the huge error rates in value-added measures.  But in the point/counterpoint section following Wendy’s piece was something from Tom Rochowicz, a teacher at the WHEELS academy (I am friends with the principal there, and even considered transferring there a few years ago).  He wrote:

The release only undermined the call for increased teacher accountability.  With such large margins of error from tests two years old, few took the data as an actionable measure of performance.

He then admits that on the ratings, he was judged merely ‘average.’  I was happy to see this.

The issue, though, that I had with the magazine was the giant eight page cover story by the magazine’s editor Ting Yu.  Early in the article, she lays out the problem with the old evaluations:

No one got fired for poor performance, despite the fact that 81 percent of administrators said there was a tenured teacher in their school who was ineffective.  The opponents of high-stakes evaluation – mainly teachers unions — argued that the metrics were flawed, that test scores provided an incomplete picture of a teacher’s performance, and there was no accounting for insidious factors like poverty and home life on a child’s performance.  Trying to gauge the impact of a teacher on a child’s academic progress was difficult and unfair, they said.  So, for the most part, school districts didn’t.

About Race to the Top she writes “states practically overnight rewrote their policies and laws to measure teacher performance linked, in part, to student achievement.”  But student achievement is not the same thing as value-added calculations.  I see ‘reformers’ constantly using the expression ‘student achievement’ interchangeably with test score ‘gains.’

I don’t think that comparing the results of a class to what a computer predicts they should have gotten with an ‘average’ teacher measures ‘student achievement’ any more directly than thorough principal evaluations do.  I think that if a principal can see a teacher’s lesson plans and they look good, and can then watch that teacher to see if they can teach from their lesson plan while students are participating and answering questions, well, that, to me, is evidence of ‘student achievement.’

One thing I’ve thought about with regard to teacher evaluations is to have teachers give pre-tests and post-tests for all their units.  This is good teaching practice anyway, and a principal would just have to look at the comparison between the two tests to see that a lot of ‘student achievement’ has occurred.  Teacher-made assessments will more easily reveal what the students have actually learned.
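To make the pre-test/post-test idea concrete, here is a minimal sketch of the comparison a principal would be looking at. All the student names and scores are invented for illustration; this is not data from any real classroom.

```python
# Hypothetical sketch: comparing a teacher-made pre-test and post-test
# for one unit. Names and scores are invented for illustration only.

pre_scores = {"student_a": 42, "student_b": 55, "student_c": 30}
post_scores = {"student_a": 78, "student_b": 81, "student_c": 65}

# Per-student gain on the unit: post-test minus pre-test
gains = {s: post_scores[s] - pre_scores[s] for s in pre_scores}

# The principal mainly looks at the typical gain across the class
average_gain = sum(gains.values()) / len(gains)

print(gains)         # each student's improvement on this unit
print(average_gain)  # class-wide improvement on this unit
```

The point of the sketch is that the evidence is direct and unit-specific: the teacher wrote both tests, so the gain shows exactly what was taught and learned, with no statistical model standing between the principal and the result.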

In a section called ‘Hard to Measure’ she does say that value-added is ‘controversial,’ but then says that new research from Gates indicates “that value-added analysis is more accurate than any other single measure in predicting success over the course of a teacher’s career.”  This is not really what the ‘Measures of Effective Teaching’ research says.  Basically, it says that principal evaluations don’t correlate much with value-added.  This is something that should raise a red flag about value-added.  And yes, the value-added for a teacher does more accurately predict ‘success,’ but since that ‘success’ is defined as more value-added, this should not be surprising.  A good analysis of this research can be found here.
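For readers who want to see what a low correlation between the two measures looks like in practice, here is a quick sketch. The ratings below are invented, not taken from the MET study or any district's data; the Pearson formula itself is standard.

```python
# Sketch: Pearson correlation between principal observation ratings and
# value-added scores for the same six teachers. All numbers are invented.
from statistics import mean

def pearson(xs, ys):
    """Standard Pearson correlation coefficient, in [-1, 1]."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

observation = [4.1, 3.8, 4.5, 3.2, 4.0, 3.9]  # invented principal ratings
value_added = [1.2, 3.5, 0.8, 2.9, 1.5, 3.1]  # invented value-added scores

r = pearson(observation, value_added)
print(r)  # with these invented numbers the two measures largely disagree
```

When two measures that are supposed to capture the same underlying quality produce a correlation near zero (or negative, as with these made-up numbers), at least one of them is not measuring what it claims to.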

As case studies of evaluations done ‘right,’ we see two places where TFA alumni are prominent, Washington D.C. and Tennessee.  Both of these evaluation systems are complete disasters.  D.C. has horrible teacher turnover, and their IMPACT system is generally considered the main reason.

The Tennessee system is hailed as a success because 25% of the teachers were labeled ‘ineffective’ or ‘minimally effective’ so “the city is poised to boast one of the most varied teacher-evaluation distributions in the country.”  See, TFA starts with the premise that there are a lot of ineffective teachers and then uses the fact that this system gives a lot of teachers a low evaluation as evidence that it is a good system.  This is the system that was mocked in the New York Times back in February.

The more I learn about value-added, the more I can’t believe that it counts for up to 50% of evaluations in some states.  New York passed a law to make it 40%.  D.C. is 50% for some teachers.  Tennessee has 35%.  In D.C., from what I have read, they are lowering that number.  Ironically, Bill Gates recently said in a speech “If someone wants to rush an evaluation system into place – and they think they can speed it through by doing it without the teachers – that is a grave mistake.”  It seems like he is saying, without explicitly saying it, that systems implemented in places like Washington D.C. were rushed.
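To see why the weight matters so much, here is a minimal sketch of how a weighted composite evaluation works. The component scores are invented; the weights mirror the Tennessee-style split mentioned in this post and the comments (35% value-added, with the rest from observations and other measures), so treat them as an assumption, not a statement of any district's exact formula.

```python
# Minimal sketch of a weighted composite evaluation score.
# Component scores are invented; weights are an assumed Tennessee-style
# split (35% value-added / 50% observation / 15% other measures).

def composite_score(value_added, observation, other, weights):
    """Weighted average of evaluation components; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return (weights["value_added"] * value_added
            + weights["observation"] * observation
            + weights["other"] * other)

tn_weights = {"value_added": 0.35, "observation": 0.50, "other": 0.15}

# A teacher rated highly by observation (4.5 of 5) can still be dragged
# down sharply by one noisy value-added estimate when its weight is large.
score = composite_score(value_added=2.0, observation=4.5, other=4.0,
                        weights=tn_weights)
print(score)
```

With these invented inputs the teacher's composite lands well below the observation rating, which is the mechanical reason the error rates in the value-added component matter: a single unstable number moves the whole evaluation.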

The funny thing is that TFA was probably blind to the lack of balance in this piece.  This is the big obstacle to ever getting TFA to ‘come around.’  I do appreciate that they tried to include some counterpoints of view in the op-ed section, but this article was the ‘feature’ article and is written as ‘fact,’ so they need to be more careful, I think, in fairly representing both sides.


12 Responses to New TFA alumni magazine has ‘some’ balance

  1. meghank says:

    Tennessee has 50% based on test scores. 35% is value-added, and the other 15% is “student achievement”, but since they haven’t finished deciding what other measures you can use for the other 15%, last year everyone had to use value-added. The whole 50% will probably be value-added again this year, as they don’t seem too eager to come up with another assessment that they will “approve” for use for the other 15%.

    It’s true that all of Tennessee is using this new evaluation system, but many districts in Tennessee have said they will not be using the new evaluations to make firing decisions. The county surrounding Memphis, Shelby County, has said they will not fire teachers based on the evaluation scores. Memphis is worse than most of Tennessee because the district has said that they WILL use the scores to make personnel decisions. In fact, the superintendent said in a newspaper article that the lowest-scoring 10% of teachers will be fired at the end of this year.

  2. Terry says:

    I wish they had applied this system years ago in Baltimore to evaluate Michelle Rhee.  She would have been evaluated as a failure and fired.  Her record in DC was dismal as well.  If only the so-called reformers were held to the same high standards.  If they institute a system that results in high turnover, cheating, and a narrowing of the curriculum, aren’t they the failures?

  3. Terry says:

    As soon as I hear “research from Gates” I am skeptical. It appears the research and think tank always produce results that agree with the billionaire. It is a charade with paid shills going through the motions to come up with a report that agrees with the opinion the wealthy guy had from the beginning.

  4. Michael Fiorillo says:

    For a real life alternative to TFA spin and disinformation on this topic – including the title they apparently filched – go to

  5. Volunteer says:

    Gary, you wrote, “Basically it says that principal evaluations don’t correlate much with value-added. This is something that should raise a red flag about value added.”

    I agree. The problem is that value-added worshipers see the evaluators, not the value-added methods, as the cause of the disparity.

    A few weeks ago, the TN Dept of Ed (headed by former TFAer Kevin Huffman) released its review and recommendations of the TEAM evaluation system. Page 32 of the report harps on the disparity between the percentage of teachers who scored a 1 on TVAAS (16%) and those who scored a 1 on observations (0.2%):

    “In many cases, evaluators are telling teachers they exceed expectations in their observation feedback when in fact student outcomes paint a very different picture. This behavior skirts managerial responsibility and ensures that districts fail to align professional development for teachers in a way that focuses on the greatest areas of need.”

    “This disparity between student results and observations signifies an unequal application of the evaluation system throughout the state.”

    The report ignores the disparities at Levels 4 and 5:

    Level 4: TVAAS 11.9%, Observation 53%
    Level 5: TVAAS 31.9%, Observation 23.2%

    If the issue were really inflated observation scores, shouldn’t the mini diatribe focus on Level 4? Or why not charge that more teachers should’ve been scored a 5 in observations? I wouldn’t rule out occasional inflated observation scores, but anyone with an ounce of knowledge regarding value-added who reads the report understands the disparities stem from value-added, especially since the report is only taking into account one year’s worth of data!

    But there’s not a single sentence in the entire report that so much as considers the flaws and limitations of applying value-added models to individual teachers. To question value-added is an act of heresy apparently. I’ve discovered that when presented with evidence of value-added’s flaws, the only responses people at the state level offer amount to “well, we’ll just have to agree to disagree” or “there’s evidence, too, that shows value-added can be reliable”–without citing any specific evidence, mind you.

  6. E. Rat says:

    I don’t know why the sign of a successful evaluation system is how many “ineffective” teachers it finds unless the goal is to find “ineffective” teachers. But still, it’s a circular argument: we need these systems because of ineffective teaching, and we’ll know they work if they find ineffective teaching.

    The people arguing in their favor have yet to prove that there is widespread ineffective teaching or that their systems measure it.

    If the deform crowd were really interested in improving teacher performance through their metrics, wouldn’t they be attempting to correlate their results to other teacher evaluation metrics? The (extremely limited and questionably derived) data offered don’t show any correlation – 81% of principals reporting a teacher is ineffective doesn’t make for 25% of all teachers.

    I do think that the numbers these systems generate are more important for reformers in framing the debate around bad teaching. I believe Baltimore found that over 60% of its teachers are ineffective this year. When that many teachers are reported to be lousy, it feeds the idea that teachers are altogether lazy, child-hating slatterns who don’t deserve pensions, decent pay, or union protection. The number can’t be based in reality, but it’s mediagenic.

  7. CY says:

    This caught my eye, “No one got fired for poor performance, despite the fact that 81 percent of administrators said there was a tenured teacher in their school who was ineffective.”

    Should they have been fired? How about helping them improve? Do teachers who get tenure simply stop professional development? No. Do those principals have the obligation to develop all of their educators, from a first-year teacher to a seasoned professional? Yes. One thing I do appreciate about Tennessee’s new evaluation system, and Memphis’ iteration of it in particular, is that principals are obligated to observe and actually discuss strengths and weaknesses with their teachers at least 4 times a year. Whether this actually happens in all schools is another question…

    Hopefully I can join you on a TalkingEd episode soon! School starts Monday…

    • Terry says:

      Many teachers leave before they can be fired, and many teachers who leave are effective.  Quality teachers do not want to work in a demoralizing, bureaucratic environment, especially one run by those who never taught or taught only for a few years. This type of administrator is usually overconfident and clueless, and quality teachers do not respect them.

    • Selma says:

      How about tenured teachers who don’t care to improve? I worked with one. She had been around for 15 years. She doesn’t go to professional development outside of what’s required because “I don’t get paid to do that.” When asked how she felt about her grade switch (to kindergarten), she replied, “As long as I’m getting paid, I don’t care.” Her class was the lowest performing in the entire school (all grade levels). Yet, she still has a job….

      • PhillipMarlowe says:

        Then the principal needs to get his/her butt off the seat and start the process to improve or dismiss the teacher.
        Or is it your fault that the principal is doing nothing?

      • E. Rat says:

        YES. I get really tired of the claim that the only thing principals can do is put up with these terrible, terrible teachers (or entice them to move to other schools where they can continue to be terrible).

        Principals DO have the ability to remove tenured teachers from the classroom. It involves observations, paperwork, some kind of improvement plan possibly including peer assistance, etc. It can be a long process. I’m sure it’s tedious. But I find that many administrators who complain about their teachers have done very little to be proactive about the situation. Working at a school is not easy; this is part of what principals sign up to do.

        I’m also curious about how a teacher’s class could be the lowest performing of all grade levels. What kind of metric was used to determine that? And what are the “required” professional development sessions? Are parents concerned about this teacher’s classroom? Do other teachers at the site agree that this teacher isn’t very good, and have they used the power of peer pressure to encourage a new career? Frankly, I’ve found that a strong staff is often much quicker to remove a teacher who’s not committed to the community than a principal.
