Last week, the Democrats For Education Reform website announced a letter from “a coalition of over 30 education reform organizations” (including Teach For America) asking Duncan to hold teacher preparation programs ‘accountable,’ meaning that their funding, and even their existence, would be based on the value-added scores of the teachers who graduated from those programs.
These 30+ organizations read like a who’s who of groups that think value-added is so accurate that basing teacher evaluations, school evaluations, and now teacher prep program evaluations on it will surely close that achievement gap in no time. (I think my favorite name is ‘Step Up For Students.’) From my research, I’ve concluded that evaluations by a knowledgeable principal correlate better with ‘student learning’ than value-added does. Perhaps in 100 years value-added will reach a better level of accuracy. (For those of you who are new to this, value-added is when a computer predicts what a class of students should get on the year-end state test if they had an ‘average’ teacher. The prediction is based, mostly, on the scores those students got on last year’s tests, and the teacher is rated on how far the actual scores beat or miss it. It is highly volatile, with teachers getting rated great one year and horrible the next despite no change in their teaching. The reason for the error rate is that it relies on three things: 1) last year’s test results being meaningful, 2) the computer’s calculation of what the students ‘should’ get being accurate, and 3) this year’s test results being meaningful. If any of those three things is off, the entire calculation is off. A lot more could be said, but that’s a brief primer for you.)
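To make that primer concrete, here is a toy sketch of the calculation. The prediction formula and the scores are made up purely for illustration; the actual state models are far more complicated (and often proprietary), but the structure is the same: predict, then rate the teacher on the gap.

```python
# Toy sketch of a value-added calculation. The prediction formula and all
# scores here are hypothetical; real state models are far more complicated.

def predict_score(last_year_score):
    # Hypothetical model of what a student 'should' get this year under
    # an 'average' teacher, based mostly on last year's score.
    return 0.5 * last_year_score + 300

def value_added(students):
    # students: list of (last_year_score, this_year_score) pairs.
    # The teacher is rated by how much, on average, actual scores beat
    # (or miss) the computer's predictions.
    residuals = [actual - predict_score(prior) for prior, actual in students]
    return sum(residuals) / len(residuals)

classroom = [(600, 605), (650, 640), (700, 690)]
print(value_added(classroom))  # prints 20.0
```

Notice how the three failure points from the primer map onto the code: if the `last_year_score` inputs are noise, if `predict_score` is miscalibrated, or if `this_year_score` is noise, every residual shifts and the rating shifts with it.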
Here is the complete text of the letter they sent to Duncan (emphasis mine):
Dear Secretary Duncan:
We, the undersigned, are writing to convey our strong support for the Obama Administration’s teacher education reform strategy as described in “Our Future, Our Teachers” and urge you to advance your policy through Executive action as quickly as possible.
Each year, some 200,000 schools of education graduates and alternative route participants are newly placed in American classrooms. Too often, they themselves and their employers discover that they are ill-prepared to teach and as a consequence the children in their classes do not have the opportunity to learn to their utmost potential. Students from historically disadvantaged groups, who year after year are taught by the least effective teachers, are by far the most frequent victims – often with life-changing consequences – of the deficiencies in our teacher preparation and placement system.
We understand that the U.S. Department of Education, with broad input from the field through a formal negotiated rulemaking process, is developing or has developed regulations that would require states to: 1) meaningfully assess teacher preparation program performance; and, 2) hold programs accountable for results. Even though this group of non-federal stakeholders failed to reach consensus, we are pleased to see they came together behind the idea of tying teacher preparation program quality directly to the student outcomes of their graduates (including outcomes for students with disabilities and English Language Learners). We urge you to exercise your rightful authority in this matter and publicly release your draft regulations so that all interested parties may offer formal and detailed comments and the process can proceed with all due haste to final rulemaking.
Administrative action is both sorely needed and long overdue. Title II of the Higher Education Act requires states to conduct an assessment of teacher preparation programs and identify and improve the lowest performers. At present, such policies are the exception rather than the rule. In the most recent year, states identified low-performing programs in only 37 of more than 1,400 institutions of higher education that prepare and train teachers. Furthermore, since these requirements were put in place more than a decade ago, 27 states have never identified a single low-performing program. Each year, teacher preparation programs receive approximately $6 billion in support from the federal government. They have both a moral and legal responsibility to carry out the Title II requirements in a way that has a positive and dramatic impact on student learning.
Right now, we don’t have good information for most teacher preparation programs on their graduates’ impact on student learning and their performance in the classroom. A few states, such as Louisiana and Tennessee, have started to look at this data and see clear differences both between and within programs. In Tennessee, the most effective programs produced graduates who were 2-3 times more likely to be in the top quintile of teachers in the state, while the least effective programs produced graduates who were 2-3 times more likely to be in the bottom quintile.
In terms of student learning, research also shows that students with the most effective teachers on average advance a grade and a half on academic assessments in a single academic year while students of similar backgrounds with the least effective teachers acquire about only half a grade level of learning in the same academic year. A recent study by TNTP showed that teachers who affected higher outcomes for students also exhibited other positive qualities, according to surveys of the students in their classrooms. Students taught by such teachers were more likely to report that those same teachers cared more about them, made learning more enjoyable, and encouraged them to make greater effort in their studies.
The ultimate goal of formal and final regulations should be to ensure that the HEA Title II requirements around reporting and accountability have the effect that they were intended to – providing meaningful data on program quality and ensuring that low-performing programs are identified and improved. This may require the investment of some additional, targeted resources, particularly to minority serving institutions to ensure that the quality and diversity of the teaching force go hand in hand.
We hope the Administration also pursues work with Congress to reauthorize the Higher Education Act. But for now, deliberate and swift administrative action on Title II regulations is the best next step to advance these aims.
Thank you for your consideration.
I’ll go through some of the issues I have with this letter:
“Too often, they themselves and their employers discover that they are ill-prepared to teach and as a consequence the children in their classes do not have the opportunity to learn to their utmost potential. Students from historically disadvantaged groups, who year after year are taught by the least effective teachers, are by far the most frequent victims – often with life-changing consequences – of the deficiencies in our teacher preparation and placement system.”
Isn’t this exactly the problem with Teach For America? Many of the new CMs struggle and feel very unprepared after five weeks of training. Of course, the coalition goes on to argue that by some metric TFA is failing in a smaller way than some of the other teacher preparation programs and is therefore worthy of being spared from the sanctions, but I’ll show the problems with those metrics later.
“We understand that the U.S. Department of Education, with broad input from the field through a formal negotiated rulemaking process, is developing or has developed regulations that would require states to: 1) meaningfully assess teacher preparation program performance; and, 2) hold programs accountable for results.”
Since ‘meaningfully’ is defined as ‘by value-added’ in all their studies, I don’t think that ‘meaningful’ is the appropriate adjective.
“In Tennessee, the most effective programs produced graduates who were 2-3 times more likely to be in the top quintile of teachers in the state, while the least effective programs produced graduates who were 2-3 times more likely to be in the bottom quintile.”
Again, the Tennessee study, which of course put TFA at the top of the performers, is based only on value-added. In that same study, TFA is a very poor performer by another metric: retention rate. I think any evaluation of a teacher preparation program should include retention, so TFA should be very careful about what it is asking for. Of course, the people making the evaluations will rig them so that retention is not a part of it, just to protect TFA, but maybe in the far future someone with more of a research background will be making those decisions.
But even with value-added as the main criterion, I’ve discovered a giant paradox in this letter. The theory of value-added is that it equalizes for student starting points: teachers who have ‘better’ students are not rewarded just because their students do well on the state test; they have to do better than the computer prediction. So if the coalition is so enamored with value-added, then they need to work out the value-added for each program. But isn’t that just the value-added of the students of the teachers trained through the program? No. (This might get a bit confusing and ‘meta,’ but try to hang with me on this one.) What they would need to develop is a model that predicts what the value-added should be for the students of a group of teachers, depending on the ‘starting point’ of those teachers. Since TFA teachers are the ‘best and brightest,’ the model would predict that they would get good value-added from their students. So just because their students get those good scores does not mean the training program has done a good job, just that it got ‘better’ raw material. By this type of calculation, the ‘value-added of the value-added’ might not be so good for TFA.
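The ‘meta’ step above can be sketched in the same shape as ordinary value-added. Everything here is hypothetical, including the credential measure and the prediction formula; the point is the structure, not the numbers: a program only gets credit for beating the prediction made from its raw material, just as a teacher must beat the prediction made from the students’ prior scores.

```python
# Toy sketch of the 'value-added of the value-added.' All numbers and
# formulas are hypothetical illustrations, not any real study's model.

def predicted_avg_va(recruit_percentile):
    # Hypothetical: a program whose recruits have stronger credentials
    # (here, an average selectivity percentile) is *predicted* to end up
    # with teachers who post higher average value-added.
    return 0.25 * recruit_percentile - 15

def program_rating(recruit_percentile, actual_avg_va):
    # The program is rated on how much its teachers' actual average
    # value-added beats the prediction from their starting point.
    return actual_avg_va - predicted_avg_va(recruit_percentile)

# A selective program whose teachers post good raw value-added...
print(program_rating(90, 6.0))  # prediction 7.5, rating -1.5
# ...can rate below a less selective program with weaker raw numbers.
print(program_rating(50, 0.0))  # prediction -2.5, rating 2.5
```

Under this kind of calculation, good raw scores from ‘better’ raw material are not enough; the program has to outperform what its recruits were already predicted to do.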
“In terms of student learning, research also shows that students with the most effective teachers on average advance a grade and a half on academic assessments in a single academic year while students of similar backgrounds with the least effective teachers acquire about only half a grade level of learning in the same academic year.”
I’ve analyzed the ‘study’ on which this claim was ‘proved’ and wrote about it extensively here. That they would throw this bogus stat into this letter is really low. The study was based on 1,920 students in Indiana, and it was actually about how children who have a lot of siblings show lower achievement than children who don’t. The comparison of effective teachers was a small afterthought, and the authors admitted that the sample wasn’t big enough or varied enough to conclude anything decisively. And while it is possible that some teachers get only 1/2 year of learning while others get 1 1/2 years, there is no mention of what percent of teachers are these superstars and what percent are these dullards, so it is not something that any policy can be based on.
So for TFA to demand that teacher training programs be improved really takes a lot of ‘chutzpah.’ They offer their trainees only ten to fifteen days of student teaching, an hour a day, with classes of ten students or fewer. Their training is really an embarrassment.
I should mention here that I have not been impressed with some of the training I’ve seen at other training programs, even ones that last a full year or more. There are others, though, that I think are very good. Certainly the most important thing is the opportunity to student teach. If I could have just started over after six weeks of my first year of teaching, I could have jumped right to the kind of success I experienced in my second year. So if I had gotten a good student teaching experience with many different groups of students, I’m sure I could have had one of those mythical ‘good’ first years. Until TFA fixes this deficiency in its training, it would be wise to just fly “under the radar,” but they seem to think they have, and will always have, some sort of diplomatic immunity. When the pendulum swings back the other way, TFA might find itself on the short end of this kind of policy.