When people try to defend the common core, they usually admit that there have been problems but claim that those problems were “in the implementation.” For once, they are right.

The Regents exams have been given since the 1930s and have changed over the years to reflect shifting priorities in what students were expected to know in math. Most years there were three different math Regents: one for 9th graders (generally Algebra I), one for 10th graders (generally Geometry), and one for 11th graders (generally Algebra II / Trigonometry). Though the names of the courses have changed over the years, and there was even a failed experiment in which the three courses were compressed into two, each spanning a year and a half, things were pretty steady from the 1930s until very recently.

I know there is a lot to criticize about the common core, especially in the elementary grades. But the high school recommendations, at least the official ones from the people who designed them, are fairly vague and not really very different from the expectations in math before the common core.

Every state had to ‘interpret’ the common core recommendations, and New York did make some odd decisions about what it felt would qualify as a ‘common core’ state curriculum: which topics to eliminate and which to add. Even with these changes, a math course post-common core is about 85% to 90% the same as it was pre-common core.

Beyond this tinkering with the topic list, another aspect of the common core was the creation of ‘common core aligned’ exams, including the new Regents.

As a teacher, I make unit tests all the time, and I take great pride in their quality. I have a testing ‘philosophy’ when it comes to a math unit test. I like to have a mix of questions: about 50% are intentionally ‘easy’ (at least for a student who has learned the material); another 35% are ‘medium,’ requiring a longer calculation, more steps, and a bit more decision making; and the remaining 15% are ‘hard,’ requiring students to answer a question they haven’t exactly seen before but can still figure out if they really understand the material deeply. My tests are out of 100 points, and if I make my test properly, there should be no need to ‘curve’ the results.

I’ve also made lots of ‘final’ exams. For those, my distribution of easy, medium, and hard questions is different. I don’t have exact numbers on this, but I’m thinking it is about 60% easy and 40% medium, with no deliberately ‘hard’ questions. I do this because students are studying a full year’s worth of material for the final and have to constantly ‘switch gears’ from one unit to another, which is already tough, so I don’t see the need for the deliberately hard questions, like ones where they have to apply something they’ve learned to a new situation. On a regular unit test, I like those in moderation, but not on the final.

Back when I took the math Regents exams as a student in 1983, 1984, and 1985, they were very ‘fair’ tests. If you knew your math and studied, you were sure to pass, and if you were really proficient, you were likely to score between 90% and 100%. There was no ‘curve’ on the test, though you were supposed to choose something like 30 out of 35 short answer questions and 4 out of 7 long answer questions, so in that way there was a small curve, but mainly the percent you got correct was your grade on the test.

Starting in the early 2000s with the Math A / Math B experiment, there seemed to be a new philosophy of Regents tests. For one thing, the tests would be a lot harder. There are different ways to make a math question ‘hard’: it can require more steps, meaning more opportunities for error; the numbers can be harder to work with because they involve fractions or decimals; the question can include extraneous, irrelevant information to throw students off; or a basic question can be made more confusing by expressing it in an unfamiliar way. So the new questions were harder, but to make up for this the test was ‘curved.’ As time went on this curve got more and more extreme, until a raw score of about 30% was curved up to a 65. Another relevant fact is that the curve is announced before the tests are scored. So it is not that they thought they had made a fair test and then realized something must be wrong when the pass rate came in low; they knew in advance that the test would be difficult enough to require such a generous curve.
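The effect of a curve like this can be sketched with a toy conversion function. The anchor points below are illustrative assumptions, not an actual published conversion chart, but they capture a mapping in which a raw 30% becomes a scaled 65:

```python
# Illustrative only: real Regents conversion charts are published per exam;
# these anchor points (0->0, 30->65, 100->100) are assumed for demonstration.
def scale_score(raw_pct, anchors=((0, 0), (30, 65), (100, 100))):
    """Piecewise-linear curve mapping a raw percentage to a scaled score."""
    for (x0, y0), (x1, y1) in zip(anchors, anchors[1:]):
        if x0 <= raw_pct <= x1:
            return y0 + (y1 - y0) * (raw_pct - x0) / (x1 - x0)
    raise ValueError("raw_pct must be between 0 and 100")

print(scale_score(30))   # 65.0 -- a 30% raw score scales to passing
print(scale_score(65))   # 82.5
```

Notice that the first segment is far steeper than the second: differences among weaker students get stretched out while differences among stronger students get compressed, which is one concrete way such a curve makes scaled scores less accurate.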

Making a test much harder and then curving it is not a sound educational idea. It makes the scores less accurate: two students can both get no credit on a question even though one may not know the material at all while the other knows it pretty well but fell for too many of the traps.

I suppose they would say that the harder tests mean they’ve ‘increased expectations’ or ‘increased rigor,’ but if you’re going to curve a 30% up to a 65, it really isn’t increasing expectations; it is just making the results inaccurate. An unfair test is discouraging to the students who prepared for a test that would accurately assess their skills. It is also frustrating for teachers who have been giving fair tests throughout the year, only to have their students do poorly on the state-made final exam.

Now I have my issues with some of the changes to the different math curricula in the transition to the common core, but as far as the Regents exams go, I am certain that it is possible to make a very fair and accurate test with those topics which would not need to be curved.

I don’t know how the Regents exams are made. I expect there’s a committee of question writers; maybe they are teachers, assistant principals, or retired teachers. There also must be someone ‘in charge’ who takes all the proposed questions, assembles the test, and makes sure that the test ‘flows’ and that there is an appropriate mix of easy, medium, and hard questions. This year, for the three math tests, this team and its leader failed to create appropriate tests. In this post I’ll look at Algebra II and maybe look at the other tests another time. My hope is that whoever is in charge of assembling these teams, especially the leaders of the teams, will choose different, more qualified people for next year’s exams.

When a teacher looks at a curriculum for a math course, he or she decides how much time should be devoted to each unit. Some units might take 5 days to complete, like rational equations, while others will take 15 days, like exponential equations. A good Regents exam will reflect this by making the number of questions on each unit proportional to the amount of time spent teaching that topic. Otherwise it is pretty frustrating for students and teachers alike to spend a month learning and mastering something only to have it come up in one 2-point multiple choice question.

On the June 2017 Algebra II Regents, I think all math teachers would agree that having just 10 points combined (out of 86 points total on the test, or about 12%) on rational equations, radical equations, systems of equations, sequences, and complex numbers was not proper representation for topics that I’d estimate make up about 25% of the course. On the other hand, exponential equations were overrepresented, accounting for about 25% of the test though only about 12% of the course. So I’d say they need to do a better job of deciding how many points to give to the different topics based on how long those topics take to teach. This was not the main problem with this test, but it is something in need of improvement.

The issue with this test is that the majority of the questions are ‘bad’ questions for one reason or another. I’m going to analyze some of these questions in this post and then look at the other questions in future posts if readers want me to continue with it.

The first part of the Algebra II Regents is the multiple choice section with 24 questions worth 2 points each for 48 points (out of 86 total for the test). Of these 24 questions, only six of them were ‘good’ (numbers 1, 6, 7, 8, 11, and 17 for those of you following along with the test at home). The other 18 had issues with them that made them not ‘Regents worthy’ in my opinion.

Exponential equations are an important topic in Algebra II. It actually takes a few weeks to teach all the different things that lead up to a question like this. The first thing wrong with this question is that it takes too many steps for a multiple choice question. One way to fix this would be to remove the +3 from the exponent. The next issue is the way the choices are phrased. Besides knowing how to solve the equation, students are also expected to use the change of base formula in order to get something that looks like the answer choices. They could have made choice (1) say log_2 6 - 3, which would have been a little better. In my opinion the answer choices should just be decimals rounded to the nearest hundredth.

Maybe they did it this way because they did not want students to be able to ‘cheat’ by just plugging the four answer choices into the equation to see which one made the expression evaluate to 48. But the students could do that anyway by just converting the four answers into decimals and then testing them each out.
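The question itself isn’t reproduced here, but an equation of the form 8 · 2^(x+3) = 48 is a hypothetical stand-in consistent with an answer choice of log_2 6 - 3 and with plugging choices in to see which makes the expression evaluate to 48:

```python
import math

# Hypothetical stand-in equation (the actual exam question is not shown
# here): 8 * 2^(x+3) = 48, whose exact solution is x = log_2(6) - 3.
def f(x):
    return 8 * 2 ** (x + 3)

x = math.log2(6) - 3      # change of base: log(6)/log(2), then subtract 3
print(round(x, 2))        # -0.42
print(round(f(x), 10))    # 48.0 -- plugging the decimal back in works
```

This is exactly the workaround described above: converting the exact-form choices to decimals and testing each one.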

So question 2 was flawed: there were too many skills being tested in one two-point multiple choice question. If I were giving a twenty-question test about just this topic, perhaps a question like this would be one of the ‘hard’ ones. But to pick one question about a topic that takes many days to teach and make it this one instead of a more straightforward question is counterproductive. You want a question that enables you to distinguish the students who don’t know the topic from those who do. That’s why you want a less involved question than this.

The math involved in this question is fine: the distributive property followed by simplifying the powers of *i*. But as a math teacher, this question just looks ‘weird.’ I’ve rarely seen *x* variables and the imaginary number *i* put together in quite this way. My feeling is that, by convention, the *i* would go between the coefficient and the variable, but I’m not sure. Maybe the *x* should be a *z*, since the question deals with complex numbers. There’s just something ‘off’ about it. It’s not that students couldn’t easily practice questions like this and get them correct if a teacher knew they might be on the test; it’s that I don’t think I’ve ever encountered an expression like this in any math I’ve done, so it would be a pretty forced ‘application’ of complex numbers. If the goal is to show that students know the powers of *i*, I think a better question would be one without the *x* in it, maybe something like (2+3*i*)(5-2*i*).
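As a quick check, that suggested alternative multiplies out cleanly; Python’s built-in complex type (which writes the imaginary unit as j) confirms it:

```python
# (2+3i)(5-2i) = 10 - 4i + 15i - 6i^2 = 16 + 11i
product = (2 + 3j) * (5 - 2j)
print(product)            # (16+11j)

# The powers of i cycle with period 4, which is the skill such a
# question would actually test: i^2 = -1, i^3 = -i, i^4 = 1.
assert 1j ** 2 == -1 and 1j ** 3 == -1j and 1j ** 4 == 1
```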

This is a question that requires a graphing calculator. They want to see if students know how to solve equations like this by graphing both functions and using the ‘intersect’ feature of their graphing calculators to find the solution.

When they graph the two functions, though, the only answer that seems reasonable is choice (3), since these curves do intersect at (-0.99, 1.96). But choice (3) is actually not the correct answer here. The solution to an equation like f(x)=g(x) is not an ordered pair, just the x-value. So if -0.99 were a choice it would be correct, but since the choice includes the y-coordinate too, it is not considered correct.

So what is correct? Students would have to realize that on the 10 by 10 grid the calculator defaults to, there were only two intersections. If they zoomed out to a 20 by 20 grid, there would be a third intersection point at (11.29, 32.87), which makes choice (4) look good. But once again an ordered pair wouldn’t be right, so fortunately there is choice (2), which is just 11.29.

It was not a good decision to make a question like this and not have the correct answer in the basic 10 by 10 grid. That’s the first fix I’d make to this question if I were involved.

This question could also have been improved by making all the choices single numbers with no ordered pairs, since I don’t think teachers spend a lot of time on the philosophical discussion of whether the solution to an equation can be an ordered pair or is just a number. A discussion like that would be a bit too esoteric and boring for the vast majority of students.

Another option is to make the question ask which is not a solution and have the x-coordinate of the three intersection points plus one number that is not a solution.
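The underlying skill here (find where f(x)=g(x), and report only the x-value) can be illustrated without a graphing calculator. The functions below are simple stand-ins, not the ones from the exam; bisection on the difference h(x) = f(x) - g(x) finds each intersection’s x-coordinate:

```python
def bisect(h, lo, hi, tol=1e-9):
    """Find a root of h on [lo, hi], assuming h changes sign there."""
    assert h(lo) * h(hi) < 0, "h must change sign on the interval"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if h(lo) * h(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Stand-in functions (NOT the exam's): f(x) = x^2 - 2 and g(x) = x
# intersect where x^2 - 2 = x, i.e. at x = -1 and x = 2.
f = lambda x: x * x - 2
g = lambda x: x
h = lambda x: f(x) - g(x)

x1 = bisect(h, -5, 0)
x2 = bisect(h, 0, 5)
print(round(x1, 4), round(x2, 4))   # -1.0 2.0
```

The solutions are the x-values -1 and 2, not the ordered pairs (-1, -1) and (2, 2), which is exactly the distinction the exam question turned on.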

OK, that’s all I’ve got in me for now. I could continue this kind of analysis (let me know if you think I should) and show how about two thirds of the questions on this test reveal a lack of understanding, by the team that created it, of what the purpose of a final exam is, what sorts of questions are appropriate for a test like this, what sorts of answer choices are appropriate, and, in general, what makes a high quality test.

The issues with this test actually have nothing to do with the ‘common core.’ Very good tests could be created based on the common core curriculum for Algebra II; teachers who taught this course created good tests on it throughout the year. The Regents should be a test that students who know their stuff can pass without a huge curve to compensate for the low quality of the questions. Those who are paid to create these tests need to take this responsibility more seriously, or they should get more competent people to make the tests.

Question 4 is very odd. It’s pretty easy to guess the right answer to this question by only knowing very basic algebra – three answers are very obviously not possible. You don’t want people who know the right answer to get the same reward as people who can guess the right answer without knowing.

I think it’s rather poor not to define what i is – is it a plain old variable or an imaginary number? Or to give i^3 in an expression. It’s like giving x^2/x in an expression.

Maybe they have to curve because the test gives bimodal results. Maybe there’s a hump at 30 (and how far away is that mark from guessing?) and a hump at 80.

“Maybe they have to curve because the test gives bimodal results. Maybe there’s a hump at 30 (and how far away is that mark from guessing?) and a hump at 80.”

It is more than likely that the distribution of scores is non-Gaussian. I don’t know if this is why the NY Regents feel the need to massage scores [and I agree with Gary that we should only use raw scores generated from straightforward exams], but my own experience teaching chemistry and physics informs me that saddle-shaped distributions start to crop up well before the end of the first semester. My own students’ grades have always mirrored their math (usually algebra II and trig/pre-calc) grades, so I’m confident that the same phenomenon occurs there as well.
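A quick simulation, with parameters that are pure assumptions (60% of scores clustered near 30, the rest near 80, matching the humps suggested above), shows how a bimodal distribution can hide behind an ordinary-looking mean:

```python
import random

random.seed(0)
# Assumed two-hump mixture: 60% of students score near 30, 40% near 80.
scores = [random.gauss(30, 8) if random.random() < 0.6 else random.gauss(80, 8)
          for _ in range(10_000)]
scores = [min(100, max(0, s)) for s in scores]       # clamp to 0-100

mean = sum(scores) / len(scores)
low_hump = sum(1 for s in scores if s < 55)          # size of the low cluster
print(round(mean, 1))                    # close to 50, between the two humps
print(round(low_hump / len(scores), 2))  # close to 0.6
```

The mean lands near 50 even though almost no student actually scored near 50, which is one reason a single uniform curve fits this kind of distribution so badly.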

I don’t see anything wrong with the questions presented above and disagree with Gary’s suggestion that the exponential question’s answers be given in decimal form (that leads directly to kids simply plugging in the choices to arrive at the correct answer). However, Gary is correct to say that standardized exams need to accurately reflect the material taught and mainly consist of items of average-level difficulty. I would add that students also need to be given an adequate amount of time.

Although I live and teach in California, I use old Regents’ Exams to test my own regular and honors level students (in addition to my own free response exams). The NY Regents’ Exams are not perfect, but their old exams have always seemed to be fairly honest and are good benchmark exams for teachers to utilize. They are decidedly more useful to me than anything that has come out of the Common Core, which has, unfortunately, watered down California’s Science Standards.

Keep up the fight, Gary.

Regents Exams came into existence in the 19th Century:

https://en.m.wikipedia.org/wiki/Regents_Examinations

In my opinion, the scoring keys on the Math Regents are nothing short of academic fraud, and the NYS Education Department should be reported to the NYS Governor’s Office, NYS Senate Education Committee, and NYS Assembly Education Committee for encouraging this type of marking system.

Can anyone explain the major discrepancy between grade 8 CC math scores and grade 9 CC algebra I scores? I am aware that in many districts accelerated students skip the grade 8 test but that alone cannot explain the very significant differences in the non-accelerated scores in these two grades. Could 9th grade math teachers really be that much better (Ha!) or is it simply a matter of cut-score jujitsu?

If you look at the standardized math scores of a very average, small, grade 7-9 class (suburban or urban…doesn’t matter, where it’s not at a specialized school), you’ll get a distribution of raw scores that resembles this (multiple choice exam, 4 answers per item, 100 points possible):

16, 22, 23, 27, 28, 28, 29, 30, 31, 33, 34, 37, 38, 45, 57, 68, 84, 92

Calculating it out, the average is about 40 and the median about 32…better than guessing, but not by much.
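A quick check of the numbers listed above:

```python
scores = [16, 22, 23, 27, 28, 28, 29, 30, 31, 33, 34, 37, 38, 45, 57, 68, 84, 92]

mean = sum(scores) / len(scores)                          # 722 / 18
mid = sorted(scores)[len(scores) // 2 - 1 : len(scores) // 2 + 1]
median = sum(mid) / 2                   # even count: average the middle two
print(round(mean, 1))   # 40.1
print(median)           # 32.0
```

The two high scores (84 and 92) pull the mean up to about 40; the median of 32 is a better picture of the typical student, and on a four-choice exam it is not far above the 25% expected from random guessing.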

Pathetic, but true. When I used to analyze my old, inner-city school’s math data, the average scores were in the 28-32% range…barely greater than random guessing.

Now take the top two or three kids out of that class (essentially outlier kids) and place them into a faster-paced class (algebra 1 instead of 8th grade math, geometry instead of algebra 1) and your average score shifts even closer to 25% (random guessing).

That is likely what you’re seeing, as higher-achieving 8th graders are yanked out of general math classes and placed into higher level classes, while high schoolers are simply programmed according to what the kids passed the year before.

It’s sad, but that accurately reflects the state of our nation’s educational system.

An editorial in New York _Newsday_ asks similar questions about curving the grades:

http://www.newsday.com/opinion/editorial/new-formula-for-n-y-regents-exams-1.13865532