Thanks to Tina, we’ve got this great example of a tiny little error that crops up when working with complex numbers. Here’s my take on it: there’s no way that this kid would make this mistake if their problem was just “Simplify the square root of negative 4.” When the skill is laid out in such a direct way, it’s very clear what the student is supposed to do. But when the skill is embedded in a much more complex problem, the student “handled” the negative root by realizing that this was a context that deserved a complex number. Happy and satisfied that they noticed and “handled” every aspect of the problem, the student moved on.
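(For anyone following along, the isolated skill in question is just the standard simplification:

```latex
\sqrt{-4} = \sqrt{4}\cdot\sqrt{-1} = 2i
```

— which, on its own, this student could almost certainly do.)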
I like calling these sorts of mistakes “local maxima” mistakes, and I think they’re fairly common. To me, the importance of these sorts of mistakes is that they reveal the problem with testing any skill in isolation of others. I’m <i>absolutely sure</i> that this student could simplify the square root of negative four if plainly asked to. But that doesn’t mean the student was able to use that skill in this context, where there are many more things to juggle.
To me, this means that you can’t really assess any individual skill in that sort of isolation. Instead, I’d prefer an assessment system that gives students a bunch of chances to use a skill — unprimed — in the context of a fairly difficult problem. If the student can simplify negative radicals in 3-4 more involved problems, then I’m pretty confident that this kid has that skill down.
2 replies on “Negative Square Roots”
I also wonder: if this child were told there was a mistake in their work, would they even notice the problem with simplifying the radical? Do our assessments give students the opportunity to demonstrate an extremely valuable skill – that of fixing errors in one’s own work, once one is aware of having made a mistake?
“To me, the importance of these sorts of mistakes is that they reveal the problem with testing any skill in isolation of others.”
Agree completely! This goes hand-in-hand with my pet peeve about teaching “how” without teaching “why.” My kid’s classroom experience is “here’s the formula … now plug and chug.” Such teaching survives in part because the tests ask questions that can be answered by remembering the formula and knowing how to plug in values. The school does well on the metrics, and kids are presumed to have learned what they were expected to.
The best problems make us think (vs. make us reflexively start following a practiced algorithm) and throw multiple concepts in together to make connections. I also like them more when they are not multiple-choice type and are not timed.