Decimals are hard.
What would we even want the student to do here if he’s working in decimal? How do the standard multiplication algorithms handle something like a repeating digit?
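To make that question a little more concrete, here's a minimal sketch of my own (not anything the student did, and the choice of 1/3 is just for illustration): the usual digit-by-digit algorithm starts at the rightmost digit, which a repeating decimal doesn't have, so one way out is to leave decimal form entirely and compute with the exact fraction behind it. In Python, the `fractions` module does that:

```python
from fractions import Fraction

# The standard multiplication algorithm starts from the rightmost digit,
# which 0.333... doesn't have. Working with the exact fraction sidesteps that.
one_third = Fraction(1, 3)          # the exact value behind 0.333...
product = one_third * one_third
print(product)                      # 1/9, i.e. 0.111...

# Truncating the repeating decimal gives something close, but not equal:
approx = 0.333 * 0.333
print(approx)                       # 0.110889 -- the repetition is lost
```

That gap between 1/9 and 0.110889 is, I think, the same gap the student fell into.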
That’s what I’m getting out of this mistake right now: the deviousness of decimal representation, and the way it can obscure numerical properties.
How about you? What do you make of all this?