The submitter reports: “There was some controversy in our department over the fact that the y axis has been labeled as frequency. The y axis should be labeled frequency density. However the majority of students did manage to ignore or relabel the axis correctly. This student has become confused by the whole thing.”
Ignoring the mislabeling of the problem, what evidence do we have about student knowledge?
Thanks to Ian Hopkins for the submission. Follow him on Twitter too.
7 replies on “Histograms and Frequency”
This teacher has become confused by the whole thing too. If the vertical axis is labeled frequency, then gosh, I would expect that to be the setup. I think it’s a little harsh to give students an incorrect setup and then mark them off for giving an incorrect answer. I absolutely can’t ignore this; it’s the crux of the issue. If you’re going to use uneven bin widths and expect people to read frequency off the area, then you have to label the chart to tell people that. Personally I had to go to BBC Bitesize to check that I knew what I knew.
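To make the area-versus-height point concrete: with uneven class widths, the bar height on a proper histogram should be frequency density (frequency divided by class width), so that the bar’s area, not its height, gives the frequency. Here is a minimal sketch with made-up bins and counts (all numbers below are hypothetical, not taken from the problem in the post):

```python
# Hypothetical data illustrating frequency density on uneven bins.
# On a frequency-density histogram, bar height = frequency / class width,
# so bar AREA (height * width) recovers the frequency.

bins = [(0, 10), (10, 15), (15, 30)]   # uneven class widths, in minutes
frequencies = [20, 15, 30]             # hypothetical counts per class

for (lo, hi), freq in zip(bins, frequencies):
    width = hi - lo
    density = freq / width             # this is the bar height to plot
    area = density * width             # multiplying back gives the frequency
    print(f"{lo}-{hi} min: width={width}, density={density}, area={area}")
```

The point of the convention is visual honesty: a wide bin with the same count as a narrow bin should look shorter, not equally tall, or the eye overweights it. Labeling that axis “frequency” instead of “frequency density” is exactly the confusion the commenters are objecting to.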
The student knows how to follow instructions, no matter how impossible, and is doing their best. Complaining about 3.2 instead of 3.1, and about the student trying to fit “frequency” onto the vertical axis, is nitpicking after the heinous crime of the problem. The student needs to learn to rebel vociferously, but that would probably not be encouraged.
This makes me sad.
Who makes a histogram like that? Why not just give the true histogram that is apparently behind the scenes, namely the one with 1-minute bins? Then ask for the frequency axis to be scaled by giving info like “there are a total of 20 students with 0 < t < 10” and so on?
If the answer to “where in the world do you ever see histograms with such crazily uneven bins” is “in other math problems of this type,” then, well, there you go.
Too clever by half, as the saying goes. I am with crazedmummy on this. I hope those students do their best to stick it to the man. The point of a histogram is to convey information in a clear and meaningful way. That histogram is beyond ridiculous on so many levels.
I’m not happy with the tone of the above discussion. Why are we so careful to use constructive language when talking about student work if we’re going to take out the claws when dealing with teachers?
I take the point about my snarky tone. I guess where that tone comes from is a feeling that students are different from teachers and so I have different expectations.
Assessment is part of a feedback loop. It does not always indicate anything about student learning. When an assessment gives surprising results, it could be telling us that the students’ learning/thinking is “off” somehow, OR that the instruction was “off”, OR that the assessment itself is flawed or invalid or otherwise “off”. (“Off” is shorthand for more constructivist ideas about the present state of understanding.)
The premise of this blog seems to be “here’s an assessment that’s pretty valid but here’s weird student work so let’s figure out what the student is thinking” and that’s fine. But by posting here it seems that teachers are positing that the premise is correct (that the assessment itself is not flawed). I guess it’s that part that I’m taking issue with — the whole setting-aside that the question itself might be to blame. The wording of this post explicitly brings up the possible flaws, then asks us to set them aside as if they’re not important.
That doesn’t justify my snarkiness. But it is disappointing that the teacher is reflective enough to post here and be engaged but not reflective enough to turn from “what was the student thinking” to the deeper “What was *I* thinking? What was my goal? What did I really want? Is this assessment getting at all those things?”
Calling the student “confused” when the student appears to have made a very good and logical attempt (and was brave enough to question the problem in writing!) when the problem is admittedly debatable…that pushed a button with me. It’s not surprising that it pushed a button with prior commenters.
Sorry for apparently attacking a person nice enough to share their work, but I spend a lot of my time looking critically at what rubbish I have given students. Especially when they can tell me what stupidity I assailed them with, I am inclined to listen to their alternate variations.
Most of the posts M Pershan has given us have been very useful in looking for errors from students. I am using them in my class, for students to learn to look for mistakes in a non-threatening way (the mistakes are not theirs).
This was not one of those. The student did their best, and yet received big kisses all over the place. I am from England, and I have seen US marking, where you don’t usually give a check for correct. This test also uses the word “marks” for score, so I assumed it is from the U.K. or Canada. I concluded it is a printed test, supplied by some government entity, due to the line with “END” on it and the total marks. Therefore the teacher is unlikely to have generated it.
I stand by my opinion that this student did not deserve the 1/4 score, and the question as graded does not assess the knowledge of the student.
By the way, if you look at the Bitesize link, you’ll see that this type of histogram is part of the math requirement in the UK. I’ve never seen it in the US. Sorry, Canada, Australia, New Zealand, I have no idea what your requirements are.