For some
time, I’ve been considering a new approach to teaching mean absolute deviation
(MAD). This is a new concept for 6th grade under the Common Core standards (CCSS.MATH.CONTENT.6.SP.B.5.C). The lesson in the students' textbook is not terribly helpful: it doesn't give any purpose for finding the MAD of a data set, and the directions for doing so are somewhat intimidating. My hope is that I can help students intuitively derive the MAD on their own, or at the very least, give them the motivation to learn it as a way to identify which set of data has more spread.
Last year, I had the same hope of creating this intuition by having students construct equilateral triangles. The idea was borrowed from an activity I worked on with Dan Meyer, in which students had to identify which of four triangles was the most equilateral. I had students create their own triangles and measure the lengths of their sides. We then compared the triangles and their measurements to determine which was best.
It was my hope that students would
see the data and have some basic understanding of what to do with it.
Unfortunately, I only had one student in my five classes really figure it out
without a lot of assistance from me. It was obvious that, if I was going to do
this lesson again, I would have to find some way of creating an easier path for
my students to find the MAD. To build investment and help students find meaning, I would again need student-generated data, but data that was easier to work with. Thinking about absolute deviations would have to come naturally, and the mean of those deviations would have to emerge as the obvious way to compare data sets.
I created a game for students to play that would require the MAD to
determine the winner. Of course, I couldn’t tell the students that this was how
the winner was determined. They would have to come up with this method on their
own. I called for two volunteers to come up to the front of the class and
explained that they would be rolling two dice. Whoever rolled a sum closest to
seven would be the winner. One student rolled a five and the other rolled a ten. I placed their sums on a number line at the front of the room for everyone to see and asked who won and how we knew.
There was some variation in the answers, but the general idea was that one sum was closer to 7 than the other. One student was more specific: five is two away from seven and ten is three away from seven, so five is better. I tried to impress upon my students that quantifying how far each number was from 7 would really help them as we worked through these different scenarios.
I asked the students to roll
again, but this time I wanted them to roll twice. The boy rolled a seven and a
four. The girl rolled a twelve (already losing) and a ten.
It seemed
obvious who won, but I asked students to write down a sentence or two telling me who won and explaining how they knew. There were a couple of ideas about this,
but no one was really thinking about mean absolute deviation at this point. To
their credit, it would not make sense to do it here. There are much easier ways
to compare these sums. What I did want students to see is that the boy’s two
sums deviated from seven by three and zero. The girl’s two sums deviated by
three and five. The sum of those deviations was enough to determine the winner.
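(If you want to see that scoring idea spelled out, here is a minimal sketch in Python; the `total_deviation` helper is my own name for it, not something we wrote in class.)

```python
# Each player's score is the total distance of their rolled sums from 7.
def total_deviation(sums, target=7):
    return sum(abs(s - target) for s in sums)

boy = [7, 4]     # deviations 0 and 3 -> total 3
girl = [12, 10]  # deviations 5 and 3 -> total 8

print(total_deviation(boy), total_deviation(girl))  # 3 8 -> the boy wins
```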
One girl said that she determined who won by taking the average of the sums. I thought this was a neat idea; it hadn't occurred to me to think about it this way. The boy's average was 5.5 (1.5 away from 7) and the girl's average was 11 (4 away from 7). This seemed to validate our belief
that the boy won. I asked the two students to roll again and again had the
students write about which person won. The girl with the averaging method used
it again, and again it seemed to work. I then created a hypothetical situation
where the girl would roll two sevens (best-case scenario) and the boy would roll a two and a twelve (worst-case scenario). I asked, "Who won?"
Before
anyone even answered, I could see some students making the connection that the
average was not going to work every time. In this case, the sums both averaged
out to be seven, indicating a tie, but the boy’s sums were obviously worse than the girl’s.
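(A quick sketch of the failure, in Python; the numbers come straight from the hypothetical above, and the code is mine, purely for illustration.)

```python
def mean(sums):
    return sum(sums) / len(sums)

girl = [7, 7]   # best case: both sums hit 7
boy = [2, 12]   # worst case: both sums far from 7

print(mean(girl), mean(boy))  # 7.0 7.0 -> the averaging method calls this a tie

# The absolute deviations from 7 still tell the players apart.
print([abs(s - 7) for s in girl])  # [0, 0]
print([abs(s - 7) for s in boy])   # [5, 5]
```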
I explained that the students would now be placed into groups to create their own data. With one student rolling the dice for me, I showed the class how to record the results. After ten rolls, I had a line plot that looked like this:
After
students finished creating their own line plots, they brought them up to me and
I recreated them in Microsoft Excel:
With this data, I asked students
to rank the line plots from best to worst. Three groups volunteered their rankings:
We noticed that we were in agreement about ranks 1, 2, 3, 7, and 8, but we had trouble figuring out how the middle groups performed. I placed two of these middle groups' line plots on the screen and asked all students to figure out, mathematically, which one was better.
From here, I got a lot of
interesting ideas from the students. One girl tried making box and whisker
plots of the data. This made sense because we’ve been using box and whisker
plots lately to describe spread by looking at the range and interquartile range.
(The following day we had a conversation about how box and whisker plots can be
misleading when trying to understand spread.) Another student had an idea to
compare the sums from each side. Another girl
tried to develop a point system where a sum of 7 would be worth 7 points, 6 and
8 would be worth 6 points, 5 and 9 would be worth 5 points, and so on. The
point values were somewhat arbitrary, but she was really developing a good way
of quantifying the spread. After sharing this method with the class, another girl suggested using the distances to seven instead, just like we did in the beginning of the class. Rolling a 7 would be worth zero points, rolling a 6 or 8 would be worth 1 point,
and so on. I didn’t mention this to the class at the time, but this girl was describing the absolute deviations.
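(Her rule amounts to scoring each possible sum s by its distance from 7, that is, |s - 7|. A one-line sketch in Python, just for illustration:)

```python
# Score each possible two-dice sum by its distance from 7.
scores = {s: abs(s - 7) for s in range(2, 13)}
print(scores)
# {2: 5, 3: 4, 4: 3, 5: 2, 6: 1, 7: 0, 8: 1, 9: 2, 10: 3, 11: 4, 12: 5}
```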
I wrote down all of these
deviations with the class and asked, “What’s next?”
Box and
whisker plot girl asked if we could add all of these deviations together and
compare. So we did. We found that Amari’s total sum of these deviations was 37
and Avarey’s was 28. Most of the students felt that Avarey was clearly the
winner. Amari quickly raised his hand to protest, “But I rolled
more times than her! That’s not fair!” At this point, many students suggested
that either Avarey’s group be forced to roll an equal number of times, or we
remove some of Amari’s data. I
asked them to consider how we compare different hitters in baseball. If one player gets 78 hits in 100 at-bats and another gets 140 hits in 200 at-bats, we don't force the first player to take 100 more at-bats to even things up; we divide to compare their rates. After a couple of students made guesses about how to do this, a girl suggested we find the
mean of these differences. We quickly divided each value by the number of rolls
each group made and found that, on average, Amari was 1.85 away from 7 and
Avarey was 1.87 away from 7. We can say that Amari’s rolls were closer to 7
(less spread), but just barely.
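(Here's that arithmetic, sketched in Python. The roll counts of 20 and 15 are my inference from the numbers above, since 37 ÷ 20 = 1.85 and 28 ÷ 15 ≈ 1.87; I didn't record them directly.)

```python
# Average distance from 7: total deviation divided by number of rolls.
def mean_deviation(total_deviation, n_rolls):
    return total_deviation / n_rolls

print(mean_deviation(37, 20))            # 1.85  (Amari)
print(round(mean_deviation(28, 15), 2))  # 1.87  (Avarey)
```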
We then reviewed how the students
ranked each of the line plots and compared this against the mean absolute
deviation for each (picture below). It was
interesting for students to see how some of their predictions came true and how
they were completely wrong for others. Nevaeh’s data is a good example of this
– students overwhelmingly thought that her group came in last place, but her
score indicated that she was actually in 3rd place. This misplacement happened because students were thinking less about spread and more about the total number of rolls in the 6-8 range. Because Nevaeh didn't roll as
often as the other groups, it was assumed that she lost because she didn’t roll
very many 6’s, 7’s, or 8’s. However, she only had one sum that was far from the
center. (There is probably a good lesson
here about how the amount of data collected affects comparisons of data sets,
but there was no time for me to discuss it.)
Now that we had some way of
comparing the data, I asked students to collect one more data set. Again, they
had to roll their dice and write down the sums. The only difference was that
they had to find the absolute deviation from 7 for each roll and take the
average of those deviations. Students turned their data in to me and I quickly
checked that they calculated the mean absolute deviation correctly. Again, we
compared line plots and checked those comparisons against the MAD of each data
set.
During the next class, we took
some quick notes on how to calculate the MAD (this time using the mean of the
data set as our central point), constantly referring back to the work we did
the previous day. Students practiced by finding the MAD for a made-up set of data. Finally, they calculated the MAD of average high temperatures for different
cities in the U.S. (This came out of necessity. I explained that the
temperatures in Pottsville, PA varied way too much and I needed to move where
it’s warm all year round. As they were anxious to see me go, they had quite a
few suggestions.)
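(For completeness, here is the version we took notes on, sketched in Python: find the mean of the data, take each value's absolute distance from that mean, and average those distances. The temperature data below is hypothetical, not the class's.)

```python
# Mean absolute deviation: the average distance of each value from the mean.
def mad(data):
    center = sum(data) / len(data)
    return sum(abs(x - center) for x in data) / len(data)

# Hypothetical monthly high temperatures (deg F) for a northeastern town.
highs = [33, 37, 48, 60, 71, 79, 83, 81, 74, 62, 50, 38]
print(round(mad(highs), 2))  # 15.39 -> temperatures swing a lot over the year
```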
Overall, I’m pretty happy with
how this lesson went. I think it was worth building the context over time and
it pushed them to really connect the visual (line plot of the data) with the
statistic. When we calculated the MAD for the different cities, students
already had an intuition about which cities would have a low MAD and what that
number actually means. I feel confident that I will keep this lesson for next
year with some minor adjustments.
Special thanks to Bob Lochel and
Tom Hall, two math teachers who were nice enough to exchange ideas with me
about this through email. I'd also like to thank Stephanie Ziegmont for helping develop some of the writing components of the lesson.