Thursday, October 29, 2015

Can you remember more than 7 digits?

The other day, I came across this website that tests your ability to remember digits.


I thought it was interesting that, according to the website, the average person can remember 7 numbers at once. I've heard this before. This is supposedly the reason why telephone numbers are 7 digits long. At this point, I'm sure you're wondering if you are an "average" person. So, go try it...http://www.humanbenchmark.com/tests/number-memory.

Did you do it? I did it a few times myself and the farthest I got was 12 digits (my worst was 10). This probably means that I'm a superhuman or I have evolved past the rest of you. I'm sorry, but your days are numbered. (Numbered! Get it? No, of course you don't.)

I was still curious about this 7 digit claim, so I posed the problem to my students. Can the average person really only remember 7 numbers?

I had all of my students load the website and play along. After everyone was finished, I recorded the results and made a line plot with the data.

I asked the students to talk to their neighbors about whether or not this data confirms that the average person can remember 7 digits. Overwhelmingly, they felt pretty good about it, especially since the median of the data was 7. (I should note that sixth grade standards are all about analyzing distributions.) They were also able to see that more than half of the students could remember at least 7 digits, but less than half could remember 8 or more...another reason to believe the claim that the average person can remember 7 digits.

We then discussed strategies for memorizing the numbers. Some students mentioned that they chunked the digits...remembering 62 as "sixty-two" instead of "six-two". Some of them would practice typing the digits to build motor memory.

I also shared a couple of my own strategies...sometimes I could associate a number with something. For instance, once I saw a 53 and, for whatever bizarre reason, remembered it as Bobby Abreu's jersey number. Once I had that image of Bobby Abreu in my head, I stopped worrying about remembering 53. For the longer sets, I would repeat the second half of the digits over and over again while staring at the first half. This way, I was relying on both my visual and auditory memory.

Now that the students had some new strategies, I gave them another chance to increase their digits.


As you can see, the data changed, but there really wasn't much improvement. Many students did worse, while a few did marginally better. We couldn't make much sense of it, though we suspected that some of these strategies would need to be practiced before we'd see any results.

At this point, it would have been nice to keep practicing to see if we could improve, but my period is only 37 minutes long. I also had a couple of situations where students figured out they could copy and paste their answers. Cheating would be difficult to monitor.

Side note: Some of my students with IEPs could only remember three digits. This was consistent each time they made an attempt. This was eye-opening for me...when short-term memory is so weak, learning anything must be a huge struggle.


Saturday, October 24, 2015

Why, Common Core? Why?

The other day, I was checking students' work on mean, median, and mode. One of the problems involved finding out what grade you would need to get on a fourth test to have an average of 85 for the class. It's basically a mean problem in reverse, and for students who have never solved this problem, it can be challenging.
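
To make the "reverse" part concrete with some made-up scores: if the first three tests were 82, 88, and 79, then an 85 average over four tests requires 4 × 85 = 340 total points. The first three tests account for 249 of those, so the fourth test would have to be 340 - 249 = 91.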

One of my students was struggling with this and wrote in her notebook, "WHY COMMON CORE WHY". I laughed and assured her that this problem has been around a lot longer than Common Core. What I really found amusing was that, in terms of content, this sixth grader really hasn't been exposed to some of the more unique things about Common Core. Most of that is happening in elementary school and Pennsylvania only switched over last year, when she was in fifth grade.

In all likelihood, this girl's hatred towards Common Core probably stems from something she overheard her parents say. And now, every time I present her with a challenge, a little voice in the back of her head is going to tell her that this problem is Common Core and it's not really important for her to figure it out. And that's all she needs...another reason to give up.


Tuesday, September 29, 2015

Warm-Ups with a Purpose

Warm-ups last year:

I would display four or five review problems on the Smartboard for students to work through as I took attendance. I would then walk around the classroom to see how students were progressing, but I could rarely help more than a few of them, and I never had a good sense of how the class did as a whole. We would then review every problem, which was time consuming and not always helpful. The next day, I would create a few more warm-up exercises, but I never had a clear picture of what my students were still struggling with or why.

Warm-ups this year:

I was asked to move into a new classroom where every student would have his or her own computer. Over the summer, I looked at several websites that would help me use formative assessment on a daily basis. I was happy to find Socrative (which is FREE!) and I use it every day for my warm-ups. Students can quickly log in and start working on the exercises. I can create multiple choice, true/false, or short answer questions, and as students are answering them, I can see their responses live! It looks something like this...


This is kind of a big deal. As soon as a student gets something right or wrong, I know. And there's a lot I can do with that information. During those exercises, you'll routinely hear me say things like...

"Mary, awesome job on that last one. Everyone's having trouble with it."

"Almost everybody's getting #1 wrong. Make sure you read it carefully!"

"Sheri, that last one...how are you supposed to set up an addition problem with decimals?"

"Fawn, you seem to be having trouble with greatest common factor. Can I see your work for that last problem?"

"Hey, Andrew. Where's your notebook? Stop trying to do the work in your head. You're not Rain Man!"

After the students finish the exercises, I share the results with them and I let them tell me which ones we need to review (and which ones we don't). We look at commonly selected wrong answers and think about what mistakes students were making.


At the end of the day, I can throw this data onto a spreadsheet (shown below) and decide which topics/skills students have a firm grasp of and which need further review. I can see how students progress in some skills over time and share that as a model of learning.


I love that students are getting instant feedback. I love that I have evidence of their growth. I love that we can review results as a class and, rather than students only focusing on their own mistakes, we can ask ourselves, what are we, as a class, doing wrong? What are we, as a class, doing right?

Sunday, September 20, 2015

Quizzes without Grades

A few weeks ago, I blogged about how I was going to stop putting grades on quizzes. This decision was heavily influenced by Dylan Wiliam's ideas from his book, Embedded Formative Assessment. I also need to mention that Ashli Black has been very helpful in explaining how she does comments-only grading and in pushing me to design a system of grading that works.

This past week, I was finally able to test-run this idea after the students took a quiz on the Order of Operations. I explained my reasoning to the students and, for the most part, they seemed to be okay with it. I told them that this creates a better working environment where students, no matter how well they did, can feel less embarrassed about their performance and can work together to identify and correct their mistakes. I marked the quizzes by circling the problem number for every wrong solution and then color-coding three problems that I wanted the student to correct. If a problem had a pink mark, they had to identify their error. If it had a purple mark, they had to rework the problem. If a student did not get anything wrong, I gave them a more challenging problem to solve. Finally, while grades were not written on the quizzes, they were calculated and recorded in the online gradebook so parents and students could see them at home.

Overall, I thought it went really well. The students had about 10 minutes to work alone or together on their mistakes before handing the quizzes back to me. Those who did not finish had extra time overnight to do so.

The next day, I used Socrative (an online quizzing tool) to ask my students how they felt about my "no grade" policy. The good news is that 70% of my students either liked it or didn't care, and more of them liked it than disliked it. However, that still leaves 30% who didn't like it. While it was not obvious in their responses, I believe this frustration comes from not having the instant gratification of knowing your grade. This impatience isn't unexpected...many times, students will ask me if I've graded their quizzes ten minutes after handing them in.

In the end, I think the benefit of students revisiting their work and working together to fix mistakes outweighs the annoyance of not getting their grades right away. I'm hoping that, over time, students will begin to also see that benefit.


As a side note, I should say that I'm not really doing "comments-only grading". I had considered writing out comments, but it occurred to me that most of what I'd be writing could later be discovered by the student upon more reflection or figured out with help from a classmate. I believe that writing comments on every wrong answer would have been extremely time consuming and would have deprived my students of discovering their own mistakes.


Update 2/15/16:

Carolina Vila (@MsVila on Twitter) asked me if I have kept up with this system. As with anything I experiment with, I look for more efficient ways to do things. (Okay, maybe I just got lazier.)

I mentioned that I color-coded problems at the beginning of the year and that these colors would tell students how I wanted them to reflect on each problem (identify the error/explain what they did wrong, or rework the problem). After doing this a few times, it just seemed to make more sense to have students do both things. On a separate piece of paper, they would have to tell me which three problems they chose to rework, tell me (in sentence form) what they did wrong, and finally, rework each problem.

For students who got everything right, I backed away from trying to give them a more challenging problem and instead asked them to help other students make their corrections.

Students would turn in their corrections along with their quiz, I would check to see that it was done, AND THEN, I would write their grade on the quiz to give back to them the next day. When I first started taking grades off of the quizzes, I had hoped that I could just put their grades online for them to check, but I ran into too many issues where students and parents couldn't check the grades online because they lost their passwords or didn't have internet access at home. Once I started putting the grades back on the quizzes, students complained less and respected the correction process more.

On the student side, one of the biggest misconceptions was that making quiz corrections would improve their quiz grade. I explained that they would get credit for making the corrections (similar to a homework grade), but that their quiz grade would remain the same. The only way their grade would improve would be to retake the quiz, and the only way a student would be allowed to retake a quiz is if he or she made the corrections on the first quiz. Altogether, there is plenty of incentive to make these corrections.

Monday, August 3, 2015

Spaced Practice and Repercussions for Teaching

I've been reading John Hattie's book, Visible Learning, in which he ranks the effect sizes of different strategies that help student achievement. One strategy that ranks pretty high on the list is giving students spaced (or distributed) practice as opposed to massed practice. In other words, rather than having a student practice something over and over again in one day, it is much better to spread that practice out over multiple days or weeks. (You can read one of these studies here.) The main benefit is that spaced practice helps with long-term retention.

While this research certainly gives some justification for providing students with multiple opportunities to revisit older topics, I am left to wonder if this should change how I structure my lessons and assessments. I, like many others, teach by units. My students might spend a month on fractions followed by a test. They then get a month of algebra followed by another test. We, as teachers, create this span of time when all learning about a particular topic must happen. We don't always give students the time to practice these ideas, particularly the more challenging ones that almost always happen at the end of the unit and right before the test.

Based on what I've read about spaced practice, I would propose that teachers shouldn't give tests at the end of a unit. Perhaps students need time to practice these skills over several weeks before we assess them. This is something I'm going to explore this year with some of the concepts that were challenging for my students last year.

Note: This is probably not an original idea and I'm sure someone else out there has probably explored it. If you have any resources to share on the subject, I'd greatly appreciate it!

Another note: I do allow my students to retake quizzes which I had hoped would send the message that learning doesn't stop after the quiz is taken. However, very few of my students have taken advantage of this in the past. I am hoping to correct that this year with some ideas from Dylan Wiliam, Ashli Black, and others.

Update: Henri Picciotto has written about this and calls it "lagging homework". He also reinforces the idea that quizzing should happen much later than when the material was taught. Thanks to Mary Bourassa and Chris Robinson for helping me find his work!

Sunday, August 2, 2015

Movie Popcorn

I ordered a small popcorn at the movie theater and the cashier asked me if I'd like the large size for only $1 more. I knew that this had to be the better deal, so I took it. I mean, what if I had gotten the small popcorn and run out during the movie? That would be unacceptable.



However, as I left the theater, I noticed that I hadn't actually eaten all of the popcorn. There was about two and a half inches of popcorn left at the bottom of the bucket. I could have taken it home with me, but stale popcorn doesn't sound too appetizing, so I decided to throw it away. Did I just get ripped off? Should I have just bought the small popcorn?



There are a couple of ways of modifying this task to address the needs of different grade levels. It all depends on what information you give the students. If you just give them the number of cups of popcorn in each bucket, this is a fairly simple unit price problem. If you give only the dimensions of the buckets, they will need to derive and use formulas. It would also be extremely helpful to use a spreadsheet.
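
For example (with made-up numbers), if the small bucket holds 8 cups for $6.00 and the large holds 16 cups for $7.00, the unit prices come out to $0.75 per cup and about $0.44 per cup.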

6th Grade Version:

Info required...


Questions to explore...

What is the unit price for each size?
What is the percent change in size, price, unit price?
What is the least amount of popcorn from the large container (in cups) you would need to eat so that you don't get ripped off? (This is not as interesting a question as the 8th grade version because you can't usually tell how many cups of popcorn are left in a bucket.)


8th Grade (or beyond) Version:

Info required...





Volume of a truncated cone:
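V = (πh/3)(R² + Rr + r²), where h is the height of the popcorn, R is the larger (top) radius, and r is the smaller (bottom) radius.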

You will notice that there is a little bit of popcorn above the rim of each bucket. There is also a small gap on the bottom of each bucket. I assumed that the added and subtracted volumes of this popcorn would more or less cancel each other out. I could be wrong about this!!!

Questions to explore...

What is the capacity of each size?
What is the unit price for each size?
What is the percent change in size, price, unit price?
How many inches of popcorn would be left in the large bucket if you ate only as much as the small bucket holds?
What is the least amount of popcorn from the large container you would need to eat so that you don't get ripped off? In other words, how many inches of popcorn can you leave at the bottom of the bucket?

The answer....

I'm not leaving my full solution here because I'm curious to see how others might solve it. Basically, I used a spreadsheet to test different heights of popcorn eaten to determine where the unit price of the large matches the unit price of the small. If you think about it, this is further complicated because as you eat popcorn, the height AND the top radius change. You will have to come up with a formula that calculates the top radius based on the height.
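
If you'd rather script the search than build a spreadsheet, here's a rough sketch in Python of the same test-every-height idea. The bucket dimensions and prices below are placeholders I made up, so plug in your own measurements...

import math

# Made-up bucket dimensions (inches) and prices...replace with real measurements.
SMALL = {"r_bottom": 3.0, "r_top": 4.0, "height": 6.0, "price": 6.00}
LARGE = {"r_bottom": 3.5, "r_top": 5.0, "height": 8.0, "price": 7.00}

def radius_at(bucket, h):
    # The bucket tapers linearly, so the radius at height h sits
    # proportionally between the bottom and top radii.
    frac = h / bucket["height"]
    return bucket["r_bottom"] + frac * (bucket["r_top"] - bucket["r_bottom"])

def volume_to(bucket, h):
    # Popcorn filled to height h forms a truncated cone:
    # V = (pi * h / 3)(R^2 + Rr + r^2)
    r = bucket["r_bottom"]
    R = radius_at(bucket, h)
    return math.pi * h / 3 * (R * R + R * r + r * r)

small_unit_price = SMALL["price"] / volume_to(SMALL, SMALL["height"])

# Step through possible leftover heights in the large bucket. What you
# actually ate is the full bucket minus the leftover frustum at the bottom.
# Once the price per cubic inch of what you ate passes the small bucket's,
# you got ripped off.
h = 0.0
while h < LARGE["height"]:
    eaten = volume_to(LARGE, LARGE["height"]) - volume_to(LARGE, h)
    if LARGE["price"] / eaten > small_unit_price:
        print(f"Ripped off if you leave more than {h:.2f} inches behind")
        break
    h += 0.01

With these fake numbers, the script prints a cutoff just under 4 inches, but the real answer depends entirely on the actual measurements.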

I determined that you get ripped off if you leave more than two inches of popcorn at the bottom of the bucket.

Sunday, July 26, 2015

My Grudge with "Grudge"

I'm flying home from Twitter Math Camp near Los Angeles, and after successfully figuring out how to steal the airplane's wifi, I decided to write another post. This is what I do. I go to a conference, get inspired to contribute to the MTBoS community, and write a blog post. You must understand that once I get home, all motivation to do such a thing will be lost. That's what Netflix would like me to believe anyway.

There is one contribution I've made to the online community that has received a lot of good feedback from students and other teachers. This is a game called Grudge. I gave a survey to my students at the end of this year and asked them what their favorite things from my class were. Grudge was near the top of the list. ("Mr. Kraft" was at the very top of the list, of course.)

There is no question in my mind that it is a review game that engages almost all of my students almost all of the time. I also feel that I present it in such a way that students seriously consider their answers and are eager to understand their mistakes. But there is a problem with the game. On occasion, students will team up on other students, and while it is not always expressed, I do believe that feelings can be hurt. As Matt Vaudrey once expressed in a tweet, it hurts the class culture. It promotes competition instead of collaboration.

I've learned that any activity I use in my class should not only be engaging and promote academic growth, but should also encourage students to be respectful to one another.

Sunday, April 19, 2015

What the hell is mean absolute deviation?

When I first started looking at the Common Core standards for sixth grade a couple of years ago, admittedly, there was one standard I had to do a double-take on:

6.SP.B.5.C: Giving quantitative measures of center (median and/or mean) and variability (interquartile range and/or mean absolute deviation), as well as describing any overall pattern and any striking deviations from the overall pattern with reference to the context in which the data were gathered.

And, like many of my colleagues, I thought, "What the hell is mean absolute deviation?" My horror was confirmed when I googled it and saw how complicated it would likely be for my students.

Looking in some textbooks and online resources, I was continually left wondering why my students would even care about mean absolute deviation. I mean, you do all of these steps, you get a number, and then what? What does mean absolute deviation tell you?

I figured that the only way my students were going to have any access to this would be to compare different data sets, make a quick judgement about which one has more variability (which can be very subjective), and find some way of quantifying that variability. On top of that, I wanted my students to create their own data where the goal would be to have the least amount of variability.

I then remembered the "Best Triangle" activity I did with Dan Meyer. In this activity, Dan asked four teachers to draw their best equilateral triangles. (Notice that Andrew and I have points in our nostrils.)

Rather than having the students evaluate the teachers' triangles, I had them create their own. I started the lesson off by asking the students to draw what was, in their minds, the perfect triangle. Immediately, several hands shot up from students who wanted some clarification, but I told them to just do what they thought was best. After a quick walk-around and throwing some random triangles up on the document camera, it seemed that almost everyone was trying to draw an equilateral triangle. A few students argued that a right triangle could be considered a perfect triangle, and I admitted that my instructions were very vague and their interpretations were justified.

We then brainstormed all the things we should look for in the perfect equilateral triangle. Students agreed that we needed three equal sides and three equal angles. They then made a second attempt on the whiteboards to draw perfect equilateral triangles. I asked everyone to make a quick judgement about which triangles they thought were the best, but soon ran out of time for the day. After the students left, I quickly took pictures of their triangles and took measurements in millimeters. (Admittedly, this is something I would have preferred having the students do on their own, but my class time is unbelievably short...37 minutes.)

The next day, I told my students that I took those measurements and found a way to rank all of the triangles from all of my classes. Next, I showed them the five triangles which represent the minimum (best), first quartile, second quartile, third quartile, and maximum (worst) of the data (in order below). This was a nice way to show a sample of the triangles as my students had just finished learning about box-and-whisker plots.

When I first showed them these triangles, I asked them to figure out which triangle represented the maximum and which the third quartile. The other three triangles were not easily identified; however, we noticed that if you reoriented the triangles so that one of the other two sides was on the bottom, the inferior triangles no longer looked equilateral (leaning to the left or right).

I explained that ranking these five triangles didn't present too much difficulty, but I wasn't sure how to rank triangles that looked very similar. I gave the three following triangles as an example and had students vote on which one they believed looked the best:


In each class, there was a lot of disagreement about which triangle was the best, and more often than not, the majority picked the wrong one. I then provided the side lengths of each triangle (above, in millimeters) and asked the students, "How can we use these measurements to rank these three triangles?"

After a few unproductive guesses, someone would usually ask to find the differences between the measurements, which led to someone else asking to find the sum of those differences, or the range. They noticed that the ranges of all three triangles were 20 mm. Someone usually called me out for doing this intentionally...which I did.

Next, somebody would ask about the mean of the numbers. I acted dumb, as I did with every suggestion, and we explored that possibility. We found the means, and it seemed that we had again hit a dead end.

I have to say that at this point, some classes were completely stuck, and some kept going with it. For those that were stuck, I told them that to me, the mean (157 mm for the first triangle) represented the side length that the triangle drawer had intended for each side, but sometimes he or she fell a little short of that goal (149 mm) or overshot it (169 mm). I then asked them to compare each drawn side to "the perfect side length". We found the differences between each side length and the mean, and soon after, someone suggested finding the sum of those differences.

At this point, most of my classes were satisfied that we found a method of comparing the triangles. We just had to look at the sum of the differences from the mean. The best triangle was the triangle that had the lowest sum. A couple of classes even went one step further to find the mean of those differences. In reality, there was nothing wrong with either of those methods. However, the second method WAS THE MEAN ABSOLUTE DEVIATION!!! When I first started planning this lesson, never did I think my students would intuitively come up with this concept.
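
To make that concrete with the first triangle's numbers from above: the sides were 149 mm and 169 mm, and since the mean was 157 mm, the third side must have been 153 mm. The distances from the mean are 8, 4, and 12, their sum is 24 mm, and the mean absolute deviation is 24 ÷ 3 = 8 mm. A perfectly drawn equilateral triangle would score a 0.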

This was the first time I've taught this lesson and I realize that there was a lot more I could have done with it. Given more time, I could have had students work in groups to come up with their own methods for determining the best triangle (similar to Dan's lesson plan) and we could have compared the methods later.

Side note: Dan says that "the best solution is to use the fact that an equilateral triangle is the triangle that encloses the most area for a given perimeter". Sixth graders are not at a point yet where they can find the area of a triangle just given the side lengths, so some other solution was necessary. Technically, my method is flawed because it favors smaller triangles. If you double or triple the size of a triangle, it doubles or triples the mean absolute deviation. This is noticeable in the data as smaller triangles were preferred. A better method would have been to compute the percent differences from the mean, but this would have greatly complicated an idea I was just trying to introduce for the first time.