Tuesday, September 29, 2015

Warm-Ups with a Purpose

Warm-ups last year:

I would display four or five review problems on the Smartboard for students to work through as I took attendance. I would then walk around the classroom to see how students were progressing, but I often struggled to help more than a few of them, and I never had a good sense of how the class did as a whole. We would then review every problem, which was time-consuming and not always helpful. The next day, I would create a few more warm-up exercises, but I never had a clear picture of what my students were still struggling with or why.

Warm-ups this year:

I was asked to move into a new classroom where every student would have his or her own computer. Over the summer, I looked at several websites that would help me use formative assessment on a daily basis. I was happy to find Socrative (which is FREE!) and I use it every day for my warm-ups. Students can quickly log in and start working on the exercises. I can create multiple choice, true/false, or short answer questions, and as students are answering them, I can see their responses live! It looks something like this...

This is kind of a big deal. As soon as a student gets something right or wrong, I know. And there's a lot I can do with that information. During those exercises, you'll routinely hear me say things like...

"Mary, awesome job on that last one. Everyone's having trouble with it."

"Almost everybody's getting #1 wrong. Make sure you read it carefully!"

"Sheri, that last one...how are you supposed to set up an addition problem with decimals?"

"Fawn, you seem to be having trouble with greatest common factor. Can I see your work for that last problem?"

"Hey, Andrew. Where's your notebook? Stop trying to do the work in your head. You're not Rain Man!"

After the students finish the exercises, I share the results with them and I let them tell me which ones we need to review (and which ones we don't). We look at commonly selected wrong answers and think about what mistakes students were making.

At the end of the day, I can throw this data onto a spreadsheet (shown below) and decide which topics/skills students have a firm grasp of and which need further review. I can see how students progress in some skills over time and share that as a model of learning.
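For what it's worth, here's a rough sketch in Python of the kind of tally my spreadsheet does. The student names and the 1/0 answer grid are made-up stand-ins, not Socrative's actual export format:

```python
# Hypothetical layout (not Socrative's real export): one row per
# student, one column per question, 1 = correct, 0 = incorrect.
results = {
    "Mary":  [1, 1, 0, 1, 1],
    "Sheri": [1, 0, 0, 1, 0],
    "Fawn":  [0, 1, 0, 1, 1],
}

num_questions = len(next(iter(results.values())))

# How the class did on each question -- which skills need review?
for q in range(num_questions):
    correct = sum(answers[q] for answers in results.values())
    print(f"Question {q + 1}: {correct / len(results):.0%} correct")

# How each student did -- useful for tracking growth over time.
for student, answers in results.items():
    print(f"{student}: {sum(answers)}/{num_questions}")
```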

I love that students are getting instant feedback. I love that I have evidence of their growth. I love that we can review results as a class and, rather than students only focusing on their own mistakes, we can ask ourselves, what are we, as a class, doing wrong? What are we, as a class, doing right?

Sunday, September 20, 2015

Quizzes without Grades

A few weeks ago, I blogged about how I was going to stop putting grades on quizzes. This decision was heavily influenced by Dylan Wiliam's ideas from his book, Embedded Formative Assessment. I also need to mention that Ashli Black has been very helpful in explaining how she does comments-only grading and in pushing me to design a system of grading that works.

This past week, I was finally able to test-run this idea after the students took a quiz on the Order of Operations. I explained my reasoning to the students and, for the most part, they seemed to be okay with it. I told them that this creates a better working environment where students can feel less embarrassed about their performance and work together to identify and correct their mistakes, no matter how well they did. I marked the quizzes by circling the problem number for every wrong solution and then color-coding three problems that I wanted the student to correct. If a problem had a pink mark, they had to identify their error. If there was a purple mark, they had to rework the problem. If a student did not get anything wrong, I gave them a more challenging problem to solve. Finally, while grades were not written on the quizzes, they were calculated and recorded into the online gradebook so parents and students could see them at home.

Overall, I thought it went really well. The students had about 10 minutes to work alone or together on their mistakes and handed the quizzes back to me. Those who did not finish had extra time overnight to do so.
The next day, I used Socrative (an online quizzing tool) to ask my students how they felt about my "no grade" policy. The good news is that 70% of my students either liked it or didn't care, and more students liked it than disliked it. However, 30% of my students still didn't like it. While it was not obvious in their responses, I believe this frustration comes from not having the instant gratification of knowing their grade. This impatience isn't unexpected. Students will often ask me if I've graded their quiz ten minutes after handing it in.

In the end, I think the benefit of students revisiting their work and working together to fix mistakes outweighs the annoyance of not getting their grades right away. I'm hoping that, over time, students will also begin to see that benefit.

As a side note, I should say that I'm not really doing "comments-only grading". I had considered writing out comments, but it occurred to me that most of what I'd be writing could later be discovered by the student upon more reflection or figured out with help from a classmate. Writing comments on every wrong answer would have been extremely time consuming and would have deprived my students of the chance to discover their own mistakes.

Thursday, August 20, 2015

Motivated by Stature

Many people measure their success by comparing themselves to others. If they are at the top of that food chain, they feel a sense of superiority and will likely put in whatever effort is needed to preserve that position. If they are in the middle or at the bottom, they will likely withdraw over time, convinced they will never surpass those at the top.

Anything we do in our classrooms that feeds into that culture will ultimately harm all of our students. What is needed is a belief system where people are not defined by their class rank but where everyone, including those at the top, has potential to improve.

I made the decision this summer to switch to comments-only grading which I believe will help instill this belief. Students will no longer be able to compare their grades with other students to determine where they fit in this hierarchy. All students will be asked to extend their thinking, including my highest-performing students. However, quiz feedback is just one small aspect of everything I do in a classroom. I can’t help but wonder how many of the other interactions I have with students might imply that I value ability over effort.

My hope is that, throughout this coming school year, I will regularly reflect on how my interactions with students, which are often subtle, might help or hinder this outlook.

Monday, August 3, 2015

Spaced Practice and Repercussions for Teaching

I've been reading John Hattie's book, Visible Learning, in which he ranks the effect sizes of different strategies that help student achievement. One strategy that ranks pretty high on the list is giving students spaced (or distributed) practice as opposed to massed practice. In other words, rather than having a student practice something over and over again in one day, it is much better to spread that practice out over multiple days or weeks. (You can read one of these studies here.) The main benefit is that spaced practice helps with long-term retention.

While this research certainly gives some justification for providing students with multiple opportunities to revisit older topics, I am left to wonder whether it should change how I structure my lessons and assessments. I, like many others, teach by units. My students might spend a month on fractions followed by a test. They then get a month of algebra followed by another test. We, as teachers, create this span of time in which all learning about a particular topic must happen. We don't always give students the time to practice these ideas, particularly the more challenging ones, which almost always appear at the end of the unit and right before the test.

Based on what I've read about spaced practice, I would propose that teachers shouldn't give tests at the end of a unit. Perhaps students need time to practice these skills over several weeks before we assess them. This is something I'm going to explore this year with some of the concepts that were challenging for my students last year.

Note: This is probably not an original idea and I'm sure someone else out there has probably explored it. If you have any resources to share on the subject, I'd greatly appreciate it!

Another note: I do allow my students to retake quizzes which I had hoped would send the message that learning doesn't stop after the quiz is taken. However, very few of my students have taken advantage of this in the past. I am hoping to correct that this year with some ideas from Dylan Wiliam, Ashli Black, and others.

Update: Henri Picciotto has written about this and calls it "lagging homework". He also reinforces the idea that quizzing should happen much later than when the material was taught. Thanks to Mary Bourassa and Chris Robinson for helping me find his work!

Sunday, August 2, 2015

Movie Popcorn

I ordered a small popcorn at the movie theater and the cashier asked me if I'd like the large size for only $1 more. I knew that this had to be the better deal, so I took it. I mean, what if I had gotten the small popcorn and ran out during the movie? That would be unacceptable.

However, as I left the theater, I noticed that I hadn't actually eaten all of the popcorn. There was about two and a half inches of popcorn left at the bottom of the bucket. I could have taken it home with me, but stale popcorn doesn't sound too appetizing, so I decided to throw it away. Did I just get ripped off? Should I have just bought the small popcorn?

There are a couple of ways of modifying this task to address the needs of different grade levels. It all depends on what information is given to the students. If you just give students the number of cups of popcorn in each bucket, this is a fairly simple unit price problem. If you give only the dimensions of the buckets, students will need to derive and use formulas. It would also be extremely helpful to use a spreadsheet.

6th Grade Version:

Info required...

Questions to explore...

What is the unit price for each size?
What is the percent change in size, price, unit price?
What is the least amount of popcorn from the large container (in cups) you would need to eat so that you don't get ripped off? (This is not as interesting a question as the 8th grade version because you can't usually tell how many cups of popcorn are left in a bucket.)
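Here's a quick sketch of that arithmetic in Python. The prices and cup counts below are placeholders (the real numbers come from the photos), so swap in your own:

```python
# Placeholder numbers -- the actual prices and capacities come from
# the photos. Suppose a small is $6.00 for 8 cups and the large is
# $1 more for 16 cups.
small_price, small_cups = 6.00, 8
large_price, large_cups = 7.00, 16

small_unit = small_price / small_cups   # dollars per cup
large_unit = large_price / large_cups

print(f"Small: ${small_unit:.3f}/cup   Large: ${large_unit:.3f}/cup")

# Percent change from small to large for size, price, and unit price.
print(f"Size:       {(large_cups - small_cups) / small_cups:+.0%}")
print(f"Price:      {(large_price - small_price) / small_price:+.0%}")
print(f"Unit price: {(large_unit - small_unit) / small_unit:+.0%}")

# Least amount of the large you must eat so your effective unit
# price is no worse than the small's.
print(f"Break-even: eat at least {large_price / small_unit:.1f} cups of the large")
```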

8th Grade (or beyond) Version:

Info required...

Volume of a truncated cone: V = (πh/3)(R² + Rr + r²), where R and r are the radii of the two circular ends and h is the height.

You will notice that there is a little bit of popcorn above the rim of each bucket. There is also a small gap on the bottom of each bucket. I assumed that the added and subtracted volumes of this popcorn would more or less cancel each other out. I could be wrong about this!!!

Questions to explore...

What is the capacity of each size?
What is the unit price for each size?
What is the percent change in size, price, unit price?
How many inches of popcorn would be left in the large bucket if you eat just as much as the small bucket?
What is the least amount of popcorn from the large container you would need to eat so that you don't get ripped off? In other words, how many inches of popcorn can I leave at the bottom of the bucket?

The answer....

I'm not leaving my full solution here because I'm curious to see how others might solve it. Basically, I used a spreadsheet to test different heights of popcorn eaten to determine where the unit price of the large matches the unit price of the small. If you think about it, this is further complicated because as you eat popcorn, the height AND top radius change. You will have to come up with a formula that calculates the top radius based on the height.
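If it helps, here's my spreadsheet approach sketched in Python. The bucket dimensions and prices below are placeholders, not the actual measurements, so the two-inch answer won't fall out of these particular numbers:

```python
import math

def frustum_volume(r_bottom, r_top, height):
    """Volume of a truncated cone: (pi*h/3)(R^2 + R*r + r^2)."""
    return math.pi * height / 3 * (r_bottom**2 + r_bottom * r_top + r_top**2)

def radius_at(h, r_bottom, r_top, height):
    """Radius of the bucket's cross-section at fill height h (linear taper)."""
    return r_bottom + (r_top - r_bottom) * h / height

# Placeholder dimensions (inches) and prices -- swap in measured values.
SMALL = dict(r_bottom=3.0, r_top=4.0, height=6.0, price=6.00)
LARGE = dict(r_bottom=3.5, r_top=5.0, height=8.0, price=7.00)

small_volume = frustum_volume(SMALL["r_bottom"], SMALL["r_top"], SMALL["height"])
small_unit_price = SMALL["price"] / small_volume
large_volume = frustum_volume(LARGE["r_bottom"], LARGE["r_top"], LARGE["height"])

# Test leftover heights in small steps, like rows in a spreadsheet.
h = 0.0
while h <= LARGE["height"]:
    r_left = radius_at(h, LARGE["r_bottom"], LARGE["r_top"], LARGE["height"])
    leftover = frustum_volume(LARGE["r_bottom"], r_left, h)
    eaten = large_volume - leftover
    if LARGE["price"] / eaten > small_unit_price:
        print(f"Ripped off once you leave more than about {h:.1f} in. behind")
        break
    h += 0.1
```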

I determined that you get ripped off if you leave more than two inches of popcorn at the bottom of the bucket.

Sunday, July 26, 2015

My Grudge with "Grudge"

I'm flying home from Twitter Math Camp near Los Angeles, and after successfully figuring out how to steal the airplane's wifi, I decided to write another post. This is what I do. I go to a conference, get inspired to contribute to the MTBoS community, and write a blog post. You must understand that once I get home, all motivation to do such a thing will be lost. That's what Netflix would like me to believe anyway.

There is one contribution I've made to the online community that has received a lot of good feedback from students and other teachers. This is a game called Grudge. I gave a survey to my students at the end of this year and asked them what their favorite things from my class were. Grudge was near the top of the list. ("Mr. Kraft" was at the very top of the list, of course.)

There is no question in my mind that it is a review game that engages almost all of my students almost all of the time. I also feel that I present it in such a way that students seriously consider their answers and are eager to understand their mistakes. But there is a problem with the game. On occasion, students will team up on other students, and while it is not always expressed, I do believe that feelings can be hurt. As Matt Vaudrey once expressed in a tweet, it hurts the class culture. It promotes competition instead of collaboration.

I've learned that any activity I use in my class should not only be engaging and promote academic growth, but should also encourage students to be respectful to one another.

Sunday, April 19, 2015

What the hell is mean absolute deviation?

When I first started looking at the Common Core standards for sixth grade a couple of years ago, admittedly, there was one standard I had to do a double-take on:

6.SP.B.5.C: Giving quantitative measures of center (median and/or mean) and variability (interquartile range and/or mean absolute deviation), as well as describing any overall pattern and any striking deviations from the overall pattern with reference to the context in which the data were gathered.

And, like many of my colleagues, I thought, "What the hell is mean absolute deviation?" My horror was confirmed when I googled it and saw how complicated it would likely be for my students.

Looking in some textbooks and online resources, I was continually left wondering why my students would even care about mean absolute deviation. I mean, you do all of these steps, you get a number, and then what? What does mean absolute deviation tell you?

I figured that the only way my students would have any access to this would be to compare different data sets, make a quick judgement about which one has more variability (which can be very subjective), and find some way of quantifying that variability. On top of that, I wanted my students to create their own data where the goal would be to have the least amount of variability.

I then remembered the "Best Triangle" activity I did with Dan Meyer. In this activity, Dan asked four teachers to draw their best equilateral triangles. (Notice that Andrew and I have points in our nostrils.)

Rather than having the students evaluate the teachers' triangles, I had them create their own. I started the lesson off by asking the students to draw what was, in their minds, the perfect triangle. Immediately, several hands shot up from students who wanted some clarification, but I told them to just do what they thought was best. After a quick walk-around and throwing some random triangles up on the document camera, it seemed that almost everyone was trying to draw an equilateral triangle. A few students argued that a right triangle could be considered a perfect triangle, and I admitted that my instructions were very vague and their interpretations were justified.

We then brainstormed all the things we should look for in the perfect equilateral triangle. Students agreed that we needed three equal sides and three equal angles. They then made a second attempt on the whiteboards to draw perfect equilateral triangles. I asked everyone to make a quick judgement about which triangles they thought were the best, but soon ran out of time for the day. After the students left, I quickly took pictures of their triangles and took measurements in millimeters. (Admittedly, this is something I would have preferred having the students do on their own, but my class time is unbelievably short...37 minutes.)

The next day, I told my students that I took those measurements and found a way to rank all of the triangles from all of my classes. Next, I showed them five triangles that represented the minimum (best), first quartile, second quartile (the median), third quartile, and maximum (worst) of the data (in order below). This was a nice way to show a sample of the triangles, as my students had just finished learning about box-and-whisker plots.

When I first showed them these triangles, I asked them to figure out which triangle represented the maximum and which the third quartile. The other three triangles were not as easily identified; however, we noticed that if you reoriented the triangles so that one of the other two sides was on the bottom, the inferior triangles no longer looked equilateral (they leaned to the left or right).

I explained that ranking these five triangles didn't pose too much difficulty, but that I wasn't sure how to rank triangles that looked very similar. I gave the following three triangles as an example and had students vote on which one they believed looked the best:

In each class, there was a lot of disagreement about which triangle was the best, and more often than not, the majority picked the wrong one. I then provided the side lengths of each triangle (above in millimeters) and asked the students, "How can we use these measurements to rank these three triangles?"

After a few unproductive guesses, someone usually asks to find the differences between the measurements, which leads to someone else asking to find the sum of those differences, or the range. They notice that the ranges for each triangle are all 20 mm. Someone usually calls me out for doing this intentionally...which I did.

Next, somebody will ask about the mean of the numbers. I act dumb, as I do with every suggestion, and we explore that possibility. We find the means, and it would seem that we have again hit a dead end.

I have to say that at this point, some classes were completely stuck, and some kept going with it. For those that were stuck, I told them that, to me, the mean (157 mm for the first triangle) represented the side length that the triangle drawer had intended for each side, but sometimes he or she fell a little short of that goal (149 mm) or overshot it (169 mm). I then asked them to compare each drawn side to "the perfect side length". We found the differences between each side length and the mean, and soon after, someone suggested finding the sum of those differences.

At this point, most of my classes were satisfied that we found a method of comparing the triangles. We just had to look at the sum of the differences from the mean. The best triangle was the triangle that had the lowest sum. A couple of classes even went one step further to find the mean of those differences. In reality, there was nothing wrong with either of those methods. However, the second method WAS THE MEAN ABSOLUTE DEVIATION!!! When I first started planning this lesson, never did I think my students would intuitively come up with this concept.
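For anyone who wants to check the arithmetic, here's the calculation in Python. The 149 mm and 169 mm sides are from the lesson above; the 153 mm third side is simply what's implied by the stated mean of 157 mm:

```python
# Sides of the first triangle: 149 and 169 are given above; 153 is
# implied by the stated mean of 157 mm.
sides = [149, 153, 169]

mean = sum(sides) / len(sides)               # 157.0, "the perfect side length"

# How far each drawn side missed the intended length (ignoring sign).
deviations = [abs(s - mean) for s in sides]  # [8.0, 4.0, 12.0]

sum_of_deviations = sum(deviations)          # 24.0 -- the first method
mad = sum_of_deviations / len(sides)         # 8.0  -- the mean absolute deviation!

print(f"mean = {mean}, sum = {sum_of_deviations}, MAD = {mad}")
```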

This was the first time I've taught this lesson and I realize that there was a lot more I could have done with it. Given more time, I could have had students work in groups to come up with their own methods for determining the best triangle (similar to Dan's lesson plan) and we could have compared the methods later.

Side note: Dan says that "the best solution is to use the fact that an equilateral triangle is the triangle that encloses the most area for a given perimeter". Sixth graders are not at a point yet where they can find the area of a triangle just given the side lengths, so some other solution was necessary. Technically, my method is flawed because it favors smaller triangles. If you double or triple the size of a triangle, it doubles or triples the mean absolute deviation. This is noticeable in the data as smaller triangles were preferred. A better method would have been to compute the percent differences from the mean, but this would have greatly complicated an idea I was just trying to introduce for the first time.
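Here's a quick sketch of that flaw, reusing the triangle from above: doubling every side doubles the MAD, while the MAD divided by the mean (a percent-style measure) is unchanged.

```python
def mad(xs):
    """Mean absolute deviation: average unsigned distance from the mean."""
    m = sum(xs) / len(xs)
    return sum(abs(x - m) for x in xs) / len(xs)

sides = [149, 153, 169]
doubled = [2 * s for s in sides]

print(mad(sides), mad(doubled))              # 8.0 vs 16.0 -- same shape, bigger MAD

# Dividing by the mean removes the size bias.
print(mad(sides) / 157, mad(doubled) / 314)  # both ~0.051
```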