Nathan Kraft's Blog

Friday, September 23, 2016

I'm trying to find more ways to get students writing in math. I know that the process of writing helps clarify and consolidate thoughts. It's also a great way to have students engage with the vocabulary.
After teaching three different ways to find the greatest common factor of two numbers (list all of the factors, use prime factorization, simplify fractions), I split the students up into three groups and asked each group to solve the problem a different way.
As they solved it, I took note of which groups finished earlier, which groups made more mistakes, which groups were more confused, etc. We reviewed each of the three solutions on the board and I then asked everyone to write one good thing and one bad thing about each method. I then asked students to share those thoughts and I summarized them on the board next to each solution (see picture).
Not only did creating this pro/con list help students decide which method they preferred, but it also cleared up some misconceptions about why each method works. Students also saw some similarities between the three methods (the numbers 5 and 7 kept showing up). Incidentally, most students did not like method #1, but I warned them that, because it is so intuitive, it would be the method they remember best.
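If you want to tinker with the first two methods outside of class, here's a minimal Python sketch of them as I understand them. The numbers 35 and 70 are my own stand-ins (picked so that 5 and 7 keep showing up), not necessarily the problem we solved in class.

```python
from collections import Counter

def factors(n):
    """Method #1: list all of the factors of n."""
    return {d for d in range(1, n + 1) if n % d == 0}

def gcf_by_listing(a, b):
    # The GCF is the largest number that appears on both factor lists.
    return max(factors(a) & factors(b))

def prime_factorization(n):
    """Method #2, step one: break n into prime factors, e.g. 70 -> [2, 5, 7]."""
    primes, d = [], 2
    while d * d <= n:
        while n % d == 0:
            primes.append(d)
            n //= d
        d += 1
    if n > 1:
        primes.append(n)
    return primes

def gcf_by_primes(a, b):
    # Step two: multiply together the prime factors the two numbers share.
    shared = Counter(prime_factorization(a)) & Counter(prime_factorization(b))
    gcf = 1
    for prime, count in shared.items():
        gcf *= prime ** count
    return gcf

print(gcf_by_listing(35, 70))  # 35
print(gcf_by_primes(35, 70))   # 35
```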
Saturday, May 14, 2016
An Alternative to "Add the Opposite"
I've always been a little bothered by how textbooks (and presumably, teachers) explain subtracting integers on a number line. Here's an excerpt from a recent Pearson textbook which has been aligned to the Common Core:
From this, we see that 9 - 5 = 9 + (-5), and from that we conclude that we can always subtract numbers by adding the additive inverse. This makes sense, but what about subtracting a negative? We're just supposed to accept that it is the same as adding a positive? Or what if we are subtracting negatives from a positive? How do you take something away when it's not even there? (I know...zero pairs.)
So how do you explain this without simply telling students to "add the opposite"? Wouldn't it be better if students were comfortable with subtracting negatives?
I teach adding and subtracting integers by having students locate the first number on the number line. You then have two options...you're either going left or right. To do this, they look at the operation. If they see +, they think that they need more of something. If they see -, they think that they need less of something. If we see plus a positive, we need to go in a more positive direction (right). If we see plus a negative, we need to go in a more negative direction (left). For minus a positive, we go less positive (left). For minus a negative, we go less negative (right). And that's it. It makes sense to them and we don't have to be afraid of the subtraction sign.
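For anyone who wants to see that rule spelled out, here's a minimal sketch in Python. It only handles integers, and the function name and structure are my own framing, not something we use in class.

```python
def add_or_subtract(start, op, n):
    """Number-line model: '+' means move in the direction of n
    (more positive or more negative); '-' means move the other way
    (less positive or less negative)."""
    step = 1 if n > 0 else -1   # the direction the second number "points"
    if op == '-':
        step = -step            # "less" of that direction: flip it
    for _ in range(abs(n)):     # walk the number line one unit at a time
        start += step
    return start

print(add_or_subtract(3, '+', -4))  # 3 + (-4): more negative, go left -> -1
print(add_or_subtract(3, '-', -4))  # 3 - (-4): less negative, go right -> 7
```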
From here, students use number lines to solve addition and subtraction problems, and eventually, they start to make their own connections. They see that subtracting a negative has the same effect as adding a positive. They see that subtracting a positive has the same effect as adding a negative. As we work with larger numbers, students become less reliant on the number line and use their intuition.
One of the best things about teaching this way is that some of my struggling students can always fall back on the number line. Don't get me wrong, it can be painful to watch a student solve -27-1 by extending a number line far out to the left. I let them do it and then ask them to try a similar problem without writing anything down. Over time, they learn to trust themselves and do it mentally.
Another nice thing about teaching this way is that you can easily extend these ideas to multiplying integers. Positive times negative means more negative. Negative times negative means less negative. You can show how this works with repeated addition/subtraction: -3(-4) = -(-4) - (-4) - (-4).
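Here's the same idea as a rough sketch, with the first factor telling us how many times to add or subtract the second (again, my own framing, assuming integer factors):

```python
def multiply(a, b):
    """Treat a * b as repeated addition (a positive) or repeated
    subtraction (a negative) of b, starting from zero."""
    total = 0
    for _ in range(abs(a)):
        total = total + b if a > 0 else total - b
    return total

print(multiply(3, -4))   # (-4) + (-4) + (-4): more negative -> -12
print(multiply(-3, -4))  # -(-4) - (-4) - (-4): less negative -> 12
```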
I hope this provides you with a better alternative than the standard textbook explanation. If you try this, please leave a comment below on any insights that you have.
Friday, February 12, 2016
Developing Student Intuition for Mean Absolute Deviation
For some time, I've been considering a new approach to teaching mean absolute deviation (MAD). This is a new concept for 6th grade now that it appears in the Common Core standards (CCSS.MATH.CONTENT.6.SP.B.5.C). The lesson in the students' textbook is not terribly helpful: it doesn't give any purpose for finding the MAD of a data set, and the directions for doing so are somewhat intimidating. My hope is that I can help students intuitively derive MAD on their own, or at the very least, give them the motivation to learn MAD as a way to identify which set of data has more spread.
Last year, I had the same hope of creating this intuition by having students create equilateral triangles. The idea was borrowed from a similar activity I worked on with Dan Meyer, where students had to identify which of four triangles was the most equilateral. I had students create their own triangles and measure the lengths of their sides. We compared the triangles and their measurements to determine which was best.

It was my hope that students would see the data and have some basic understanding of what to do with it. Unfortunately, only one student across my five classes really figured it out without a lot of assistance from me. It was obvious that, if I was going to do this lesson again, I would have to find some way of creating an easier path for my students to find the MAD. To build investment and help students find meaning, I would again need data that was student-generated, but easier to work with. Thinking about absolute deviations would have to come naturally, and the mean of those deviations would have to be the obvious answer for comparing data sets.
This year, I created a game for students to play that would require the MAD to determine the winner. Of course, I couldn't tell the students that this was how the winner was determined; they would have to come up with the method on their own. I called for two volunteers to come up to the front of the class and explained that they would be rolling two dice: whoever rolled a sum closest to seven would be the winner. One student rolled a five and the other rolled a ten. I placed their sums on a number line at the front of the room for everyone to see and asked who won and how we knew.
There were a couple of variations in the answers, but the general idea was that one sum was closer to 7 than the other. One student was more specific: five is two away from seven, ten is three away from seven, and therefore five is better. I tried to impress upon my students that quantifying how far each number was from 7 would really help them as we worked through these different scenarios.
I asked the students to roll again, but this time I wanted them to roll twice. The boy rolled a seven and a four. The girl rolled a twelve (already losing) and a ten.
It seemed obvious who won, but I asked students to write down a sentence or two telling me who won and explaining how they knew. There were a couple of ideas about this, but no one was really thinking about mean absolute deviation at this point. To their credit, it would not make sense to use it here; there are much easier ways to compare these sums. What I did want students to see is that the boy's two sums deviated from seven by three and zero, while the girl's two sums deviated by three and five. The sum of those deviations was enough to determine the winner.
One girl said that she determined who won by taking the average of the sums. I thought this was a neat idea; it hadn't occurred to me to think about it this way. The boy's average was 5.5 (1.5 away from 7) and the girl's average was 11 (4 away from 7). This seemed to validate our belief that the boy won. I asked the two students to roll again and again had the students write about who won. The girl with the averaging method used it again, and again it seemed to work. I then created a hypothetical situation where the girl would roll two sevens (the best-case scenario) and the boy would roll a two and a twelve (the worst-case scenario). I asked, "Who won?"
Before anyone even answered, I could see some students making the connection that the average was not going to work every time. In this case, both sets of sums averaged out to seven, indicating a tie, but the boy's sums were obviously worse than the girl's.
I explained that the students would now be placed into groups to create their own data. With one student rolling the dice for me, I showed the class how to record their results. We rolled the dice ten times and, when finished, I had a line plot that looked like this:
After students finished creating their own line plots, they brought them up to me and I recreated them in Microsoft Excel:
With this data, I asked students to rank the line plots from best to worst. Three groups volunteered their rankings:
We noticed that we were in agreement about ranks 1, 2, 3, 7, and 8, but we had trouble figuring out how the middle groups performed. I placed two of these groups' line plots on the screen and asked all students to figure out, mathematically, which one was better.
From here, I got a lot of interesting ideas from the students. One girl tried making box-and-whisker plots of the data. This made sense because we've been using box-and-whisker plots lately to describe spread by looking at the range and interquartile range. (The following day, we had a conversation about how box-and-whisker plots can be misleading when trying to understand spread.) Another student had the idea to compare the sums from each side. Another girl tried to develop a point system where a sum of 7 would be worth 7 points, 6 and 8 would be worth 6 points, 5 and 9 would be worth 5 points, and so on. The point values were somewhat arbitrary, but she was really developing a good way of quantifying the spread. After she shared this method with the class, another girl suggested using the distances to seven instead, just as we did at the beginning of class: rolling a 7 would be worth zero points, rolling a 6 or 8 would be worth 1 point, and so on. I didn't mention this to the class at the time, but this girl was describing the absolute deviations.
I wrote down all of these deviations with the class and asked, "What's next?"
Box-and-whisker-plot girl asked if we could add all of these deviations together and compare. So we did. We found that Amari's total was 37 and Avarey's was 28, and most of the students felt that Avarey was clearly the winner. Amari quickly raised his hand to protest: "But I rolled more times than her! That's not fair!" At this point, many students suggested that either Avarey's group be forced to roll an equal number of times, or that we remove some of Amari's data. I asked them to consider how we compare different hitters in baseball: if one player gets 78 hits in 100 at-bats and another player gets 140 hits in 200 at-bats, we don't force the first player to take 100 more at-bats to even things up. After a couple of students made guesses about how to do this, a girl suggested we find the mean of the deviations. We quickly divided each group's total by its number of rolls and found that, on average, Amari's sums were 1.85 away from 7 and Avarey's were 1.87 away from 7. We could say that Amari's rolls were closer to 7 (less spread), but just barely.
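For anyone who wants to replay the class's arithmetic, here's a minimal sketch. The list of roll sums is made up to show the mechanics; it isn't Amari's or Avarey's actual data.

```python
def total_deviation_from_seven(sums):
    # The step the class discovered: score each roll by its distance from 7.
    return sum(abs(s - 7) for s in sums)

def mean_deviation_from_seven(sums):
    # The fix for Amari's fairness complaint: divide by the number of rolls.
    return total_deviation_from_seven(sums) / len(sums)

rolls = [7, 4, 7, 12, 10, 6, 8, 2]        # hypothetical roll sums
print(total_deviation_from_seven(rolls))  # 18
print(mean_deviation_from_seven(rolls))   # 2.25
```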
We then reviewed how the students had ranked each of the line plots and compared those rankings against the mean absolute deviation for each (picture below). It was interesting for students to see how some of their predictions came true while others were completely wrong. Nevaeh's data is a good example of this: students overwhelmingly thought that her group came in last place, but her score indicated that she was actually in 3rd place. The misplacement had more to do with students thinking less about spread and more about the total number of rolls in the 6-8 range. Because Nevaeh's group didn't roll as often as the other groups, it was assumed that she lost because she didn't roll very many 6s, 7s, or 8s. However, she only had one sum that was far from the center. (There is probably a good lesson here about how the amount of data collected affects comparisons of data sets, but there was no time for me to discuss it.)
Now that we had some way of comparing the data, I asked students to collect one more data set. Again, they had to roll their dice and write down the sums. The only difference was that this time they had to find the absolute deviation from 7 for each roll and take the average of those deviations. Students turned their data in to me and I quickly checked that they had calculated the mean absolute deviation correctly. Again, we compared line plots and checked those comparisons against the MAD of each data set.
During the next class, we took some quick notes on how to calculate the MAD (this time using the mean of the data set as our central point), constantly referring back to the work we had done the previous day. Students practiced by finding the MAD for a made-up set of data. Finally, they calculated the MAD of the average high temperatures for different cities in the U.S. (This came out of necessity: I explained that the temperatures in Pottsville, PA varied way too much and I needed to move somewhere that's warm all year round. As they were anxious to see me go, they had quite a few suggestions.)
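Here's a sketch of the version we took notes on, using the mean of the data as the central point. The temperature lists are invented for illustration; they aren't the class's actual city data.

```python
def mad(data):
    """Mean absolute deviation: average distance from the data's own mean."""
    center = sum(data) / len(data)
    return sum(abs(x - center) for x in data) / len(data)

# Made-up monthly average highs (degrees F): a steady, warm city
# versus a town with big seasonal swings.
steady = [70, 72, 75, 78, 80, 82, 83, 82, 80, 77, 73, 71]
swingy = [34, 38, 48, 60, 70, 79, 83, 81, 74, 62, 50, 38]
print(round(mad(steady), 1))  # 3.9  (low spread: warm all year round)
print(round(mad(swingy), 1))  # 15.2 (high spread: pack a coat)
```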
Overall, I’m pretty happy with
how this lesson went. I think it was worth building the context over time and
it pushed them to really connect the visual (line plot of the data) with the
statistic. When we calculated the MAD for the different cities, students
already had an intuition about which cities would have a low MAD and what that
number actually means. I feel confident that I will keep this lesson for next
year with some minor adjustments.
Special thanks to Bob Lochel and Tom Hall, two math teachers who were nice enough to exchange ideas with me about this through email. I'd also like to thank Stephanie Ziegmont for helping develop some of the writing components of the lesson.
Thursday, October 29, 2015
Can you remember more than 7 digits?
The other day, I came across this website that tests your ability to remember digits.
I thought it was interesting that, according to the website, the average person can remember 7 numbers at once. I've heard this before. This is supposedly the reason why telephone numbers are 7 digits long. At this point, I'm sure you're wondering if you are an "average" person. So, go try it...http://www.humanbenchmark.com/tests/number-memory.
Did you do it? I did it a few times myself and the farthest I got was 12 digits (my worst was 10). This probably means that I'm a superhuman or I have evolved past the rest of you. I'm sorry, but your days are numbered. (Numbered! Get it? No, of course you don't.)
I was still curious about this 7 digit claim, so I posed the problem to my students. Can the average person really only remember 7 numbers?
I had all of my students load the website and play along. After everyone was finished, I recorded the results and made a line plot with the data.
I asked the students to talk to their neighbors about whether or not this data confirms that the average person can remember 7 digits. Overwhelmingly, they felt pretty good about it, especially since the median of the data was 7. (I should note that sixth grade standards are all about analyzing distributions.) They were also able to see that more than half of the students could remember at least 7 digits, but fewer than half could remember 8 or more, which was another reason to believe the claim.
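Out of curiosity, both observations are easy to check with a few lines of Python. These scores are a made-up stand-in for our line plot, not the actual class data.

```python
from statistics import median

scores = [4, 5, 5, 6, 6, 7, 7, 7, 7, 8, 8, 9, 10, 12]  # hypothetical digit spans
print(median(scores))                             # 7
print(sum(s >= 7 for s in scores) / len(scores))  # about 0.64: over half reached 7
print(sum(s >= 8 for s in scores) / len(scores))  # about 0.36: under half reached 8
```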
We then discussed strategies for memorizing the numbers. Some students mentioned that they chunked the digits...remembering 62 as "sixty-two" instead of "six-two". Some of them would practice typing the digits to build motor memory.
I also shared a couple of my own strategies...sometimes I could associate a number with something. For instance, once I saw a 53 and, for whatever bizarre reason, I remember that as Bobby Abreu's jersey number. Once I had that image of Bobby Abreu in my head, I stopped worrying about remembering 53. For the longer sets of digits, I would repeat the second half of digits over and over again while staring at the first half of digits. This way, I was relying on both my visual and auditory memory.
Now that the students had some new strategies, I gave them another chance to increase their digits.
As you can see, the data changed, but there really wasn't much improvement. Many students did worse while a few did marginally better. We couldn't make much sense of it, though we suspected that some of these strategies would need to be practiced before we'd see results.
At this point, it would have been nice to keep practicing to see if we could improve, but my period is only 37 minutes long. I also had a couple of situations where students figured out they could copy and paste their answers. Cheating would be difficult to monitor.
Side note: Some of my students with IEPs could only remember three digits. This was consistent each time they made an attempt. This was eye-opening for me...when short-term memory is so weak, learning anything must be a huge struggle.
Saturday, October 24, 2015
Why, Common Core? Why?
The other day, I was checking students' work on mean, median, and mode. One of the problems involved finding out what grade you would need to get on a fourth test to have an average of 85 for the class. It's basically a mean problem in reverse, and for students who have never solved this problem, it can be challenging.
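(A worked example with hypothetical scores: if the first three tests were 80, 84, and 88, then the four tests must total 4 × 85 = 340, so the student would need 340 − (80 + 84 + 88) = 340 − 252 = 88 on the fourth test.)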
One of my students was struggling with this and wrote in her notebook, "WHY COMMON CORE WHY". I laughed and assured her that this problem has been around a lot longer than Common Core. What I really found amusing was that, in terms of content, this sixth grader really hasn't been exposed to some of the more unique things about Common Core. Most of that is happening in elementary school and Pennsylvania only switched over last year, when she was in fifth grade.
In all likelihood, this girl's hatred towards Common Core probably stems from something she overheard her parents say. And now, every time I present her with a challenge, a little voice in the back of her head is going to tell her that this problem is Common Core and it's not really important for her to figure it out. And that's all she needs...another reason to give up.
Tuesday, September 29, 2015
Warm-Ups with a Purpose
Warm-ups last year:
I would display four or five review problems on the Smartboard for students to work through as I took attendance. I would then walk around the classroom to see how students were progressing, but I often struggled to help more than a few of them, and I never had a good sense of how the class did as a whole. We would then review every problem, which was time-consuming and not always helpful. The next day, I would create a few more warm-up exercises, but I never had a clear picture of what my students were still struggling with or why.
Warm-ups this year:
I was asked to move into a new classroom where every student would have his or her own computer. Over the summer, I looked at several websites that would help me use formative assessment on a daily basis. I was happy to find Socrative (which is FREE!) and I use it every day for my warm-ups. Students can quickly log in and start working on the exercises. I can create multiple choice, true/false, or short answer questions, and as students are answering them, I can see their responses live! It looks something like this...
This is kind of a big deal. As soon as a student gets something right or wrong, I know. And there's a lot I can do with that information. During those exercises, you'll routinely hear me say things like...
"Mary, awesome job on that last one. Everyone's having trouble with it."
"Almost everybody's getting #1 wrong. Make sure you read it carefully!"
"Sheri, that last one...how are you supposed to set up an addition problem with decimals?"
"Fawn, you seem to be having trouble with greatest common factor. Can I see your work for that last problem?"
"Hey, Andrew. Where's your notebook? Stop trying to do the work in your head. You're not Rain Man!"
After the students finish the exercises, I share the results with them and I let them tell me which ones we need to review (and which ones we don't). We look at commonly selected wrong answers and think about what mistakes students were making.
At the end of the day, I can throw this data onto a spreadsheet (shown below) and decide which topics/skills students have a firm grasp of and which need further review. I can see how students progress in some skills over time and share that as a model of learning.
I love that students are getting instant feedback. I love that I have evidence of their growth. I love that we can review results as a class and, rather than students only focusing on their own mistakes, we can ask ourselves, what are we, as a class, doing wrong? What are we, as a class, doing right?
Sunday, September 20, 2015
Quizzes without Grades
A few weeks ago, I blogged about how I was going to stop putting grades on quizzes. This decision was heavily influenced by Dylan Wiliam's ideas from his book, Embedded Formative Assessment. I also need to mention that Ashli Black has been very helpful in explaining how she does comments-only grading and in pushing me to design a system of grading that works.
This past week, I was finally able to test-run this idea after the students took a quiz on the Order of Operations. I explained my reasoning to the students and, for the most part, they seemed to be okay with it. I told them that this creates a better working environment where students can feel less embarrassed about their performance and work together to identify and correct their mistakes, no matter how well they did. I marked the quizzes by circling the problem number for every wrong solution and then color-coding three problems that I wanted the student to correct. If a problem had a pink mark, they had to identify their error. If there was a purple mark, they had to rework the problem. If a student did not get anything wrong, I gave them a more challenging problem to solve. Finally, while grades were not written on the quizzes, they were calculated and recorded into the online gradebook so parents and students could see them at home.
Overall, I thought it went really well. The students had about 10 minutes to work alone or together on their mistakes and handed the quizzes back to me. Those who did not finish had extra time overnight to do so.
The next day, I used Socrative (an online quizzing tool) to ask my students how they felt about my "no grade" policy. The good news is that 70% of my students either liked it or didn't care, and more students liked it than disliked it. Still, 30% of my students didn't like it. While it was not obvious in their responses, I believe this frustration comes from not having the instant gratification of knowing what your grade is. This impatience isn't unexpected. Many times, students will ask me if I've graded their quiz ten minutes after handing it in.
In the end, I think the benefit of students revisiting their work and working together to fix mistakes outweighs the annoyance of not getting their grades right away. I'm hoping that, over time, students will begin to also see that benefit.
As a side note, I should say that I'm not really doing "comments-only grading". I had considered writing out comments, but it occurred to me that most of what I'd be writing could later be discovered by the student upon more reflection or figured out with help from a classmate. Writing comments on every wrong answer would have been extremely time-consuming and would have deprived my students of discovering their own mistakes.
Update 2/15/16:
Carolina Vila (@MsVila on twitter) asked me if I have kept up with this system. As with anything I experiment with, I look for more efficient ways to do things. (Okay, maybe I just got lazier.)
I mentioned that I color-coded problems in the beginning of the year and that these colors would tell students how I wanted them to reflect on each problem (identify the error/explain what they did wrong or rework the problem). After doing this a few times, it just seemed to make more sense to have students do both things. On a separate piece of paper, they would have to tell me which three problems they chose to rework, tell me (in sentence form) what they did wrong, and finally, rework the problem.
For students that got everything right, I backed away from trying to give them a more challenging problem, and instead, asked them to help other students make their corrections.
Students would turn in their corrections along with their quiz, I would check to see that it was done, AND THEN, I would write their grade on the quiz to give back to them the next day. When I first started taking grades off of the quizzes, I had hoped that I could just put their grades online for them to check, but I ran into too many issues where students and parents couldn't check the grades online because they lost their passwords or didn't have internet access at home. By finally putting the grades on the quizzes, students complained less and respected the correction process more.
On the student side, one of the biggest misconceptions was that making quiz corrections would improve their quiz grade. I explained that they would get credit for making the corrections (similar to a homework grade), but that their quiz grade would remain the same. The only way their grade would improve would be to retake the quiz, and the only way a student would be allowed to retake a quiz is if he or she made the corrections on the first quiz. Altogether, there is plenty of incentive to make these corrections.