# RESOURCES

## EXPLAINING HOW TO INTERPRET A CONFIDENCE INTERVAL TO STUDENTS

Many students initially interpret confidence intervals in their problem sets or research in the following way:



*there is a 95 percent chance that the mean falls within the range specified by the confidence interval.*

But when you push students on that interpretation, only a few may be able to explain exactly what it means, and for good reason: confidence intervals have a tricky interpretation that tends to be difficult to convey.

The 95 percent confidence interval is a repeated sampling concept, and the idea of repeated sampling tends to produce a difficult teaching moment, especially when students are usually only thinking about a single sample of data and a single mean and confidence interval based on that sample. Remember that the "true" mean is either in the range specified by the confidence interval or it is not. What students might not understand is that any given sample is just one of a large number of hypothetical samples, and if we take the mean and calculate a confidence interval for each of those samples, 95 percent of the 95 percent confidence intervals will contain the true mean.

To overcome this difficult teaching moment in discussing the interpretation of the confidence interval, I found it instructive to show students a large number of confidence intervals calculated from a large number of samples. I used a simulation procedure that set up the data generating process for 100 samples--though more (e.g., 1,000 or 10,000) is also useful. I calculated the mean and a 95 percent confidence interval for each sample. The benefit of this simulation procedure is that you (and students) know the "true" mean--the true mean in my example was the expected value of rolling a six-sided die (so 3.5)--and can show how many confidence intervals contain the true mean. You can then say:

*remember how I said that 95 percent of the 95 percent confidence intervals contain the true mean? Here is what I meant by that:*

[Figure: 100 simulated 95 percent confidence intervals for the sample mean of die rolls; the orange line marks the true mean of 3.5.]

Now if you look at the figure above, 96 out of 100 confidence intervals contain the true mean, which is marked by the orange line. But fear not! You can easily follow this up with a question like:

*what do you think will happen if we draw 1,000 samples, 10,000 samples, or 100,000 samples?*

This way of explaining confidence intervals actually depicts the concept of repeated sampling that is necessary for understanding confidence intervals. Depending on the course level, this procedure might also make a rewarding problem set, which is why I didn't post any code.

*Posted October 17, 2017.*

## Some fun with random numbers

The following code demonstrates that if you randomly sample real numbers between 0 and 1 from a uniform distribution until their sum exceeds 1, the expected number of draws to exceed 1 is e (or 2.71828...).
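The code itself did not survive here, so the following is a minimal Python sketch of that simulation (the trial count of 100,000 is an arbitrary choice):

```python
import random

def draws_until_sum_exceeds_one():
    """Count uniform(0, 1) draws needed for the running sum to exceed 1."""
    total, n = 0.0, 0
    while total <= 1.0:
        total += random.random()
        n += 1
    return n

# Average the draw count over many trials; the mean converges to e ~ 2.71828.
trials = 100_000
estimate = sum(draws_until_sum_exceeds_one() for _ in range(trials)) / trials
print(estimate)  # close to 2.718, varying slightly from run to run
```

With 100,000 trials the estimate typically lands within a couple hundredths of e; increasing the trial count tightens it further.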

## Reflections on Teaching Quantitative Methods

I recently finished teaching Quantitative Research Methods for the first time at the graduate level. With that in mind, I have one thing in particular worth sharing that may (or may not) be helpful to others teaching the class. The biggest challenge in teaching quantitative research methods--though this is probably not exclusive to quantitative methods or methods in general--is overcoming "__the curse of knowledge__." In brief, the curse of knowledge can be characterized as "it is obvious to me, so why isn't it obvious to you?!" This is probably true for anyone with training in quantitative methods from a program--like my graduate program--that aimed to teach you how to figure out and solve your own research design and methods problems. The ability to find solutions on my own has been priceless--until I was put in a situation where I had to communicate what is obvious in my head to people who have never encountered what I'm teaching.

Perhaps I was at an advantage in teaching quantitative methods because I remember distinctly that methods did not come easily to me; I loved methods courses, but they were very challenging. So it could be that I remember what it was like to spend all night staring at a problem set, wondering if repeating what didn't work two hours ago will magically work this time (don't lie, you've been there...). However, I still worried from the start whether I would "skip steps," assuming that the class is on board--I call it the "it is trivial to see" teaching style.

My solution: I decided to force myself to be as detailed as possible by writing out my lectures as if they were part of a textbook on quantitative research methods. Now you might have just thought:

*you want me to write a 300-page book on methods in four months?*

Fair enough. But I found that after about four or five weeks of this type of teaching preparation, I had internalized the habit of walking through processes step by step; it was around week eight that I decided to stop--I'll admit that I crashed from writing 25-30 page lecture notes every week. It was a simple solution (ha...simple, right?) that retrained my brain not to automate the steps and details. Think about it this way: everyone at the college level knows how to calculate the mean, but throw **Σ** into an equation and the angry mob begins to form. Yes, it's unbelievably simple to understand what **Σ** does--add everything to its right--but not to someone who has never seen Greek letters used in mathematics. But that's not the only benefit: I also found that I think about questions involving research design and methods differently after teaching this course. If nothing else, I have the start of a quantitative methods textbook.

I hope this helps.

*Posted September 5, 2017.*

## every word counts

Over the past five years of teaching, I've shifted my focus from quantity to quality, especially regarding writing assignments. One constraint I frequently use is a maximum page count for writing assignments, which deliberately forces students to make difficult choices about what should and should not be in their papers. This is similar to the idea that "true artists know when to eliminate rather than add." Students' papers are much better as a result of this type of writing constraint.

However, I've recently picked up on the idea of creating the same type of constraint for examinations, particularly for short answer questions, where students often have an incentive to put everything they can into the answer; during high school, they were taught that this was the best way to make sure they write something that earns at least partial credit. I remember my high school biology teacher telling us that nearly every answer on the state exam could somehow be related back to the concept of "maintaining homeostasis," which would at least earn some form of partial credit. This is great advice if you want students to pass or succeed on an exam, but not always the best advice if you truly want students to walk away with some understanding of the material.

In my short answer questions, I have found those same incentives to "put everything you have on the page," and I noticed that students were spending two or three times as much effort and exam time to write an answer that could easily be finished in a sentence, maybe two. So for all my short answer exam questions, I now impose a word count, which I determine by first answering the question myself and then finding the maximum number of words necessary to thoughtfully address it. After practicing short answers with word counts in class, I've noticed the results are exactly as I expected: better quality answers in much less space. Having students become conversant in a subject is one of our ultimate goals, but students first need to know the core concepts, mechanisms, and ideas, simply put.


## assignment checklist

In the process of grading assignments, I came across the idea of creating a cover sheet for writing assignments, where students would be forced to place their initials next to a series of formatting and content requirements. Over time, I refined the checklist by adding specific point deductions next to each item so that students see the penalties while completing the form.

- Download the .tex file
- Preview the .pdf file
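The original .tex file is linked above rather than reproduced here, but a cover sheet along these lines is easy to sketch in LaTeX; the checklist items and point deductions below are hypothetical examples, not the author's actual list:

```latex
\documentclass{article}
\begin{document}

\section*{Assignment Checklist}
Initial each line to confirm your paper meets the requirement.
Deductions apply to any unmet item.

\begin{itemize}
  \item[\rule{1cm}{0.4pt}] 12-point font, double-spaced ($-2$ points)
  \item[\rule{1cm}{0.4pt}] Within the maximum page count ($-5$ points)
  \item[\rule{1cm}{0.4pt}] All sources cited in a consistent style ($-5$ points)
\end{itemize}

\end{document}
```

The optional argument to `\item` replaces the bullet with a short rule, giving students a blank to initial next to each requirement.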

## examination cover

While reading Daniel Kahneman's excellent book *Thinking, Fast and Slow*, I came across a section where Kahneman explains how, in the process of grading students' examinations that contained two essay questions, the second essay was graded conditional on the student's performance on the first. Kahneman refers to this as a halo effect, where, for example, a solid answer on the first essay question will carry over to the student's second essay. A good answer on question one implies that a student will receive the benefit of the doubt on a poor answer for essay two.

Kahneman's idea of a halo effect led me to rethink my grading and ask the question that none of us really wants to think about: is my grading biased by each student's previous performance on other assessments? A student's actual effort and ability are unknown quantities, and our job as teachers is to award a grade as close as possible to that student's unknown, but true, effort and ability. Suppose a student who normally receives grades in the B/B+ range received a C+ on their first exam. The halo effect implies that the teacher could believe that student's subsequent work is in the C+ to B- range.

Do I believe I'm giving C's to B+ students? No. But it's worth taking steps to eliminate halo effects when possible. To do this, I've devised a simple examination cover sheet: each student receives a cover sheet that they place their name on, which gets returned to me before the exam begins. On the cover sheet is a randomly generated number, which replaces the student's name on all exam pages. Student feedback has been highly supportive after I discussed the halo effect (a learning moment for them).
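The random-number scheme is simple enough to sketch in a few lines of Python; the roster names, ID range, and seed here are made-up illustrations, not details from the original post:

```python
import random

def assign_exam_ids(roster, seed=None):
    """Map each student to a unique random exam number for blind grading."""
    rng = random.Random(seed)
    # Sample without replacement so no two students share an ID.
    ids = rng.sample(range(1000, 10000), k=len(roster))
    return dict(zip(roster, ids))

roster = ["Student A", "Student B", "Student C"]  # hypothetical roster
key = assign_exam_ids(roster, seed=42)
# Keep `key` private: grade exams by number only, then map back to names.
print(key)
```

Sampling without replacement guarantees unique numbers, and fixing the seed lets you regenerate the same key if the lookup table is lost.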

## draw, pair, share

This may be one of the craziest things I've tried in the classroom, but I've had great success using it in classes where students are required to learn a lot of terms and definitions. I'm sure I didn't invent DPS (a quick Google search after I typed that confirmed my suspicions), but at the very least I can attest to its value. Students who are clearly hands-on or visual learners are great at this exercise, and I've found that traditional lecture-learners (you know - all 5 percent of our classes) find this task more challenging - which is also good because it forces them to think about the material differently (which is our job to make them do). But as part of a balanced set of teaching strategies, I've found that DPS is a fun break from lecture and discussion.

The activity is simple. Pick out a set of terms and definitions that you want students to focus on - for example, suppose there is a set of terms that a significant number of students failed to identify on a previous assessment - and have students draw a picture that helps remind them of the term and definition. Have students pick one partner. Show students the term and definition on the screen, and then give them two to three minutes to draw a picture that makes them think of the term and definition. After the two minutes expire, students discuss their picture with their partner and explain why it reminds them of the term and definition; the teacher then asks for some volunteers to present their drawing and explanation to the class. This activity helps students discuss how they visualize the material, but also allows students to learn from their peers.


__Make sure to walk around the room and ask different groups to explain their drawings__ - perhaps one or two groups per term. When taken seriously by the instructor, this is a successful activity, and students who learn from visual aids try hard and come up with really creative ideas. Try it - I dare you!