The class of ~200 students is broken into 8 discussion sections, each with ~25 students. Each section meets once per week. Students register for a specific section and must attend that section. Half the sections are led by the instructor and the other half are led by a graduate student TA. In other words, the instructor and TA each spend 4 hours per week leading small-group discussions. The instructor and TA switch sections every two weeks so that students see the instructor half the time and the TA half the time.
The general goal of the discussion sections is to focus on deeper learning that cannot easily be achieved in lectures. The specific goal is to teach the students the skills necessary to understand, evaluate, and explain primary journal articles in the area of cognitive psychology (which should transfer readily to other areas of psychology and many other disciplines). Note that this goal is clearly stated in the Learning Objectives. We chose this goal because it is a useful skill and because we had previously found that students had a hard time transferring the idealized view of research that they had learned in their Statistics and Research Methods courses to real journal articles. Although the students have previously taken these background courses, they need to be reminded of key concepts (e.g., p values, interactions, confounds). To avoid lecturing during our precious small-group discussion time, we provide lecture videos on Research Methods that provide the necessary reminders.
Although a class of 25 students is better than a class of 200 students, it is still very challenging to get all 25 students to participate. Many of the discussion section activities therefore involve breaking the class into groups of 2-4 students, who work together on an activity.
Pre-Test and Post-Test
We can easily evaluate the lecture portion of the course by comparing exam scores with those from the traditional version of the course (which used largely the same exam questions). However, the traditional course did not include discussion sections, so there is no basis for comparison. As a result, we administer a pre-test and post-test in the first and last weeks of the course to assess learning of the discussion section material.
There are two forms of the test, A and B. Half the sections take A as the pre-test and B as the post-test, and half take B as the pre-test and A as the post-test. Both forms begin with a description of an imaginary experiment (objectives, design, and results), followed by multiple-choice questions designed to probe the students’ ability to understand and evaluate the experiment. It is extremely helpful to see which questions students often get wrong, because this helps us refine the discussion sections to maximize learning.
The pre-test scores do not contribute to the students’ grades (because that would be unfair given that we haven’t taught them the material yet). The post-test scores do contribute to the grades (so that the students are motivated to come to the discussion section in the last week). However, the post-test scores have only a minor effect on the final grade, and the students are encouraged not to study for the post-test. This allows us to see how much they are learning from the discussion section activities per se, without a major contribution from additional studying outside the discussion sections.
Worksheets
In Weeks 4–9, the students complete a worksheet prior to their discussion sections. In most cases, the worksheet guides them through a journal article, focusing them on relevant information and giving them practice explaining hypotheses, methods, results, and conclusions. In addition, annotations have been added to some of the journal articles to provide essential background information. The worksheet is provided as a Microsoft Word file (which can also be edited by Pages, Google Docs, etc.). An example worksheet from the Attention unit can be accessed here.
The students turn in the completed worksheets (in PDF format to avoid compatibility issues) the night before their discussion sections. They must attend discussion section to get credit for the worksheets. The worksheets are graded by the undergraduate TAs, each of whom takes responsibility for ~100 worksheets per week. The correct answers are also posted online after the due date. In most cases, we do not spend much time going through the worksheet answers in class (because this would not take advantage of the small-group format).
We do not have worksheets for the first two weeks, mainly because of the challenges that arise when students add the class after the quarter begins. We also have no worksheet for the last week, mainly because we administer the post-test that week, which does not leave much time for discussing a journal article.
Week 1: Introductions and Pre-Test
The Week 1 discussion sections begin with introductions. Each person (beginning with the instructor) gives their name, where they’re from, and something unusual about themselves. This is designed to be fun, and it gets the students used to talking in class. The students also make name placards that they bring to class and put on their desks every week so that everyone knows everyone else’s name. We try to call on the students by name to reinforce a friendly, personalized atmosphere. After the introductions, the students are given the pre-test so that we can assess their baseline knowledge.
Week 2: Describing Results
We find that students often have trouble writing clear and concise descriptions of research findings, and the goal of Week 2 is to provide them with explicit instruction and practice at this important skill. We begin by showing results from an imaginary experiment, followed by a description of the results. We tell the students that one method for determining whether a description is clear and complete is to have someone else draw the results on the basis of the written description (back-translation). We then have a student come to the chalkboard and attempt to draw the results of the imaginary experiment on the basis of the written description. Students are then broken into pairs, and each person in a pair gets two different patterns of results from the same imaginary experiment. For example, Student 1 would get patterns A and B, and Student 2 would get patterns C and D. Each student writes a verbal description of his or her results, and the other student tries to draw the results from the verbal description (without seeing the data). The students revise their descriptions as necessary until the other student can accurately draw the results from the description. The instructor or TA also circulates during class to provide assistance.
Week 3: Understanding P Values
The goal of Week 3 is for the students to understand what a p value really means. Prior to coming to class, the students watch a series of lecture segments that review the basics of null hypothesis statistical testing (see Research Methods 1 Lecture and Quiz in the Detailed Structure of the Course). We begin each discussion section by having the class work as a group to design a simple experiment with two conditions. We then talk about the four possible outcomes of this experiment (true vs. false null hypothesis × significant vs. nonsignificant p value), focusing on the probability that each will occur.
We then show the students an Excel simulation of the experiment in which the population means are known and samples are drawn at random from the populations. For a given simulation, the spreadsheet shows the single-subject data, the sample means, and the t and p values. We begin with simulations in which the null hypothesis is true. After they’ve seen simulations of a few replications of the experiment, we quickly run through 100 simulated replications, having the students count how often the p value is significant (typically around 5 times). This teaches them what it really means to have an alpha of .05 – if the null hypothesis is true, a Type I error will occur in about 5% of experiments. We then show simulations in which the null hypothesis is false and the effect size is large. This shows them that statistics can, under some conditions, reliably yield true significant effects. Finally, we show simulations in which the null hypothesis is false but the effect size is small. Most of the p values are not significant in these simulations. From this, the students see that Type II errors can be very common when statistical power is low.
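For readers who want to reproduce the classroom demonstration, the sketch below is an illustrative stand-in for the Excel simulation (it is not the actual spreadsheet). It draws random samples from two populations, runs many simulated replications, and counts how often the result comes out "significant." For simplicity it uses a normal approximation to the t test, which is reasonable for the sample sizes used here; all function names are hypothetical.

```python
import random
import statistics

def simulate_experiment(mu_a, mu_b, sigma=1.0, n=30, rng=random):
    """Run one simulated two-condition experiment and return the two-sided
    p value (normal approximation to the t test, adequate for n >= 30)."""
    a = [rng.gauss(mu_a, sigma) for _ in range(n)]
    b = [rng.gauss(mu_b, sigma) for _ in range(n)]
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - statistics.NormalDist().cdf(abs(z)))

def significant_fraction(mu_a, mu_b, replications=1000, alpha=0.05):
    """Fraction of simulated replications with p < alpha."""
    rng = random.Random(0)  # fixed seed so the demonstration is reproducible
    hits = sum(simulate_experiment(mu_a, mu_b, rng=rng) < alpha
               for _ in range(replications))
    return hits / replications

# Null true (equal means): roughly 5% of replications are significant,
# i.e., Type I errors. Large effect: nearly all are significant.
# Small effect: most are not significant (frequent Type II errors).
```

With equal population means, roughly 5% of replications come out significant; with a large effect size, nearly all do; with a small effect size, most do not. These are the same three lessons as the classroom simulations.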
Week 4: Understanding and Explaining a Simple Journal Article
The goal of Week 4 is for the students to get experience reading and attempting to explain a simple journal article. Because this discussion section occurs during the Perception unit, we have them read a journal article on the topic of perception (Vecera, S. P., Vogel, E. K., & Woodman, G. F. (2002). Lower region: a new cue for figure-ground assignment. Journal of Experimental Psychology: General, 131, 194-205). They fill out a worksheet that guides them through the article prior to class.
In class, we begin by describing a simple imaginary experiment and providing a sample written description of the hypothesis, methods, results, and conclusion (7 sentences). We provide some explicit strategies for writing a brief summary of this nature (e.g., “methods and results should be written in past tense”). The class is then given the assignment of writing a summary of the hypothesis, methods, results, and conclusion of the journal article they read for class. The students break into pairs for this. Each student in a pair writes a summary, and then the two students swap and give each other constructive criticism. The instructor or TA also circulates during class to provide assistance.
Week 5: Main Effects and Interactions; Designing an Experiment
Each Week 5 discussion section is divided into two parts. The goal of the first part is for the students to be able to assess the presence of main effects and interactions in 2-factor experiments. Prior to class, they watch a series of lecture videos reminding them of these concepts and fill out a worksheet in which they attempt to assess the main effects and interactions in a few imaginary experiments. In our experience, students tend to learn simple heuristics for this (e.g., “it’s an interaction if the lines cross”), but they don’t understand the underlying concepts and can’t apply them to new data sets. The lectures and worksheet therefore provide multiple different ways of understanding main effects and interactions.
During class, we describe a simple 2 x 2 experiment. We then show four possible patterns of results, with one shown as a table of means, one shown as a bar graph, and two shown as line graphs (to discourage the mindless application of a single heuristic). The class is divided into groups of 2-4 students, and each group works to figure out whether each pattern has a main effect of Factor A, a main effect of Factor B, and/or an interaction. We then discuss the answers as a group, focusing on the many different ways to decide if an interaction or main effect is present (e.g., by looking at slopes of line graphs, by computing differences and means). We then repeat the process with four new sets of results.
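The "many different ways to decide" ultimately reduce to simple arithmetic on the four cell means. This short Python sketch (hypothetical helper name, assuming a 2 × 2 design with one mean per cell) computes the difference-of-means contrasts that define each effect:

```python
def effects(m11, m12, m21, m22):
    """Given the four cell means of a 2 x 2 design (first index = level of
    Factor A, second index = level of Factor B), return the main-effect and
    interaction contrasts. A value of 0 means no effect for that term
    (in the cell means themselves, ignoring sampling error)."""
    main_a = (m21 + m22) / 2 - (m11 + m12) / 2   # do the row means differ?
    main_b = (m12 + m22) / 2 - (m11 + m21) / 2   # do the column means differ?
    interaction = (m22 - m21) - (m12 - m11)      # difference of differences
    return main_a, main_b, interaction
```

For example, the crossover pattern effects(10, 20, 20, 10) yields no main effect of either factor but a large interaction, whereas the additive pattern effects(10, 20, 15, 25) yields two main effects and no interaction. Seeing the same arithmetic applied to a table, a bar graph, and a line graph helps discourage the mindless application of a single heuristic.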
The goal of the second part is to prepare the students for the next week’s journal article by having them design an experiment that tests the same hypothesis as the journal article. Specifically, they design an experiment testing the hypothesis that talking on a cell phone while driving impairs driving performance. We do this as a whole class. This focuses the students on the challenges that need to be overcome in designing an experiment with real-world constraints (e.g., small budget, need to establish causality with an experimental manipulation, need an ethical design in which people won’t get hurt in car accidents).
Week 6: Evaluating a Real Experiment
The main goal of Week 6 is for the students to get experience understanding and evaluating a more complex study. Because this discussion section occurs during the Attention unit, we have them read a journal article on the topic of attention (Strayer, D.L., & Johnston, W.A. (2001). Driven to distraction: Dual-task studies of simulated driving and conversing on a cellular telephone. Psychological Science, 12, 462-466). The students fill out a worksheet that guides them through the article prior to class. You can see the worksheet here.
The class begins by having the students identify the main effects and interactions in the data from the first experiment in the paper (to reinforce the lessons from the previous week). We then discuss the statistical analyses provided in the paper and the conclusions that can and cannot be drawn from them. We then discuss several issues related to the paper: a) limitations on the conclusions that can be drawn from laboratory studies; b) follow-up studies that should be done (and have been done); c) the underlying attentional mechanisms that might be responsible for the observed effects; and d) the public policy implications of the research.
Week 7: Discrete vs. Continuous Variables, Sample Size, and Experimental Design
The goal of the first part of the Week 7 discussion sections is for the students to understand why some studies require hundreds of participants whereas many experiments in cognitive psychology have only a couple dozen participants. This begins with a discussion of the journal article they have read for the day, which focuses on memory (Brady, T.F., Konkle, T., Alvarez, G.A., & Oliva, A. (2008). Visual long-term memory has a massive storage capacity for object details. Proceedings of the National Academy of Sciences, 105, 14325-14329). The students fill out a worksheet that guides them through the article prior to class. This is a fairly simple study, and the worksheet gives them additional practice explaining hypotheses, methods, results, and conclusions.
We begin the class period by discussing how the study took a categorical variable (“correct” or “incorrect”) and turned it into a quantitative variable by aggregating across multiple trials per subject. We then describe political polls, in which this sort of averaging is not possible and hundreds of subjects are needed for a ±5% margin of error. We then describe an imaginary memory experiment in which subjects get only a single trial, and we show simulated results with the corresponding chi-squared and p values. We then compare this with a simulation of an experiment that is identical except that each subject has 10 trials and the dependent variable is percent correct (allowing us to use a t test rather than a chi-squared test). This allows the students to see how a quantitative dependent variable typically leads to much greater statistical power than a categorical variable.
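The power difference between the two designs can be simulated directly. The sketch below is an illustrative stand-in for the classroom simulations, not the actual materials: it assumes two groups of 24 subjects with true accuracies of 60% and 75%, replaces the chi-squared test with the algebraically equivalent two-proportion z test (for a 2 × 2 table with 1 df, the chi-squared statistic is the square of this z), and uses a normal approximation in place of the t test. All names are hypothetical.

```python
import random
import statistics

ND = statistics.NormalDist()

def chi2_p(a_correct, a_n, b_correct, b_n):
    """Two-sided p for a 2 x 2 contingency table, computed via the
    two-proportion z test (equivalent to the 1-df chi-squared test)."""
    p_pool = (a_correct + b_correct) / (a_n + b_n)
    se = (p_pool * (1 - p_pool) * (1 / a_n + 1 / b_n)) ** 0.5
    if se == 0:
        return 1.0
    z = (a_correct / a_n - b_correct / b_n) / se
    return 2 * (1 - ND.cdf(abs(z)))

def ttest_p(a, b):
    """Normal-approximation two-sample test on per-subject scores."""
    n = len(a)
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - ND.cdf(abs(z)))

def power(trials_per_subject, p_a=0.60, p_b=0.75, n=24, reps=500, alpha=0.05):
    """Estimated probability of detecting the accuracy difference."""
    rng = random.Random(1)  # fixed seed for reproducibility
    hits = 0
    for _ in range(reps):
        if trials_per_subject == 1:
            # One trial per subject: categorical outcome, contingency test.
            a = sum(rng.random() < p_a for _ in range(n))
            b = sum(rng.random() < p_b for _ in range(n))
            p = chi2_p(a, n, b, n)
        else:
            # Many trials per subject: percent correct, two-sample test.
            a = [statistics.mean(rng.random() < p_a
                                 for _ in range(trials_per_subject))
                 for _ in range(n)]
            b = [statistics.mean(rng.random() < p_b
                                 for _ in range(trials_per_subject))
                 for _ in range(n)]
            p = ttest_p(a, b)
        hits += p < alpha
    return hits / reps
```

Under these assumptions, the single-trial design detects the difference only a small fraction of the time, whereas averaging 10 trials per subject pushes power well above 90%; the exact numbers depend on the assumed accuracies and sample size.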
The discussion of categorical versus quantitative variables is an important preamble to the second part of the class period, in which the students design an experiment to examine whether eyewitness memory can be biased by the nature of the questions being asked of the witness (which is the topic of the journal article for the next week). Left to their own devices, students tend to design experiments with categorical dependent variables and very low statistical power, and the preparation from the first half of the class period lets them see why quantitative variables are typically preferable. The actual journal article for the next week has one experiment with a quantitative dependent variable and another experiment with a categorical dependent variable (and therefore a much larger sample size).
Week 8: Understanding and Designing More Complex Experiments
The goal of the first part of the Week 8 discussion sections is for the students to understand the somewhat complex journal article they have read for the day, which focuses on memory distortions (Loftus, E. F., & Palmer, J. C. (1974). Reconstruction of automobile destruction: An example of the interaction between language and memory. Journal of Verbal Learning & Verbal Behavior, 13, 585-589). The students fill out a worksheet that guides them through the article prior to class.
The first part of the class period focuses on what was found in the two experiments of the study, what can be concluded from the statistical analyses, the additional analyses that would be conducted in a more modern study, and the use of a second experiment to rule out an alternative explanation of the first experiment.
In the second part of the class period, the students design an experiment to test the hypothesis that is the focus of the journal article they will read for the next week. This helps focus the students on exactly what hypothesis is being tested and some of the challenges involved in testing this hypothesis.
Week 9: Understanding Confounds and Counterbalancing
The goal of the Week 9 discussion sections is for the students to understand what a confound is, how confounds differ from other types of problems, how counterbalancing is used to avoid potential confounds, and modern practices for analyzing factorial experiments. The journal article for this week focuses on the “testing effect,” which the students have experienced via the constant quizzing in the lecture portion of the course, and it also provides a “textbook example” of how a factorial experiment should be analyzed (Roediger, H. L., & Karpicke, J. D. (2006). Test-enhanced learning: taking memory tests improves long-term retention. Psychological Science, 17, 249-255). The students fill out a worksheet that guides them through the article prior to class.
The first part of the class focuses on defining the term “confound” and distinguishing confounds from nonsystematic differences between conditions (e.g., due to sampling error). We then discuss some non-obvious potential confounds in the journal article and how the researchers avoided these confounds by means of counterbalancing. We then go through the results, once again picking out the main effects and interactions, and discuss the details of the statistical analyses provided in the paper. The class ends with a discussion of how the students might implement the ideas presented in the journal article to help them learn better in their other courses.
Week 10: Post-Test
In Week 10, the students simply take the post-test.
Up next: Exams and Grading