The flipped classroom approach has been used for years in some disciplines, notably within the humanities. Barbara Walvoord and Virginia Johnson Anderson promoted the use of this approach in their book Effective Grading (1998). They proposed a model in which students gain first-exposure learning prior to class and focus on the processing part of learning (synthesizing, analyzing, problem solving, etc.) in class.
To ensure that students do the preparation necessary for productive class time, Walvoord and Anderson propose an assignment-based model in which students produce work (writing, problems, etc.) prior to class. The students receive productive feedback through the processing activities that occur during class, reducing the need for the instructor to provide extensive written feedback on the students’ work. Walvoord and Anderson describe examples of how this approach has been implemented in history, physics, and biology classes, suggesting its broad applicability.
Maureen Lage, Glenn Platt, and Michael Treglia described a similar approach, which they called the inverted classroom, and reported its application in an introductory economics course (2000). Lage, Platt, and Treglia initiated their experiment in response to the observation that the traditional lecture format is incompatible with some learning styles (though see Pashler et al., 2008, for a critical review of the evidence on learning styles). To make their course more compatible with their students’ varied learning styles, they designed an inverted classroom in which they provided students with a variety of tools to gain first exposure to material outside of class: textbook readings, lecture videos, PowerPoint presentations with voice-over, and printable PowerPoint slides.
To help ensure student preparation for class, students were expected to complete worksheets that were periodically but randomly collected and graded. Class time was then spent on activities that encouraged students to process and apply economics principles, ranging from mini-lectures in response to student questions to economic experiments to small-group discussions of application problems. Both student and instructor responses to the approach were positive, with instructors noting that students appeared more motivated than when the course was taught in a traditional format.
Eric Mazur and Catherine Crouch describe a modified form of the flipped classroom that they term peer instruction (2001). Like the approaches described by Walvoord and Anderson and by Lage, Platt, and Treglia, the peer instruction (PI) model requires that students gain first exposure prior to class, and it uses assignments (in this case, quizzes) to help ensure that students come to class prepared. Class time is structured around alternating mini-lectures and conceptual questions. Importantly, the conceptual questions are not posed informally and answered by student volunteers, as in traditional lectures; instead, all students must answer each conceptual question, often via “clickers” (handheld personal response systems) that allow students to answer anonymously and allow the instructor to see (and display) the class data immediately. If a large fraction of the class (usually 30–65%) answers incorrectly, students reconsider the question in small groups while instructors circulate to promote productive discussions. After discussion, students answer the conceptual question again. The instructor provides feedback, explaining the correct answer and following up with related questions if appropriate. The cycle is then repeated with another topic, with each cycle typically taking 13–15 minutes.
Does it work?
Mazur and colleagues have published results suggesting that the PI method produces significant learning gains when compared to traditional instruction (2001). In 1998, Richard Hake gathered data on 2084 students in 14 introductory physics courses taught by traditional methods (defined by the instructor as relying primarily on passive-student lectures and algorithmic-problem exams), allowing him to define an average gain for students in such courses using pre/post-test data. Hake then compared these results to those seen with interactive engagement methods, defined as “heads-on (always) and hands-on (usually) activities which yield immediate feedback through discussion with peers and/or instructors” (Hake, 1998, p. 65), for 4458 students in 48 courses. He found that students taught with interactive engagement methods exhibited learning gains almost two standard deviations higher than those observed in the traditional courses (0.48 ± 0.14 vs. 0.23 ± 0.04). Assessment of classes taught by the PI method provides evidence of even greater learning gains, with students in PI courses exhibiting learning gains ranging from 0.49 to 0.74 over eight years of assessment at Harvard University (Crouch and Mazur, 2001). Interestingly, two introductory physics classes taught by traditional methods during the assessment period at Harvard showed much lower learning gains (0.25 in a calculus-based course in 1990 and 0.40 in an algebra-based course in 1999).
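The gains quoted above are Hake’s average normalized gain, ⟨g⟩ = (post − pre) / (100 − pre): the fraction of the available improvement (from the class’s pre-test average up to a perfect score) that students actually achieved. A minimal sketch of that calculation, with illustrative numbers that are not drawn from the studies cited:

```python
def normalized_gain(pre_pct: float, post_pct: float) -> float:
    """Hake's average normalized gain <g>.

    pre_pct and post_pct are class-average percentage scores on the
    same concept test given before and after instruction. The result
    is the fraction of the maximum possible improvement achieved.
    """
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# Illustrative only: a class averaging 40% before and 70% after
# instruction has gained half of the available headroom.
print(normalized_gain(40, 70))  # → 0.5
```

Because ⟨g⟩ divides by the room left for improvement, it lets courses with very different pre-test averages be compared on a common scale, which is what makes the 0.48 vs. 0.23 comparison meaningful.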
Carl Wieman and colleagues have also published evidence that flipping the classroom can produce significant learning gains (Deslauriers et al., 2011). Wieman and colleagues compared two sections of a large-enrollment physics class. The sections were both taught via interactive lecture methods for the majority of the semester and showed no significant differences prior to the experiment. During the twelfth week of the semester, one section was “flipped,” with first exposure to new material occurring prior to class via reading assignments and quizzes, and with class time devoted to small-group discussion of clicker questions and questions that required written responses. Although class discussion was supported by targeted instructor feedback, no formal lecture was included in the experimental section. The control section was encouraged to read the same assignments prior to class and answered most of the same clicker questions for summative assessment, but it was not intentionally engaged in active learning exercises during class. During the experiment, student engagement increased in the experimental section (from 45 ± 5% to 85 ± 5%, as assessed by four trained observers) but did not change in the control section. At the end of the experimental week, students completed a multiple-choice test, with an average score of 41 ± 1% in the control section and 74 ± 1% in the “flipped” section, an effect size of 2.5 standard deviations. Although the authors did not address retention of the gains over time, this dramatic increase in student learning supports the use of the flipped classroom model.
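The “effect size of 2.5 standard deviations” is a standardized mean difference: the gap between the two section means expressed in units of the spread of the scores. A sketch of one common version of that statistic (Cohen’s d with a pooled standard deviation; the numbers below are invented for illustration, not taken from the study):

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference (Cohen's d) between two groups:
    (mean_a - mean_b) divided by the pooled sample standard deviation."""
    na, nb = len(group_a), len(group_b)
    var_a = statistics.variance(group_a)  # sample variance (n - 1)
    var_b = statistics.variance(group_b)
    pooled_sd = (((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Invented scores: tightly clustered groups far apart yield a large d.
flipped = [74, 76, 75]
control = [40, 42, 41]
print(cohens_d(flipped, control))  # → 34.0
```

An effect size of 2.5 is unusually large for an educational intervention; effect sizes in education research are more often in the 0.2–0.8 range, which is part of why the Deslauriers result attracted so much attention.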
How People Learn, the seminal work by John Bransford, Ann Brown, and Rodney Cocking, reports three key findings about the science of learning, two of which help explain the success of the flipped classroom. Bransford and colleagues assert that
“To develop competence in an area of inquiry, students must: a) have a deep foundation of factual knowledge, b) understand facts and ideas in the context of a conceptual framework, and c) organize knowledge in ways that facilitate retrieval and application” (p. 16).
By providing an opportunity for students to use their new factual knowledge while they have access to immediate feedback from peers and the instructor, the flipped classroom helps students learn to correct misconceptions and organize their new knowledge such that it is more accessible for future use. Furthermore, the immediate feedback that occurs in the flipped classroom also helps students recognize and think about their own growing understanding, thereby supporting Bransford and colleagues’ third major conclusion:
“A ‘metacognitive’ approach to instruction can help students learn to take control of their own learning by defining learning goals and monitoring their progress in achieving them” (p. 18).
Although students’ thinking about their own learning is not an inherent part of the flipped classroom, the higher-order cognitive work involved in class activities, together with the ongoing peer and instructor interaction that typically surrounds them, can readily lead to the metacognition associated with deep learning.
The mechanism used for first exposure can vary, from simple textbook readings to lecture videos to podcasts or screencasts. For example, Grand Valley State University math professor Robert Talbert provides screencasts on class topics on his YouTube channel, while Vanderbilt computer science professor Doug Fisher provides his students with video lectures prior to class. These videos can be created by the instructor or found online on YouTube, the Khan Academy, MIT’s OpenCourseWare, Coursera, or other similar sources. The pre-class exposure doesn’t have to be high-tech, however; in the Deslauriers, Schelew, and Wieman study described above, students simply completed pre-class reading assignments.
In all the examples cited above, students completed a task associated with their preparation, and that task was associated with points. The assignment can vary; the examples above used tasks ranging from online quizzes to worksheets to short writing assignments, but in each case the task provided an incentive for students to come to class prepared by speaking the common language of undergraduates: points. In many cases, grading for completion rather than accuracy can be sufficient, particularly if class activities will provide students with the kind of feedback that grading for accuracy usually provides.
The pre-class assignments that students complete as evidence of their preparation can also help both the instructor and the student assess understanding. Pre-class online quizzes can allow the instructor to practice Just-in-Time Teaching (JiTT; Novak et al., 1999), in which the instructor tailors class activities to focus on the elements with which students are struggling. If automatically graded, the quizzes can also help students pinpoint areas where they need help. Pre-class worksheets can likewise help focus student attention on areas where they are struggling, and can serve as a departure point for class activities, while pre-class writing assignments help students clarify their thinking about a subject, thereby producing richer in-class discussions. Importantly, much of the feedback students need is provided in class, reducing the need for instructors to provide extensive commentary outside of class (Walvoord and Anderson, 1998). In addition, many of the activities used during class time (e.g., clicker questions or debates) can serve as informal checks of student understanding.
If students gain basic knowledge outside of class, then class time is needed to promote deeper learning. Again, the activity will depend on the learning goals of the class and the culture of the discipline. For example, Lage, Platt, and Treglia described experiments students did in class to illustrate economic principles (2000), while Mazur and colleagues focused on student discussion of conceptual “clicker” questions and quantitative problems focused on physical principles (2001). In other contexts, students may spend class time engaged in debates, data analysis, or synthesis activities. The key is that students use class time to deepen their understanding and increase their skill at applying their new knowledge.
Anderson LW and Krathwohl D (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. New York: Longman.
Berrett D (2012). How ‘flipping’ the classroom can improve the traditional lecture. The Chronicle of Higher Education, Feb. 19, 2012.
Bransford JD, Brown AL, and Cocking RR (2000). How people learn: Brain, mind, experience, and school. Washington, D.C.: National Academy Press.
Crouch CH and Mazur E (2001). Peer instruction: Ten years of experience and results. American Journal of Physics 69: 970-977.
Deslauriers L, Schelew E, and Wieman C (2011). Improved learning in a large-enrollment physics class. Science 332: 862-864.
Fitzpatrick M (2012). Classroom lectures go digital. The New York Times, June 24, 2012.
Hake R (1998). Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses. American Journal of Physics 66: 64-74.
Lage MJ, Platt GJ, and Treglia M (2000). Inverting the classroom: A gateway to creating an inclusive learning environment. The Journal of Economic Education 31: 30-43.
Mazur E (2009). Farewell, Lecture? Science 323: 50-51.
Novak G, Patterson ET, Gavrin AD, and Christian W (1999). Just-in-Time Teaching: Blending Active Learning with Web Technology. Upper Saddle River, NJ: Prentice Hall.
Pashler H, McDaniel M, Rohrer D, and Bjork R (2008). Learning styles: Concepts and evidence. Psychological Science in the Public Interest 9: 103-119.
Walvoord BE, and Anderson VJ (1998). Effective grading: A tool for learning and assessment. San Francisco: Jossey-Bass.