E-xcellence in Teaching
Editors: Manisha Sawhney & Natalie Ciarocco

  • 04 Dec 2017 8:18 AM | Anonymous

    Mindfulness and Meditation in Psychology Courses

    Jennifer A. McCabe & Dara G. Friedman-Wheeler

    Goucher College

    As part of a college-wide “theme semester” on mindfulness in spring 2016, we incorporated mindfulness into four psychology classes. Here we share our experiences with regard to course design, assignments and activities, and student feedback. For instructors who are considering including mindfulness and/or meditation in psychology courses, we conclude with a reflection and overall assessment of what went well and what could be modified for the future, integrated with the results of our research on mindfulness in the college classroom.

    Defining Mindfulness and Its Relevance to Education

    A prominent definition of mindfulness in contemporary psychology is “paying attention… on purpose, in the present moment, and non-judgmentally” (Kabat-Zinn, 1994, p. 4). Mindfulness has received much attention recently, in the research literature and elsewhere (for an overview, see Curtiss & Hofmann, 2017). Studies have suggested benefits of mindfulness for physical health (e.g., pre-hypertension; Hughes et al., 2013), mental health (e.g., subjective well-being; Sedlmeier et al., 2012), and cognitive performance (e.g., working memory; Mrazek, Franklin, Phillips, Baird, & Schooler, 2013).

    Increasingly, researchers are studying mindfulness activities in elementary and secondary schools (e.g., Black & Fernando, 2014; Britton et al., 2014; Mindful Schools, 2017). Research is just beginning to emerge on the effects of mindfulness in the college classroom (e.g., Helber, Zook, & Immergut, 2012; Ramsburg & Youmans, 2014).

    In the next two sections, each author provides a first-person narrative of her experiences integrating mindfulness into psychology courses.


    Cognitive Psychology Courses (JM)

    I approached this semester with enthusiasm about mindfulness, but a lack of experience. I decided to commit to a regular practice of mindfulness exercises (10 minutes daily) using Headspace (https://www.headspace.com/), which helped bring a degree of authenticity (and confidence) to my courses, and also personal benefit in terms of well-being and focus.

    In integrating mindfulness into Cognitive Psychology, a mid-level undergraduate course, I added a section that defined mindfulness to my syllabus, connected mindfulness to other topics in the course (e.g., perception, attention, memory, decision-making), and invited students to engage in meaningful study and practice of mindfulness throughout the semester. I added a course learning objective connecting mindfulness to metacognition: “Improve your metacognitive skills (knowing what you know, learning how to learn), through traditional book learning and through mindful practice and reflection.” (Syllabi for courses discussed in this essay are available by request.)

    On the first day of class, I asked students questions about mindfulness to gauge pre-existing knowledge and practice, before their first mindful meditation exercise (Day 1 of Headspace). At least once per week, class included 5-10 minutes of guided mindfulness exercises. To prepare students, I asked them to arrive on time, to listen to instructions, and to be still and quiet during the meditation time. I assured them that it was okay not to engage in meditation. I emphasized that in addition to possible personal benefits, the exercises might provide insight into research we would read on mindfulness and cognition.

    Throughout the semester, I chose short guided exercises for class use, including several from the UCLA Mindful Awareness Research Center (http://marc.ucla.edu/body.cfm?id=22) and Mindfulness for Teens (http://mindfulnessforteens.com/guided-meditations/). Some were sitting exercises and some were standing; some had longer periods of silence and some were narrated throughout. Whenever possible, I connected the mindfulness activity to the course topic (e.g., body scan meditation for Attention; guided visualization for Visual Imagery). One day we went outside and I guided students through an exercise to focus on aspects of the environment (e.g., colors, shapes, movement; from a training session with Dr. Philippe Goldin).

    Regarding assessment, I revised my existing article summary and reflection assignment to focus on research relating mindfulness/meditation to course topics. For each article, students completed a summary-and-reflection form and engaged in group discussions during class. I quickly discovered that there were not many published articles about the impact of mindfulness on cognition that were appropriate for students in a mid-level undergraduate course.

    For the topics Perception and Attention, I assigned half the students an article about enhancing visuospatial processing using varieties of meditation (Kozhevnikov, Louchakova, Josipovic, & Motes, 2009), and the other half an article about improvements in perceptual discrimination and sustained attention following meditation training (MacLean et al., 2010). With respect to Memory, I assigned half an article about how brief mindfulness training can improve verbal GRE performance as mediated by enhancing working memory (Mrazek et al., 2013), and the other half read about increases in false memory after meditation (Wilson, Mickes, Stolarz-Fantino, Evrard, & Fantino, 2015). For the final topics in the course, Reasoning and Decision-Making, students read an article about reductions in the sunk-cost bias after meditation (Hafenbrack, Kinias, & Barsade, 2014).

    When I compared responses to mindfulness questions on the first and last days of class, the percentage of students providing a reasonably accurate definition of mindfulness jumped from 10% to 68%, and the percentage listing cognition-related benefits of mindfulness went from 17% to 59%. However, there was no change in the reported practice of mindfulness/meditation, nor in the perceived importance of the scientific study of mindfulness.

    I also incorporated mindfulness into my upper-level course, Seminar in Cognition, Teaching, and Learning. I began this class with an assignment to watch Andy Puddicombe’s TED talk as an orientation to mindfulness (https://www.ted.com/talks/andy_puddicombe_all_it_takes_is_10_mindful_minutes?language=en); to watch the introductory Headspace video; and to complete Day 1 of Headspace’s free “Take 10” program. Students were asked to commit to 10 minutes of guided meditation per day for the next 10 days, then to submit a written reflection. In their reflections, every student expressed openness to the possibility of trying meditation, and for all but 2 of the 18 students, this would be their first experience with it. However, their reflections after 10 days were less encouraging, perhaps due more to time-management constraints than to the practice itself. Although it was a required assignment, many did not find time to complete the program.

    Later in the course, I assigned articles focusing on mindfulness and meditation. Students read an article about the neuroscience of mindfulness and mind-wandering, with implications for education (Immordino-Yang, Christodoulou, & Singh, 2012). They also read and discussed the article on working memory and GRE performance used in Cognitive Psychology (Mrazek et al., 2013). This class day was purposefully scheduled to coincide with Mary Helen Immordino-Yang’s on-campus lecture, which students were encouraged to attend.

    About five weeks into the semester, we launched a collaborative class project to collect an annotated reference list of resources on mindfulness for educators. Students used library and web applications to search for resources, then built a shared document. The final product was a 16-page file containing primary research articles, review/critique articles, books and book chapters, popular press articles, and web sites relevant to the topic of Mindfulness and Education (http://blogs.goucher.edu/themesemester/files/2016/04/Mindfulness-and-Education-Resources-Sp16.pdf).

    Though I did not collect formal data in this course, students generally demonstrated interest and enthusiasm. Even given the density of some of the readings on mindfulness, there was a good amount of energized discussion. Also, I was impressed by their active participation in the collaborative project and felt this was a meaningful and authentic learning experience.


    Health and Clinical Psychology Courses (DFW)

    Mindfulness seemed a natural fit for my mid-level course in health psychology. Indeed, the topic had come up organically in years past, through a project in which students choose a health behavior to change, using empirically informed strategies – many students chose to adopt a meditation practice. Spring 2016 was no exception, as several students took on this challenge, availing themselves of tools and apps (e.g., Headspace, Calm) as part of their strategic behavior change project.

    I incorporated a mindfulness-related learning objective into the course: by the end of the semester, students should be able to “describe mindfulness and its health-related benefits.”  Mindfulness was woven into several sections of the course. At the start of the course, where we usually focus on what health psychology is, students also read a brief overview of mindfulness (Kabat-Zinn, 1994), allowing us to operate from a shared conceptualization of mindfulness and to relate it to mental and physical health.

    The health psychology course includes a community-based learning component in which students work collaboratively with staff from Hopewell Cancer Support (a local organization providing psychosocial services to those affected by cancer – including some related to mindfulness), to address particular challenges faced by the non-profit. Because of this collaboration, we discuss cancer early in the class, as well as the research on psychosocial interventions for cancer. Here students read and discussed an article on Mindfulness-Based Cancer Recovery (Tamagawa et al., 2015). Later in the class, as part of our stress and coping topic, we read and talked more broadly about mindfulness and health, reading a review article on mindfulness-based treatments (and research on their effectiveness) for a variety of health conditions (Carlson, 2015). These readings were brought into the classroom in a variety of ways: sometimes we would discuss the articles as a large group or in small groups. Sometimes I would start class by projecting a short list of thought questions about the reading and would ask students to write for a minute or two about each question, before getting into groups to discuss one of the questions in more depth.

    Throughout the semester, the mindfulness-related events on campus were brought into the class through an “event-reporting” assignment. Specifically, students were asked to sign up to attend one of six events on campus or in the community during the semester (four of which were mindfulness theme semester speakers Mary Helen Immordino-Yang, Omid Safi, Alicia Garza, and Dan Siegel), and to report back to the class about what they had heard. Their reports were informal and included (a) biographical information about the speaker (obtained from the event or through Internet research), (b) the main point or points of the talk, (c) the types of “evidence” used to make those points (case examples, personal experience, research…), and (d) how the event related to the field of health psychology or to specific topics covered in class.

    I conceived of the “event reporting” assignment as a way to encourage attendance at these events without insisting that all students attend them all (unrealistic, given schedule constraints), and as a way for the whole class to get some benefit from each talk. In addition, I wanted students to think actively about the events they attended, including identifying the speaker’s main point(s) and the different types of arguments that can be made (based on different “ways of knowing”). I was so pleased with this assignment that I have used it again since.

    During the theme semester I also taught an upper-level course, Seminar in Clinical Psychology: Emotion Regulation, which has always included readings about, experiential activities with, and discussion of mindfulness. During the mindfulness theme semester, I incorporated mindfulness into one of the existing learning objectives, stating that students would be able to “discuss a variety of emotion regulation strategies (including mindfulness) and evaluate their adaptive and maladaptive aspects.”

    In previous iterations of the course, I had introduced students to the practice of mindfulness by conducting part of Jon Kabat-Zinn’s (2006) eating meditation (mindfully attending to a raisin). This semester, I increased the experiential coverage of mindfulness, inviting the class to engage in “Mindful Mondays,” a collection of activities that allowed us to try a variety of purported mindfulness inductions, and to compare and contrast them. I started a shared document and invited students to construct the list of activities collaboratively. Several students added activities but requested that I (or a guide on a video) lead the class through them (e.g., a brief chair-yoga routine intended for the workplace); others proposed activities that they led themselves (e.g., a walking meditation, based on an experience a student had had at a monastery while studying abroad). The final list ranged from the more traditional raisin meditation and body scan to “mindful creative expression” and coloring. We sometimes left our seats (to do yoga or sit on the floor), and we sometimes left the classroom (to do the walking meditation on the campus’s labyrinth).

    These exercises were voluntary; students could arrive five minutes late to class on any given Monday, if they did not wish to participate in an activity. Generally, though, attendance was excellent, and students seemed enthusiastic about Mindful Mondays (indeed, I proposed such a thing to my seminar the subsequent semester, and they, too, chose to partake). Discussions following the practice focused on topics such as whether or not the effects of the exercises felt subjectively like mindfulness (per the attentional and attitudinal components of the definition), whether or not there might be inadvertent harms associated with these activities, whether some people might benefit from some types of mindfulness more than others, and what characteristics might predict positive experiences with which activities.

    During the theme semester, the class dug more deeply into the scholarly literature on mindfulness, as well. The class has long included a reading on third-wave cognitive behavioral interventions that provides a nice overview of mindfulness as it is incorporated into these treatments (Baer & Huss, 2008). This semester we also read pieces focused on the emotional benefits of mindfulness (Arch & Landy, 2015) and on mindfulness and emotion regulation (Corcoran, Farb, Anderson, & Segal, 2010; Leahy, Tirch, & Napolitano, 2011).

    Near the end of the semester, I asked students to create “concept maps” of mindfulness, in an attempt to integrate the varied aspects of mindfulness that we had read about, discussed, and experienced. Students worked on blank paper, and then volunteered to have their concept maps projected, so that the class could discuss the various components of mindfulness and associated constructs. While each of these concept maps was of course different, they all reflected the complexity of the concept, and I believe that by the end of the semester students showed substantial improvement in their understandings of the construct of mindfulness as used in contemporary clinical psychology.


    Our Research, in Brief

    Separate from the theme semester courses, we have conducted systematic research on mindfulness in the college classroom (importantly, no data were collected during the theme semester). In our study, students in psychology, chemistry, peace studies, and English classes followed a 5-minute guided meditation (an edited mp3 file; Kabat-Zinn, 2005, used with permission) at the start of class. Within-subjects analyses found no benefits for working memory, content retention, mindful awareness during class, or elaboration, at the end of a 4-week period in which students followed the guided meditation, as compared to a 4-week period in which they did not. While we refer interested readers to the full research report (Friedman-Wheeler et al., 2017), we want to share some thoughts about how such an exercise might be beneficial, with adjustments.

    For one, it may be that students who were not interested did not participate actively (although they did sit quietly during the meditation period). It may also be that five minutes is not the appropriate dose of meditation for the classroom. Perhaps one minute of silent meditation would be better suited to the classroom setting (and feel more doable to students). On the other hand, perhaps five minutes three times a week is an insufficient dose, though a larger dose would consume more class time than instructors might wish.

    Perhaps student buy-in and benefit are enhanced when more context is provided, as was done in the theme semester courses described in this essay. There is an obvious risk of demand characteristics, but perhaps those with a greater understanding of mindfulness might derive more benefit from it than those who participate in an exercise without fully understanding why.


    Conclusion: Opportunities and Challenges for
    Mindfulness in Psychology Courses

    From an academic perspective of encouraging undergraduate students to learn about the science of mindfulness, readers should bear in mind that the level and quality of available readings are varied. For example, while there is ample scholarly work on mindfulness in clinical and health psychology, there is less research suitable for undergraduates related to cognition. Overall, there is a need for more research on mindfulness and learning in higher education. As noted above, the results of our research study suggest no measurable impact of brief in-class interventions on variables related to academic performance, though others have found benefits (e.g., Helber, Zook, & Immergut, 2012; Ramsburg & Youmans, 2014).

    From a class-time-management perspective, we experienced challenges balancing mindfulness exercises with other activities and content. We found that exercises between two and ten minutes long can work well, and incorporating mindfulness is made far easier by the availability of short mindful meditation exercises online, including those that can be guided by the instructor and those that are pre-packaged in video and/or audio format.

    From a student-engagement perspective, we found that many students were “on board” with the idea of using a small amount of class time to practice mindfulness. However, some seemed disengaged.

    From a student mental health perspective, though there is research suggesting mindfulness practice may lead to improved mental health, we also noted the potential for negative affect (irritation or boredom, or, in some cases, perhaps feelings of being overwhelmed, as might happen to some survivors of trauma; Briere & Scott, 2012). We handled these possibilities in several ways: (1) permitting students not to attend the mindfulness portion of class and/or to leave the room as needed; and (2) reminding students that no one can be forced to meditate, and that they could choose to ignore the instructions and sit quietly during the exercises.

    In sum, there are many opportunities for bringing the science and practice of mindfulness into the undergraduate classroom, and the potential seems great. There are, however, challenges to be explored and better understood, as we seek creative ways to connect our students with mindfulness so that they might benefit from it intellectually and personally.



    Arch, J. J., & Landy, L. N. (2015). Emotional benefits of mindfulness. In K. W. Brown, J. D. Creswell, & R. M. Ryan (Eds.), Handbook of mindfulness: Theory, research, and practice (pp. 208-224). New York, NY: Guilford Press.

    Baer, R. A., & Huss, D. B. (2008). Mindfulness- and acceptance-based therapy. In J. L. Lebow (Ed.), Twenty-first century psychotherapies: Contemporary approaches to theory and practice (pp. 123-166). Hoboken, NJ: John Wiley & Sons.

    Black, D. S., & Fernando, R. (2014). Mindfulness training and classroom behavior among lower-income and ethnic minority elementary school children. Journal of Child and Family Studies, 23(7), 1242-1246. doi:10.1007/s10826-013-9784-4

    Briere, J., & Scott, C. (2012). Mindfulness in trauma treatment. In Principles of trauma therapy: A guide to symptoms, evaluation, and treatment, 2nd edition (pp. 215-230). Thousand Oaks, CA: Sage.

    Britton, W. B., Lepp, N. E., Niles, H. F., Rocha, T., Fisher, N. E., & Gold, J. S. (2014). A randomized controlled pilot trial of classroom-based mindfulness meditation compared to an active control condition in sixth-grade children. Journal of School Psychology, 52(3), 263-278. doi:10.1016/j.jsp.2014.03.002

    Carlson, L. E. (2015). Mindfulness-based interventions for physical conditions: A selective review. In K. W. Brown, J. D. Creswell, & R. M. Ryan (Eds.), Handbook of mindfulness: Theory, research, and practice (pp. 405-425). New York, NY: Guilford Press.

    Corcoran, K. M., Farb, N., Anderson, A., & Segal, Z. V. (2010). Mindfulness and emotion regulation: Outcomes and possible mediating mechanisms. In A.M. Kring & D.M. Sloan (Eds.), Emotion regulation and psychopathology: A transdiagnostic approach to etiology and treatment (pp. 339-355). New York, NY: Guilford Press.

    Curtiss, J., & Hofmann, S. G. (2017). Meditation. In A. Wenzel (Ed.) The SAGE Encyclopedia of Abnormal and Clinical Psychology. Thousand Oaks, CA: SAGE Publications.

    Friedman-Wheeler, D. G., McCabe, J. A., Chapagain, S., Scherer, A. M., Barrera, M. L., DeVault, K. M., Hoffmann, C., Mazid, L. J., Reese, Z. A., Weinstein, R. N., Mitchell, D., & Finley, M. (2017). A brief mindfulness intervention in the college classroom: Mindful awareness, elaboration, working memory, and retention of course content. Manuscript in preparation.

    Hafenbrack, A. C., Kinias, Z., & Barsade, S. G. (2014). Debiasing the mind through meditation: Mindfulness and the sunk-cost bias. Psychological Science, 25(2), 369-376. doi:10.1177/0956797613503853

    Helber, C., Zook, N., & Immergut, M. (2012). Meditation in higher education: Does it enhance cognition? Innovative Higher Education, 37(5), 349-358. doi:10.1007/s10755-012-9217-0

    Hughes, J. W., Fresco, D. M., Myerscough, R., van Dulmen, M. M., Carlson, L. E., & Josephson, R. (2013). Randomized controlled trial of mindfulness-based stress reduction for prehypertension. Psychosomatic Medicine, 75(8), 721-728. doi:10.1097/PSY.0b013e3182a3e4e5

    Immordino-Yang, M. H., Christodoulou, J. A., & Singh, V. (2012). Rest is not idleness: Implications of the brain’s default mode for human development and education. Perspectives on Psychological Science, 7, 352-364.

    Kabat-Zinn, J. (1994). Wherever you go, there you are: Mindfulness meditation in everyday life. New York, NY: Hyperion.

    Kabat-Zinn, J. (2005). Sitting meditation. On Guided Meditation (Series 1). [mp3 file]. Louisville, CO: Sounds True, Inc.

    Kabat-Zinn, J. (2006). Eating meditation. On Mindfulness for Beginners [CD]. Louisville, CO: Sounds True, Incorporated.

    Kozhevnikov, M., Louchakova, O., Josipovic, Z., & Motes, M. A. (2009). The enhancement of visuospatial processing efficiency through Buddhist Deity Meditation. Psychological Science, 20(5), 645-653. doi:10.1111/j.1467-9280.2009.02345.x

    Leahy, R. L., Tirch, D., & Napolitano, L. A. (2011). Mindfulness. In Emotion regulation in psychotherapy: A practitioner’s guide (pp.91-116). New York, NY: Guilford Press.

    MacLean, K. A., Ferrer, E., Aichele, S. R., Bridwell, D. A., Zanesco, A. P., Jacobs, T. L., … (2010). Intensive meditation training improves perceptual discrimination and sustained attention. Psychological Science, 21(6), 829-839. doi:10.1177/0956797610371339

    Mindful Schools. (2017). Research on mindfulness in education [Web log page]. Retrieved from http://www.mindfulschools.org/about-mindfulness/research/

    Mrazek, M. D., Franklin, M. S., Phillips, D. T., Baird, B., & Schooler, J. W. (2013). Mindfulness training improves working memory capacity and GRE performance while reducing mind wandering. Psychological Science, 24(5), 776-781. doi:10.1177/0956797612459659

    Ramsburg, J. T., & Youmans, R. J. (2014). Meditation in the higher-education classroom: Meditation training improves student knowledge retention during lectures. Mindfulness, 5(4), 431-441. doi:10.1007/s12671-013-0199-5

    Sedlmeier, P., Eberth, J., Schwarz, M., Zimmermann, D., Haarig, F., Jaeger, S., & Kunze, S. (2012). The psychological effects of meditation: A meta-analysis. Psychological Bulletin, 138(6), 1139-1171. doi:10.1037/a0028168

    Tamagawa, R., Speca, M., Stephen, J., Pickering, B., Lawlor-Savage, L., & Carlson, L. E. (2015). Predictors and effects of class attendance and home practice of yoga and meditation among breast cancer survivors in a Mindfulness-Based Cancer Recovery (MBCR) program. Mindfulness, 6(5), 1201-1201. doi:10.1007/s12671-014-0381-4

    Wilson, B. M., Mickes, L., Stolarz-Fantino, S., Evrard, M., & Fantino, E. (2015). Increased false-memory susceptibility after mindfulness meditation. Psychological Science, 26(10), 1567-1573. doi: 10.1177/0956797615593705



    Dara G. Friedman-Wheeler is a licensed clinical psychologist and Associate Professor of Psychology at Goucher College, in Baltimore, MD.  She earned her Ph.D. in Clinical Psychology from American University in Washington, DC.  She teaches courses on psychological distress and disorder (abnormal psychology), health psychology, quantitative research methods, and emotion regulation, as well as serving as core faculty for Goucher’s public health minor.  She has experience working with patients in the public sector with presenting problems such as mood disorders, anxiety disorders, suicidality, chronic pain, chronic illness, substance abuse/dependence, and personality disorders.  She has co-authored empirical journal articles and the book Group Cognitive Therapy for Addictions (with Drs. Wenzel, Liese, and Beck), served as associate editor for the SAGE Encyclopedia of Abnormal and Clinical Psychology, and has received several awards from the National Institutes of Health.  Her interests are in the areas of coping, health, addictions, behavior change, cognitive therapy, and mood disorders.


    Jennifer A. McCabe is an Associate Professor of Psychology, and director of the Center for Psychology, at Goucher College in Baltimore, MD. She earned her Ph.D. in Cognitive Psychology from the University of North Carolina at Chapel Hill. She teaches courses on human cognition, as well as introductory psychology. Her research focuses on memory strategies, metacognition, and the scholarship of teaching and learning. She has been recently published in Memory and Cognition, Psychological Science in the Public Interest, Teaching of Psychology, Instructional Science, and Psi Chi Journal of Psychological Research. Supported by Instructional Resource Awards from the Society for the Teaching of Psychology, she has also published two online resources for psychology educators on the topics of mnemonics and memory-strategy demonstrations. She is a consulting editor for Teaching of Psychology.


  • 14 Nov 2017 1:04 PM | Anonymous

    Fantasy Researcher League: Engaging Students in Psychological Research
    Daniel R. VanHorn, North Central College

    In this essay, I describe a Fantasy Researcher League course design that I presented to a group of colleagues at the National Institute on the Teaching of Psychology (NITOP) in 2013. This innovative course was designed to get students excited about psychological research. I am grateful for the encouragement and feedback that I received from those who attended the institute. I have divided this essay into four sections. First, I describe the motivation behind the development of the course. Second, I describe the course itself. Third, I present survey data collected from students who have taken the course. Finally, I discuss how this course might be used in the future.


    While students may not complete textbook reading assignments regularly (Burchfield & Sappington, 2000; Clump, Bauer, & Bradley, 2004), they do often find value in the primary textbook assigned for a course (Carpenter, Bullock, & Potter, 2006). For example, a textbook is often a useful quick-reference guide. Textbooks are also helpful because they simplify and clarify psychological research. The problem is that, in truth, psychological research is not simple and clear; it is complex and messy. Textbooks also often present information as if it were finalized rather than an ongoing process and dialogue among experts in the field. Finally, many textbooks are not structured in a way that enables critical evaluation of the research they present.

    Reading and discussing primary sources (e.g., articles with original research published in peer-reviewed journals) provides an alternative to textbooks, and I believe students benefit significantly from working with primary sources in psychology. When students work with primary sources, they begin to appreciate the intricate work behind what textbooks present as statements of obvious fact. They start to see that psychological research is constantly evolving and that there is still much to be learned. Working with the psychological literature also helps students develop critical thinking skills (Anisfeld, 1987; Chamberlain & Burrough, 1985). They learn to examine evidence critically and to use that evidence to evaluate theories and claims.

    A significant challenge that many psychology teachers, including me, face is getting students to engage with psychological research. Reading and thinking about psychological research is difficult, so we have to find creative ways to motivate our students to work with primary sources. One approach is to take the things that excite our students outside the classroom and implement them inside the classroom. Keeping this approach in mind, I looked to fantasy sports for help in getting my students engaged with the psychological literature.

    Fantasy sports are extremely popular. The Fantasy Sports Trade Association (2013) estimated that the American market for fantasy sports exceeded 35 million players in 2013. Fantasy sports available to players include baseball, basketball, football, hockey, soccer, golf, and auto racing. In fantasy sports, approximately 8-14 participants get together and form a league in the sport of their choice. For example, a small group of friends might form a fantasy professional American football league. Each participant in the league selects current professional American football players to make up their fantasy team. The players on a participant’s team score points based on how they perform in real-life games (e.g., how many yards they gain and how many touchdowns they score), and the participants’ teams compete against each other.

    The Course

    I feel that fantasy sports provide a model that can be used in classrooms to engage students. I took the fantasy sports model and modified it to engage students in psychological research by creating a course that took the form of a game. The official title of the course was Immersion in the Psychological Literature, but the course became known to students and faculty alike as Fantasy Researcher League. The official learning objectives of the course included the following: effectively search for published research and track research lines/programs, describe the research programs of several prominent psychologists, explain the current theory and findings of a few threads of research in the field, and identify how psychological theory and research evolve over the course of a research program. In addition to these official learning objectives, I wanted to show students that psychological research is dynamic and evolving. What students read in their textbooks is old news. I wanted my students to be on the cutting edge of psychological research and to get a sense of what it feels like to discover something new. I hoped to get my students excited about research in psychology. I also wanted them to discuss psychology outside of a traditional classroom setting, in a place where they would exchange ideas and not worry about whether they were getting a C+ or a B- in the course. Finally, I wanted them to discover their passion by having the freedom to explore their own academic interests.

    The course consisted of a small group of students who met with faculty approximately every three weeks throughout the academic year. At the beginning of the course, the faculty members teaching the course put together a list of several prominent psychology researchers from a variety of research areas. Students were given the opportunity to add other researchers to this list. All the researchers on the list had to be currently active in the discipline. Each student drafted a team of five researchers from the finalized list, and each researcher could be selected only once. These teams made up our fantasy researcher league. Each student then selected one published article by each of their five researchers and tracked the number of times each article was cited during the course of the game. Students had the option to replace their articles at the beginning of each term. Students also kept track of all of their researchers’ scholarly activities and accomplishments (e.g., books, articles, and presentations) during the academic year. Students documented their researchers’ productivity by designing and maintaining a team webpage. A student earned points for their team by correctly documenting their team’s scholarly activities and citations. The league scoring system is described in Table 1.

    Table 1

    Fantasy Researcher League Scoring System

    Scholarly activities that earned points:

    ·        Book, single author

    ·        Book, co-author

    ·        Book, editor

    ·        Book chapter author

    ·        Article, first author

    ·        Article, other than first author

    During class meetings, students discussed the recent research activity of their teams. Students were also asked to connect their researchers’ current work to their past work. At the end of each class, team scores were updated and high-scoring teams were recognized.
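The league bookkeeping described above can be sketched in a few lines of code. This is purely illustrative: the point values, activity labels, and team data below are hypothetical placeholders (the actual point values from Table 1 are not reproduced here), but the structure mirrors the game's logic of tallying documented scholarly activities and citations.

```python
# Hypothetical points per documented scholarly activity.
# These values are placeholders, not the course's actual scoring table.
POINTS = {
    "book_single_author": 10,
    "book_co_author": 7,
    "book_editor": 5,
    "book_chapter_author": 4,
    "article_first_author": 6,
    "article_other_author": 3,
    "citation": 1,  # one point per new citation of a tracked article
}

def team_score(activities):
    """Sum points for a list of (activity_type, count) records
    documented by a student for their team of five researchers."""
    return sum(POINTS[kind] * count for kind, count in activities)

# Example: a team documented two first-authored articles, one book
# chapter, and fifteen new citations of its tracked articles.
roster_activity = [
    ("article_first_author", 2),
    ("book_chapter_author", 1),
    ("citation", 15),
]
print(team_score(roster_activity))  # 2*6 + 1*4 + 15*1 = 31
```

In a classroom version, each student would maintain one such activity log per term, with scores recomputed and announced at each class meeting.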

    Survey Data

    Five students who participated in the course during the fall of 2011 and eight students who participated during the winter of 2012 completed a voluntary survey in which they indicated how much they agreed or disagreed with specific statements related to the learning objectives for the course. Ratings ranged from 1 (strongly disagree) to 7 (strongly agree). Student responses to the closed-ended survey questions are shown in Table 2, and they suggest that we met our learning objectives. The vast majority of students agreed that they developed basic research skills, understood and could discuss cutting edge research, learned about today’s prominent psychological researchers, and learned how research programs evolve over time.

    Table 2

    Student Survey Responses on Course Learning Objectives

    Survey items (each beginning “As a result of participating in this course,”; responses were recoded from the 7-point scale to a 3-point scale):

    ·        I can better search PsycInfo to locate research-related material and people.

    ·        I can more effectively search for psychological research and researchers in electronic sources.

    ·        I am more familiar with the intellectual history and background of some psychology researchers.

    ·        I am more familiar with some of the most current research in psychology.

    ·        I feel more competent at presenting and discussing a researcher’s current research.

    ·        I have a better understanding of how a researcher’s program of research or interests evolves over time.

    ·        I can describe the research program of several prominent psychology researchers.

    ·        I have a better sense of which areas of psychology interest me and which do not.

    ·        I can better create and edit webpages.




    Students were then asked to describe what they learned in the class beyond the topics already covered in the closed-ended survey questions. Responses to this question suggest that students enjoyed the social nature of the game, learned more about psychological research, and began to discover which areas of psychology interest them most. Examples of student responses to this open-ended question are included below.

    ·        “I was able to find researchers that I would be interested in following later.”

    ·         “I learned what areas in psychology interest me, which has helped me make decisions for my future.”

    ·        “How to effectively create a webpage.”

    ·        “What modern research is like.”

    ·        “Better research skills.”

    ·        “How to find articles that cite another article.”

    ·        “Winning!”

    The Future

    Student surveys suggest that the fantasy researcher league model engages students in psychological research and provides an exciting alternative to traditional courses and assignments. The model gets students to read and discuss primary sources. This is crucial because working with primary sources is one way for students to develop critical thinking skills (Anisfeld, 1987; Chamberlain & Burrough, 1985). The model also helps create a learning community in which students play a central role in learning and discovery: it is the students who select the researchers and research topics that are presented and discussed in class. In the fantasy researcher league model, teachers provide the initial structure of the course but then focus on supporting and empowering student learning and discovery. In the future, I envision a fantasy researcher league online gaming experience that can be used in a variety of disciplines and can bring together team managers from a single college or from around the world. In the meantime, I believe that the course described here could be incorporated into many courses as a long-term research project. In my course, students worked individually, but I believe the project would also work well if completed in small groups.



    Anisfeld, M. (1987). A course to develop competence in critical reading of empirical research in psychology. Teaching of Psychology, 14(4), 224-227. doi:10.1207/s15328023top1404_8

    Burchfield, C. M., & Sappington, J. (2000). Compliance with required reading assignments. Teaching of Psychology, 27(1), 58-60.

    Carpenter, P., Bullock, A., & Potter, J. (2006). Textbooks in teaching and learning: The views of students and their teachers. Brookes eJournal of Learning and Teaching, 2(1). Retrieved from http://bejlt.brookes.ac.uk/

    Chamberlain, K., & Burrough, S. (1985). Techniques for teaching critical reading. Teaching of Psychology, 12(4), 213-215. doi:10.1207/s15328023top1204_8

    Clump, M. A., Bauer, H., & Bradley, C. (2004). The extent to which psychology students read textbooks: A multiple class analysis of reading across the psychology curriculum. Journal of Instructional Psychology, 31(3), 227-232.

    Fantasy Sports Trade Association. (2013). Home page. Retrieved from http://www.fsta.org/


    Daniel R. VanHorn earned his B.S. in psychology from Wittenberg University in 2003. He earned his M.S. (2005) and Ph.D. (2009) in cognitive psychology from Purdue University. He is currently an Assistant Professor of Psychology at North Central College in Naperville, Illinois. He regularly teaches introductory psychology, cognitive psychology, statistics, and research methods. He also has an active research program in cognitive psychology where he trains aspiring psychologists.
  • 02 Nov 2017 5:20 PM | Anonymous

    Do These Things Even Work? A Call for Research on Study Guides

    J. Hackathorn, A. W. Joyce,  and M. J. Bordieri
    Murray State University

    If one had to predict the most common question asked by students each semester, it would be: “What will be on the test?” Moreover, this question is frequently and predictably followed by requests for a study guide. As good, well-meaning instructors, many of us sigh (maybe cry a little) but ultimately provide them. In fact, many of us even include them in course materials prior to the actual request, just to avoid the conversation. Given how common these requests are, it is surprising that there is little actual research regarding the effectiveness of study guides. A quick search on Google Scholar, using key terms such as study guides and exam guides, leads to only a handful of results, many of which are dated and focused on creating study guides (as opposed to assessing them). Thus, we suddenly found ourselves asking: How much do we really know about study guides? Do these things even work?

    Arguably, any strategy or aid should help students perform better on exams than no aid at all. However, some of the resources that students prefer may actually hinder their performance rather than help it. For example, in a recent analysis of learning aid use and exam performance, Gurung (2004) found that students rate textbooks’ bolded key terms as the most helpful study aid, but that the perceived helpfulness of this resource negatively relates to exam performance. Conversely, what students rate as least helpful (i.e., active review practices) has the strongest evidence of improving exam performance (e.g., Dickson, Miller, & Devoley, 2005). In another example, a comparison of exam review styles found that, although students do not prefer traditional (i.e., student-directed question and answer format) exam reviews, their exam performance is highest when they use this style, as compared to other styles (Hackathorn, Cornell, Garczynski, Solomon, Blankmeyer, & Tennial, 2012). Ultimately, this suggests a mismatch between what we (perhaps both the learner and the instructor) prefer and what actually improves knowledge, understanding, and exam performance.

    To increase our understanding of study guides, the authors of this essay, as well as other faculty members, recently conducted two separate studies (Cushen et al., under review), using the General Psychology population at Murray State University (MSU). In the first study, we conducted a small experiment using all of the sections of General Psychology offered during a single semester at MSU. Using counterbalancing and random assignment of sections, we compared exam performance following an instructor-provided concept-list study guide to performance following student-generated study guides. Then, at the end of the semester, we queried students’ preferences and gave another brief quiz over material from the first two exams. Our results indicate that, despite benefiting the most from creating their own study guides, students strongly prefer the instructor-provided guides.

    In a second study, after realizing that we had been making assumptions by limiting study guides to concept lists and student-generated guides, we simply asked our students to identify the types of study guides they prefer. Replicating past findings that students tend to prefer the least helpful study tools, we found that students prefer instructor-provided study guides that include a list of concepts, followed by definitions and examples of application. In other words, students prefer that the instructor create what ostensibly could be referred to as “their notes.” They prefer excerpts from the textbooks and simple concept lists the least, but prefer an instructor-provided concept-list style more than nothing at all or creating their own study guide. In examining their preferences, we realized that it is probably not happenstance that the least preferred study guide styles are also the styles that require the most effort from the student to actively summarize, organize, or synthesize course concepts.

    Obviously, the next question is: What do we do with this information?  We do not believe that we should “throw the baby out with the bathwater.” In Fall of 2016, the primary author of this essay attempted to explain to one class why she would no longer provide study guides, and she was almost the victim of a lynch mob. Perhaps that is hyperbole. Still, the students did not appear to believe that the lack of instructor-provided study guides was in their best interest. In hindsight, the instructor may have been too quick to implement this change. There is much more information needed in this regard.

    In our initial experiment, we tested the efficacy of a concept-list style study guide. Basically, we used the style of study guide that answers the ever-present questions: “What is on the test? What should I study?” Correctly using this style means that students then have to find definitions; create mental models, links, and organization; and create their own application examples. However, it is unclear how many actually do that. It is possible that, instead, students simply look at the list, recognize the terms, and think that they have studied enough to be prepared for the exam. Future research is needed to see exactly what students do with those study guides.

    In that same vein, beyond not knowing how to properly use a study guide, it is also possible that students do not know how to create a study guide. Although it is important for students to know how to facilitate their own learning, many students have defective study strategies (Bjork, Dunlosky, & Kornell, 2013). Our participants were students in a freshman-level course, with the vast majority being first-semester freshmen. Creating a study guide, especially an effective one, is hard work and takes a clear understanding of what type of information is important. Freshmen, specifically, may struggle with this skill. For example, in a recent General Psychology homework assignment, students were asked to create a mnemonic device related to neurotransmitters. The instructor was quite surprised when many of the students created an acronym depicting an arbitrary list of neurotransmitter names. Sadly, there were no exam questions that would ask them to provide a random list of neurotransmitters. Suffice it to say, freshmen may not have a strong understanding of what it takes to succeed on rigorous college-level exams.

    Unfortunately, many new college students will find, perhaps too late, that their high school strategy of simply memorizing definitions will not be as successful in the college classroom. Thus, one of the first steps toward student success may involve taking time to teach them how to create good study aids. In our experiment, we do not report the types of study guides that students self-create. We can only imagine (and have discussed at great length) that they are probably terrible. A cursory review across a subsample of our students confirms that the vast majority fail to consistently generate examples of course content and instead provide a simple list of terms and definitions or a chapter outline in their self-created guides. However, regardless of the quality of the study guides, their exam grades are still higher when they create their own. As a result, even if the instructor gives students a foundation with the concept-list style, teaching them how to improve those study guides should prove fruitful. This assumes, of course, that we can convince students to try a new, potentially more intensive and effortful study technique and to keep using it rather than backsliding into old habits as the exam date looms closer (Dembo & Seli, 2004).

    Unfortunately, it is still unclear which types of study guides are the most beneficial. Outside of the extensive work of Karen Wood (Wood, 1989, 1992), who outlines various types of study guides and their individualized purposes, there is a dearth of information regarding which types of study guides are most effective and in which situations. The type of study guide one might use in an introductory course, where students are being given a foundation for future classes, is probably very different from the guide one might use in an applied research methods course in which students are practicing a skill. Thus, much more information is needed with regard to not only the general efficacy, but also the relevance and applicability of study guides across different courses and learners.

    Finally, as tends to be the case in many of our classes, students sometimes appear to dislike assignments that truly challenge them or require effort. It is probably not a coincidence that students prefer the study aids that require the least effort from them. And, before we all get migraines from rolling our eyes, it is important to consider that students may not realize that this relationship exists. As an example, in a recent end-of-semester evaluation comment, a student requested the following: “I do not want to be spoon-fed the information, but it would be nice if we could be provided with a list of concepts, in order from the most important to the least important, to help us study for exams.” Clearly, this student fails to see the connection between spoon-feeding and the study guide that they requested. Moreover, we doubt this student is alone in this desire. As such, asking students to create their own study guides may result in backlash. Importantly, some, if not all, of this backlash can be reduced with transparency, communication, and rapport. However, instructors will need to assess the risk/benefit ratio of implementing a change like this.

    The most surprising aspect of our research is that very few of us question our own use of study guides, even though, frankly, we tire of creating them. Many of us create these study guides because the students ask for them, or to avoid potential mutiny. Yet, even though study guides have been around for so long and are so ubiquitous in higher education, very few of us ask whether they work. It is important to note that this does not make us (or you) bad instructors. Care and effort for students in any form should never be disregarded. In fact, we suspect that there are myriad instructors who have found ways to improve the effectiveness of study guides, but have yet to publish them. Thus, this essay is a mere call to action. Help us, help them; help us, help ourselves.



    Bjork, R. A., Dunlosky, J., & Kornell, N. (2013). Self-regulated learning: Beliefs, techniques, and illusions. Annual Review of Psychology, 64, 417-444.

    Cushen, P., Vázquez Brown, M., Hackathorn, J., Rife, S. C., Joyce, A. W., Smith, E., …Daniels, J. (under review). "What's on the test?": The impact of giving students a concept-list study guide.

    Dembo, M. H., & Seli, H. P. (2004). Students' resistance to change in learning strategies courses. Journal of Developmental Education, 27(3), 2 - 11.

    Dickson, K. L., Miller, M. D., & Devoley, M. S. (2005). Effect of textbook study guides on student performance in introductory psychology. Teaching of Psychology, 32(1), 34-39.

    Gurung, R. A. (2004). Pedagogical aids: Learning enhancers or dangerous detours? Teaching of Psychology, 31(3), 164-166.

    Hackathorn, J., Cornell, K., Garczynski, A., Solomon, E., Blankmeyer, K., & Tennial, R. (2012). Examining exam reviews: A comparison of exam scores and attitudes. Journal of the Scholarship of Teaching and Learning, 12(3), 78-87.

    Wood, K. D. (1989). The study guide: A strategy review. Paper presented at the annual meeting of the College Reading Association, Philadelphia, PA.

    Wood, K. D. (1992). Guiding readers through text: A review of study guides. Newark, DE: International Reading Association.



    Biographical Sketch

    Dr. Jana Hackathorn, Dr. Amanda W. Joyce, and Dr. Michael J. Bordieri are all junior faculty at Murray State University in Murray, KY. Between them, they study everything from close relationships to inhibition in children, from sex to mindfulness, and of course from pedagogy to teaching effectiveness. Last year, the entire junior faculty in Psychology at Murray State (there are a total of eight of them) pooled their efforts to conduct a study examining a topic about which they had all complained: student demands for study guides. As a result of the study, they bonded, resulting in much happier happy hours and a very functional, albeit odd, departmental atmosphere.

  • 04 Oct 2017 7:19 PM | Anonymous

    Team-Based Learning: A Tool for Your Pedagogical Toolbox

     Krisztina V. Jakobsen
    James Madison University

    Teachers whose styles match the pedagogical methods they use create a more authentic and effective teaching and learning experience. There are a variety of strategies in the literature for teachers who would like to move away from a purely lecture format. One of those, Team-Based Learning (TBL), is a method I have been using for several years. TBL encourages students to be actively involved in their learning. Similar to the ideals associated with a flipped classroom (Jakobsen & Knetemann, in press), students learn primary course content outside of the classroom and work in permanent teams with the material during class (Michaelsen, Knight, & Fink, 2004). Below, I outline the core components of TBL and share a few studies that my students and I have done examining its impact on student learning.

     The TBL Process

    Readiness Assurance Process

    The first step in the TBL process involves ensuring that students understand course material; this process—the Readiness Assurance Process—includes preparation outside of class, quizzes in class, and a short lecture. Students prepare for class by reading the textbook, watching videos, and/or answering guided questions. When students come to class, they take a multiple-choice quiz individually, which assesses students’ understanding of the course material at various levels of Bloom’s taxonomy. The individual quiz holds students accountable for completing their out-of-class preparation. Next, students work in their teams to complete the same multiple-choice quiz again. Students receive immediate feedback on their team quiz using scratch-off IF-AT forms. After the team quiz, students have a chance to appeal any questions they missed, which requires them to revisit course materials and provides an opportunity to make a compelling case for alternate answers based on the course materials. Finally, teams submit any questions they still have about the material and the instructor gives a short “muddiest points” lecture. The Readiness Assurance Process takes 50-75 minutes to complete.
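The team-quiz stage lends itself to a simple scoring sketch. With IF-AT forms, a team scratches answer options until the correct one is revealed, and credit declines with each additional scratch. The schedule below (full, half, quarter, then no credit) is one common convention, not necessarily the one used in this course; all values and names here are illustrative.

```python
# Hypothetical IF-AT credit schedule: attempt number -> fraction of credit.
# A common convention; actual schedules vary by instructor and form.
CREDIT_BY_ATTEMPT = {1: 1.0, 2: 0.5, 3: 0.25}

def team_quiz_score(attempts_per_question, points_per_question=4):
    """Score a team quiz given, for each question, the attempt number
    on which the team uncovered the correct answer."""
    return sum(
        CREDIT_BY_ATTEMPT.get(attempt, 0.0) * points_per_question
        for attempt in attempts_per_question
    )

# Example: the team found Q1 on the first scratch, Q2 on the second,
# and did not find Q3 until the fourth scratch (no credit).
print(team_quiz_score([1, 2, 4]))  # 4*1.0 + 4*0.5 + 4*0.0 = 6.0
```

The declining schedule is what makes team discussion consequential: a wrong first scratch is immediately visible and immediately costly, so teams have an incentive to argue out their answer before committing.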

    Application Exercises

    After the completion of the Readiness Assurance Process, students should have the necessary knowledge to complete application exercises, which usually take 2-4 class periods. Depending on the complexity of the questions, students may complete 2-5 application exercise questions during a class period. The application exercises have a deliberate structure that allows teams to focus on the relevant course material and facilitates team and class discussions. The keys to developing successful application exercises involve having all teams work on the same questions, requiring teams to make a simple choice, and having teams report their answer choices simultaneously. To demonstrate the importance of the structure of the application exercises, think about the type and quality of discussions students may have with open-ended questions (Question 1 below) compared to more directed questions (Question 2 below).

    Question 1: This class is structured using Team-Based Learning (TBL), in which you learn the primary course content outside of class and then work in permanent teams during class to get a deeper understanding of the material. Identify at least one way in which each of the theories below helps you understand why the TBL structure is an effective teaching method.

    A.    Operant conditioning
    B.     Piaget’s theory
    C.     Vygotsky’s theory
    D.    Information processing theories

    Question 2: This class is structured using Team-Based Learning (TBL), in which you learn the primary course content outside of class and then work in permanent teams during class to get a deeper understanding of the material. Decide which of the following theories is most prominent in the TBL structure.  Be prepared to support your answer.

    A.    Operant conditioning
    B.     Piaget’s theory
    C.     Vygotsky’s theory
    D.    Information processing theories

    While Question 1 asks students to apply what they know about the theories to the structure of TBL, it may not generate much discussion. Question 2 meets the requirements of each of the deliberate components of the application exercises. All teams are presented with the same problem. Teams have to make a choice among options A-D. For this particular question, all of the answer choices are correct, so what will generate discussion among teams is the rationale behind their decisions. Finally, because the answer choices are very clear, it is easy for teams to simultaneously report their decisions by holding up cards, for example.

    Does it Work?

    Students generally have positive experiences with TBL. They also seem to enjoy the structure (e.g., Abdelkhalek, Hussein, Gibbs, & Hamdy, 2010) and report perceiving TBL as an effective teaching method (e.g., Haberyan, 2007). The results are mixed in terms of the impact of TBL on academic outcomes compared to more traditional teaching methods (e.g., Carmichael, 2009; Jakobsen, McIlreavy, & Marrs, 2014), and little work has been done regarding how TBL affects retention (e.g., Emke, Butler, & Larsen, 2016). Over the years, I have worked with student research assistants to collect data in lab-based and classroom-based studies to examine the effectiveness of TBL in promoting recognition memory and retention compared to other pedagogical methods. Here, I present the results of two of those studies.

    In a lab-based study, time-slots were randomly assigned to each of our conditions, as follows:

    • Team-Based Learning: Participants read an article upon arrival to the session, then completed the Readiness Assurance Process and application exercises.
    • Lecture: Participants received a lecture based on the content of the article and took notes during the lecture.
    • Reading: Participants read the article and took notes as they read.
    • Control: Participants completed an anagram.

    One week later, all participants took a 10-item multiple-choice quiz to measure their retention of material from the week before. The results revealed that participants in the TBL and Lecture sessions did not differ in their scores (p = .141), but participants in the TBL session outperformed participants in the Reading (p = .018) and Control sessions (p < .001). The results of this study suggest that TBL and lecture are both effective ways of teaching, particularly in short-term sessions (e.g., workshops).

    In a class-based study, two classes were randomly assigned to be taught using TBL or Lecture. During the semester, students in the TBL class completed the Readiness Assurance Process and application exercises, while students in the Lecture class received lectures with active learning components. Students’ understanding of course material was assessed at three time points: (1) pre-test at the beginning of the semester, (2) final at the end of the semester, and (3) post-test three months after the completion of the course. Students completed 28 multiple-choice questions at each of the three time points. We based our analyses on students who contributed data at all three time points (N = 34). Students in the TBL and Lecture class did not differ on their pre-test scores (p = .052) or their post-test scores (p = .052). Students in the TBL class performed better than students in the Lecture class on the final (p = .021), suggesting that TBL may enhance short-term retention of course material. The results of this class study are consistent with those of Emke et al. (2016), in which TBL led to better short-term, but not long-term, retention of course material.

    Implementation and Conclusions

    Implementing TBL as outlined above requires some upfront investment in organizing and creating preparatory materials, quizzes, and application exercises. The good news is that components of TBL can be implemented in nearly any class with relative ease. For example, it is easy to add a team quiz to already-existing individual quizzes, and once students have the content knowledge (e.g., through lectures), application exercises can be added a little at a time.

    While there is likely no one pedagogical technique that will work for every instructor, data from the TBL literature and my research suggest that TBL is at least as good as other strategies. These results should encourage teachers to work in areas in which they are most comfortable and to cultivate skills they feel are important, whether those skills are central to the course objectives or merely desirable.

    Author note

    Portions of this essay were presented at STP’s Annual Conference on Teaching, Decatur, Georgia, October, 2016. This project was supported by the Society for the Teaching of Psychology’s Scholarship of Teaching and Learning Grant and the Alvin V., Jr. and Nancy C. Baird Professorship to KVJ.


    The following websites offer wonderful resources for learning more about and getting started with TBL: Learntbl.ca and www.teambasedlearning.org/


    Abdelkhalek, N., Hussein, A., Gibbs, T., & Hamdy, H. (2010). Using team-based learning to prepare medical students for future problem-based learning. Medical Teacher, 32, 123–129. doi: 10.3109/01421590903548539

    Carmichael, J. (2009). Team-based learning enhances performance in introductory biology. Journal of College Science Teaching, 38, 54–61.

    Emke, A. R., Butler, A. C., & Larsen, D. P. (2016). Effects of Team-Based Learning on short-term and long-term retention of factual knowledge. Medical Teacher, 38, 306-311. doi: 10.3109/0142159X.2015.1034663

    Haberyan, A. (2007). Team-based learning in an industrial/organizational psychology course. North American Journal of Psychology, 9, 143–152.

    Jakobsen & Knetemann. (in press). Putting structure to flipped classrooms using Team-Based Learning. International Journal of Teaching and Learning in Higher Education.

    Jakobsen, K. V., McIlreavy, M., & Marrs, S. (2014). Team-based learning: The importance of attendance. Psychology Learning & Teaching, 13(1), 25-31. doi: 10.2304/plat.2014.13.1.25

    Michaelsen, L. K., Knight, A. B., & Fink, L. (2004). Team-based learning: A transformative use of small groups in college teaching. Sterling, VA: Stylus Publishing.


    Krisztina V. Jakobsen is an Associate Professor in the Department of Psychology at James Madison University. She teaches developmental psychology classes in the General Education Program and in the Department of Psychology. Her research interests include effective teaching methods and social cognition in infants.



  • 07 Sep 2017 9:15 AM | Anonymous
    Technology Bans and Student Experience in the College Classroom

     Thomas Hutcheon, Ph.D.

    Bard College


    Personal technologies, including laptops and cell phones, have infiltrated the college classroom.  Instructors must now decide whether to implement a ban on the unsupervised use of personal technologies in their courses.  Anecdotal evidence (“students always seem to be looking at their computer screens and not me during class”) and results from recent studies linking the unsupervised use of technology with reductions in academic performance have led to declarations that the time to ban technology use in the classroom is now (Rosenblum, 2017).  However, it is important for individual instructors to critically evaluate and understand the empirical evidence in favor of technology bans when deciding on the approach to take in their classroom.  Moreover, the impact bans have on students’ experience within the course remains unknown.  The purpose of this essay is to review the evidence in favor of a technology ban, to describe recent results suggesting that a ban can be harmful to students’ engagement, and to provide recommendations to aid instructors in developing a technology policy for their classrooms.

    Broadly speaking, two primary mechanisms have been proposed to explain the relationship between unsupervised technology use in the classroom and reduced academic performance: misdirection of cognitive resources and superficial encoding of information. First, the presence of personal technology in the classroom gives students a direct line to distracting information via social media, games, and the internet.  Diverting cognitive resources toward online shopping or texting with friends necessarily draws resources away from what is happening in the classroom.  This misdirection of resources means that students do not process the material presented during lecture, which can harm performance (Fried, 2008; Wood et al., 2012).  Importantly, the use of technology may misdirect resources not only for the student using the technology, but also for students sitting nearby, and even the instructor (Aguilar-Roca, Williams, & O’Dowd, 2012).  Second, even when students are prevented from accessing the internet or other distractions, the use of laptops leads to relatively superficial encoding of lecture information.  Students randomly assigned to take lecture notes using a laptop perform worse on follow-up memory tests of lecture material compared to students randomly assigned to take notes using paper and pencil (Hembrooke & Gay, 2003; Mueller & Oppenheimer, 2014).  This finding has been explained by differences in note-taking strategies.  Specifically, students using a laptop tend to adopt a verbatim strategy in which they type everything that is said during the lecture.  In contrast, students using paper and pencil reframe the lecture information in their own words.  This reframing requires deeper encoding of the information and leads to better retention of the material (Mueller & Oppenheimer, 2014).  Thus, even when students successfully resist temptation and devote their resources to the task of taking notes, laptop use still harms retention of the material presented during a lecture.

    However, there are three things to keep in mind when using the findings reviewed above as the basis for your personal classroom policy.

    Broadly speaking, studies cited as evidence for technology bans use either an experimental or a correlational approach.  In the typical experimental approach, participants are randomly assigned to use a laptop or paper and pencil to take notes while listening to a lecture.  Learning is frequently assessed by a quiz on the material administered at the end of the lecture (Wood et al., 2012).  Although students using laptops tend to perform worse than those who do not, this procedure differs from how students learn information over the course of a semester, when they can use study strategies, such as rereading the textbook or asking a fellow student, to make up for distracted moments spent on online resources.  The correlational approach collects various measures of student performance, such as GPA and exam grades, and correlates them with students’ reported cell phone and laptop usage.  The negative correlation between GPA and frequency of technology use is commonly interpreted as technology usage causing a decrease in performance.  However, given the nature of correlational research, it could just as easily mean that weaker students tend to bring their laptops into the classroom (Fried, 2008).  In other words, because a causal relationship cannot be established between laptop use and class performance, removing access to laptops might not lead to changes in performance. 

    The real-world impact of technology usage on student performance also needs to be considered.  What does a statistically significant reduction in performance for students using laptops mean for an individual student sitting in one of our classes?  One illustrative example comes from a rigorous, large-scale study conducted at the United States Military Academy at West Point.  For an entire semester, first-year students enrolled in Principles of Economics were randomly assigned to take notes on a laptop, on a tablet, or using paper and pencil.  The results from this sample of over 700 students yielded a statistically significant effect on performance: students in the laptop and tablet conditions performed worse on the final exam than students in the paper-and-pencil condition.  Although statistically significant, the effect amounted to a decrease of 1.7% on the final exam for students in the laptop or tablet conditions (Carter, Greenberg, & Walker, 2016).  Thus, despite the presumed chronic misdirection of resources and superficial encoding that students experience when using technology, the real-world performance cost is small.  While any improvement in performance is welcome, there are many simple techniques instructors can implement over the course of the semester that improve exam performance to a greater extent, including retrieval practice at the end of a lecture (e.g., Lyle & Crawford, 2011).

    To date, little research has assessed the impact of a technology ban on students’ experience within a class.  However, recent research conducted in my lab, presented at the Society for the Teaching of Psychology Annual Conference on Teaching (Hutcheon, Richard, & Lian, 2016), indicates that implementing a technology ban reduces student engagement.  Specifically, using data from sixty-nine undergraduate students across four sections of Introduction to Psychology taught by the same instructor, students randomly assigned to technology-ban sections reported lower levels of engagement in the course than students randomly assigned to technology-permitted sections, as assessed by the Student Course Engagement Questionnaire (SCEQ; Handelsman, Briggs, Sullivan, & Towler, 2005).  Interestingly, the students in our sample reported relatively infrequent cell phone use during a typical class (mean = 2.38), and the vast majority reported a preference for taking notes using paper and pencil (N = 61) rather than laptops (N = 8).  In fact, among the 61 students who preferred taking notes with paper and pencil, we still observed a significant reduction in engagement as a function of the laptop ban.  In other words, the technology ban reduced the engagement of students who would not even have used technology in the classroom.  These findings suggest that students are sensitive to the structure and rules of the classroom environment, and that rules viewed as limiting their choices may affect how much students engage with the material and the instructor. 

    In contrast to the results of Carter et al. (2016), we observed a marginally significant reduction in end-of-year grades for students in the technology-ban condition compared to the technology-permitted condition.  This suggests that the impact of a technology ban on students’ performance may not be the same across classroom environments.  Specifically, students enrolled in a traditional, small liberal arts environment (Bard College, as compared to West Point) may be more affected by the implementation of such bans.


    Consider the make-up of your class.  If you are teaching a small class in which students might not spontaneously use technology, the implementation of a technology ban could negatively impact student experience and performance in the class.  In contrast, if you are teaching a large lecture class in which students might feel less engaged to begin with, the ban might help their experience and performance.

    Minimize the distraction of others.  If you decide not to implement a ban, you should think about ways that you can prevent those students who chose to use laptops from distracting others who choose not to use a laptop.  Methods to alleviate this concern include having specific sections of the classroom dedicated to laptop and technology users (Aguilar-Roca et al., 2012). 

    Provide a rationale for your decision.  If you decide to implement a technology ban, providing students with a clear explanation of why the ban is in place, supported by relevant research, is one potential way to reduce the ban’s impact on student engagement.

    In conclusion, there is little doubt that under certain circumstances, unsupervised technology usage can negatively impact academic performance.  However, full consideration of the type of course and the composition of students within it is advised before implementing a blanket technology ban.



    Aguilar-Roca, N. M., Williams, A. E., & O’Dowd, D. K. (2012). The impact of laptop-free zones on student performance and attitudes in large lectures. Computers & Education, 59, 1300-1308.

    Carter, S. P., Greenberg, K., & Walker, M. (2016). The impact of computer usage on academic performance: Evidence from a randomized trial at the United States Military Academy (SEII Discussion Paper #2016.02).

    Fried, C. B. (2008). In class laptop use and its effects on student learning. Computers & Education, 50, 906-914.

    Handelsman, M. M., Briggs, W. L., Sullivan, N., & Towler, A. (2005). A measure of college student course engagement. The Journal of Educational Research, 98, 184-191.

    Hembrooke, H., & Gay, G. (2003). The laptop and the lecture: The effects of multitasking in learning environments. Journal of Computing in Higher Education, 15, 46-64.

    Hutcheon, T. G., Richard, A., & Lian, A. (2016, October). The impact of a technology ban on student’s perceptions and performance in introduction to psychology. Poster presented at the Society for the Teaching of Psychology 15th Annual Conference on Teaching, Decatur, GA.

    Lyle, K. B., & Crawford, N. A. (2011). Retrieving essential material at the end of lectures improves performance on statistics exams. Teaching of Psychology, 38, 94-97.

    Mueller, P. A., & Oppenheimer, D. M. (2014). The pen is mightier than the keyboard:  Advantages of longhand over laptop note taking. Psychological Science, 25, 1159-1168.

    Rosenblum, D. (2017, January 2). Leave your laptops at the door to my classroom. The New York Times. Retrieved from http://www.nytimes.com/2017/01/02/opinion/leave-your-laptops-at-the-door-to-my-classroom.html?_r=0

    Wood, E., Zivcakova L., Gentile, P., Archer, K., De Pasquale, D., & Nosko, A. (2012). Examining the impact of off-task multi-tasking with technology on real-time classroom learning. Computers & Education, 58, 365-374.


    Tom Hutcheon is a Visiting Assistant Professor in the psychology program at Bard College. Tom earned his B.A. in psychology from Bates College and his M.S. and Ph.D. in Cognition and Brain Science from Georgia Tech.  Tom received the Early Career Psychologist Poster award at the 2016 Society for the Teaching of Psychology (STP) Annual Conference on Teaching as well as a 2017 Early Career Psychologist Travel Grant sponsored by STP.  Tom’s research interests include cognitive control, cognitive aging, and effective teaching. Tom can be reached at thutcheo@bard.edu.


  • 15 Aug 2017 11:31 AM | Anonymous

    That’s What She Said: Educating Students about Plagiarism


    Elizabeth A. Sheehan

    Georgia State University


    Dealing with plagiarism is one of the more unpleasant aspects of our job as instructors. There is the sinking feeling you get when you suspect plagiarism, the moment that your Google search returns the exact passage from your student’s paper, the uncomfortable conversation with the student, the documentation to your department, and the potential hearing with the honor board. I would venture to say most of us have either dealt with these ourselves or at least supported another colleague through the process. These cases range from the cringe-worthy (e.g., copying directly from an instructor’s own published article, turning in a paper written by another student in a past semester) to more minor infringements (e.g., unintentionally omitting quotation marks around a direct quote).

    At the teaching conferences I have attended over the last few years, there has been more emphasis on learning outcome assessment and on the APA’s learning outcomes for undergraduates with a psychology major (APA, 2007). One of those outcomes is for students to “demonstrate effective writing skills in various formats” (p. 18). There also never seems to be a lack of presentations on how to incorporate writing assignments into your courses. Increasing writing assignments in your courses might mean increasing the chance you will encounter plagiarism; however, we might be able to prevent some of these cases with a greater focus on educating our students about plagiarism. Moreover, educating our students about plagiarism helps us address other APA learning outcomes concerning ethical behavior.



    To decrease plagiarism, a good place to start would be to try to understand WHY students plagiarize. At the last meeting of the National Institute on the Teaching of Psychology, I led a Participant Idea Exchange (PIE) on educating students about plagiarism (Sheehan, 2013). These PIE sessions are roundtable discussions on a topic. My group generated the following list of potential reasons students plagiarize:

    • difficulty comprehending a reading;
    • rushing through an assignment;
    • convenience;
    • cultural misunderstanding;
    • poor understanding of the definition of plagiarism;
    • not knowing how to integrate, synthesize, or paraphrase;
    • the prevalence of plagiarism in society around us; and
    • lack of confidence in their ability to write.

    You may be familiar with some of these, especially time constraints, difficulty with reading comprehension, and the inability to paraphrase. The idea of culture stood out to me from the PIE discussion. First, some cases of plagiarism could be due to cultural misunderstanding. Stowers and Hummel (2011) provide some examples of how students from an Eastern culture may view the use of another’s work. For instance, they assert some Asian students may see it as a sign of disrespect to paraphrase or change someone else’s words.

    A second example of culture is how plagiarism takes place all around us in society. We regularly use the copy and paste functions on our computers in many different settings. People re-post others’ writing on their Facebook pages, re-blog someone else’s blog entry, forward YouTube videos to friends, etc. Usually these acts can be accomplished in one or two clicks. While these aren’t examples of academic writing, they do set precedents that we have to overcome in our courses.



    We had a discussion about plagiarism in my department, and our faculty reported a number of problems in pursuing cases of plagiarism, including some cases not being reported at all, faculty handling cases on their own, cases meeting our discipline’s definition of plagiarism being overturned by the college, not knowing the university reporting procedures, etc. It was clear we needed consistency and clarity. We also decided we wanted to focus less on policing, and to favor educating our students to prevent future plagiarism. You could probably guess that this led to a subcommittee (and the idea for my PIE). Our subcommittee created a standard definition of plagiarism that went into all syllabi, a writing workshop on plagiarism, a quiz, a contract for students, a flow chart of how to report plagiarism, and class activities to teach the identification of proper paraphrasing and citations. These materials (Lamoreaux, Darnell, Sheehan, & Tusher, 2012) are publicly available on the Society for the Teaching of Psychology website (http://teachpsych.org/Resources/Documents/otrp/resources/plagiarism/Educating%20Students%20about%20Plagiarism.pdf).

    At my PIE, I asked other faculty how they educated their students about plagiarism. Below are the techniques they listed:

    • a quiz on plagiarism;
    • a quiz on student handbook;
    • list policies in the syllabus on paraphrasing and/or a link to school policy;
    • discussion on the first day of class;
    • starting early in introductory classes or freshman year before students are allowed to register for classes; and
    • using technology (e.g., Turnitin or SafeAssign).

    One quiz recommended by multiple instructors is available through Indiana University, and can be found at https://www.indiana.edu/~istd/. At this site, students can complete a tutorial on plagiarism, see examples, take a quiz, and get a certificate of completion. My department uses this site as a part of our plagiarism training for students.

    A lot of us put policies on plagiarism in the syllabus and reference them on the first day of class; however, this alone is not enough. First, we can’t always rely on students to read the policy or to follow a link to the university policy. Second, we can’t assume they will understand it. Gullifer and Tyson (2010) present data demonstrating that students have a great deal of confusion over what constitutes plagiarism despite online access to a policy. Students in their study also reported wanting education on plagiarism. These findings are corroborated by data from Holt (2012).

    Holt provided basic information about plagiarism to a control group of students and training in paraphrasing to an intervention group. The control group received a definition of plagiarism in the syllabus, a link to the university policy, one example of proper paraphrasing, and a 10-minute in-class demonstration of improper paraphrasing. The intervention group received training in paraphrasing and proper citation, along with in-class assignments. As you might expect, the group with additional training identified plagiarism more accurately than the group without it. This study also identified reasons for unintentional plagiarism. For example, students thought that quotation marks were not needed, or that material didn’t have to be paraphrased, if a citation was provided.

    Something as simple as a weekly paraphrasing activity can help. For six weeks of the semester, Barry (2006) gave students a paragraph from a famous developmental theorist. Students had to paraphrase the passage and provide a proper citation. After completing the activity, students’ definitions of plagiarism were more complex than those offered at the onset of the study. Not only did they define plagiarism as “taking someone else’s idea”, they added “not giving credit” to their definitions. This isn’t necessarily evidence that the activity would reduce the number of plagiarism cases, but it is evidence of students gaining a better understanding of plagiarism.

    You could also incorporate plagiarism as a theme in your course. Estow, Lawrence, and Adams (2011) designed a research methods class in which the assignments and projects related to the topic of plagiarism. For example, in one set of assignments their students designed a survey about plagiarism, collected data, and wrote a research report on their findings. The researchers compared the progress of this class to one with the same assignments but a different theme. The students in the plagiarism-themed course were better able to identify plagiarism and generated more strategies for avoiding it.

    Plagiarism is scary for both professionals and students. The consequences can be steep: it has resulted in failed assignments, expulsion from school, revoked degrees, and even ended careers. Students often tell me how terrified they are of unintentional plagiarism; Gullifer and Tyson’s participants also expressed fear of unintentional plagiarism and of its consequences. Implementing some of these fairly simple ideas in our courses will enhance our students’ understanding of plagiarism. A better-informed student should be less fearful, more confident in their ability to write, and less likely to plagiarize.




    American Psychological Association. (2007). APA guidelines for the undergraduate psychology major. Retrieved from http://www.apa.org/ed/precollege/about/psymajor-guidelines.pdf

    Barry, E. (2006). Can paraphrasing practice help students define plagiarism? College Student Journal, 40(2), 377-384.

    Estow, S., Lawrence, E. K., & Adams, K.A. (2011). Practice makes perfect: Improving students’ skills in understanding and avoiding plagiarism with a themed methods course. Teaching of Psychology, 38(4), 255-258.

    Gullifer, J., & Tyson, G.A. (2010). Exploring university students’ perceptions of plagiarism: A focus group study. Studies in Higher Education, 35(4), 463-481.

    Holt, E. (2012). Education improves plagiarism detection by biology undergraduates. BioScience, 62(6), 585-592.

    Lamoreaux, M., Darnell, K., Sheehan, E., & Tusher, C. (2012). Educating students about plagiarism. Retrieved from Office of Teaching Resources in Psychology for the Society for the Teaching of Psychology website: http://teachpsych.org/Resources/Documents/otrp/resources/plagiarism/Educating%20Students%20about%20Plagiarism.pdf

    Sheehan, E. A. (2013, January). Kick plagiarism to the curb: How to educate students before they head down that road. Participant Idea Exchange conducted at the National Institute on the Teaching of Psychology, St. Pete Beach, FL.

    Stowers, R. H., & Hummel, J. Y. (2011). The use of technology to combat plagiarism in business communication classes. Business Communication Quarterly, 74(2), 164-169.



    Elizabeth Sheehan is a Lecturer at Georgia State University. She earned her PhD in Psychology from Emory University in Cognition and Development. She currently teaches Intro Psychology, an integrated version of Research Methods and Statistics, and Forensic Psychology. She has presented her work on designing study abroad programs, teaching with technology, and incorporating writing assignments into courses at teaching conferences, such as the Southeastern Conference on Teaching of Psychology and the Developmental Science Teaching Institute for the Society for Research in Child Development.


  • 01 Aug 2017 8:36 AM | Anonymous

    Supporting Students Using Balanced In-Class Small Groups


    Hung-Tao Michael Chen

    Eastern Kentucky University


    The use of in-class small groups has been shown to improve students’ learning experience (Johnson & Johnson, 2002). Although many studies have demonstrated this effect, few have looked at how the specific composition of group members could support students who are at risk of dropping out of college. This essay describes a pilot study that used the College Persistence Questionnaire to group students (Davidson, Beck, & Milligan, 2009). Preliminary results are inconclusive, though they suggest that high-performing students may be benefiting more from the small groups than low-performing students. 


    Creating Small Groups in the Classroom

    Student persistence has been one of the greatest challenges in higher education (Seidman, 2005; Tinto, 2006; Tinto, 2010). While many researchers have identified students who are at risk of dropping out and proposed intervention strategies, few have examined the effectiveness of balanced in-class small groups in promoting peer networking and support. Conventionally, most instructors who use small groups in the classroom form the groups by random selection or allow the students to form their own groups. The author of this essay proposes, instead, to form small groups by first identifying students who are at high risk of dropping out of college and grouping them with students who are not at risk. These “balanced” small groups should provide students with greater peer support in the classroom.

    We have all encountered students who are underperforming in the classroom and at risk of dropping out. Personal, cultural, economic, and social factors all affect a student’s ability to persist in college (Tinto, 2006). Strategies such as learning communities and cohort systems have been implemented by many universities to improve student retention (Tinto, 2010). The problem with many of these retention strategies is that they generally require institutional support and substantial financial backing to ensure success and longevity. Is there a strategy that an instructor could easily implement in the classroom, one that requires neither major course redesign nor financial support?

    One strategy that requires only a small investment from the instructor is the use of balanced small groups in the classroom. The use of small groups in the classroom is not a new idea, and it has proven to be an effective way of promoting learning (Johnson & Johnson, 2002, 2015). Past research has also shown that peer support increases a student’s college persistence (Eckles & Stradley, 2012; Skahill, 2002). However, little research has addressed the use of small groups to support students who are at risk of dropping out of college. When students are randomly grouped or form groups of their own, there will inevitably be a few groups comprised entirely of students at high risk of dropping out. The idea behind balanced small groups is simple: students at high and low risk of dropping out should be evenly distributed across all groups. If the cognitive and social mechanisms behind the effectiveness of small groups hold true, then students at lower risk of dropping out should be able to support and anchor students at higher risk. This idea is based on social interdependence theory, which holds that people placed in cooperative groups with a positive environment will help each other achieve a common goal (Johnson & Johnson, 2015).


    Implementing and Evaluating the Idea

    The first step in creating balanced small groups is to identify and classify students as at high, moderate, or low risk of dropping out. The author of this essay used a modified version of the College Persistence Questionnaire (CPQ) to gauge students’ likelihood of persisting in college at the beginning of the semester (Davidson, Beck, & Milligan, 2009). The original CPQ by Davidson and colleagues was modified to fit the specific characteristics of the author’s home institution. The modified questionnaire was built in Qualtrics and distributed to the students at the beginning of the semester. It should be noted that the author adopted a “flipped classroom” teaching model, in which at least half of each class period involved small-group problem solving (Lage, Platt, & Treglia, 2000). The students had to work together to solve short-answer questions and multiple-choice quizzes, and each group had to turn in one copy of the short-answer worksheet and one copy of the multiple-choice quiz at the end of every class period. The in-person class met twice a week for 75 minutes. The first 30 minutes of each class took the form of a lecture with interactive clicker questions. The remaining 45 minutes were used to solve an in-class worksheet and a multiple-choice quiz question in groups of four. Students were allowed to use their notes while solving the worksheet, but not while completing the multiple-choice quiz during the final 15 minutes of class. Four undergraduate teaching assistants who were not enrolled in the class assisted with the small-group problem-solving portion.

    After students’ responses to the CPQ had been collected, the author calculated a cumulative score for each student based on the student’s responses. The students were then divided into four categories: those in the bottom 25th percentile, those in the 26th-50th percentile, those in the 51st-75th percentile, and those above the 75th percentile. Students in the top quartile were at very low risk of dropping out; those in the bottom 25th percentile were at high risk. The class had a total of 80 students; half were put into balanced small groups using their CPQ scores, and half were placed into small groups randomly, regardless of their CPQ score. Each group had four students. Each balanced group contained one student from each of the four CPQ categories; the random groups were created based on student ID numbers. The students stayed in the same group throughout the semester and were encouraged to collaborate with each other. The author used a variety of bonus points and team-building tasks throughout the semester to help the students foster a positive and cooperative learning environment (Johnson & Johnson, 2015).
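    For instructors who want to adapt this procedure, the grouping logic described above (rank students by cumulative CPQ score, split the ranking into quartiles, then draw one student from each quartile into every group of four) can be sketched in a few lines of Python. This is an illustrative sketch only, not the author’s actual materials: the function name, the toy scores, and the random pairing within quartiles are assumptions.

    ```python
    import random

    def balanced_groups(cpq_scores, group_size=4, seed=0):
        """Form groups of `group_size`, one student per CPQ quartile.

        cpq_scores: dict mapping student id -> cumulative CPQ score.
        Assumes the number of students is divisible by group_size.
        Returns a list of groups (each a list of student ids).
        """
        rng = random.Random(seed)
        # Rank students by CPQ score, then split the ranking into
        # group_size equal slices (quartiles when group_size is 4).
        ranked = sorted(cpq_scores, key=lambda s: cpq_scores[s])
        n = len(ranked)
        quartiles = [ranked[i * n // group_size:(i + 1) * n // group_size]
                     for i in range(group_size)]
        # Shuffle within each quartile so pairings across groups are random.
        for q in quartiles:
            rng.shuffle(q)
        # The k-th group takes the k-th student from every quartile.
        return [list(members) for members in zip(*quartiles)]

    # Illustrative run with 8 students (scores are made up).
    scores = {f"s{i}": score for i, score in
              enumerate([10, 55, 23, 78, 41, 90, 33, 66])}
    groups = balanced_groups(scores)
    ```

    A matched-groups comparison condition, as suggested for future studies, could reuse the same quartile split but zip students within, rather than across, quartiles.
    
    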

    This method of balanced small groups was first piloted during the Spring 2015 semester at a large state university. The results were inconclusive: the comparison between the random groups and the balanced groups did not yield any significant difference. The general trend of the means, however, suggested that students who were already at low risk of dropping out were benefiting more from the balanced small groups than students at high risk. Future studies should compare balanced groups of students with varying risk levels against matched groups in which students of similar risk levels are grouped together, and should gather qualitative and survey data in addition to student performance data. One likely explanation for the smaller benefit to high-risk students is a social loafing effect, in which the high-performing students did most of the work. The worksheets and quizzes were graded per group, but they should have been issued and graded on an individual basis. Future studies should design the assessments so that every student is held equally responsible, which should minimize any effect of social loafing.


    Author’s note: This essay is based on a study presented at a poster session at the Society for the Teaching of Psychology’s 15th Annual Conference on Teaching, Decatur, GA, October 2016. 




    Davidson, W. B., Beck, H. P., & Milligan, M. (2009). The College Persistence Questionnaire: Development and validation of an instrument that predicts student attrition. Journal of College Student Development, 50(4), 373-390.

    Eckles, J. E., & Stradley, E. G. (2012). A social network analysis of student retention using archival data. Social Psychology of Education, 15(2), 165-180.

    Johnson, D. W., & Johnson, R. T. (2002). Learning together and alone: Overview and meta-analysis. Asia Pacific Journal of Education, 22(1), 95-105.

    Johnson, D. W., & Johnson, R. T.  (2015). Theoretical approaches to cooperative learning.  In R. Gillies (Ed.), Collaborative learning:  Developments in research and practice (pp. 17-46).  New York:  Nova. 

    Lage, M. J., Platt, G. J., & Treglia, M. (2000). Inverting the classroom: A gateway to creating an inclusive learning environment. The Journal of Economic Education, 31(1), 30-43.

    Seidman, A. (2005). College student retention: Formula for student success (ACE/Praeger series on higher education; American Council on Education/Praeger series on higher education). Westport, CT: Praeger Publishers. 

    Skahill, M. P. (2002). The role of social support network in college persistence among freshman students. The Journal of College Student Retention: Research, Theory, and Practice, 4(1), 39–52.

    Tinto, V. (2006). Research and practice of student retention: What next? Journal of College Student Retention: Research, Theory & Practice, 8(1), 1-19.

    Tinto, V. (2010). From theory to action: Exploring the institutional conditions for student retention. In J. C. Smart (Ed.), Higher education: Handbook of theory and research (pp. 51-89). Netherlands: Springer.


    H.-T. Michael Chen is an Assistant Professor of Psychology at Eastern Kentucky University in Richmond, KY. He graduated from Berea College with a degree in Biology, and earned his M.S. and Ph.D. in Experimental Psychology from the University of Kentucky. He teaches courses in research methods, cognition, and human factors. His research interests include teaching strategies in the classroom and the design of better educational technologies.


  • 16 Jul 2017 1:29 PM | Anonymous
    STP’s SoTL Writing Workshop: A.K.A. How I Wrote a Paper in Two Days

    Michelle A. Drouin

    Indiana University–Purdue University Fort Wayne


    In this paper, I describe my experiences with the Society for the Teaching of Psychology’s (STP) Scholarship of Teaching and Learning (SoTL) Writing Workshop. I first describe the obstacles that kept me from joining such efforts and then outline the process and structure of STP’s Writing Workshop. As a result of my participation, I not only wrote a manuscript from (practically) start to finish in two days, but I also finished three other SoTL papers and developed and implemented a SoTL Writing Retreat on my own campus.

    It is very difficult to say “no” to Regan Gurung. He is charming and charismatic, and as the former President of STP, he is kind of a psychology celebrity. So in May, 2012, when Regan invited me to apply to the STP’s Scholarship of Teaching and Learning (SoTL) Writing Workshop (www.teachpsych.org/conferences/writing/index.php#.UcpdcZzNnUk), try as I might, I could not say “no.”

     “But it’s hard for me to travel,” I said. “I have two young children, five and three.”

    “Perfect! Mine are six and four,” Regan responded.

    “I actually have a lot of projects going on, so I am really doing well on my SoTL writing,” I countered.

    Regan smiled, “Are they finished? You owe it to teachers and students everywhere to get them out.”

    “Teachers and students everywhere?” I pondered, “That’s a lot of people depending on me... .”

    “Ok, I’m in,” I replied.

    Thus began my journey with STP’s SoTL Writing Workshop.



    The Obstacles

    As I look back on that day, I can clearly identify the obstacles that were keeping me from engaging in writing workshops generally and this one specifically:

    • I thought I had SoTL writing figured out. I had a few SoTL research papers published and had written two invited book chapters. Although I did not consider myself an expert in SoTL, I was certainly one of the SoTL leaders at my university. I knew I could do the work, so I really did not know what the SoTL Writing Workshop could do for me.
    • I did not think I had the time for a workshop. I was already time pressed—hence the many unfinished projects—so how would I find the time to travel and participate in a workshop?
    • I thought that unfinished projects were a normal part of academic life. My colleague (who has been in his position for 9 years) still has an unfinished project from graduate school. I have many unfinished projects, and as the years go by, that list is growing, especially for SoTL projects. I accepted this as a normal part of my academic journey.
    • I am actually a good, prolific writer. I don’t struggle with writing. I spend much of my academic work time writing both disciplinary and pedagogical papers, and I am successful in getting my work published. According to the 2010-11 UCLA Higher Education Research Institute Faculty Survey, only about 20% of faculty at all baccalaureate institutions had five or more papers accepted or published in the last two years (Hurtado, Eagan, Pryor, Whang, & Tran, 2012), and I am pleased to say that I am in that 20%.


    Despite my many internal protests, I engaged. Two weeks later, I was describing via email my various unfinished SoTL projects to my three fellow group members and reading Optimizing Teaching and Learning: Practicing Pedagogical Research (Gurung & Schwartz, 2009), which Regan sent to workshop participants. I was also learning more about the workshop through email and had received a participant timeline with “soft deadlines to make the workshop most effective”:

    May:  Introductions and basic idea sharing.

    June-August:  Preliminary consultations.

    August 30th:  Project proposal/status—Write a 1-2 page proposal for the topics you would like to research. If there is data collected, then list key hypotheses driving the study and draft a method section.

    September 15th: Complete a preliminary literature search for articles relating to topic of interest or study conducted (outline Intro section).

    Oct 1st: Final report on activity/project status due to Mentors.

                                                 (R. Gurung, personal communication, May 29, 2012)

    Through this email correspondence, I also learned two important things: (1) that the mentors would provide follow-up consultations and draft reading (or other types of assistance) post-workshop, and (2) that the goal of the workshop was to have a SoTL publication submitted by the end of the 2012-13 academic year. As I hoped to finish at least one of my papers by that deadline, I thought this was a realistic goal for me. However, one of the hurdles I faced during my preliminary consultations with Regan was trying to decide which of my many projects to bring to the workshop.

    Getting Organized

    At the time of our initial correspondence, I had SEVEN unfinished SoTL projects. I was already in the writing phase of an online lecture paper and decided to finish that one outside of the SoTL writing workshop; the workshop only accelerated my timeline. Thereafter, I turned my focus to three others: an iPad project, an online decision tree for psychology majors project, and a lecture capture project. In preparation for the August 30 deadline, I was overzealous and finished and submitted the decision tree paper, which left me with five papers to complete and nothing firm to bring to the writing workshop. At this point I had to reassess and emailed Regan in desperation—“what project should I now bring to the SoTL writing workshop?”

    Regan replied, “Given that you are progressing well, how about you aim to send a plan of what YOU hope to have done on EACH of the 3-4 topics.  A few sentences on each so you have a clear picture of goals.” (R. Gurung, personal communication, August 29, 2012).

    At this point, I finally committed to paper the goals I had for my various SoTL writing projects and constructed a table that would guide me through the rest of the process. In this table, I listed my five unfinished projects and the goals I had for them for the October workshop (summarized here):

    • iPad cohort & lecture capture projects: Data analyzed; results and methods sections written, literature review mostly done
    • Research assistantship, blogs as learning tools, and research review and presentation projects: Data cleaned; sources gathered

    Creating this table gave me clarity. This was the first time in my academic career that I had actually listed all of my ongoing projects and created goals for each. Until this point, the projects were all quite nebulous—I did not even know how many unfinished SoTL projects I had. After I created the table, I had a visual reminder of my goals, and this was a breakthrough. As I thought about my goals, I knew that if I could arrive at the writing workshop with at least cleaned data sets and relevant sources gathered, I would be able to make the most of the personalized statistical consultations and also be able to get advice on publication. Minimally, this is what I hoped to accomplish, and in the end, this is what I had accomplished when I boarded the plane for Atlanta in October, 2012.

    Attending the Conference

    Early in my career, I heard a rumor about two professors who would get together and complete manuscripts (from start to finish) in a weekend. I remember the questions that rushed through my head at the time—“How did they do it? What did they do to prepare for this writing extravaganza? Did they each work independently, or did they work collaboratively?” Because the source of this rumor had so few answers, I dismissed it as urban legend. However, now I know that this feat can be accomplished.

    When I arrived in Atlanta for the SoTL writing conference, I had 733 words (mostly methods), a cleaned data set, and sources gathered for a manuscript on the effects of using lecture capture in an introductory psychology course. I focused on this paper because after cleaning the data sets of three other projects (research assistantship, blogs, and research review), I decided I needed to collect more data. Meanwhile, although I had enough data for the iPad project, it was not specific enough to psychology to make use of the mentorship I was about to receive. Thus, my lecture capture project became my official SoTL workshop baby.

    The SoTL writing conference runs concurrently with STP’s Best Practices Conference, so we were able to attend the keynote addresses for the Best Practices Conference; however, the rest of the time we were to devote ourselves to our SoTL projects. The structure of the conference was:

    Day 1: Evening arrival, dinner, presentation on doing SoTL research by Regan Gurung, large-group introductions with explanations of our SoTL projects.

    Day 2: Writing, individual consultations with mentor, individual consultations with statistician and ToP editor.

    Day 3: Writing, presentation by Drew Christopher (Editor, Teaching of Psychology) on getting published, departure in the afternoon.

    I spent most of my time writing in the hotel lobby, side by side with other workshop participants, pausing at times to ask for their feedback on something I had written, but mostly staying in my own private writing abyss. I had a few consultations with Regan, where he pointed me to relevant sources and asked me to include additional information. I talked through my statistical analyses with Georjeanna Wilson-Doenges, who helped me see that what I was actually proposing was a mediation model. And I also spoke at length with Drew Christopher, who encouraged us all to be tenacious with our papers. When I boarded the plane to go home, I had 5,697 words and a paper that was nearly complete. A few days later, I sent it to Regan for feedback, and approximately one week later, I sent it out for review.


                A few months later, my paper (Drouin, 2014) was accepted with minor revisions for publication in Teaching of Psychology. However, this was not the only positive outcome of my SoTL writing workshop experience. Two other papers I prepared as part of this process (lecture format study and iPad project) have now been accepted for publication, and I am currently revising another (online decision tree) in response to a revise-and-resubmit decision. This is the greatest number of SoTL papers I have ever written in a one-year time frame and is equivalent to the number of SoTL articles I had accepted before I joined this workshop.

                These accomplishments are overshadowed, though, by my biggest take-away from the conference. In May, 2013, just one year after my initial conversation with Regan, I coordinated my own SoTL Writing Retreat on my campus. We had 12 participants working side-by-side with four experienced SoTL mentors, a statistical consultant, and librarians, who assisted with source gathering and finding publication venues. Sponsored by IPFW's Committee for the Advancement of Scholarly Teaching and Learning Excellence, this SoTL writing retreat was the first of its kind on our campus and was a great success. Although I did not follow the STP Writing Workshop model exactly (e.g., due to time constraints, we did not provide consultations in advance, and we also did not create a firm structure for follow-up consultations), we included key elements that were helpful in making the workshop a success for me. More specifically:

    1. We had an application process. Participants were asked to describe the projects they were working on, where they were in the process, and what they hoped to accomplish during the retreat.
    2. Participants were paired with mentors who had knowledge of the content area or data collection method. Based on the applications, we formed mini-groups composed of people who were working on similar projects or using similar data collection methods, and we matched mentors with writers on this basis.
    3. The writing retreat lasted only two days. Longer writing workshops or writing lockdowns that have meetings over weeks or months, like those highlighted by Belcher (2009) or Jakobsen and Lee (2012), certainly have their strengths, but my university already had writing groups, and I had never engaged because I feared the long commitment. Workshops of a limited duration are perfect for commitment-phobes like me, and because this model had worked for me with STP’s workshop, I wanted others to be able to experience this model.
    4. It was a retreat, with large chunks of time devoted to writing. We had only two short workshops on IRB proposals and publication venues; the rest of the time was devoted to manuscript writing or other types of SoTL writing activities (e.g., writing an IRB proposal, writing out a plan for the research).

    Feedback on the workshop was overwhelmingly positive, though participants did suggest doing more preparatory work before the retreat, which aligns well with STP’s model. Overall, participants appreciated the time devoted exclusively to working on their projects and the synergy we created during those two days in the campus library. It was inspirational for me, and in a sense, I felt that I was paying it forward.

    As I closed the writing workshop, I chose my words carefully: Echoes of a year before and foreshadowing for the essay you are now reading— “This is important work. You owe it to students and teachers everywhere to get it out.”


    Belcher, W. L. (2009). Writing your journal article in twelve weeks: A guide to academic publishing success. Thousand Oaks, CA: SAGE.

    Drouin, M. (2014). If you record it, some won’t come: Using lecture capture in introductory psychology. Teaching of Psychology, 41(1), 11-19.

    Gurung, R. A. R., & Schwartz, B. M. (2009). Optimizing teaching and learning: Practicing pedagogical research. Malden, MA: Blackwell.

    Hurtado, S., Eagan, M. K., Pryor, J. H., Whang, H., & Tran, S. (2012). Undergraduate teaching faculty: The 2010–2011 HERI Faculty Survey. Los Angeles: Higher Education Research Institute, UCLA.

    Jakobsen, K. V., & Lee, M. R. (2012). Faculty writing lockdowns. In J. Holmes, S.C. Baker, & J. R. Stowell (Eds.), Essays from E-xcellence in Teaching (Vol. 11, pp. 26–29). Retrieved from the Society for the Teaching of Psychology Web site: http://teachpsych.org/ebooks/eit2011/index.php

    Michelle Drouin earned her bachelor’s degree in psychology from Cornell University and her DPhil in Experimental Psychology from University of Oxford, England. She is an associate professor of psychology at Indiana University-Purdue University Fort Wayne and teaches courses in introductory psychology, developmental psychology (child and lifespan), social and personality development, and language development. Her research, both disciplinary and pedagogical, is focused on literacy, language, and the ways in which technology affects communication and learning. She has written numerous pedagogical papers and invited book chapters focused mainly on online teaching and the integration of technology in the classroom.


  • 02 Jul 2017 4:41 PM | Anonymous

    Flipped Out: Methods and Outcomes of Flipping Abnormal Psychology

    Amanda K Sommerfeld, Ph.D.
    Washington College


    The Background

    Abnormal psychology is taught in virtually every undergraduate psychology department across the country (Perlman & McCann, 1999). However, despite its popularity, the course is not immune from critique. Like many college courses, abnormal psychology is often lecture-based (Benjamin, 2002). Although such a pedagogical approach is popular among faculty because of its effectiveness in maximizing content delivery (Kendra, Cattaneo, & Mohr, 2012), in some cases lectures may also be less effective than other methods for promoting learning (cf. Halonen, 2005).

    Abnormal psychology courses have also been critiqued as lacking both context and nuance. As Norcross, Sommer, and Clifford (2001) note, in abnormal psychology classes, “the painful human experience of psychopathology is frequently overshadowed by descriptions of disembodied symptoms and impersonal treatment” (p. 126). As a result, despite many professors’ intentions to use abnormal psychology courses to decrease stigma (Kendra et al., 2012) and increase student understanding of the contextual factors that shape psychiatric conditions (Lafosse & Zinser, 2002), courses may fall short of these desired outcomes. That was certainly my experience when I first taught abnormal psychology.


    The Issue

    Psychopathology I (PSY 233) is a core course for students who are majoring in psychology with a clinical/counseling concentration at my college. Because of this, as well as the content, the class is frequently filled to capacity (40 students). When I inherited the class in Fall 2014, I kept using what Benjamin (2002) refers to as “the Velveeta (cheese) of teaching methods” (p. 57), otherwise known as a lecture-centered approach (which is comparable to the cheesy foodstuff in that despite the fact that no one admits to liking it, it remains the most popular pedagogical approach; Halonen, 2005). I enhanced the class with media critiques, group projects, and in-class discussions; however, class time remained lecture-driven.

    According to my students, the course was successful. Students gave high ratings on course evaluation items (rated from 1 = strongly disagree to 5 = strongly agree) such as “The use of teaching aids was effective” (M = 4.9) and “The instructor answered questions in class in a patient and helpful manner” (M = 4.9). Students’ qualitative feedback supported these ratings.

                Despite this positive feedback, I was dissatisfied with several aspects of the course. For example, lower student ratings on items such as “I learned a great deal in this class” (M = 4.6) and “The course raised challenging questions or issues” (M = 4.6) led me to wonder if students were basing their assessments on how much they liked the course rather than on their actual learning. What is more, at the end of the semester I didn’t feel confident that I’d met my objective of challenging students to consider how cultural norms and biases contribute to psychiatric conditions. As a result, I was left with the sense that, because of the format, I had reduced the course content to a list of diagnostic criteria, leaving little time for acknowledging symptom variation, challenging stereotypes, or encouraging the development of advocacy attitudes. To combat these shortcomings, I decided to change the class radically, and, with the support of a grant from my college’s Cromwell Center for Teaching and Learning, I flipped—or inverted—the class.


    The Solution

    There is no single definition of flipped instruction (He, Holton, Farkas, & Warschauer, 2016). However, the underlying intent of the approach is to move lecture-based material outside of class, leaving in-class time for “face to face engagement between students and teachers” (Forsey, Low, & Glance, 2013, p. 472). This is commonly achieved by delivering course content before class meetings using recorded lectures, podcasts, or videos. Material is then applied during face-to-face meetings through discussions, activities, and hands-on demonstrations.

    To date, the research on flipped instruction is incomplete. As O’Flaherty and Phillips (2015) note, few studies have “actually demonstrated robust evidence to support that the flipped learning approach is more effective than conventional teaching methods” (p. 94). Despite this, preliminary evidence is encouraging, with some studies reporting that flipped instruction results in greater student engagement (cf. Jamaludin & Osman, 2014) and higher test scores and overall grades (cf. Mason, Shuman, & Cook, 2013). Based on this available evidence, and the issues that I observed in the first iteration of Psychopathology I, flipping the class seemed a worthwhile venture.


    The Implementation

                Flipping Psychopathology I required me to create two sets of materials: out-of-class and in-class. The bulk of class content (e.g., diagnostic criteria, prevalence rates, and treatment approaches) was delivered outside of class through video lectures that were uploaded to the course’s online learning platform. For the first iteration of the flipped course, these lectures were simple, with my voice recorded over PowerPoint slides using SnagIt. The videos were limited to ten minutes each so students could easily review the information. Prior to class, students were required to watch between one and three videos and complete an online quiz. The quizzes were intended to encourage mastery, so students were able to repeat them multiple times.

    In-class time was focused on application and discussion (Pluta, Richards, & Mutnick, 2013). This required me to create individual and group activities for each class meeting. Sample activities included having students evaluate media depictions of psychiatric disorders for accuracy, writing vignettes of imaginary clients, and discussing the systemic factors that affect how clients manifest symptoms.

                I evaluated the effectiveness of the flipped versus traditional instruction based on data collected at two times: following a lecture-based course in Fall 2014 (N=27) and following a flipped-style course in Fall 2015 (N=34). Data I collected at both points in time included student test scores and grades, student course evaluations, student responses to questions developed for the Web Learning Project (Calderon, Ginsberg, & Ciabocchi, 2012), and instructor reflections.


    The Outcomes

                Data from the traditional and flipped offerings of Psychopathology I suggested the pedagogical change affected outcomes in three domains: student learning, student engagement, and instructor experience.


    Impacts on student learning

                Researchers suggest that flipped instruction is successful because students are able to learn and review pre-class material on their own time and at their own pace (McDonald & Smith, 2013). Many of my students agreed with this assessment, with one sharing on a course evaluation: “I like how the videos were before class. It allowed for deeper understanding of the material because I can pause, write down questions, and review as needed.” Accordingly, students in the flipped course also rated the “adequacy of resources” significantly higher than did students in the lecture class (t(54)=-2.11, p=.04).

                Contrary to the literature, the accessibility of material outside of class did not translate into higher grades for my students. In fact, although there were no significant differences in final grades between the two classes, students in the flipped class had significantly lower exam grades than students in the lecture-based class (t(58)=2.42, p=.02). What is more, student ratings on the item “I learned a lot in this course” were lower in the flipped course (M = 4.3) than in the lecture course (M = 4.6).

                It is possible that some of the student learning drawbacks of the flipped class were related to perceptions of the difficulty of the course. In comparison to students in the lecture class, students in the flipped class rated the course as having a significantly higher “workload” (t(56)=-6.02, p=.00) and being more “difficult” (t(55)=-3.19, p=.00). Further, student qualitative feedback reinforced these ratings, suggesting the flipped style made learning more difficult for some students.

    This perception runs contrary to previous studies suggesting that students perceive flipped courses as less difficult than courses taught using traditional methods (He et al., 2016). However, similar to previous research (cf. O’Flaherty & Phillips, 2015), students in my flipped course suggested the difficulty stemmed predominantly from the increased responsibility they felt: “The flip style makes learning just a little bit harder because it puts all the responsibility on what you do outside of the classroom.”


    Impacts on student engagement

                The fundamental purpose behind flipped instruction is to use in-class time for active learning. Given this, some of the feedback from students in the flipped class led me to question the effectiveness of my in-class activities. For example, students in the flipped course rated the “learning value of in-class materials” significantly lower than did students in the lecture course (t(56)=2.326, p=.02). These data were supported by comments such as, “class meetings are interesting but not necessarily informative.”

    Based on these data, it seems that my implementation of flipped pedagogy may have fallen short because of how I structured face-to-face meetings. It may be that, similar to O’Flaherty and Phillips’ (2015) findings in their scoping review, I failed to explain the link between the pre-class activities and the face-to-face sessions. As a result, the in-class material may not have engaged the students.

                With that said, data also suggested students interacted more in the flipped class, which may have facilitated student engagement. For example, students in the flipped course rated the amount and quality of “interaction with other students” significantly higher than did students in the lecture course (t(56)=-6.06, p=.00). Student comments reinforced these data, with one student noting, “I like that we get more time to ask questions in class,” and another mentioning that “the interaction during class time helps to solidify the information.”


    Impact on instructor

                Researchers who study flipped instruction routinely note how demanding it is on instructors. That was certainly my experience in flipping Psychopathology I. Similar to other instructors’ experiences, it took considerable planning and preparation for me to design engaging, interactive in-class activities (cf. Mason et al., 2013). A great deal of lead time was also required to record and edit lectures in advance of class meetings.

                The process of making the videos was also complicated by the limited technical support available to me. Although I consulted with members of the academic technology team at my institution, they did not have the time or resources to help me record or edit the videos. As a result, I had to learn how to use the software and troubleshoot issues on my own. Given the amount of time and expertise required to create even simple videos, it is not surprising that researchers have recommended having support staff or a technical team available (cf. Ferreri & O’Connor, 2013).

                Despite these issues, I also found the flipped course had multiple strengths. Most importantly, because students could access and review the lectures before class meetings, they were less concerned with taking notes in class. This freed students to listen to their classmates, contribute to discussions, and engage fully in activities. As a result, a greater proportion of students participated in the flipped class than in the traditional class.

                Finally, I also found the flipped class provided students with increased opportunities to consider more nuanced issues related to psychiatric disorders. In particular, because the students were introduced to diagnostic criteria and prevalence rates prior to class, they were more prepared to apply and critique that material in class, opening up discussions about stigma, social norms, and systemic forms of privilege and oppression that affect psychological health and illness.


    The Conclusions

    The flipped version of Psychopathology I had both strengths and weaknesses. Students appreciated the opportunities for review that the flipped style provided, were better able to consider the nuances of psychiatric conditions, and were more engaged during in-class meetings. On the other hand, some students reported the flipped style made learning more difficult, and I found the flipped course took more time to prepare. Given these data, it is not possible to say that flipping Psychopathology I improved the course as a whole, at least not after the first offering. However, with revision, the flipped course could hold considerable promise to help students develop more critical perspectives on topics relevant to abnormal psychology.




    Benjamin, L. (2002). Lecturing. In S.F. Davis & W. Buskist (Eds.), The teaching of psychology: Essays in honor of Wilbert J. McKeachie and Charles L. Brewer. Mahwah, NJ: Lawrence Erlbaum Associates, Inc.

    Calderon, O., Ginsberg, A.P., & Ciabocchi, L. (2012). Multidimensional assessment of pilot blended learning programs: Maximizing program effectiveness based on student and faculty feedback. Journal of Asynchronous Learning Networks, 16(3), 23-37.

    Ferreri, S., & O'Connor (2013). Instructional design and assessment: Redesign of a large lecture course into a small-group learning course. American Journal of Pharmaceutical Education, 77(1), 19.

    Forsey, M., Low, M., & Glance, D. (2013). Flipping the sociology classroom: Towards a practice of online pedagogy. Journal of Sociology, 49(4), 471-485.

    Halonen, J.S. (2005). Abnormal psychology as liberating art and science. Journal of Social and Clinical Psychology, 24(1), 41-50.

    He, W., Holton, A., Farkas, G., & Warschauer, M. (2016). The effects of flipped instruction on out-of-class study time, exam performance, and student perceptions. Learning and Instruction, 45, 61-71.

    Jamaludin, R., & Osman, S. Z. (2014). The use of a flipped classroom to enhance engagement and promote active learning. Journal of Education and Practice, 5(2), 124–131.

    Kendra, M.S., Cattaneo, L.B., & Mohr, J.J. (2012). Teaching abnormal psychology to improve attitudes toward mental illness and help-seeking. Teaching of Psychology, 39(1), 57-61.

    Lafosse, J.M., & Zinser, M.C. (2002). A case-conference exercise to facilitate understanding of paradigms in abnormal psychology. Teaching of Psychology, 29(3), 220-222.

    Mason, G., Shuman, T., & Cook, K. (2013). Comparing the effectiveness of an inverted classroom to a traditional classroom in an upper-division engineering course. IEEE Transactions on Education, 56(4), 430-435.

    McDonald, K., & Smith, C. M. (2013). The flipped classroom for professional development: Part I. Benefits and strategies. The Journal of Continuing Education in Nursing, 44(10), 437.

    Norcross, J.C., Sommer, R., & Clifford, J.S. (2001). Incorporating published autobiographies into the abnormal psychology course. Teaching of Psychology, 28(2), 125-128.

    O’Flaherty, J., & Phillips, C. (2015). The use of flipped classrooms in higher education: A scoping review. Internet and Higher Education, 25, 85-95.

    Perlman, B., & McCann, L.I. (1999). The most frequently listed courses in the undergraduate psychology curriculum. Teaching of Psychology, 26(3), 177-182.

    Pluta, W., Richards, B., & Mutnick, A. (2013). PBL and beyond: Trends in collaborative learning. Teaching and Learning in Medicine, 25(S1), S9-S16.

  • 15 Jun 2017 1:15 PM | Anonymous
    Using The iPad In Your Academic Workflow:
    Best iPad Productivity Tools For Your Classroom Practices

    David Berg, Ph.D.
    Community College Of Philadelphia

This document is based on workshops I presented at the 35th Annual National Institute on the Teaching of Psychology (NITOP).

    Introduction To “Using The iPad In Your Academic Workflow”
In the academic world, our workflow involves a number of elements, which may include planning and scheduling, project management, reading and writing, information management (gathering, sorting, storing), collaboration (with students, colleagues, departments, colleges, and organizations), participation in meetings and committees, and interfacing with cyberspace (email and web). We could add many more things to the list; however, it’s best to emphasize that workflow for the iPad looks like old S→P→O psychology. The workflow starts with the INPUT (stimulus) into the iPad, either from your computer (via iTunes sync), from the cloud (via DropBox or WiFi), or from your own thoughts and ideas. It ends with the OUTPUT back to your computer, to the cloud, to a projector, or perhaps to a printer. OUTPUT can take many forms: written and marked-up documents, media (audio/video/artistic/photos), presentation materials, podcasts, collaborative documents, and so on. What goes on in the middle is the PROCESSING, which entails the use of many interconnected tools or apps on the iPad itself; the majority of this essay focuses on that processing.

    iPad In The Classroom
    Over the past two years or so, more and more faculty have been making use of the iPad as the “tool of choice” in their academic lives. As the iPad (and iOS) have matured, we’ve seen greater numbers adapting the device for their personal use. What about the iPad in the classroom? Beyond some simple usage, most faculty have not tapped the full potential of the iPad—still relying on laptops, smart carts, and the classroom smart podium (nice if your classroom has one). My favorite classroom is currently outfitted with 1976-era technology: a 27” wall-mounted monitor with attached VHS/DVD player (that works most of the time). Schlepping the smart cart from A/V services around the campus is a Herculean chore not for the faint of heart; getting all of the parts working and set up for class...well...resistance is futile!

So I made an executive decision. Though on a shoestring budget, I decided that I would not upgrade my old laptop but would instead invest in the new tablet technology and adapt it to both my classroom needs and my academic workflow. Mind you, I have a decent, up-to-date desktop computer that provides a way around some of the content-creation issues that come up with tablet computing.

The next section is aimed at the professional user who wants to make the most of using the iPad in the classroom. It does not cover classes at colleges that give everyone an iPad (we should only be so lucky), but rather how to make use of the iPad as your go-to technology.

The four biggest issues usually raised when we discuss using the iPad are: Content Creation vs. Consumption, Laptop vs. iPad, Device Integration, and College vs. High School Teaching. When the original iPad was first released, it functioned as a superb consumption device—great for personal use but limited in many ways for creating content. Times have changed! You can create to your heart’s content, albeit with some limitations in a few areas; there isn’t much that you can’t do. Probably the most serious limitations (for academics) are in creating major presentations (PowerPoint and Keynote), developing large media projects, and business applications (large Excel spreadsheets and the like). You can do these things, but not with the same ease as on a laptop or desktop computer.

Of course this brings us to the next issue, Laptop vs. iPad. The iPad excels as a portable device, whether at college, in the classroom, at home, or for travel. In a classroom, the iPad can be connected to any monitor or projector with ease, and it can also be used as a whiteboard, making for an interactive class. The laptop may be preferable for data management, for content creation of presentations and media, or for research and data. If you need to make a decision, think in terms of what your needs are rather than which device to buy. I have a wonderful desktop machine, so I have given up my old laptop in favor of my iPad; when I retire, I will give up the desktop machine. If you do not have access to a good working computer, you might think about updating.

Once these first two issues get sorted out, you can then consider the third, Device Integration. NOT A PROBLEM. When the iPad first appeared, about the only way to get information in and out was through iTunes sync. Now, with the proliferation of cloud computing, this is no longer a difficulty. I prefer to connect my iPad to my computer every few days and use the app file-sharing method in iTunes. However, many people prefer to use DropBox as their primary means of transferring information between their iPad and their Mac or PC. For specific types of documents, both Google and Microsoft have also introduced their own versions of the cloud for document syncing and collaboration.

    Finally, high school Psychology teachers may have other responsibilities that college instructors don’t have to deal with, such as interfacing with an administrative network, putting together course lessons for five day/week classes, and making lesson plans available to supervisors. There are now a number of apps to facilitate these functions.

    Fair Use Guidelines & Copyright Issues
We need to exercise great caution in what we download, copy, and/or display. Distributing copyrighted materials is a serious issue, although simply displaying the material may not be; there are strict copyright guidelines regarding such matters, so understanding the fair use guidelines and their exceptions is very important. My experience has been that permission is easily obtained with a brief email, which avoids many hassles. For an overall view, the Center for Social Media has provided a “best practices” paper dealing with copyright, along with a FAQ review (http://centerforsocialmedia.org/fair-use/related-materials/codes/code-best-practices-fair-use-online-video).

Some accessories are a must to make full use of the iPad. Choose among the categories based upon personal look, feel, and expense. Try before you buy is always best, so speak to colleagues and friends to determine what works best for you. If you live near an Apple Store or Best Buy, go play. If you cannot, then four reliable online sources for accessories are Amazon.com, Meritline.com, Buy.com, and Handhelditems.com. Must-have accessories include:

    • Bluetooth Keyboard (stand-alone or in a folio case, approximately $50)
    • Folio style case or iPad cover (approximately $35)
    • Stylus (approximately $20) and Screen Cleaner (approximately $10)
    • Auxiliary speakers & headphone (range in price from $5 to $200)
    • Extra charger for office or auto (approximately $20)       

    There are a few excellent websites that will be helpful for both workflow and classroom teaching with the iPad.

    What Do You Want To Do?
Probably the biggest question is “What do you actually want to do with your iPad?” This needs to be well thought out, because it will entail investments of time, training, and some cash (for apps and accessories). I have arbitrarily divided the use of the iPad, in both the workflow and the classroom, into a number of areas. These overlap and are by no means exhaustive. I’ve also listed apps that are highly rated in each category; some are free and others are not. Check them out at the iTunes Store online or in the App Store app on the iPad. Download the freebies and play. For those that cost, read the reviews and click “most critical” in the reviews link before buying.

    The Workflow and Classroom Categories & Specific Apps
    Beginning and Ending the Workflow: Input and Output

    Getting your documents into the iPad is a fairly straightforward procedure called syncing.

    The two most popular and efficient ways are through iTunes sync and DropBox. Simply drag a file to DropBox on your computer (PC/Mac), and it will show up on your iPad (assuming that both are in the same wifi network). Once you have the document on the iPad, use the “open in” command to move the file to the appropriate app. Reversing this process moves the document back to your computer.

    iTunes sync occurs when you attach your iPad to the computer. There is a window in iTunes that contains all of the apps that share your documents. Simply add your document into this window, and it will sync to your iPad. The reverse process updates the document which can then be saved.

    The advantage of DropBox is that you don’t have to attach the iPad to the computer; further, you can set up folders to share with other people over any network. iTunes sync’s advantage is better organization and control of your documents. I prefer iTunes sync.

Output from the iPad is pretty much the reverse of the processes listed. In addition, we can add email and printing as output methods. While I list presentation and communication apps later, printing is a special case, because it can take several steps. Some apps are AirPrint-enabled, meaning that they will print, without any extra steps, to an AirPrint-enabled printer. All of the major manufacturers make them, so if you are purchasing a new printer, look this up in the specs. For those of us who do not need a new printer, several apps in the iTunes Store will enable you to use a printer on the same WiFi network. Choose apps that have two versions: a lite (free trial) version as well as a paid one. Download the lite version and give it a try; if it works, purchase the full paid version. Loading the app onto the iPad, and the companion version onto your Mac or PC, will enable you to print wirelessly over your network. There are several choices: I have used PrintCentral from Eurosmartz ($10) since the iPad came out (it was one of the first such apps), and it works just fine for me.

    Project and Task Management
This category includes apps useful for project and event planning. Particularly popular are apps that use the built-in Calendar and Reminders; those of you who use Google’s apps may want to integrate Google Calendar into your iPad use. Additionally, for those who really like to have more control, there are a number of To-Do apps (e.g., Wunderlist, which is free, and ToDo, which costs $5). If you want to do graphic layouts of projects, Popplet and Corkulous are quite good. For special presentations and projects, Exhibit A ($10) is worth investigating. (Costs of the apps below are listed with the app; free apps are denoted by “F.”)

    Project and Task Management Apps
    • Calendar (F)   
    • Corkulous (F + $5)   
    • Popplet Lite (F)  
    • ToDo ($5)  
    • Wunderlist (F)    

Writing, Collaboration, and Communication Tools and Apps
These include apps for writing and note taking, grading papers, email, Skype, Google Docs, DropBox, podcast and screencast production, and the internet.

    Apps to Substitute for MS Office and Note Taking
    • CloudOn (F)
    • DocsToGo ($10)
    • Google Docs (F)
    • Notability ($1)
    • Pages ($10)
    • Penultimate ($1)
    • Smart Office ($5)
    • SoundNote ($5)
    Good Utilitarian Browsers
    • Chrome (F)
    • Life Browser ($1)
    • Safari (F)

    Browsers That Play Flash
    • Photon ($5)
    • Puffin (F)
    • SkyFire ($3)

    Utility Apps for Recording, Communications, Bar Code Reading
    • Dictate (F)
    • Display Recorder ($10) 
    • FaceTime (F)
    • i-nigma (F) (QR codes)
    • Skype (F)    
    • Twitter (F)       

    Utilities for Printing                 
    • PrintCentral ($10)  
    Utilities for Displaying
    • Reflector ($15)    
    • Splashtop ($2)

    Finding WiFi
    • Wi-Fi Finder (F)

    Information Management
    These apps include textbooks, readers, database for information materials, lecture note replacement, and pdf readers/annotators.

    Apps for information storage -- A personal file cabinet
    • DropBox (F)     
    • EverNote (basic app is free, there is also a premium version for $5/month)
    • Exhibit A ($10)
    • GoodReader ($5)   
    • Google Drive (F)           

    WebPage Storage Apps (Read webpages offline without an internet connection)
    • Instapaper ($4)
    • JotNot ($2)
    • Offline Pages ($5)
    • Pocket (F)
    • Safari (F)

    Research and Reading and Reference
    • APA Journals (F) (priced by subscription)
    • CourseSmart (F) (books – prices vary)
    • Inkling (F) (books – prices vary)
    • Mendeley Lite (F)   
    • Wolfram Alpha ($5)

    PDF annotation, Pdf readers, Book Readers
    • iAnnotate ($10)
    • iBooks (F)
    • Kindle (F)      
    • neu.Annotate+ ($2)
    • Nook (F)

Apps to use for Presentations, Whiteboard, Digital Jukebox, Surveys and Polls (without clickers). For a digital jukebox, use GoodReader, Keynote, or any app that will play PowerPoint slides
    • GoodReader ($5)
    • Keynote ($10)
    • Lecture Tools (F)
    • Poll Everywhere (F+)
    • SlideShark (F)

    Classroom Management
This category includes apps that are used for organizing the class, such as calendars, grade books, and attendance (roll book). If working with these types of apps feels cumbersome, then setting up a spreadsheet grade book on your computer and transferring it to the iPad may be a good choice. (I personally use the spreadsheet method, but some faculty like an all-in-one app.)
    • Calendar (F)
    • Google Calendar (F)
    • Numbers ($10) (an office spreadsheet)
    • Reminders (F)
    • ToDo ($5)
    • Wunderlist (F)

    The following are specific apps to organize classrooms, attendance, and gradebooks.
    • Class Organizer Complete ($5; for students)
    • GradeBook Pro ($10)
    • InClass (F; for students)
    • TeacherKit (F)
    • Teacher’s Aide (F)

    Demonstration Apps
This category includes specific psychology-related demonstration apps. These range from apps that can be used as “labs” to class A/V displays, digital jukeboxes (brain and body), and informational resources for both the professor and students. The list is by no means exhaustive.

    General Psychology Information Apps
    • Psych Drugs (F)
    • PsychExplorer (F)
    • PsychGuide (F)
    • PsychTerms (F)
    • PsycTest Hero ($4)
    • Psychology Latest (F)

    Lab Demos
    • Cardiograph ($2)  
    • PAR CRR ($4)
    • Puffin (APA OPL) (F)
    • Stroop Effect (F)   
    • TouchReflex (F)

    Anatomy & Physiology
    • 3D Brain (F)
    • Brain Tutor (F)
    • Cardiograph ($2)
    • EyesandEars ($1)
    • Grays Anatomy ($1)
    • iMuscle ($2)

    Sensation & Perception             
    • 3D illusions (F)
    • Eye Illusions ($2)
    • EyeTricks ($1)

    Audio/Visual Informational Resources
    • iTunes U (F)    
    • Podcasts (F)   
    • SoundBox ($1)

    DIY Presentations
    • Educreations (F)
    • Explain Everything ($3)

    Video Presentations
    • Apple Video (F)
    • NetFlix ($8 monthly subscription for streaming)
    • YouTube(F)

    Social Media
    • FaceBook (F)
    • Twitter(F)

You can find a digital version of this document, with live internet links where applicable, on my college webpage (http://faculty.ccp.edu/faculty/dsberg/); click on “TUTORIALS & DEMOS.”

David Berg is Professor of Psychology at Community College of Philadelphia, where he was the recipient of the Lindback Foundation Award for excellence in college teaching and where he served as chair of the Behavioral Sciences Department. He received his Ph.D. in experimental psychology from Temple University and completed postdoctoral training in family systems theory at Drexel University/Hahnemann Medical College. David has pioneered workshops focusing on “wellness in the workplace” and has presented these to government, business, and educational institutions; he also trains other psychologists to perform similar workshops. Dr. Berg has presented a number of workshops on the use of writing in psychology courses, both at NITOP and at APA, as well as NITOP workshops on the use of technology in the classroom. Since the advent of laptop computers, David has consulted with teaching faculty to bring them up to the cutting edge in classroom technology, and he serves as a resource for those who teach at institutions on a “shoestring budget” like his own. He views and uses technology as a means to heighten standards of critical thinking and writing in teaching, rather than as a mere adjunct to lecturing.

