Society for the Teaching of Psychology: Division 2 of the American Psychological Association

Using psychophysiology technology in general psychology to engage students

02 May 2019

Kameko Halfmann (University of Wisconsin – Platteville)

I remember the first semester I taught general psychology: fresh, energetic, and a little bit naive. Relatively new to teaching, I would read students’ essays and exams, often in frustration, as students clung to misconceptions about psychology that I thought I had adequately dispelled. “How do they not remember me explaining this?!” I would wonder in despair. Since then, it has become one of my missions to figure out how to teach more effectively and dispel these common misconceptions.

Indeed, students walk into general psychology with a common-sense understanding of human behavior, often heavily influenced by popular science and armed with misconceptions (Lilienfeld, Lohr, & Morier, 2001). Teaching general psychology, I learned, requires active myth busting to help students understand human cognition and behavior through a scientific lens. Best practices in teaching and learning include providing meaningful examples (e.g., Ausubel, 1968), encouraging student cooperation and teamwork (e.g., Johnson, Johnson, & Smith, 1998), and promoting active learning (e.g., Kellum, Carr, & Dozier, 2001), to name a few.

Over my handful of semesters as an assistant professor, I’ve leaned on these best practices, attempting to incorporate more examples and active learning into all of my courses. I occasionally collected data, dipping my toes into the Scholarship of Teaching and Learning (SoTL); the data consistently indicated that students felt the activities helped them learn.

I specifically developed an interest in teaching with technology. This interest grew from another revelation I had: students are not, so to speak, the digital natives we think they are (Prensky, 2001). Students use technology frequently, but the EDUCAUSE Center for Analysis and Research (ECAR) suggests that student technology use is broad, not deep. Moreover, ECAR’s report (2018) indicates that students still need support to use technology in meaningful ways. Similarly, Beetham and Sharpe (2007) note that students do not necessarily have the “habits of practice” needed to navigate new technology. So, I began to incorporate assignments and activities that pushed students to use technology in educational ways. For example, I incorporated social media assignments into several of my courses.

Then, last year, I had the chance to apply for an in-house grant titled “Innovations in Teaching with Technology.” I applied with the goal of purchasing Neulog plug-and-play psychophysiology modules: relatively inexpensive, easy-to-use, portable technology that would allow me to incorporate psychophysiology into my courses. Previous research suggested that using technology such as portable EEG was associated with enhanced attention, interest, and exam scores (Stewart, 2015). Labs such as these would allow students to “do” psychology and bring course content to life (Dunn, McCarthy, Baker, Halonen, & Hill, 2007) rather than having a lecturer “tell” students about experiments.

In particular, I thought, psychology students tend to struggle to understand concepts associated with the biological basis of behavior; therefore, employing active learning methods to bring these concepts to life in lab sessions could be especially impactful (Thibodeau, 2011). I ended up receiving the grant. I also decided it was time for me to more seriously assess my teaching using SoTL.

Initially, I developed one activity, designed to dispel the lie detector myth (i.e., the myth that “the polygraph is an accurate means of detecting dishonesty,” Lilienfeld, Lynn, Ruscio, & Beyerstein, 2010). Students observed me give a demonstration with a student volunteer, showing how to use the equipment. They also saw, through the demonstration, how several stimuli could elicit an electrodermal response. For example, I would have the volunteer take a deep breath, smell a scented candle, and, if they let me, I’d touch their ear with the eraser of a pencil. Each of these stimuli caused an electrodermal response. In other words, the demonstration showed students how the supposed lie detector test was really just measuring autonomic nervous system activity, and many stimuli, not just lying, could lead to changes in sympathetic nervous system arousal. Students then gathered in groups of 5 or 6 and engaged with the technology themselves for about 25 minutes.

The first semester I used this activity, students reported that it improved the quality of the course, helped them understand concepts, helped them connect with others, promoted professional growth, enhanced their experience of participation, and should be used more often. Each rating was significantly higher than a neutral baseline, with relatively large effect sizes. The following semester, I decided to take this research a step further: did the students actually understand the content better?
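Before turning to that question, a quick aside on the rating comparison above. Below is a minimal sketch of that kind of analysis in Python, assuming ratings on a 5-point scale compared against a neutral midpoint of 3; the numbers are illustrative placeholders, not my survey data.

    # Hypothetical one-sample t-test of ratings against a neutral midpoint,
    # plus Cohen's d as an effect size. All values below are made up.
    import numpy as np
    from scipy import stats

    ratings = np.array([4, 5, 4, 3, 5, 4, 4, 5, 3, 4])  # assumed 5-point scale responses
    midpoint = 3                                         # assumed neutral value

    t, p = stats.ttest_1samp(ratings, popmean=midpoint)
    d = (ratings.mean() - midpoint) / ratings.std(ddof=1)
    print(f"t = {t:.2f}, p = {p:.3f}, d = {d:.2f}")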

In order to pursue this question, I needed another activity that was similar to the first but covered different content. I decided to develop a biofeedback activity using the electrocardiogram module. Students, again, watched a demonstration of how to use the technology and then engaged with it themselves, testing how various stimuli affected heart rate and answering questions related to biofeedback.

Last semester I was teaching three sections of general psychology, and I assessed student understanding before and after these activities. Early-ish in the semester, when we were covering stress and emotion, I implemented the two activities (i.e., the lie detector activity and the biofeedback activity) over the course of two class periods, using a nonequivalent-groups pretest/posttest design. On the first day, students in all three sections took a pre-quiz on the autonomic nervous system and on why the polygraph is not considered an accurate index of lying. Two sections then participated in the activity using the Neulog technology (the lie detector active group). The third section participated in a lecture/discussion on the same topic (the biofeedback active group, named for the activity it would complete during the next class period). All sections took a post-quiz.

The following class period, I flipped the groups. The section that had previously participated in a lecture/discussion did the biofeedback activity (i.e., the biofeedback active group) and the other two sections engaged in lecture/discussion on the same topic (i.e., the lie detector active group). Everyone took a pre-quiz and post-quiz again. I also included four questions (two per content type) on the following exam and two questions on the final exam (one per content type) to assess learning.
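To keep the counterbalancing straight, here is a schematic of the two-day design in Python; the section labels (A, B, C) are placeholders rather than my actual sections.

    # Schematic of the crossover: each section completes one Neulog activity and
    # one lecture/discussion, with pre- and post-quizzes on both days.
    design = {
        "Day 1 (lie detector content)": {
            "Sections A and B (lie detector active group)": "Neulog activity",
            "Section C (biofeedback active group)": "lecture/discussion",
        },
        "Day 2 (biofeedback content)": {
            "Sections A and B (lie detector active group)": "lecture/discussion",
            "Section C (biofeedback active group)": "Neulog activity",
        },
    }
    for day, assignments in design.items():
        print(day)
        for section, condition in assignments.items():
            print(f"  {section}: {condition}")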

What did I learn? Did the activities work? To be honest, the main thing I learned was how many challenges come with conducting SoTL research. I did not find an effect of activity on understanding: neither activity seemed to help or hurt student understanding of the content. I did, however, see an effect of activity group: the biofeedback active group performed better, on average, across all assessments. I also found an effect of question content: the biofeedback-related questions were easier, with the biofeedback active group hitting a ceiling on the exam. And I found an effect of time: students improved from pre-quiz to post-quiz (for the lie detector active group) and from post-quiz to exam (for the biofeedback active group). But none of these effects interacted with the activity group students were in. Based on these assessments, participating in an active learning lesson did not boost performance relative to a more lecture-based lesson.
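For readers curious about how such effects might be tested, below is a minimal sketch using a mixed linear model with a random intercept per student (statsmodels). The data are simulated placeholders, and this is one reasonable modeling choice, not necessarily the exact analysis I ran.

    # Simulated long-format data: one row per student x content type x time point.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    rows = []
    for i in range(60):
        group = "lie_detector_active" if i < 40 else "biofeedback_active"
        ability = rng.normal(0, 0.05)  # per-student offset (the random intercept)
        for content in ["lie_detector", "biofeedback"]:
            for time in ["pre", "post", "exam"]:
                rows.append({
                    "student": f"s{i}", "group": group, "content": content,
                    "time": time,
                    "score": float(np.clip(rng.normal(0.7 + ability, 0.1), 0, 1)),
                })
    df = pd.DataFrame(rows)

    # Fixed effects for group, content, time, and their interactions;
    # random intercept for each student.
    model = smf.mixedlm("score ~ group * content * time", df, groups=df["student"])
    print(model.fit().summary())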

But back to some of the lessons I learned about SoTL: Determining an appropriate method of assessment was challenging. I clearly used questions that were not well matched in difficulty across content types. I also tried to use variations of similar questions over the course of the semester for the different assessment time points; however, some of the questions were clearly more challenging than others. So, my first major lesson was:

  1. Pretest assessment questions so they are matched on difficulty across content type and time of assessment.

Another challenge related to my assessment was selecting an appropriate number of questions. I didn’t want this one topic related to my activities to take over my exams, and I ended up using fewer questions than I should have to gauge student understanding. I also relied solely on multiple-choice questions. My second main lesson was:

  2. Use several questions and question types to assess understanding over the course of the semester.

Neither of these lessons is particularly surprising (e.g., see http://regangurung.com/scholarship-of-teaching-and-learning-sotl/ for resources on SoTL), but both take time and forethought to put into practice well. Having assessed students several times now, I can better construct assessments that reflect student understanding rather than question difficulty or other artifacts.

I also attempted to assess students’ understanding at the end of the semester by including two key questions on the cumulative final exam. However, I had decided to drop each student’s lowest of five exam scores this semester, which made the cumulative final effectively optional; only 37 of 100 students took it. This was the first time I used five exams, including a cumulative final, and the first semester I dropped the lowest exam score, and I did not anticipate that such a low proportion of students would take the final. Although not directly related to my SoTL project, I would not use this setup again. Not only did many students miss out on an important learning opportunity (i.e., taking the final exam), but the low turnout also reduced my statistical power for this research.

Another challenge I ran into was nonequivalent groups. Two solutions to this problem come to mind. First, I could collect more data with a new sample. Second, I could use random assignment to split each class into two groups and invite only half of my students to participate in each activity (giving the other half a day off or a recorded lecture). This semester, I hope to collect more data in different courses and to reach out to students from last semester to see whether I can capture one more assessment from them and measure longer-term retention of the material. Ideally, I will collect the new data using the random assignment approach to split my classes, as sketched below.
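Here is a minimal sketch of that random assignment step, assuming a simple class roster; the student names are hypothetical.

    # Randomly split a roster in half: one half is invited to the in-class
    # Neulog activity, the other half gets a recorded lecture (or a day off).
    import random

    roster = [f"student_{i:02d}" for i in range(1, 31)]  # hypothetical roster
    random.seed(2019)  # fixed seed so the split is reproducible
    random.shuffle(roster)

    half = len(roster) // 2
    activity_half = roster[:half]
    lecture_half = roster[half:]
    print(len(activity_half), "invited to the activity;", len(lecture_half), "to the lecture")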

I clearly ran into several limitations that prevented me from drawing confident conclusions at the end of the semester. I don’t know if I will ever be fully satisfied with my teaching, or if it is possible to design a perfect SoTL project. Each semester, my students seem to challenge me in new ways, reigniting my mission to find a better way to teach a concept or dispel a misconception. In the following semesters, I respond by tweaking my courses, and sometimes by completely overhauling one. I’ll continue to lean on others’ research as I slowly accumulate my own SoTL findings. I hope this work encourages others to put their own teaching to the test. You may discover that something works better or worse than you thought. Or, like me, you might just be at the starting point of figuring out how best to assess student learning and determine what works.


References

American Psychological Association. (2014). Strengthening the common core of the Introductory Psychology Course. Washington, D.C.: American Psychological Association, Board of Educational Affairs. Retrieved from https://www.apa.org/ed/governance/bea/intro-psych-report.pdf

Ausubel, D. P. (1968). Educational Psychology: A Cognitive View. New York: Holt, Rinehart, & Winston.

Beetham, H., & Sharpe, R. (2007). An introduction to rethinking pedagogy for a digital age. In Beetham, H., & Sharpe, R. (eds), Rethinking Pedagogy for a Digital Age: Designing and Delivering e-Learning. New York, NY: Routledge.

Dunn, D. S., McCarthy, M. A., Baker, S., Halonen, J. S., & Hill, G. W. (2007). Quality benchmarks in undergraduate psychology programs. American Psychologist, 62(7), 650-670. https://doi.org/10.1037/0003-066X.62.7.650

EDUCAUSE Center for Analysis and Research. (2018). The ECAR study of undergraduate students and information technology. Louisville, CO: ECAR. Retrieved from https://library.educause.edu/~/media/files/library/2018/10/studentitstudy2018.pdf?la=en

Johnson, D. W., Johnson, R. T., & Smith, K. A. (1998). Cooperative learning returns to college: What evidence is there that it works? Change: The Magazine of Higher Learning, 30, 27-38. https://doi.org/10.1080/00091389809602629

Kellum, K. K., Carr, J. E., & Dozier, C. L. (2001). Response-card instruction and student learning in a college classroom. Teaching of Psychology, 28(2), 101-104. https://doi.org/10.1207/S15328023TOP2802_06

Lilienfeld, S. O., Lohr, J. M., & Morier, D. (2001). The teaching of courses in the science and pseudoscience of psychology: Useful resources. Teaching of Psychology, 28(3), 182-191. https://doi.org/10.1207/S15328023TOP2803_03

Lilienfeld, S. O., Lynn, S. J., Ruscio, J., & Beyerstein, B. L. (2010). 50 great myths of popular psychology: Shattering widespread misconceptions about human behavior. New York: Wiley-Blackwell.

Prensky, M. (2001). Digital natives, digital immigrants. On the Horizon, 9, 1-6.

Stewart, P. C. (2015). This is your brain on psychology: Wireless electroencephalography technology in a university classroom. Teaching of Psychology, 42, 234-241. https://doi.org/10.1177/0098628315587621

Thibodeau, R. (2011). Design and implementation of an undergraduate laboratory course in psychophysiology. Teaching of Psychology, 38, 259-261. https://doi.org/10.1177/0098628311421325


