Society for the Teaching of Psychology: Division 2 of the American Psychological Association

E-xcellence in Teaching
Editor: Annie S. Ditta

  • 03 Jul 2018 10:53 AM | Anonymous

    Laura Chesniak-Phipps and Laura Terry  (Grand Canyon University)


    Faculty members at a Christian university are typically expected to integrate faith into the curriculum. Not only is this encouraged by the administration and consistent with the mission and vision of the university, but it is also expected by many students. A previous study suggested that students who attend Christian institutions anticipated that their education would prepare them for their future career and also strengthen their spirituality (Sherr, Huff, & Curran, 2007). Often, faith integration is defined at the university level and does not consider the students’ perception of this integration (Burton & Nwosu, 2003). As faculty at a Christian university, we were interested in learning from students how they perceived the Integration of Faith and Learning (IFL). The goal was to determine where IFL was apparent and how faculty could best include this element in their courses. Results of this investigation provided us with insight into student perceptions and offered an opportunity for us to share suggestions with institutions and professors interested in IFL.

    In order to examine this issue, students enrolled in Introduction to Psychology courses were asked to participate in focus groups. Focus groups were selected for data collection because they allow for follow-up questions to clarify, and gain a richer understanding of, participant responses. A total of 50 students participated; sessions lasted approximately one hour, and participants were asked five questions. Students first signed informed consent forms and were then separated into groups of 7-8. Questions were presented one at a time, and participants were asked to spend a few minutes responding individually. They were given small pieces of paper, told to write down one response per paper, and then asked to share their responses with their group. When ideas were shared, group members with similar ideas were to indicate that a theme had been identified. This allowed for organic coding within the small groups. Finally, the groups were asked to report their responses to the whole group so that themes could be identified and ideas could be grouped. After the focus group was complete, the researchers examined the categories that were created by the students and organized responses into logical themes.

    Professor Led Integration

    One of the main findings of the focus groups was that participants viewed instructors as being primarily responsible for faith integration. Participants also reported that they experienced faith integration in some classes but not in others. This suggests that while instructors are seen as primarily responsible for integrating faith, not all are doing so. It may be that some instructors do not feel comfortable integrating faith or are not sure how to go about doing so. These results support findings from past research (Dykstra, Foster, Kleiner, & Koch, 1995; Hall, Ripley, Garzon, & Mangis, 2009) indicating that professors play an integral role in the integration of faith in the classroom and should be considered the main source of IFL. From examining previous studies, and the current focus group work, it is clear that students see their professors not only as leaders in their field, but also as contributors to their development of faith and as a connection between faith and their specific discipline. These results suggest that universities should consider professors as primary agents for the integration of faith and should provide training and the necessary resources to support them in this endeavor.

    Integration across Disciplines

    It was not surprising that when participants were asked in which types of classes they saw IFL, the majority responded theology. However, they also reported IFL in science, technology, engineering, and mathematics (STEM), humanities, communications, business, and fine arts classes. Furthermore, with the exception of theology, participants also reported perceived difficulty integrating faith into these disciplines. This suggests that while classes focusing on religion can easily include components of faith, it is possible to integrate faith into all classes, regardless of the discipline. One reason for this may be that individual professors who teach these classes have a strong faith background. This also presents an opportunity to explore the curriculum and determine where faith can be integrated organically within each discipline, regardless of an instructor’s religious background. While some of these areas may be more challenging than others, participant responses indicated that integration is occurring, which suggests it is possible and can be successful.

    Solutions for Integration

    Because the responsibility for IFL rests primarily on the professor, training, resources, and materials may help to increase instructor knowledge and confidence. A standardized curriculum could also be developed to include the integration of faith into specific topics within the class. Instructors who are noted as being skilled at integrating faith can be consulted when developing curricula. Dykstra et al. (1995) identified a level of integration at which courses can be designed with the inclusion of IFL activities. Incorporating elements of faith into courses through a centralized curriculum would ensure that, despite individual differences among instructors, students receive the same types of integration. Universities that do not adopt a fully centralized curriculum but want to integrate faith seamlessly may choose to incorporate assignments or discussion questions that can be used by all faculty members. This would help ensure that students receive the integration they desire, regardless of who teaches the course.

    Past research suggested that discussion is one of the most common types of integration (Hall et al., 2009) and that it offers a path for students to process their personal views (Dykstra et al., 1995). In the focus groups, only a small number of participants reported that class discussion was where they experienced IFL. Some focus group participants referred to the main discussion forum in the online learning management system as a place where discussion could be used. One option could be to have instructors incorporate pre-written discussion questions into the learning management system that focus on IFL. For instructors who are not comfortable with IFL in their classroom, pre-written discussion questions that tie into the content of the course could provide an avenue to incorporate and discuss faith. Professors who are less comfortable integrating faith, or who do not have the personal experience to do so, can still provide IFL for their students.

    Students indicated several ways in which faith could be integrated into the classroom and campus experiences. Examples included prayer and personal expression that demonstrates the fruits of the spirit. Prayer in the classroom can be achieved in a variety of ways, from professor-led prayer to students taking turns leading prayers, or through online discussion forums. One option professors might choose is a prayer forum in their learning management system. This provides an opportunity for students to share their prayer requests and to pray for each other.

    IFL is an important part of the curriculum at Christian universities, and understanding student perceptions of integration can lead to more effective strategies. As faculty members, we strive to deliver a quality education to our students and support the mission and vision of our university. Understanding our students’ perceptions allows us to examine what is being done well and what can be improved upon. While this study focuses on IFL, important lessons can be derived for other learning institutions. In higher education, it is important to understand the curricular objectives that are being delivered to students.

    Individual differences in instruction can be leveled by providing a standard curriculum to ensure that all graduates, regardless of their program of study, class modality, or instructor, receive a quality education.


    References

    Burton, L. D., & Nwosu, C. C. (2003). Student perceptions of the integration of faith, learning, and practice in an educational methods course. Journal of Research on Christian Education, 12(2), 101-135.

    Chu, J. (2005). Faith and frat boys. Higher Education Research Institute, 165(19), 48-50.

    Dykstra, M. L., Foster, J. D., Kleiner, K. A., & Koch, C. J. (1995). Integrating across the psychology curriculum: A correlation review approach. Journal of Psychology and Theology, 23(4), 278-288.

    Hall, L. E. M., Ripley, J. S., Garzon, F. L., & Mangis, M. W. (2009). The other side of the podium: Student perspectives on learning integration. Journal of Psychology and Theology, 37(1), 15-27.

    Sherr, M., Huff, G., & Curran, M. (2007). Student perceptions of salient indicators of integration of faith and learning (IFL): The Christian vocation model. Journal of Research on Christian Education, 16(1), 15-33.



  • 03 Jun 2018 9:57 PM | Anonymous

    Suzanne Wood (University of Toronto)

    At large research universities, undergraduates can get lost in the shuffle. Both logistically and economically, it is more feasible to hold lecture-style classes and to leave undergraduate lab experiences to those who are selected for research assistant positions.  However, this places a significant strain on already overburdened research faculty and their labs and leaves many qualified undergraduates in the lurch.  These undergraduates may be curious about research but may lack the confidence to approach faculty members for open research opportunities (see Bangera & Brownell, 2014 for discussion). Running laboratory courses can meet the needs of these students and lead to many of the same outcomes as achieved through individual research placements in labs, including improvement in scientific writing, computational, and technical skills (Shapiro et al., 2015). Undergraduate research experiences have also been found to bolster student interest in science as a career (Lopatto, 2007).

    One of the most exciting components of my position in the University of Toronto Psychology department was the directive to update the small (maximum enrollment of 20) psychobiology (behavioral neuroscience) undergraduate lab course with new, innovative methods. While I was fortunate that my department was already footing the bill for a massive renovation of the dedicated lab space, including the purchase of lightly used equipment, the accompanying course development was left entirely in my hands. To best utilize these resources, I set about designing a course that would leverage the power of high-impact learning practices, which can lead to increased student engagement and retention (Kuh, 2008). These types of learning practices are highly encouraged at the University of Toronto and are documented periodically as part of the National Survey of Student Engagement (University of Toronto, 2014). These practices can be harnessed for many types of courses, but they are particularly well suited to a laboratory course setting.

    High-Impact Practices

    The key elements of high-impact practices were integrated into the course redesign as follows:

    Undergraduate Research

    While protocols for this course were established and approved ahead of time, students had the rare opportunity to gain hands-on experience with rodents before deciding to join a lab or apply for graduate school. In addition, while neural structures had been the focus of tissue staining techniques in previous iterations of this course, I updated the curriculum to include analysis of neural activity (c-fos staining). Experience with these types of techniques is critical for undergraduates hoping to pursue behavioral neuroscience graduate work today.

    Collaborative Projects

    Experiment days required participation from all students. Students were also encouraged to work on statistical analyses together, and time in class was allocated to help facilitate this collaboration. Only the writing assignments were completed independently. This distribution of work was an attempt to more closely mimic actual research settings (significant collaboration), while providing assignments for individual marks (written assignments).

    Writing-Intensive Course

    Students submitted multiple writing assignments throughout the semester. Time was devoted in class to one-on-one faculty-student or teaching assistant (TA)-student meetings to discuss each writing assignment. The manner in which students addressed their own weaknesses throughout the semester was considered when assigning grades. This type of intensive feedback was only realistically possible with a small instructor-to-student (and TA-to-student) ratio.

    Career Exploration in the Community

    Preferences in enrollment were given to third year research specialists (high-achieving students who were interested in research, typically with intentions to attend graduate or medical school). With this in mind, I focused on what they would need to know after graduation, either when applying to jobs or graduate programs. I worked with the Career Centre to schedule a visit for students to a local, off-campus neuroscience laboratory during regular class time. To ensure the greatest learning outcomes, I scheduled a preparation session hosted by the Career Centre during class the week before the trip, as well as a debriefing session the week afterward. Students were encouraged to learn not just about the “traditional” research career paths, but also about paths in “non-traditional” science roles (e.g., fundraising, human resources, infrastructure, vivarium management, etc.).

    Student-Faculty Interactions

    The course offered undergraduates the rare opportunity to interact directly with a faculty member on a weekly basis in a small group setting. In my department, third and fourth year courses tend to enroll 50 students, with a small number of seminars offered with maximum enrollments of 20. This small group format allowed for many informal discussions regarding topics in related research areas, career paths, etc. The TA for the class was also tapped for information regarding graduate school applications, life as a graduate student, and other related topics.

    Student Reactions

    The university-wide, online course evaluation tool gathered opinions from students over the past two years concerning the perceived quality of their educational experience in this lab course. The responses were overwhelmingly positive. Below are sample quotes from the anonymous student feedback concerning the high-impact learning course components:

    “This lab course is extremely novel and interesting…I’ve never learned anything this stimulating and applied in any of my other courses.”

    “I learned valuable skills that are rare for an undergraduate course.”

    “[The] personal feedback on papers was excellent and I saw a massive improvement in my scientific writing.”

    “Such a great course that is unique from most other courses at U of T.”

    “Why aren’t there more courses like this available to undergraduates?!”

    Notably, one student applied to a graduate program in Health Services Administration after completing this course. She ascribes this decision to the class field trip and hearing from one of the neuroscience institute’s employees about “non-traditional” career paths.

    Obstacles

    While the above components of this course have been successful, I would be remiss if I did not mention some of the significant hurdles faced when developing this course. Specifically, three main obstacles continued to rear their heads whenever I seemed to finally settle on an activity or experiment: time, money, and the lengthy commute of my students.

    Time

    One of the challenges in running this lab course was carving out the time to prepare. In contrast to a lecture-based course, a lab course involves preparation of not only learning objectives, content, assignments, and the like, but also logistics such as obtaining the relevant ethics board approval, equipment set up and testing, federal approval for scheduled drug possession, piloting experiments ahead of time, etc. The departmentally assigned teaching assistant was only employed for the term, so, in preparation throughout the summer, I found myself working on tasks during the day that required business hour communication (e.g., federal drug approvals) as well as cognitively taxing jobs such as course design. I spent nights on more menial tasks such as setting up and testing equipment.

    To help offset some of the time burden during the following year, I applied for a small university grant (Advancing Teaching and Learning in Arts & Science; ATLAS) that supported a TA to assist throughout the year in the design, implementation, and piloting of new protocols. The TA was invaluable in offsetting some of the burden of the background work involved in this course, leaving me the time to handle course design logistics. The TA shined in the development of the brain histology protocol and the listing of the necessary equipment and supplies to run it. He completed this task with gusto, leaving no detail out, and saving me countless hours.

    In addition, recruiting help from the Career Centre was essential for setting up the field trip component of the class. They were a source of enthusiastic support during both terms. Again, this collaboration saved me an enormous amount of time in scheduling logistics.

    Money

    Tied in closely with time constraints are money issues. As I mentioned above, an in-house grant helped me greatly, not only for the TA assistance outside of the regular term, but also for purchasing critical pieces of small equipment to complement what was already being supplied by the department. Specifically, I added molecular biology techniques that reflect common practices in today’s behavioral neuroscience research (it is no longer sufficient to focus exclusively on animal behavior; genetic, histological, and molecular biological techniques are also expected). Equipment such as pipettes and glassware was not part of the lab renovation but was critical to the implementation of these new protocols.

    If you teach at a smaller institution, or if no in-house financial support is available, consider recruiting undergraduate volunteers who were superstars in previous iterations of the class. While you will benefit from their assistance, the students will benefit enormously from this experience: they will see the setup of the lab from the “inside” perspective and will solidify what they learned in the class. This type of leadership experience will set them apart from their fellow students when applying to graduate school or employment positions upon graduation. In general, undergraduate teaching assistants have been found to benefit greatly from their experiences with the class (e.g., Schalk, McGinnis, Harring, Hendrickson, & Smith, 2009).

    Large, Commuter Campus

    At a primarily commuter campus, the design of the class is constrained to events taking place during class hours only. This is particularly challenging in a psychobiology class where behavioral animal experiments are used. Extended learning tasks (e.g., Morris water maze, radial arm maze, etc.) are simply out of the question. I selected tasks that could be run within a three-hour class session: an abbreviated version of object recognition, comparing rats’ performance on low-dose amphetamine with saline; and open field locomotion, comparing mice injected with diazepam, amphetamine, or saline. Brain tissue histology was performed over the course of several weeks, with tissue being frozen between sessions.

    Benefits can also be found with this type of situation. While students did not have the opportunity to run paradigms that required daily interactions with the rodents, having all laboratory work performed within class hours made this unique experience accessible to students who might not have the flexibility to participate in apprentice-style lab opportunities (e.g., those with lengthy commutes, jobs, or other time commitments; see Bangera & Brownell, 2014). In addition, I was able to leverage the urban location of the campus to coordinate a field trip within walking distance (see High-Impact Practices: Career Exploration in the Community section).

    Take Away Points

    While this piece focuses on a single course at a large research institution, the embedded lessons can be applied to many different settings:

    • 1)     Seek out and find help. Learn about the resources available to you, such as institutional funding and campus offices like the career center and the teaching and learning center. Also, look to TAs and undergraduates to help implement classes that are as technically burdensome as this one.
    • 2)     Know your students. Do your students commute, or do they live on campus? Are they 3rd and 4th year students, or are they just starting out? Considerations such as these can help guide your instructional design choices (although all could probably benefit from some instruction on scientific writing, as well as a basic stats review).
    • 3)     While new equipment is fun, it does not make a class. Take advantage of what you have access to, but know that your job is not done once those boxes of new equipment and supplies have been delivered. Implementing high-impact practices can help to ensure important learning experiences for your students, regardless of sophistication of laboratory techniques.

    References

    Bangera, G., & Brownell, S. E. (2014). Course-based undergraduate research experiences can make scientific research more inclusive. CBE Life Sci Educ, 13(4), 602-606. doi:10.1187/cbe.14-06-0099

    Kuh, G. D. (2008). High-Impact Educational Practices: What They Are, Who Has Access to Them, and Why They Matter. Washington, DC: Association of American Colleges and Universities.

    Lopatto, D. (2007). Undergraduate research experiences support science career decisions and active learning. CBE Life Sci Educ, 6(4), 297-306. doi:10.1187/cbe.07-06-0039

    Schalk, K. A., McGinnis, J. R., Harring, J. R., Hendrickson, A., & Smith, A. C. (2009). The undergraduate teaching assistant experience offers opportunities similar to the undergraduate research experience. J Microbiol Biol Educ, 10(1), 32-42.

    Shapiro, C., Moberg-Parker, J., Toma, S., Ayon, C., Zimmerman, H., Roth-Johnson, E. A., . . . Sanders, E. R. (2015). Comparing the Impact of Course-Based and Apprentice-Based Research Experiences in a Life Science Laboratory Curriculum. J Microbiol Biol Educ, 16(2), 186-197. doi:10.1128/jmbe.v16i2.1045

    University of Toronto (2014). Results of the National Survey of Student Engagement. Retrieved on May 31, 2017 from http://www.provost.utoronto.ca/Assets/Provost+Digital+Assets/NSSE2014report.pdf


  • 01 May 2018 6:10 PM | Anonymous

    Karen Z. Naufel  (Georgia Southern University)

    Psychology sometimes has a public relations problem. People are skeptical of its science (Lilienfeld, 2012) and usefulness (Halonen, 2011). It is important that we teach others about the practicality and ubiquity of psychology. Teaching about these values is not limited to the classroom. Instead, if people are to learn about psychological science, we as instructors must extend our teaching beyond our academic borders. As others have said, we must teach to the community (e.g., Lilienfeld, 2012; Zimbardo, 2004).

    Over the past several years, I have had the privilege of teaching psychology in the community. The process is different from teaching students. Community members have more freedom in choosing what they want to learn. The technology available in the classroom is not always available in community settings. The chance to correct a misunderstanding of information is limited. Simply put, effective teaching in the community often requires a different set of skills than effective classroom teaching. In this essay, I present some tips for teaching in the community that I’ve picked up along the way. Although there are many ways to teach in the community, I focus on how to give lectures (or “programs,” as they are typically called).

    Tips for Getting Started

    Compared to students, community members have different incentives for learning material: They are not learning to ace tests or get good grades. Instead, they choose to learn when topics appeal to them. Therefore, it is crucial to identify topics that will appeal to a wide, non-academic audience. Identifying topics that will draw in such an audience can be tricky. If a program topic seems relevant and interesting, people come. If a program topic is too narrow, controversial, or academic, then community members may shy away from attending. Here are some tips for generating appealing program topics:

    • Pick topics that meet community needs. If people stereotype psychology as a field that only helps others with personal problems, then people are not likely to know how psychology could relate to them. Likewise, if psychology instructors aren’t connected with the community, then instructors also may not know what the community really needs.

    Identifying community needs comes from immersing oneself in the community. It can come from looking at local organizations’ webpages, daily conversations with people at the coffee shop, or a chat with a worker in the grocery store checkout line. Think about how psychology is connected to the issues that others bring up in these situations. Then, brainstorm program ideas that relate.

    • Teach only what you know. As you generate program ideas, remember the ethicality of teaching only what you know. The American Psychological Association’s Ethical Principles of Psychologists and Code of Conduct has specific provisions regarding making public statements [see Sections 2.01(a & c); 5 & 7]. Additionally, academic freedom does provide some license to talk freely. However, this freedom also comes with the responsibility of providing accurate information (Hunt, 2010). Sometimes, you may be invited (or tempted) to give a program on a topic outside of your area of expertise. In such instances, it is best to decline and instead refer the program to a knowledgeable colleague.

    • Reframe program titles so they don’t create reactance. As we know from our long familiarity with the confirmation bias, people look for information that confirms rather than disconfirms their beliefs (Nickerson, 1998). Therefore, a talk entitled “Spanking: Why It’s Not a Good Idea” will likely only draw a crowd of people who already agree with the premise. Those who spank their children, arguably those who need this information more, may avoid the talk altogether. A less direct title (e.g., “Making the Terrible Twos Less Terrible: Strategies for Raising Healthy Toddlers, Preschoolers, and Children”) may draw a broader, more receptive audience.

    • Rapport matters. Even with a snazzy title, it can be difficult to get an audience. In tight-knit or small communities, activities from newcomers or outsiders may be viewed with suspicion. Therefore, posting fliers about your program around town, creating a public Facebook event, or announcing it in a newspaper may work, but the resulting audiences may be embarrassingly small. (Can you imagine giving a talk to only one person? I can. It’s awkward.)

    Personally, the best experiences I have had in getting program gigs have come from connecting with people from the community (see Tip 1). Go to farmers’ markets, spin classes, and community events. While waiting for your coffee at the local shop, chat with another frequent customer. Join locally based Facebook groups or other groups, many of which can recruit audiences for you. As you foster these friendships, it becomes easier for you to tell them what you do, and easier for them to ask for and value your expertise.

    • Consider how your institution views these activities. Most likely, your institution will herald these activities as important service work. However, consider important policy and legal ramifications. Such service opportunities may also be considered consulting work in certain circumstances, even if your work is free. In these cases, institutions may limit the number of hours a faculty member can engage in consulting activities. Some institutions may require permission to use the university’s supplies, such as a laptop or printer, for these events. Others may fully cover you should you be injured while delivering a program, but may require that formal paperwork be filed beforehand.

    Tips for Developing a Program

    Creating a lecture is not the same as developing a program. Beloved teaching strategies like think-pair-share may seem odd in a community setting, and assigning readings beforehand may not be possible. Instead, an instructor will likely get one brief shot to deliver the information clearly and succinctly. To increase the likelihood that a program goes well, consider these tips:

    • Teach to the community, not to students. I remember a moment when I was discussing research with a community member. I used the word “altruistic,” a word with which the community member was unfamiliar. She then said, “You professors like your big words, don’t you?” At that moment, I felt the rapport between us plummet. I had reinforced a stereotype that academics are not connected to the outside community.

    Since then, I’ve aimed to be more mindful of my audience. Americans tend to read at an eighth-grade level or below, and a substantial portion of the population lacks basic reading skills (Literacy Project Foundation, 2017). Therefore, lectures prepared for a typical college-level psychology class may be too advanced for many community members, and it is important to adjust accordingly.

    To make it more likely that a program appeals to wide audiences, it’s wise to have people with a variety of educational backgrounds offer feedback on your program’s recruitment materials, content, and activities. Although it is intended for creating health materials, the Centers for Disease Control and Prevention’s brochure Simply Put: A Guide for Creating Easy-to-Understand Materials has transferable tips for delivering presentations to an audience with a wide range of literacy levels (Centers for Disease Control and Prevention, 2009). Additionally, reading-level calculation tools, such as the Flesch-Kincaid scale, can determine whether text (or a transcription of what one plans to say) is at an acceptable level. Many word processing programs, like MS Word, have such tools built in.
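
    For reference, the Flesch-Kincaid grade level that these tools report comes from a simple formula based on average sentence length and average syllables per word: grade = 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59. The short Python sketch below is only an illustration of that formula; it assumes a rough vowel-group syllable count rather than the dictionary-based counters built into word processors, so treat its output as approximate.

        import re

        def estimate_syllables(word):
            # Rough heuristic: count groups of vowels (y included); not dictionary-accurate.
            return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

        def flesch_kincaid_grade(text):
            # Standard Flesch-Kincaid grade-level formula:
            #   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
            words = re.findall(r"[A-Za-z']+", text)
            if not words:
                return 0.0
            sentences = max(1, len(re.findall(r"[.!?]+", text)))
            syllables = sum(estimate_syllables(w) for w in words)
            return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

        # Example: check a sentence you plan to say aloud in a program.
        print(round(flesch_kincaid_grade("Psychology is the scientific study of behavior and mental processes."), 1))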

    • Fair use rules for copyrighted material may be different. Do you have a favorite cartoon that you like to show to your classes? Is there a graph in a journal article that really illustrates a concept? The fair use principles that apply in academic settings are not necessarily the same as those that apply in community settings. To determine what media can be included in a program, consider how these media will be used. For instance, does the organization want to post your program’s handouts on their webpage? Will the organization disseminate your program’s materials to others? It is pertinent to review fair use policies to determine whether materials can be used.

    Some websites have materials that are free for public use. For instance, Pixabay.com has thousands of photographs available, and it does not require attribution or the creator’s permission to use. Other websites, such as the NOBA project (NOBAproject.com), have license agreements explaining how the material can be used and shared.

    • Plan for no PowerPoint. If you plan to use technology as part of the presentation and your program is off campus, remember that not all organizations have equipment for you to use. BYOT (Bringing Your Own Technology) may be an option. If you choose to BYOT, ask about the room setup prior to coming. Rooms can be too small for a projector, outlets may not be available, or the room setup may not be conducive to using technology. On one occasion, I was told a monitor with an HDMI cable would be available to hook up to my laptop. It was, but the monitor was much too small for everyone to see the graphics clearly. On another occasion, I was promised a projector. When I arrived, they had a projector, but no projector screen. Unfortunately, art occupied all the wall space, which meant I couldn’t project on those surfaces. Luckily, I had brought handouts so I could improvise on the spot.

    Although I love using technology in the classroom, I rarely use it anymore when giving programs to the community. Instead, I have found that giant Post-It® notes can be great for writing quick points or drawing quick visuals. Handouts, too, can provide a summary of key points without relying on the unpredictability of technology.

    • Be prepared to give programs of varying lengths. Instructors may be used to having nearly an hour or more to give a program. However, community programs vary drastically in time allotment. Though sometimes I have an hour or more to speak, I am usually asked to give shorter (10-20 minute) programs.

    Some programs take place during an organization’s regular meeting. Their regular meeting agenda may run long, which cuts into the program time. I have had to change the length of my program on the spot. Just as it is important to have an idea of what to cut from a lecture, it is also good to have an idea of what to cut from a program.

    If you find yourself with a tiny time limit, remember these rules: 1) emphasize a single main point, and 2) give participants specific steps for how to obtain more information afterward. The last step is particularly important in preventing participants from turning to internet searches that surface pseudoscientific and inaccurate information.

    Tips for Finishing up a Program

    • Assess your work. Techniques that work in classrooms may not work as well in the community. Alternatively, a novel approach in the community may inspire a new teaching technique for your classroom. If at all possible, chat with attendees after you give your program. Such chats can provide insight into whether and how they will use the information they learned. For longer programs and workshops, it is also acceptable to ask participants to complete a very brief survey about your talk. (You can do this for shorter programs as well, but it may impinge on your time limit.) The assessment aspect, whether formal or informal, is vital for improving your techniques for future programs.

    • Take experiences back to the classroom. Teaching community members can augment the quality of your own classes. Students often crave real-world application of material, and these experiences—unless proprietary—can provide examples to share with your students. Additionally, these experiences can foster the community relationships necessary to have successful and unique service learning opportunities. For instance, a program on creating customer satisfaction surveys for small business owners could transform into an indirect service learning project for students in a research methods course. To maintain a relationship with the community members following a program, the instructor could suggest having students work on the project as part of a course assignment.

    • Enjoy the reward. Though teaching students and the community may require different approaches, they do yield similar feelings of reward. When teaching either in the classroom or in the community, we are often providing the first glimpse of psychological science. In both cases, it is exciting to see those wide-eyed moments when people realize the extent to which psychology is valuable to them.

    References

    American Psychological Association. (2017). Ethical principles of psychologists and code of conduct (2002, amended June 1, 2010, and January 1, 2017). Retrieved from http://www.apa.org/ethics/code/index.aspx

    Centers for Disease Control and Prevention. (2009). Simply put: A guide for creating easy-to-understand materials. Retrieved July 24, 2017, from https://www.cdc.gov/healthliteracy/pdf/simply_put.pdf

    Halonen, J. (2011). Are there too many psychology majors? White paper prepared for the Staff of the State University System of Florida Board of Governance. Retrieved from https://www.cogdop.org/page_attachments/0000/0199/FLA_White_Paper_for_cogop_posting.pdf

    Hunt, E. (2010). The rights and responsibilities implied by academic freedom. Personality and Individual Differences, 49, 264-271. doi:10.1016/j.paid.2010.01.011

    Lilienfeld, S. O. (2012). Public skepticism of psychology: Why many people perceive the study of human behavior as unscientific. American Psychologist, 67, 111-129. doi:10.1037/a0023963

    Literacy Project Foundation. (2017). Staggering illiteracy statistics. Retrieved July 24, 2017, from http://literacyprojectfoundation.org/community/statistics/

    Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2, 175-220. doi:10.1037/1089-2680.2.2.175

    Zimbardo, P. G. (2004). Does psychology make a significant difference in our lives? American Psychologist, 59, 339-351. doi:10.1037/0003-066X.59.5.339



  • 01 Apr 2018 5:10 PM | Anonymous

    Kamil Hamaoui  (Westchester Community College)

    Empirical studies have established that the testing effect is an effective strategy for improving long-term memory (Brown, Roediger, & McDaniel, 2014; Roediger, Smith, & Putnam, 2011). In short, we can improve our ability to remember information, concepts, and skills when we test ourselves during learning. In terms of the stage model of memory, if we repeatedly practice retrieving a memory from long-term memory into working memory, it becomes more firmly consolidated in long-term memory and less susceptible to future forgetting.

    Most studies of the testing effect have been conducted in the lab, under carefully controlled, artificial conditions, which calls the external validity of the effect into question. However, in recent years, researchers interested in teaching and learning have examined the applicability of the testing effect to the classroom setting. The testing effect can be used in the classroom by administering quizzes on content that students have already learned, whether through reading, lecture, discussion, or some other activity. These quizzes can be multiple-choice, fill-in-the-blank, short-answer, etc. and may be administered at the beginning of a class meeting, at the end, or integrated throughout coverage of the content. Does the periodic use of review quizzes in the classroom lead students to better learn and remember course content?  Will students who are quizzed perform better on the comprehensive exams given after a block of material or at the end of the term?

    Findings from applied studies on the testing effect are mixed, but Nguyen and McDaniel (2015) present some general conclusions in their review of the existing literature. Quizzing does improve exam performance when the exam questions are the same or similar to the quiz questions. However, it seems that there is no improvement if the exam questions test on the same topic as the quizzes, but on different concepts.

    This suggests that if we want to make maximum use of the testing effect to improve student learning, we should quiz students on all the concepts we want them to learn. As any instructor knows, however, regardless of experience, this isn’t feasible. As it is, without any class time devoted to quizzing, we struggle with the issue of what content to cover in class, since we don’t have enough time to cover everything we want students to learn. This raises several questions:

    • Can quizzing serve a purpose beyond the testing effect?
    • Will having periodic review quizzes on some concepts motivate students to study outside of the classroom? If so, will the type of studying they do benefit their long-term memory of the material studied?
    • Does it make a difference if the quizzes are graded or ungraded? Will graded quizzes motivate students to study more effectively, leading to better long-term learning?

    In order to address these questions and get some answers for myself, I designed and conducted an experiment on the effects of different types of review quizzes on long-term learning in three sections of my General Psychology course at Westchester Community College. I administered periodic short-essay quizzes testing students’ (n = 75) understanding of specific concepts covered during the previous class sessions. Quizzes were scheduled and designated as counting towards the course grade (graded), not counting towards the course grade (ungraded), or potentially counting towards the course grade (pop). For the latter condition, a coin toss just prior to the quiz determined whether the quiz would be graded or not. A Latin square design was used to control for differences in the difficulty of topics and for order effects. Specifically, each quiz condition was assigned to a different topic (sensation and perception, learning, or memory) in each of the three sections, and each quiz condition was assigned to a different time in the term (first, second, or third) in each of the three sections.
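
    To make the counterbalancing concrete, the short Python sketch below shows one way a 3 x 3 Latin square can rotate the three quiz conditions across topics and sections. The specific assignment shown is hypothetical; it is offered only to illustrate the design, not to reproduce the exact square used in the study.

        # A minimal sketch of a 3 x 3 Latin square counterbalance. The rotation below
        # is hypothetical; it illustrates the property that each quiz condition is
        # paired with each topic (and, assuming topics are taught in this order, each
        # position in the term) exactly once across the three sections.
        conditions = ["graded", "ungraded", "pop"]
        topics = ["sensation & perception", "learning", "memory"]

        for section in range(3):
            row = conditions[section:] + conditions[:section]  # rotate by one per section
            assignment = dict(zip(topics, row))
            print(f"Section {section + 1}: {assignment}")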

    Unannounced practice tests consisting of short-essay questions were administered halfway through the term and at the conclusion of the term. These tests included questions on the same topics as the review quizzes, but on different concepts. I predicted that students would perform better on topics that had been preceded by graded or pop review quizzes than on topics preceded by ungraded quizzes, thinking that students would study those topics more in preparation for the quizzes.

    What did I find? There were no significant differences in test scores between the different quiz conditions. Evidently, the type of studying that students did in preparation for graded or potentially graded quizzes was not beneficial to their long-term learning relative to the type of studying, if any, that students did in preparation for ungraded quizzes. My guess is that most students simply read over their notes for a few minutes right before the quiz as they were waiting for class to begin. This might have been effective for performing well on the quiz, but it did not benefit their long-term learning any more than whatever preparation (probably none) they did for ungraded quizzes. As we know, reading and understanding what is being read in the moment is not the same thing as learning and remembering something in the long term. Also, “massed practice,” familiar to students as cramming, is not as effective as “distributed practice,” or spacing out one’s studying in smaller learning sessions (Brown, Roediger, & McDaniel, 2014). Ironically, at the end of the term, students reported that they thought the graded or pop quizzes were best for their personal learning because they studied more. This suggests that students do not have insight into the studying strategies that are required for long-term retention of course content.

    What else did students think about the different kinds of quizzes? Beyond their erroneous belief that the additional studying they did for graded or potentially graded quizzes compared to ungraded quizzes was good for their learning, students reported a strong dislike for the pop quizzes. They preferred predictability, either knowing that a quiz would be worth points or not. If it was worth points, they reported being more motivated to study and felt rewarded for their studying. If it was not worth points, they felt less anxiety and could focus on other classes.

    So what did I learn from this study? How will it inform my teaching? I learned that ungraded quizzes are the way to go. Making quizzes graded or potentially graded does not lead students to study in ways that benefit their long-term learning any more than making them ungraded, and many students experience increased stress from graded quizzes. In addition, making quizzes graded means you have to grade them, which can take considerable time depending upon the type of questions and the size of the class.

    On the other hand, using ungraded, in-class review quizzes has multiple benefits. If exams have similar questions to the quizzes or test on the same concepts, the testing effect will boost students’ learning and performance on the exams. And, a few studies have found that ungraded quizzes actually produce a stronger testing effect than graded quizzes (Khanna, 2015; Wickline & Spektor, 2011).

    In addition, with appropriate feedback, review quizzes can serve as a valuable formative assessment tool. Students can learn what they know and what they don’t know, and how their thinking and test-taking can be improved. If quizzes consist of short-answer or short-essay questions, after students write their responses, the teacher can ask students to share what they wrote and then evaluate the responses for students in class. Many criteria or intellectual standards (Paul & Elder, 2000) factor into the quality of written work. These include accuracy, clarity, precision, logic, depth, breadth, relevance, significance, and fairness. Criteria other than accuracy, which is whether the response is correct or incorrect, often make the difference between a “good,” “very good,” or “excellent” response.

    For example, let’s say that a student writes that evolutionary psychology is the “study of traits and what they do for us.” This is basically correct, but the response is of low quality. The wording “what they do for us” lacks precision and clarity. The wording could be improved by stating that evolutionary psychology is the study of how traits “function to improve our adaptiveness to the environments in which we live.”  The instructor can point out that adaptation, or a variant of the term, is a keyword that should be included in the definition. It could also be pointed out that the response lacks relevance to psychology, which is about behavior and mental processes. To make the response relevant, the wording “behavioral and psychological” traits should be included. Moreover, it’s not just about humans. To make the definition broader, non-human animals, which are studied by comparative psychologists, should be included as well. Taking the time to give this type of detailed feedback in class teaches students about critical evaluation, an important part of critical thinking. It also gives students clear expectations for how their written work on exams and assignments will be graded.

    One last benefit of using in-class review quizzes is that they can be used to incentivize attendance and create a more orderly beginning and end to the class session. If attendance is required for the course, papers students use to write their quiz responses can be collected and used to take attendance. If attendance is not required, quizzes can be offered as all-or-none extra credit. We know how much students love extra credit! If the quiz is given at the very beginning of class, students will be encouraged to come on time. Some students will inevitably arrive late, but they will trickle in quietly without causing a disruption to the learning environment. If the quiz is given at the very end of class, students will be encouraged to refrain from packing up until the class is officially over. No more of that infernal shuffling as we get close to the end time of class!  

    I have been using in-class review quizzes for some time now, but in various ways and off and on in various classes. After completing this study and reviewing the relevant literature, I’m now more convinced than ever of their usefulness and of the value of leaving them ungraded. I recommend making stress-free, ungraded in-class review quizzes part of your teaching tool kit!

    References

    Brown, P. C., Roediger, H. L., & McDaniel, M. A. (2014). Make it stick: The science of successful learning. Cambridge, MA: The Belknap Press of Harvard University Press.

    Khanna, M. M. (2015). Ungraded pop quizzes: Test-enhanced learning without all the anxiety. Teaching of Psychology, 42, 174-178. doi: 10.1177/0098628315573144

    Nguyen, K., & McDaniel, M. A. (2015). Using quizzing to assist student learning in the classroom: The good, the bad, and the ugly. Teaching of Psychology, 42, 87-92. doi: 10.1177/0098628314562685

    Paul, R. W., & Elder, L. (2000). Critical thinking: Basic theory and instructional structures handbook. Tomales, CA: Foundation for Critical Thinking.

    Roediger, H. L., Smith, M. A., & Putnam, A. L. (2011). Ten benefits of testing and their applications to educational practice. In B. H. Ross (Ed.), Psychology of learning and motivation. San Diego, CA: Elsevier Academic Press.

    Wickline, V. B., & Spektor, V. G. (2011). Practice (rather than graded) quizzes, with answers, may increase introductory psychology exam performance. Teaching of Psychology, 38, 98-101. doi: 10.1177/0098628311401580



  • 01 Mar 2018 9:28 AM | Anonymous

    Ronald G. Shapiro

    Since most students who complete occasional psychology courses, and even most undergraduate psychology majors, will not enroll in graduate school in psychology or become psychology professionals, it is important to prepare these students for jobs in other fields. This article provides suggestions on how offering a non-majors psychology course in lieu of introduction to psychology for non-majors, making minor changes to other courses, providing different types of opportunities, and focusing recommendations can help prepare students for jobs in different fields.

    Non-Majors Psychology Course. One of the “facts” I learned in graduate school was that non-majors who earned an “A” in an introduction to psychology course, when asked to retake the final exam a year later, did not pass it (Sidney L. Pressey study reported by David Hothersall in History and Systems class, Ohio State University, circa 1977). This fact has had a huge impact on my thinking. If people aren’t going to remember it, why teach it? One might argue that it is easier to relearn material. True, but non-majors are not very likely to do this. Instead, I would recommend making a list of those items you really want non-majors to remember five years after the final exam and teaching those materials, and only those materials, to undergraduate non-majors. Be thorough in teaching those materials. Teach them in a variety of contexts. One way to do this would be to offer a non-majors psychology course. Structure the non-majors course in ways that students might use the material (rather than as we structure the field with our specialties). Topics might focus on how to use psychology:

    ·       In society (separating “fake news” and “alternative facts” from science);

    ·       In marketing and advertisement;

    ·       In working with others;

    ·       In structuring a work environment;

    ·       In understanding how a person develops from birth through death; or

    ·       As a potential consumer of psychological services.

    This structure would help students better use the materials and see how what’s being taught might be helpful to them. In this restructured course, remember to teach only what you want the students to remember five years after the final exam.

    In Today’s Courses. Explain applications, and have students complete numerous projects applying whatever you teach to real-world problems. If the material you teach is basic research that is so cutting-edge that there are no applications for it yet, have the students participate in projects which help them to think about how the material might be used to change lives a year, a decade, or a generation from now. This may require teaching less material, but in more depth. Show students how to become a “citizen expert” (if not a scientist), continuing to follow up on these projects throughout life.

    Providing Advice to Students. Truly understand the student’s objectives (and the objectives of the person paying for the student’s education) before offering advice. Early in my career I would have advised a student that their primary objective in college is to learn all that they can from their academic departments and that everything else is secondary. For some students this is truly the case, and I would recommend it today. For example, I have encouraged many high school students to meet faculty on their visits to college campuses and figure out how they can become involved in their research from freshman week onward. For other students, I would today argue that their best bet is to lead a very balanced life. The extracurricular activities, friendships formed, internships, and other experiences might be more valuable to them than what they learn in their academic departments. Encourage these students to take advantage of the numerous benefits (e.g., regular access to faculty, internship programs) that are harder to obtain without student status. Recommend that students learn as much about business as possible by studying I/O psychology as well as completing courses in business. Also, recommend that students learn as much about technology as their interests allow, because more and more positions will require knowledge about technology.

    Producing a Resume. You may wish to help your students prepare their resumes. Resumes for industry are vastly different from academic resumes or CVs. An industrial resume needs to ROAR (be Results Oriented and Relevant). In addition to being much shorter than an academic CV, it needs to show a potential recruiter and a potential hiring manager, in just seconds, why this applicant is better than the numerous others applying for the same job. A resume that shows real results, and that demonstrates the applicant took the initiative to show how they would apply their knowledge and experience to meet the specific employer’s needs, is most beneficial. Keywords may be important for the recruiter. Showing real results (rather than job responsibilities) in a way that demonstrates to a hiring manager how those results translate into action is critical. In response to the frequently asked question “How long should a resume be?”, the answer is: long enough that the person reading it becomes more enthusiastic about the candidate with every sentence, and not so long that it bores them with redundant or irrelevant detail. Providing the names of faculty members (e.g., “worked in Professor Smith’s lab”) is only helpful if the reader is likely to know or have heard of Professor Smith. References would not normally be included on a resume (to protect faculty from random calls), and the words “References Furnished Upon Request” should never be included, because the point is obvious and the phrase is somewhat insulting to the reader (it says, “I do not trust you with the names of my references”).

    Writing Letters of Recommendation. You are writing a letter of recommendation, not a performance evaluation. Your job, should you choose to accept it, is to sell the student to prospective employers by pointing out his or her strengths and why the potential employer will be better off with this student (as opposed to someone else) on their team. Before deciding if you can do this (unless you know up front that you cannot), review the student’s resume and ask the student for a list of content you might include in the letter. If you cannot use the content, explain to the student what you can do for them in a letter and suggest that there are probably others who can do a better job for them. Don’t “kill the student with faint praise.” Don’t discuss the student’s weaknesses or areas for improvement.

    The Interview. Help your students to be able to communicate with potential colleagues, managers, people familiar with their work, and people not familiar with their work. In an industrial interview, applicants may meet with many people, including recruiters, potential managers, and colleagues. Be sure your students can communicate their research as well as other topics effectively. They should be able to explain their work (emphasizing their own contributions and differentiating them from the work of others) in one minute, five minutes, ten minutes, or a full-length presentation, and they should do so within the time allocated, leaving the listener engaged, excited about the topic, and able to see how the applicant would be the best fit for the organization. One way to do this is to show how their research fits into the company’s mission and requirements. I might add that the purpose of the interview is to determine whether there is a good fit between the candidate and the position, for both the applicant and the company. Accordingly, the applicant should be prepared to ask meaningful questions that will help them decide if the position is a good fit for them, explain how they will be a real asset to the specific company, and demonstrate a thorough understanding of the company and enthusiasm for being part of it.

    Decision Making. Businesses need to get products to market in a timely fashion. Thus, decision making is simply different from decision making in academia. In academic basic research one might hold to a standard of p < .05, p < .01, p < .001, etc. In industry, decisions may be made with virtually no evidence (depending upon the industry). If an employee is 50.01% confident in a decision based upon knowledge and research, they should be prepared to make a recommendation, as the recommendation is based upon some knowledge. Depending on the circumstances, they should also be prepared to qualify how confident they are in the decision. Rather than using p values for decision making, corporate executives may be more likely to use the 80/20 rule: you can accomplish 80% of what you want to do with 20% of the effort, so stop the process and go when you are 80% confident. You can help students understand this important distinction.
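    One way to make the contrast concrete for students is with a toy illustration like the sketch below. It is my own hypothetical example, not drawn from the essay or any particular company; the function names and thresholds are assumptions chosen only to mirror the p < .05 and 80%-confidence rules just described.

```python
# Hypothetical illustration only: an academic significance rule versus the
# 80/20-style confidence rule described above.

def academic_go(p_value: float, alpha: float = 0.05) -> bool:
    """Academic basic research: act only when the result is statistically significant."""
    return p_value < alpha

def business_go(confidence: float, threshold: float = 0.80) -> bool:
    """Business: stop the process and go once you are roughly 80% confident."""
    return confidence >= threshold

print(academic_go(p_value=0.09))       # False: p = .09 fails the .05 standard
print(business_go(confidence=0.82))    # True: 82% confidence clears the 80% bar
```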

    Deadlines. Deadlines are critical in business… far more so than in academia. They are real. No matter how thorough a contribution is, if it is late it may be totally useless. There may be some circumstances in which a late contribution is acceptable, usually when an even more critical process has been delayed, but the odds of this are minimal. The academic practice of deducting points for late work really doesn’t apply to much in business. A recommendation delivered a day or a week late is not, for example, 80% or 90% as good as a recommendation delivered in a timely fashion. A more realistic way to make decisions about accepting late work would be to shuffle a deck of cards after the late work is completed and draw a card off the top. If it is, for example, an ace (about a 1-in-13 chance), accept the work. If not, don’t.

    Oral Communications. Communicating in business is simply different from communicating in school. For example, I learned a very bad habit in graduate school: asking questions to show that I understood the work and to expose defects in a presenter’s thinking. One of my best managers ever pointed this out to me. His recommendation was to: 1) ask my questions only after everyone else had finished and only if my question had not already been asked, and 2) ask questions only for clarification. Otherwise, address the questions with the presenter offline. Be sure that your students understand this important distinction.

    Written Communications. In academics we tend to write long journal articles explaining numerous details about our work. In industry, a brief executive summary is the more important means of communication. Executives trust that we know how to do our work, and we may not need to demonstrate to them how we derived our results. When sending written communication, keep the receiver in mind and anticipate their schedule, frame of mind, and organizational style (i.e., details versus quick summaries). Chances are that an executive will be very busy, rushed, and stretched thin, in which case having results and next steps up front will go a long way. Keep thorough lab notes. Depending on what your executive team expects, write the detailed report as backup or else skip it altogether. In my first report on a study I did at a major corporation, two of us were presenting. My colleague was to present part 1; I was to present parts 2 and 3. Somehow, when he finished, I went right into part 3, skipping part 2 entirely. No one cared that the details were left out. Indeed, the comments I received from my client were completely complimentary… that my department had learned how to present more concisely.

    Research Involvement. Offer your students an opportunity to work with you on research. This will help them develop valuable, demonstrable skills. Be sure that they can explain what the research was all about, their role in it, and how the research was better because of their participation (as opposed to that of another person). Be sure they can explain this very succinctly as well as in detail.

    Perception of Degree Value. I’ve heard professionals, even a vice president in a major corporation, say “I was a psychology major and it was useless to me. It did not help me get a job.”  That statement may be true. I did point out to her that while the degree may not have helped her secure her first position with the business, what she learned probably helped her to advance very quickly from an entry-level position to a high-level executive position. She agreed. My recommendation here is to explain clearly to your students what a psychology degree may and may not do for them in the business world, ideally when they are first considering the major. Explain this at the beginning of the semester for each course. Explain again, at the end of the semester, how the content should help them. In between, assign work that will help the students explain how the content might apply to the business world.

    Seminars. Invite alumni who went into industry 1, 5, 10, and 20 years ago to offer seminars at your school showing how their degrees have helped them and how current students might apply theirs.

    Internships. Completing one or two successful internships or co-ops can be extremely valuable for students as a learning experience. If they perform well, it may also be the key to having a great job waiting for them on graduation day.

    In summary, I would say that a psychology major can be an extremely valuable tool to help a professional throughout their career if they make the most of it by becoming deeply involved with their department, research, course work, and internships. If, on the other hand, they focus on taking mostly large lecture courses to meet the minimum degree requirements, they will minimize the value of their degree.

     

    Author note: I would like to thank Industrial Consultant Dr. Margarita Posada Cossuto for helpful comments.


  • 01 Feb 2018 9:27 AM | Anonymous

    Jennifer A. Oliver (Rockhurst University)

    The use of case studies is a common active learning strategy employed in psychology. Case learning is useful for developing critical-thinking skills (Krain, 2010), and for increasing students’ motivation and interest in course material (McManus, 1986a; McManus, 1986b). Researchers have described many positive outcomes of using case studies. These include helping abstract theoretical information become concrete, facilitating understanding; reinforcing course concepts as students analyze, infer, and examine relationships (Graham & Cline, 1980); and integrating students’ learning as they incorporate theory into practice and make practice integral to theory (McDade, 1995).

    But most of the work examining the use of case studies uses pre-written cases. While I wanted to use cases in my Psychology of Disability course, the only cases that I could find focused either on abnormal psychology or on special education, and neither area was a good fit for this course. So, I decided to have students write their own cases. Few studies have examined having students write their own cases. Student-generated case studies have been used successfully at the undergraduate level in business and science, as well as in medical training (Yurco, 2014). In fact, Yurco reported that when students created their own cases in an introductory neurobiology course, they developed greater confidence, ownership of the learning process, a deeper understanding of the material, and improved critical thinking skills. McManus (1986b) reported that having student groups compose a problem-focused case and generate potential solutions to the problem in the case helped students consolidate course concepts in an adolescent psychology course.

    In this essay, I describe an applied project that I use in my undergraduate Psychology of Disabilities course, along with information on students’ performance and their views of the project. The Psychology of Disabilities is a 4000-level (junior and senior) class. All of our 4000-level courses require an assignment that involves an integrated literature review, but I also wanted to incorporate application into the course at a broader level than exam questions alone.

    The Project

    In the Psychology of Disabilities course, students chose a disability and wrote their own case study of an individual with that particular disability. The project included:

           An integrative literature review (minimum of 4 double-spaced pages) describing the disability, including psychological and behavioral characteristics, prevalence rate, developmental changes as an individual with the disability moves from childhood to adolescence to adulthood, (possible) causes of the disability, and at least three sociocultural factors chosen from: race/ethnicity, gender, socioeconomic status, and differences among regions of the world. Students had to cite at least eight credible academic sources, with at least two of the sources being empirical journal articles. They were allowed to use one internet source that summarizes information on the disability; however, that source had to be credible and written by professionals knowledgeable about the disability. I provided students with examples of both acceptable and unacceptable sources. Students turned in rough drafts of this section at midterm for feedback before the final project was due at the end of the semester.

           A case study of a fictional individual with that disability at two contrasting ages (minimum of 1 full single-spaced page per age). In keeping with the developmental focus of the class, students could use any ages between preschool and young adulthood (up through the early 20s). In their case study, students needed to apply the characteristics, described in the literature review, that an individual with that disability would exhibit at the chosen ages, and include a behavioral and/or verbal interaction between the individual and at least one other person.

           A complete description of two possible interventions/treatments that would be appropriate for their fictional individual, including the effectiveness of each intervention/treatment. In addition, students discussed which age from their case each intervention/treatment would be most appropriate for and why.

    An example of a case study and two additional completed projects were available for the students to use as models.

    Student Performance

    In order to determine how well students performed on the assignment, I evaluated the grades on each section of the assignment from 56 students (28 each, in Spring 2014 and in Spring 2015). The percentages of grades for each area of the assignment were as follows:

    Grade        Case Study    Literature Review    Treatment/Intervention
    A            58.9          60.7                 51.8
    B            32.2          17.9                 30.3
    C            5.4           16.0                 12.5
    Below C      3.5           5.4                  5.4

    Overall, students performed well on all three areas of the assignment, with at least 78% earning an A or B on each portion. Over 90% of the students did quite well on the case study portion. Common reasons students missed points on the case study were not providing an example of a behavioral and/or verbal interaction between the individual and another person, not including in the case all of the characteristics described in the literature review, or not meeting the length requirement. A higher percentage of students received a C or lower on the literature review portion than on the other two sections of the project, which was surprising because they had received feedback on a previous draft of this section. Common difficulties on the literature review included not fully describing the disability, choosing inappropriate sources (especially an over-reliance on internet sources), and a lack of integration of information from multiple sources. In addition, students often ignored the sociocultural factor choices specified in the assignment (race/ethnicity, gender, socioeconomic status, and differences among regions of the world) and came up with their own factors. For some students, this was the first psychology course that required a writing assignment this in-depth, which may explain the lower scores on this section. A few students also did not incorporate the feedback provided on their draft. If students lost points on the treatment/intervention section, it was typically because they either did not fully describe the treatment/intervention or failed to discuss its effectiveness. A few students did not discuss how the treatments/interventions related to the case study portion of the assignment.

    I also wanted to assess students’ views of the project. After students had turned in their final project, they completed a 3-item anonymous rating of the project. Each question was rated on a 5-point Likert scale (1=strongly disagree, 5= strongly agree). Students’ average ratings were quite high:

           Completing the case study project increased my understanding of disabilities, M = 4.32 (SD = 0.69, range 3-5)

           The case study project was a useful way to help me learn the class material, M = 4.29 (SD = 0.73, range 3-5)

           I rate the project as interesting, M = 4.38 (SD = 0.62, range 3-5)

    Students’ anonymous ratings of the case study project were quite high, and the lowest rating any student gave on any of the three questions was neutral (3). Thus, this project may be one way to get students more actively engaged in learning about disabilities. In addition to the high ratings, numerous students made unsolicited comments on the course evaluations that they enjoyed the project and that it helped them learn to apply course material.

    I was also interested in whether completing a big application project was related to student performance on application-based material on the exams. There are three exams in the course. Each exam has nine application-based multiple-choice questions. I give Exam 1 before students have completed any of the project. I give Exam 2 after students have completed a draft of the literature review but before they have written the case study portion. Students take Exam 3 after they have completed the final project. I looked at these application-based multiple-choice questions on each exam to see if there was improvement after completing the case study.

    Exam      Average % correct
    Exam 1    59.2
    Exam 2    60.4
    Exam 3    81.6

    Students, on average, performed better on the application-based multiple-choice questions after completing the case study. While there was no significant difference between scores on Exams 1 and 2, t(8) = -1.976, p = .084, there were significant differences between Exam 1 and Exam 3, t(8) = -3.086, p = .015, and between Exam 2 and Exam 3, t(8) = -3.117, p = .014.

    Performance on the application-based questions thus improved after completion of the case study project. Students may simply be getting better at application-based multiple-choice questions with repeated practice across the exams, but completing the case study project may also have helped them learn to apply information.
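    For readers who want to run a similar comparison on their own exam data, the sketch below shows one way such paired comparisons might be computed in Python. The reported degrees of freedom (df = 8) are consistent with pairing across the nine application items; the per-item percent-correct values in the sketch are hypothetical placeholders, not the actual course data, and this is not the author’s analysis code.

```python
# Hypothetical sketch: paired t-tests across the nine application items on each exam.
# The numbers below are made-up placeholders, not the data reported in this essay.
from scipy import stats

exam1 = [55, 62, 48, 70, 58, 61, 52, 66, 60]   # % correct per item, Exam 1
exam2 = [57, 60, 50, 72, 61, 63, 55, 64, 62]   # % correct per item, Exam 2
exam3 = [80, 85, 76, 90, 79, 84, 78, 83, 80]   # % correct per item, Exam 3

for label, a, b in [("Exam 1 vs. Exam 2", exam1, exam2),
                    ("Exam 1 vs. Exam 3", exam1, exam3),
                    ("Exam 2 vs. Exam 3", exam2, exam3)]:
    res = stats.ttest_rel(a, b)                 # paired-samples t test
    print(f"{label}: t({len(a) - 1}) = {res.statistic:.3f}, p = {res.pvalue:.3f}")
```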

    Suggestions for Using the Project in Other Psychology Courses

    While I designed this project for a specific course, it could easily be adapted for use in other Psychology classes, either with or without a literature review, such as:

    •       Abnormal Psychology–students pick (or are assigned) a psychiatric disorder and create a fictional individual with that disorder, describing the symptoms specific to the characteristics (age, race/ethnicity, etc.) of the individual. Students could also discuss a specific theoretical orientation toward treatment.
    •       Community Psychology–have students create a case about an individual, demonstrating how that individual is connected to his/her environments and how specific problems within the individual’s community have an impact on the individual.
    •       Developmental Psychology–have students develop a fictional individual and describe how that individual changes while passing through different developmental periods. For example, in a child psychology class, students could describe what that individual looks like in early childhood compared to middle childhood. Or students could use one developmental period (e.g., adolescence) and describe how physical, cognitive, and social-emotional development interact at that age for that particular individual.
    •       Health Psychology–students could create a case study about an individual with a specific health issue, discussing how the individual adjusts to and copes with the issue, what behaviors could protect the individual’s health, what behaviors harm the individual’s health, and how those behaviors could be changed.
    Concluding Thoughts

    I have found this project to be a fun, engaging way to help students learn about disabilities. It demonstrates that the majority of students can apply information and describe how characteristics of disabilities can change developmentally. In addition, students appear to enjoy the assignment and it actually is more fun to read and grade than traditional literature reviews.

    References

    Graham, P. T., & Cline, P. C. (1980). The case method: A basic teaching approach. Theory into Practice, 19(2), 112–116.

    Krain, M. (2010). The effects of different types of case learning on student engagement. International Studies Perspectives, 11, 291-308.

    McDade, S.A. (1995). Case study pedagogy to advance critical thinking. Teaching of Psychology, 22(1), 9-10.

    McManus, J.L. (1986a). “Live” case study/journal record in adolescent psychology. Teaching of Psychology, 13(2), 70-74.

    McManus, J.L. (1986b). Student composed case study in adolescent psychology. Teaching of Psychology, 13(2), 92-93.

    Yurco, P. (2014). Student-generated cases: Giving students more ownership in the learning process. Journal of College Science Teaching, 43(3), 54-58.


  • 15 Jan 2018 4:43 PM | Anonymous
    Harwood, E.A., & Marsano, M. (Rivier University)

    Teaching in the age of millennial students is a challenge that should be embraced by all faculty, but what does this entail? Present-day students have grown up alongside technology as a basis for communication and understanding. Termed “digital natives” by Marc Prensky (2001), millennial students spend a great deal of time communicating through technology and are used to having information at their fingertips. Sending an average of 100 texts a day (Lenhart, 2012), the millennial student expects a near-immediate response to comments and can easily find the answer to a question by asking Google. Because millennials have a completely different experience with information than previous generations, especially the ease with which it can be accessed, students may wonder why we don’t instantly respond to email or provide our lecture notes before class (van der Meer, 2012). Taking notes may seem archaic and pointless if material is always available. Nevertheless, teaching students the skills necessary to navigate a surplus of information and helping them recognize the importance of quality over quantity are now essential components of college curricula.

    How many times have your students asked you, “Is this going to be on the test?” Although this may seem an annoying question, students may be searching for clues about the essential concepts of the class. Main points that are crystal clear to us may not be as clear to our students (van der Meer, 2012). As experts in our field, we have already created our own organizational frameworks for the concepts we teach. We have formed deep, complex connections that have helped us master the material and make it seem easy for us to understand, while it may remain difficult for our students (Ambrose, Bridges, DiPietro, Lovett, & Norman, 2010). How can we scaffold our “expert” frameworks so that our students can build their own connections among course concepts and past experiences? In this essay, we describe several teaching techniques for creating these frameworks, from the way we encourage effective note-taking to the way we speak and incorporate multimedia.

    Why do students struggle with note-taking? Effective note-taking requires extensive cognitive resources, especially working memory capacity (Stefanou, Hoffman, & Vielee, 2008). Listening to the professor while simultaneously writing notes is difficult for many students (van der Meer, 2012). Differences in working memory resources may put some students at a distinct disadvantage depending on the types of notes they take (Bui, Myerson & Hale, 2013). Students with documented and undocumented learning disabilities may also face impediments. If the cognitive load is too great, students may not be able to contextualize or personalize the notes (Stefanou et al., 2008). Some may furiously write down everything you say, while others may copy down only what’s on the PowerPoint slides. Others may just sit back and wait until you put the slides online.

    Nevertheless, writing an idea down can help with long-term retention (Bui et al., 2013). Writing about a concept necessitates active recall and allows the formulation of clearer thoughts and more connections (Bui et al., 2013). Is it better to attempt to transcribe a lecture or to take more condensed, structured notes? While transcribing lectures on a computer may initially help with recording more notes and with immediate recall of facts, taking organized notes produces more durable retention after a 24-hour delay (Bui et al., 2013). However, when students are allowed to study their transcribed lectures, recall is superior, especially for those with lower working memory capabilities (in a 24-hour-delay condition involving transcription of an 11-minute lecture; Bui et al., 2013). The attention necessary to transcribe a full lecture was not tested; however, this research (Bui et al., 2013) once again reminds us that students differ in their capabilities, and what works for one may not work for another.

    Brief, targeted interventions can improve note-taking. Nakayama, Mutsuura, and Yamamoto (2016) provided students with two short instruction sessions on note-taking techniques, once at the beginning and again at the mid-point of a course, which included examples of good notes. This instruction increased student metacognition with regard to note-taking and improved the quality of notes over the course of the semester. Deliberately reviewing and restructuring notes can significantly improve grades as well (Cohen, Kim, Tan & Winkelmes, 2013). For example, outlining, summarizing, and drawing connections between different concepts require active engagement and lead to better test performance than review alone (Cohen et al., 2013).

    Another technique for note-taking that utilizes scaffolding is directed notes (Harwood, 2016).  Similar to a review guide for an exam, directed notes act as a review guide for that day’s class. Given at the beginning of the class period, directed notes consist of a list of questions and activities about that day’s topics with plenty of space for students to write in their answers. The following are examples from a few different courses:

    1.      Summarize how neurons communicate. How is it like firing a gun? Use the following terms in your summary: Action Potential, Absolute Refractory Period, Threshold, All or None Response.

    2.      Now that we’ve covered the functions of the different brain structures, create your own concept map using your notes.

    3.      What advice would you give our aging population given what you know about adult development?

    4.      Describe how each of the following individuals expanded our understanding of attachment.

     

    Name                                              Contributions

    John Bowlby

    Harry Harlow

    Konrad Lorenz                                     Imprinting; Critical Period

    Mary Ainsworth                                    Strange Situation Task

     

    5.      Lambert (1992) proposed 4 therapeutic factors that lead to client improvement. These are:

     

    The Big Four                                                 Variance                     Examples

    1. Client/Extra Therapeutic Factors

    2. Therapeutic Alliance

    3. Placebo, Hope, Expectancy

    4. Therapeutic Techniques

     

    6.      Write down your immediate reactions to this individual’s story of heroin addiction.

     

    As you can see, directed notes point students toward important concepts and assist them in creating their own examples and applying the material. When provided with guidelines, but not explicit notes, students are encouraged to form meaningful connections among the main ideas identified by the professor. Some important guidelines to keep in mind when creating directed notes for your course are to include different types of questions and response formats, leave plenty of space for students to write, and ensure that the directed notes are incorporated into the course in some way, whether through group work or as a test review. Psychology is so pertinent to everyday life that it offers many ways to make the material personally meaningful (“If you had to take an anti-depressant, which one would you take and why?”). Take advantage of this to further students’ critical thinking and interest in the field.

    While professors may be tempted to think that directed notes and guided notes are synonymous, there is a distinction between the two. Guided notes are an alternative to complete PowerPoint slides: handouts based on the slides with key information left blank, intended to encourage attendance (Barbetta & Skaruppa, 1995). Results among the college population are mixed on whether guided notes provide advantages for test performance above and beyond complete PowerPoint slides (Neef, McCord & Ferreri, 2006). Guided notes may be effective in presenting information, but they may fail to encourage students to make connections beyond what’s on the slides.

    Note-taking techniques are one way that scaffolding can be achieved in the classroom, allowing students to organize and detail their thoughts in written form. In addition, the way information is presented to students provides another opportunity for framing. For example, one can provide organizational cues during class, such as explicit language that differentiates main points (“Carl Rogers identified 3 core conditions for a successful therapeutic relationship. The first is unconditional positive regard…”). Further, one can use transitional language that encourages students to refocus on a new idea and cues the type of notes to take and their organization (“Now that we understand the structure of a neuron, let’s discuss how neurons communicate”) (Titsworth, 2004). We can also encourage students to elaborate beyond what we have explicitly covered, since the more information students add to their notes, the higher their scores on applied questions (Stefanou et al., 2008). For example, after explaining a concept or definition, I (Harwood) give students a few moments to write down their own examples (“Give an example of an empathic response to a friend’s problem”) and then have several share with the class. Five-minute writing prompts on a class topic can also foster generative notes and class conversation (“Based on what we’ve covered so far, why do you think heroin is so hard to quit?”).

    Using technology as a tool for creating conceptual frameworks in a course can also be effective with millennial students. PowerPoint slides are one possible method for scaffolding information and cuing students on how to organize their notes (Stefanou et al., 2008). With the integration of technology starting in K-12 schools (Ruggiero & Mong, 2015), students prefer, and may even expect, PowerPoint slides (Landrum, 2010). While students may want these slides before class (Babb & Ross, 2009; Landrum, 2010), and providing them may increase class participation for those who typically participate (Babb & Ross, 2009), it does not appear to aid test performance (Babb & Ross, 2009), final grades (Bowman, 2009), or the addition of new ideas to one’s notes (Stefanou et al., 2008). We find that for many students, providing slides before class can decrease interest and stunt conversation. My (Harwood) compromise is to provide slides after we have finished the chapter so students can fill any gaps in their notes.

    Finding the right balance between incorporating PowerPoint or other presentation media into a lecture while meeting students’ needs is a necessary consideration during lesson planning. Some believe that PowerPoint slides may condense the material too much, acting as “CliffsNotes” for the class, or preventing “big picture” thinking with its linear presentation (Kirova, Massing, Prochner, & Cleghorn, 2016).  It may be more effective to think of multimedia presentation technology as an extension of conveying main points and transitional language, rather than being the sole conveyor of information during a lecture. As much as we tend to lump students into the group of “millennials,” it is important to recognize their individual learning capacities and the need for a variety of teaching techniques.

    If you choose to use PowerPoint as a scaffolding technique, there are some common mistakes to avoid. First, don’t use your slides as “cue cards” (Gardner & Aleksejuniene, 2011). They should be made with the students in mind, rather than the instructor. When information is read off a slide, it decreases understanding by overloading working memory and inhibiting students’ opportunities to create connections; students also tend to lose interest quickly. Second, don’t overburden the slides with text (Gardner & Aleksejuniene, 2011; Stefanou et al., 2008). Providing too much information on a slide may result in students copying information rather than recording their own thoughts (Stefanou et al., 2008). Limiting the amount written on the slides gives students the opportunity to reason through information, which can promote generative learning. Third, integrating images with verbal descriptions is more effective for learning than text alone (Gardner & Aleksejuniene, 2011). Pictures really can say a thousand words! Seeing the devastating physical effects of methamphetamine use in a series of mug shots is much more powerful than reading about it or hearing a recitation of symptoms from the instructor. And fourth, incorporate video clips and other media that naturally appeal to the millennial student (Gardner & Aleksejuniene, 2011). Identifying the symptoms of cocaine abuse from a movie scene is an excellent way to elicit interest from students. Further seize the teachable moment by explicitly discussing how these images and clips relate to course concepts (“What properties of methamphetamine lead to these physical changes?” or “What symptoms are the characters showing that indicate stimulant use?”). Students may not automatically see these connections on their own.

    PowerPoint slides, organizational cues, and transitional language all aid students in creating their own class notes. Note-taking is a skill often overlooked by college educators, who assume their students already know how to do it. In a traditional lecture format, only a small amount of content is accurately captured in student notes (Kiewra, 1985). Considered more than just a “recording technique” (van der Meer, 2012, p. 13), taking notes and reviewing them helps students reconstruct what they have learned and makes it more personally meaningful. This actively engages the student with the material and increases retention (Bohay, Blakely, Tamplin, & Radvansky, 2011; Cohen et al., 2013; Kobayashi, 2006). Note-taking is a skill that will follow students long after they have left the classroom, giving them an advantage in the workplace by preventing mistakes and saving time.

    Regardless of the format an instructor chooses to use, it is important to remember that millennial students will benefit from exemplified note-taking and scaffolded frameworks of knowledge. Considering the technology-centered background of today’s millennial student, we would be wise to incorporate media presentations in the classroom because they garner more attention. However, this must be tempered with the understanding that our main focus must be on generative learning and helping students make meaningful connections. Inspired teaching is more than content delivery. It is student-centered and focuses on cultivating skills that lead to a successful life.

    References

    Ambrose, S.H., Bridges, M.W., DiPietro, M., Lovett, M.C., & Norman, M.K. (2010). How learning works: 7 research-based principles for smart teaching. San Francisco, CA: John Wiley & Sons, Inc.

    Babb, K.A., & Ross, C. (2009).  The timing of online lecture slide availability and its effect on attendance, participation and exam performance. Computers & Education, 52, 868-881. doi:10.1016/j.compedu.2008.12.009

    Barbetta, P.M., & Skaruppa, C.L. (1995). Looking for a way to improve your behavior analysis lectures? Try guided notes. The Behavior Analyst, 18(1), 155-160.

    Bohay, M., Blakely, D. P., Tamplin, A. K., & Radvansky, G. A. (2011). Note taking, review, memory, and comprehension. American Journal of Psychology, 124(1), 63-73. doi: 10.5406/amerjpsyc.124.1.0063

    Bowman, L. L. (2009). Does posting PowerPoint presentations on WebCT affect class performance or attendance? Journal of Instructional Psychology, 36(2), 104-107.

    Bui, D.C., Myerson, J., & Hale, S. (2013). Note-taking with computers: Exploring alternative strategies for improved recall. Journal of Educational Psychology, 105(2), 299-309. doi: 10.1037/a0030367

    Cohen, D. D., Kim, E., Tan, J., & Winkelmes, M. (2013). A note-restructuring intervention increases students’ exam scores. College Teaching, 61(3), 95-99. doi: 10.1080/87567555.2013.793168

    Gardner, K., & Aleksejuniene, J. (2011). PowerPoint and learning theories: Reaching out to the millennials. Transformative Dialogues: Teaching & Learning Journal, 5(1), 1-11.

    Harwood, E. (2016). A Strategy for Active Engagement in the Classroom. In W. Altman, L. Stein, & J. E. Westfall (Eds.), Essays from E-xcellence in Teaching (Vol. 15, pp.  1-4). Retrieved from the Society for the Teaching of Psychology Web site: http://teachpsych.org/ebooks/eit2015/index.php.

    Kiewra, K. A. (1985). Providing the instructor's notes: An effective addition to student notetaking. Educational Psychologist, 20(1), 33-39. doi: 10.1207/s15326985ep2001_5

    Kirova, A., Massing, C., Prochner, L., & Cleghorn, A. (2016). Shaping the 'habits of mind' of diverse learners in early childhood teacher education programs through PowerPoint: An illustrative case. Journal of Pedagogy, 7(1), 59-78. doi:  10.1515/jped-2016-0004

    Kobayashi, K. (2006). Combined effects of notetaking/reviewing on learning and the enhancement through interventions: A meta-analytic review. Educational Psychology: An International Journal of Experimental Educational Psychology, 26(3), 459-477. doi: 10.1080/01443410500342070

    Landrum, R. E. (2010). Faculty and student perceptions of providing instructor lecture notes to students: Match or mismatch? Journal of Instructional Psychology, 37(3), 216-221. Retrieved from http://www.projectinnovation.biz/jip_2006.html.

    Lenhart, A. (2012, March, 19). Teens, smartphones & texting. Retrieved March 16, 2017, from Pew Research Center: Internet, Science & Tech Web Site: http://www.pewinternet.org/2012/03/19/teens-smartphones-texting/#

    Nakayama, M., Mutsuura, K., & Yamamoto, H. (2016). Students’ reflections on their learning and note-taking activities in a blended learning course. The Electronic Journal of eLearning, 14(1), 43-53.

    Neef, N.A., McCord, B.E., & Ferreri, S. J. (2006). Effects of guided notes versus completed notes during lectures on college students’ quiz performance. Journal of Applied Behavior Analysis, 39(1), 123-130. doi: 10.1901/jaba.2006.94-04

    Prensky, M. (2001). Digital natives, digital immigrants. On the Horizon, 9(5), 1-6.

    Ruggiero, D., & Mong, C.J. (2015). The teacher technology integration experience: Practice and reflection in the classroom. Journal of Information Technology Education: Research, 14, 161-178

    Stefanou, C., Hoffman, L., & Vielee, N. (2008). Note-taking in the college classroom as evidence of generative learning. Learning Environment Research, 11, 1-17. doi: 10.1007/s10984-007-9033-0

    Titsworth, B. S. (2004). Students' notetaking: The effects of teacher immediacy and clarity. Communication Education, 53(4), 305-320.

    Van der Meer, J. (2012). Students’ note-taking challenges in the twenty-first century: Considerations for teachers and academic staff developers. Teaching in Higher Education, 17(1), 13-23. http://dx.doi.org/10.1080/13562517.2011.590974

  • 18 Dec 2017 4:01 PM | Anonymous
    Help Sheet Content Predicts Test Performance


    Mark R. Ludorf and Sarah O. Clark
    Stephen F. Austin State University

    Readers of E-xcellence in Teaching know the importance of finding the best teaching methods and techniques to reach students. Although instructors rightfully seek to improve their teaching to enhance student learning, oftentimes too much focus is placed on enhancing “input” and not enough on enhancing the fidelity of “output.” That is, instructors should not only explore methods that make them better teachers, but also consider innovative methods for better measuring what students have learned.

    Professors regularly confront the challenge of teaching to a student population with diverse levels of academic ability. To address such diverse ability, instructors have implemented various pedagogical methods, many of which are time consuming and tedious. One method instructors have used to address diverse learning abilities is to allow students to access information during a test. Some instructors limit the amount of information that is accessible (e.g., to an index card or a standard sheet of paper), while other instructors allow access to an unlimited amount of information (i.e., “open book”).

    Ludorf (1994) allowed students to select the amount of information they could access on each of five statistics tests. Results showed significantly higher average test performance (72% versus 62%) when less information was accessed than when more information was accessed, a result consistent with previous findings (Boniface, 1985).

    During the last three decades, numerous researchers (e.g., Dorsel & Cundiff, 1979) have explored the role of help sheets (also known as cheat sheets or crib sheets) and how their use is related to test performance (Dickson & Bauer, 2008; Dickson & Miller, 2005; Hindman, 1980; Visco, Swaminathan, Zagumny & Anthony, 2007; Whitley, 1996), learning (Dickson & Bauer, 2008; Funk & Dickson, 2011), and anxiety reduction (e.g., Drake, Freed, & Hunter, 1998; Erbe, 2007; Trigwell, 1987). Overall, the results regarding help sheet use and these variables have been mixed.

    One aspect of help sheets that has received little attention is the relationship between the content of a help sheet and test performance. Most of the research cited above examined the relationship between test performance and whether or not a student used a help sheet. Only a few studies (Dickson & Miller, 2006; Gharib, Phillips, & Mathew, 2012; Visco et al., 2007) have explored how the specific content of a help sheet is related to performance.

    Dickson and Miller (2006) found significantly higher test performance when students used an instructor-provided help sheet compared to a student-provided help sheet. However, the result may be confounded, as help sheet condition may have varied systematically with the amount of studying students did. Visco et al. (2007) examined student-generated help sheets and concluded that students likely need additional direction on what content to include on a help sheet in order to enhance performance. Finally, Gharib et al. (2012) examined the quality of students’ help sheets and found a reliable, positive relationship between the quality of the help sheet content and test performance, where quality was measured by rating each help sheet for organization and amount of detail.

    To summarize the relevant research, the use of help sheets is not reliably or consistently related to student performance, learning, or anxiety levels. Moreover, help sheet quality appears to vary across students and such variation may explain the body of results. Thus, help sheet content should be examined more systematically.

    The current study provided a systematic exploration of whether characteristics of help sheet content (e.g., overall quality, inclusion of process information, density of information) were related to test performance. Results of the study may be used to provide students with guidance (Visco et al., 2007) when constructing a help sheet in order to enhance performance.

    Method

    Participants

    Participants (N = 21) were students enrolled in a required junior-level psychological statistics course. Other sections of the course were taught by different instructors; students chose to enroll in this section unaware of the assessment that would be conducted. A majority of the participants were women. No other demographic information was collected.

    Materials

    Students created a one-page 8.5 × 11 in. [21.6 × 28 cm] help sheet to use on each test. The help sheet could contain any information a student wanted to include and both sides of the sheet could be used. Students were informed that help sheets would be collected.  Both sides of each help sheet were scanned to create an electronic copy. All help sheets were returned when the tests were returned.

    Procedures

    Students were required to construct a help sheet for each test, though there was no requirement to use the help sheet. Based on informal observation during the test, all students appeared to use the help sheet to some degree.

    Tests in the statistics course were all problem based and were graded on a 100 point scale. Student help sheets were collected, scanned, and rated by two raters on the variables of interest below. Both help sheet raters were blind to students’ test performance at the time that the ratings were made.

    Variables of interest. Help sheets were evaluated on the following variables: Overall Quality (rated 4 to 0, with 4 being the highest quality); Verbal Process Information, i.e., instructions (1 = very informational, 3 = neutral, 5 = not very informational); Numeric Process Information, i.e., solved problems (1 = very informational, 3 = neutral, 5 = not very informational); Density of Information (rated in deciles, 10–100%); Organization of Information (1 = very organized, 3 = neutral, 5 = very unorganized); use of Color (present or absent); and Submission Order (ordinal position when the test was submitted).

     

    Results

    Analyses

    The analyses were based on students’ help sheets and test performance from a single test. Interrater reliability was computed for the two raters across the scales described above. Interrater reliability ranged from moderate to high, from .521 (Organization) to .978 (Density).

    Help sheet ratings from the two raters were averaged and then regressed against students’ test scores to determine which characteristics of the help sheets predicted test performance. Results showed that higher quality help sheets predicted higher test performance (b = 33.20, p < .001), as did lower density of information (b = -.35, p = .05). Moreover, higher verbal process scores were associated with lower test performance (b = 13.14, p < .01). None of the other variables were related to performance (p > .05).
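    For instructors who want to try a similar analysis with their own classes, the sketch below shows one way averaged help sheet ratings might be examined in Python. It is not the authors’ code: the reliability index is assumed here to be a Pearson correlation (the essay does not name the index used), and all ratings and test scores are hypothetical placeholders.

```python
# Hypothetical sketch: average two raters' help-sheet ratings and regress test
# scores on the averaged predictors. All values below are made-up placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rater1 = pd.DataFrame({"quality":        [3, 4, 2, 1, 4, 3, 2, 4],
                       "verbal_process": [2, 1, 4, 5, 1, 3, 4, 2],
                       "density":        [60, 40, 90, 80, 30, 50, 70, 40]})
rater2 = pd.DataFrame({"quality":        [4, 4, 1, 2, 4, 3, 2, 3],
                       "verbal_process": [3, 1, 5, 4, 2, 3, 4, 2],
                       "density":        [70, 40, 80, 90, 30, 60, 70, 50]})
test_scores = pd.Series([85, 92, 61, 55, 95, 78, 66, 88])   # 100-point scale

# Simple inter-rater check: Pearson r per rated variable (an assumed index)
for col in rater1.columns:
    r = np.corrcoef(rater1[col], rater2[col])[0, 1]
    print(f"Inter-rater r for {col}: {r:.3f}")

# Average the two raters, then fit an ordinary least squares regression
averaged = (rater1 + rater2) / 2
X = sm.add_constant(averaged)          # adds the intercept term
model = sm.OLS(test_scores, X).fit()
print(model.params)                    # unstandardized b coefficients
print(model.pvalues)
```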

    Discussion, Conclusion and Recommendations

    Results of the preliminary analyses suggest that it is not enough just to consider whether a student has access to a help sheet or not, but rather a careful examination of the help sheet content is required. Similar to Gharib et al. (2012), overall quality of the help sheet was found to be a very important characteristic of the help sheet. As overall quality increased, test scores also increased.

    Density of information was also significantly related to performance. Although not the strongest effect, it appears that having less information on the help sheet predicted higher performance. Such a pattern is consistent with previous research (Visco et al., 2007) and may indicate that density of information is an inverse proxy for learning. That is, students who have a robust understanding of the material do not need to include as much information on the sheet and create a less dense help sheet. Conversely, students who do not have a robust understanding of the material must include as much information as possible to compensate for their lack of understanding, thereby creating a high-density help sheet.

    One surprising finding was that students who included more verbal process information (e.g., instructions on how to perform certain procedures) scored lower than students who included less of this information. Similar to the density argument above, it could be that students who included more verbal process information did so because they were not comfortable completing such problems without it.

    Finally, in examining the help sheet research there are two notable issues. First, help sheets do not appear to facilitate student performance in courses involving mostly content knowledge, including abnormal psychology (Hindman, 1980), developmental psychology (Dickson & Miller, 2005, 2006), and social psychology (Whitley, 1996). However, when a course involves more process than content knowledge, as in the current course and in other studies of statistics (Ludorf, 1994; Gharib et al., 2012) or engineering (Visco et al., 2007), students’ test performance appears to be related to help sheet content. Second, given the research showing that the content of a help sheet is related to test performance, we join Visco and colleagues in calling for instructors to become more involved in help sheet construction as a way to provide students of all abilities with a high quality help sheet.

    References

    Boniface, D. (1985). Candidates’ use of notes and textbooks during an open-book examination. Educational Research, 27(3), 201-209.

    Dickson, K. L., & Bauer, J. (2008). Do students learn course material during crib card construction? Teaching of Psychology, 35, 117-120.

    Dickson, K. L., & Miller, M. D. (2005). Authorized crib cards do not improve exam performance. Teaching of Psychology, 32, 230–232.

    Dickson, K. L., & Miller, M. D. (2006). Effect of crib card construction and use on exam performance. Teaching of Psychology, 33, 39–40.

    Dorsel, T. N., & Cundiff, G. W. (1979). The cheat-sheet: Efficient coding device or indispensable crutch? Journal of Experimental Education, 48, 39–42.

    Drake, V. K., Freed, P., & Hunter, J. M. (1998). Crib sheets or security blankets? Issues in Mental Health Nursing, 19, 291–300.

    Erbe, B. (2007). Reducing test anxiety while increasing learning – The cheat sheet. College Teaching, 55(3), 96-97.

    Funk, S. C., & Dickson, K. L. (2011). Crib card use during tests: Helpful or a crutch? Teaching of Psychology, 38, 114-117.

    Gharib, A., Phillips, W., & Mathew, N. (2012). Cheat sheet or open-book? A comparison of the effects of exam types on performance, retention, and anxiety. Psychology Research, 2(8), 469-478.

    Hindman, C. D. (1980). Crib notes in the classroom: Cheaters never win. Teaching of Psychology, 7, 166–168.

    Ludorf, M. R. (1994). Student selected testing: A more sensitive evaluation of learning.  Paper presented to the American Psychological Society Institute on The Teaching of Psychology, Washington, DC.

    Trigwell, K. (1987). The crib card examination system. Assessment and Evaluation in Higher Education, 12, 56–65.

    Visco, D., Swaminathan, S., Zagumny, L., & Anthony, H. (2007). AC 2007-621: Interpreting student-constructed study guides. ASEE Annual Meeting and Exposition Proceedings, Honolulu, HI.

    Whitley, B. E., Jr. (1996). Does “cheating” help? The effect of using authorized crib notes during examinations. College Student Journal, 30, 489–493.

     

    Author Notes

    Mark Ludorf is a cognitive psychologist who joined the faculty at Stephen F. Austin State University (SFA) in the fall of 1990 and is currently a full professor of psychology. He has served in university-wide administrative positions at two universities (SFA and Oakland University in Rochester, MI). He was also an American Council on Education (ACE) Fellow in Academic Administration. Ludorf has been active in the use of technology in higher education. He has taught online since 2001 and developed several online courses. His other academic interests are in leadership and study abroad. Ludorf currently serves as Senior Editor of the Journal of Leadership Studies. He has also offered numerous study abroad programs in Italy. At SFA, Ludorf has been recognized as the Alumni Distinguished Professor and was awarded the SFA Foundation Faculty Achievement Award.

    Sarah Clark was an undergraduate teaching assistant in statistics at Stephen F. Austin State University. She completed her Bachelor’s degree in psychology at SFA. She was also the 2013 recipient of the Jeff and Jackie Badders Award, which is given to the top graduating senior psychology major.

  • 04 Dec 2017 8:18 AM | Anonymous

    Mindfulness and Meditation in Psychology Courses

    Jennifer A. McCabe & Dara G. Friedman-Wheeler

    Goucher College

    As part of a college-wide “theme semester” on mindfulness in spring 2016, we incorporated mindfulness into four psychology classes. Here we share our experiences with regard to course design, assignments and activities, and student feedback. For instructors who are considering including mindfulness and/or meditation in psychology courses, we conclude with a reflection and overall assessment of what went well and what could be modified for the future, integrated with the results of our research on mindfulness in the college classroom.

    Defining Mindfulness and Its Relevance to Education

    A prominent definition of mindfulness in contemporary psychology is “paying attention… on purpose, in the present moment, and non-judgmentally” (Kabat-Zinn, 1994, p. 4). Mindfulness has received much attention recently, in the research literature and elsewhere (for an overview, see Curtiss & Hofmann, 2017). Studies have suggested benefits of mindfulness for physical health (e.g., pre-hypertension; Hughes et al., 2013), mental health (e.g., subjective well-being; Sedlmeier et al., 2012), and cognitive performance (e.g., working memory; Mrazek, Franklin, Phillips, Baird, & Schooler, 2013).

    Increasingly, researchers are studying mindfulness activities in elementary and secondary schools (e.g., Black & Fernando, 2014; Britton et al., 2014; Mindful Schools, 2017). Research is just beginning to emerge on the effects of mindfulness in the college classroom (e.g., Helber, Zook, & Immergut, 2012; Ramsburg & Youmans, 2014).

    In the next two sections, each author provides a first-person narrative of her experiences integrating mindfulness into psychology courses.

     

    Cognitive Psychology Courses (JM)

    I approached this semester with enthusiasm about mindfulness, but a lack of experience. I decided to commit to a regular practice of mindfulness exercises (10 minutes daily) using Headspace (https://www.headspace.com/), which helped bring a degree of authenticity (and confidence) to my courses, and also personal benefit in terms of well-being and focus.

    In integrating mindfulness into Cognitive Psychology, a mid-level undergraduate course, I added a section to my syllabus that defined mindfulness, connected mindfulness to other topics in the course (e.g., perception, attention, memory, decision-making), and invited students to engage in meaningful study and practice of mindfulness throughout the semester. I added a course learning objective connecting mindfulness to metacognition: “Improve your metacognitive skills (knowing what you know, learning how to learn), through traditional book learning and through mindful practice and reflection.” (Syllabi for the courses discussed in this essay are available by request.)

    On the first day of class, I asked students questions about mindfulness to gauge pre-existing knowledge and practice, before their first mindful meditation exercise (Day 1 of Headspace). At least once per week, class included 5-10 minutes of guided mindfulness exercises. To prepare students, I asked them to arrive on time, to listen to instructions, and to be still and quiet during the meditation time. I assured them that it was okay not to engage in meditation. I emphasized that in addition to possible personal benefits, the exercises might provide insight into research we would read on mindfulness and cognition.

    Throughout the semester, I chose short guided exercises for class use, including several from the UCLA Mindful Awareness Research Center (http://marc.ucla.edu/body.cfm?id=22) and Mindfulness for Teens (http://mindfulnessforteens.com/guided-meditations/). Some were sitting exercises and some were standing; some had longer periods of silence and some were narrated throughout. Whenever possible, I connected the mindfulness activity to the course topic (e.g., body scan meditation for Attention; guided visualization for Visual Imagery). One day we went outside and I guided students through an exercise to focus on aspects of the environment (e.g., colors, shapes, movement; from a training session with Dr. Philippe Goldin).

    Regarding assessment, I revised my existing article summary and reflection assignment to focus on research that related mindfulness/meditation to course topics. For each article, students completed this form and engaged in group discussions during class. I quickly discovered that there were not many published articles about the impact of mindfulness on cognition that were appropriate for students in a mid-level undergraduate course.

    For the topics Perception and Attention, I assigned half the students an article about enhancing visuospatial processing using varieties of meditation (Kozhevnikov, Louchakova, Josipovic, & Motes, 2009), and the other half an article about improvements in perceptual discrimination and sustained attention following meditation training (MacLean et al., 2010). With respect to Memory, I assigned half an article about how brief mindfulness training can improve verbal GRE performance as mediated by enhancing working memory (Mrazek et al., 2013), and the other half read about increases in false memory after meditation (Wilson, Mickes, Stolarz-Fantino, Evrard, & Fantino, 2015). For the final topics in the course, Reasoning and Decision-Making, students read an article about reductions in the sunk-cost bias after meditation (Hafenbrack, Kinias, & Barsade, 2014).

    When I compared responses to mindfulness questions on the first and last days of class, the percentage of students providing a reasonably accurate definition of mindfulness jumped from 10% to 68%, and the percentage listing cognition-related benefits of mindfulness went from 17% to 59%. However, there was no change in the reported practice of mindfulness/meditation, nor in the perceived importance of the scientific study of mindfulness.

    I also incorporated mindfulness into my upper-level course, Seminar in Cognition, Teaching, and Learning. I began this class with an assignment to watch Andy Puddicombe’s TED talk as an orientation to mindfulness (https://www.ted.com/talks/andy_puddicombe_all_it_takes_is_10_mindful_minutes?language=en); to watch the introductory Headspace video; and to complete Day 1 of Headspace’s free “Take 10” program. Students were asked to commit to 10 minutes of guided meditation per day for the next 10 days and then to submit a written reflection. In their reflections, every student expressed openness to the possibility of trying meditation, and for all but 2 students (out of 18), this would be their first experience with it. However, their reflections after 10 days were less encouraging, due perhaps more to time management issues than anything else. Although it was a required assignment, many did not find time to complete the program.

    Later in the course, I assigned articles focusing on mindfulness and meditation. Students read an article about the neuroscience of mindfulness and mind-wandering, with implications for education (Immordino-Yang, Christodoulou, & Singh, 2012). They also read and discussed the article on working memory and GRE performance used in Cognitive Psychology (Mrazek et al., 2013). This class day was purposefully scheduled to coincide with Mary-Helen Immordino-Yang’s on-campus lecture, which students were encouraged to attend.

    About five weeks into the semester, we launched a collaborative class project to collect an annotated reference list of resources on mindfulness for educators. Students used library and web applications to search for resources, then built a shared document. The final product was a 16-page file containing primary research articles, review/critique articles, books and book chapters, popular press articles, and web sites relevant to the topic of Mindfulness and Education (http://blogs.goucher.edu/themesemester/files/2016/04/Mindfulness-and-Education-Resources-Sp16.pdf).

    Though I did not collect formal data in this course, students generally demonstrated interest and enthusiasm. Even given the density of some of the readings on mindfulness, there was a good amount of energized discussion. Also, I was impressed by their active participation in the collaborative project and felt this was a meaningful and authentic learning experience.

     

    Health and Clinical Psychology Courses (DFW)

Mindfulness seemed a natural fit for my mid-level course in health psychology. Indeed, the topic had come up organically in years past through a project in which students choose a health behavior to change using empirically informed strategies; many students chose to adopt a meditation practice. Spring 2016 was no exception, as several students took on this challenge, availing themselves of tools and apps (e.g., Headspace, Calm) as part of their strategic behavior change project.

    I incorporated a mindfulness-related learning objective into the course: by the end of the semester, students should be able to “describe mindfulness and its health-related benefits.”  Mindfulness was woven into several sections of the course. At the start of the course, where we usually focus on what health psychology is, students also read a brief overview of mindfulness (Kabat-Zinn, 1994), allowing us to operate from a shared conceptualization of mindfulness and to relate it to mental and physical health.

    The health psychology course includes a community-based learning component in which students work collaboratively with staff from Hopewell Cancer Support (a local organization providing psychosocial services to those affected by cancer – including some related to mindfulness), to address particular challenges faced by the non-profit. Because of this collaboration, we discuss cancer early in the class, as well as the research on psychosocial interventions for cancer. Here students read and discussed an article on Mindfulness-Based Cancer Recovery (Tamagawa et al., 2015). Later in the class, as part of our stress and coping topic, we read and talked more broadly about mindfulness and health, reading a review article on mindfulness-based treatments (and research on their effectiveness) for a variety of health conditions (Carlson, 2015). These readings were brought into the classroom in a variety of ways: sometimes we would discuss the articles as a large group, or in small groups. Sometimes I would start class by projecting a short list of thought questions on the screen about the reading and would ask students to write for a minute or two about each question, before getting into groups to discuss one of the questions in more depth.

Throughout the semester, the mindfulness-related events on campus were brought into the class through an “event-reporting” assignment. Specifically, students were asked to sign up to attend one of six events on campus or in the community during the semester (four of which were mindfulness theme semester speakers Mary-Helen Immordino-Yang, Omid Safi, Alicia Garza, and Dan Siegel), and to report back to the class about what they had heard. Their reports were informal and included (a) biographical information about the speaker (obtained from the event or through Internet research), (b) the main point or points of the talk, (c) the types of “evidence” used to make those points (case examples, personal experience, research…), and (d) how the event related to the field of health psychology or to specific topics covered in class.

    I conceived of the “event reporting” assignment as a way to encourage attendance at these events without insisting that all students attend them all (unrealistic, given schedule constraints), and as a way for the whole class to get some benefit from each talk. In addition, I wanted students to think actively about the events they attended, including identifying the speaker’s main point(s) and the different types of arguments that can be made (based on different “ways of knowing”). I was so pleased with this assignment that I have used it again since.

    During the theme semester I also taught an upper-level course, Seminar in Clinical Psychology: Emotion Regulation, which has always included readings about, experiential activities with, and discussion of mindfulness. During the mindfulness theme semester, I incorporated mindfulness into one of the existing learning objectives, stating that students would be able to “discuss a variety of emotion regulation strategies (including mindfulness) and evaluate their adaptive and maladaptive aspects.”

    In previous iterations of the course, I had introduced students to the practice of mindfulness by conducting part of Jon Kabat-Zinn’s (2006) eating meditation (mindfully attending to a raisin). This semester, I increased the experiential coverage of mindfulness, inviting the class to engage in “Mindful Mondays,” a collection of activities that allowed us to try a variety of purported mindfulness inductions, and to compare and contrast them. I started a shared document and invited students to construct the list of activities collaboratively. Several students added activities but requested that I (or a guide on a video) lead the class through them (e.g., a brief chair-yoga routine intended for the workplace); others proposed activities that they led themselves (e.g., a walking meditation, based on an experience a student had had at a monastery while studying abroad). The ultimate list included activities from the more traditional raisin meditation and a body scan to “mindful creative expression” and coloring. We sometimes left our seats (to do yoga or sit on the floor), and we sometimes left the classroom (to do the walking meditation on the campus’s labyrinth).

    These exercises were voluntary; students could arrive five minutes late to class on any given Monday, if they did not wish to participate in an activity. Generally, though, attendance was excellent, and students seemed enthusiastic about Mindful Mondays (indeed, I proposed such a thing to my seminar the subsequent semester, and they, too, chose to partake). Discussions following the practice focused on topics such as whether or not the effects of the exercises felt subjectively like mindfulness (per the attentional and attitudinal components of the definition), whether or not there might be inadvertent harms associated with these activities, whether some people might benefit from some types of mindfulness more than others, and what characteristics might predict positive experiences with which activities.

    During the theme semester, the class dug more deeply into the scholarly literature on mindfulness, as well. The class has long included a reading on third-wave cognitive behavioral interventions that provides a nice overview of mindfulness as it is incorporated into these treatments (Baer & Huss, 2008). This semester we also read pieces focused on the emotional benefits of mindfulness (Arch & Landy, 2015) and on mindfulness and emotion regulation (Corcoran, Farb, Anderson, & Segal, 2010; Leahy, Tirch, & Napolitano, 2011).

    Near the end of the semester, I asked students to create “concept maps” of mindfulness, in an attempt to integrate the varied aspects of mindfulness that we had read about, discussed, and experienced. Students worked on blank paper, and then volunteered to have their concept maps projected, so that the class could discuss the various components of mindfulness and associated constructs. While each of these concept maps was of course different, they all reflected the complexity of the concept, and I believe that by the end of the semester students showed substantial improvement in their understandings of the construct of mindfulness as used in contemporary clinical psychology.

     

    Our Research, in Brief

    Separate from the theme semester courses, we have conducted systematic research on mindfulness in the college classroom (importantly, no data were collected during the theme semester). In our study, students in psychology, chemistry, peace studies, and English classes followed a 5-minute guided meditation (an edited mp3 file; Kabat-Zinn, 2005, used with permission) at the start of class. Within-subjects analyses found no benefits for working memory, content retention, mindful awareness during class, or elaboration, at the end of a 4-week period in which students followed the guided meditation, as compared to a 4-week period in which they did not. While we refer interested readers to the full research report (Friedman-Wheeler et al., 2017), we want to share some thoughts about how such an exercise might be beneficial, with adjustments.

For one, it may be that students who were not interested in participating did not actively do so (although they did sit quietly during the meditation period). It may also be the case that five minutes is not the appropriate dose of meditation for the classroom. Perhaps one minute of silent meditation would be better suited to the classroom setting (and feel more doable to students). On the other hand, perhaps five minutes three times a week is an insufficient dose, though a larger dose would consume more class time than instructors might wish.

    Perhaps student buy-in and benefit are enhanced when more context is provided, as was done in the theme semester courses described in this essay. There is an obvious risk of demand characteristics, but perhaps those with a greater understanding of mindfulness might derive more benefit from it than those who participate in an exercise without fully understanding why.

     

    Conclusion: Opportunities and Challenges for
    Mindfulness in Psychology Courses

    From an academic perspective of encouraging undergraduate students to learn about the science of mindfulness, readers should bear in mind that the level and quality of available readings are varied. For example, while there is ample scholarly work on mindfulness in clinical and health psychology, there is less research suitable for undergraduates related to cognition. Overall, there is a need for more research on mindfulness and learning in higher education. As noted above, the results of our research study suggest no measurable impact of brief in-class interventions on variables related to academic performance, though others have found benefits (e.g., Helber, Zook, & Immergut, 2012; Ramsburg & Youmans, 2014).

From a class-time-management perspective, we experienced challenges balancing mindfulness exercises with other activities and content. We found that exercises between two and ten minutes long can work well, and incorporating mindfulness is made far easier by the availability of short mindful meditation exercises online, including those that can be guided by the instructor and those that are pre-packaged to be presented in video and/or audio format.

    From a student-engagement perspective, we found that many students were “on board” with the idea of using a small amount of class time to practice mindfulness. However, some seemed disengaged.

From a student mental health perspective, although there is research suggesting that mindfulness practice may lead to improved mental health, we also noted the potential for negative affect: irritation or boredom, or, in some cases, perhaps feelings of being overwhelmed (as might happen to some survivors of trauma; Briere & Scott, 2012). We handled these possibilities in several ways: (1) permitting students not to attend the mindfulness portion of class and/or to leave the room as needed; and (2) reminding students that no one can be forced to meditate, and that they could choose to ignore the instructions and simply sit quietly during the exercises.

    In sum, there are many opportunities for bringing the science and practice of mindfulness into the undergraduate classroom, and the potential seems great. There are, however, challenges to be explored and better understood, as we seek creative ways to connect our students with mindfulness so that they might benefit from it intellectually and personally.

     

    References

Arch, J. J., & Landy, L. N. (2015). Emotional benefits of mindfulness. In K. W. Brown, J. D. Creswell, & R. M. Ryan (Eds.), Handbook of mindfulness: Theory, research, and practice (pp. 208-224). New York, NY: Guilford Press.

    Baer, R. A., & Huss, D. B. (2008). Mindfulness- and acceptance-based therapy. In J. L. Lebow (Ed.), Twenty-first century psychotherapies: Contemporary approaches to theory and practice (pp. 123-166). Hoboken, NJ: John Wiley & Sons.

    Black, D. S., & Fernando, R. (2014). Mindfulness training and classroom behavior among lower-income and ethnic minority elementary school children. Journal of Child and Family Studies, 23(7), 1242-1246. doi:10.1007/s10826-013-9784-4

    Briere, J., & Scott, C. (2012). Mindfulness in trauma treatment. In Principles of trauma therapy: A guide to symptoms, evaluation, and treatment, 2nd edition (pp. 215-230). Thousand Oaks, CA: Sage.

    Britton, W. B., Lepp, N. E., Niles, H. F., Rocha, T., Fisher, N. E., & Gold, J. S. (2014). A randomized controlled pilot trial of classroom-based mindfulness meditation compared to an active control condition in sixth-grade children. Journal of School Psychology, 52(3), 263-278. doi:10.1016/j.jsp.2014.03.002

Carlson, L. E. (2015). Mindfulness-based interventions for physical conditions: A selective review. In K. W. Brown, J. D. Creswell, & R. M. Ryan (Eds.), Handbook of mindfulness: Theory, research, and practice (pp. 405-425). New York, NY: Guilford Press.

    Corcoran, K. M., Farb, N., Anderson, A., & Segal, Z. V. (2010). Mindfulness and emotion regulation: Outcomes and possible mediating mechanisms. In A.M. Kring & D.M. Sloan (Eds.), Emotion regulation and psychopathology: A transdiagnostic approach to etiology and treatment (pp. 339-355). New York, NY: Guilford Press.

    Curtiss, J., & Hofmann, S. G. (2017). Meditation. In A. Wenzel (Ed.) The SAGE Encyclopedia of Abnormal and Clinical Psychology. Thousand Oaks, CA: SAGE Publications.

    Friedman-Wheeler, D. G., McCabe, J. A., Chapagain, S., Scherer, A. M., Barrera, M. L., DeVault, K. M., Hoffmann, C., Mazid, L. J., Reese, Z. A., Weinstein, R. N., Mitchell, D., & Finley, M. (2017). A brief mindfulness intervention in the college classroom: Mindful awareness, elaboration, working memory, and retention of course content. Manuscript in preparation.

Hafenbrack, A. C., Kinias, Z., & Barsade, S. G. (2014). Debiasing the mind through meditation: Mindfulness and the sunk-cost bias. Psychological Science, 25(2), 369-376. doi:10.1177/0956797613503853

Helber, C., Zook, N., & Immergut, M. (2012). Meditation in higher education: Does it enhance cognition? Innovative Higher Education, 37(5), 349-358. doi:10.1007/s10755-012-9217-0

    Hughes, J. W., Fresco, D. M., Myerscough, R., van Dulmen, M. M., Carlson, L. E., & Josephson, R. (2013). Randomized controlled trial of mindfulness-based stress reduction for prehypertension. Psychosomatic Medicine, 75(8), 721-728. doi:10.1097/PSY.0b013e3182a3e4e5

Immordino-Yang, M. H., Christodoulou, J. A., & Singh, V. (2012). Rest is not idleness: Implications of the brain’s default mode for human development and education. Perspectives on Psychological Science, 7, 352-364.

    Kabat-Zinn, J. (1994). Wherever you go, there you are: Mindfulness meditation in everyday life. New York, NY: Hyperion.

    Kabat-Zinn, J. (2005). Sitting meditation. On Guided Meditation (Series 1). [mp3 file]. Louisville, CO: Sounds True, Inc.

    Kabat-Zinn, J. (2006). Eating meditation. On Mindfulness for Beginners [CD]. Louisville, CO: Sounds True, Incorporated.

Kozhevnikov, M., Louchakova, O., Josipovic, Z., & Motes, M. A. (2009). The enhancement of visuospatial processing efficiency through Buddhist Deity Meditation. Psychological Science, 20(5), 645-653. doi:10.1111/j.1467-9280.2009.02345.x

    Leahy, R. L., Tirch, D., & Napolitano, L. A. (2011). Mindfulness. In Emotion regulation in psychotherapy: A practitioner’s guide (pp.91-116). New York, NY: Guilford Press.

MacLean, K. A., Ferrer, E., Aichele, S. R., Bridwell, D. A., Zanesco, A. P., Jacobs, T. L., … (2010). Intensive meditation training improves perceptual discrimination and sustained attention. Psychological Science, 21(6), 829-839. doi:10.1177/0956797610371339

Mindful Schools. (2017). Research on Mindfulness in Education [Web log page]. Retrieved from http://www.mindfulschools.org/about-mindfulness/research/

    Mrazek, M. D., Franklin, M. S., Phillips, D. T., Baird, B., & Schooler, J. W. (2013). Mindfulness training improves working memory capacity and GRE performance while reducing mind wandering. Psychological Science, 24(5), 776-781. doi:10.1177/0956797612459659

    Ramsburg, J. T., & Youmans, R. J. (2014). Meditation in the higher-education classroom: Meditation training improves student knowledge retention during lectures. Mindfulness, 5(4), 431-441. doi:10.1007/s12671-013-0199-5

    Sedlmeier, P., Eberth, J., Schwarz, M., Zimmermann, D., Haarig, F., Jaeger, S., & Kunze, S. (2012). The psychological effects of meditation: A meta-analysis. Psychological Bulletin, 138(6), 1139-1171. doi:10.1037/a0028168

Tamagawa, R., Speca, M., Stephen, J., Pickering, B., Lawlor-Savage, L., & Carlson, L. E. (2015). Predictors and effects of class attendance and home practice of yoga and meditation among breast cancer survivors in a Mindfulness-Based Cancer Recovery (MBCR) program. Mindfulness, 6(5), 1201-1201. doi:10.1007/s12671-014-0381-4

    Wilson, B. M., Mickes, L., Stolarz-Fantino, S., Evrard, M., & Fantino, E. (2015). Increased false-memory susceptibility after mindfulness meditation. Psychological Science, 26(10), 1567-1573. doi: 10.1177/0956797615593705

     

     

    Dara G. Friedman-Wheeler is a licensed clinical psychologist and Associate Professor of Psychology at Goucher College, in Baltimore, MD.  She earned her Ph.D. in Clinical Psychology from American University in Washington DC.  She teaches courses on psychological distress and disorder (abnormal psychology), health psychology, quantitative research methods, and emotion regulation, as well as serving as core faculty for Goucher’s public health minor.  She has experience working with patients in the public sector with presenting problems such as mood disorders, anxiety disorders, suicidality, chronic pain, chronic illness, substance abuse/dependence, and personality disorders.  She has co-authored empirical journal articles and the book Group Cognitive Therapy for Addictions (with Drs. Wenzel, Liese, and Beck), served as associate editor for the SAGE Encyclopedia of Abnormal and Clinical Psychology,  and has received several awards from the National Institutes of Health.  Her interests are in the areas of coping, health, addictions, behavior change, cognitive therapy and mood disorders.

     

    Jennifer A. McCabe is an Associate Professor of Psychology, and director of the Center for Psychology, at Goucher College in Baltimore, MD. She earned her Ph.D. in Cognitive Psychology from the University of North Carolina at Chapel Hill. She teaches courses on human cognition, as well as introductory psychology. Her research focuses on memory strategies, metacognition, and the scholarship of teaching and learning. She has been recently published in Memory and Cognition, Psychological Science in the Public Interest, Teaching of Psychology, Instructional Science, and Psi Chi Journal of Psychological Research. Supported by Instructional Resource Awards from the Society for the Teaching of Psychology, she has also published two online resources for psychology educators on the topics of mnemonics and memory-strategy demonstrations. She is a consulting editor for Teaching of Psychology.

     

  • 14 Nov 2017 1:04 PM | Anonymous

    Fantasy Researcher League: Engaging Students in Psychological Research
    Daniel R. VanHorn, North Central College

    In this essay, I describe a Fantasy Researcher League course design that I presented to a group of colleagues at the National Institute on the Teaching of Psychology (NITOP) in 2013. This innovative course was designed to get students excited about psychological research. I am grateful for the encouragement and feedback that I received from those who attended the institute. I have divided this essay into four sections. First, I describe the motivation behind the development of the course. Second, I describe the course itself. Third, I present survey data collected from students that have taken the course. Finally, I discuss how this course might be used in the future.


    Motivation

While students may not complete textbook reading assignments regularly (Burchfield & Sappington, 2000; Clump, Bauer, & Bradley, 2004), they do often find value in the primary textbook assigned for a course (Carpenter, Bullock, & Potter, 2006). For example, a textbook is often a very useful quick reference guide. Textbooks are also helpful because they simplify and clarify psychological research. The problem with textbooks is that, in truth, psychological research is not simple and clear, but rather it is complex and messy. Textbooks also often present information as if it is finalized instead of an ongoing process and dialogue among experts in the field. Finally, many textbooks are not structured in a way that enables critical evaluation of the research they present.

Reading and discussing primary sources (e.g., articles with original research that are published in peer-reviewed journals) provides an alternative to textbooks, and I believe students significantly benefit from working with primary sources in psychology. When students work with primary sources they begin to appreciate the intricate work behind what textbooks present as statements of obvious fact. They start to see that psychological research is constantly evolving and that there is still much to be learned. Working with the psychological literature also helps students develop critical thinking skills (Anisfeld, 1987; Chamberlain & Burrough, 1985). They learn to critically examine evidence and use that evidence to evaluate theories and/or claims.

A significant challenge that many psychology teachers, including myself, face is getting students to engage in psychological research. Reading and thinking about psychological research is difficult, so we have to find creative ways to motivate our students to work with primary sources in psychology. One approach is to take the things that excite our students outside the classroom and implement them inside the classroom. Keeping this approach in mind, I looked to fantasy sports for help in getting my students engaged with the psychological literature.

Fantasy sports are extremely popular. The Fantasy Sports Trade Association (2013) estimated that the 2013 American market for fantasy sports included over 35 million players. Fantasy sports available to players include baseball, basketball, football, hockey, soccer, golf, and auto racing. In fantasy sports, approximately 8-14 participants get together and form a league in the sport of their choice. For example, a small group of friends might form a fantasy professional American football league. Each participant in the league selects current professional American football players to make up their fantasy team. The players on a participant’s team score points based on how they perform in real-life games (e.g., how many yards they gain and how many touchdowns they score), and the participants’ teams compete against each other.


    The Course

I feel that fantasy sports provide a model that can be used in the classroom to engage students. I took the fantasy sports model and modified it to engage students in psychological research by creating a course that took the form of a game. The official title of the course was Immersion in the Psychological Literature, but the course became known to students and faculty alike as Fantasy Researcher League. The official learning objectives of the course included the following: effectively search for published research and track research lines/programs, describe the research programs of several prominent psychologists, explain the current theory and findings of a few threads of research in the field, and identify how psychological theory and research evolve over the course of a research program.

In addition to the official learning objectives described above, I wanted to show students that psychological research is dynamic. It is evolutionary. What students read in their textbooks is old news. I wanted my students to be on the cutting edge of psychological research and get a sense of what it feels like to discover something new. I hoped to get my students excited about research in psychology. I also wanted them to discuss psychology outside of a traditional classroom setting in a place where they would exchange ideas and not worry about whether they were getting a C+ or a B- in the course. Finally, I wanted them to discover their passion by having the freedom to explore their own academic interests.

The course consisted of a small group of students who met with faculty approximately every three weeks throughout the academic year. At the beginning of the course, the faculty members teaching the course put together a list of several prominent psychology researchers from a variety of research areas. Students were given the opportunity to add other researchers to this list. All the researchers on the list had to be currently active in the discipline. Each student drafted a team of five researchers from the finalized list. Each researcher could only be selected once. These teams made up our fantasy researcher league. Each student then selected one published article by each of their five researchers and tracked the number of times each article was cited during the course of the game. Students had the option to replace their articles at the beginning of each term. Students also kept track of all of their researchers’ scholarly activities and accomplishments (e.g., books, articles, and presentations) during the academic year. Students documented their researchers’ productivity by designing and maintaining a team webpage. A student earned points for their team by correctly documenting their team’s scholarly activities and citations. The league scoring system is described in Table 1.


    Table 1

    Fantasy Researcher League Scoring System

Scholarly Activity                        Points
Book, single author                       8
Book, co-author                           4
Book editor                               3
Book chapter author                       3
Article, first author                     4
Article, other than first author          2
Citation                                  1
Presentation                              3
Grant/Award                               3

    During class meetings, students discussed the recent research activity of their teams. Students were also asked to connect their researchers’ current work to their researchers’ past work. At the end of each class, team scores were updated and high scoring teams were recognized.
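For instructors who would like to automate the bookkeeping, the sketch below shows, in Python, one way a team's points could be tallied from an activity log using the Table 1 values. It is a minimal illustration only; the dictionary keys, the team_score function, and the example data are hypothetical and were not part of the original course.

# A minimal, hypothetical sketch of the Table 1 scoring system (not the course's actual tool).
# Each documented activity is looked up in the point table and summed for the team.

POINTS = {
    "book_single_author": 8,
    "book_co_author": 4,
    "book_editor": 3,
    "book_chapter_author": 3,
    "article_first_author": 4,
    "article_other_author": 2,
    "citation": 1,
    "presentation": 3,
    "grant_award": 3,
}

def team_score(activity_log):
    """Sum the points for a list of (researcher, activity) records."""
    return sum(POINTS[activity] for _, activity in activity_log)

# Example: one team's documented activity for a scoring period (invented data).
team_log = [
    ("Researcher A", "article_first_author"),
    ("Researcher A", "citation"),
    ("Researcher B", "presentation"),
    ("Researcher C", "book_chapter_author"),
    ("Researcher C", "citation"),
]

print(team_score(team_log))  # 4 + 1 + 3 + 3 + 1 = 12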


    Survey Data

Five students who participated in the course during the fall of 2011 and eight students who participated during the winter of 2012 completed a voluntary survey in which they indicated how much they agreed or disagreed with specific statements related to the learning objectives for the course. Ratings ranged from 1 (strongly disagree) to 7 (strongly agree). Student responses to the closed-ended survey questions are shown in Table 2, and they suggest that we met our learning objectives. The vast majority of students agreed that they developed basic research skills, understood and could discuss cutting-edge research, learned about today’s prominent psychological researchers, and learned how research programs evolve over time.


    Table 2

    Student Survey Responses on Course Learning Objectives

Note. Ratings on the 7-point scale were recoded into three categories: Agree (5-7), Neutral (4), and Disagree (1-3). Counts are shown as Agree / Neutral / Disagree (N = 13).

As a result of participating in this course, ...
• I can better search PsycInfo to locate research-related material and people. (11 / 1 / 1)
• I can more effectively search for psychological research and researchers in electronic sources. (12 / 1 / 0)
• I am more familiar with the intellectual history and background of some psychology researchers. (11 / 2 / 0)
• I am more familiar with some of the most current research in psychology. (13 / 0 / 0)
• I feel more competent at presenting and discussing a researcher’s current research. (13 / 0 / 0)
• I have a better understanding of how a researcher’s program of research or interests evolves over time. (11 / 2 / 0)
• I can describe the research program of several prominent psychology researchers. (9 / 4 / 0)
• I have a better sense of which areas of psychology interest me and which do not. (13 / 0 / 0)
• I can better create and edit webpages. (13 / 0 / 0)
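As a concrete illustration of the recoding described above, here is a minimal Python sketch that collapses 7-point ratings into the three categories used in Table 2; the recode function and the example ratings are invented for illustration and are not the survey's actual data.

# Hypothetical sketch of recoding 7-point agreement ratings into the three
# categories used in Table 2: 1-3 = Disagree, 4 = Neutral, 5-7 = Agree.
from collections import Counter

def recode(rating):
    """Collapse a 1-7 rating into Disagree, Neutral, or Agree."""
    if rating <= 3:
        return "Disagree"
    if rating == 4:
        return "Neutral"
    return "Agree"

# Invented ratings from 13 respondents for a single survey item.
ratings = [7, 6, 5, 7, 6, 4, 5, 7, 6, 5, 6, 7, 3]
print(Counter(recode(r) for r in ratings))
# e.g., Counter({'Agree': 11, 'Neutral': 1, 'Disagree': 1})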

    Students were then asked to describe what they learned in the class beyond the topics already covered in the closed-ended survey questions. Responses to these questions suggest that students enjoyed the social nature of the game, learned more about psychological research, and began to discover what areas of psychology interest them most. Examples of student responses to this open-ended question are included below.

• “I was able to find researchers that I would be interested in following later.”
• “I learned what areas in psychology interest me, which has helped me make decisions for my future.”
• “How to effectively create a webpage.”
• “What modern research is like.”
• “Better research skills.”
• “How to find articles that cite another article.”
• “Winning!”


    The Future

Student surveys suggest that the fantasy researcher league model engages students in psychological research and provides an exciting alternative to traditional courses and/or assignments. The fantasy researcher league model gets students to read and discuss primary sources. This is crucial because working with primary sources is one way for students to develop critical thinking skills (Anisfeld, 1987; Chamberlain & Burrough, 1985). The fantasy researcher league model also helps create a learning community where students play a central role in learning and discovery. It is the students who select the researchers and research topics that are presented and discussed in class. In the fantasy researcher league model, teachers provide the initial structure of the course but then focus on supporting and empowering student learning and discovery. In the future, I envision a fantasy researcher league online gaming experience that can be used in a variety of disciplines and can bring together team managers from a single college or from across the world. In the meantime, I believe that the fantasy researcher league course described here could be incorporated into many courses as a long-term research project. In my course, students worked individually, but I believe the project would also work well if completed in small groups.

     

    References

    Anisfeld, M. (1987). A course to develop competence in critical reading of empirical research in psychology. Teaching of Psychology, 14(4), 224-227. doi:10.1207/s15328023top1404_8


    Burchfield, C. M., & Sappington, J. (2000). Compliance with required reading assignments. Teaching of Psychology, 27(1), 58-60.

    Carpenter, P., Bullock, A., & Potter, J. (2006). Textbooks in teaching and learning: The views of students and their teachers. Brookes eJournal of Learning and Teaching, 2(1), Retrieved from http://bejlt.brookes.ac.uk/

    Chamberlain, K., & Burrough, S. (1985). Techniques for teaching critical reading. Teaching of Psychology, 12(4), 213-215. doi:10.1207/s15328023top1204_8

    Clump, M. A., Bauer, H., & Bradley, C. (2004). The extent to which psychology students read textbooks: A multiple class analysis of reading across the psychology curriculum. Journal of Instructional Psychology, 31(3), 227-232.

    Fantasy Sports Trade Association. (2013). Home page. Retrieved from http://www.fsta.org/

    ***************************

    Daniel R. VanHorn earned his B.S. in psychology from Wittenberg University in 2003. He earned his M.S. (2005) and Ph.D. (2009) in cognitive psychology from Purdue University. He is currently an Assistant Professor of Psychology at North Central College in Naperville, Illinois. He regularly teaches introductory psychology, cognitive psychology, statistics, and research methods. He also has an active research program in cognitive psychology where he trains aspiring psychologists.