Society for the Teaching of Psychology: Division 2 of the American Psychological Association

E-xcellence in Teaching
Editor: Annie S. Ditta

  • 01 Mar 2018 9:28 AM | Anonymous

    Ronald G. Shapiro

    Since most students who take occasional psychology courses, and even most undergraduate psychology majors, will not enroll in graduate school in psychology or become psychology professionals, it is important to prepare these students for jobs in other fields. This article suggests how offering a non-majors psychology course in lieu of introduction to psychology, making minor changes to other courses, providing different types of opportunities, and focusing recommendations can help prepare students for jobs in different fields.

    Non-Majors Psychology Course. One of the “facts” I learned in graduate school was that non-majors who earned an “A” in an introduction to psychology course, when asked to retake the final exam a year later, did not pass it (a Sidney L. Pressey study reported by David Hothersall in a History and Systems class, Ohio State University, circa 1977). This fact has had a huge impact on my thinking. If people aren’t going to remember the material, why teach it? One might argue that it is easier to relearn material. True, but non-majors are not very likely to do this. Instead, I would recommend making a list of the items you really want non-majors to remember five years after the final exam, and teaching those materials, and only those materials, to undergraduate non-majors. Be thorough in teaching those materials. Teach them in a variety of contexts. One way to do this would be to offer a non-majors psychology course. Structure the non-majors course around the ways students might use the material (rather than around the specialties by which we structure the field). Topics might focus on how to use psychology:

    ·       In society (separating “fake news” and “alternative facts” from science);

    ·       In marketing and advertising;

    ·       In working with others;

    ·       In structuring a work environment;

    ·       In understanding how a person develops from birth through death; or

    ·       As a potential consumer of psychological services.

    This structure would help students better use the material and see how what’s being taught might be helpful to them. In this restructured course, remember to teach only what you want the students to remember five years after the final exam.

    In Today’s Courses. Explain and have students complete numerous projects applying whatever you teach to real-world problems. If the material you teach is basic research so cutting-edge that there are no applications for it yet, have students participate in projects that help them think about how the material might be used to change lives a year, a decade, or a generation from now. This may require teaching less material, but in more depth. Show students how to become a “citizen expert” (if not a scientist), continuing to follow up on these projects throughout life.

    Providing Advice to Students. Truly understand the student’s objectives (and the objectives of the person paying for the student’s education) before offering advice. Early in my career, I would have advised a student that their primary objective in college is to learn all they can from their academic departments, and that everything else is secondary. For some students this is truly the case, and I would recommend this today. For example, I have encouraged many high school students to meet faculty on their visits to college campuses and figure out how they can become involved in faculty research from freshman week onward. For other students, I would today argue that their best bet is to lead a very balanced life. The extracurricular activities, friendships formed, internships, and other experiences might be more valuable to them than what they learn in their academic departments. Encourage these students to take advantage of the numerous benefits provided while they are enrolled in a program (e.g., regular access to faculty, internship programs) that are harder to obtain without student status. Recommend that students learn as much about business as possible through studying I/O psychology as well as completing courses in business. Also, recommend that students learn as much about technology as their interests allow, because more and more positions will require knowledge about technology.

    Producing a Resume. You may wish to help your students prepare their resumes. Resumes for industry are vastly different from academic resumes or CVs. An industrial resume needs to ROAR (be Results Oriented and Relevant). In addition to being much shorter than academic CVs, it needs to show a potential recruiter and a potential hiring manager, in just seconds, why this applicant is better than the numerous others applying for the same job. A resume that shows real results, and that shows the applicant took the initiative to apply their knowledge and experience to the specific employer’s needs, is most beneficial. Keywords may be important for the recruiter. Showing real results (rather than job responsibilities) that demonstrate to a hiring manager how those results translate into action is critical. In response to the frequently asked question “How long should a resume be?”, the answer is: long enough that the person reading it becomes more enthusiastic about the candidate with every sentence, and not so long that it bores with redundant or irrelevant detail. Providing the names of faculty members (e.g., “worked in Professor Smith’s lab”) is only important if the reader is likely to know or have heard of Professor Smith. References would not normally be included on a resume (to protect faculty from random calls), and the words “References Furnished Upon Request” should never be included, because the point is obvious and it is somewhat insulting to the reader (it says, “I do not trust you with the names of my references”).

    Writing Letters of Recommendation. You are writing a letter of recommendation, not a performance evaluation. Your job, should you choose to accept it, is to sell the student to prospective employers by pointing out his or her strengths and why the potential employer will be better off with this student (as opposed to someone else) on their team. Before deciding whether you can do this (unless you know up front that you cannot), review the student’s resume and ask the student for a list of content you might include in the letter. If you cannot use the content, explain to the student what you can do for them in a letter and suggest that there are probably others who can do a better job for them. Don’t “kill the student with faint praise.” Don’t discuss the student’s weaknesses or areas for improvement.

    The Interview. Help your students communicate with potential colleagues, managers, people familiar with their work, and people not familiar with their work. In an industrial interview, applicants may meet with many people, including recruiters, potential managers, and colleagues. Be sure your students can communicate their research as well as other topics effectively. They should be able to explain their work (emphasizing their own contributions and differentiating them from the work of others) in one minute, five minutes, ten minutes, or a full-length presentation, leaving the listener engaged, excited about the topic, and seeing how the applicant would be the best fit in the organization; be sure they can do so in the time allocated. One way to do this is to show how their research fits the company’s mission and requirements. I might add that the purpose of the interview is to determine, for both the applicant and the company, whether there is a good fit between the candidate and the position. Accordingly, the applicant should be prepared to ask meaningful questions that will help them decide if the position is a good fit for them, explain how they will be a real asset to the specific company, and demonstrate a thorough understanding of the company and enthusiasm for being part of it.

    Decision Making. Businesses need to get products to market in a timely fashion. Thus, decision making in business is simply different from decision making in academia. In academic basic research one might hold to a standard of p < .05, p < .01, or p < .001. In industry, decisions may be made with absolutely no evidence (depending upon the industry). If an employee is 50.01% confident in a decision based upon knowledge and research, they should be prepared to make a recommendation, as the recommendation is based upon some knowledge. Depending on the circumstances, they should also be prepared to qualify how confident they are in the decision. Rather than using p values, corporate executives may be more likely to use the 80/20 rule: you can accomplish 80% of what you want to do with 20% of the effort, so stop the process and go when you are 80% confident. You can help students understand this important distinction.
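    To make the contrast concrete, here is a minimal illustrative sketch (mine, not part of the original essay; the function names and example values are assumptions chosen for illustration) of the two decision styles described above:

```python
# Illustrative sketch only: an academic significance gate versus a
# business-style confidence threshold. Names and values are assumptions.

def academic_go(p_value: float, alpha: float = 0.05) -> bool:
    """Academic rule: act only when the evidence clears a significance gate."""
    return p_value < alpha

def business_go(confidence: float, threshold: float = 0.80) -> bool:
    """80/20-style rule: go once you are ~80% confident; even 50.01%
    confidence can support a clearly qualified recommendation."""
    return confidence >= threshold

print(academic_go(0.12))                    # False: p = .12 fails p < .05
print(business_go(0.82))                    # True: 82% clears the 80% bar
print(business_go(0.5001, threshold=0.5))   # True: a qualified recommendation
```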

    Deadlines. Deadlines are critical in business… far more so than in academia. They are real. No matter how thorough a contribution is, if it is late it may be totally useless. There may be some circumstances in which a late contribution is acceptable, usually when an even more critical process has been delayed, but the odds of this are minimal. The academic practice of deducting points for late work really doesn’t map onto much in business: a recommendation a day or a week late is not, for example, 80% or 90% as good as a recommendation delivered in a timely fashion. A more realistic way to make decisions about accepting late work would be to shuffle a deck of cards after the late work is completed and draw the top card. If it is, for example, an ace, accept the work (about a 4-in-52, or roughly 8%, chance); if not, don’t.

    Oral Communications. Communicating in business is simply different from communicating in school. For example, I learned a very bad habit in graduate school: ask questions to show you understand the work and to expose defects in a presenter’s thinking. One of my best managers ever pointed this out to me. His recommendation was to (1) ask my questions only if everyone else had finished theirs and my question had not yet been asked, and (2) ask questions only for clarification; otherwise, address the questions with the presenter offline. Be sure that your students understand this important distinction.

    Written Communications. In academics we tend to write long journal articles explaining numerous details about our work. In industry, a brief executive summary is the more important means of communication. Executives trust that we know how to do our work, and we may not need to demonstrate to them how we derived our results. When sending written communication, keep the receiver in mind and anticipate their schedule, frame of mind, and organizational style (i.e., details versus quick summaries). Chances are that an executive will be very busy, rushed, and stretched thin, in which case having results and next steps up front will go a long way. Keep thorough lab notes. Depending on the corporate culture expected by your executive team, write the detailed report as backup or else skip it altogether. In my first report on a study I did at a major corporation, two of us were presenting: my colleague was to present part 1, and I was to present parts 2 and 3. Somehow, when he finished, I went right into part 3. No one cared that the details were left out. Indeed, the comments I received from my client were completely complimentary… that my department had learned how to present more concisely.

    Research Involvement. Offer your students an opportunity to work with you on research. This will help them develop valuable skills. Be sure that they can explain what the research was about, their role in it, and how the research was better because of their participation (as opposed to that of another person). Be sure they can explain this both succinctly and in detail.

    Perception of Degree Value. I’ve heard professionals, even a vice president in a major corporation, say “I was a psychology major and it was useless to me. It did not help me get a job.” That statement may be true. I did point out to her that while the degree may not have helped her secure her first position with the business, what she learned probably helped her advance very quickly from an entry-level position to a high-level executive position. She agreed. My recommendation here is to explain clearly to your students what a psychology degree may and may not do for them in the business world, generally when they are considering the major. Explain this at the beginning of the semester for each course. Explain again, at the end of the semester, how the content should help them. In between, assign work that will help the students explain how the content might apply to the business world.

    Seminars. Invite alumni who entered industry 1, 5, 10, and 20 years ago to offer seminars at your school, showing how their degrees have helped them and how current students might apply their own degrees.

    Internships. Completing one or two successful internships or co-ops can be an extremely valuable learning experience for students. If they perform well, it may also be the key to having a great job waiting for them on graduation day.

    In summary, I would say that a psychology major can be an extremely valuable tool for a professional throughout their career if they make the most of it by becoming deeply involved with their department, research, course work, and internships. If, on the other hand, they focus on taking mostly large lecture courses to meet the minimum degree requirements, they will minimize the value of their degree.

     

    Author note: I would like to thank Industrial Consultant Dr. Margarita Posada Cossuto for helpful comments.


  • 01 Feb 2018 9:27 AM | Anonymous

    Jennifer A. Oliver (Rockhurst University)

    The use of case studies is a common active learning strategy employed in psychology. Case learning is useful for developing critical-thinking skills (Krain, 2010), and for increasing students’ motivation and interest in course material (McManus, 1986a; McManus, 1986b). Researchers have described many positive outcomes of using case studies. These include helping abstract theoretical information become concrete, facilitating understanding; reinforcing course concepts as students analyze, infer, and examine relationships (Graham & Cline, 1980); and integrating students’ learning as they incorporate theory into practice and make practice integral to theory (McDade, 1995).

    Most of the work examining the use of case studies, however, relies on pre-written cases. I wanted to use cases in my Psychology of Disabilities course, but the only cases I could find focused either on abnormal psychology or on special education, and neither area was a good fit for this course. So, I decided to have students write their own cases. Few studies have examined having students write their own cases, although student-generated case studies have been used successfully at the undergraduate level in business and science, as well as in medical training (Yurco, 2014). In fact, Yurco reported that when students in an introductory neurobiology course created their own cases, they developed greater confidence, ownership of the learning process, a deeper understanding of the material, and improved critical-thinking skills. McManus (1986b) reported that having student groups compose a problem-focused case and generate potential solutions to the problem assisted students in consolidating course concepts in an adolescent psychology course.

    In this essay, I describe an applied project that I use in my undergraduate Psychology of Disabilities course, along with information on students’ performance and their views of the project. Psychology of Disabilities is a 4000-level (junior and senior) class. All of our 4000-level courses require an assignment involving an integrated literature review, but I also wanted to incorporate application into the course at a broader level than exam questions alone.

    The Project

    In the Psychology of Disabilities course, students chose a disability and wrote their own case study of an individual with that particular disability. The project included:

    ·       An integrative literature review (minimum of 4 double-spaced pages) describing the disability, including psychological and behavioral characteristics, prevalence rate, developmental changes as an individual with the disability moves from childhood to adolescence to adulthood, (possible) causes of the disability, and at least three sociocultural factors chosen from: race/ethnicity, gender, socioeconomic status, and differences among regions of the world. Students had to cite at least eight credible academic sources, with at least two being empirical journal articles. They were allowed to use one internet source summarizing information on the disability; however, that source had to be credible and written by professionals knowledgeable about the disability. I provided students with examples of both acceptable and unacceptable sources. Students turned in rough drafts of this section at midterm for feedback before the final project was due at the end of the semester.

    ·       A case study of a fictional individual with that disability at two contrasting ages (minimum of one full single-spaced page per age). In keeping with the developmental focus of the class, students could use any ages between preschool and young adulthood (up through the early 20s). In their case study, students needed to apply the characteristics, described in the literature review, that an individual with that disability would exhibit at the chosen ages, and to include a behavioral and/or verbal interaction between the individual and at least one other person.

    ·       A complete description of two possible interventions/treatments that would be appropriate for their fictional individual, including the effectiveness of each intervention/treatment. In addition, students discussed which age from their case each intervention/treatment would be most appropriate for and why.

    An example of a case study and two additional completed projects were available for the students to use as models.

    Student Performance

    In order to determine how well students performed on the assignment, I evaluated the grades on each section of the assignment from 56 students (28 each, in Spring 2014 and in Spring 2015). The percentages of grades for each area of the assignment were as follows:

    Grade          Case Study    Literature Review    Treatment/Intervention
    A              58.9          60.7                 51.8
    B              32.2          17.9                 30.3
    C              5.4           16.0                 12.5
    Below C        3.5           5.4                  5.4

    Overall, students performed well on all three areas of the assignment, with at least 78% earning an A or B on each portion. Over 90% of the students did quite well on the case study portion. Common reasons students lost points on the case study were not providing an example of a behavioral and/or verbal interaction between the individual and another person, not including in the case all of the characteristics described in the literature review, or not meeting the length requirement.

    A higher percentage of students received a C or lower on the literature review than on the other two sections, which was surprising since they had received feedback on a previous draft of this section. Common difficulties on the literature review included not fully describing the disability, choosing inappropriate sources (especially an over-reliance on internet sources), and failing to integrate information from multiple sources. In addition, although students were asked to describe three sociocultural factors chosen from race/ethnicity, gender, socioeconomic status, and differences among regions of the world, they often ignored these choices and came up with their own factors. For some students, this was the first psychology course requiring a writing assignment this in-depth, which may explain the lower scores on this section. A few students did not incorporate the feedback provided on their drafts.

    When students lost points on the treatment/intervention section, it was typically because they either did not fully describe the treatment/intervention or failed to discuss its effectiveness. A few students did not discuss how the treatments/interventions related to the case study portion of the assignment.

    I also wanted to assess students’ views of the project. After students had turned in their final project, they completed a 3-item anonymous rating of the project. Each question was rated on a 5-point Likert scale (1 = strongly disagree, 5 = strongly agree). Students’ average ratings were quite high:

    ·       Completing the case study project increased my understanding of disabilities, M = 4.32 (SD = 0.69, range 3-5)

    ·       The case study project was a useful way to help me learn the class material, M = 4.29 (SD = 0.73, range 3-5)

    ·       I rate the project as interesting, M = 4.38 (SD = 0.62, range 3-5)

    Students’ anonymous ratings of the case study project were quite high; the lowest rating given on any of the three questions was neutral. Thus, this project may be one way to get students more actively engaged in learning about disabilities. In addition to the high ratings, numerous unsolicited comments on the course evaluations indicated that students enjoyed the project and that it helped them learn to apply course material.

    I was also interested in whether completing a big application project was related to student performance on application-based material on the exams. There are three exams in the course. Each exam has nine application-based multiple-choice questions. I give Exam 1 before students have completed any of the project. I give Exam 2 after students have completed a draft of the literature review but before they have written the case study portion. Students take Exam 3 after they have completed the final project. I looked at these application-based multiple-choice questions on each exam to see if there was improvement after completing the case study.

    Exam      Average % correct
    Exam 1    59.2
    Exam 2    60.4
    Exam 3    81.6

    Students, on average, performed better on the application-based multiple-choice questions after completing the case study. While there was no significant difference between scores on Exams 1 and 2, t(8) = -1.976, p = .084, there were significant differences between Exam 1 and Exam 3, t(8) = -3.086, p = .015, and between Exam 2 and Exam 3, t(8) = -3.117, p = .014.
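    The t(8) values imply that these comparisons were run at the question level, pairing the nine application questions’ percent-correct values across exams. A minimal sketch of that kind of analysis appears below; the per-question numbers are hypothetical placeholders, since only the exam-level means are reported here.

```python
# Hedged sketch: paired t-tests across the nine application questions.
# The per-question percent-correct values are hypothetical; only the
# exam-level means (59.2, 60.4, 81.6) are reported in the essay.
from scipy import stats

exam1 = [55, 62, 58, 60, 57, 61, 59, 63, 58]  # hypothetical, mean ~59
exam2 = [57, 61, 60, 62, 58, 62, 60, 64, 60]  # hypothetical, mean ~60
exam3 = [78, 84, 80, 83, 79, 85, 81, 86, 78]  # hypothetical, mean ~82

print(stats.ttest_rel(exam1, exam2))  # cf. reported t(8) = -1.976, p = .084
print(stats.ttest_rel(exam1, exam3))  # cf. reported t(8) = -3.086, p = .015
print(stats.ttest_rel(exam2, exam3))  # cf. reported t(8) = -3.117, p = .014
```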

    Performance on the application-based multiple-choice questions improved after completion of the case study project. Students may simply be getting better at application-based multiple-choice questions with repeated practice on the exams, but completing the case study project may also have helped them learn to apply information.

    Suggestions for Using the Project in Other Psychology Courses

    While I designed this project for a specific course, it could easily be adapted for use in other Psychology classes, either with or without a literature review, such as:

    ·       Abnormal Psychology–students pick (or are assigned) a psychiatric disorder and create a fictional individual with that disorder, describing the symptoms specific to the characteristics (age, race/ethnicity, etc.) of the individual. Students could also discuss a specific theoretical orientation toward treatment.

    ·       Community Psychology–have students create a case about an individual, demonstrating how that individual is connected to his/her environments and how specific problems within the individual’s community have an impact on the individual.

    ·       Developmental Psychology–have students develop a fictional individual and describe how that individual changes while passing through different developmental periods; for example, in a child psychology class, what that individual looks like in early childhood compared to middle childhood. Or students could use one developmental period (e.g., adolescence) and describe how physical, cognitive, and social-emotional development interact at that age for that particular individual.

    ·       Health Psychology–students could create a case study about an individual with a specific health issue, discussing how the individual adjusts to and copes with the issue, what behaviors could protect the individual’s health, what behaviors harm it, and how those behaviors could be changed.

    Concluding Thoughts

    I have found this project to be a fun, engaging way to help students learn about disabilities. It demonstrates that the majority of students can apply information and describe how characteristics of disabilities can change developmentally. In addition, students appear to enjoy the assignment and it actually is more fun to read and grade than traditional literature reviews.

    References

    Graham, P.T., & Cline, P.C. (1980). The case method: A basic teaching approach. Theory into Practice, 19(2), 112–116.

    Krain, M. (2010). The effects of different types of case learning on student engagement. International Studies Perspectives, 11, 291-308.

    McDade, S.A. (1995). Case study pedagogy to advance critical thinking. Teaching of Psychology, 22(1), 9-10.

    McManus, J.L. (1986a). “Live” case study/journal record in adolescent psychology. Teaching of Psychology, 13(2), 70-74.

    McManus, J.L. (1986b). Student composed case study in adolescent psychology. Teaching of Psychology, 13(2), 92-93.

    Yurco, P. (2014). Student-generated cases: Giving students more ownership in the learning process. Journal of College Science Teaching, 43(3), 54-58.


  • 15 Jan 2018 4:43 PM | Anonymous
    Harwood, E.A., & Marsano, M. (Rivier University)

    Teaching in the age of millennial students is a challenge that should be embraced by all faculty, but what does this entail? Present-day students have grown up alongside technology as a basis for communication and understanding. Termed “digital natives” by Marc Prensky (2001), millennial students spend a great deal of time communicating through technology and are used to having information at their fingertips. Sending an average of 100 texts a day (Lenhart, 2012), the millennial student expects a near-immediate response to comments and can easily find the answer to a question by asking Google. Because millennials have a completely different experience with information than previous generations, especially the ease with which it can be accessed, students may wonder why we don’t instantly respond to email or provide our lecture notes before class (van der Meer, 2012). Taking notes may seem archaic and pointless if material is always available. Nevertheless, teaching students the skills necessary to navigate a surplus of information, and having them recognize the importance of quality over quantity, are now essential components of college curricula.

    How many times have your students asked you, “Is this going to be on the test?” Although this may seem an annoying question, students may be searching for clues about the essential concepts of the class. Main points that are crystal clear to us may not be as clear to our students (van der Meer, 2012). As experts in our field, we have already created our own organizational frameworks for the concepts we teach. We have formed deep, complex connections that have helped us master the material and make it seem easy for us to understand, while it may remain difficult for our students (Ambrose, Bridges, DiPietro, Lovett, & Norman, 2010). How can we scaffold our “expert” frameworks so that our students can build their own connections among course concepts and past experiences? In this essay, we describe several teaching techniques for creating these frameworks, from the way we encourage effective note-taking to the way we speak and incorporate multimedia.

    Why do students struggle with note-taking? Effective note-taking requires extensive cognitive resources, especially working memory capacity (Stefanou, Hoffman, & Vielee, 2008). Listening to the professor while simultaneously writing notes is difficult for many students (van der Meer, 2012). Differences in working memory resources may put some students at a distinct disadvantage depending on the types of notes they take (Bui, Myerson & Hale, 2013). Students with documented and undocumented learning disabilities may also face impediments. If the cognitive load is too great, students may not be able to contextualize or personalize the notes (Stefanou et al., 2008). Some may furiously write down everything you say, while others may copy down only what’s on the PowerPoint slides. Others may just sit back and wait until you put the slides online.

    Nevertheless, writing an idea down can help with long-term retention (Bui et al., 2013). Writing about a concept necessitates active recall and allows the formulation of clearer thoughts and more connections (Bui et al., 2013). Is it better to attempt to transcribe a lecture or to take more condensed, structured notes? While transcribing lectures by computer may initially help with recording more notes and with immediate recall of facts, taking organized notes produces more durable retention after a 24-hour delay (Bui et al., 2013). However, when students were allowed to study their transcribed lectures, recall was superior, especially for those with lower working memory capacity (in a 24-hour delay condition involving transcription of an 11-minute lecture; Bui et al., 2013). The attention necessary to transcribe a full lecture was not tested; still, this research (Bui et al., 2013) once again reminds us that students differ in their capabilities, and what works for one may not work for another.

    Brief, targeted interventions can improve note-taking. Nakayama, Mutsuura, and Yamamoto (2016) provided students with two short instructions on note-taking techniques, once at the beginning and again at the midpoint of a course, which included examples of good notes. This instruction increased student metacognition with regard to note-taking and improved the quality of notes over the course of the semester. Deliberately reviewing and restructuring notes can significantly improve grades as well (Cohen, Kim, Tan, & Winkelmes, 2013). For example, outlining, summarizing, and drawing connections between different concepts require active engagement and lead to better test performance than review alone (Cohen et al., 2013).

    Another technique for note-taking that utilizes scaffolding is directed notes (Harwood, 2016).  Similar to a review guide for an exam, directed notes act as a review guide for that day’s class. Given at the beginning of the class period, directed notes consist of a list of questions and activities about that day’s topics with plenty of space for students to write in their answers. The following are examples from a few different courses:

    1.      Summarize how neurons communicate. How is it like firing a gun? Use the following terms in your summary: Action Potential, Absolute Refractory Period, Threshold, All or None Response.

    2.      Now that we’ve covered the functions of the different brain structures, create your own concept map using your notes.

    3.      What advice would you give our aging population given what you know about adult development?

    4.      Describe how each of the following individuals expanded our understanding of attachment.

     

    Name                        Contributions
    John Bowlby
    Harry Harlow
    Konrad Lorenz               Imprinting; Critical Period
    Mary Ainsworth              Strange Situation Task

     

    5.      Lambert (1992) proposed four therapeutic factors that lead to client improvement. These are:

     

    The Big Four                                                 Variance                     Examples

    1. Client/Extra Therapeutic Factors

    2. Therapeutic Alliance

    3. Placebo, Hope, Expectancy

    4. Therapeutic Techniques

     

    6.      Write down your immediate reactions to this individual’s story of heroin addiction.

     

    As you can see, directed notes point students toward important concepts and assist them in creating their own examples and applying the material. When provided with guidelines, but not explicit notes, students are encouraged to form meaningful connections around the main ideas identified by the professor. Some important guidelines to keep in mind when creating directed notes for your course: include different types of questions and response formats, leave plenty of space for students to write, and ensure that directed notes are integrated into the course in some way, whether through group work or as a test review. Psychology is so pertinent to everyday life that it is rife with ways to make the material personally meaningful (“If you had to take an anti-depressant, which one would you take and why?”). Take advantage of this to further students’ critical thinking and interest in the field.

    While professors may be tempted to think that directed notes and guided notes are synonymous, there is a distinction between the two. Guided notes are an alternative to traditional PowerPoint slides with information missing to encourage attendance (Barbetta & Skaruppa, 1995). Results among the college population are mixed on whether guided notes provide advantages above and beyond complete PowerPoint slides on test performance (Neef, McCord & Ferreri, 2006). Guided notes may be effective in demonstrating information, but they may fail to encourage students to make connections beyond what’s on the slides.

    Note-taking techniques are one way that scaffolding can be achieved in the classroom, allowing students to organize and detail their thoughts in written form. In addition, the way information is presented to students provides another opportunity for framing. For example, one can provide organizational cues during class, such as explicit language that differentiates main points (“Carl Rogers identified 3 core conditions for a successful therapeutic relationship. The first is unconditional positive regard…”). Further, one can use transitional language that encourages students to refocus on a new idea and cues the type of notes to take and their organization (“Now that we understand the structure of a neuron, let’s discuss how neurons communicate”; Titsworth, 2004). We can also encourage students to elaborate beyond what we have explicitly covered, since the more information students add to their notes, the higher their scores on applied questions (Stefanou et al., 2008). For example, after explaining a concept or definition, I (Harwood) give students a few moments to write down their own examples (“Give an example of an empathic response to a friend’s problem”) and then have several share with the class. Five-minute writing prompts on a class topic can also foster generative notes and class conversation (“Based on what we’ve covered so far, why do you think heroin is so hard to quit?”).

    Using technology as a tool for creating conceptual frameworks in a course can also be effective with millennial students. PowerPoint slides are one method for scaffolding information and cuing students on how to organize their notes (Stefanou et al., 2008). With the integration of technology starting in K-12 schools (Ruggiero & Mong, 2015), students prefer, and may even expect, PowerPoint slides (Landrum, 2010). While students may want these slides before class (Babb & Ross, 2009; Landrum, 2010), and advance availability may increase class participation for those who typically participate (Babb & Ross, 2009), it does not appear to aid test performance (Babb & Ross, 2009), final grades (Bowman, 2009), or the addition of new ideas to one’s notes (Stefanou et al., 2008). We find that for many students, providing slides before class can decrease interest and stunt conversation. My (Harwood) compromise is to provide slides after we have finished the chapter, for students to fill any gaps in their notes.

    Finding the right balance between incorporating PowerPoint or other presentation media into a lecture while meeting students’ needs is a necessary consideration during lesson planning. Some believe that PowerPoint slides may condense the material too much, acting as “CliffsNotes” for the class, or preventing “big picture” thinking with its linear presentation (Kirova, Massing, Prochner, & Cleghorn, 2016).  It may be more effective to think of multimedia presentation technology as an extension of conveying main points and transitional language, rather than being the sole conveyor of information during a lecture. As much as we tend to lump students into the group of “millennials,” it is important to recognize their individual learning capacities and the need for a variety of teaching techniques.

    If you choose to use PowerPoint as a scaffolding technique, there are some common mistakes to avoid. First, don’t use your slides as “cue cards” (Gardner & Aleksejuniene, 2011). They should be made with the students in mind, rather than the instructor. When information is read off a slide, it overloads working memory and inhibits students’ opportunities to create connections; students also tend to lose interest quickly. Second, don’t overburden the slides with text (Gardner & Aleksejuniene, 2011; Stefanou et al., 2008). Providing too much information on a slide may result in students copying information rather than recording their own thoughts (Stefanou et al., 2008). Limiting the amount written on the slides gives students the opportunity to reason through information, which can promote generative learning. Third, integrate images with verbal descriptions, which is more effective for learning than text alone (Gardner & Aleksejuniene, 2011). Pictures really can say a thousand words! Seeing the devastating physical effects of methamphetamine use in a series of mug shots is much more powerful than reading about it or hearing a recitation of symptoms from the instructor. And fourth, incorporate video clips and other media that naturally appeal to the millennial student (Gardner & Aleksejuniene, 2011). Identifying the symptoms of cocaine abuse from a movie scene is an excellent way to elicit interest from students. Further seize the teachable moment by explicitly discussing how these images and clips relate to course concepts (“What properties of methamphetamine lead to these physical changes?” or “What symptoms are the characters showing that indicate stimulant use?”). Students may not automatically see these connections on their own.

    PowerPoint slides, organizational cues, and transitional language all aid students in creating their own class notes. Note-taking is a skill often overlooked by college educators, who assume their students already know how to do it. In a traditional lecture format, only a small amount of content is accurately captured in student notes (Kiewra, 1985). Considered more than just a “recording technique” (van der Meer, 2012, p. 13), taking notes and reviewing them helps students reconstruct what they have learned and makes it more personally meaningful. This actively engages the student with the material and increases retention (Bohay, Blakely, Tamplin, & Radvansky, 2011; Cohen et al., 2013; Kobayashi, 2006). Note-taking is a skill that will follow students long after they have left the classroom, giving them an advantage in the workplace by preventing mistakes and saving time.

    Regardless of the format an instructor chooses, it is important to remember that millennial students will benefit from modeled note-taking and scaffolded frameworks of knowledge. Considering the technology-centered background of today’s millennial student, we would be wise to incorporate media presentations in the classroom because they garner more attention. However, this must be tempered with the understanding that our main focus must be on generative learning and helping students make meaningful connections. Inspired teaching is more than content delivery. It is student-centered and focuses on cultivating skills that lead to a successful life.

    References

    Ambrose, S.A., Bridges, M.W., DiPietro, M., Lovett, M.C., & Norman, M.K. (2010). How learning works: Seven research-based principles for smart teaching. San Francisco, CA: John Wiley & Sons.

    Babb, K.A., & Ross, C. (2009).  The timing of online lecture slide availability and its effect on attendance, participation and exam performance. Computers & Education, 52, 868-881. doi:10.1016/j.compedu.2008.12.009

    Barbetta, P.M., & Skaruppa, C.L. (1995). Looking for a way to improve your behavior analysis lectures? Try guided notes. The Behavior Analyst, 18(1), 155-160.

    Bohay, M., Blakely, D. P., Tamplin, A. K., & Radvansky, G. A. (2011). Note taking, review, memory, and comprehension. American Journal of Psychology, 124(1), 63-73. doi: 10.5406/amerjpsyc.124.1.0063

    Bowman, L. L. (2009). Does posting PowerPoint presentations on WebCT affect class performance or attendance? Journal of Instructional Psychology, 36(2), 104-107.

    Bui, D.C., Myerson, J., & Hale, S. (2013). Note-taking with computers: Exploring alternative strategies for improved recall. Journal of Educational Psychology, 105(2), 299-309. doi: 10.1037/a0030367

    Cohen, D. D., Kim, E., Tan, J., & Winkelmes, M. (2013). A note-restructuring intervention increases students’ exam scores. College Teaching, 61(3), 95-99. doi: 10.1080/87567555.2013.793168

    Gardner, K., & Aleksejuniene, J. (2011). PowerPoint and learning theories: Reaching out to the millennials. Transformative Dialogues: Teaching & Learning Journal, 5(1), 1-11.

    Harwood, E. (2016). A Strategy for Active Engagement in the Classroom. In W. Altman, L. Stein, & J. E. Westfall (Eds.), Essays from E-xcellence in Teaching (Vol. 15, pp.  1-4). Retrieved from the Society for the Teaching of Psychology Web site: http://teachpsych.org/ebooks/eit2015/index.php.

    Kiewra, K. A. (1985). Providing the instructor's notes: An effective addition to student notetaking. Educational Psychologist, 20(1), 33-39. doi: 10.1207/s15326985ep2001_5

    Kirova, A., Massing, C., Prochner, L., & Cleghorn, A. (2016). Shaping the 'habits of mind' of diverse learners in early childhood teacher education programs through PowerPoint: An illustrative case. Journal of Pedagogy, 7(1), 59-78. doi:  10.1515/jped-2016-0004

    Kobayashi, K. (2006). Combined effects of notetaking/reviewing on learning and the enhancement through interventions: A metaanalytic review, Educational Psychology: An International Journal of Experimental Educational Psychology, 26(3), 459-477. doi: 10.1080/01443410500342070

    Landrum, R. E. (2010). Faculty and student perceptions of providing instructor lecture notes to students: Match or mismatch? Journal of Instructional Psychology, 37(3), 216-221. Retrieved from http://www.projectinnovation.biz/jip_2006.html.

    Lenhart, A. (2012, March 19). Teens, smartphones & texting. Retrieved March 16, 2017, from Pew Research Center: Internet, Science & Tech Web Site: http://www.pewinternet.org/2012/03/19/teens-smartphones-texting/#

    Nakayama, M., Mutsuura, K., & Yamamoto, H. (2016). Students’ reflections on their learning and note-taking activities in a blended learning course. The Electronic Journal of eLearning, 14(1), 43-53.

    Neef, N.A., McCord, B.E., & Ferreri, S. J. (2006). Effects of guided notes versus completed notes during lectures on college students’ quiz performance. Journal of Applied Behavior Analysis, 39(1), 123-130. doi: 10.1901/jaba.2006.94-04

    Prensky, M. (2001). Digital natives, digital immigrants. On the Horizon, 9(5), 1-6.

    Ruggiero, D., & Mong, C.J. (2015). The teacher technology integration experience: Practice and reflection in the classroom. Journal of Information Technology Education: Research, 14, 161-178.

    Stefanou, C., Hoffman, L., & Vielee, N. (2008). Note-taking in the college classroom as evidence of generative learning. Learning Environment Research, 11, 1-17. doi: 10.1007/s10984-007-9033-0

    Titsworth, B. S. (2004). Students' notetaking: The effects of teacher immediacy and clarity. Communication Education, 53(4), 305-320.

    van der Meer, J. (2012). Students’ note-taking challenges in the twenty-first century: Considerations for teachers and academic staff developers. Teaching in Higher Education, 17(1), 13-23. http://dx.doi.org/10.1080/13562517.2011.590974

  • 18 Dec 2017 4:01 PM | Anonymous
    Help Sheet Content Predicts Test Performance


    Mark R. Ludorf and Sarah O. Clark
    Stephen F. Austin State University

    Readers of E-xcellence in Teaching know the importance of finding the best teaching methods and techniques to reach students. Although instructors rightfully seek to improve their teaching to enhance student learning, too much focus is often placed on enhancing “input” and not enough on enhancing the fidelity of “output.” That is, instructors should not only explore methods that make them better teachers but also consider innovative ways to measure more accurately what students have learned.

    Professors regularly confront the challenge of teaching to a student population with diverse levels of academic ability. To address this diversity, instructors have implemented various pedagogical methods, many of which are time-consuming and tedious. One such method is to allow students to access information during a test. Some instructors limit the amount of information that is accessible (e.g., an index card or a standard sheet of paper), while others allow access to an unlimited amount of information (i.e., “open book”).

    Ludorf (1994) allowed students to select the amount of information they could access on each of five statistics tests. Results showed significantly higher average test performance (72% versus 62%) when less information was accessed than when more information was accessed, a result consistent with previous findings (Boniface, 1985).

    During the last three decades, numerous researchers (e.g., Dorsel & Cundiff, 1979) have explored the role of help sheets (also known as cheat sheets or crib sheets) and how their use relates to test performance (Dickson & Bauer, 2008; Dickson & Miller, 2005; Hindman, 1980; Visco, Swaminathan, Zagumny, & Anthony, 2007; Whitley, 1996), learning (Dickson & Bauer, 2008; Funk & Dickson, 2011), and anxiety reduction (e.g., Drake, Freed, & Hunter, 1998; Erbe, 2007; Trigwell, 1987). Overall, the results regarding help sheet use and these variables have been mixed.

    One aspect of help sheets that has received little attention is the relationship between the content of a help sheet and test performance. Most of the research cited above examined the relationship between test performance and whether or not a student used a help sheet. Only a few studies (Dickson & Miller, 2006; Gharib, Phillips, & Mathew, 2012; Visco et al., 2007) have explored how the specific content of a help sheet is related to performance.

    Dickson and Miller (2006) found significantly higher test performance when students used an instructor-provided help sheet than a student-generated one. However, this result may be confounded, as help sheet condition may have varied systematically with how much students studied. Visco et al. (2007) examined student-generated help sheets and concluded that students likely need additional direction on what content to include in order to enhance performance. Finally, Gharib et al. (2012) rated students’ help sheets for organization and amount of detail and found a reliable, positive relationship between the quality of help sheet content and test performance.

    To summarize the relevant research: the use of help sheets is not reliably or consistently related to student performance, learning, or anxiety levels. Moreover, help sheet quality appears to vary across students, and such variation may explain the mixed results. Thus, help sheet content should be examined more systematically.

    The current study systematically explored whether characteristics of help sheet content (e.g., overall quality, inclusion of process information, density of information) were related to test performance. Results of the study may be used to give students guidance (Visco et al., 2007) on constructing a help sheet in order to enhance performance.

    Method

    Participants

    Participants (N = 21) were students enrolled in a required junior-level psychological statistics course. Other sections of the course were taught by different instructors; students who enrolled in this section were unaware of the assessment that would be conducted. A majority of the participants were women. No other demographic information was collected.

    Materials

    Students created a one-page 8.5 × 11 in. [21.6 × 28 cm] help sheet to use on each test. The help sheet could contain any information a student wanted to include and both sides of the sheet could be used. Students were informed that help sheets would be collected.  Both sides of each help sheet were scanned to create an electronic copy. All help sheets were returned when the tests were returned.

    Procedures

    Students were required to construct a help sheet for each test, though there was no requirement to use the help sheet. Based on informal observation during the test, all students appeared to use the help sheet to some degree.

    Tests in the statistics course were all problem based and were graded on a 100 point scale. Student help sheets were collected, scanned, and rated by two raters on the variables of interest below. Both help sheet raters were blind to students’ test performance at the time that the ratings were made.

    Variables of interest. Help sheets were evaluated on the following variables:

    ·       Overall Quality (rated 4 to 0, with 4 being the highest quality);

    ·       Verbal Process information (i.e., instructions; 1 = very informational, 3 = neutral, 5 = not very informational);

    ·       Numeric Process information (i.e., solved problems; 1 = very informational, 3 = neutral, 5 = not very informational);

    ·       Density of information (rated in deciles, 10%–100%);

    ·       Organization of information (1 = very organized, 3 = neutral, 5 = very unorganized);

    ·       Use of Color (present or absent); and

    ·       Submission Order (the ordinal position in which the test was submitted).

     

    Results

    Analyses

    The analyses were based on students’ help sheets and test performance from a single test. Interrater reliability was computed for the two raters across the scales described above and ranged from moderate to high, from .521 (Organization) to .978 (Density).
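    The report does not name the agreement statistic used. One common and simple choice, shown in this hypothetical sketch, is the Pearson correlation between the two raters’ scores on each scale:

```python
# Hedged sketch: Pearson correlation as an interrater agreement index.
# The rating values are hypothetical placeholders; the study's raw ratings
# are unpublished, and the authors do not specify which statistic they used.
from scipy import stats

rater1_density = [30, 50, 80, 90, 40, 60, 70, 100, 20, 50]
rater2_density = [30, 60, 80, 90, 40, 70, 70, 100, 20, 40]

r, p = stats.pearsonr(rater1_density, rater2_density)
print(f"Interrater r = {r:.3f}")  # cf. reported range: .521 to .978
```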

    Help sheet ratings from the two raters were averaged, and students’ test scores were then regressed on these averaged ratings to determine which help sheet characteristics predicted test performance. Results showed that higher-quality help sheets predicted higher test performance (b = 33.20, p < .001), as did lower density of information (b = -.35, p = .05). Moreover, help sheets rated as containing more verbal process information predicted lower test performance (b = 13.14, p < .01; recall that higher ratings on this scale indicate less process information). None of the other variables was related to performance (p > .05).
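    For readers who want to run a comparable analysis on their own course data, a minimal sketch follows. The ratings are simulated placeholders (the study’s raw data are unpublished), and the use of statsmodels is an assumed tooling choice; only the regression structure, test score predicted from averaged ratings, comes from the description above.

```python
# Hedged sketch: OLS regression of test scores on averaged help-sheet ratings.
# All data below are simulated placeholders, not the study's data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 21                             # sample size matches the study
quality = rng.uniform(0, 4, n)     # Overall Quality (0-4)
density = rng.uniform(10, 100, n)  # Density (deciles, 10-100%)
verbal = rng.uniform(1, 5, n)      # Verbal Process rating (1-5)
scores = 30 + 8*quality - 0.2*density + 3*verbal + rng.normal(0, 5, n)

X = sm.add_constant(np.column_stack([quality, density, verbal]))
fit = sm.OLS(scores, X).fit()
print(fit.params)   # cf. reported b: 33.20 (quality), -.35 (density), 13.14 (verbal)
print(fit.pvalues)
```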

    Discussion, Conclusion and Recommendations

    Results of the preliminary analyses suggest that it is not enough just to consider whether a student has access to a help sheet or not, but rather a careful examination of the help sheet content is required. Similar to Gharib et al. (2012), overall quality of the help sheet was found to be a very important characteristic of the help sheet. As overall quality increased, test scores also increased.

    Density of information was also significantly related to performance. Although not the strongest effect, having less information on the help sheet predicted higher performance. Such a pattern is consistent with previous research (Visco et al., 2007) and may indicate that density of information is an inverse proxy for learning. That is, students who have a robust understanding of the material do not need to include as much information and create a less dense help sheet. Conversely, students who do not have a robust understanding must include as much information as possible to compensate, thereby creating a high-density help sheet.

    One surprising finding was that students who included more verbal process information, such as instructions on how to perform certain procedures, scored lower than students who included less of it. Similar to the density argument above, it could be that these students included more verbal process information because they were not comfortable completing such problems without that help.

    Finally, in examining the help sheet research there are two notable issues. First, help sheets do not facilitate student performance in courses involving mostly content knowledge, including abnormal psychology (Hindman, 1980), developmental psychology (Dickson & Miller, 2005, 2006), and social psychology (Whitley, 1996). However, when a course includes more process than content knowledge, as in the current course or other studies involving statistics (Ludorf, 1994; Gharib et al., 2012) or engineering (Visco et al., 2007), students’ test performance appears to be related to help sheet content. Second, given the research showing that help sheet content is related to test performance, we join Visco and colleagues in calling for instructors to become more involved in help sheet construction, as a way to provide students of all abilities with a high-quality help sheet.

    References

    Boniface, D. (1985). Candidates’ use of notes and textbooks during an open-book examination. Educational Research, 27(3), 201-209.

    Dickson, K. L., & Bauer, J. (2008). Do students learn course material during crib card construction? Teaching of Psychology, 35, 117-120.

    Dickson, K. L., & Miller, M. D. (2005). Authorized crib cards do not improve exam performance. Teaching of Psychology, 32, 230–232.

    Dickson, K. L., & Miller, M. D. (2006). Effect of crib card construction and use on exam performance. Teaching of Psychology, 33, 39–40.

    Dorsel, T. N., & Cundiff, G. W. (1979). The cheat-sheet: Efficient coding device or indispensable crutch? Journal of Experimental Education, 48, 39–42.

    Drake, V. K., Freed, P., & Hunter, J. M. (1998). Crib sheets or security blankets? Issues in Mental Health Nursing, 19, 291–300.

    Erbe, B. (2007). Reducing test anxiety while increasing learning – The cheat sheet. College Teaching, 55(3), 96-97.

    Funk, S. C., & Dickson, K. L. (2011). Crib card use during tests: Helpful or a crutch? Teaching of Psychology, 38, 114-117.

    Gharib, A., Phillips, W., & Mathew, N. (2012). Cheat sheet or open-book? A comparison of the effects of exam types on performance, retention, and anxiety. Psychology Research, 2(8), 469-478.

    Hindman, C. D. (1980). Crib notes in the classroom: Cheaters never win. Teaching of Psychology, 7, 166–168.

    Ludorf, M. R. (1994). Student selected testing: A more sensitive evaluation of learning.  Paper presented to the American Psychological Society Institute on The Teaching of Psychology, Washington, DC.

    Trigwell, K. (1987). The crib card examination system. Assessment and Evaluation in Higher Education, 12, 56–65.

    Visco, D., Swaminathan, S., Zagumny, L., & Anthony, H. (2007). AC 2007-621: Interpreting student-constructed study guides. ASEE Annual Meeting and Exposition Proceedings, Honolulu, HI.

    Whitley, B. E., Jr. (1996). Does “cheating” help? The effect of using authorized crib notes during examinations. College Student Journal, 30, 489–493.

     

    Author Notes

    Mark Ludorf is a cognitive psychologist who joined the faculty at Stephen F. Austin State University (SFA) in the fall of 1990 and is currently a Full Professor of Psychology. He has served in university-wide administrative positions at two universities (SFA and Oakland University in Rochester, MI). He was also an American Council on Education (ACE) Fellow in Academic Administration. Ludorf has been active in the use of technology in higher education. He has taught online since 2001 and developed several online courses. His other academic interests are in leadership and study abroad. Ludorf currently serves as Senior Editor of the Journal of Leadership Studies. He has also offered numerous study abroad programs in Italy. At SFA, Ludorf has been recognized as the Alumni Distinguished Professor and was awarded the SFA Foundation Faculty Achievement Award.

    Sara Clark was an undergraduate teaching assistant in statistics at Stephen F. Austin State University. She completed her Bachelor’s degree in Psychology at SFA. She was also the 2013 recipient of the Jeff and Jackie Badders Award which is given to the top graduating senior psychology major.

  • 04 Dec 2017 8:18 AM | Anonymous

    Mindfulness and Meditation in Psychology Courses

    Jennifer A. McCabe & Dara G. Friedman-Wheeler

    Goucher College

    As part of a college-wide “theme semester” on mindfulness in spring 2016, we incorporated mindfulness into four psychology classes. Here we share our experiences with regard to course design, assignments and activities, and student feedback. For instructors who are considering including mindfulness and/or meditation in psychology courses, we conclude with a reflection and overall assessment of what went well and what could be modified for the future, integrated with the results of our research on mindfulness in the college classroom.

    Defining Mindfulness and Its Relevance to Education

    A prominent definition of mindfulness in contemporary psychology is “paying attention… on purpose, in the present moment, and non-judgmentally” (Kabat-Zinn, 1994, p. 4). Mindfulness has received much attention recently, in the research literature and elsewhere (for an overview, see Curtiss & Hofmann, 2017). Studies have suggested benefits of mindfulness to physical health (e.g., pre-hypertension; Hughes et al., 2013), mental health (e.g., subjective well-being; Sedlmeier et al., 2012), and cognitive performance (e.g., working memory; Mrazek, Franklin, Phillips, Baird, & Schooler, 2013).

    Increasingly, researchers are studying mindfulness activities in elementary and secondary schools (e.g., Black & Fernando, 2014; Britton et al., 2014; Mindful Schools, 2017). Research is just beginning to emerge on the effects of mindfulness in the college classroom (e.g., Helber, Zook, & Immergut, 2012; Ramsburg & Youmans, 2014).

    In the next two sections, each author provides a first-person narrative of her experiences integrating mindfulness into psychology courses.

     

    Cognitive Psychology Courses (JM)

    I approached this semester with enthusiasm about mindfulness, but a lack of experience. I decided to commit to a regular practice of mindfulness exercises (10 minutes daily) using Headspace (https://www.headspace.com/), which helped bring a degree of authenticity (and confidence) to my courses, and also personal benefit in terms of well-being and focus.

    In integrating mindfulness into Cognitive Psychology, a mid-level undergraduate course, I added to my syllabus a section that defined mindfulness, connected it to other topics in the course (e.g., perception, attention, memory, decision-making), and invited students to engage in meaningful study and practice of mindfulness throughout the semester. I also added a course learning objective connecting mindfulness to metacognition: “Improve your metacognitive skills (knowing what you know, learning how to learn), through traditional book learning and through mindful practice and reflection.” (Syllabi for courses discussed in this essay are available by request.)

    On the first day of class, I asked students questions about mindfulness to gauge pre-existing knowledge and practice, before their first mindful meditation exercise (Day 1 of Headspace). At least once per week, class included 5-10 minutes of guided mindfulness exercises. To prepare students, I asked them to arrive on time, to listen to instructions, and to be still and quiet during the meditation time. I assured them that it was okay not to engage in meditation. I emphasized that in addition to possible personal benefits, the exercises might provide insight into research we would read on mindfulness and cognition.

    Throughout the semester, I chose short guided exercises for class use, including several from the UCLA Mindful Awareness Research Center (http://marc.ucla.edu/body.cfm?id=22) and Mindfulness for Teens (http://mindfulnessforteens.com/guided-meditations/). Some were sitting exercises and some were standing; some had longer periods of silence and some were narrated throughout. Whenever possible, I connected the mindfulness activity to the course topic (e.g., body scan meditation for Attention; guided visualization for Visual Imagery). One day we went outside and I guided students through an exercise to focus on aspects of the environment (e.g., colors, shapes, movement; from a training session with Dr. Philippe Goldin).

    Regarding assessment, I revised my existing article summary and reflection assignment to focus on research that related mindfulness/meditation to course topics. For each article, students completed this form and engaged in group discussions during class. I quickly discovered that there were not many published articles about the impact of mindfulness on cognition that were appropriate for students in a mid-level undergraduate course.

    For the topics Perception and Attention, I assigned half the students an article about enhancing visuospatial processing using varieties of meditation (Kozhevnikov, Louchakova, Josipovic, & Motes, 2009), and the other half an article about improvements in perceptual discrimination and sustained attention following meditation training (MacLean et al., 2010). With respect to Memory, I assigned half an article about how brief mindfulness training can improve verbal GRE performance as mediated by enhancing working memory (Mrazek et al., 2013), and the other half read about increases in false memory after meditation (Wilson, Mickes, Stolarz-Fantino, Evrard, & Fantino, 2015). For the final topics in the course, Reasoning and Decision-Making, students read an article about reductions in the sunk-cost bias after meditation (Hafenbrack, Kinias, & Barsade, 2014).

    When I compared responses to mindfulness questions on the first and last days of class, the percentage of students providing a reasonably accurate definition of mindfulness jumped from 10% to 68%, and the percentage listing cognition-related benefits of mindfulness went from 17% to 59%. However, there was no change in the reported practice of mindfulness/meditation, nor in the perceived importance of the scientific study of mindfulness.

    I also incorporated mindfulness into my upper-level course, Seminar in Cognition, Teaching, and Learning. I began this class with an assignment to watch Andy Puddicombe’s TED talk as an orientation to mindfulness (https://www.ted.com/talks/andy_puddicombe_all_it_takes_is_10_mindful_minutes?language=en); to watch the introductory Headspace video; and to complete Day 1 of Headspace’s free “Take 10” program. Students were asked to commit to 10 minutes of guided meditation per day for the next 10 days, then to submit a written reflection. In their reflections, every student expressed openness to the possibility of trying meditation, and for all but 2 students (out of 18), this would be their first experience with it. However, their reflections after 10 days were less encouraging – due perhaps more to time-management issues than anything else: although it was a required assignment, many did not find time to complete the program.

    Later in the course, I assigned articles focusing on mindfulness and meditation. Students read an article about the neuroscience of mindfulness and mind-wandering, with implications for education (Immordino-Yang, Christodoulou, & Singh, 2012). They also read and discussed the article on working memory and GRE performance used in Cognitive Psychology (Mrazek et al., 2013). This class day was purposefully scheduled to coincide with Mary-Helen Immordino-Yang’s on-campus lecture, which students were encouraged to attend.

    About five weeks into the semester, we launched a collaborative class project to collect an annotated reference list of resources on mindfulness for educators. Students used library and web applications to search for resources, then built a shared document. The final product was a 16-page file containing primary research articles, review/critique articles, books and book chapters, popular press articles, and web sites relevant to the topic of Mindfulness and Education (http://blogs.goucher.edu/themesemester/files/2016/04/Mindfulness-and-Education-Resources-Sp16.pdf).

    Though I did not collect formal data in this course, students generally demonstrated interest and enthusiasm. Even given the density of some of the readings on mindfulness, there was a good amount of energized discussion. Also, I was impressed by their active participation in the collaborative project and felt this was a meaningful and authentic learning experience.

     

    Health and Clinical Psychology Courses (DFW)

    Mindfulness seemed a natural fit for my mid-level course in health psychology. Indeed, the topic had come up organically in years past, through a project in which students choose a health behavior to change, using empirically-informed strategies – many students chose to adopt a meditation practice. Spring 2016 was no exception, as several students took on this challenge, availing themselves of tools and apps (e.g., Headspace, Calm) as part of their strategic behavior change project.

    I incorporated a mindfulness-related learning objective into the course: by the end of the semester, students should be able to “describe mindfulness and its health-related benefits.”  Mindfulness was woven into several sections of the course. At the start of the course, where we usually focus on what health psychology is, students also read a brief overview of mindfulness (Kabat-Zinn, 1994), allowing us to operate from a shared conceptualization of mindfulness and to relate it to mental and physical health.

    The health psychology course includes a community-based learning component in which students work collaboratively with staff from Hopewell Cancer Support (a local organization providing psychosocial services to those affected by cancer – including some related to mindfulness), to address particular challenges faced by the non-profit. Because of this collaboration, we discuss cancer early in the class, as well as the research on psychosocial interventions for cancer. Here students read and discussed an article on Mindfulness-Based Cancer Recovery (Tamagawa et al., 2015). Later in the class, as part of our stress and coping topic, we read and talked more broadly about mindfulness and health, reading a review article on mindfulness-based treatments (and research on their effectiveness) for a variety of health conditions (Carlson, 2015). These readings were brought into the classroom in a variety of ways: sometimes we would discuss the articles as a large group, or in small groups. Sometimes I would start class by projecting a short list of thought questions on the screen about the reading and would ask students to write for a minute or two about each question, before getting into groups to discuss one of the questions in more depth.

    Throughout the semester, the mindfulness-related events on campus were brought into the class, through an “event-reporting” assignment. Specifically, students were asked to sign up to attend one of 6 events on campus or in the community during the semester (four of which were mindfulness theme semester speakers Mary-Helen Immordino-Yang, Omid Safi, Alicia Garza, and Dan Siegel), and to report back to the class about what they had heard. Their reports were informal and included (a) biographical information about the speaker (obtained from the event or through Internet research), (b) the main point or points of the talk, (c) the types of “evidence” used to make those points (case examples, personal experience, research…), and (d) how the event related to the field of health psychology or to specific topics covered in class.

    I conceived of the “event reporting” assignment as a way to encourage attendance at these events without insisting that all students attend them all (unrealistic, given schedule constraints), and as a way for the whole class to get some benefit from each talk. In addition, I wanted students to think actively about the events they attended, including identifying the speaker’s main point(s) and the different types of arguments that can be made (based on different “ways of knowing”). I was so pleased with this assignment that I have used it again since.

    During the theme semester I also taught an upper-level course, Seminar in Clinical Psychology: Emotion Regulation, which has always included readings about, experiential activities with, and discussion of mindfulness. During the mindfulness theme semester, I incorporated mindfulness into one of the existing learning objectives, stating that students would be able to “discuss a variety of emotion regulation strategies (including mindfulness) and evaluate their adaptive and maladaptive aspects.”

    In previous iterations of the course, I had introduced students to the practice of mindfulness by conducting part of Jon Kabat-Zinn’s (2006) eating meditation (mindfully attending to a raisin). This semester, I increased the experiential coverage of mindfulness, inviting the class to engage in “Mindful Mondays,” a collection of activities that allowed us to try a variety of purported mindfulness inductions, and to compare and contrast them. I started a shared document and invited students to construct the list of activities collaboratively. Several students added activities but requested that I (or a guide on a video) lead the class through them (e.g., a brief chair-yoga routine intended for the workplace); others proposed activities that they led themselves (e.g., a walking meditation, based on an experience a student had had at a monastery while studying abroad). The ultimate list ranged from the more traditional raisin meditation and body scan to “mindful creative expression” and coloring. We sometimes left our seats (to do yoga or sit on the floor), and we sometimes left the classroom (to do the walking meditation on the campus’s labyrinth).

    These exercises were voluntary; students could arrive five minutes late to class on any given Monday, if they did not wish to participate in an activity. Generally, though, attendance was excellent, and students seemed enthusiastic about Mindful Mondays (indeed, I proposed such a thing to my seminar the subsequent semester, and they, too, chose to partake). Discussions following the practice focused on topics such as whether or not the effects of the exercises felt subjectively like mindfulness (per the attentional and attitudinal components of the definition), whether or not there might be inadvertent harms associated with these activities, whether some people might benefit from some types of mindfulness more than others, and what characteristics might predict positive experiences with which activities.

    During the theme semester, the class dug more deeply into the scholarly literature on mindfulness, as well. The class has long included a reading on third-wave cognitive behavioral interventions that provides a nice overview of mindfulness as it is incorporated into these treatments (Baer & Huss, 2008). This semester we also read pieces focused on the emotional benefits of mindfulness (Arch & Landy, 2015) and on mindfulness and emotion regulation (Corcoran, Farb, Anderson, & Segal, 2010; Leahy, Tirch, & Napolitano, 2011).

    Near the end of the semester, I asked students to create “concept maps” of mindfulness, in an attempt to integrate the varied aspects of mindfulness that we had read about, discussed, and experienced. Students worked on blank paper, and then volunteered to have their concept maps projected, so that the class could discuss the various components of mindfulness and associated constructs. While each of these concept maps was of course different, they all reflected the complexity of the concept, and I believe that by the end of the semester students showed substantial improvement in their understandings of the construct of mindfulness as used in contemporary clinical psychology.

     

    Our Research, in Brief

    Separate from the theme semester courses, we have conducted systematic research on mindfulness in the college classroom (importantly, no data were collected during the theme semester). In our study, students in psychology, chemistry, peace studies, and English classes followed a 5-minute guided meditation (an edited mp3 file; Kabat-Zinn, 2005, used with permission) at the start of class. Within-subjects analyses found no benefits for working memory, content retention, mindful awareness during class, or elaboration, at the end of a 4-week period in which students followed the guided meditation, as compared to a 4-week period in which they did not. While we refer interested readers to the full research report (Friedman-Wheeler et al., 2017), we want to share some thoughts about how such an exercise might be beneficial, with adjustments.
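    For readers curious about the analytic approach, a within-subjects comparison of this kind can be sketched as a paired-samples t-test, as below. The scores are invented for illustration and are not data from our study (see Friedman-Wheeler et al., 2017, for the actual analyses).

```python
import numpy as np
from scipy import stats

# Hypothetical content-retention scores for the same eight students during
# the meditation period and the no-meditation period (invented data).
meditation    = np.array([72, 65, 80, 58, 74, 69, 77, 63])
no_meditation = np.array([70, 66, 79, 60, 73, 71, 75, 64])

# Paired t-test: each student serves as his or her own control.
t, p = stats.ttest_rel(meditation, no_meditation)
print(f"t = {t:.2f}, p = {p:.3f}")  # a non-significant p would mirror a null result
```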

    For one, it may be that students who weren’t interested did not actively participate (although they did sit quietly during the meditation period). It may also be the case that five minutes is not the appropriate dose of meditation for the classroom. Perhaps one minute of silent meditation would be better suited to the classroom setting (and feel more do-able to students). On the other hand, perhaps five minutes three times a week is an insufficient dose, though a larger dose would consume more class time than instructors might wish.

    Perhaps student buy-in and benefit are enhanced when more context is provided, as was done in the theme semester courses described in this essay. There is an obvious risk of demand characteristics, but perhaps those with a greater understanding of mindfulness might derive more benefit from it than those who participate in an exercise without fully understanding why.

     

    Conclusion: Opportunities and Challenges for
    Mindfulness in Psychology Courses

    From an academic perspective of encouraging undergraduate students to learn about the science of mindfulness, readers should bear in mind that the level and quality of available readings are varied. For example, while there is ample scholarly work on mindfulness in clinical and health psychology, there is less research suitable for undergraduates related to cognition. Overall, there is a need for more research on mindfulness and learning in higher education. As noted above, the results of our research study suggest no measurable impact of brief in-class interventions on variables related to academic performance, though others have found benefits (e.g., Helber, Zook, & Immergut, 2012; Ramsburg & Youmans, 2014).

    From a class-time-management perspective, we experienced challenges balancing mindfulness exercises with other activities and content. We found that exercises between two and ten minutes long can work well–and incorporating mindfulness is made far easier by the availability of short mindful meditation exercises online, including those that can be guided by the instructor, and those that are pre-packaged to be presented in video and/or auditory format.

    From a student-engagement perspective, we found that many students were “on board” with the idea of using a small amount of class time to practice mindfulness. However, some seemed disengaged.

    From a student mental health perspective, though there is research suggesting mindfulness practice may lead to improved mental health, we also noted the potential for negative affect–irritation or boredom, or in some cases, perhaps feelings of being overwhelmed (as might happen to some survivors of trauma; Briere & Scott, 2012). We handled these possibilities in several ways: (1) permitting students not to attend the mindfulness portion of class and/or to leave the room as needed; and (2) reminding students that no one can be forced to meditate, and that they can choose to ignore the instructions and sit quietly during the exercises.

    In sum, there are many opportunities for bringing the science and practice of mindfulness into the undergraduate classroom, and the potential seems great. There are, however, challenges to be explored and better understood, as we seek creative ways to connect our students with mindfulness so that they might benefit from it intellectually and personally.

     

    References

    Arch, J. J., & Landy, L. N. (2015). Emotional benefits of mindfulness. In K. W. Brown, J. D. Creswell, & R. M. Ryan (Eds.), Handbook of mindfulness: Theory, research, and practice (pp. 208-224). New York, NY: Guilford Press.

    Baer, R. A., & Huss, D. B. (2008). Mindfulness- and acceptance-based therapy. In J. L. Lebow (Ed.), Twenty-first century psychotherapies: Contemporary approaches to theory and practice (pp. 123-166). Hoboken, NJ: John Wiley & Sons.

    Black, D. S., & Fernando, R. (2014). Mindfulness training and classroom behavior among lower-income and ethnic minority elementary school children. Journal of Child and Family Studies, 23(7), 1242-1246. doi:10.1007/s10826-013-9784-4

    Briere, J., & Scott, C. (2012). Mindfulness in trauma treatment. In Principles of trauma therapy: A guide to symptoms, evaluation, and treatment, 2nd edition (pp. 215-230). Thousand Oaks, CA: Sage.

    Britton, W. B., Lepp, N. E., Niles, H. F., Rocha, T., Fisher, N. E., & Gold, J. S. (2014). A randomized controlled pilot trial of classroom-based mindfulness meditation compared to an active control condition in sixth-grade children. Journal of School Psychology, 52(3), 263-278. doi:10.1016/j.jsp.2014.03.002

    Carlson, L. E. (2015). Mindfulness-based interventions for physical conditions: A selective review. In K. W. Brown, J. D. Creswell, & R. M. Ryan (Eds.), Handbook of mindfulness: Theory, research, and practice (pp. 405-425). New York, NY: Guilford Press.

    Corcoran, K. M., Farb, N., Anderson, A., & Segal, Z. V. (2010). Mindfulness and emotion regulation: Outcomes and possible mediating mechanisms. In A.M. Kring & D.M. Sloan (Eds.), Emotion regulation and psychopathology: A transdiagnostic approach to etiology and treatment (pp. 339-355). New York, NY: Guilford Press.

    Curtiss, J., & Hofmann, S. G. (2017). Meditation. In A. Wenzel (Ed.) The SAGE Encyclopedia of Abnormal and Clinical Psychology. Thousand Oaks, CA: SAGE Publications.

    Friedman-Wheeler, D. G., McCabe, J. A., Chapagain, S., Scherer, A. M., Barrera, M. L., DeVault, K. M., Hoffmann, C., Mazid, L. J., Reese, Z. A., Weinstein, R. N., Mitchell, D., & Finley, M. (2017). A brief mindfulness intervention in the college classroom: Mindful awareness, elaboration, working memory, and retention of course content. Manuscript in preparation.

    Hafenbrack, A. C., Kinias, Z., & Barsade, S. G. (2014). Debiasing the mind through meditation: Mindfulness and the sunk-cost bias. Psychological Science, 25(2), 369-376. doi:10.1177/0956797613503853

    Helber, C., Zook, N., & Immergut, M. (2012). Meditation in higher education: Does it enhance cognition? Innovative Higher Education, 37(5), 349-358. doi:10.1007/s10755-012-9217-0

    Hughes, J. W., Fresco, D. M., Myerscough, R., van Dulmen, M. M., Carlson, L. E., & Josephson, R. (2013). Randomized controlled trial of mindfulness-based stress reduction for prehypertension. Psychosomatic Medicine, 75(8), 721-728. doi:10.1097/PSY.0b013e3182a3e4e5

    Immordino-Yang, M. H., Christodoulou, J. A., & Singh, V. (2012). Rest is not idleness: Implications of the brain’s default mode for human development and education. Perspectives on Psychological Science, 7, 352-364.

    Kabat-Zinn, J. (1994). Wherever you go, there you are: Mindfulness meditation in everyday life. New York, NY: Hyperion.

    Kabat-Zinn, J. (2005). Sitting meditation. On Guided Meditation (Series 1). [mp3 file]. Louisville, CO: Sounds True, Inc.

    Kabat-Zinn, J. (2006). Eating meditation. On Mindfulness for Beginners [CD]. Louisville, CO: Sounds True, Incorporated.

    Kozhevnikov, M., Louchakova, O., Josipovic, Z., & Motes, M. A. (2009). The enhancement of visuospatial processing efficiency through Buddhist deity meditation. Psychological Science, 20(5), 645-653. doi:10.1111/j.1467-9280.2009.02345.x

    Leahy, R. L., Tirch, D., & Napolitano, L. A. (2011). Mindfulness. In Emotion regulation in psychotherapy: A practitioner’s guide (pp.91-116). New York, NY: Guilford Press.

    MacLean, K. A., Ferrer, E., Aichele, S. R., Bridwell, D. A., Zanesco, A. P., Jacobs, T. L., … (2010). Intensive meditation training improves perceptual discrimination and sustained attention. Psychological Science, 21(6), 829-839. doi:10.1177/0956797610371339

    Mindful Schools. (2017). Research on mindfulness in education [Web log page]. Retrieved from http://www.mindfulschools.org/about-mindfulness/research/

    Mrazek, M. D., Franklin, M. S., Phillips, D. T., Baird, B., & Schooler, J. W. (2013). Mindfulness training improves working memory capacity and GRE performance while reducing mind wandering. Psychological Science, 24(5), 776-781. doi:10.1177/0956797612459659

    Ramsburg, J. T., & Youmans, R. J. (2014). Meditation in the higher-education classroom: Meditation training improves student knowledge retention during lectures. Mindfulness, 5(4), 431-441. doi:10.1007/s12671-013-0199-5

    Sedlmeier, P., Eberth, J., Schwarz, M., Zimmermann, D., Haarig, F., Jaeger, S., & Kunze, S. (2012). The psychological effects of meditation: A meta-analysis. Psychological Bulletin, 138(6), 1139-1171. doi:10.1037/a0028168

    Tamagawa, R., Speca, M., Stephen, J., Pickering, B., Lawlor-Savage, L., & Carlson, L. E. (2015). Predictors and effects of class attendance and home practice of yoga and meditation among breast cancer survivors in a Mindfulness-Based Cancer Recovery (MBCR) program. Mindfulness, 6(5), 1201-1201. doi:10.1007/s12671-014-0381-4

    Wilson, B. M., Mickes, L., Stolarz-Fantino, S., Evrard, M., & Fantino, E. (2015). Increased false-memory susceptibility after mindfulness meditation. Psychological Science, 26(10), 1567-1573. doi: 10.1177/0956797615593705

     

     

    Dara G. Friedman-Wheeler is a licensed clinical psychologist and Associate Professor of Psychology at Goucher College, in Baltimore, MD. She earned her Ph.D. in Clinical Psychology from American University in Washington, DC. She teaches courses on psychological distress and disorder (abnormal psychology), health psychology, quantitative research methods, and emotion regulation, and serves as core faculty for Goucher’s public health minor. She has experience working with patients in the public sector with presenting problems such as mood disorders, anxiety disorders, suicidality, chronic pain, chronic illness, substance abuse/dependence, and personality disorders. She has co-authored empirical journal articles and the book Group Cognitive Therapy for Addictions (with Drs. Wenzel, Liese, and Beck), served as associate editor for the SAGE Encyclopedia of Abnormal and Clinical Psychology, and has received several awards from the National Institutes of Health. Her interests are in the areas of coping, health, addictions, behavior change, cognitive therapy, and mood disorders.

     

    Jennifer A. McCabe is an Associate Professor of Psychology, and director of the Center for Psychology, at Goucher College in Baltimore, MD. She earned her Ph.D. in Cognitive Psychology from the University of North Carolina at Chapel Hill. She teaches courses on human cognition, as well as introductory psychology. Her research focuses on memory strategies, metacognition, and the scholarship of teaching and learning. She has been recently published in Memory and Cognition, Psychological Science in the Public Interest, Teaching of Psychology, Instructional Science, and Psi Chi Journal of Psychological Research. Supported by Instructional Resource Awards from the Society for the Teaching of Psychology, she has also published two online resources for psychology educators on the topics of mnemonics and memory-strategy demonstrations. She is a consulting editor for Teaching of Psychology.

     

  • 14 Nov 2017 1:04 PM | Anonymous

    Fantasy Researcher League: Engaging Students in Psychological Research
    Daniel R. VanHorn, North Central College

    In this essay, I describe a Fantasy Researcher League course design that I presented to a group of colleagues at the National Institute on the Teaching of Psychology (NITOP) in 2013. This innovative course was designed to get students excited about psychological research. I am grateful for the encouragement and feedback that I received from those who attended the institute. I have divided this essay into four sections. First, I describe the motivation behind the development of the course. Second, I describe the course itself. Third, I present survey data collected from students who have taken the course. Finally, I discuss how this course might be used in the future.


    Motivation

    While students may not complete textbook reading assignments regularly (Burchfield & Sappington, 2000; Clump, Bauer, & Bradley, 2004), they do often find value in the primary textbook assigned for a course (Carpenter, Bullock, & Potter, 2006). For example, a textbook is often a very useful quick reference guide. Textbooks are also helpful because they simplify and clarify psychological research. The problem with textbooks is that, in truth, psychological research is not simple and clear, but rather complex and messy. Textbooks also often present information as if it were finalized, rather than an ongoing process and dialogue among experts in the field. Finally, many textbooks are not structured in a way that enables critical evaluation of the research they present. Reading and discussing primary sources (e.g., articles with original research that are published in peer-reviewed journals) provides an alternative to textbooks, and I believe students benefit significantly from working with primary sources in psychology. When students work with primary sources, they begin to appreciate the intricate work behind what textbooks present as statements of obvious fact. They start to see that psychological research is constantly evolving and that there is still much to be learned. Working with the psychological literature also helps students develop critical thinking skills (Anisfeld, 1987; Chamberlain & Burrough, 1985). They learn to critically examine evidence and use that evidence to evaluate theories and/or claims.

    A significant challenge that many psychology teachers, including myself, face is getting students to engage with psychological research. Reading and thinking about psychological research is difficult, so we have to find creative ways to motivate our students to work with primary sources in psychology. One approach is to take the things that excite our students outside the classroom and implement them inside the classroom. Keeping this approach in mind, I looked to fantasy sports for help in getting my students engaged with the psychological literature.

    Fantasy sports are extremely popular. The Fantasy Sports Trade Association (2013) estimated that in 2013 the American fantasy sports market included over 35 million players. Fantasy sports that are available to players include baseball, basketball, football, hockey, soccer, golf, and auto racing. In fantasy sports, approximately 8-14 participants get together and form a league in the sport of their choice. For example, a small group of friends might form a fantasy professional American football league. Each participant in the league selects current professional American football players that make up their fantasy team. The players on a participant’s team score points based on how they perform in real-life games (e.g., how many yards they gain and how many touchdowns they score), and the participants’ teams compete against each other.


    The Course

    I feel that fantasy sports provide a model that can be used in classrooms to engage students. I took the fantasy sports model and modified it to engage students in psychological research by creating a course that took the form of a game. The official title of the course was Immersion in the Psychological Literature, but the course became known to students and faculty alike as Fantasy Researcher League. The official learning objectives of the course included the following: effectively search for published research and track research lines/programs, describe the research programs of several prominent psychologists, explain the current theory and findings of a few threads of research in the field, and identify how psychological theory and research evolve over the course of a research program. In addition to the official learning objectives described above, I wanted to show students that psychological research is dynamic. It is evolutionary. What students read in their textbooks is old news. I wanted my students to be on the cutting edge of psychological research and get a sense of what it feels like to discover something new. I hoped to get my students excited about research in psychology. I also wanted them to discuss psychology outside of a traditional classroom setting in a place where they would exchange ideas and not worry about whether they were getting a C+ or a B- in the course. Finally, I wanted them to discover their passion by having the freedom to explore their own academic interests.

    The course consisted of a small group of students who met with faculty approximately every three weeks throughout the academic year. At the beginning of the course, the faculty members teaching the course put together a list of several prominent psychology researchers from a variety of research areas. Students were given the opportunity to add other researchers to this list. All the researchers on the list had to be currently active in the discipline. Each student drafted a team of five researchers from the finalized list. Each researcher could only be selected once. These teams made up our fantasy researcher league. Each student then selected one published article by each of their five researchers and tracked the number of times each article was cited during the course of the game. Students had the option to replace their articles at the beginning of each term. Students also kept track of all of their researchers’ scholarly activities and accomplishments (e.g., books, articles, and presentations) during the academic year. Students documented their researchers’ productivity by designing and maintaining a team webpage. A student earned points for their team by correctly documenting their team’s scholarly activities and citations. The league scoring system is described in Table 1.


    Table 1

    Fantasy Researcher League Scoring System

    Scholarly Activity                        Points
    Book, single author                       8
    Book, co-author                           4
    Book, editor                              3
    Book chapter, author                      3
    Article, first author                     4
    Article, other than first author          2
    Citation                                  1
    Presentation                              3
    Grant/Award                               3
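    Because the scoring rules in Table 1 amount to a simple lookup, tallying a team’s points is easy to automate. Below is a minimal Python sketch; the activity labels and sample log are hypothetical illustrations, not part of the actual course materials.

```python
# Point values from Table 1.
POINTS = {
    "book, single author": 8,
    "book, co-author": 4,
    "book, editor": 3,
    "book chapter, author": 3,
    "article, first author": 4,
    "article, other than first author": 2,
    "citation": 1,
    "presentation": 3,
    "grant/award": 3,
}

def team_score(activity_log):
    """Sum the points for a team's documented scholarly activities."""
    return sum(POINTS[activity] for activity in activity_log)

# Hypothetical season log for one five-researcher team.
log = ["article, first author", "citation", "citation",
       "presentation", "book chapter, author", "grant/award"]
print(team_score(log))  # 4 + 1 + 1 + 3 + 3 + 3 = 15
```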

    During class meetings, students discussed the recent research activity of their teams. Students were also asked to connect their researchers’ current work to their researchers’ past work. At the end of each class, team scores were updated and high-scoring teams were recognized.


    Survey Data

    Five students who participated in the course during the fall of 2011 and eight students who participated during the winter of 2012 completed a voluntary survey on which they indicated how much they agreed or disagreed with specific statements related to the learning objectives for the course. Ratings ranged from 1 (strongly disagree) to 7 (strongly agree). Student responses to the closed-ended survey questions are shown in Table 2, and they suggest that we met our learning objectives. The vast majority of students agreed that they developed basic research skills, understood and could discuss cutting-edge research, learned about today’s prominent psychological researchers, and learned how research programs evolve over time.
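    The recoding from the 7-point response scale to the three bins reported in Table 2 is a simple transformation; a minimal sketch follows, using invented ratings rather than the actual survey responses.

```python
from collections import Counter

def recode(rating: int) -> str:
    """Collapse a 1-7 agreement rating into the three bins used in Table 2."""
    if rating <= 3:
        return "Disagree"
    if rating == 4:
        return "Neutral"
    return "Agree"

# Invented ratings from 13 respondents for a single survey item.
ratings = [7, 6, 5, 7, 6, 6, 5, 7, 4, 6, 5, 7, 3]
print(Counter(recode(r) for r in ratings))
# Counter({'Agree': 11, 'Neutral': 1, 'Disagree': 1})
```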


    Table 2

    Student Survey Responses on Course Learning Objectives
    (7-point scale recoded to 3-point scale: Agree = 5-7, Neutral = 4, Disagree = 1-3)

    As a result of participating in this course,

    ·       I can better search PsycInfo to locate research-related material and people. (Agree: 11, Neutral: 1, Disagree: 1)

    ·       I can more effectively search for psychological research and researchers in electronic sources. (Agree: 12, Neutral: 1, Disagree: 0)

    ·       I am more familiar with the intellectual history and background of some psychology researchers. (Agree: 11, Neutral: 2, Disagree: 0)

    ·       I am more familiar with some of the most current research in psychology. (Agree: 13, Neutral: 0, Disagree: 0)

    ·       I feel more competent at presenting and discussing a researcher’s current research. (Agree: 13, Neutral: 0, Disagree: 0)

    ·       I have a better understanding of how a researcher’s program of research or interests evolves over time. (Agree: 11, Neutral: 2, Disagree: 0)

    ·       I can describe the research program of several prominent psychology researchers. (Agree: 9, Neutral: 4, Disagree: 0)

    ·       I have a better sense of which areas of psychology interest me and which do not. (Agree: 13, Neutral: 0, Disagree: 0)

    ·       I can better create and edit webpages. (Agree: 13, Neutral: 0, Disagree: 0)

    Students were then asked to describe what they learned in the class beyond the topics already covered in the closed-ended survey questions. Responses to these questions suggest that students enjoyed the social nature of the game, learned more about psychological research, and began to discover what areas of psychology interest them most. Examples of student responses to this open-ended question are included below.

    ·        “I was able to find researchers that I would be interested in following later.”

    ·         “I learned what areas in psychology interest me, which has helped me make decisions for my future.”

    ·        “How to effectively create a webpage.”

    ·        “What modern research is like.”

    ·        “Better research skills.”

    ·        “How to find articles that cite another article.”

    ·        “Winning!”


    The Future

    Student surveys suggest that the fantasy researcher league model engages students in psychological research and provides an exciting alternative to traditional courses and/or assignments. The fantasy researcher league model gets students to read and discuss primary sources. This is crucial because working with primary sources is one way for students to develop critical thinking skills (Anisfeld, 1987; Chamberlain & Burrough, 1985). The fantasy researcher league model also helps create a learning community where students play a central role in learning and discovery. It is the students who select the researchers and research topics that are presented and discussed in class. In the fantasy researcher league model, teachers provide the initial structure of the course but then focus on supporting and empowering student learning and discovery. In the future, I envision a fantasy researcher league online gaming experience that can be used in a variety of disciplines and can bring together team managers from a single college or from across the world. In the meantime, I believe that the fantasy researcher league course described here could be incorporated into many courses as a long-term research project. In my course, students worked individually, but I believe the project would also work well if completed in small groups.

     

    References

    Anisfeld, M. (1987). A course to develop competence in critical reading of empirical research in psychology. Teaching of Psychology, 14(4), 224-227. doi:10.1207/s15328023top1404_8


    Burchfield, C. M., & Sappington, J. (2000). Compliance with required reading assignments. Teaching of Psychology, 27(1), 58-60.

    Carpenter, P., Bullock, A., & Potter, J. (2006). Textbooks in teaching and learning: The views of students and their teachers. Brookes eJournal of Learning and Teaching, 2(1). Retrieved from http://bejlt.brookes.ac.uk/

    Chamberlain, K., & Burrough, S. (1985). Techniques for teaching critical reading. Teaching of Psychology, 12(4), 213-215. doi:10.1207/s15328023top1204_8

    Clump, M. A., Bauer, H., & Bradley, C. (2004). The extent to which psychology students read textbooks: A multiple class analysis of reading across the psychology curriculum. Journal of Instructional Psychology, 31(3), 227-232.

    Fantasy Sports Trade Association. (2013). Home page. Retrieved from http://www.fsta.org/

    ***************************

    Daniel R. VanHorn earned his B.S. in psychology from Wittenberg University in 2003. He earned his M.S. (2005) and Ph.D. (2009) in cognitive psychology from Purdue University. He is currently an Assistant Professor of Psychology at North Central College in Naperville, Illinois. He regularly teaches introductory psychology, cognitive psychology, statistics, and research methods. He also has an active research program in cognitive psychology where he trains aspiring psychologists.
  • 02 Nov 2017 5:20 PM | Anonymous

    Do These Things Even Work? A Call for Research on Study Guides

    J. Hackathorn, A. W. Joyce,  and M. J. Bordieri
    Murray State University

    If one had to predict the most common question asked by students each semester, it would be: “What will be on the test?” Moreover, this question is frequently and predictably followed by requests for a study guide. As good, well-meaning instructors, many of us sigh (maybe cry a little) but ultimately provide them. In fact, many of us even include them in course materials prior to the actual request, just to avoid the conversation. Given how common these requests are, it is surprising that there is little actual research regarding the effectiveness of study guides. A quick Google Scholar search using key terms such as study guides and exam guides yields only a handful of results, many of which are dated and focused on creating study guides (as opposed to assessing them). Thus, we suddenly found ourselves asking: How much do we really know about study guides? Do these things even work?

    Arguably, any strategy or aid should help students perform better on exams than nothing at all. However, some of the resources that students prefer may actually hinder their performance rather than help it. For example, in an analysis of learning-aid use and exam performance, Gurung (2004) found that students rate textbooks’ bolded key terms as the most helpful study aid, yet the perceived helpfulness of this resource relates negatively to exam performance. Conversely, what students rate as least helpful (i.e., active review practices) has the strongest evidence of improving exam performance (e.g., Dickson, Miller, & Devoley, 2005). In another example, a comparison of exam review styles found that, although students do not prefer traditional (i.e., student-directed question-and-answer format) exam reviews, their exam performance is highest when they use this style, as compared to other styles (Hackathorn, Cornell, Garczynski, Solomon, Blankmeyer, & Tennial, 2012). Ultimately, this suggests a mismatch between what we (perhaps both the learner and the instructor) prefer and what actually improves knowledge, understanding, and exam performance.

    To increase our understanding of study guides, the authors of this essay, along with other faculty members, recently conducted two separate studies (Cushen et al., under review), using the General Psychology population at Murray State University (MSU). In the first study, we conducted a small experiment using all of the sections of General Psychology offered during a single semester at MSU. Using counterbalancing and random assignment of sections, we compared exam performance following an instructor-provided concept-list study guide to performance following student-generated study guides. Then, at the end of the semester, we queried students’ preferences and gave another brief quiz over material from the first two exams. Our results indicate that, despite benefiting the most from creating their own study guides, students strongly prefer the instructor-provided guides.

    In a second study, after realizing that we had been making assumptions by limiting study guides to concept lists and student-generated guides, we simply asked our students to identify the types of study guides they prefer. Replicating past studies showing that students tend to prefer the least helpful study tools, we found that students prefer that the instructor provide study guides that include a list of concepts, followed by definitions and examples of application. In other words, students prefer that the instructor create what could ostensibly be referred to as “their notes.” They prefer excerpts from the textbooks and simple concept lists the least, but prefer an instructor-provided concept-list style more than nothing at all or creating their own study guide. In examining their preferences, we realize that it is probably not happenstance that the least preferred study guide styles are also the styles that require the most effort from the student to actively summarize, organize, or synthesize course concepts.

    Obviously, the next question is: What do we do with this information?  We do not believe that we should “throw the baby out with the bathwater.” In Fall of 2016, the primary author of this essay attempted to explain to one class why she would no longer provide study guides, and she was almost the victim of a lynch mob. Perhaps that is hyperbole. Still, the students did not appear to believe that the lack of instructor-provided study guides was in their best interest. In hindsight, the instructor may have been too quick to implement this change. There is much more information needed in this regard.

    In our initial experiment, we tested the efficacy of a concept-list style study guide. Basically, we used the style of study guide that answers the ever-present question: “What is on the test?  What should I study?”  Correctly using this style means that students have to then find definitions, create mental models, links, and organization, and create their own application examples. However, it is unclear how many actually do that. It is possible that, instead, students simply look at the list, recognize the terms, and think that they have studied enough to be prepared for the exam. Future research is needed to see exactly what students do with those study guides.

    In that same vein, beyond not knowing how to properly use a study guide, it is also possible that students do not know how to create a study guide. Although it is important for students to know how to facilitate their own learning, many students have defective study strategies (Bjork, Dunlosky, & Kornell, 2013). Our participants were students in a freshman-level course, with the vast majority being first-semester freshmen. Creating a study guide, especially an effective one, is hard work and takes a clear understanding of what type of information is important. Freshmen, specifically, may struggle with this skill. For example, in a recent General Psychology homework assignment, students were asked to create a mnemonic device related to neurotransmitters. The instructor was quite surprised when many of the students created an acronym depicting an arbitrary list of neurotransmitter names. Sadly, there were no exam questions that would ask them to provide a random list of neurotransmitters. Suffice it to say, freshmen may not have a strong understanding of what it takes to succeed on rigorous college-level exams.

    Unfortunately, many new college students will find, perhaps too late, that their high school strategy of simply memorizing definitions will not be as successful in the college classroom. Thus, one of the first steps toward student success may involve taking time to teach them how to create good study aids. In our experiment, we do not report the types of study guides that students self-create. We can only imagine (and have discussed at great length) that they are probably terrible. A cursory review across a subsample of our students confirms that the vast majority fail to consistently generate examples of course content and instead provide a simple list of terms and definitions or a chapter outline in their self-created guides. However, regardless of the quality of the study guides, students’ exam grades are still higher when they create their own. As a result, even if the instructor gives students a foundation with the concept-list style, teaching them how to improve those study guides should prove fruitful. This assumes, of course, that we can convince students to try a new, potentially more intensive and effortful, study technique and actually use it, rather than backtracking into old habits as the exam date looms closer (Dembo & Seli, 2004).

    Unfortunately, it is still unclear which types of study guides are the most beneficial. Outside of the extensive work of Karen Wood (Wood, 1989, 1992), who outlines various types of study guides and their individualized purposes, there is a dearth of information regarding which types of study guides are the most effective and in which situations they are effective. The type of study guide one might use in an introductory course, where students are being given a foundation for future classes, is probably very different from the guide one might use in an applied research methods course in which students are practicing a skill. Thus, much more information is needed with regard to not only the general efficacy, but also the relevance and applicability, of study guides across different courses and learners.

    Finally, as tends to be the case in many of our classes, students sometimes appear to dislike assignments that really challenge them or require effort. It is probably not a coincidence that students prefer the study aids that require the least effort from them. And, before we all get migraines from rolling our eyes, it is important to consider that students may not realize that this relationship exists. As an example, in a recent end-of-semester evaluation comment, a student requested the following: “I do not want to be spoon-fed the information, but it would be nice if we could be provided with a list of concepts, in order from the most important to the least important, to help us study for exams.” Clearly, this student fails to see the connection between spoon-feeding and the study guide that they requested. Moreover, we doubt this student is alone in this desire. As such, asking students to create their own study guides may result in backlash. Importantly, some, if not all, of this backlash can be reduced with transparency, communication, and rapport. However, instructors will need to assess the risk/benefit ratio of implementing a change like this.

    The most surprising aspect of our research is that very few of us question our own use of study guides, even though, frankly, we tire of creating them. Many of us create these study guides because the students ask for them, or to avoid potential mutiny. Yet, even though study guides have been around for so long and are so ubiquitous in higher education, very few of us ask whether they work. It is important to note that this does not make us (or you) bad instructors. Care and effort for students in any form should never be disregarded. In fact, we suspect that there are myriad instructors who have found ways to improve the effectiveness of study guides, but have yet to publish them. Thus, this essay is a mere call to action. Help us, help them; help us, help ourselves.

     

    References

    Bjork, R. A., Dunlosky, J., & Kornell, N. (2013). Self-regulated learning: Beliefs, techniques, and illusions. Annual Review of Psychology, 64, 417-444.

    Cushen, P., Vázquez Brown, M., Hackathorn, J., Rife, S. C., Joyce, A. W., Smith, E., …Daniels, J. (under review). “What’s on the test?”: The impact of giving students a concept-list study guide.

    Dembo, M. H., & Seli, H. P. (2004). Students' resistance to change in learning strategies courses. Journal of Developmental Education, 27(3), 2 - 11.

    Dickson, K. L., Miller, M. D., & Devoley, M. S. (2005). Effect of textbook study guides on student performance in introductory psychology. Teaching of Psychology, 32(1), 34-39.

    Gurung, R. A. (2004). Pedagogical aids: Learning enhancers or dangerous detours? Teaching of Psychology, 31(3), 164-166.

    Hackathorn, J., Cornell, K., Garczynski, A., Solomon, E., Blankmeyer, K., & Tennial, R. (2012). Examining exam reviews: A comparison of exam scores and attitudes. Journal of the Scholarship of Teaching and Learning, 12(3), 78-87.

    Wood, K. D. (1989). The Study Guide: A Strategy Review. Paper presented at the Annual Meeting of the College Reading Association. Philadelphia, PA.

    Wood, K. D. (1992). Guiding readers through text: A review of study guides. Newark, DE: International Reading Association.

     

     

    Biographical Sketch

    Dr. Jana Hackathorn, Dr. Amanda W. Joyce, and Dr. Michael J. Bordieri are all junior faculty at Murray State University in Murray, KY. Between them, they study everything from close relationships to inhibition in children, from sex to mindfulness, and of course from pedagogy to teaching effectiveness. Last year, the entire junior faculty in Psychology at Murray State (there are a total of eight of them) pooled their efforts to conduct a study examining a topic about which they had all complained: student demands for study guides. As a result of the study, they bonded, resulting in much happier happy hours and a very functional, albeit odd, departmental atmosphere.

  • 04 Oct 2017 7:19 PM | Anonymous

    Team-Based Learning: A Tool for Your Pedagogical Toolbox

     Krisztina V. Jakobsen
    James Madison University

    Teachers whose styles match the pedagogical methods they use create a more authentic and effective teaching and learning experience. There are a variety of strategies in the literature for teachers who would like to move away from a purely lecture format. One of those, Team-Based Learning (TBL), is a method I have been using for several years to encourage students to be actively involved in their learning. Similar to the ideals associated with a flipped classroom (Jakobsen & Knetemann, in press), students learn primary course content outside of the classroom and work in permanent teams with the material during class (Michaelsen, Bauman, Knight, & Fink, 2004). Below, I outline the core components of TBL and share a few studies that my students and I have done examining its impact on student learning.

     The TBL Process

    Readiness Assurance Process

    The first stage in the TBL process, the Readiness Assurance Process, ensures that students understand course material; it includes preparation outside of class, quizzes in class, and a short lecture. Students prepare for class by reading the textbook, watching videos, and/or answering guided questions. When students come to class, they take a multiple-choice quiz individually, which assesses their understanding of the course material at various levels of Bloom’s taxonomy. The individual quiz holds students accountable for completing their out-of-class preparation. Next, students work in their teams to complete the same multiple-choice quiz again. Students receive immediate feedback on their team quiz using scratch-off IF-AT forms. After the team quiz, students may appeal any questions they missed, which requires them to revisit course materials and provides an opportunity to make a compelling case for alternate answers based on those materials. Finally, teams submit any questions they still have about the material, and the instructor gives a short “muddiest points” lecture. The Readiness Assurance Process takes 50-75 minutes to complete.
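    Instructors often combine the individual and team quizzes into a single Readiness Assurance grade. The sketch below illustrates one plausible scheme; the 75/25 weighting and the IF-AT partial-credit values are illustrative assumptions, not fixed features of TBL or of my course.

```python
def ifat_item_score(scratches: int, max_points: float = 1.0) -> float:
    """Illustrative IF-AT partial credit: full credit if the first scratch
    reveals the answer, half on the second, a quarter on the third, else 0."""
    credit = {1: 1.0, 2: 0.5, 3: 0.25}
    return max_points * credit.get(scratches, 0.0)

def readiness_assurance_grade(individual_pct: float, team_pct: float,
                              individual_weight: float = 0.75) -> float:
    """Blend individual and team quiz percentages (75/25 is an assumed split)."""
    return individual_weight * individual_pct + (1 - individual_weight) * team_pct

# Example: a team answers 10 items, finding 8 on the first scratch and 2 on the second.
team_pct = sum(ifat_item_score(s) for s in [1] * 8 + [2] * 2) / 10 * 100  # 90.0
print(readiness_assurance_grade(individual_pct=80.0, team_pct=team_pct))  # 82.5
```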

    Application Exercises

    After the completion of the Readiness Assurance Process, students should have the necessary knowledge to complete application exercises, which usually take 2-4 class periods. Depending on the complexity of the questions, students may complete 2-5 application exercise questions during a class period. The application exercises have a deliberate structure that allows teams to focus on the relevant course material and facilitates team and class discussions. The keys to developing successful application exercises are having all teams work on the same questions, requiring teams to make a simple choice, and having teams report their answer choices simultaneously. To see the importance of this structure, consider the type and quality of discussions students may have with an open-ended question (Question 1 below) compared to a more directed question (Question 2 below).

    Question 1: This class is structured using Team-Based Learning (TBL), in which you learn the primary course content outside of class and then work in permanent teams during class to get a deeper understanding of the material. Identify at least one way in which each of the theories below helps you understand why the TBL structure is an effective teaching method.

    A.    Operant conditioning
    B.     Piaget’s theory
    C.     Vygotsky’s theory
    D.    Information processing theories

    Question 2: This class is structured using Team-Based Learning (TBL), in which you learn the primary course content outside of class and then work in permanent teams during class to get a deeper understanding of the material. Decide which of the following theories is most prominent in the TBL structure.  Be prepared to support your answer.

    A.    Operant conditioning
    B.     Piaget’s theory
    C.     Vygotsky’s theory
    D.    Information processing theories

    While Question 1 asks students to apply what they know about the theories to the structure of TBL, it may not generate much discussion. Question 2, in contrast, meets each of the deliberate requirements of the application exercises. All teams are presented with the same problem. Teams have to make a choice among options A-D. For this particular question, all of the answer choices are correct, so what will generate discussion among teams is the rationale behind their decisions. Finally, because the answer choices are very clear, it is easy for teams to report their decisions simultaneously, for example by holding up cards.

    Does it Work?

    Students generally have positive experiences with TBL. They also seem to enjoy the structure (e.g., Abdelkhalek, Hussein, Gibbs, & Hamdy, 2010) and report perceiving TBL as an effective teaching method (e.g., Haberyan, 2007). The results are mixed in terms of the impact of TBL on academic outcomes compared to more traditional teaching methods (e.g., Carmichael, 2009; Jakobsen, McIlreavy, & Marrs, 2014), and little work has been done on how TBL affects retention (e.g., Emke, Butler, & Larsen, 2016). Over the years, I have worked with student research assistants to collect data in lab-based and classroom-based studies to examine the effectiveness of TBL in promoting recognition memory and retention compared to other pedagogical methods. Here, I present the results of two of those studies.

    In a lab-based study, time slots were randomly assigned to one of four conditions, as follows:

    • Team-Based Learning: Participants read an article upon arrival to the session, then completed the Readiness Assurance Process and application exercises.
    • Lecture: Participants received a lecture based on the content of the article and took notes during the lecture.
    • Reading: Participants read the article and took notes as they read.
    • Control: Participants completed an anagram.

    One week later, all participants took a 10-item multiple-choice quiz to measure their retention of the material from the week before. The results revealed that participants in the TBL and Lecture sessions did not differ on their scores (p = .141), but participants in the TBL session outperformed participants in the Reading (p = .018) and Control sessions (p < .001). These results suggest that TBL and lecture are both effective ways of teaching, particularly in short-term sessions (e.g., workshops).

    In a class-based study, two classes were randomly assigned to be taught using TBL or Lecture. During the semester, students in the TBL class completed the Readiness Assurance Process and application exercises, while students in the Lecture class received lectures with active learning components. Students' understanding of course material was assessed at three time points: (1) a pre-test at the beginning of the semester, (2) the final at the end of the semester, and (3) a post-test three months after the completion of the course. Students completed 28 multiple-choice questions at each of the three time points. We based our analyses on students who contributed data at all three time points (N = 34). Students in the TBL and Lecture classes did not differ on their pre-test scores (p = .052) or their post-test scores (p = .052). Students in the TBL class performed better than students in the Lecture class on the final (p = .021), suggesting that TBL may enhance short-term retention of course material. The results of this class study are consistent with those of Emke et al. (2016), in which TBL led to better short-term, but not long-term, retention of course material.

    Implementation and Conclusions

    Implementing TBL as outlined above requires some upfront investment in organizing and creating preparatory materials, quizzes, and application exercises. The good news is that components of TBL can be implemented in nearly any class with relative ease. For example, it is easy to add a team quiz to existing individual quizzes, and once students have the content knowledge (e.g., through lectures), application exercises can be added a little at a time.

    While there is likely no one pedagogical technique that will work for every instructor, data from the TBL literature and from my own research suggest that TBL is at least as good as other strategies. These results should encourage teachers to work in the areas in which they are most comfortable and to cultivate the skills they feel are important, whether those are central to the course objectives or merely desirable.

    Author note

    Portions of this essay were presented at STP's Annual Conference on Teaching, Decatur, Georgia, October 2016. This project was supported by the Society for the Teaching of Psychology's Scholarship of Teaching and Learning Grant and the Alvin V., Jr. and Nancy C. Baird Professorship to KVJ.

    Resources

    The following websites offer wonderful resources for learning more about and getting started with TBL: Learntbl.ca and www.teambasedlearning.org/

    References

    Abdelkhalek, N., Hussein, A., Gibbs, T., & Hamdy, H. (2010). Using team-based learning to prepare medical students for future problem-based learning. Medical Teacher, 32, 123–129. doi: 10.3109/01421590903548539

    Carmichael, J. (2009). Team-based learning enhances performance in introductory biology. Journal of College Science Teaching, 38, 54–61.

    Emke, A. R., Butler, A. C., & Larsen, D. P. (2016). Effects of Team-Based Learning on short-term and long-term retention of factual knowledge. Medical Teacher, 38, 306-311. doi: 10.3109/0142159X.2015.1034663

    Haberyan, A. (2007). Team-based learning in an industrial/organizational psychology course. North American Journal of Psychology, 9, 143–152.

    Jakobsen & Knetemann. (in press). Putting structure to flipped classrooms using Team-Based Learning. International Journal of Teaching and Learning in Higher Education.

    Jakobsen, K. V., McIlreavy, M., & Marrs, S. (2014). Team-based Learning: The importance of attendance. Psychology Learning & Teaching, 13(1), 25-31. doi: 10.2304/plat.2014.13.1.25

    Michaelsen, L. K., Knight, A. B., & Fink, L. (2004). Team-based learning: A transformative use of small groups in college teaching. Sterling, VA: Stylus Publishing.

     

    Krisztina V. Jakobsen is an Associate Professor in the Department of Psychology at James Madison University. She teaches developmental psychology classes in the General Education Program and in the Department of Psychology. Her research interests include effective teaching methods and social cognition in infants.


  • 07 Sep 2017 9:15 AM | Anonymous
    Technology Bans and Student Experience in the College Classroom

     Thomas Hutcheon, Ph.D.

    Bard College

     

    Personal technologies, including laptops and cell phones, have infiltrated the college classroom. Instructors must now decide whether to implement a ban on the unsupervised use of personal technologies in their courses. Anecdotal evidence ("students always seem to be looking at their computer screens and not at me during class") and results from recent studies linking the unsupervised use of technology with reductions in academic performance have led to declarations that the time to ban technology use in the classroom is now (Rosenblum, 2017). However, it is important for individual instructors to critically evaluate the empirical evidence in favor of technology bans when deciding on the approach to take in their classrooms. Moreover, the impact bans have on students' experience within a course remains largely unknown. The purpose of this essay is to review the evidence in favor of a technology ban, to describe recent results suggesting that a ban can harm students' engagement, and to provide recommendations to aid instructors in developing a technology policy for their classrooms.

    Broadly speaking, two primary mechanisms have been proposed to explain the relationship between unsupervised technology use in the classroom and reduced academic performance: misdirection of cognitive resources and superficial encoding of information. First, the presence of personal technology in the classroom gives students a direct line to distracting information via social media, games, and the internet. Diverting cognitive resources toward online shopping or texting with friends necessarily draws resources away from what is happening in the classroom. This misdirection of resources means that students do not fully process the material presented during lecture, which can harm performance (Fried, 2008; Wood et al., 2012). Importantly, the use of technology may lead to the misdirection of resources not only for the student using the technology, but also for students sitting nearby, and even for the instructor (Aguilar-Roca, Williams, & O'Dowd, 2012).

    Second, even when students are prevented from accessing the internet or other distractions, the use of laptops leads to relatively superficial encoding of lecture information. Students randomly assigned to take lecture notes using a laptop perform worse on follow-up memory tests of lecture material compared to students randomly assigned to take notes using paper and pencil (Hembrooke & Gay, 2003; Mueller & Oppenheimer, 2014). This finding has been explained by differences in note-taking strategies. Specifically, students using a laptop appear to adopt a verbatim strategy in which they type everything that is said during the lecture. In contrast, students using paper and pencil reframe the information from the lecture in their own words as they write it down. This reframing requires deeper encoding of the information and leads to better retention of the material (Mueller & Oppenheimer, 2014). Thus, even when students successfully resist temptation and devote their resources to taking notes, laptop use still harms retention of material presented during a lecture.

    However, there are three things to keep in mind when using the findings reviewed above as the basis for your own classroom policy.

    First, studies cited as evidence for the implementation of technology bans generally use either an experimental or a correlational approach. In the typical experimental approach, participants are randomly assigned to use a laptop or paper and pencil to take notes while listening to a lecture, and learning is frequently assessed by a quiz on the material presented at the end of the lecture (Wood et al., 2012). Although students using laptops tend to perform worse than those who do not, this procedure differs from learning information over the course of a semester, during which students can enact strategies while studying, such as reading the textbook or asking a fellow student, to make up for distracted moments in class. The correlational approach collects various measures of student performance, such as GPA and exam grades, and correlates these with students' reported cell phone and laptop usage. The negative correlation between GPA and frequency of technology use is commonly interpreted as technology usage causing a decrease in performance. However, given the nature of correlational research, it could equally be interpreted as weaker students tending to bring their laptops into the classroom (Fried, 2008). In other words, because a causal relationship cannot be drawn between the use of laptops and class performance, removing access to laptops might not lead to changes in performance.

    Second, the real-world impact of technology usage on student performance needs to be considered. What does a statistically significant reduction in performance for students using laptops mean for an individual student sitting in one of our classes? One illustrative example comes from a rigorous, large-scale study conducted at the United States Military Academy at West Point. For an entire semester, first-year students enrolled in Principles of Economics were randomly assigned to take notes on a laptop, on a tablet, or using paper and pencil. The results from this sample of over 700 students yielded a statistically significant impact on performance: students in the laptop and tablet conditions performed worse on the final exam than students in the paper-and-pencil condition. Although statistically significant, the effect amounted to a decrease of 1.7% on the final exam for students in the laptop or tablet condition (Carter, Greenberg, & Walker, 2016), roughly the difference between an 85 and an 83 out of 100. Thus, despite the presumed chronic misdirection of resources and superficial encoding of information that students experience when using technology, the real-world performance cost is small. While any improvement in performance is welcome, there are many simple techniques that instructors can implement over the course of a semester that improve exam performance to a greater extent, including retrieval practice at the end of a lecture (e.g., Lyle & Crawford, 2011).

    Third, to date, little research has assessed the impact of a technology ban on students' experience within a class. However, recent research conducted in my lab, presented at the Society for the Teaching of Psychology Annual Conference on Teaching (Hutcheon, Richard, & Lian, 2016), indicates that implementing a technology ban reduces student engagement. Specifically, using data from sixty-nine undergraduate students across four sections of Introduction to Psychology taught by the same instructor, students randomly assigned to a technology-ban section reported lower levels of engagement in the course than students randomly assigned to a technology-permitted section, as assessed by the Student Course Engagement Questionnaire (SCEQ; Handelsman, Briggs, Sullivan, & Towler, 2005). Interestingly, the students in our sample reported a relatively low frequency of cell phone use during a typical class (mean = 2.38), and the vast majority reported a preference for taking notes using paper and pencil (N = 61) rather than laptops (N = 8). In fact, looking at the data for the 61 students who preferred taking notes with paper and pencil, we observed a significant reduction in engagement as a function of the laptop ban. In other words, the technology ban reduced the engagement of students who would not even have used technology in the classroom. These findings suggest that students are sensitive to the structure and rules of the classroom environment, and rules viewed as limiting their choices may affect how much students engage with the material and the instructor.

    In contrast to the results of Carter et al. (2016), we observed a marginally significant reduction in end-of-year grades for students in the technology-ban condition compared to the technology-permitted condition. This suggests that the impact of a technology ban on students' performance may not be the same across all classroom environments. Specifically, students in a more traditional, small liberal arts environment (Bard College, compared to West Point) may be more affected by the implementation of such bans.

     Recommendations

    Consider the make-up of your class.  If you are teaching a small class in which students might not spontaneously use technology, the implementation of a technology ban could negatively impact student experience and performance in the class.  In contrast, if you are teaching a large lecture class in which students might feel less engaged to begin with, the ban might help their experience and performance.

    Minimize the distraction of others. If you decide not to implement a ban, think about ways to prevent students who choose to use laptops from distracting those who do not. One method to alleviate this concern is to dedicate specific sections of the classroom to laptop and technology users (Aguilar-Roca et al., 2012).

    Provide a rationale for your decision. If you decide to implement a technology ban, providing students with a clear explanation of why the ban is in place, supported by relevant research, is one potential method for reducing the ban's impact on student engagement.

    In conclusion, there is little doubt that in certain situations unsupervised technology usage can negatively impact academic performance. However, full consideration of the type of course and the composition of students within it is advised before implementing a blanket technology ban.

     

    References

    Aguilar-Roca, N. M., Williams, A. E., & O’Dowd, D. K. (2012). The impact of laptop-free zones on student performance and attitudes in large lectures. Computers & Education, 59, 1300-1308.

    Carter, S. P., Greenberg, K., & Walker, M. (2016). The impact of computer usage on academic performance: Evidence from a randomized trial at the United States Military Academy (SEII Discussion Paper #2016.02).

    Fried, C. B. (2008). In class laptop use and its effects on student learning. Computers & Education, 50, 906-914.

    Handelsman, M. M., Briggs, W. L., Sullivan, N., & Towler, A. (2005). A measure of college student course engagement. The Journal of Educational Research, 98, 184-191.

    Hembrooke, H., & Gay, G. (2003). The laptop and the lecture: The effects of multitasking in learning environments. Journal of Computing in Higher Education, 15, 46-64.

    Hutcheon, T. G., Richard, A., & Lian, A. (2016, October). The impact of a technology ban on student’s perceptions and performance in introduction to psychology. Poster presented at the Society for the Teaching of Psychology 15th Annual Conference on Teaching, Decatur, GA.

    Lyle, K. B., & Crawford, N. A. (2011). Retrieving essential material at the end of lectures improves performance on statistics exams. Teaching of Psychology, 38, 94-97.

    Mueller, P. A., & Oppenheimer, D. M. (2014). The pen is mightier than the keyboard:  Advantages of longhand over laptop note taking. Psychological Science, 25, 1159-1168.

    Rosenblum, D. (2017, January 2). Leave your laptops at the door to my classroom. The New York Times. Retrieved from http://www.nytimes.com/2017/01/02/opinion/leave-your-laptops-at-the-door-to-my-classroom.html?_r=0

    Wood, E., Zivcakova L., Gentile, P., Archer, K., De Pasquale, D., & Nosko, A. (2012). Examining the impact of off-task multi-tasking with technology on real-time classroom learning. Computers & Education, 58, 365-374.

     

    Tom Hutcheon is a Visiting Assistant Professor in the psychology program at Bard College. Tom earned his B.A. in psychology from Bates College and his M.S. and Ph.D. in Cognition and Brain Science from Georgia Tech.  Tom received the Early Career Psychologist Poster award at the 2016 Society for the Teaching of Psychology (STP) Annual Conference on Teaching as well as a 2017 Early Career Psychologist Travel Grant sponsored by STP.  Tom’s research interests include cognitive control, cognitive aging, and effective teaching. Tom can be reached at thutcheo@bard.edu.

     

  • 15 Aug 2017 11:31 AM | Anonymous

    That’s What She Said: Educating Students about Plagiarism

     

    Elizabeth A. Sheehan

    Georgia State University

     

    Dealing with plagiarism is one of the more unpleasant aspects of our job as instructors. There is the sinking feeling you get when you suspect plagiarism, the moment your Google search returns the exact passage from your student's paper, the uncomfortable conversation with the student, the documentation to your department, and the potential hearing with the honor board. I would venture to say most of us have either dealt with these ourselves or at least supported a colleague through the process. These cases range from the cringe-worthy (e.g., copying directly from an instructor's own published article, turning in a paper written by another student in a past semester) to the more minor infringements (e.g., unintentionally omitting quotation marks around a direct quote).

    At the teaching conferences I have attended over the last few years, there has been increasing emphasis on learning outcome assessment and on the APA's learning outcomes for undergraduate psychology majors (APA, 2007). One of those outcomes is for students to "demonstrate effective writing skills in various formats" (p. 18). There also never seems to be a shortage of presentations on how to incorporate writing assignments into your courses. Increasing the writing assignments in your courses might mean increasing the chance you will encounter plagiarism; however, we might be able to prevent some of these cases with a greater focus on educating our students about plagiarism. Moreover, educating our students about plagiarism helps us address other APA learning outcomes concerning ethical behavior.

     

    WHY DO STUDENTS PLAGIARIZE?

    To decrease plagiarism, a good place to start is to understand WHY students plagiarize. At the last meeting of the National Institute on the Teaching of Psychology, I led a Participant Idea Exchange (PIE) on educating students about plagiarism (Sheehan, 2013); PIE sessions are roundtable discussions on a topic. My group generated the following list of potential reasons students plagiarize:

    • difficulty comprehending a reading;
    • rushing through an assignment;
    • convenience;
    • cultural misunderstanding;
    • poor understanding of the definition of plagiarism;
    • not knowing how to integrate/synthesize/paraphrase;
    • plagiarism being all around us in society; and
    • lack of confidence in their ability to write.

    You may be familiar with some of these, especially time constraints, difficulty with reading comprehension, and the inability to paraphrase. The idea of culture stood out to me from the PIE discussion. First, some cases of plagiarism could be due to cultural misunderstanding. Stowers and Hummel (2011) provide some examples of how students from Eastern cultures may view the use of another's work. For instance, they assert that some Asian students may see it as a sign of disrespect to paraphrase or change someone else's words.

    A second example of culture is how plagiarism takes place all around us in society. We regularly use the copy and paste functions on our computers in many different settings. People re-post others' writing on their Facebook pages, re-blog someone else's blog entry, forward YouTube videos to friends, and so on. Usually these actions can be accomplished in one or two clicks. While these aren't examples of academic writing, they do set precedents that we have to overcome in our courses.

     

    EDUCATING STUDENTS ABOUT PLAGIARISM

    We had a discussion about plagiarism in my department, and our faculty reported a number of problems in pursuing cases of plagiarism: some cases not being reported at all, faculty handling cases on their own, cases meeting our discipline's definition of plagiarism being overturned by the college, faculty not knowing the university reporting procedures, and so on. It was clear we needed consistency and clarity. We also decided we wanted to focus less on policing and more on educating our students to prevent future plagiarism. You could probably guess that this led to a subcommittee (and the idea for my PIE). Our subcommittee created a standard definition of plagiarism that went into all syllabi, a writing workshop on plagiarism, a quiz, a contract for students, a flow chart of how to report plagiarism, and class activities to teach the identification of proper paraphrasing and citations. These materials (Lamoreaux, Darnell, Sheehan, & Tusher, 2012) are publicly available on the Society for the Teaching of Psychology website (http://teachpsych.org/Resources/Documents/otrp/resources/plagiarism/Educating%20Students%20about%20Plagiarism.pdf).

    At my PIE, I asked other faculty how they educated their students about plagiarism. Below are the techniques they listed:

    • a quiz on plagiarism;
    • a quiz on the student handbook;
    • policies on paraphrasing in the syllabus and/or a link to the school policy;
    • a discussion on the first day of class;
    • starting early, in introductory classes or freshman year before students are allowed to register for classes; and
    • using technology (e.g., Turnitin or SafeAssign).

    One quiz recommended by multiple instructors is available through Indiana University, and can be found at https://www.indiana.edu/~istd/. At this site, students can complete a tutorial on plagiarism, see examples, take a quiz, and get a certificate of completion. My department uses this site as a part of our plagiarism training for students.

    A lot of us put policies on plagiarism in the syllabus and reference them on the first day of class; however, this alone is not enough. First, we can't always rely on students to read the policies or to follow a link to the university policy. Second, we can't assume they will understand the policy. Gullifer and Tyson (2010) present data demonstrating that students have a great deal of confusion over what constitutes plagiarism despite online access to a policy. Students in their study also reported wanting education on plagiarism. These findings are corroborated by data from Holt (2012).

    Holt provided basic information about plagiarism to a control group of students and training in paraphrasing to an intervention group. The control group received a definition of plagiarism in the syllabus, a link to the university policy, one example of proper paraphrasing, and a 10-minute in-class demonstration of improper paraphrasing. The intervention group received training in paraphrasing and proper citation, along with in-class assignments. As you might expect, the group with additional training was able to identify plagiarism more accurately than the group without it. This study also identified reasons for unintentional plagiarism. For example, students thought that quotation marks were not needed, or that material did not have to be paraphrased, if a citation was provided.

    Something as simple as a weekly paraphrasing activity can help. For 6 weeks of the semester, Barry (2006) gave students a paragraph from a famous developmental theorist. Students had to paraphrase the passage and provide a proper citation. After completing the activity, students' definitions of plagiarism were more complex than those offered at the onset of the study. Not only did they define plagiarism as "taking someone else's idea," they added "not giving credit" to their definition. This isn't necessarily evidence that the activity would reduce the number of plagiarism cases, but it is evidence of students gaining a better understanding of plagiarism.

    You could also incorporate plagiarism as a theme in your course. Estow, Lawrence, and Adams (2011) designed a research methods class in which the assignments and projects related to the topic of plagiarism. For example, in one set of assignments, their students designed a survey about plagiarism, collected data, and wrote a research report on their findings. The researchers compared the progress of this class to one with the same assignments but a different theme. The students in the plagiarism-themed course were better able to identify plagiarism and generated more strategies for avoiding it.

    Plagiarism is scary for both professionals and students. The consequences can be steep: it has resulted in failed assignments, expulsion from school, revoked degrees, and even the end of careers. Students often tell me how terrified they are of unintentional plagiarism; Gullifer and Tyson's participants also expressed fear of unintentional plagiarism and its consequences. Implementing some of these fairly simple ideas in our courses will enhance our students' understanding of plagiarism. A better-informed student should be less fearful, more confident in their ability to write, and less likely to plagiarize.

     

    References

     

    American Psychological Association. (2007). APA guidelines for the undergraduate psychology major. Retrieved from http://www.apa.org/ed/precollege/about/psymajor-guidelines.pdf

    Barry, E. (2006). Can paraphrasing practice help students define plagiarism? College Student Journal, 40(2), 377-384.

    Estow, S., Lawrence, E. K., & Adams, K.A. (2011). Practice makes perfect: Improving students’ skills in understanding and avoiding plagiarism with a themed methods course. Teaching of Psychology, 38(4), 255-258.

    Gullifer, J., & Tyson, G.A. (2010). Exploring university students’ perceptions of plagiarism: A focus group study. Studies in Higher Education, 35(4), 463-481.

    Holt, E. (2012). Education improves plagiarism detection by biology undergraduates. BioScience, 62(6), 585-592.

    Lamoreaux, M., Darnell, K., Sheehan, E., & Tusher, C. (2012). Educating students about plagiarism. Retrieved from Office of Teaching Resources in Psychology for the Society for the Teaching of Psychology website: http://teachpsych.org/Resources/Documents/otrp/resources/plagiarism/Educating%20Students%20about%20Plagiarism.pdf

    Sheehan, E. A. (2013, January). Kick plagiarism to the curb: How to educate students before they head down that road. Participant Idea Exchange conducted at the National Institute on the Teaching of Psychology, St. Pete Beach, FL.

    Stowers, R. H., & Hummel, J. Y. (2011). The use of technology to combat plagiarism in business communication classes. Business Communication Quarterly, 74(2), 164-169.


    Elizabeth Sheehan is a Lecturer at Georgia State University. She earned her PhD in Psychology from Emory University in Cognition and Development. She currently teaches Intro Psychology, an integrated version of Research Methods and Statistics, and Forensic Psychology. She has presented her work on designing study abroad programs, teaching with technology, and incorporating writing assignments into courses at teaching conferences, such as the Southeastern Conference on Teaching of Psychology and the Developmental Science Teaching Institute for the Society for Research in Child Development.
