Society for the Teaching of Psychology: Division 2 of the American Psychological Association

E-xcellence in Teaching
Editor: Annie S. Ditta

  • 03 Sep 2018 8:06 PM | Anonymous

    Alice Szczepaniak (Boston University)

    Robyn Johnson (Boston University)

    Naamah Azoulay Jarnot (University of Southern Maine)

    Changiz Mohiyeddini (Northeastern University)

    Sohila Mohiyeddini (California University of Management & Sciences)

    Haley Carson  (Northeastern University)


    Despite more than 75 years of research on student persistence (Jones & Braxton, 2010), there have been few substantial gains in recent years (Tinto, 2007). Persistence refers to students’ continued enrollment at their university (McGrath & Burd, 2012). Low persistence rates can have a widespread impact:

    • On a national level, college degree attainment has been linked to economic growth. Graduates from four-year colleges pay an average of 91% more in taxes each year than those with just high school degrees (Ma, Pender, & Welch, 2016).
    • At an institutional level, student retention is used as a key performance indicator for the institution (Crosling, Heagney, & Thomas, 2009). Freshman persistence and graduation rates are among the metrics that define the quality of an academic institution (Culver, 2008).
    • On an individual level, persistence is necessary for a college student to realize the social and economic benefits associated with higher education (Wolniak, Mayhew, & Engberg, 2012).

    According to higher education theorist Vince Tinto’s model of college student departure, dropout from college is the result of the student’s experiences in the academic and social systems of the college. The higher the degree of integration of the student into the college’s social and academic system, the greater the student’s commitment to the specific institution and to the goal of college completion (Tinto, 1975). Terenzini and Wright (1987) found that students’ levels of academic and social integration in one year had a positive influence on their level of academic and social integration in the next year. More recently, Strauss and Volkwein (2004) established that social activities, classroom experiences, and friendships are key predictors of institutional commitment.

    Based on this background, we reasoned that student experiences that allow for both academic and social integration would increase student persistence. Thus, the objective of our study was to investigate whether positive group work experiences (Mohiyeddini, Johnson, Azoulay Jarnot, & Mohiyeddini, in preparation; Mohiyeddini, Azoulay, & Bauer, 2015) would increase students’ intention to persist.

    The Study

    Students were recruited at three different college campuses in London. To be included in the study, students had to have been members of a small, mixed-gender work group of three to four students for at least one semester. While the classes covered different subjects, in each class the aim of the group work was to produce a collaborative report and/or presentation as a graded course requirement. Students participating in the study completed an initial questionnaire that included demographic and socioeconomic information, as well as a baseline measure of their intention to persist. Approximately five months after the first measurement, these students were asked to complete a follow-up questionnaire on their current intention to persist and their experiences with their group work. A total of 232 students completed the study.

    To measure group work experiences, we used the Positive Group Work Inventory (PGWI; Mohiyeddini et al., in preparation). The PGWI comprises 24 items that measure six central factors of group work experiences:

    1. Perceived respect
       “We comment on each other’s performance with an appropriate tone”
    2. Perceived fairness
       “The workload and responsibilities were fairly distributed among us”
    3. Effective commitment
       “My group members were committed to our group work”
    4. Perceived transparency
       “The rules for our collaboration were clear”
    5. Perceived support
       “Other group members gave me the support that I needed to complete my part”
    6. Perceived inclusion
       “I had the feeling that I belonged to my group”

    We measured the students’ intention to persist twice, once at the beginning of the study and again at the end of the study (approximately five months later), with two items following Ajzen’s (1991) recommendations:

    1. “I intend to complete my degree at my current university”
    2. “I intend to continue with my education at my current university”

    Our Findings

    After controlling for variables such as age, gender, and students’ baseline intention to persist, we found that perceived respect (β = .125, p = .010) and perceived inclusion (β = .147, p = .002) predicted students’ intention to persist. The more students perceived respect and inclusion in their group work experience, the higher their intention to persist and complete their degree at their current academic institution. The predictive value of perceived inclusion suggests that if groups fostered a stronger sense of inclusion among members, the effect on individuals’ intention to persist could be even larger; the groups in this particular study did not do a particularly good job of fostering that kind of inclusive environment.
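    For readers curious about how an analysis of this kind could be set up, below is a minimal sketch of a regression that controls for covariates before entering the six PGWI factors. It is not the authors’ actual analysis code; the file name and column names are hypothetical placeholders, and continuous variables are z-scored so the coefficients can be read roughly as standardized betas.

        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical data file and column names -- placeholders, not the authors' data.
        df = pd.read_csv("group_work_study.csv")

        # Z-score continuous variables so coefficients can be read as standardized betas.
        continuous = ["age", "persist_baseline", "persist_followup", "respect", "fairness",
                      "commitment", "transparency", "support", "inclusion"]
        df[continuous] = (df[continuous] - df[continuous].mean()) / df[continuous].std()

        # Regress follow-up intention to persist on the six PGWI factors while
        # controlling for age, gender, and baseline intention to persist.
        model = smf.ols(
            "persist_followup ~ age + C(gender) + persist_baseline"
            " + respect + fairness + commitment + transparency + support + inclusion",
            data=df,
        ).fit()

        print(model.summary())  # coefficients and p-values for each predictor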

    Our findings are in line with recent theories and research on the impact of perceived respect on teams. Perceived respect reflects that the individual feels valued by the team (Branscombe, Spears, Ellemers, & Doosje, 2002; Huo & Binning, 2008; Smith, Tyler, & Huo, 2003; Tyler & Blader, 2003). Individuals who feel respected by other team members experience higher levels of identification with the team (Sleebos, Ellemers, & de Gilder, 2007) and put more effort into achieving team goals (Tyler & Blader, 2003).

    In a related vein, social identity theory (Tajfel, 1978; Tajfel & Turner, 1979) highlights that social identification processes, during which individuals tend to think of themselves in terms of their belonging to and inclusion in a social group or collective, have a crucial impact on individuals’ collaborative behaviors. Following social identity theory, our results extend these findings and may suggest that perceived inclusion in a team supports the sense of being a part of an academic institution as a larger community and therefore strengthens a student’s intention to complete their education at that institution.

    Limitations

    Although the current investigation advanced research on student persistence and positive group work experiences in several ways, our study also had a number of limitations. First, the study was based on self-reported data, which are affected by reappraisal of past events in light of present (critical) circumstances, by impairment of memory over time, and by non-disclosure and reporting biases. Second, the questionnaire used in this study was presented in a consistent order and was not counterbalanced, which might have influenced the results and introduced order effects. Furthermore, given the sample size, the non-random sampling method, the lack of a control group, and recruitment from only a few colleges, the generalizability of the findings is limited.

    What to Do with this Information

    Despite these limitations, our study expands our understanding of student persistence and highlights the potential impact of positive group work experiences on students. Fostering positive group work experiences could be an effective tool to improve the persistence intention of students. This can be done through:

    • Workshops for faculty and staff that explain key conditions of a positive group work experience and provide tools and a framework for facilitating respect and inclusion in their class.

    • Courses for students, such as first year seminars, that focus on teaching positive group work skills, particularly respect and inclusion.

    References

    Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50, 179–211.

    Branscombe, N. R., Spears, R., Ellemers, N., & Doosje, B. (2002). Intragroup and intergroup evaluation effects on group behavior. Personality and Social Psychology Bulletin, 28(6), 744–753. doi:10.1177/0146167202289004

    Crosling, G., Heagney, M., & Thomas, L. (2009). Improving student retention in higher education. Australian Universities’ Review, 51(2), 9-18.

    Culver, T. (2008). A new way to measure student success: Introducing the student success "Funnel"--A valuable tool for retention planning and goal-setting. Retrieved from http://ezproxy.neu.edu/login?url=http://search.proquest.com/docview/1238186212?accountid=12826

    Huo, Y. J., & Binning, K. R. (2008). Why the psychological experience of respect matters in group life: An integrative account. Social and Personality Psychology Compass, 2(4), 1570-1585. https://doi.org/10.1111/j.1751-9004.2008.00129.x

    Jones, W. A., & Braxton, J. M. (2010). Cataloging and comparing institutional efforts to increase student retention rates. Journal of College Student Retention, 11(1), 123-139.

    Ma, J., Pender, M., & Welch, M. (2016). Education pays 2016: The benefits of higher education for individuals and society. The College Board, Trends in Higher Education Series. Retrieved from https://trends.collegeboard.org/sites/default/files/education-pays-2016-full-report.pdf

    McGrath, S. M., & Burd, G. D. (2012). A success course for freshmen on academic probation: Persistence and graduation outcomes. NACADA Journal, 32(1), 43-52.

    Mohiyeddini, C., Azoulay, N., & Bauer, S. (2015, May). Maximizing collaborative small group work experiences: An assessment approach. Paper presented at the Conference for Advancing Evidence-Based Teaching, Boston, MA.

    Mohiyeddini, C., Johnson, R., Azoulay Jarnot, N., & Mohiyeddini, S. (in preparation). Individual differences in positive group work experiences in collaborative student learning.

    Sleebos, E., Ellemers, N., & De Gilder, D. (2007). Explaining the motivational forces of (dis)respect: How self-focused and group-focused concerns can result in the display of group-serving efforts. Gruppendynamik und Organisationsberatung, 38(3), 327-342.

    Smith, H. J., Tyler, T. R., & Huo, Y. J. (2003). Interpersonal treatment, social identity and organizational behavior. In S. A. Haslam, D. van Knippenberg, M. J. Platow, & N. Ellemers (Eds.), Social identity at work: Developing theory for organizational practice (pp. 155-171). Philadelphia, PA: Psychology Press.

    Strauss, L. C. & Volkwein, J. F. (2004). Predictors of student commitment at two-year and four-year institutions. The Journal of Higher Education, 75(2), 203-227.

    Tajfel, H. (Ed.) (1978). Differentiation between social groups: Studies in the social psychology of intergroup relations. European Monographs in Social Psychology No. 14, London: Academic Press.

    Tajfel, H., & Turner, J. C. (1979). An integrative theory of intergroup conflict. In W. Austin & S. Worchel (Eds.), The social psychology of intergroup relations. Monterey, CA: Brooks/Cole.

    Terenzini, P. T., & Wright, T. M. (1987). Influences on students’ academic growth during four years of college. Research in Higher Education, 26(2), 161-179.

    Tinto, V. (1975). Dropout from higher education: A theoretical synthesis of recent research. Review of Educational Research, 45(1), 89-125. Retrieved from http://www.jstor.org/stable/1170024

    Tinto, V. (2007). Research and practice of student retention: What next? Journal of College Student Retention, 8(1), 1-19.

    Tyler, T. R., & Blader, S. L. (2003). The group engagement model: Procedural justice, social identity, and cooperative behavior. Personality and Social Psychology Review, 7(4), 349–361.

    Wolniak, G. C., Mayhew, M. J., & Engberg, M. E. (2012). Learning’s weak link to persistence. The Journal of Higher Education, 83(6), 795-819.


  • 02 Aug 2018 7:43 PM | Anonymous

    Amber M. Chenoweth and Brittany L. Jackson (Hiram College)

    Autism Spectrum Disorders (ASD) have a relatively recent history in terms of research attention. With the newly updated diagnostic criteria in the DSM-5 (American Psychiatric Association, 2013), even more attention has been paid to this spectrum of developmental disorders, as individual diagnoses may have changed (e.g., individuals with former diagnoses of Asperger’s syndrome are now diagnosed with ASD). Further, typically developing students are finding themselves in a variety of situations in which students with ASD are included, often without a full understanding of the experience of their peers with ASD. This lack of understanding can lead to a range of responses toward peers with ASD, from simple confusion and frustration when attempting to interact with them, to the extreme of bullying (Swaim & Morgan, 2001). As Harnum, Duffy, and Ferguson (2007) found, this stems from the perception that individuals with ASD are not the same as typically developing individuals, which leads to less openness to interaction.

    Our institution has a unique opportunity for our students to interact more fully with individuals with ASD, as it is situated near a Living and Learning Community. This organization is a fully functioning organic farm that provides adults with ASD the opportunity to work and receive occupational therapy. Several students from our institution have participated in internship opportunities at the Living and Learning Community and found these experiences rewarding, both as service and as future career exploration. Moreover, this interaction with individuals with ASD serves to increase student understanding of the complexity of this spectrum of disorders. Because of this stimulated interest in ASD among students on our campus, several faculty members across disciplines offer courses on ASD. Given our institution’s emphasis on interdisciplinary learning, we saw this as a great opportunity to engage students in a course exploring the many facets of ASD.

    Why an interdisciplinary course? Much value can be gained from engaging students in exploring a complex topic through multiple and integrated lenses. By giving students the opportunity to explore these topics within the course setting, we can push them to challenge their previously held beliefs and ideas as they work in the shared space between disciplines. Further, interdisciplinary courses, particularly those that are team-taught, can foster creative, critical, and divergent thinking, all skills that are sought by our students’ future employers (Putrienė, 2015).

    Our Course

    Our course integrates the disciplines of psychology and theatre. From the psychology perspective, students are exposed to material from the scientific literature on ASD, examining in depth the topics of diagnosis, hypothesized causes, and treatments, as well as the concept of neurodiversity. The theatre perspective exposes students to two key areas: playwriting and acting. Students learn techniques for telling a story drawing from multiple sources – readings, interviews, discussions – and learn how to portray what they discover in both abstract and concrete ways, while being made aware of issues of accuracy and sensitivity toward a population different from themselves. The interdisciplinary nature of the course also models for students how the two disciplines inform one another. For example, the psychology content serves as the context in which to explore these topics theatrically, while awareness of body, space, and wording informs students about the best approaches to interviewing individuals with ASD and those who support them.

    The learning objectives of this course are for students to demonstrate understanding of the science of ASD and of theatre methodology (playwriting, performing), to connect with and listen to others while testing their empathy skills, and to gain a truer sense of their own humanity. To meet these objectives, we designed the course to engage students in the following activities:

    • Class discussions focused on topics about the science of ASD (neurobiological etiology, symptoms, and therapeutic interventions), as well as theatrical portrayals and storytelling. These discussions are based on assigned readings, including both fiction and nonfiction sources, scientific articles, case studies, guest speakers, and current event topics.
    • Short writing assignments that scaffold students through the writing process by requiring them to submit specific creative writing pieces drawn from the scientific literature. This begins with having students write a letter based on a scientific article, then a short story, and eventually multiple scenes of a play.
    • Interviews with either individuals with ASD or those who work with individuals with ASD, including caregivers and family members, teachers, doctors, intervention specialists, case workers, etc.
    • Field trips to various locations to explore aspects of ASD. Past field trips have included visiting the local Living and Learning Community that provides occupational therapy for adults with ASD and traveling to New York City to see the play The Curious Incident of the Dog in the Night-Time on Broadway.
    • Media portrayals that depict various aspects of ASD. Past feature films have included Rain Man, Temple Grandin, and Ben X, as well as the documentary Autism Is a World. We also have students view clips from TV shows that highlight characters either overtly diagnosed with ASD (e.g., Parenthood) or exhibiting common characteristics associated with ASD (e.g., The Big Bang Theory). These portrayals are the basis for class discussions on the accuracy of portrayals, the ethics of presenting characters with ASD in often stereotypical ways, and how these portrayals either promote or hinder the idea of neurodiversity; they also inform students on how to connect with the characters they are developing in their final performance piece and present them in an accurate and sensitive way.
    • The final performance piece requires students to draw upon all the class activities to develop a brief (approximately 10-minute) play focused on a specific ASD topic. Students work in small groups (5 students per group, on average) to write and perform their piece.

    Assessing Our Course

    During the fall 2015 offering of Exploring Ability and Disability: ASD, we administered a voluntary pre- and post-test survey to students enrolled in the course to assess changes in knowledge of ASD, as well as to learn about students’ perceived effectiveness of the course activities described above. A total of 25 of our 31 enrolled students completed both the pre- and post-test sets of questionnaires.

    At pre-test, we administered a prior experience survey which revealed participants had, on average, approximately five years of experience interacting with an individual with ASD, typically a classmate, friend, co-worker, or relative.

    At both the pre- and the post-test we administered the Autism Knowledge Survey-Revised (AKS-R) developed by Stuart, Swiezy, and Ashby (2008). This questionnaire of 20 statements about ASD provided a measure of baseline knowledge and of change in knowledge of ASD, as participants indicated on a 6-point Likert-type scale how much they agreed or disagreed with each statement. We found that overall, students did increase their knowledge of ASD compared to the pre-course baseline. However, a couple of items did not show the same increase in knowledge, highlighting our need to address those topics more clearly in future offerings of the course. One example was the item “Children with autism do not show attachments, even to parents/caregivers,” to which the correct response is “Fully Disagree.” Upon reflection, we identified areas where we could emphasize the fuller range of emotion and attachment that children with ASD do express.
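    As an illustration of how pre/post data like these might be summarized, the sketch below computes overall and item-level change on a 20-item survey. The file name and column layout are hypothetical, and items are assumed to have already been scored so that higher values indicate more accurate knowledge; this is not the survey authors’ scoring code.

        import pandas as pd
        from scipy import stats

        # Hypothetical layout: one row per student, with columns aks_pre_1..aks_pre_20
        # and aks_post_1..aks_post_20 already scored so that higher = more accurate.
        df = pd.read_csv("aks_r_scores.csv")

        pre = df[[f"aks_pre_{i}" for i in range(1, 21)]].mean(axis=1)
        post = df[[f"aks_post_{i}" for i in range(1, 21)]].mean(axis=1)

        # Paired comparison of overall knowledge, pre vs. post.
        t, p = stats.ttest_rel(post, pre)
        print(f"Mean change = {(post - pre).mean():.2f}, t = {t:.2f}, p = {p:.3f}")

        # Item-level change flags statements (like the attachment item) that may
        # need more explicit coverage in future offerings.
        item_change = {i: (df[f"aks_post_{i}"] - df[f"aks_pre_{i}"]).mean()
                       for i in range(1, 21)}
        print(sorted(item_change.items(), key=lambda kv: kv[1])[:3])  # smallest gains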

    We also administered the Openness Scale, adapted from Harnum et al. (2007), at both pre- and post-test. This scale first presented a vignette depicting characteristic behaviors of an individual who might be diagnosed with ASD, and then presented a series of statements regarding reactions to and willingness to interact with that individual, which respondents rated on a 5-point Likert-type scale according to how much they agreed or disagreed with each statement. With this measure, we found that participants remained at their initially high openness-to-interact levels from pre- to post-test, indicating potential bias. This bias may stem from demand characteristics – who would want to admit that they would not want to interact with an individual who clearly displays behaviors of the disorder on which this course is based? – or from the self-selecting nature of taking a course on ASD, or both.

    Lastly, we surveyed participants on their class experience with a series of open-ended questions. They all took the form of “Reflect on how the [assignment/activity] affected your understanding of individuals with Autism Spectrum Disorder.” Representative responses are below.

    Final performance pieces.

    • “It helped me understanding people with autism because it allowed me to imagine what it would be like to actually be involved in a family with children with autism.”
    • “There were many different views of autism portrayed. It reminded me that everyone experiences the disorder differently.”
    • “I think it helped show how people took their own version of what they saw autism as and turned it into a play. Each play had different aspects to it which showed all the things we've learned.”

    Short writing assignments.

    • “They allowed me to express what I have learned in different ways from monologues to poems. Sometimes things are hard to express so this gave me the chance to try different ways.”
    • “The SWA's were the most influential piece for my learning in this course. I learned a ton through the articles and reflecting in a creative way.”
    • “I never thought I could be creative when talking about autism.”

    Class discussions.

    • “It allowed me to see and compare my thoughts with my peers and fellow classmates. I got to see and hear that I wasn't the only person with confusions and thoughts about people with autism.”
    • “They allowed me to see many different opinions from everyone in class. Not everyone has the same type and amount of experiences so this class gave me the chance to see what others see and think.”

    Interviews.

    • “It made me face something about my friend. And myself.”
    • “I learned SO much about my interviewee and ASD in general. I had known the person for years, yet never thought to ask these questions or care to listen for the answers. This was a crucial part of the course.”

    Field trips and media portrayals.

    • “It definitely drew my attention to the fact that many types of the disorder are not reflected in the media at all.”
    • “I loved them all, all of them provided me with a learning experience that helped me gain an understanding of ASD. It also reminded me that this is something I want to do for the rest of my life.”
    • “This was a great idea as it allowed us to gain realistic perspectives of people with ASD. Experiencing something in real life is much different than in a classroom or through a book.”

    As with any course assessment, however, not all of our responses were quite so positive. A few respondents indicated that they thought the short writing assignments and class discussions were repetitive, and that the final performance pieces showed more stereotypical representations of ASD. Overall, though, students generally rated each of the activities as valuable at some level and credited the activities and assignments with enhancing their knowledge of ASD.

    Lessons Learned

    From the open-ended responses, both those indicating “success” and those noting negative aspects to consider for future modifications to the course, we have identified key lessons for teaching a course on a complex and sensitive topic, particularly when students are expected to demonstrate their understanding through creative methods.

    First, it is crucial to provide your students with plenty of examples. Luckily, we have taught this course a handful of times now and have built up a repository of good examples (with those students’ permission, of course) to share with our students, particularly for the short writing assignments. One area we identified as needing more examples is plays and performances in different formats. By having students practice what we expect them to complete by the end of the course – i.e., a full performance piece – we greatly enhance the likelihood of achieving that learning goal.

    Second, whenever possible, incorporate experiential learning opportunities. As evidenced by the responses we received about the field trips and interviews, students learn far more by doing than by merely reading or being lectured to. We are fortunate to have the Living and Learning Community for adults with ASD within walking distance of our campus, but there are other ways to incorporate these experiences into any course. For example, reach out to the local community for guest speakers, such as special education instructors, the director of the local disability services office, and parents of children with ASD. In our experience, many of the individuals in these positions are eager to share their experiences to promote greater understanding. We also found value in engaging our students in simulation exercises, such as simulating sensory overload. However, such exercises need to be placed in the correct context and introduced and discussed in a way that promotes understanding rather than pity (Nario-Redmond, Gospodinov, & Cobb, 2017).

    Third, and we would argue most important, be encouraging! For some students, this is their first creative experience, and they are anxious. In our course, many students arrived without acting experience, or even creative writing experience. It was important for us to emphasize that the class was a safe space to explore both the theme and content of the course, as well as how to express themselves creatively. This led to our favorite response provided above: “I never thought I could be creative when talking about autism.”

    References

    American Psychiatric Association. (2013). Autism spectrum disorder. In Diagnostic and statistical manual of mental disorders (5th ed., pp. 50–59).

    Harnum, M., Duffy, J., & Ferguson, D. A. (2007). Adults’ versus children’s perceptions of a child with autism or attention deficit hyperactivity disorder. Journal of Autism and Developmental Disorders, 37, 1337-1343.

    Nario-Redmond, M. R., Gospodinov, D., & Cobb, A. (2017, March 13). Crip for a day: The unintended negative consequences of disability simulations. Rehabilitation Psychology. Advance online publication. http://dx.doi.org/10.1037/rep0000127

    Putrienė, N. (2015). The links between competences acquired through interdisciplinary studies and the needs of the labour market. Social Sciences (1392-0758), 88(2), 54-64. doi:10.5755/j01.ss.88.2.12741

    Swaim, K. F., & Morgan, S. B. (2001). Children’s attitudes and behavioral intentions toward a peer with autistic behaviors: Does a brief educational intervention have an effect? Journal of Autism and Developmental Disorders, 31, 195-205.

    Stuart, M., Swiezy, N., & Ashby, I. (2008, February). Autism Knowledge Survey: Understanding trends in autism spectrum disorders. Poster presented at the 2nd annual ABA Autism Conference, Atlanta, GA.


  • 03 Jul 2018 10:53 AM | Anonymous

    Laura Chesniak-Phipps and Laura Terry  (Grand Canyon University)


    Faculty members at a Christian university are typically expected to integrate faith into the curriculum. Not only is this encouraged by the administration and in line with the mission and vision of the university, but it is also expected by many students. A previous study suggested that students who attend Christian institutions anticipate that their education will prepare them for their future careers and also strengthen their spirituality (Sherr, Huff, & Curran, 2007). Often, faith integration is defined at the university level and does not consider students’ perception of this integration (Burton & Nwosu, 2003). As faculty at a Christian university, we were interested in learning from students how they perceived the Integration of Faith and Learning (IFL). The goal was to determine where IFL was apparent and how faculty could best include this element in their courses. The results of this investigation provided us with insight into student perceptions and offered an opportunity to share suggestions with institutions and professors interested in IFL.

    To examine this issue, students enrolled in Introduction to Psychology courses were asked to participate in focus groups. Focus groups were selected for data collection because they allow for follow-up questions to clarify, and gain a richer understanding of, participant responses. In total, 50 students participated; sessions lasted approximately one hour, and participants were asked five questions. Students first signed informed consent forms and then were separated into groups of 7-8. Questions were presented one at a time, and participants were asked to spend a few minutes responding individually. They were given small pieces of paper, told to write down one response per paper, and then asked to share their responses with their group. When ideas were shared, group members with similar ideas were to indicate that a theme had been identified. This allowed for organic coding within the small groups. Finally, the groups were asked to report their responses to the whole group so that themes could be identified and ideas could be grouped. After the focus groups were complete, the researchers examined the categories created by the students and organized the responses into logical themes.

    Professor Led Integration

    One of the main findings of the focus groups was that participants viewed instructors as being primarily responsible for faith integration. Participants also reported that they experienced faith integration in some classes but not in others. This suggests that while instructors are seen as primarily responsible for integrating faith, not all are doing so. It may be that some instructors do not feel comfortable integrating faith or are not sure how to go about doing so. These results support findings from past research (Dykstra, Foster, Kleiner, & Koch, 1995; Hall, Ripley, Garzon, & Mangis, 2009) indicating that professors play an integral role in the integration of faith in the classroom and should be considered the main source of IFL. From examining previous studies and the current focus group work, it is clear that students see their professors not only as leaders in their field, but also as contributors to their development of faith and as a connection between faith and their specific discipline. These results suggest that universities should consider professors as primary agents for the integration of faith and should provide the training and resources necessary to support them in this endeavor.

    Integration across Disciplines

    It was not surprising that when participants were asked in which types of classes they saw IFL, the majority responded theology. However, they also reported IFL in science, technology, engineering, and mathematics (STEM), humanities, communications, business, and fine arts classes. Furthermore, with the exception of theology, participants also reported perceived difficulty integrating faith into the above disciplines. This suggests that while classes focusing on religion can easily include components of faith, it is possible to integrate faith into all classes, regardless of the discipline. One reason for this may be that the individual professors who teach these classes have a strong foundation of faith. This also presents an opportunity to explore the curriculum and determine where faith can be integrated organically within each discipline, regardless of an instructor’s religious background. While some of these areas may be more challenging than others, participant responses indicated that integration is occurring, which suggests it is possible and can be successful.

    Solutions for Integration

    Because the responsibility for IFL rests primarily with the professor, training, resources, and materials may help to increase instructor knowledge and confidence. A standardized curriculum could also be developed to include the integration of faith into specific topics within a class. Instructors who are noted as being skilled at integrating faith can be consulted when developing curricula. Dykstra et al. (1995) identified a level of integration at which courses can be designed with the inclusion of IFL activities. Incorporating elements of faith into courses through a centralized curriculum would ensure that, despite individual differences among instructors, students receive the same types of integration. Universities that do not adopt a fully centralized curriculum but want to integrate faith seamlessly may choose to incorporate assignments or discussion questions that can be used by all faculty members. This would help ensure that, despite individual differences among instructors, students receive the integration they desire.

    Past research suggested that discussion is one of the most common types of integration (Hall et al., 2009) and that it provides a path for students to process their personal views (Dykstra et al., 1995). In the focus groups, only a small number of participants reported that class discussion was where they experienced IFL. Some focus group participants referred to the main discussion forum in the online learning management system as a place where discussion could be used. One option would be for instructors to incorporate pre-written discussion questions focused on IFL into the learning management system. For instructors who are not comfortable with IFL in their classrooms, pre-written discussion questions tied to course content could provide an avenue to incorporate and discuss faith. Professors who are less comfortable integrating faith, or who do not have the personal experience to do so, can still provide IFL for their students.

    Students indicated several ways in which faith could be integrated into the classroom and campus experiences. Examples included prayer and personal expression that demonstrate the fruits of the spirit. Prayer in the classroom can be achieved in a variety of ways, from professor-led prayer to students taking turns leading prayers, or through online discussion forums. One option professors might choose is a prayer forum in their learning management system. This provides an opportunity for students to share their prayer requests and to pray for each other.

    IFL is an important part of the curriculum at Christian universities, and understanding student perceptions of integration can lead to more effective strategies. As faculty members, we strive to deliver a quality education to our students and support the mission and vision of our university. Understanding our students’ perceptions allows us to examine what is being done well and what can be improved. While this study focused on IFL, important lessons can be derived for other learning institutions. In higher education, it is important to understand the curricular objectives that are being delivered to students.

    Individual differences in instruction can be leveled by providing a standard curriculum to ensure that all graduates, regardless of their program of study, class modality, or instructor, receive a quality education.


    References

    Burton, L. D., & Nwosu, C. C. (2003). Student perceptions of the integration of faith, learning, and practice in an educational methods course. Journal of Research on Christian Education, 12(2), 101-135.

    Chu, J. (2005). Faith and frat boys. Higher Education Research Institute, 165(19), 48-50.

    Dykstra, M. L., Foster, J. D., Kleiner, K. A., & Koch, C. J. (1995). Integrating across the psychology curriculum: A correlation review approach. Journal of Psychology and Theology, 23(4), 278-288.

    Hall, L. E. M., Ripley, J. S., Garzon, F. L., & Mangis, M. W. (2009). The other side of the podium: Student perspectives on learning integration. Journal of Psychology and Theology, 37(1), 15-27.

    Sherr, M., Huff, G., & Curran, M. (2007). Student perceptions of salient indicators of integration of faith and learning (IFL): The Christian vocation model. Journal of Research on Christian Education, 16(1), 15-33.



  • 03 Jun 2018 9:57 PM | Anonymous

    Suzanne Wood (University of Toronto)

    At large research universities, undergraduates can get lost in the shuffle. Both logistically and economically, it is more feasible to hold lecture-style classes and to leave undergraduate lab experiences to those who are selected for research assistant positions.  However, this places a significant strain on already overburdened research faculty and their labs and leaves many qualified undergraduates in the lurch.  These undergraduates may be curious about research but may lack the confidence to approach faculty members for open research opportunities (see Bangera & Brownell, 2014 for discussion). Running laboratory courses can meet the needs of these students and lead to many of the same outcomes as achieved through individual research placements in labs, including improvement in scientific writing, computational, and technical skills (Shapiro et al., 2015). Undergraduate research experiences have also been found to bolster student interest in science as a career (Lopatto, 2007).

    One of the most exciting components of my position in the University of Toronto’s Psychology department was the directive to update the small (maximum enrollment of 20) psychobiology (behavioral neuroscience) undergraduate lab course with new, innovative methods. While I was fortunate that my department was already footing the bill for a massive renovation of the dedicated lab space, including the purchase of lightly used equipment, the accompanying course development was left entirely in my hands. To best utilize these resources, I set about designing a course that would leverage the power of high-impact learning practices, which can lead to increased student engagement and retention (Kuh, 2008). These types of learning practices are highly encouraged at the University of Toronto and are documented periodically as part of the National Survey of Student Engagement (University of Toronto, 2014). These practices can be harnessed in many types of courses, but they are particularly well suited to a laboratory course setting.

    High-Impact Practices

    The key elements of high-impact practices were integrated into the course redesign as follows:

    Undergraduate Research

    While protocols for this course were established and approved ahead of time, students had the rare opportunity to gain hands-on experience with rodents before deciding to join a lab or apply for graduate school. In addition, while neural structures had been the focus of tissue staining techniques in previous iterations of this course, I updated the curriculum to include analysis of neural activity (c-fos staining). Experience with these types of techniques is critical for undergraduates hoping to pursue graduate work in behavioral neuroscience today.

    Collaborative Projects

    Experiment days required participation from all students. Students were also encouraged to work on statistical analyses together, and time in class was allocated to help facilitate this collaboration. Only the writing assignments were completed independently. This distribution of work was an attempt to more closely mimic actual research settings (significant collaboration), while providing assignments for individual marks (written assignments).

    Writing-Intensive Course

    Students submitted multiple writing assignments throughout the semester. Time was devoted in class to faculty-student, or teaching assistant (TA)-student, one-on-one meetings to discuss each writing assignment. The manner in which students addressed their own weaknesses throughout the semester was considered when assigning grades.  This type of intensive feedback was only realistically possible with a small instructor (and TA)-student ratio.

    Career Exploration in the Community

    Preference in enrollment was given to third-year research specialists (high-achieving students who were interested in research, typically with intentions to attend graduate or medical school). With this in mind, I focused on what they would need to know after graduation, either when applying to jobs or to graduate programs. I worked with the Career Centre to schedule a visit for students to a local, off-campus neuroscience laboratory during regular class time. To ensure the greatest learning outcomes, I scheduled a preparation session hosted by the Career Centre during class the week before the trip, as well as a debriefing session the week afterward. Students were encouraged to learn not just about “traditional” research career paths, but also about paths in “non-traditional” science roles (e.g., fundraising, human resources, infrastructure, vivarium management, etc.).

    Student-Faculty Interactions

    The course offered undergraduates the rare opportunity to interact directly with a faculty member on a weekly basis in a small group setting. In my department, third and fourth year courses tend to enroll 50 students, with a small number of seminars offered with maximum enrollments of 20. This small group format allowed for many informal discussions regarding topics in related research areas, career paths, etc. The TA for the class was also tapped for information regarding graduate school applications, life as a graduate student, and other related topics.

    Student Reactions

    The university-wide, online course evaluation tool gathered opinions from students over the past two years concerning the perceived quality of their educational experience in this lab course. The responses were overwhelmingly positive. Below are sample quotes from the anonymous student feedback concerning the high-impact learning course components:

    “This lab course is extremely novel and interesting…I’ve never learned anything this stimulating and applied in any of my other courses.”

    “I learned valuable skills that are rare for an undergraduate course.”

    “[The] personal feedback on papers was excellent and I saw a massive improvement in my scientific writing.”

                  “Such a great course that is unique from most other courses at U of T.”

     “Why aren’t there more courses like this available to undergraduates?!”

    Notably, one student applied to a graduate program in Health Services Administration after completing this course. She ascribes this decision to the class field trip and hearing from one of the neuroscience institute’s employees about “non-traditional” career paths.

    Obstacles

    While the above components of this course have been successful, I would be remiss if I did not mention some of the significant hurdles faced when developing this course. Specifically, three main obstacles continued to rear their heads whenever I seemed to finally settle on an activity or experiment: time, money, and the lengthy commute of my students.

    Time

    One of the challenges in running this lab course was carving out the time to prepare. In contrast to a lecture-based course, a lab course involves preparation of not only learning objectives, content, assignments, and the like, but also logistics such as obtaining the relevant ethics board approval, equipment set up and testing, federal approval for scheduled drug possession, piloting experiments ahead of time, etc. The departmentally assigned teaching assistant was only employed for the term, so, in preparation throughout the summer, I found myself working on tasks during the day that required business hour communication (e.g., federal drug approvals) as well as cognitively taxing jobs such as course design. I spent nights on more menial tasks such as setting up and testing equipment.

    To help offset some of the time burden during the following year, I applied for a small university grant (Advancing Teaching and Learning in Arts & Science; ATLAS) that supported a TA to assist throughout the year in the design, implementation, and piloting of new protocols. The TA was invaluable in offsetting some of the burden of the background work involved in this course, leaving me the time to handle course design logistics. The TA shined in the development of the brain histology protocol and the listing of the necessary equipment and supplies to run it. He completed this task with gusto, leaving no detail out, and saving me countless hours.

    In addition, recruiting help from the Career Centre was essential for setting up the field trip component of the class. They were a source of enthusiastic support during both terms. Again, this collaboration saved me an enormous amount of time in scheduling logistics.

    Money

    Closely tied to time constraints are money issues. As I mentioned above, an in-house grant helped me greatly, not only for the TA assistance outside of the regular term, but also for purchasing critical pieces of small equipment to complement what was already being supplied by the department. Specifically, I added molecular biology techniques that reflect common practices in today’s behavioral neuroscience research (it is no longer sufficient to focus exclusively on animal behavior; genetic, histological, and molecular biological techniques are also expected). Equipment such as pipettes and glassware was not part of the lab renovation but was critical to the implementation of these new protocols.

    If you teach at a smaller institution, or if no in-house financial support is available, consider recruiting undergraduate volunteers who were superstars in previous iterations of the class. While you will benefit from their assistance, the students will benefit enormously from this experience: they will see the setup of the lab from the “inside” perspective and will solidify what they learned in the class. This type of leadership experience will also set them apart from their fellow students when applying to graduate school or to employment positions upon graduation. In general, undergraduate teaching assistants have been found to benefit greatly from their experiences with a class (e.g., Schalk, McGinnis, Harring, Hendrickson, & Smith, 2009).

    Large, Commuter Campus

    At a primarily commuter campus, the design of the class is constrained to events taking place during class hours only. This is particularly challenging in a psychobiology class where behavioral animal experiments are used. Extended learning tasks (e.g., Morris water maze, radial arm maze, etc.) are simply out of the question. I selected tasks that could be run within a three-hour class session: an abbreviated version of object recognition, comparing rats’ performance on low-dose amphetamine with saline; and open field locomotion, comparing mice injected with diazepam, amphetamine, or saline. Brain tissue histology was performed over the course of several weeks, with tissue being frozen between sessions.

    Benefits can also be found with this type of situation. While students did not have the opportunity to run paradigms that required daily interactions with the rodents, having all laboratory work performed within class hours made this unique experience accessible to students who might not have the flexibility to participate in apprentice-style lab opportunities (e.g., those with lengthy commutes, jobs, or other time commitments; see Bangera & Brownell, 2014). In addition, I was able to leverage the urban location of the campus to coordinate a field trip within walking distance (see High-Impact Practices: Career Exploration in the Community section).

    Take Away Points

    While this piece focuses on a single course at a large research institution, the embedded lessons can be applied to many different settings:

    1) Seek out and find help. Learn about the resources available to you, such as institutional funding and offices on campus like the career center, teaching and learning center, etc. Also, look to TAs and undergraduates to participate in the implementation of classes that are technically burdensome.
    2) Know your students. Do your students commute, or do they live on campus? Are they 3rd and 4th year students, or are they just starting out? Considerations such as these can help guide your instructional design choices (although all students could probably benefit from some instruction on scientific writing, as well as a basic stats review).
    3) While new equipment is fun, it does not make a class. Take advantage of what you have access to, but know that your job is not done once those boxes of new equipment and supplies have been delivered. Implementing high-impact practices can help to ensure important learning experiences for your students, regardless of the sophistication of your laboratory techniques.

    References

    Bangera, G., & Brownell, S. E. (2014). Course-based undergraduate research experiences can make scientific research more inclusive. CBE Life Sci Educ, 13(4), 602-606. doi:10.1187/cbe.14-06-0099

    Kuh, G. D. (2008). High-Impact Educational Practices: What They Are, Who Has Access to Them, and Why They Matter. Washington, DC: Association of American Colleges and Universities.

    Lopatto, D. (2007). Undergraduate research experiences support science career decisions and active learning. CBE Life Sci Educ, 6(4), 297-306. doi:10.1187/cbe.07-06-0039

    Schalk, K. A., McGinnis, J. R., Harring, J. R., Hendrickson, A., & Smith, A. C. (2009). The undergraduate teaching assistant experience offers opportunities similar to the undergraduate research experience. J Microbiol Biol Educ, 10(1), 32-42.

    Shapiro, C., Moberg-Parker, J., Toma, S., Ayon, C., Zimmerman, H., Roth-Johnson, E. A., . . . Sanders, E. R. (2015). Comparing the Impact of Course-Based and Apprentice-Based Research Experiences in a Life Science Laboratory Curriculum. J Microbiol Biol Educ, 16(2), 186-197. doi:10.1128/jmbe.v16i2.1045

    University of Toronto (2014). Results of the National Survey of Student Engagement. Retrieved on May 31, 2017 from http://www.provost.utoronto.ca/Assets/Provost+Digital+Assets/NSSE2014report.pdf


  • 01 May 2018 6:10 PM | Anonymous

    Karen Z. Naufel  (Georgia Southern University)

    Psychology sometimes has a public relations problem. People are skeptical of its science (Lilienfeld, 2012) and usefulness (Halonen, 2011). It is important that we teach others about the practicality and ubiquity of psychology. Teaching these values is not limited to the classroom. Instead, if people are to learn about psychological science, we as instructors must extend our teaching beyond our academic borders. As others have said, we must teach to the community (e.g., Lilienfeld, 2012; Zimbardo, 2004).

    Over the past several years, I have had the privilege of teaching psychology in the community. The process is different from teaching students. Community members have more freedom in choosing what they want to learn. The technology available in the classroom is not always available in community settings. The chance to correct a misunderstanding is limited. Simply put, effective teaching in the community often requires a different set of skills than effective classroom teaching. In this essay, I present some tips for teaching in the community that I’ve picked up along the way. Although there are many ways to teach in the community, I focus on how to give lectures (or “programs,” as they are typically called).

    Tips for Getting Started

    Compared to students, community members have different incentives for learning material: They are not learning to ace tests or get good grades. Instead, they choose to learn when topics appeal to them. Therefore, it is crucial to identify topics that will appeal to a wide, non-academic audience. Identifying topics that will draw in such an audience can be tricky. If a program topic seems relevant and interesting, people come. If a program topic is too narrow, controversial, or academic, then community members may shy away from attending. Here are some tips for generating appealing program topics:

    • Pick topics that meet community needs. If people stereotype psychology as a field that only helps others with personal problems, then people are not likely to know how psychology could relate to them. Likewise, if psychology instructors aren’t connected with the community, then instructors also may not know what the community really needs.

    Identifying community needs comes from immersing oneself in the community. It can come from looking at local organizations’ webpages, daily conversations with people at the coffee shop, or a chat with a worker in the grocery store checkout line. Think about how psychology is connected to the issues that others bring up in these situations. Then, brainstorm program ideas that relate.

    • Teach only what you know. As you generate program ideas, remember the ethicality of teaching only what you know. The American Psychological Association’s Ethical Principles of Psychologists and Code of Conduct has specific provisions regarding making public statements [see Sections 2.01(a & c); 5 & 7]. Additionally, academic freedom does provide some license to talk freely. However, this freedom also comes with the responsibility of providing accurate information (Hunt, 2010). Sometimes, you may be invited (or tempted) to give a program on a topic outside of your area of expertise. In such instances, it is best to decline and instead refer the program to a knowledgeable colleague.

    • Reframe program titles so they don’t create reactance. As we know from our long familiarity with the confirmation bias, people look for information that confirms rather than disconfirms their beliefs (Nickerson, 1998). Therefore, a talk entitled “Spanking: Why It’s Not a Good Idea” will likely only draw in a crowd of people who already agree with the premise. Those who spank their children—arguably those who need this information more—may avoid the talk altogether. A less direct title (e.g., “Making the Terrible Twos Less Terrible: Strategies for Raising Healthy Toddlers, Preschoolers, and Children”) may be better received.

    • Rapport matters. Even with a snazzy title, it can be difficult to get an audience. In tight-knit or small communities, activities from newcomers or outsiders may be viewed with suspicion. Therefore, posting fliers about your program around town, creating a public Facebook event, or announcing it in a newspaper may work, but the resulting audiences may be embarrassingly small. (Can you imagine giving a talk to only one person? I can. It’s awkward.)

    Personally, the best experiences I have had in getting program gigs have come from connecting with people in the community (see Tip 1). Go to farmer’s markets, spin classes, and community events. While waiting for your coffee at the local shop, chat with another frequent customer. Join locally based Facebook groups or other groups, many of which can recruit audiences for you. As you foster these friendships, it becomes easier for you to tell people what you do, and easier for them to ask for and value your expertise.

    • Consider how your institution views these activities. Most likely, your institution will herald these activities as important service work. However, consider important policy and legal ramifications. Such service opportunities may also be considered consulting work in certain circumstances—even if your work is free. In these cases, institutions may limit the number of hours a faculty member can engage in consulting. Some institutions may require permission to use university supplies, such as a laptop or printer, for these events. Others may fully cover you should you be injured while delivering a program, but may require that formal paperwork be filed beforehand.

    Tips for Developing a Program

    Creating a lecture is not the same as developing a program. Beloved teaching strategies like think-pair-share may seem odd in a community setting, and assigning readings beforehand may not be possible. Instead, an instructor will likely get one brief shot to deliver the information clearly and succinctly. To increase the likelihood that a program goes well, consider these tips:

    • Teach to the community, not to students. I remember a moment when I was discussing research with a community member. I used the word “altruistic”—a word with which the community member was unfamiliar. She then said, “You professors like your big words, don’t you?” At that moment, I felt the rapport between us plummet. I had reinforced the stereotype that academics are not connected to the outside community.

    Since then, I’ve aimed to be more mindful of my audience. Americans tend to read at an eighth-grade level or below, and a substantial portion of the population lacks basic reading skills (Literacy Project Foundation, 2017). Therefore, lectures for a typical college-level psychology class may be too advanced for many community members, and it is important to adjust accordingly.

    To make it more likely that a program appeals to wide audiences, it’s wise to have people with a variety of educational backgrounds offer feedback on your program’s recruitment materials, content, and activities. Although it is intended for creating health materials, the Centers for Disease Control and Prevention’s brochure Simply Put: A Guide for Creating Easy-to-Understand Materials has transferable tips for delivering presentations to an audience with a wide range of literacy levels (Centers for Disease Control and Prevention, 2009). Additionally, reading-level calculation tools, such as the Flesch-Kincaid scale, can determine whether text (or a transcription of what one plans to say) is at an acceptable level; a small sketch of the calculation appears below. Many word processing programs, such as MS Word, have such tools built in.
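
    For readers who want to check a draft script or handout programmatically, here is a minimal sketch of the Flesch-Kincaid grade-level formula in Python. The syllable counter is a crude vowel-group heuristic of my own (an assumption, not part of the official formula), so treat the output as a rough estimate; built-in checkers such as the one in MS Word are more robust.

        import re

        def count_syllables(word):
            # Crude heuristic: count groups of consecutive vowels (at least one per word).
            return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

        def flesch_kincaid_grade(text):
            # Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
            sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
            words = re.findall(r"[A-Za-z']+", text)
            syllables = sum(count_syllables(w) for w in words)
            return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

        sample = ("Altruism means helping someone at a cost to yourself. "
                  "People help more when they feel empathy for the person in need.")
        print(round(flesch_kincaid_grade(sample), 1))

    A score near 8 suggests the text should be readable by a typical eighth grader; anything well into the double digits is probably worth simplifying before a community program.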

    • Fair use rules for copyrighted material may be different. Do you have a favorite cartoon that you like to show to your classes? Is there a graph in a journal article that really illustrates a concept? The fair use principles that apply in academic settings do not necessarily extend to community settings. To determine what media can be included in a program, consider how those media will be used. For instance, does the organization want to post your program’s handouts on its webpage? Will the organization disseminate your program’s materials to others? It is prudent to review fair use policies before deciding whether materials can be used.

    Some websites have materials that are free for public use. For instance, Pixabay.com has thousands of photographs available, and it does not require attribution or the creator’s permission to use them. Other websites, such as the NOBA project (NOBAproject.com), have license agreements explaining how the material can be used and shared.

    • Plan for no PowerPoint. If you plan to use technology as part of the presentation and your program is off campus, remember that not all organizations have equipment for you to use. BYOT (Bringing Your Own Technology) may be an option. If you choose to BYOT, ask about the room setup before you arrive. Rooms can be too small for a projector, outlets may not be available, or the room setup may not be conducive to using technology. On one occasion, I was told a monitor with an HDMI cable would be available to hook up to my laptop. It was, but the monitor was much too small for everyone to see the graphics clearly. On another occasion, I was promised a projector. When I arrived, they had a projector, but no projector screen. Unfortunately, art occupied all wall space, which meant I couldn’t project onto those surfaces. Luckily, I had brought handouts, so I could improvise on the spot.

    Although I love using technology in the classroom, I rarely use it anymore when giving programs to the community. Instead, I have found that giant Post-It® notes can be great for writing quick points or drawing quick visuals. Handouts, too, can provide a summary of key points without relying on the unpredictability of technology.

    • Be prepared to give programs of varying lengths. Instructors may be used to having nearly an hour or more to give a program. However, community programs vary drastically in time allotment. Though sometimes I have an hour or more to speak, I am usually asked to give shorter (10-20 minute) programs.

    Some programs take place during an organization’s regular meeting. Their regular meeting agenda may run long, which cuts into the program time. I have had to change the length of my program on the spot. Just as it is important to know what to cut from a lecture, it is also good to know what to cut from a program.

    If you find yourself with a tiny time limit, remember these rules: 1) emphasize a single main point, and 2) provide participants with specific steps for how to obtain more information afterward. The last step is particularly important in keeping participants from turning to internet searches that surface pseudoscientific and inaccurate information.

    Tips for Finishing up a Program

    • Assess your work. Techniques that work in classrooms may not work as well in the community. Alternatively, a novel approach in the community may inspire a new teaching technique for your classroom. If at all possible, chat with attendees after you give your program. Such chats can provide insight into whether and how they will use the information they learned. For longer programs and workshops, it is also acceptable to ask participants to complete a very brief survey about your talk. (You can do this for shorter programs as well, but it may impinge on your time limit.) The assessment aspect, whether formal or informal, is vital for improving your techniques for future programs.

    • Take experiences back to the classroom. Teaching community members can augment the quality of your own classes. Students often crave real-world application of material, and these experiences—unless proprietary—can provide examples to share with your students. Additionally, these experiences can foster the community relationships necessary to have successful and unique service learning opportunities. For instance, a program on creating customer satisfaction surveys for small business owners could transform into an indirect service learning project for students in a research methods course. To maintain a relationship with the community members following a program, the instructor could suggest having students work on the project as part of a course assignment.

    • Enjoy the reward. Though teaching students and teaching the community may require different approaches, they yield similar feelings of reward. Whether teaching in the classroom or in the community, we are often providing people with their first glimpse of psychological science. In both cases, it is exciting to see those wide-eyed moments when people realize the extent to which psychology is valuable to them.

    References

    American Psychological Association. (2017). Ethical principles of psychologists and code of conduct (2002, amended June 1, 2010, and January 1, 2017). Retrieved from http://www.apa.org/ethics/code/index.aspx

    Centers for Disease Control and Prevention. (2009). Simply put: A guide for creating easy-to-understand materials. Retrieved July 24, 2017, from https://www.cdc.gov/healthliteracy/pdf/simply_put.pdf

    Halonen, J. (2011). Are there too many psychology majors? White paper prepared for the Staff of the State University System of Florida Board of Governors. Retrieved from https://www.cogdop.org/page_attachments/0000/0199/FLA_White_Paper_for_cogop_posting.pdf

    Hunt, E. (2010). The rights and responsibilities implied by academic freedom. Personality and Individual Differences, 49, 264-271. doi:10.1016/j.paid.2010.01.011

    Lilienfeld, S. O. (2012). Public skepticism of psychology: Why many people perceive the study of human behavior as unscientific. American Psychologist, 67, 111-129. doi:10.1037/a0023963

    Literacy Project Foundation. (2017). Staggering illiteracy statistics. Retrieved July 24, 2017, from http://literacyprojectfoundation.org/community/statistics/

    Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2, 175-220. doi:10.1037/1089-2680.2.2.175

    Zimbardo, P. G. (2004). Does psychology make a significant difference in our lives? American Psychologist, 59, 339-351. doi:10.1037/0003-066X.59.5.339



  • 01 Apr 2018 5:10 PM | Anonymous

    Kamil Hamaoui  (Westchester Community College)

    Empirical studies have established that the testing effect is an effective strategy for improving long-term memory (Brown, Roediger, & McDaniel, 2014; Roediger, Smith, & Putnam, 2011). In short, we can improve our ability to remember information, concepts, and skills when we test ourselves during learning. In terms of the stage model of memory, repeatedly practicing the retrieval of a memory from long-term memory into working memory consolidates it more firmly in long-term memory, making it less susceptible to future forgetting.

    Most studies of the testing effect have been conducted in the lab, under carefully controlled, artificial conditions, which calls the external validity of the effect into question. However, in recent years, researchers interested in teaching and learning have examined the applicability of the testing effect to the classroom setting. The testing effect can be used in the classroom by administering quizzes on content that students have already learned, whether through reading, lecture, discussion, or some other activity. These quizzes can be multiple-choice, fill-in-the-blank, short-answer, etc. and may be administered at the beginning of a class meeting, at the end, or integrated throughout coverage of the content. Does the periodic use of review quizzes in the classroom lead students to better learn and remember course content?  Will students who are quizzed perform better on the comprehensive exams given after a block of material or at the end of the term?

    Findings from applied studies on the testing effect are mixed, but Nguyen and McDaniel (2015) present some general conclusions in their review of the existing literature. Quizzing does improve exam performance when the exam questions are the same or similar to the quiz questions. However, it seems that there is no improvement if the exam questions test on the same topic as the quizzes, but on different concepts.

    This suggests that if we want to make maximum use of the testing effect to improve student learning, we should quiz students on all the concepts we want them to learn. As any instructor knows, however, regardless of the level of experience, this isn’t feasible. As it is, without any class time devoted to quizzing, we struggle with the issue of what content to cover in class, since we don’t have enough time to cover everything we want students to learn. This raises several questions:

    • Can quizzing serve a purpose beyond the testing effect?
    • Will having periodic review quizzes on some concepts motivate students to study outside of the classroom? If so, will the type of studying they do benefit their long-term memory of the material studied?
    • Does it make a difference if the quizzes are graded or ungraded? Will graded quizzes motivate students to study more effectively, leading to better long-term learning?

    In order to address these questions and get some answers for myself, I designed and conducted an experiment on the effects of different types of review quizzes on long-term learning in three sections of my General Psychology course at Westchester Community College. I administered periodic short-essay quizzes testing students’ (n = 75) understanding of specific concepts covered during the previous class sessions. Quizzes were scheduled and designated as counting toward the course grade (graded), not counting toward the course grade (ungraded), or potentially counting toward the course grade (pop). For the latter condition, a coin toss just prior to the quiz determined whether the quiz would be graded or not. A Latin square design was used to control for differences in the difficulty of topics and for order effects. Specifically, each quiz condition was assigned to a different topic (sensation and perception, learning, or memory) in each of the three sections, and each quiz condition was assigned to a different time in the term (first, second, or third) in each of the three sections.
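
    To make the counterbalancing concrete, here is a minimal sketch of a 3 x 3 Latin square rotation in Python. It assumes the three topics were covered in the same order in every section and that conditions were rotated cyclically across sections; the actual rotation used in the study may have differed.

        # Each quiz condition appears once per section and once per time slot across sections.
        conditions = ["graded", "ungraded", "pop"]
        topics = ["sensation & perception", "learning", "memory"]  # taught in this order

        for section in range(3):
            print("Section", section + 1)
            for slot in range(3):
                condition = conditions[(section + slot) % 3]  # cyclic shift = Latin square
                print("  Block", slot + 1, "-", topics[slot], "->", condition, "quiz")

    Reading across any section (row) or down any time slot (column), each condition appears exactly once, which is what controls for topic difficulty and order effects.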

    Unannounced practice tests consisting of short-essay questions were administered halfway through the term and at the conclusion of the term. These tests included questions on the same topics as the review quizzes, but on different concepts. I predicted that students would perform better on the topics that had been preceded by graded or pop review quizzes than on topics preceded by ungraded quizzes, thinking that students would study those topics more in preparation for the quizzes.

    What did I find? There were no significant differences in test scores between the different quiz conditions. Evidently, the type of studying that students did in preparation for graded or potentially graded quizzes was not beneficial to their long-term learning relative to the type of studying, if any, that students did in preparation for ungraded quizzes. My guess is that most students simply read over their notes for a few minutes right before the quiz while waiting for class to begin. This might have been effective for performing well on the quiz, but it did not benefit their long-term learning any more than whatever preparation (probably none) they did for ungraded quizzes. As we know, reading and understanding something in the moment is not the same as learning and remembering it in the long term. Also, “massed practice,” familiar to students as cramming, is not as effective as “distributed practice,” or spacing out one’s studying in smaller learning sessions (Brown, Roediger, & McDaniel, 2014). Ironically, at the end of the term, students reported that they thought the graded or pop quizzes were best for their personal learning because they studied more for them. This suggests that students have little insight into the studying strategies required for long-term retention of course content.

    What else did students think about the different kinds of quizzes? Beyond their erroneous belief that the additional studying they did for graded or potentially graded quizzes was good for their learning, students reported a strong dislike for the pop quizzes. They preferred predictability, knowing either that a quiz would be worth points or that it would not. If it was worth points, they reported being more motivated to study and felt rewarded for their studying. If it was not worth points, they felt less anxiety and could focus on other classes.

    So what did I learn from this study? How will it inform my teaching? I learned that ungraded quizzes are the way to go. Making quizzes graded or potentially graded does not lead students to study in ways that benefit their long-term learning any more than making them ungraded, and many students experience increased stress from graded quizzes. In addition, making quizzes graded means you have to grade them, which can take considerable time depending upon the type of questions and the size of the class.

    On the other hand, using ungraded, in-class review quizzes has multiple benefits. If exams have similar questions to the quizzes or test on the same concepts, the testing effect will boost students’ learning and performance on the exams. And, a few studies have found that ungraded quizzes actually produce a stronger testing effect than graded quizzes (Khanna, 2015; Wickline & Spektor, 2011).

    In addition, with appropriate feedback, review quizzes can serve as a valuable formative assessment tool. Students can learn what they know and what they don’t know, and how their thinking and test-taking can be improved. If quizzes consist of short-answer or short-essay questions, after students write their responses, the teacher can ask students to share what they wrote and then evaluate the responses for students in class. Many criteria or intellectual standards (Paul & Elder, 2000) factor into the quality of written work. These include accuracy, clarity, precision, logic, depth, breadth, relevance, significance, and fairness. Criteria other than accuracy, which is whether the response is correct or incorrect, often make the difference between a “good,” “very good,” or “excellent” response.

    For example, let’s say that a student writes that evolutionary psychology is the “study of traits and what they do for us.” This is basically correct, but the response is of low quality. The wording “what they do for us” lacks precision and clarity. The wording could be improved by stating that evolutionary psychology is the study of how traits “function to improve our adaptiveness to the environments in which we live.”  The instructor can point out that adaptation, or a variant of the term, is a keyword that should be included in the definition. It could also be pointed out that the response lacks relevance to psychology, which is about behavior and mental processes. To make the response relevant, the wording “behavioral and psychological” traits should be included. Moreover, it’s not just about humans. To make the definition broader, non-human animals, which are studied by comparative psychologists, should be included as well. Taking the time to give this type of detailed feedback in class teaches students about critical evaluation, an important part of critical thinking. It also gives students clear expectations for how their written work on exams and assignments will be graded.

    One last benefit of using in-class review quizzes is that they can be used to incentivize attendance and create a more orderly beginning and end to the class session. If attendance is required for the course, papers students use to write their quiz responses can be collected and used to take attendance. If attendance is not required, quizzes can be offered as all-or-none extra credit. We know how much students love extra credit! If the quiz is given at the very beginning of class, students will be encouraged to come on time. Some students will inevitably arrive late, but they will trickle in quietly without causing a disruption to the learning environment. If the quiz is given at the very end of class, students will be encouraged to refrain from packing up until the class is officially over. No more of that infernal shuffling as we get close to the end time of class!  

    I have been using in-class review quizzes for some time now, but in various ways and off and on in various classes. After completing this study and reviewing the relevant literature, I am more convinced than ever of their usefulness and of the value of leaving them ungraded. I recommend making stress-free, ungraded in-class review quizzes part of your teaching tool kit!

    References

    Brown, P. C., Roediger, H. L., & McDaniel, M. A. (2014). Make it stick: The science of successful learning. Cambridge, MA: The Belknap Press of Harvard University Press.

    Khanna, M. M. (2015). Ungraded pop quizzes: Test-enhanced learning without all the anxiety. Teaching of Psychology, 42, 174-178. doi: 10.1177/0098628315573144

    Nguyen, K., & McDaniel, M. A. (2015). Using quizzing to assist student learning in the classroom: The good, the bad, and the ugly. Teaching of Psychology, 42, 87-92. doi: 10.1177/0098628314562685

    Paul, R. W., & Elder, L. (2000). Critical thinking: Basic theory and instructional structures handbook. Tomales, CA: Foundation for Critical Thinking.

    Roediger, H. L., Smith, M. A., & Putnam, A. L. (2011). Ten benefits of testing and their applications to educational practice. In B. H. Ross (ed.), Psychology of learning and motivation. San Diego: Elsevier Academic Press.

    Wickline, V. B., & Spektor, V. G. (2011). Practice (rather than graded) quizzes, with answers, may increase introductory psychology exam performance. Teaching of Psychology, 38, 98-101. doi: 10.1177/0098628311401580



  • 01 Mar 2018 9:28 AM | Anonymous

    Ronald G. Shapiro

    Since most students who complete occasional psychology courses, and even most undergraduate psychology majors, will not enroll in graduate school in psychology or become psychology professionals, it is important to prepare these students for jobs in other fields. This article offers suggestions on how offering a psychology course designed for non-majors in lieu of introduction to psychology, making minor changes to other courses, providing different types of opportunities, and tailoring advice and recommendations can help prepare students for jobs in other fields.

    Non-Majors Psychology Course. One of the “facts” I learned in graduate school was that non-majors who earned an “A” in an introduction to psychology course, when asked to retake the final exam a year later, did not pass it (Sidney L. Pressey study reported by David Hothersall in History and Systems class, Ohio State University, circa 1977). This fact has had a huge impact on my thinking. If people aren’t going to remember it, why teach it? One might argue that it is easier to relearn material. True, but your non-majors are not very likely to do this. Instead, I recommend making a list of the items you really want non-majors to remember five years after the final exam and teaching those materials, and only those materials, to undergraduate non-majors. Be thorough in teaching them, and teach them in a variety of contexts. One way to do this would be to offer a non-majors psychology course. Structure the non-majors course around the ways students might use the material (rather than around how we structure the field with our specialties). Topics might focus on how to use psychology:

    • In society (separating “fake news” and “alternative facts” from science);

    • In marketing and advertising;

    • In working with others;

    • In structuring a work environment;

    • In understanding how a person develops from birth through death; or

    • As a potential consumer of psychological services.

    This structure would help students better use the material and see how what’s being taught might be helpful to them. In this restructured course, remember to teach only what you want the students to remember five years after the final exam.

    In Today’s Courses. Explain the material and have students complete numerous projects applying whatever you teach to real-world problems. If the material you teach is basic research that is so cutting-edge that there are no applications for it yet, have students participate in projects that help them think about how the material might be used to change lives a year, a decade, or a generation from now. This may require teaching less material, but in more depth. Show students how to become “citizen experts” (if not scientists) who continue to follow up on these topics throughout life.

    Providing Advice to Students. Truly understand the student’s objectives (and the objectives of the person paying for the student’s education) before offering advice. Early in my career, I would have advised a student that their primary objective in college is to learn all that they can from their academic departments and that everything else is secondary. For some students this is truly the case, and I would still recommend it today. For example, I have encouraged many high school students to meet faculty on their visits to college campuses and figure out how they can become involved in their research from freshman week onward. For other students, I would today argue that their best bet is to lead a very balanced life. The extracurricular activities, friendships formed, internships, and other experiences might be more valuable to them than what they learn in their academic departments. Encourage these students to take advantage of the numerous benefits provided while they are enrolled in a program (e.g., regular access to faculty, internship programs) that are harder to obtain without student status. Recommend that students learn as much about business as possible by studying I/O psychology as well as completing courses in business. Also, recommend that students learn as much about technology as their interests allow, because more and more positions will require knowledge about technology.

    Producing a Resume. You may wish to help your students prepare their resumes. Resumes for industry are vastly different from academic resumes or CVs. An industrial resume needs to ROAR (be Results Oriented and Relevant). In addition to being much shorter than academic CVs, it needs to show a potential recruiter and a potential hiring manager, in just seconds, why this applicant is better than the numerous others applying for the same job. A resume that shows real results, and that shows the applicant took the initiative to connect their knowledge and experience to the specific employer’s needs, is most beneficial. Keywords may be important for the recruiter. Showing real results (rather than job responsibilities) in a way that demonstrates to a hiring manager how those results translate into action is critical. In response to the frequently asked question “How long should a resume be?,” the answer is: long enough that the person reading it becomes more enthusiastic about the candidate with every sentence, and not so long that it bores the reader with redundant or irrelevant detail. Providing the names of faculty members (e.g., “worked in Professor Smith’s lab”) is only helpful if the reader is likely to know or have heard of Professor Smith. References would not normally be included on a resume (to protect faculty from random calls), and the words “References Furnished Upon Request” should never be included, because the point is obvious and the phrase is somewhat insulting to the reader (it says, “I do not trust you with the names of my references”).

    Writing Letters of Recommendation. You are writing a letter of recommendation, not a performance evaluation. Your job, should you choose to accept it, is to sell the student to prospective employers by pointing out his or her strengths and why the potential employer will be better off with this student (as opposed to someone else) on their team. Before deciding whether you can do this (unless you know up front that you cannot), review the student’s resume and ask the student for a list of content you might include in the letter. If you cannot use the content, explain to the student what you can do for them in a letter and suggest that there are probably others who can do a better job for them. Don’t “kill the student with faint praise.” Don’t discuss the student’s weaknesses or areas for improvement.

    The Interview. Help your students learn to communicate with potential colleagues, managers, people familiar with their work, and people not familiar with their work. In an industrial interview, applicants may meet with many people, including recruiters, potential managers, and colleagues. Be sure your students can communicate their research as well as other topics effectively. They should be able to explain their work (emphasizing their own contributions and differentiating them from the work of others) in one minute, five minutes, ten minutes, or a full-length presentation, keep to the time allocated, and leave the listener engaged, excited about the topic, and able to see how the applicant would be the best fit for the organization. One way to do this is to show how their research fits the company’s mission and requirements. I might add that the purpose of the interview is to determine whether there is a good fit between the candidate and the position, for both the applicant and the company. Accordingly, the applicant should be prepared to ask meaningful questions that will help them decide whether the position is a good fit for them, explain how they will be a real asset to the specific company, and demonstrate a thorough understanding of the company and enthusiasm for being part of it.

    Decision Making. Businesses need to get products to market in a timely fashion. Thus, decision making is simply different from decision making in academia. In academic basic research, one might want a standard of p < .05, p < .01, p < .001, and so on. In industry, decisions may be made with absolutely no evidence (depending upon the industry). If an employee is 50.01% confident in a decision based upon knowledge and research, they should be prepared to make a recommendation, because the recommendation is based upon some knowledge. Depending on the circumstances, they should also be prepared to qualify how confident they are in the decision. Rather than using p values for decision making, corporate executives may be more likely to use the 80/20 rule: you can accomplish 80% of what you want to do with 20% of the effort. So, stop the analysis and go when you are 80% confident. You can help students understand this important distinction.

    Deadlines. Deadlines are critical in business… far more so than in academia. They are real. No matter how thorough a contribution is, if it is late it may be totally useless. There may be some circumstances in which a late contribution is acceptable, usually when an even more critical process has been delayed, but the odds of this are minimal. The academic practice of deducting points for late work really doesn’t apply to much in business. A recommendation a day or a week late is not, for example, 80% or 90% as good as a recommendation delivered in a timely fashion. A more realistic way to make decisions about accepting late work would be to shuffle a deck of cards after the late work is completed and draw a card off the top. If it is, for example, an ace, accept the work. If not, don’t.
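
    In case the arithmetic behind the card rule is useful for class discussion, here is a one-line sketch of the acceptance probability it implies (no assumptions beyond a standard 52-card deck).

        # Accept late work only if the top card of a shuffled deck is an ace:
        # 4 aces out of 52 cards is an acceptance probability of about 7.7%.
        print(f"{4 / 52:.1%}")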

    Oral Communications. Communicating in business is simply different from communicating in school. For example, I learned a very bad habit in graduate school: asking questions to show that I understood the work and to expose defects in a presenter’s thinking. One of my best managers ever pointed this out to me. His recommendation was to 1) ask my questions only if everyone else had finished theirs and my question had not already been asked, and 2) ask questions only for clarification; otherwise, address the questions with the presenter offline. Be sure that your students understand this important distinction.

    Written Communications. In academia, we tend to write long journal articles explaining numerous details about our work. In industry, a brief executive summary is the more important means of communication. Executives trust that we know how to do our work, and we may not need to demonstrate to them how we derived our results. When sending written communication, keep the receiver in mind and anticipate their schedule, mindset, and organizational style (e.g., details versus quick summaries). Chances are that an executive will be very busy, rushed, and stretched thin, in which case having results and next steps up front will go a long way. Keep thorough lab notes. Depending on the culture expected by your executive team, write the detailed report as backup or skip it altogether. In my first report on a study I did at a major corporation, two of us were presenting. My colleague was to present part 1; I was to present parts 2 and 3. Somehow, when he finished, I went right into part 3. No one cared that the details were left out. Indeed, the comments I received from my client were completely complimentary… that my department had learned how to present more concisely.

    Research Involvement. Offer your students an opportunity to work with you on research. This will help them to develop great skills. Be sure that they can explain what the research was all about, their role in it, and how that research was better because of their participation (as opposed to that of another person). Be sure they can explain this very succinctly as well as in detail.

    Perception of Degree Value. I’ve heard professionals, even a vice president in a major corporation, say “I was a psychology major and it was useless to me. It did not help me get a job.”  That statement may be true. I did point out to her that while the degree may not have helped her secure her first position with the business, what she learned probably helped her to advance very quickly from an entry level position to a high level executive position. She agreed. My recommendation here is to clearly explain to your students what a psychology degree may and may not do for them in the business world, generally when they are considering the major. Explain this at the beginning of the semester for each course. Explain again, at the end of the semester, how the content should help them. In between assign work that will help the students to explain how the content might apply to the business world.

    Seminars. Invite alumni who went into industry 1, 5, 10, and 20 years ago to offer seminars at your school showing how their degrees have helped them and how current students might apply their own degrees.

    Internships. Completing one or two successful internships or co-ops can be an extremely valuable learning experience for students. If they perform well, it may also be the key to having a great job waiting for them on graduation day.

    In summary, I would say that a psychology major can be an extremely valuable tool for a professional throughout their career if they make the most of it by becoming deeply involved with their department, research, course work, and internships. If, on the other hand, they focus on taking mostly large lecture courses to meet the minimum degree requirements, they will be minimizing the value of their degree.

     

    Author note: I would like to thank Industrial Consultant Dr. Margarita Posada Cossuto for helpful comments.


  • 01 Feb 2018 9:27 AM | Anonymous

    Jennifer A. Oliver (Rockhurst University)

    The use of case studies is a common active learning strategy employed in psychology. Case learning is useful for developing critical-thinking skills (Krain, 2010), and for increasing students’ motivation and interest in course material (McManus, 1986a; McManus, 1986b). Researchers have described many positive outcomes of using case studies. These include helping abstract theoretical information become concrete, facilitating understanding; reinforcing course concepts as students analyze, infer, and examine relationships (Graham & Cline, 1980); and integrating students’ learning as they incorporate theory into practice and make practice integral to theory (McDade, 1995).

    But most of the work examining the use of case studies relies on pre-written cases. While I wanted to use cases in my Psychology of Disability course, the only cases that I could find focused either on abnormal psychology or on special education, and neither area was a good fit for this course. So, I decided to have students write their own cases. Few studies have examined having students write their own cases, but student-generated case studies have been used successfully at the undergraduate level in business and science, as well as in medical training (Yurco, 2014). In fact, Yurco reported that when students created their own cases in an introductory neurobiology course, they developed greater confidence, ownership of the learning process, a deeper understanding of the material, and improved critical thinking skills. McManus (1986b) reported that having student groups compose a problem-focused case and generate potential solutions to the problem in the case helped students consolidate course concepts in an adolescent psychology course.

    In this essay, I describe an applied project that I use in my undergraduate Psychology of Disabilities course, along with information on students’ performance and their views of the project. The Psychology of Disabilities is a 4000-level class (junior and senior level). All of our 4000-level courses require an assignment that involves an integrated literature review, but I also wanted to incorporate application into the course at a broader level than exam questions alone.

    The Project

    In the Psychology of Disabilities course, students chose a disability and wrote their own case study of an individual with that particular disability. The project included:

    • An integrative literature review (minimum of 4 double-spaced pages) describing the disability, including psychological and behavioral characteristics, prevalence rate, developmental changes as an individual with the disability moves from childhood to adolescence to adulthood, (possible) causes of the disability, and at least three sociocultural factors chosen from: race/ethnicity, gender, socioeconomic status, and differences among regions of the world. Students had to cite at least eight credible academic sources, with at least two of the sources being empirical journal articles. They were allowed to use one internet source that summarizes information on the disability; however, that source had to be credible and written by professionals knowledgeable about the disability. I provided students with examples of both acceptable and unacceptable sources. Students turned in rough drafts of this section at midterm for feedback before the final project was due at the end of the semester.

    • A case study of a fictional individual with that disability at two contrasting ages (minimum of 1 full page, single-spaced, per age). In keeping with the developmental focus of the class, students could use any ages between preschool and young adulthood (up through the early 20s). In their case study, students needed to apply the characteristics, described in the literature review, that an individual with that disability would exhibit at the chosen ages, and include a behavioral and/or a verbal interaction between the individual and at least one other person.

    • A complete description of two possible interventions/treatments that would be appropriate for their fictional individual, including the effectiveness of each intervention/treatment. In addition, students discussed which age from their case each intervention/treatment would be most appropriate for and why.

    An example of a case study and two additional completed projects were available for the students to use as models.

    Student Performance

    In order to determine how well students performed on the assignment, I evaluated the grades on each section of the assignment from 56 students (28 each, in Spring 2014 and in Spring 2015). The percentages of grades for each area of the assignment were as follows:

    Grade        Case Study    Literature Review    Treatment/Intervention
    A            58.9          60.7                 51.8
    B            32.2          17.9                 30.3
    C             5.4          16.0                 12.5
    Below C       3.5           5.4                  5.4

    Overall, students performed well on all three areas of the assignment, with at least 78% earning an A or B on each portion. Over 90% of the students did quite well on the case study portion. Common areas where students missed points were not providing an example of behavioral and/or verbal interactions between the individual and another person, not including all of the characteristics described in the literature review in the case, or not meeting the length requirement. A higher percentage of students received a C or lower on the literature review portion than on the other two sections of the project, which was surprising since they received feedback on a previous draft of this section of the project. Common difficulties on the literature review included not fully describing the disability, choosing inappropriate sources (especially an over-reliance on internet sources), and lack of integration of information from multiple sources. In addition, students were asked to describe three sociocultural factors chosen from: race/ethnicity, gender, socioeconomic status, and differences among regions of the world; students often ignored the actual sociocultural factor choices given in the assignment and came up with their own factors. This was the first psychology course that required a writing assignment this in-depth for some students, which may explain the lower scores on this section. A few students did not incorporate feedback that was provided on their draft. If students lost points on the treatment/intervention section, it was typically because they either did not fully describe the treatment/intervention or failed to discuss the effectiveness of the treatment/intervention. A few students did not discuss how the treatments/interventions related to the case study portion of the assignment.

    I also wanted to assess students’ views of the project. After students had turned in their final project, they completed a 3-item anonymous rating of the project. Each question was rated on a 5-point Likert scale (1=strongly disagree, 5= strongly agree). Students’ average ratings were quite high:

    • Completing the case study project increased my understanding of disabilities, M = 4.32 (SD = .69, range 3-5)

    • The case study project was a useful way to help me learn the class material, M = 4.29 (SD = .73, range 3-5)

    • I rate the project as interesting, M = 4.38 (SD = .62, range 3-5)

    Students’ anonymous ratings of the case study project were quite high, with the lowest rating given for any of the three questions being neutral. Thus, this project may be one way to get students more actively engaged in learning about disabilities. In addition to the high ratings, numerous students offered unsolicited comments on the course evaluations indicating that they enjoyed the project and that it helped them learn to apply course material.

    I was also interested in whether completing a big application project was related to student performance on application-based material on the exams. There are three exams in the course. Each exam has nine application-based multiple-choice questions. I give Exam 1 before students have completed any of the project. I give Exam 2 after students have completed a draft of the literature review but before they have written the case study portion. Students take Exam 3 after they have completed the final project. I looked at these application-based multiple-choice questions on each exam to see if there was improvement after completing the case study.

    Exam      Average % correct
    Exam 1    59.2
    Exam 2    60.4
    Exam 3    81.6

    Students, on average, performed better on the application-based multiple-choice questions after completing the case study. While there was no difference between scores on Exams 1 and 2, t(8) = -1.976, p = .084, there were significant differences between performance on Exam 1 and Exam 3, t(8) = -3.086, p = .015, and between Exam 2 and Exam 3, t(8) = -3.117, p = .014.
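
    For readers who want to run a similar analysis on their own exams, here is a minimal sketch of these paired-samples t-tests in Python using scipy. It assumes, as the df = 8 suggests, that the unit of analysis is the nine application-based questions, with each value being the percentage of students answering that question correctly; the numbers below are placeholders for illustration, not the data from this study.

        from scipy import stats

        # Hypothetical percent-correct values for each of the nine application questions.
        exam1 = [55, 62, 48, 70, 58, 61, 52, 66, 60]
        exam2 = [57, 63, 50, 71, 60, 62, 55, 65, 61]
        exam3 = [78, 85, 74, 90, 80, 83, 77, 86, 81]

        comparisons = [("Exam 1 vs. Exam 2", exam1, exam2),
                       ("Exam 1 vs. Exam 3", exam1, exam3),
                       ("Exam 2 vs. Exam 3", exam2, exam3)]

        for label, a, b in comparisons:
            result = stats.ttest_rel(a, b)  # paired-samples t-test
            print(f"{label}: t({len(a) - 1}) = {result.statistic:.3f}, p = {result.pvalue:.3f}")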

    Performance on the application-based multiple-choice questions improved after completion of the case study project. Students may simply be getting better at application-based multiple-choice questions with repeated practice on the exams, but completing the case study project may also have helped them learn to apply information.

    Suggestions for Using the Project in Other Psychology Courses

    While I designed this project for a specific course, it could easily be adapted for use in other Psychology classes, either with or without a literature review, such as:

    • Abnormal Psychology–students pick (or are assigned) a psychiatric disorder and create a fictional individual with that disorder, describing the symptoms specific to the characteristics (age, race/ethnicity, etc.) of the individual. Students could also discuss a specific theoretical orientation toward treatment.

    • Community Psychology–have students create a case about an individual, demonstrating how that individual is connected to his/her environments and how specific problems within the individual’s community have an impact on the individual.

    • Developmental Psychology–have students develop a fictional individual and describe how that individual changes while passing through the different developmental periods. For example, in a child psychology class, describe what that individual looks like in early childhood compared to middle childhood. Or students could use one developmental period (e.g., adolescence) and describe how physical, cognitive, and social-emotional development interact at that age for that particular individual.

    • Health Psychology–students could create a case study about an individual with a specific health issue, discussing how the individual adjusts to and copes with the issue, what behaviors could protect the individual’s health, what behaviors harm the individual’s health, and how those behaviors could be changed.

    Concluding Thoughts

    I have found this project to be a fun, engaging way to help students learn about disabilities. It demonstrates that the majority of students can apply information and describe how characteristics of disabilities can change developmentally. In addition, students appear to enjoy the assignment and it actually is more fun to read and grade than traditional literature reviews.

    References

    Graham, P.T, & Cline, P.C. (1980). The case method: A basic teaching approach. Theory into Practice, 19(2), 112–116.

    Krain, M. (2010). The effects of different types of case learning on student engagement. International Studies Perspectives, 11, 291-308.

    McDade, S.A. (1995). Case study pedagogy to advance critical thinking. Teaching of Psychology, 22(1), 9-10.

    McManus, J.L. (1986a). “Live” case study/journal record in adolescent psychology. Teaching of Psychology, 13(2), 70-74.

    McManus, J.L. (1986b). Student composed case study in adolescent psychology. Teaching of Psychology, 13(2), 92-93.

    Yurco, P. (2014). Student-generated cases: Giving students more ownership in the learning process. Journal of College Science Teaching, 43(3), 54-58.


  • 15 Jan 2018 4:43 PM | Anonymous
    Harwood, E.A., & Marsano, M. (Rivier University)

    Teaching in the age of millennial students is a challenge that should be embraced by all faculty, but what does this entail? Present-day students have grown up with technology as a basis for communication and understanding. Termed “digital natives” by Marc Prensky (2001), millennial students spend a great deal of time communicating through technology and are used to having information at their fingertips. Sending an average of 100 texts a day (Lenhart, 2012), the millennial student expects a near-immediate response to comments and can easily find the answer to a question by asking Google. Because millennials have a completely different experience with information than previous generations, especially the ease with which it can be accessed, students may wonder why we don’t instantly respond to email or provide our lecture notes before class (van der Meer, 2012). Taking notes may seem archaic and pointless if material is always available. Nevertheless, teaching students the skills necessary to navigate a surplus of information and having them recognize the importance of quality over quantity are now essential components of college curricula.

    How many times have your students asked you, “Is this going to be on the test?” Although this may seem an annoying question, students may be searching for clues about the essential concepts of the class. Main points that are crystal clear to us may not be as clear to our students (van der Meer, 2012). As experts in our field, we have already created our own organizational frameworks for the concepts we teach. We have formed deep, complex connections that have helped us master the material and make it seem easy for us to understand, while it may remain difficult for our students (Ambrose, Bridges, DiPietro, Lovett, & Norman, 2010). How can we scaffold our “expert” frameworks so that students can build their own connections among course concepts and past experiences? In this essay, we describe several teaching techniques for creating these frameworks, from the way we encourage effective note-taking to the way we speak and incorporate multimedia.

    Why do students struggle with note-taking? Effective note-taking requires extensive cognitive resources, especially working memory capacity (Stefanou, Hoffman, & Vielee, 2008). Listening to the professor while simultaneously writing notes is difficult for many students (van der Meer, 2012). Differences in working memory resources may put some students at a distinct disadvantage depending on the types of notes they take (Bui, Myerson & Hale, 2013). Students with documented and undocumented learning disabilities may also face impediments. If the cognitive load is too great, students may not be able to contextualize or personalize the notes (Stefanou et al., 2008). Some may furiously write down everything you say, while others may copy down only what’s on the PowerPoint slides. Others may just sit back and wait until you put the slides online.

    Nevertheless, writing an idea down can help with long-term retention (Bui et al., 2013). Writing about a concept necessitates active recall and allows the formulation of clearer thoughts and more connections (Bui et al., 2013). Is it better to attempt to transcribe a lecture or to take more condensed, structured notes? Transcribing lectures on a computer may help initially with recording more notes and with immediate recall of facts, but taking organized notes produces more durable retention after a 24-hour delay (Bui et al., 2013). However, when students are allowed to study their transcribed lectures, recall is superior, especially for those with lower working memory capabilities (in a study involving transcription of an 11-minute lecture and a 24-hour delay) (Bui et al., 2013). The attention necessary to transcribe a full lecture was not tested; however, this research (Bui et al., 2013) once again reminds us that students differ in their capabilities, and what works for one may not work for another.

    Brief, targeted interventions can improve note-taking. Nakayama, Mutsuura, and Yamamoto (2016) provided students with two short instructional sessions on note-taking techniques, once at the beginning and again at the mid-point of a course, which included examples of good notes. This instruction increased student metacognition with regard to note-taking and improved the quality of notes over the course of the semester. Deliberately reviewing and restructuring notes can significantly improve grades as well (Cohen, Kim, Tan, & Winkelmes, 2013). For example, outlining, summarizing, and drawing connections between different concepts require active engagement and lead to better test performance than review alone (Cohen et al., 2013).

    Another technique for note-taking that utilizes scaffolding is directed notes (Harwood, 2016).  Similar to a review guide for an exam, directed notes act as a review guide for that day’s class. Given at the beginning of the class period, directed notes consist of a list of questions and activities about that day’s topics with plenty of space for students to write in their answers. The following are examples from a few different courses:

    1. Summarize how neurons communicate. How is it like firing a gun? Use the following terms in your summary: Action Potential, Absolute Refractory Period, Threshold, All or None Response.

    2. Now that we’ve covered the functions of the different brain structures, create your own concept map using your notes.

    3. What advice would you give our aging population given what you know about adult development?

    4. Describe how each of the following individuals expanded our understanding of attachment.

       Name                 Contributions
       John Bowlby
       Harry Harlow
       Konrad Lorenz        Imprinting; Critical Period
       Mary Ainsworth       Strange Situation Task

    5. Lambert (1992) proposed 4 therapeutic factors that lead to client improvement. These are:

       The Big Four                              Variance      Examples
       1. Client/Extra Therapeutic Factors
       2. Therapeutic Alliance
       3. Placebo, Hope, Expectancy
       4. Therapeutic Techniques

    6. Write down your immediate reactions to this individual’s story of heroin addiction.

    As you can see, directed notes point students toward important concepts and assist them in creating their own examples and applying the material. When given guidelines, but not explicit notes, students are encouraged to form meaningful connections among the main ideas identified by the professor. Some important guidelines to keep in mind when creating directed notes for your course are to include different types of questions and response formats, to leave plenty of space for students to write, and to ensure that directed notes are assimilated into the course in some way, whether through group work or as a test review. Psychology is so pertinent to everyday life that it is rife with ways to make the material personally meaningful (“If you had to take an anti-depressant, which one would you take and why?”). Take advantage of this to further students’ critical thinking and interest in the field.

    While professors may be tempted to think that directed notes and guided notes are synonymous, there is a distinction between the two. Guided notes are an alternative to traditional PowerPoint handouts in which some information is left out to encourage attendance (Barbetta & Skaruppa, 1995). Results among the college population are mixed on whether guided notes provide advantages, above and beyond complete PowerPoint slides, for test performance (Neef, McCord, & Ferreri, 2006). Guided notes may be effective in presenting information, but they may fail to encourage students to make connections beyond what’s on the slides.

    Note-taking techniques are one way that scaffolding can be achieved in the classroom, allowing students to organize and detail their thoughts in written form. In addition, the way information is presented to students provides another opportunity for framing it. For example, one can provide organizational cues during class, such as explicit language that differentiates main points (“Carl Rogers identified 3 core conditions for a successful therapeutic relationship. The first is unconditional positive regard…”). Further, one can provide transitional language that encourages students to refocus on a new idea and cues the type of notes to take and their organization (“Now that we understand the structure of a neuron, let’s discuss how neurons communicate”) (Titsworth, 2004). We can also encourage students to elaborate beyond what we have explicitly covered, since the more information students add to their notes, the higher their scores on applied questions (Stefanou et al., 2008). For example, after explaining a concept or definition, I (Harwood) give students a few moments to write down their own examples (“Give an example of an empathic response to a friend’s problem”) and then have several share with the class. Five-minute writing prompts on a class topic can also foster generative notes and class conversation (“Based on what we’ve covered so far, why do you think heroin is so hard to quit?”).

    Using technology as a tool for creating conceptual frameworks in a course can also be effective with millennial students. PowerPoint slides are one possible method for scaffolding information and cuing students on how to organize their notes (Stefanou et al., 2008). With the integration of technology starting in K-12 schools (Ruggiero & Mong, 2015), students prefer, and may even expect, PowerPoint slides (Landrum, 2010). While students may want these slides before class (Babb & Ross, 2009; Landrum, 2010), and providing them may increase class participation for those who typically participate (Babb & Ross, 2009), it does not appear to aid test performance (Babb & Ross, 2009), final grades (Bowman, 2009), or the addition of new ideas to one’s notes (Stefanou et al., 2008). We find that for many students, providing slides before class can decrease interest and stunt conversation. My (Harwood) compromise is to provide slides after we have finished the chapter, for students to fill any gaps in their notes.

    Finding the right balance between incorporating PowerPoint or other presentation media into a lecture and meeting students’ needs is a necessary consideration during lesson planning. Some believe that PowerPoint slides may condense the material too much, acting as “CliffsNotes” for the class, or that their linear presentation may prevent “big picture” thinking (Kirova, Massing, Prochner, & Cleghorn, 2016). It may be more effective to think of multimedia presentation technology as an extension of conveying main points and transitional language, rather than as the sole conveyor of information during a lecture. As much as we tend to lump students together as “millennials,” it is important to recognize their individual learning capacities and the need for a variety of teaching techniques.

    If you choose to use PowerPoint as a scaffolding technique, there are some common mistakes to avoid. First, don’t use your slides as “cue cards” (Gardner & Aleksejuniene, 2011). Slides should be made with the students in mind, rather than the instructor. When information is simply read off a slide, it impedes understanding by overloading working memory and inhibiting students’ opportunities to create connections; in addition, students tend to lose interest quickly. Second, don’t overburden the slides with text (Gardner & Aleksejuniene, 2011; Stefanou et al., 2008). Providing too much information on a slide may result in students copying information rather than recording their own thoughts (Stefanou et al., 2008). Limiting the amount written on the slides gives students the opportunity to reason through information, which can promote generative learning. Third, integrating images with verbal descriptions is more effective for learning than text alone (Gardner & Aleksejuniene, 2011). A picture really can say a thousand words! Seeing the devastating physical effects of methamphetamine use in a series of mug shots is much more powerful than reading about them or hearing a recitation of symptoms from the instructor. Fourth, incorporate video clips and other media that naturally appeal to the millennial student (Gardner & Aleksejuniene, 2011). Identifying the symptoms of cocaine abuse from a movie scene is an excellent way to elicit interest from students. Further seize the teachable moment by explicitly discussing how these images and clips relate to course concepts (“What properties of methamphetamine lead to these physical changes?” or “What symptoms are the characters showing that indicate stimulant use?”). Students may not automatically see these connections on their own.

    PowerPoint slides, organizational cues, and transitional language all aid students in creating their own class notes. Note-taking is a skill often overlooked by college educators, who assume their students already know how to do it; in a traditional lecture format, only a small amount of content is accurately captured in student notes (Kiewra, 1985). More than just a “recording technique” (van der Meer, 2012, p. 13), taking notes and reviewing them helps students reconstruct what they have learned and make it more personally meaningful. This actively engages students with the material and increases retention (Bohay, Blakely, Tamplin, & Radvansky, 2011; Cohen et al., 2013; Kobayashi, 2006). Note-taking is a skill that will follow students long after they have left the classroom, giving them an advantage in the workplace by preventing mistakes and saving time.

    Regardless of the format an instructor chooses to use, it is important to remember that millennial students will benefit from exemplified note-taking and scaffolded frameworks of knowledge. Considering the technology-centered background of today’s millennial student, we would be wise to incorporate media presentations in the classroom because they garner more attention. However, this must be tempered with the understanding that our main focus must be on generative learning and helping students make meaningful connections. Inspired teaching is more than content delivery. It is student-centered and focuses on cultivating skills that lead to a successful life.

    References

    Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., & Norman, M. K. (2010). How learning works: Seven research-based principles for smart teaching. San Francisco, CA: John Wiley & Sons, Inc.

    Babb, K.A., & Ross, C. (2009).  The timing of online lecture slide availability and its effect on attendance, participation and exam performance. Computers & Education, 52, 868-881. doi:10.1016/j.compedu.2008.12.009

    Barbetta, P.M., & Skaruppa, C.L. (1995). Looking for a way to improve your behavior analysis lectures? Try guided notes. The Behavior Analyst, 18(1), 155-160.

    Bohay, M., Blakely, D. P., Tamplin, A. K., & Radvansky, G. A. (2011). Note taking, review, memory, and comprehension. American Journal of Psychology, 124(1), 63-73. doi: 10.5406/amerjpsyc.124.1.0063

    Bowman, L. L. (2009). Does posting PowerPoint presentations on WebCT affect class performance or attendance? Journal of Instructional Psychology, 36(2), 104-107.

    Bui, D.C., Myerson, J., & Hale, S. (2013). Note-taking with computers: Exploring alternative strategies for improved recall. Journal of Educational Psychology, 105(2), 299-309. doi: 10.1037/a0030367

    Cohen, D. D., Kim, E., Tan, J., & Winkelmes, M. (2013). A note-restructuring intervention increases students’ exam scores. College Teaching, 61(3), 95-99. doi: 10.1080/87567555.2013.793168

    Gardner, K., & Aleksejuniene, J. (2011). PowerPoint and learning theories: Reaching out to the millennials. Transformative Dialogues: Teaching & Learning Journal, 5(1), 1-11.

    Harwood, E. (2016). A Strategy for Active Engagement in the Classroom. In W. Altman, L. Stein, & J. E. Westfall (Eds.), Essays from E-xcellence in Teaching (Vol. 15, pp.  1-4). Retrieved from the Society for the Teaching of Psychology Web site: http://teachpsych.org/ebooks/eit2015/index.php.

    Kiewra, K. A. (1985). Providing the instructor's notes: An effective addition to student notetaking. Educational Psychologist, 20(1), 33-39. doi: 10.1207/s15326985ep2001_5

    Kirova, A., Massing, C., Prochner, L., & Cleghorn, A. (2016). Shaping the 'habits of mind' of diverse learners in early childhood teacher education programs through PowerPoint: An illustrative case. Journal of Pedagogy, 7(1), 59-78. doi:  10.1515/jped-2016-0004

    Kobayashi, K. (2006). Combined effects of note-taking/reviewing on learning and the enhancement through interventions: A meta-analytic review. Educational Psychology: An International Journal of Experimental Educational Psychology, 26(3), 459-477. doi: 10.1080/01443410500342070

    Landrum, R. E. (2010). Faculty and student perceptions of providing instructor lecture notes to students: Match or mismatch? Journal of Instructional Psychology, 37(3), 216-221. Retrieved from http://www.projectinnovation.biz/jip_2006.html.

    Lenhart, A. (2012, March 19). Teens, smartphones & texting. Retrieved March 16, 2017, from Pew Research Center: Internet, Science & Tech Web site: http://www.pewinternet.org/2012/03/19/teens-smartphones-texting/#

    Nakayama, M., Mutsuura, K., & Yamamoto, H. (2016). Students’ reflections on their learning and note-taking activities in a blended learning course. The Electronic Journal of eLearning, 14(1), 43-53.

    Neef, N.A., McCord, B.E., & Ferreri, S. J. (2006). Effects of guided notes versus completed notes during lectures on college students’ quiz performance. Journal of Applied Behavior Analysis, 39(1), 123-130. doi: 10.1901/jaba.2006.94-04

    Prensky, M. (2001). Digital natives, digital immigrants. On the Horizon, 9(5), 1-6.

    Ruggiero, D., & Mong, C. J. (2015). The teacher technology integration experience: Practice and reflection in the classroom. Journal of Information Technology Education: Research, 14, 161-178.

    Stefanou, C., Hoffman, L., & Vielee, N. (2008). Note-taking in the college classroom as evidence of generative learning. Learning Environment Research, 11, 1-17. doi: 10.1007/s10984-007-9033-0

    Titsworth, B. S. (2004). Students' notetaking: The effects of teacher immediacy and clarity. Communication Education, 53(4), 305-320.

    Van der Meer, J. (2012). Students’ note-taking challenges in the twenty-first century: Considerations for teachers and academic staff developers. Teaching in Higher Education, 17(1), 13-23. http://dx.doi.org/10.1080/13562517.2011.590974

  • 18 Dec 2017 4:01 PM | Anonymous
    Help Sheet Content Predicts Test Performance


    Mark R. Ludorf and Sarah O. Clark
    Stephen F. Austin State University

    Readers of E-xcellence in Teaching know the importance of finding the best teaching methods and techniques to reach students. Although instructors rightfully seek to improve their teaching to enhance student learning, too much focus is often placed on enhancing “input” and not enough on enhancing the fidelity of “output.” That is, instructors should not only explore methods that make them better teachers, but also consider innovative methods to better measure what students have learned.

    Professors regularly confront the challenge of teaching to a student population with diverse levels of academic ability. To address such diverse ability, instructors have implemented various pedagogical methods, many of which are time-consuming and tedious. One method instructors have used to address diverse learning abilities is to allow students to access information during a test. Some instructors limit the amount of information that is accessible (e.g., to an index card or a standard sheet of paper), while others allow access to an unlimited amount of information (i.e., “open book”).

    Ludorf (1994) allowed students to select the amount of information they could access on each of five statistics tests. Results showed significantly higher average test performance (72% versus 62%) when less information was accessed than when more information was accessed, a result consistent with previous findings (Boniface, 1985).

    During the last three decades, numerous researchers (e.g., Dorsel & Cundiff, 1979) have explored the role of help sheets (also known as cheat sheets or crib sheets) and how their use is related to test performance (Dickson & Bauer, 2008; Dickson & Miller, 2005; Hindman, 1980; Visco, Swaminathan, Zagumny, & Anthony, 2007; Whitley, 1996), learning (Dickson & Bauer, 2008; Funk & Dickson, 2011), and anxiety reduction (e.g., Drake, Freed, & Hunter, 1998; Erbe, 2007; Trigwell, 1987). Overall, the results regarding help sheet use and the variables investigated have been mixed.

    One aspect of help sheets that has received little attention is the relationship between the content of a help sheet and test performance. Most of the research cited above examined the relationship between test performance and whether or not a student used a help sheet. Only a few studies (Dickson & Miller, 2006; Gharib, Phillips, & Mathew, 2012; Visco et al., 2007) have explored how the specific content of a help sheet is related to performance.

    Dickson and Miller (2006) found significantly higher test performance when students used an instructor-provided help sheet compared to a student-provided help sheet. However, the result may be confounded, as help sheet condition may have varied systematically with the amount of studying students did. Visco et al. (2007) examined student-generated help sheets and concluded that students likely need additional direction on what content to include on a help sheet in order to enhance performance. Finally, Gharib et al. (2012) examined the quality of students’ help sheets and found a reliable, positive relationship between the quality of the help sheet content and test performance, where quality was measured by rating each help sheet for organization and amount of detail.

    To summarize the relevant research, the use of help sheets is not reliably or consistently related to student performance, learning, or anxiety levels. Moreover, help sheet quality appears to vary across students, and such variation may explain the mixed results. Thus, help sheet content should be examined more systematically.

    The current study provided a systematic exploration of whether characteristics of help sheet content (e.g., overall quality, inclusion of process information, density of information) were related to test performance. Results of the study may be used to provide students with guidance (Visco et al., 2007) when constructing a help sheet in order to enhance performance.

    Method

    Participants

    Participants (N = 21) were students enrolled in a required junior-level psychological statistics course. Other sections of the course were taught by different instructors; students who enrolled in this section were unaware of the assessment that would be conducted. A majority of the participants were women. No other demographic information was collected.

    Materials

    Students created a one-page 8.5 × 11 in. [21.6 × 28 cm] help sheet to use on each test. The help sheet could contain any information a student wanted to include and both sides of the sheet could be used. Students were informed that help sheets would be collected.  Both sides of each help sheet were scanned to create an electronic copy. All help sheets were returned when the tests were returned.

    Procedures

    Students were required to construct a help sheet for each test, though there was no requirement to use the help sheet. Based on informal observation during the test, all students appeared to use the help sheet to some degree.

    Tests in the statistics course were all problem-based and were graded on a 100-point scale. Student help sheets were collected, scanned, and rated by two raters on the variables of interest described below. Both raters were blind to students’ test performance at the time the ratings were made.

    Variables of interest. Help sheets were evaluated on the following variables:

    • Overall Quality (0–4, with 4 being the highest quality)

    • Verbal Process information (i.e., instructions; 1 = very informational, 3 = neutral, 5 = not very informational)

    • Numeric Process information (i.e., solved problems; 1 = very informational, 3 = neutral, 5 = not very informational)

    • Density of information (rated in deciles, 10–100%)

    • Organization of information (1 = very organized, 3 = neutral, 5 = very unorganized)

    • Use of Color (present or absent)

    • Submission Order (ordinal position in which the test was submitted)

     

    Results

    Analyses

    The analyses were based on students’ help sheets and test performance from a single test. Interrater reliability was computed for the two raters across the scales described above. Interrater reliability ranged from moderate to high, from .521 (Organization) to .978 (Density).
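
    The article does not name the reliability statistic used; as a minimal sketch under that assumption, agreement between the two raters on each scale could be checked with a simple Pearson correlation. The table layout, column names, and values below are purely illustrative, not taken from the study's data.

        import pandas as pd

        # Hypothetical layout: one row per (help sheet, rater) pair; values are made up.
        ratings = pd.DataFrame({
            "sheet":   [1, 1, 2, 2, 3, 3],
            "rater":   ["A", "B", "A", "B", "A", "B"],
            "quality": [4, 3, 2, 2, 1, 2],
            "density": [40, 50, 80, 70, 90, 90],
        })

        def interrater_r(df: pd.DataFrame, scale: str) -> float:
            """Pearson correlation between rater A and rater B on one rating scale."""
            wide = df.pivot(index="sheet", columns="rater", values=scale)
            return wide["A"].corr(wide["B"])

        for scale in ["quality", "density"]:
            print(scale, round(interrater_r(ratings, scale), 3))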

    Help sheet ratings from the two raters were averaged and then regressed against students’ test scores to determine which characteristics of the help sheets predicted test performance. Results showed that higher-quality help sheets predicted higher test performance (b = 33.20, p < .001), as did lower density of information (b = -.35, p = .05). Moreover, higher verbal process scores were associated with lower test performance (b = 13.14, p < .01). None of the other variables were related to performance (p > .05).
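
    To make the analysis concrete, here is a minimal sketch of this kind of multiple regression (averaged ratings predicting test scores) using the statsmodels library. The data are synthetic and the variable names are assumptions for illustration only; they are not the study's data or results.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        # Purely illustrative synthetic data -- not the study's data.
        rng = np.random.default_rng(0)
        n = 21  # same sample size as the study; the values themselves are made up
        avg_ratings = pd.DataFrame({
            "quality":        rng.uniform(0, 4, n),     # averaged 0-4 quality rating
            "verbal_process": rng.uniform(1, 5, n),     # averaged 1-5 scale
            "density":        rng.uniform(10, 100, n),  # averaged density (% of page)
        })
        test_scores = 50 + 8 * avg_ratings["quality"] + rng.normal(0, 10, n)

        # Regress test scores on the averaged help sheet ratings.
        X = sm.add_constant(avg_ratings)
        fit = sm.OLS(test_scores, X).fit()
        print(fit.params)   # unstandardized coefficients (b)
        print(fit.pvalues)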

    Discussion, Conclusion and Recommendations

    Results of the preliminary analyses suggest that it is not enough just to consider whether a student has access to a help sheet or not, but rather a careful examination of the help sheet content is required. Similar to Gharib et al. (2012), overall quality of the help sheet was found to be a very important characteristic of the help sheet. As overall quality increased, test scores also increased.

    Density of information was also significantly related to performance. Although not the strongest effect, having less information on the help sheet predicted higher performance. Such a pattern is consistent with previous research (Visco et al., 2007) and may indicate that density of information is an inverse proxy for learning. That is, students who have a robust understanding of the material do not need to include as much information and so create a less dense help sheet, whereas students who lack that understanding must include as much information as possible to compensate, creating a high-density help sheet.

    One surprising finding was that students who included more verbal process information, such as instructions on how to perform certain procedures, scored lower than students who included less of this information. Similar to the density argument above, it could be that these students included more verbal process information because they were not comfortable completing such problems without it.

    Finally, in examining the help sheet research, there are two notable issues. First, help sheets do not appear to facilitate student performance in courses involving mostly content knowledge, such as abnormal psychology (Hindman, 1980), developmental psychology (Dickson & Miller, 2005, 2006), or social psychology (Whitley, 1996). However, when a course involves more process than content knowledge, as in the current course or in other studies of statistics (Ludorf, 1994; Gharib et al., 2012) or engineering (Visco et al., 2007), students’ test performance appears to be related to help sheet content. Second, given the research showing that the content of a help sheet is related to test performance, we join Visco and colleagues in calling for instructors to become more involved in help sheet construction so that students of all abilities have a high-quality help sheet.

    References

    Boniface, D. (1985). Candidates’ use of notes and textbooks during an open-book examination. Educational Research, 27(3), 201-209.

    Dickson, K. L., & Bauer, J. (2008). Do students learn course material during crib card construction? Teaching of Psychology, 35, 117-120.

    Dickson, K. L., & Miller, M. D. (2005). Authorized crib cards do not improve exam performance. Teaching of Psychology, 32, 230–232.

    Dickson, K. L., & Miller, M. D. (2006). Effect of crib card construction and use on exam performance. Teaching of Psychology, 33, 39–40.

    Dorsel, T. N., & Cundiff, G. W. (1979). The cheat-sheet: Efficient coding device or indispensable crutch? Journal of Experimental Education, 48, 39–42.

    Drake, V. K., Freed, P., & Hunter, J. M. (1998). Crib sheets or security blankets? Issues in Mental Health Nursing, 19, 291–300.

    Erbe, B. (2007). Reducing test anxiety while increasing learning – The cheat sheet. College Teaching, 55(3), 96-97.

    Funk, S. C., & Dickson, K. L. (2011). Crib card use during tests: Helpful or a crutch? Teaching of Psychology, 38, 114-117.

    Gharib, A., Phillips, W., & Mathew, N. (2012). Cheat sheet or open-book? A comparison of the effects of exam types on performance, retention, and anxiety. Psychology Research, 2(8), 469-478.

    Hindman, C. D. (1980). Crib notes in the classroom: Cheaters never win. Teaching of Psychology, 7, 166–168.

    Ludorf, M. R. (1994). Student selected testing: A more sensitive evaluation of learning.  Paper presented to the American Psychological Society Institute on The Teaching of Psychology, Washington, DC.

    Trigwell, K. (1987). The crib card examination system. Assessment and Evaluation in Higher Education, 12, 56–65.

    Visco, D., Swaminathan, S., Zagumny, L., & Anthony, H. (2007). AC 2007-621: Interpreting student-constructed study guides. ASEE Annual Meeting and Exposition Proceedings, Honolulu, HI.

    Whitley, B. E., Jr. (1996). Does “cheating” help? The effect of using authorized crib notes during examinations. College Student Journal, 30, 489–493.

     

    Author Notes

    Mark Ludorf is a cognitive psychologist who joined the faculty at Stephen F. Austin State University (SFA) in the fall of 1990 and is currently a full professor of psychology. He has served in university-wide administrative positions at two universities (SFA and Oakland University in Rochester, MI). He was also an American Council on Education (ACE) Fellow in Academic Administration. Ludorf has been active in the use of technology in higher education; he has taught online since 2001 and has developed several online courses. His other academic interests are in leadership and study abroad, and he currently serves as Senior Editor of the Journal of Leadership Studies and has offered numerous study abroad programs in Italy. At SFA, Ludorf has been recognized as the Alumni Distinguished Professor and was awarded the SFA Foundation Faculty Achievement Award.

    Sarah Clark was an undergraduate teaching assistant in statistics at Stephen F. Austin State University, where she completed her bachelor’s degree in psychology. She was the 2013 recipient of the Jeff and Jackie Badders Award, given to the top graduating senior psychology major.
