Society for the Teaching of Psychology: Division 2 of the American Psychological Association

E-xcellence in Teaching
Editor: Annie S. Ditta

  • 02 Nov 2017 5:20 PM | Anonymous

    Do These Things Even Work? A Call for Research on Study Guides

J. Hackathorn, A. W. Joyce, and M. J. Bordieri
Murray State University

If one had to predict the most common question asked by students each semester, it would be: “What will be on the test?” Moreover, this question is frequently and predictably followed by requests for a study guide. As good, well-meaning instructors, many of us sigh (maybe cry a little) but ultimately provide them. In fact, many of us even include them in course materials before the request is made, just to avoid the conversation. Given how common these requests are, it is surprising that there is so little research on the effectiveness of study guides. A quick Google Scholar search using key terms such as study guides and exam guides yields only a handful of results, many of which are dated and focused on creating study guides (as opposed to assessing them). Thus, we suddenly found ourselves asking: How much do we really know about study guides? Do these things even work?

Arguably, any study strategy or aid should help students perform better on exams than using nothing at all. However, some of the resources that students prefer may actually hinder their performance rather than help it. For example, in a recent analysis of learning aid use and exam performance, Gurung (2004) found that students rate textbooks’ bolded key terms as the most helpful study aid, yet the perceived helpfulness of this resource relates negatively to exam performance. Conversely, what they rate as least helpful (i.e., active review practices) has the strongest evidence of improving exam performance (e.g., Dickson, Miller, & Devoley, 2005). In another example, a comparison of exam review styles found that, although students do not prefer traditional (i.e., student-directed question-and-answer format) exam reviews, their exam performance is highest when they use this style compared to other styles (Hackathorn, Cornell, Garczynski, Solomon, Blankmeyer, & Tennial, 2012). Ultimately, this suggests a mismatch between what we (perhaps both the learner and the instructor) prefer and what actually improves knowledge, understanding, and exam performance.

To increase our understanding of study guides, the authors of this essay, along with other faculty members, recently conducted two studies (Cushen et al., under review), using the General Psychology population at Murray State University (MSU). In the first study, we conducted a small experiment using all of the sections of General Psychology offered during a single semester at MSU. Using counterbalancing and random assignment of sections, we compared exam performance following an instructor-provided concept-list study guide to performance following student-generated study guides. Then, at the end of the semester, we queried students’ preferences and gave another brief quiz over material from the first two exams. Our results indicate that, despite benefiting the most from creating their own study guides, students strongly prefer the instructor-provided guides.

In a second study, after realizing that we had been making assumptions by limiting study guides to concept lists and student-generated guides, we simply asked our students to identify the types of study guides they prefer. Replicating past findings that students tend to prefer the least helpful study tools, we found that students prefer instructor-provided study guides that include a list of concepts, followed by definitions and examples of application. In other words, students prefer that the instructor create what could ostensibly be referred to as “their notes.” They prefer excerpts from the textbook and simple concept lists the least, but prefer an instructor-provided concept list more than nothing at all or creating their own study guide. In examining these preferences, we realized that it is probably not happenstance that the least preferred study guide styles are also the styles that require the most effort from the student to actively summarize, organize, or synthesize course concepts.

Obviously, the next question is: What do we do with this information? We do not believe that we should “throw the baby out with the bathwater.” In the fall of 2016, the primary author of this essay attempted to explain to one class why she would no longer provide study guides, and the reaction bordered on mutiny. Perhaps that is hyperbole. Still, the students did not appear to believe that the lack of instructor-provided study guides was in their best interest. In hindsight, the instructor may have been too quick to implement this change. Much more information is needed in this regard.

In our initial experiment, we tested the efficacy of a concept-list style study guide. Essentially, we used the style of study guide that answers the ever-present questions: “What is on the test? What should I study?” Using this style correctly means that students must then find definitions; build mental models, links, and organization; and create their own application examples. However, it is unclear how many actually do that. It is possible that, instead, students simply look at the list, recognize the terms, and conclude that they have studied enough to be prepared for the exam. Future research is needed to see exactly what students do with those study guides.

In that same vein, beyond not knowing how to properly use a study guide, it is also possible that students do not know how to create one. Although it is important for students to know how to facilitate their own learning, many students have ineffective study strategies (Bjork, Dunlosky, & Kornell, 2013). Our participants were students in a freshman-level course, the vast majority being first-semester freshmen. Creating a study guide, especially an effective one, is hard work and requires a clear understanding of what type of information is important. Freshmen, specifically, may struggle with this skill. For example, in a recent General Psychology homework assignment, students were asked to create a mnemonic device related to neurotransmitters. The instructor was quite surprised when many of the students created an acronym depicting an arbitrary list of neurotransmitter names. Sadly, no exam question asked them to provide a random list of neurotransmitters. Suffice it to say, freshmen may not have a strong understanding of what it takes to succeed on rigorous college-level exams.

Unfortunately, many new college students will find, perhaps too late, that their high school strategy of simply memorizing definitions will not be as successful in the college classroom. Thus, one of the first steps toward student success may involve taking time to teach them how to create good study aids. In our experiment, we do not report the types of study guides that students create for themselves. We can only imagine (and have discussed at great length) that they are probably terrible. A cursory review of a subsample of our students confirms that the vast majority fail to consistently generate examples of course content and instead provide a simple list of terms and definitions, or a chapter outline, in their self-created guides. However, regardless of the quality of the study guides, students’ exam grades are still higher when they create their own. As a result, even if the instructor gives students a foundation with the concept-list style, teaching them how to improve those study guides should prove fruitful. This assumes, of course, that we can convince students to try a new, potentially more intensive and effortful study technique, and to actually use it rather than backsliding into old habits as the exam date looms closer (Dembo & Seli, 2004).

Unfortunately, it is still unclear which types of study guides are the most beneficial. Outside of the extensive work of Karen Wood (Wood, 1989, 1992), who outlines various types of study guides and their individual purposes, there is a dearth of information regarding which types of study guides are most effective and in which situations. The type of study guide one might use in an introductory course, where students are building a foundation for future classes, is probably very different from the guide one might use in an applied research methods course, in which students are practicing a skill. Thus, much more information is needed regarding not only the general efficacy of study guides, but also their relevance and applicability across different courses and learners.

Finally, as tends to be the case in many of our classes, students sometimes appear to dislike assignments that truly challenge them or require effort. It is probably not a coincidence that students prefer the study aids that require less effort from them. And, before we all get migraines from rolling our eyes, it is important to consider that students may not realize that this relationship exists. As an example, in a recent end-of-semester evaluation, a student requested the following: “I do not want to be spoon-fed the information, but it would be nice if we could be provided with a list of concepts, in order from the most important to the least important, to help us study for exams.” Clearly, this student fails to see the connection between spoon-feeding and the study guide they requested. Moreover, we doubt this student is alone in this desire. As such, asking students to create their own study guides may result in backlash. Importantly, some, if not all, of this backlash can be reduced with transparency, communication, and rapport. However, instructors will need to assess the risk/benefit ratio of implementing such a change.

The most surprising aspect of our research is that very few of us question our own use of study guides, even though, frankly, we tire of creating them. Many of us create these study guides because students ask for them, or to avoid potential mutiny. Yet, because study guides have been around for so long and are so ubiquitous in higher education, very few of us ask whether they work. It is important to note that this does not make us (or you) bad instructors. Care and effort for students in any form should never be disregarded. In fact, we suspect that there are myriad instructors who have found ways to improve the effectiveness of study guides but have yet to publish them. Thus, this essay is a mere call to action. Help us help them; help us help ourselves.

     

    References

    Bjork, R. A., Dunlosky, J., & Kornell, N. (2013). Self-regulated learning: Beliefs, techniques, and illusions. Annual Review of Psychology, 64, 417-444.

Cushen, P., Vázquez Brown, M., Hackathorn, J., Rife, S. C., Joyce, A. W., Smith, E., … Daniels, J. (under review). “What’s on the test?”: The impact of giving students a concept-list study guide.

Dembo, M. H., & Seli, H. P. (2004). Students’ resistance to change in learning strategies courses. Journal of Developmental Education, 27(3), 2-11.

    Dickson, K. L., Miller, M. D., & Devoley, M. S. (2005). Effect of textbook study guides on student performance in introductory psychology. Teaching of Psychology, 32(1), 34-39.

Gurung, R. A. (2004). Pedagogical aids: Learning enhancers or dangerous detours? Teaching of Psychology, 31(3), 164-166.

    Hackathorn, J., Cornell, K., Garczynski, A., Solomon, E., Blankmeyer, K., & Tennial, R. (2012). Examining exam reviews: A comparison of exam scores and attitudes. Journal of the Scholarship of Teaching and Learning, 12(3), 78-87.

Wood, K. D. (1989). The study guide: A strategy review. Paper presented at the annual meeting of the College Reading Association, Philadelphia, PA.

Wood, K. D. (1992). Guiding readers through text: A review of study guides. Newark, DE: International Reading Association.


    Biographical Sketch

Dr. Jana Hackathorn, Dr. Amanda W. Joyce, and Dr. Michael J. Bordieri are all junior faculty at Murray State University in Murray, KY. Between them, they study everything from close relationships to inhibition in children, from sex to mindfulness, and of course from pedagogy to teaching effectiveness. Last year, the entire junior faculty in Psychology at Murray State (eight in total) pooled their efforts to conduct a study examining a topic about which they had all complained: student demands for study guides. As a result of the study, they bonded, resulting in much happier happy hours and a very functional, albeit odd, departmental atmosphere.

  • 04 Oct 2017 7:19 PM | Anonymous

    Team-Based Learning: A Tool for Your Pedagogical Toolbox

     Krisztina V. Jakobsen
    James Madison University

Teachers whose styles match the pedagogical methods they use create a more authentic and effective teaching and learning experience. The literature offers a variety of strategies for teachers who would like to move away from a purely lecture format. One of those, Team-Based Learning (TBL), is a method I have been using for several years to encourage students to be actively involved in their learning. Similar to the ideals associated with a flipped classroom (Jakobsen & Knetemann, in press), students learn primary course content outside of the classroom and work with the material in permanent teams during class (Michaelsen, Knight, & Fink, 2004). Below, I outline the core components of TBL and share a few studies that my students and I have conducted to examine its impact on student learning.

     The TBL Process

    Readiness Assurance Process

The first step in the TBL process involves ensuring that students understand course material; this process, the Readiness Assurance Process, includes preparation outside of class, quizzes in class, and a short lecture. Students prepare for class by reading the textbook, watching videos, and/or answering guided questions. When students come to class, they take a multiple-choice quiz individually, which assesses students’ understanding of the course material at various levels of Bloom’s taxonomy. The individual quiz holds students accountable for completing their out-of-class preparation. Next, students work in their teams to complete the same multiple-choice quiz again. Students receive immediate feedback on their team quiz using scratch-off IF-AT forms. After the team quiz, students may appeal any questions they missed, which requires them to revisit course materials and provides an opportunity to make a compelling case for alternate answers. Finally, teams submit any questions they still have about the material, and the instructor gives a short “muddiest points” lecture. The Readiness Assurance Process takes 50-75 minutes to complete.

    Application Exercises

    After the completion of the Readiness Assurance Process, students should have the necessary knowledge to complete application exercises, which usually take 2-4 class periods. Depending on the complexity of the questions, students may complete 2-5 application exercise questions during a class period. The application exercises have a deliberate structure that allows for teams to focus on the relevant course material and facilitates team and class discussions. The keys to developing successful application exercises involve having all teams work on the same questions, requiring teams to make a simple choice, and having teams report their answer choices simultaneously. To demonstrate the importance of the structure of the application exercises, think about the type and quality of discussions students may have with open-ended questions (Question 1 below) compared to more directed questions (Question 2 below).

    Question 1: This class is structured using Team-Based Learning (TBL), in which you learn the primary course content outside of class and then work in permanent teams during class to get a deeper understanding of the material. Identify at least one way in which each of the theories below helps you understand why the TBL structure is an effective teaching method.

    A.    Operant conditioning
    B.     Piaget’s theory
    C.     Vygotsky’s theory
    D.    Information processing theories

    Question 2: This class is structured using Team-Based Learning (TBL), in which you learn the primary course content outside of class and then work in permanent teams during class to get a deeper understanding of the material. Decide which of the following theories is most prominent in the TBL structure.  Be prepared to support your answer.

    A.    Operant conditioning
    B.     Piaget’s theory
    C.     Vygotsky’s theory
    D.    Information processing theories

While Question 1 asks students to apply what they know about the theories to the structure of TBL, it may not generate much discussion. Question 2 meets the requirements of each of the deliberate components of the application exercises. All teams are presented with the same problem. Teams have to make a choice among options A-D. For this particular question, all of the answer choices are correct, so what generates discussion among teams is the rationale behind their decisions. Finally, because the answer choices are clear, it is easy for teams to report their decisions simultaneously, by holding up cards, for example.

    Does it Work?

Students generally have positive experiences with TBL. They seem to enjoy the structure (e.g., Abdelkhalek, Hussein, Gibbs, & Hamdy, 2010) and report perceiving TBL as an effective teaching method (e.g., Haberyan, 2007). The results are mixed in terms of the impact of TBL on academic outcomes compared to more traditional teaching methods (e.g., Carmichael, 2009; Jakobsen, McIlreavy, & Marrs, 2014), and little work has examined how TBL affects retention (e.g., Emke, Butler, & Larsen, 2016). Over the years, I have worked with student research assistants to collect data in lab-based and classroom-based studies examining the effectiveness of TBL in promoting recognition memory and retention compared to other pedagogical methods. Here, I present the results of two of those studies.

    In a lab-based study, time-slots were randomly assigned to each of our conditions, as follows:

    • Team-Based Learning: Participants read an article upon arrival to the session, then completed the Readiness Assurance Process and application exercises.
    • Lecture: Participants received a lecture based on the content of the article and took notes during the lecture.
    • Reading: Participants read the article and took notes as they read.
    • Control: Participants completed an anagram.

One week later, all participants took a 10-item multiple-choice quiz to measure their retention of material from the week before. The results revealed that participants in the TBL and Lecture sessions did not differ in their scores (p = .141), but participants in the TBL session outperformed participants in the Reading (p = .018) and Control sessions (p < .001). The results of this study suggest that TBL and lecture are both effective ways of teaching, particularly in short-term sessions (e.g., workshops).

In a class-based study, two classes were randomly assigned to be taught using TBL or Lecture. During the semester, students in the TBL class completed the Readiness Assurance Process and application exercises, while students in the Lecture class received lectures with active learning components. Students’ understanding of course material was assessed at three time points: (1) a pre-test at the beginning of the semester, (2) the final at the end of the semester, and (3) a post-test three months after the completion of the course. Students completed 28 multiple-choice questions at each of the three time points. We based our analyses on students who contributed data at all three time points (N = 34). Students in the TBL and Lecture classes did not differ in their pre-test scores (p = .052) or their post-test scores (p = .052). Students in the TBL class performed better than students in the Lecture class on the final (p = .021), suggesting that TBL may enhance short-term retention of course material. The results of this class study are consistent with those of Emke et al. (2016), in which TBL led to better short-term, but not long-term, retention of course material.

    Implementation and Conclusions

Implementing TBL as outlined above requires some upfront investment in organizing and creating preparatory materials, quizzes, and application exercises. The good news is that components of TBL can be implemented in nearly any class with relative ease. For example, it is easy to add a team quiz to already-existing individual quizzes, and once students have the content knowledge (e.g., through lectures), application exercises can be added a little at a time.

While there is likely no one pedagogical technique that will work for every instructor, data from the TBL literature and my own research suggest that TBL is at least as effective as other strategies. These results should encourage teachers to work in areas in which they are most comfortable and to cultivate the skills they feel are important, whether those are central to the course objectives or merely desirable.

    Author note

Portions of this essay were presented at STP’s Annual Conference on Teaching, Decatur, Georgia, October 2016. This project was supported by the Society for the Teaching of Psychology’s Scholarship of Teaching and Learning Grant and the Alvin V., Jr. and Nancy C. Baird Professorship to KVJ.

    Resources

The following websites offer wonderful resources for learning more about and getting started with TBL: Learntbl.ca and www.teambasedlearning.org/

    References

    Abdelkhalek, N., Hussein, A., Gibbs, T., & Hamdy, H. (2010). Using team-based learning to prepare medical students for future problem-based learning. Medical Teacher, 32, 123–129. doi: 10.3109/01421590903548539

    Carmichael, J. (2009). Team-based learning enhances performance in introductory biology. Journal of College Science Teaching, 38, 54–61.

    Emke, A. R., Butler, A. C., & Larsen, D. P. (2016). Effects of Team-Based Learning on short-term and long-term retention of factual knowledge. Medical Teacher, 38, 306-311. doi: 10.3109/0142159X.2015.1034663

    Haberyan, A. (2007). Team-based learning in an industrial/organizational psychology course. North American Journal of Psychology, 9, 143–152.

Jakobsen, K. V., & Knetemann (in press). Putting structure to flipped classrooms using Team-Based Learning. International Journal of Teaching and Learning in Higher Education.

Jakobsen, K. V., McIlreavy, M., & Marrs, S. (2014). Team-based learning: The importance of attendance. Psychology Learning & Teaching, 13(1), 25-31. doi: 10.2304/plat.2014.13.1.25

Michaelsen, L. K., Knight, A. B., & Fink, L. D. (2004). Team-based learning: A transformative use of small groups in college teaching. Sterling, VA: Stylus Publishing.

     

Krisztina V. Jakobsen is an Associate Professor in the Department of Psychology at James Madison University. She teaches developmental psychology classes in the General Education Program and in the Department of Psychology. Her research interests include effective teaching methods and social cognition in infants.


  • 07 Sep 2017 9:15 AM | Anonymous
    Technology Bans and Student Experience in the College Classroom

     Thomas Hutcheon, Ph.D.

    Bard College

     

Personal technologies, including laptops and cell phones, have infiltrated the college classroom.  Instructors must now decide whether to ban the unsupervised use of personal technologies in their courses.  Anecdotal evidence (“students always seem to be looking at their computer screens and not me during class”) and results from recent studies linking the unsupervised use of technology with reductions in academic performance have led to declarations that the time to ban technology use in the classroom is now (Rosenblum, 2017).  However, it is important for individual instructors to critically evaluate and understand the empirical evidence in favor of technology bans when deciding on the approach to take in their classrooms.  Moreover, the impact bans have on students’ experience within a course remains largely unknown.  The purpose of this essay is to review the evidence in favor of a technology ban, to describe recent results suggesting that a ban can harm students’ engagement, and to provide recommendations to aid instructors in developing a technology policy for their classrooms.

Broadly speaking, two primary mechanisms have been proposed to explain the relationship between unsupervised technology use in the classroom and reduced academic performance: misdirection of cognitive resources and superficial encoding of information. First, the presence of personal technology in the classroom gives students a direct line to distracting information via social media, games, and the internet.  Diverting cognitive resources toward online shopping or texting with friends necessarily draws resources away from what is happening in the classroom.  This misdirection of resources means that students do not process the material presented during lecture, and this can harm performance (Fried, 2008; Wood et al., 2012).  Importantly, the use of technology may lead to the misdirection of resources not only for the student using the technology, but also for students sitting nearby, and even the instructor (Aguilar-Roca, Williams, & O’Dowd, 2012).  Second, even when students are prevented from accessing the internet or other distractions, the use of laptops leads to relatively superficial encoding of lecture information.  Students randomly assigned to take lecture notes using a laptop perform worse on follow-up memory tests of lecture material compared to students randomly assigned to take notes using paper and pencil (Hembrooke & Gay, 2003; Mueller & Oppenheimer, 2014).  This finding has been explained by differences in note-taking strategies.  Specifically, students using a laptop appear to adopt a verbatim strategy in which they type everything that is said during the lecture.  In contrast, students using paper and pencil reframe the lecture information in their own words as they write.  This reframing requires deeper encoding of the information and leads to better retention of the material (Mueller & Oppenheimer, 2014).  Thus, even when students successfully resist temptation and devote their resources to taking notes, the use of laptops is still harmful to the retention of material presented during a lecture.

However, there are three things to keep in mind when using the findings reviewed above as the basis for your own classroom policy.

First, broadly speaking, studies cited as evidence for the implementation of technology bans use either an experimental or a correlational approach.  In the typical experimental approach, participants are randomly assigned to use a laptop or paper and pencil to take notes while listening to a lecture.  Learning is frequently assessed by a quiz on the material presented at the end of the lecture (Wood et al., 2012).  Although students using laptops tend to perform worse than those who do not, this procedure differs from learning information over the course of a semester, during which students can enact compensatory strategies while studying, such as reading the textbook or asking a fellow student, to make up for distracted moments in class.  The correlational approach collects various measures of student performance, such as GPA and exam grades, and correlates these with students’ reported cell phone and laptop usage.  The negative correlation between GPA and frequency of technology use is commonly interpreted as technology usage causing a decrease in performance.  However, given the nature of correlational research, it could equally be interpreted as weaker students tending to bring their laptops into the classroom (Fried, 2008).  In other words, because a causal relationship cannot be established between laptop use and class performance, removing access to laptops might not lead to changes in performance.
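To make this caveat concrete, consider a small toy simulation. The sketch below (in Python 3.10+, for statistics.correlation) is purely hypothetical and uses no data from the studies cited here; in it, a third variable, prior preparation, drives both laptop use and grades, producing a negative laptop-grades correlation even though laptops have no causal effect in the model.

```python
import random
import statistics

random.seed(1)

# Hypothetical third variable: each student's prior preparation.
prep = [random.gauss(0, 1) for _ in range(500)]

# In this toy model, less-prepared students use laptops more often,
# and grades depend only on preparation plus noise; laptop use has
# no causal effect on grades whatsoever.
laptop_hours = [max(0.0, 5 - 2 * p + random.gauss(0, 1)) for p in prep]
grades = [75 + 5 * p + random.gauss(0, 3) for p in prep]

# Yet laptop use and grades come out negatively correlated.
r = statistics.correlation(laptop_hours, grades)
print(f"Correlation between laptop use and grades: r = {r:.2f}")
```

In this toy world, banning laptops would change nothing about grades, which is precisely the possibility that correlational studies cannot rule out.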

Second, the real-world impact of technology usage on student performance needs to be considered.  What does a statistically significant reduction in performance for students using laptops mean for an individual student sitting in one of our classes?  One illustrative example comes from a rigorous, large-scale study conducted at the United States Military Academy at West Point.  For an entire semester, first-year students enrolled in Principles of Economics were randomly assigned to take notes on a laptop, on a tablet, or using paper and pencil.  The results from this sample of over 700 students yielded a statistically significant impact on performance.  Specifically, students in the laptop and tablet conditions performed worse on the final exam compared to students in the paper-and-pencil condition.  Although statistically significant, the effect amounted to a decrease of 1.7% on the final exam for students in the laptop or tablet conditions (Carter, Greenberg, & Walker, 2016).  Thus, despite the presumed chronic misdirection of resources and superficial encoding of information when students use technology, the real-world benefit of banning it appears small.  While any improvement in performance is welcome, there are many simple techniques that instructors can implement over the course of the semester that improve exam performance to a greater extent, including retrieval practice at the end of a lecture (e.g., Lyle & Crawford, 2011).

Third, to date, little research has assessed the impact of a technology ban on student experience within the class.  However, recent research conducted in my lab, which was presented at the Society for the Teaching of Psychology Annual Conference on Teaching (Hutcheon, Richard, & Lian, 2016), indicates that implementing a technology ban reduces student engagement.  Specifically, using data from sixty-nine undergraduate students across four sections of Introduction to Psychology taught by the same instructor, students randomly assigned to technology-ban sections reported lower levels of engagement in the course than students randomly assigned to technology-permitted sections, as assessed by the Student Course Engagement Questionnaire (SCEQ; Handelsman, Briggs, Sullivan, & Towler, 2005).  Interestingly, the students surveyed in our sample reported relatively low frequency of cell phone use during a typical class (mean = 2.38), and the vast majority reported a preference for taking notes using paper and pencil (N = 61) rather than laptops (N = 8).  In fact, looking at the data for the 61 students who preferred paper and pencil, we still observed a significant reduction in engagement as a function of the laptop ban.  In other words, the technology ban affected the engagement of students who would not even have used technology in the classroom.  These findings suggest that students are sensitive to the structure and rules of the classroom environment, and that rules perceived as limiting their choices may affect how much students engage with the material and the instructor.

In contrast to the results of Carter et al. (2016), we observed a marginally significant reduction in end-of-year grades for students in the technology-ban condition compared to the technology-permitted condition.  This suggests that the impact of a technology ban on students’ performance may not be the same across classroom environments.  Specifically, students enrolled in a traditional, small liberal arts environment (Bard College, compared to West Point) may be more affected by the implementation of such bans.

     Recommendations

    Consider the make-up of your class.  If you are teaching a small class in which students might not spontaneously use technology, the implementation of a technology ban could negatively impact student experience and performance in the class.  In contrast, if you are teaching a large lecture class in which students might feel less engaged to begin with, the ban might help their experience and performance.

Minimize the distraction of others.  If you decide not to implement a ban, you should think about ways to prevent students who choose to use laptops from distracting those who choose not to.  Methods to alleviate this concern include dedicating specific sections of the classroom to laptop and technology users (Aguilar-Roca et al., 2012).

Provide a rationale for your decision.  If you decide to implement a technology ban, providing students with a clear explanation of why the ban is in place, supported by relevant research, is one potential method for reducing the ban’s impact on student engagement.

In conclusion, there is little doubt that, under certain circumstances, unsupervised technology usage can negatively impact academic performance.  However, full consideration of the type of course and the composition of students within it is advised before implementing a blanket technology ban.

     

    References

    Aguilar-Roca, N. M., Williams, A. E., & O’Dowd, D. K. (2012). The impact of laptop-free zones on student performance and attitudes in large lectures. Computers & Education, 59, 1300-1308.

Carter, S. P., Greenberg, K., & Walker, M. (2016). The impact of computer usage on academic performance: Evidence from a randomized trial at the United States Military Academy (SEII Discussion Paper #2016.02).

    Fried, C. B. (2008). In class laptop use and its effects on student learning. Computers & Education, 50, 906-914.

    Handelsman, M. M., Briggs, W. L., Sullivan, N., & Towler, A. (2005). A measure of college student course engagement. The Journal of Educational Research, 98, 184-191.

    Hembrooke, H., & Gay, G. (2003). The laptop and the lecture: The effects of multitasking in learning environments. Journal of Computing in Higher Education, 15, 46-64.

    Hutcheon, T. G., Richard, A., & Lian, A. (2016, October). The impact of a technology ban on student’s perceptions and performance in introduction to psychology. Poster presented at the Society for the Teaching of Psychology 15th Annual Conference on Teaching, Decatur, GA.

    Lyle, K. B., & Crawford, N. A. (2011). Retrieving essential material at the end of lectures improves performance on statistics exams. Teaching of Psychology, 38, 94-97.

    Mueller, P. A., & Oppenheimer, D. M. (2014). The pen is mightier than the keyboard:  Advantages of longhand over laptop note taking. Psychological Science, 25, 1159-1168.

    Rosenblum, D. (2017, January 2). Leave your laptops at the door to my classroom. The New York Times. Retrieved from http://www.nytimes.com/2017/01/02/opinion/leave-your-laptops-at-the-door-to-my-classroom.html?_r=0

    Wood, E., Zivcakova L., Gentile, P., Archer, K., De Pasquale, D., & Nosko, A. (2012). Examining the impact of off-task multi-tasking with technology on real-time classroom learning. Computers & Education, 58, 365-374.

     

    Tom Hutcheon is a Visiting Assistant Professor in the psychology program at Bard College. Tom earned his B.A. in psychology from Bates College and his M.S. and Ph.D. in Cognition and Brain Science from Georgia Tech.  Tom received the Early Career Psychologist Poster award at the 2016 Society for the Teaching of Psychology (STP) Annual Conference on Teaching as well as a 2017 Early Career Psychologist Travel Grant sponsored by STP.  Tom’s research interests include cognitive control, cognitive aging, and effective teaching. Tom can be reached at thutcheo@bard.edu.

     

  • 15 Aug 2017 11:31 AM | Anonymous

    That’s What She Said: Educating Students about Plagiarism

     

    Elizabeth A. Sheehan

    Georgia State University

     

    Dealing with plagiarism is one of the more unpleasant aspects of our job as instructors. There is the sinking feeling you get when you suspect plagiarism, the moment that your Google search returns the exact passage from your student’s paper, the uncomfortable conversation with the student, the documentation to your department, and the potential hearing with the honor board. I would venture to say most of us have either dealt with these ourselves or at least supported another colleague through the process. These cases range from the cringe-worthy (e.g. copying directly from an instructor’s own published article, turning in a paper written by another student in a past semester) to the more minor infringements (e.g. unintentionally omitting quotation marks around a direct quote).

At the teaching conferences I have attended over the last few years, there has been increasing emphasis on learning outcome assessment and on the APA’s learning outcomes for the undergraduate psychology major (APA, 2007). One of those outcomes is for students to “demonstrate effective writing skills in various formats” (p. 18). There also never seems to be a lack of presentations on how to incorporate writing assignments into your courses. Increasing writing assignments in your courses might increase the chance that you will encounter plagiarism; however, we might be able to prevent some of these cases with a greater focus on educating our students about plagiarism. Moreover, educating our students about plagiarism helps us address other APA learning outcomes concerning ethical behavior.

     

WHY DO STUDENTS PLAGIARIZE?

    To decrease plagiarism, a good place to start would be to try to understand WHY students plagiarize. At the last meeting of the National Institute on the Teaching of Psychology, I led a Participant Idea Exchange (PIE) on educating students about plagiarism (Sheehan, 2013). These PIE sessions are roundtable discussions on a topic. My group generated the following list of potential reasons students plagiarize:

• difficulty comprehending a reading;
• rushing through an assignment;
• convenience;
• cultural misunderstanding;
• poor understanding of the definition of plagiarism;
• not knowing how to integrate/synthesize/paraphrase;
• plagiarism is all around us in society; and
• lack of confidence in their ability to write.

    You may be familiar with some of these, especially time constraints, difficulty with reading comprehension, and the inability to paraphrase. The idea of culture stood out to me from the PIE discussion. First, some cases of plagiarism could be due to cultural misunderstanding. Stowers and Hummel (2011) provide some examples of how students from an Eastern culture may view the use of another’s work. For instance, they assert some Asian students may see it as a sign of disrespect to paraphrase or change someone else’s words.

A second example of culture is how plagiarism takes place all around us in society. We regularly use the copy and paste functions on our computers in many different settings. People re-post others’ writing on their Facebook pages, re-blog someone else’s blog entry, forward YouTube videos to friends, etc. Usually these actions can be accomplished with one or two clicks. While these are not examples of academic writing, they do set precedents that we have to overcome in our courses.

     

    EDUCATING STUDENTS ABOUT PLAGIARISM

We had a discussion about plagiarism in my department, and our faculty reported a number of problems in pursuing cases of plagiarism, including some cases not being reported at all, faculty handling cases on their own, cases meeting our discipline’s definition of plagiarism being overturned by the college, and faculty not knowing the university reporting procedures. It was clear we needed consistency and clarity. We also decided we wanted to focus less on policing and more on educating our students to prevent future plagiarism. You could probably guess that this led to a subcommittee (and to the idea for my PIE). Our subcommittee created a standard definition of plagiarism that went into all syllabi, a writing workshop on plagiarism, a quiz, a contract for students, a flow chart of how to report plagiarism, and class activities to teach the identification of proper paraphrasing and citations. These materials (Lamoreaux, Darnell, Sheehan, & Tusher, 2012) are publicly available on the Society for the Teaching of Psychology website (http://teachpsych.org/Resources/Documents/otrp/resources/plagiarism/Educating%20Students%20about%20Plagiarism.pdf).

                At my PIE, I asked other faculty how they educated their students about plagiarism. Below are the techniques they listed:

    • a quiz on plagiarism;
    • a quiz on student handbook;
    • list policies in the syllabus on paraphrasing and/or a link to school policy;
    • discussion on the first day of class;
    • starting early in introductory classes or freshman year before students are allowed to register for classes; and
    • using technology (e.g. Turnitin or SafeAssign).

    One quiz recommended by multiple instructors is available through Indiana University, and can be found at https://www.indiana.edu/~istd/. At this site, students can complete a tutorial on plagiarism, see examples, take a quiz, and get a certificate of completion. My department uses this site as a part of our plagiarism training for students.

A lot of us put plagiarism policies in the syllabus and reference them on the first day of class; however, this alone is not enough. First, we cannot always rely on students to read the policy or to follow a link to the university policy. Second, we cannot assume they will understand it. Gullifer and Tyson (2010) present data demonstrating that students have a great deal of confusion over what constitutes plagiarism despite online access to a policy. Students in their study also reported wanting education on plagiarism. These findings are corroborated by data from Holt (2012).

Holt provided basic information about plagiarism to a control group of students and training in paraphrasing to an intervention group. The control group received a definition of plagiarism in the syllabus, a link to the university policy, one example of proper paraphrasing, and a 10-minute in-class demonstration of improper paraphrasing. The intervention group received training in paraphrasing and proper citation, along with in-class assignments. As you might expect, the group with additional training was able to identify plagiarism more accurately than the group without it. This study also identified reasons for unintentional plagiarism. For example, students thought that quotations were not needed, or that material did not have to be paraphrased, if a citation was provided.

Something as simple as a weekly paraphrasing activity can help. For six weeks of the semester, Barry (2006) gave students a paragraph from a famous developmental theorist. Students had to paraphrase the passage and provide a proper citation. After completing the activity, students’ definitions of plagiarism were more complex than those offered at the onset of the study. Not only did they define plagiarism as “taking someone else’s idea,” they added “not giving credit” to their definition. This is not necessarily evidence that the activity would reduce the number of plagiarism cases, but it is evidence of students gaining a better understanding of plagiarism.

                You could also incorporate plagiarism as a theme in your course. Estow, Lawrence, and Adams (2011) designed a research methods class where the assignments and projects in the class related to the topic of plagiarism. For example, their students designed a survey about plagiarism, collected data, and wrote a research report on their findings in one set of assignments. The researchers compared the progress of this class to one with the same assignments but a different theme. The students in the plagiarism-themed course were able to better identify plagiarism and generate more strategies for avoiding plagiarism.

Plagiarism is scary, for both professionals and students. The consequences can be steep: failed assignments, expulsion from school, revoked degrees, and even ended careers. Students often tell me how terrified they are of unintentional plagiarism; Gullifer and Tyson’s participants also expressed fear of unintentional plagiarism and its consequences. Implementing some of these fairly simple ideas in our courses will enhance our students’ understanding of plagiarism. A better-informed student should be less fearful, more confident in their ability to write, and less likely to plagiarize.

     

    References

     

    American Psychological Association. (2007). APA guidelines for the undergraduate psychology major. Retrieved from http://www.apa.org/ed/precollege/about/psymajor-guidelines.pdf

    Barry, E. (2006). Can paraphrasing practice help students define plagiarism? College Student Journal, 40(2), 377-384.

    Estow, S., Lawrence, E. K., & Adams, K.A. (2011). Practice makes perfect: Improving students’ skills in understanding and avoiding plagiarism with a themed methods course. Teaching of Psychology, 38(4), 255-258.

    Gullifer, J., & Tyson, G.A. (2010). Exploring university students’ perceptions of plagiarism: A focus group study. Studies in Higher Education, 35(4), 463-481.

    Holt, E. (2012). Education improves plagiarism detection by biology undergraduates. BioScience, 62(6), 585-592.

    Lamoreaux, M., Darnell, K., Sheehan, E., & Tusher, C. (2012). Educating students about plagiarism. Retrieved from  Office of Teaching Resources in Psychology for Society for the Teaching of Psychology website: http://teachpsych.org/Resources/Documents/otrp/resources/plagiarism/Educating Students about Plagiarism.pdf

Sheehan, E. A. (2013, January). Kick plagiarism to the curb: How to educate students before they head down that road. Participant Idea Exchange conducted at the National Institute on the Teaching of Psychology, St. Pete Beach, FL.

    Stowers, R. H., & Hummel, J. Y. (2011) The use of technology to combat plagiarism in business communication classes. Business Communication Quarterly, 74(2), 164-169.


    Elizabeth Sheehan is a Lecturer at Georgia State University. She earned her PhD in Psychology from Emory University in Cognition and Development. She currently teaches Intro Psychology, an integrated version of Research Methods and Statistics, and Forensic Psychology. She has presented her work on designing study abroad programs, teaching with technology, and incorporating writing assignments into courses at teaching conferences, such as the Southeastern Conference on Teaching of Psychology and the Developmental Science Teaching Institute for the Society for Research in Child Development.


  • 01 Aug 2017 8:36 AM | Anonymous

    Supporting Students Using Balanced In-Class Small Groups

     

    Hung-Tao Michael Chen

    Eastern Kentucky University

     

The use of in-class small groups has been shown to improve students’ learning experience (Johnson & Johnson, 2002). Although many studies have demonstrated this effect, few have examined how the specific composition of group members could support students who are at risk of dropping out of college. This essay describes a pilot study that used the College Persistence Questionnaire to group students (Davidson, Beck, & Milligan, 2009). Preliminary results were inconclusive, though they suggest that high-performing students might benefit more from the small groups than low-performing students.

     

    Creating Small Groups in the Classroom

Student persistence has been one of the greatest challenges in higher education (Seidman, 2005; Tinto, 2006, 2010). While many researchers have identified students who are at risk of dropping out and proposed intervention strategies, few have examined the effectiveness of balanced in-class small groups in promoting peer networking and support. Conventionally, most instructors who use small groups in the classroom form the groups by random selection or allow students to form their own groups. The author of this essay proposes, instead, to form small groups by first identifying students who are at high risk of dropping out of college and then grouping them with students who are not at risk. These “balanced” small groups should provide students with greater peer support in the classroom.

We have all encountered students who are underperforming in the classroom and are at risk of dropping out. Personal, cultural, economic, and social factors all affect a student’s ability to persist in college (Tinto, 2006). Strategies such as building learning communities and cohort systems have been implemented by many universities to improve student retention (Tinto, 2010). The problem with many of these retention strategies is that they generally require institutional support and substantial financial backing to ensure success and longevity. Is there a strategy that an instructor could easily implement in the classroom, one that does not require major course redesign or financial support?

One strategy that requires only a small investment from the instructor is the use of balanced small groups in the classroom. Using small groups in the classroom is not a new idea, and it has proven to be an effective way of promoting learning (Johnson & Johnson, 2002, 2015). Past research has also shown that peer support increases a student’s college persistence (Eckles & Stradley, 2012; Skahill, 2002). However, little research has addressed the use of small groups to support students who are at risk of dropping out of college. When students are randomly grouped or form groups of their own, there will inevitably be a few groups composed entirely of students who are at high risk of dropping out. The idea behind balanced small groups is simple: students who are at high and low risk of dropping out should be evenly distributed across all groups. If the cognitive and social mechanisms behind the effectiveness of small groups hold true, then students at lower risk of dropping out should be able to support and anchor students at higher risk. This idea is based on social interdependence theory, which holds that people placed in cooperative groups with a positive environment will help each other achieve a common goal (Johnson & Johnson, 2015).

     

    Implementing and Evaluating the Idea

The first step in creating balanced small groups is to identify and classify students who are at high, moderate, and low risk of dropping out. The author of this essay used a modified version of the College Persistence Questionnaire (CPQ) to gauge students’ likelihood of persisting in college at the beginning of the semester (Davidson, Beck, & Milligan, 2009). The original CPQ by Davidson and colleagues was modified to fit the specific characteristics of the author’s home institution. The modified questionnaire was built in Qualtrics and distributed to the students at the beginning of the semester. It should be noted that the author adopted a “flipped classroom” teaching model, in which at least half of the class period involved small-group problem solving (Lage, Platt, & Treglia, 2000). The students worked together to solve short-answer questions and multiple-choice quizzes. Each group had to turn in one copy of the short-answer worksheet and one copy of the multiple-choice quiz at the end of every class period. The in-person class met twice a week for 75 minutes per session. The first 30 minutes of each class took the form of a lecture with interactive clicker questions. The remaining 45 minutes were used to solve an in-class worksheet and a multiple-choice quiz question in groups of four. Students were allowed to use their notes while solving the worksheet, but not while completing the multiple-choice quiz during the final fifteen minutes of class. Four undergraduate teaching assistants who were not enrolled in the class assisted with the small-group problem-solving portion.

After students’ responses to the CPQ had been collected, the author calculated a cumulative score for each student based on the student’s responses to the questionnaire. The students were then divided into four categories: those in the bottom quartile (25th percentile and below), those in the 26th-50th percentile, those in the 51st-75th percentile, and those in the top quartile (76th percentile and above). Students in the top quartile were at very low risk of dropping out; those in the bottom quartile were at high risk. The class had a total of 80 students; half were put into balanced small groups using their CPQ scores, and half were placed into small groups randomly, regardless of their CPQ scores. Each group had four students. The balanced groups contained one student from each of the four CPQ categories; the random groups were created based on student ID number. Students stayed in the same group throughout the semester and were encouraged to collaborate with each other. The author used a variety of bonus points and team-building tasks throughout the semester to help students foster a positive and cooperative learning environment (Johnson & Johnson, 2015).
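For instructors who want to experiment with this approach, the following is a minimal sketch, in Python, of the quartile-based balanced assignment described above. The student IDs and CPQ totals are hypothetical, and the original study’s exact scoring and assignment procedure may have differed; this is an illustration, not the author’s actual code.

```python
import random

random.seed(42)

# Hypothetical cumulative CPQ scores keyed by student ID
# (higher scores = more likely to persist in college).
cpq_scores = {f"S{i:02d}": random.randint(20, 100) for i in range(1, 17)}

# Rank students from lowest to highest CPQ score, then split the
# ranking into four equal-sized risk categories (quartiles).
ranked = sorted(cpq_scores, key=cpq_scores.get)
quarter = len(ranked) // 4
quartiles = [ranked[i * quarter:(i + 1) * quarter] for i in range(4)]

# Shuffle within each quartile so group membership is not tied to exact
# rank, then form balanced groups by drawing one student per quartile.
for q in quartiles:
    random.shuffle(q)
balanced_groups = list(zip(*quartiles))

for number, members in enumerate(balanced_groups, start=1):
    print(f"Group {number}: {', '.join(members)}")
```

A randomly assigned comparison condition, like the one used in the pilot, can be produced by simply shuffling the full roster and slicing it into groups of four.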

This method of balanced small groups was first piloted during the Spring 2015 semester at a large state university. The results were inconclusive: the comparison between the random groups and the balanced groups did not yield a significant difference. The general trend of the means, however, suggested that students who were already at low risk of dropping out benefited more from the balanced small groups than students at high risk. Future studies should compare balanced groups, with students of varying risk levels, against matched groups, in which students of similar risk levels are grouped together. Qualitative and survey data should also be gathered in addition to student performance data. There was also the concern that the balanced-group manipulation appeared to benefit the higher-performing students more than the lower-performing students who were at high risk of dropping out. This was probably a result of a social loafing effect, in which the high-performing students did most of the work. The worksheets and quizzes were graded per group, but they should have been issued and graded individually. Future studies should design assessments so that every student is held equally responsible; this way, any effect of social loafing should be minimized.

     

Author’s note: This essay is based on a study presented at a poster session at the Society for the Teaching of Psychology’s 15th Annual Conference on Teaching, Decatur, GA, October 2016.


    References

    Davidson, W. B., Beck, H. P., & Milligan, M. (2009). The College Persistence Questionnaire: Development and validation of an instrument that predicts student attrition. Journal of College Student Development, 50(4), 373-390.

Eckles, J. E., & Stradley, E. G. (2012). A social network analysis of student retention using archival data. Social Psychology of Education, 15(2), 165-180.

Johnson, D. W., & Johnson, R. T. (2002). Learning together and alone: Overview and meta-analysis. Asia Pacific Journal of Education, 22(1), 95-105.

    Johnson, D. W., & Johnson, R. T.  (2015). Theoretical approaches to cooperative learning.  In R. Gillies (Ed.), Collaborative learning:  Developments in research and practice (pp. 17-46).  New York:  Nova. 

Lage, M. J., Platt, G. J., & Treglia, M. (2000). Inverting the classroom: A gateway to creating an inclusive learning environment. The Journal of Economic Education, 31(1), 30-43.

Seidman, A. (2005). College student retention: Formula for student success (ACE/Praeger Series on Higher Education). Westport, CT: Praeger.

    Skahill, M. P. (2002). The role of social support network in college persistence among freshman students. The Journal of College Student Retention: Research, Theory, and Practice, 4(1), 39–52.

Tinto, V. (2006). Research and practice of student retention: What next? Journal of College Student Retention: Research, Theory & Practice, 8(1), 1-19.

    Tinto, V. (2010). From theory to action: Exploring the institutional conditions for student retention. In J. C. Smart (Ed.), Higher education: Handbook of theory and research (pp. 51-89). Netherlands: Springer.

     

    H.-T. Michael Chen is an Assistant Professor of Psychology at Eastern Kentucky University in Richmond, KY. He graduated from Berea College with a degree in Biology, and earned his M.S. and Ph.D. in Experimental Psychology from the University of Kentucky. He teaches courses in research methods, cognition, and human factors. His research interests include teaching strategies in the classroom and the design of better educational technologies.

     

  • 16 Jul 2017 1:29 PM | Anonymous
    STP’s SoTL Writing Workshop: A.K.A. How I Wrote a Paper in Two Days

    Michelle A. Drouin

    Indiana University–Purdue University Fort Wayne

     

    In this paper, I describe my experiences with the Society for the Teaching of Psychology’s (STP) Scholarship of Teaching and Learning (SoTL) Writing Workshop. I first describe the obstacles preventing me from joining such efforts and then describe the process and structure of STP’s Writing Workshop. As a result of my participation, I not only wrote a manuscript from (practically) start to finish in two days, but I also finished three other SoTL papers and developed and implemented a SoTL Writing Retreat on my own campus.

    It is very difficult to say “no” to Regan Gurung. He is charming and charismatic, and as the former President of STP, he is kind of a psychology celebrity. So in May, 2012, when Regan invited me to apply to the STP’s Scholarship of Teaching and Learning (SoTL) Writing Workshop (www.teachpsych.org/conferences/writing/index.php#.UcpdcZzNnUk), try as I might, I could not say “no.”

     “But it’s hard for me to travel,” I said. “I have two young children, five and three.”

    “Perfect! Mine are six and four,” Regan responded.

    “I actually have a lot of projects going on, so I am really doing well on my SoTL writing,” I countered.

    Regan smiled, “Are they finished? You owe it to teachers and students everywhere to get them out.”

    “Teachers and students everywhere?” I pondered, “That’s a lot of people depending on me... .”

“Ok, I’m in,” I replied.

    Thus began my journey with STP’s SoTL Writing Workshop.


    The Obstacles

    As I look back on that day, I can clearly identify the obstacles that were keeping me from engaging in writing workshops generally and this one specifically:

    • I thought I had SoTL writing figured out. I had a few SoTL research papers published and had written two invited book chapters. Although I did not consider myself an expert in SoTL, I was certainly one of the SoTL leaders at my university. I knew I could do the work, so I really did not know what the SoTL Writing Workshop could do for me.
    • I did not think I had the time for a workshop. I was already time pressed—hence the many unfinished projects—so how would I find the time to travel and participate in a workshop?
    • I thought that unfinished projects were a normal part of academic life. My colleague (who has been in his position for 9 years) still has an unfinished project from graduate school. I have many unfinished projects, and as the years go by, that list is growing, especially for SoTL projects. I accepted this as a normal part of my academic journey.
    • I am actually a good, prolific writer. I don’t struggle with writing. I spend much of my academic work time writing both disciplinary and pedagogical papers, and I am successful in getting my work published. According to the 2010-11 UCLA Higher Education Research Institute Faculty Survey, only about 20% of faculty at all baccalaureate institutions had five or more papers accepted or published in the last two years (Hurtado, Eagan, Pryor, Whang, & Tran, 2012), and I am pleased to say that I am in that 20%.

    Engaging

    Despite my many internal protests, I engaged. Two weeks later, I was describing via email my various unfinished SoTL projects to my three fellow group members and reading Optimizing Teaching and Learning: Practicing Pedagogical Research (Gurung & Schwartz, 2009), which Regan sent to workshop participants. I was also learning more about the workshop through email and had received a participant timeline with “soft deadlines to make the workshop most effective”:

    May:  Introductions and basic idea sharing.

    June-August:  Preliminary consultations.

    August 30th:  Project proposal/status—Write a 1-2 page proposal for the topics you would like to research. If there is data collected, then list key hypotheses driving the study and draft a method section.

    September 15th: Complete a preliminary literature search for articles relating to topic of interest or study conducted (outline Intro section).

    Oct 1st: Final report on activity/project status due to Mentors.

                                                 (R. Gurung, personal communication, May 29, 2012)

    Through this email correspondence, I also learned two important things: (1) that the mentors would provide follow-up consultations and draft reading (or other types of assistance) post-workshop, and (2) that the goal of the workshop was to have a SoTL publication submitted by the end of the 2012-13 academic year. As I hoped to finish at least one of my papers by that deadline, I thought this was a realistic goal for me. However, one of the hurdles I faced during my preliminary consultations with Regan was trying to decide which of my many projects to bring to the workshop.


    Getting Organized

At the time of our initial correspondence, I had SEVEN unfinished SoTL projects. I was already in the writing phase of an online lecture paper and decided to finish that one outside of the SoTL writing workshop; the workshop only accelerated my timeline. Thereafter, I turned my focus to three others: an iPad project, an online decision tree for psychology majors, and a lecture capture project. In preparation for the August 30 deadline, I was overzealous and finished and submitted the decision tree paper, which left me with five papers to complete and nothing firm to bring to the writing workshop. At this point I had to reassess, and I emailed Regan in desperation—“What project should I now bring to the SoTL writing workshop?”

    Regan replied, “Given that you are progressing well, how about you aim to send a plan of what YOU hope to have done on EACH of the 3-4 topics.  A few sentences on each so you have a clear picture of goals.” (R. Gurung, personal communication, August 29, 2012).

    At this point, I finally committed to paper the goals I had for my various SoTL writing projects and constructed a table that would guide me through the rest of the process. In this table, I listed my five unfinished projects and the goals I had for them for the October workshop (summarized here):

    • iPad cohort & lecture capture projects: Data analyzed; results and methods sections written, literature review mostly done
    • Research assistantship, blogs as learning tools, and research review and presentation projects: Data cleaned; sources gathered

    Creating this table gave me clarity. This was the first time in my academic career that I had actually listed all of my ongoing projects and created goals for each. Until this point, the projects were all quite nebulous—I did not even know how many unfinished SoTL projects I had. After I created the table, I had a visual reminder of my goals, and this was a breakthrough. As I thought about my goals, I knew that if I could arrive at the writing workshop with at least cleaned data sets and relevant sources gathered, I would be able to make the most of the personalized statistical consultations and also be able to get advice on publication. Minimally, this is what I hoped to accomplish, and in the end, this is what I had accomplished when I boarded the plane for Atlanta in October, 2012.


    Attending the Conference

    Early in my career, I heard a rumor about two professors who would get together and complete manuscripts (from start to finish) in a weekend. I remember the questions that rushed through my head at the time—“How did they do it? What did they do to prepare for this writing extravaganza? Did they each work independently, or did they work collaboratively?” Because the source of this rumor had so few answers, I dismissed it as urban legend. However, now I know that this feat can be accomplished.

    When I arrived in Atlanta for the SoTL writing conference, I had 733 words (mostly methods), a cleaned data set, and sources gathered for a manuscript on the effects of using lecture capture in an introductory psychology course. I focused on this paper because after cleaning the data sets of three other projects (research assistantship, blogs, and research review), I decided I needed to collect more data. Meanwhile, although I had enough data for the iPad project, it was not specific enough to psychology to make use of the mentorship I was about to receive. Thus, my lecture capture project became my official SoTL workshop baby.

    The SoTL writing conference runs concurrently with STP’s Best Practices Conference, so we were able to attend the keynote addresses for the Best Practices Conference; however, the rest of the time we were to devote ourselves to our SoTL projects. The structure of the conference was:

    Day 1: Evening arrival, dinner, presentation on doing SoTL research by Regan Gurung, large-group introductions with explanations of our SoTL projects.

    Day 2: Writing, individual consultations with mentor, individual consultations with statistician and ToP editor.

    Day 3: Writing, presentation by Drew Christopher (Editor, Teaching of Psychology) on getting published, departure in the afternoon.

I spent most of my time writing in the hotel lobby, side by side with other workshop participants, pausing at times to ask for their feedback on something I had written, but mostly staying in my own private writing abyss. I had a few consultations with Regan, who pointed me to relevant sources and asked me to include additional information. I talked through my statistical analyses with Georjeanna Wilson-Doenges, who helped me see that what I was actually proposing was a mediation model. And I also spoke at length with Drew Christopher, who encouraged us all to be tenacious with our papers. When I boarded the plane to go home, I had 5,697 words and a paper that was nearly complete. A few days later, I sent it to Regan for feedback, and approximately one week later, I sent it out for review.


    Results

A few months later, my paper (Drouin, 2014) was accepted with minor revisions for publication in Teaching of Psychology. However, this was not the only positive outcome of my SoTL writing workshop experience. Two other papers I prepared as part of this process (the lecture format study and the iPad project) have now been accepted for publication, and I am currently revising another (the online decision tree) in response to a revise-and-resubmit decision. This is the greatest number of SoTL papers I have ever written in a one-year time frame and is equivalent to the number of SoTL articles I had accepted before I joined this workshop.

These accomplishments are overshadowed, though, by my biggest take-home from the conference. In May 2013, just one year after my initial conversation with Regan, I coordinated my own SoTL Writing Retreat on my campus. We had 12 participants working side by side with four experienced SoTL mentors, a statistical consultant, and librarians, who assisted with source gathering and finding publication venues. Sponsored by IPFW's Committee for the Advancement of Scholarly Teaching and Learning Excellence, this SoTL writing retreat was the first of its kind on our campus and was a great success. Although I did not follow the STP Writing Workshop model exactly (e.g., due to time constraints, we did not provide consultations in advance, and we also did not create a firm structure for follow-up consultations), we included key elements that were helpful in making the workshop a success for me. More specifically:

    1. We had an application process. Participants were asked to describe the projects they were working on, where they were in the process, and what they hoped to accomplish during the retreat.
    2. Participants were paired with mentors who had knowledge of the content area or data collection method. Based on the applications, we formed mini-groups composed of people who were working on similar projects or using similar data collection methods, and we matched mentors with writers on this basis.
    3. The writing retreat lasted only two days. Longer writing workshops or writing lockdowns that have meetings over weeks or months, like those highlighted by Belcher (2009) or Jakobsen and Lee (2012), certainly have their strengths, but my university already had writing groups, and I had never engaged because I feared the long commitment. Workshops of a limited duration are perfect for commitment-phobes like me, and because this model had worked for me with STP’s workshop, I wanted others to be able to experience this model.
    4. It was a retreat, with large chunks of time devoted to writing. We had only two short workshops on IRB proposals and publication venues; the rest of the time was devoted to manuscript writing or other types of SoTL writing activities (e.g., writing an IRB proposal, writing out a plan for the research).

Feedback on the workshop was overwhelmingly positive, though I did receive suggestions to do more preparatory work with participants before the retreat, which aligns well with STP’s model. Overall, participants appreciated the time devoted exclusively to working on their projects and the synergy we created during those two days in the campus library. It was inspirational for me, and in a sense, I felt that I was paying it forward.

    As I closed the writing workshop, I chose my words carefully: Echoes of a year before and foreshadowing for the essay you are now reading— “This is important work. You owe it to students and teachers everywhere to get it out.”


    References

Belcher, W. L. (2009). Writing your journal article in 12 weeks: A guide to academic publishing success. Thousand Oaks, CA: Sage.

Drouin, M. (2014). If you record it, some won’t come: Using lecture capture in introductory psychology. Teaching of Psychology, 41(1), 11-19.

Gurung, R. A. R., & Schwartz, B. M. (2009). Optimizing teaching and learning: Practicing pedagogical research. Malden, MA: Blackwell.

Hurtado, S., Eagan, M. K., Pryor, J. H., Whang, H., & Tran, S. (2012). Undergraduate teaching faculty: The 2010–2011 HERI Faculty Survey. Los Angeles: Higher Education Research Institute, UCLA.

    Jakobsen, K. V., & Lee, M. R. (2012). Faculty writing lockdowns. In J. Holmes, S.C. Baker, & J. R. Stowell (Eds.), Essays from E-xcellence in Teaching (Vol. 11, pp. 26–29). Retrieved from the Society for the Teaching of Psychology Web site: http://teachpsych.org/ebooks/eit2011/index.php


    Michelle Drouin earned her bachelor’s degree in psychology from Cornell University and her DPhil in Experimental Psychology from University of Oxford, England. She is an associate professor of psychology at Indiana University-Purdue University Fort Wayne and teaches courses in introductory psychology, developmental psychology (child and lifespan), social and personality development, and language development. Her research, both disciplinary and pedagogical, is focused on literacy, language, and the ways in which technology affects communication and learning. She has written numerous pedagogical papers and invited book chapters focused mainly on online teaching and the integration of technology in the classroom.

     

  • 02 Jul 2017 4:41 PM | Anonymous

    Flipped out: Methods and outcomes of flipping abnormal psychology

    Amanda K Sommerfeld, Ph.D.
    Washington College

     

    The Background

Abnormal psychology is taught in virtually every undergraduate psychology department across the country (Perlman & McCann, 1999). However, despite its popularity, the course is not immune from critique. Like many college courses, abnormal psychology is often lecture-based (Benjamin, 2002). Although this pedagogical approach is popular among faculty because of its effectiveness in maximizing content delivery (Kendra, Cattaneo, & Mohr, 2012), in some cases lectures may be less effective than other methods for promoting learning (cf. Halonen, 2005).

    Abnormal psychology courses have also been critiqued as lacking both context and nuance. As Norcross, Sommer, and Clifford (2001) note, in abnormal psychology classes, “the painful human experience of psychopathology is frequently overshadowed by descriptions of disembodied symptoms and impersonal treatment” (p. 126). As a result, despite many professors’ intentions to use abnormal psychology courses to decrease stigma (Kendra et al., 2012) and increase student understanding of the contextual factors that shape psychiatric conditions (Lafosse & Zinser, 2002), courses may fall short of these desired outcomes. That was certainly my experience when I first taught abnormal psychology.

     

    The Issue

Psychopathology I (PSY 233) is a core course for students who are majoring in psychology with a clinical/counseling concentration at my college. Because of this, as well as the content, the class is frequently filled to capacity (40 students). When I inherited the class in Fall 2014, I kept using what Benjamin (2002) refers to as “the Velveeta (cheese) of teaching methods” (p. 57), otherwise known as a lecture-centered approach (which is comparable to the cheesy foodstuff in that, despite the fact that no one admits to liking it, it remains the most popular pedagogical approach; Halonen, 2005). I enhanced the class with media critiques, group projects, and in-class discussions; however, class time remained lecture-driven.

According to my students, the course was successful. Students gave high ratings on course evaluation items (rated from 1 = strongly disagree to 5 = strongly agree) such as “The use of teaching aids was effective” (M = 4.9) and “The instructor answered questions in class in a patient and helpful manner” (M = 4.9). Students’ qualitative feedback supported these ratings.

Despite this positive feedback, I was dissatisfied with several aspects of the course. For example, lower student ratings on items such as “I learned a great deal in this class” (M = 4.6) and “The course raised challenging questions or issues” (M = 4.6) led me to wonder whether students were basing their assessments on how much they liked the course rather than on their actual learning. What is more, at the end of the semester I didn’t feel confident that I’d met my objective of challenging students to consider how cultural norms and biases contribute to psychiatric conditions. As a result, I was left with the sense that, because of the format, I had reduced the course content to a list of diagnostic criteria, leaving little time for acknowledging symptom variation, challenging stereotypes, or encouraging the development of advocacy attitudes. To combat these shortcomings, I decided to change the class radically, and, with the support of a grant from my college’s Cromwell Center for Teaching and Learning, I flipped—or inverted—the class.

     

    The Solution

    There is no single definition of flipped instruction (He, Holton, Farkas, & Warschauer, 2016). However, the underlying intent of the approach is to move lecture-based material outside of class, leaving in-class time for “face to face engagement between students and teachers” (Forsey, Low, & Glance, 2013, p. 472). This is commonly achieved by delivering course content before class meetings using recorded lectures, podcasts, or videos. Material is then applied during face-to-face meetings through discussions, activities, and hands-on demonstrations.

To date, the research on flipped instruction is incomplete. As O’Flaherty and Phillips (2015) note, few studies have “actually demonstrated robust evidence to support that the flipped learning approach is more effective than conventional teaching methods” (p. 94). Despite this, the preliminary evidence is encouraging, with some studies reporting that flipped instruction results in greater student engagement (cf. Jamaludin & Osman, 2014) and higher test scores and overall grades (cf. Mason, Shuman, & Cook, 2013). Based on this available evidence, and the issues that I observed in the first iteration of Psychopathology I, flipping the class seemed a worthwhile venture.

     

    The Implementation

Flipping Psychopathology I required me to create two sets of materials: out-of-class and in-class. The bulk of the class content (i.e., diagnostic criteria, prevalence rates, treatment approaches, etc.) was delivered outside of class through video lectures uploaded to the course’s online learning platform. For the first iteration of the flipped course, these lectures were simple, with my voice recorded over PowerPoint slides using SnagIt. The videos were limited to ten minutes each so students could easily review the information. Prior to class, students were required to watch between one and three videos and complete an online quiz. The quizzes were intended to encourage mastery, so students were able to repeat them multiple times.

In-class time was focused on application and discussion (Pluta, Richards, & Mutnick, 2013). This required me to create individual and group activities for each class meeting. Sample activities included evaluating media depictions of psychiatric disorders for accuracy, writing vignettes of imaginary clients, and discussing the systemic factors that affect how clients manifest symptoms.

I evaluated the effectiveness of flipped versus traditional instruction based on data collected at two time points: following a lecture-based course in Fall 2014 (N = 27) and following a flipped-style course in Fall 2015 (N = 34). The data I collected at both time points included student test scores and grades, student course evaluations, student responses to questions developed for the Web Learning Project (Calderon, Ginsberg, & Ciabocchi, 2012), and instructor reflections.
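For readers curious about the mechanics of the between-semester comparisons reported below, here is a minimal sketch in Python of an independent-samples t-test on a single evaluation item. This is not the author's actual analysis, and the ratings are invented for illustration.

from scipy import stats

# Hypothetical ratings (1-5 scale) on one evaluation item from each cohort
lecture_2014 = [5, 4, 4, 3, 4, 5, 4]
flipped_2015 = [5, 5, 4, 5, 5, 4, 5]

# Independent-samples t-test comparing the two cohorts
t, p = stats.ttest_ind(lecture_2014, flipped_2015)
df = len(lecture_2014) + len(flipped_2015) - 2
print(f"t({df}) = {t:.2f}, p = {p:.3f}")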

     

    The Outcomes

                Data from the traditional and flipped offerings of Psychopathology I suggested the pedagogical change affected outcomes in three domains: student learning, student engagement, and instructor experience.

     

    Impacts on student learning

Researchers suggest that flipped instruction is successful because students are able to learn and review pre-class material on their own time and at their own pace (McDonald & Smith, 2013). Many of my students agreed with this assessment, sharing comments on course evaluations such as, “I like how the videos were before class. It allowed for deeper understanding of the material because I can pause, write down questions, and review as needed.” Accordingly, students in the flipped class also rated the “adequacy of resources” significantly higher than students in the lecture class (t(54) = -2.11, p = .04).

Contrary to the literature, however, the accessibility of material outside of class did not translate into higher grades for my students. Although there were no significant differences in final grades between the two classes, students in the flipped class had significantly lower exam grades than students in the lecture-based class (t(58) = 2.42, p = .02). What is more, mean student responses to the item “I learned a lot in this course” were lower in the flipped course (M = 4.3) than in the lecture course (M = 4.6).

It is possible that some of the learning drawbacks of the flipped class were related to perceptions of the difficulty of the course. In comparison to students in the lecture class, students in the flipped class rated the course as having a significantly higher “workload” (t(56) = -6.02, p < .001) and as being more “difficult” (t(55) = -3.19, p < .01). Further, students’ qualitative feedback reinforced these ratings, suggesting the flipped style made learning more difficult for some students.

This perception runs contrary to previous studies suggesting that students perceive flipped courses as less difficult than courses taught using traditional methods (He et al., 2016). However, similar to previous research (cf. O’Flaherty & Phillips, 2015), students in my flipped course suggested the difficulty predominantly stemmed from the increased responsibility they felt: “The flip style makes learning just a little bit harder because it puts all the responsibility on what you do outside of the classroom.”

     

    Impacts on student engagement

The fundamental purpose behind flipped instruction is to use in-class time for active learning. Given this, some of the feedback from students in the flipped class led me to question the effectiveness of my in-class activities. For example, students in the flipped course rated the “learning value of in-class materials” significantly lower than students in the lecture course (t(56) = 2.33, p = .02). These data were supported by comments such as, “class meetings are interesting but not necessarily informative.”

    Based on these data, it seems that my implementation of flipped pedagogy may have fallen short because of how I structured face-to-face meetings. It may be that, similar to O’Flaherty and Phillips’ (2015) findings in their scoping review, I failed to explain the link between the pre-class activities and the face-to-face sessions. As a result, the in-class material may not have engaged the students.

With that said, the data also suggested students interacted more in the flipped class, which may have facilitated student engagement. For example, students in the flipped course rated the amount and quality of “interaction with other students” as significantly greater than students in the lecture course (t(56) = -6.06, p < .001). Student comments reinforced these data, with one student noting, “I like that we get more time to ask questions in class,” and another mentioning that “the interaction during class time helps to solidify the information.”

     

    Impact on instructor

Researchers who study flipped instruction routinely note how demanding it is on instructors. That was certainly my experience in flipping Psychopathology I. Similar to other instructors’ experiences, it took considerable planning and preparation for me to design engaging, interactive in-class activities (cf. Mason et al., 2013). A great deal of lead time was also required to record and edit lectures in advance of class meetings.

The process of making the videos was also complicated by the limited technical support available to me. Although I consulted with members of the academic technology team at my institution, they did not have the time or resources to help me record or edit the videos. As a result, I had to learn how to use the software and troubleshoot issues on my own. Given the amount of time and expertise required to create even simple videos, it is not surprising that researchers have recommended having support staff or a technical team available (cf. Ferreri & O’Connor, 2013).

Despite these issues, I also found the flipped course had multiple strengths. Most importantly, because students could access and review the lectures before class meetings, they were less concerned with taking notes in class. This freed students to listen to their classmates, contribute to discussions, and engage fully in activities. As a result, a greater proportion of students participated in the flipped class than in the traditional class.

                Finally, I also found the flipped class provided students with increased opportunities to consider more nuanced issues related to psychiatric disorders. In particular, because the students were introduced to diagnostic criteria and prevalence rates prior to class, they were more prepared to apply and critique that material in class, opening up discussions about stigma, social norms, and systemic forms of privilege and oppression that affect psychological health and illness.

     

The Conclusions

The flipped version of Psychopathology I had both strengths and weaknesses. Students appreciated the opportunities for review that the flipped style provided, were better able to consider the nuances of psychiatric conditions, and were more engaged during in-class meetings. On the other hand, some students reported the flipped style made learning more difficult, and I found the flipped course took more time to prepare. Given these data, it is not possible to say flipping Psychopathology I improved the course as a whole, at least not after the first offering. However, with revision, the flipped course could hold considerable promise to help students develop more critical perspectives on topics relevant to abnormal psychology.

     

    References

     

Benjamin, L. (2002). Lecturing. In S. F. Davis & W. Buskist (Eds.), The teaching of psychology: Essays in honor of Wilbert J. McKeachie and Charles L. Brewer. Mahwah, NJ: Lawrence Erlbaum Associates.

    Calderon, O., Ginsberg, A.P., & Ciabocchi, L. (2012). Multidimensional assessment of pilot blended learning programs: Maximizing program effectiveness based on student and faculty feedback. Journal of Asynchronous Learning Networks, 16(3), 23-37.

Ferreri, S., & O'Connor, S. K. (2013). Instructional design and assessment: Redesign of a large lecture course into a small-group learning course. American Journal of Pharmaceutical Education, 77(1), 19.

    Forsey, M., Low, M., & Glance, D. (2013). Flipping the sociology classroom: Towards a practice of online pedagogy. Journal of Sociology, 49(4), 471-485.

    Halonen, J.S. (2005). Abnormal psychology as liberating art and science. Journal of Social and Clinical Psychology, 24(1), 41-50.

    He, W., Holton, A., Farkas, G., & Warschauer, M. (2016). The effects of flipped instruction on out-of-class study time, exam performance, and student perceptions. Learning and Instruction, 45, 61-71.

    Jamaludin, R., & Osman, S. Z. (2014). The use of a flipped classroom to enhance engagement and promote active learning. Journal of Education and Practice, 5(2), 124–131.

Kendra, M. S., Cattaneo, L. B., & Mohr, J. J. (2012). Teaching abnormal psychology to improve attitudes toward mental illness and help-seeking. Teaching of Psychology, 39(1), 57-61.

Lafosse, J. M., & Zinser, M. C. (2002). A case-conference exercise to facilitate understanding of paradigms in abnormal psychology. Teaching of Psychology, 29(3), 220-222.

Mason, G., Shuman, T., & Cook, K. (2013). Comparing the effectiveness of an inverted classroom to a traditional classroom in an upper-division engineering course. IEEE Transactions on Education, 56(4), 430-435.

    McDonald, K., & Smith, C. M. (2013). The flipped classroom for professional development: Part I. Benefits and strategies. The Journal of Continuing Education in Nursing, 44(10), 437.

Norcross, J. C., Sommer, R., & Clifford, J. S. (2001). Incorporating published autobiographies into the abnormal psychology course. Teaching of Psychology, 28(2), 125-128.

O’Flaherty, J., & Phillips, C. (2015). The use of flipped classrooms in higher education: A scoping review. Internet and Higher Education, 25, 85-95.

    Perlman, B., & McCann, L.I. (1999). The most frequently listed courses in the undergraduate psychology curriculum. Teaching of Psychology, 26(3), 177-182.

Pluta, W., Richards, B., & Mutnick, A. (2013). PBL and beyond: Trends in collaborative learning. Teaching and Learning in Medicine, 25(S1), S9-S16.

  • 15 Jun 2017 1:15 PM | Anonymous
    Using The iPad In Your Academic Workflow:
    Best iPad Productivity Tools For Your Classroom Practices


    David Berg, Ph.D.
    Community College Of Philadelphia


This document is based on workshops I presented at the 35th Annual National Institute on the Teaching of Psychology.

    Introduction To “Using The iPad In Your Academic Workflow”
In the academic world, our workflow involves a number of different elements, which may include planning and scheduling, project management, reading and writing, information management (gathering, sorting, storing), collaboration (students, colleagues, department, college, and organizations), participation in meetings and committees, and interfacing with cyberspace (email and web). We could add many more things to the list; however, it’s best to emphasize that workflow for the iPad looks like old S→P→O Psychology. The workflow starts with the INPUT (stimulus) into the iPad, from your computer (via iTunes sync), from the cloud (via DropBox or WiFi), or from your thoughts and ideas. The workflow ends with the OUTPUT back to your computer, to the cloud, to a projector, or perhaps to a printer. OUTPUT can take many forms: written and marked-up documents, media (audio/video/artistic/photos), presentation materials, podcasts, collaborative documents, and so on. What goes on in the middle is the PROCESSING, which entails the use of many interconnected tools or apps on the iPad itself -- the majority of this essay focuses on this PROCESSING stage.

    iPad In The Classroom
    Over the past two years or so, more and more faculty have been making use of the iPad as the “tool of choice” in their academic lives. As the iPad (and iOS) have matured, we’ve seen greater numbers adapting the device for their personal use. What about the iPad in the classroom? Beyond some simple usage, most faculty have not tapped the full potential of the iPad—still relying on laptops, smart carts, and the classroom smart podium (nice if your classroom has one). My favorite classroom is currently outfitted with 1976-era technology: a 27” wall-mounted monitor with attached VHS/DVD player (that works most of the time). Schlepping the smart cart from A/V services around the campus is a Herculean chore not for the faint of heart; getting all of the parts working and set up for class...well...resistance is futile!

So I made an executive decision. Though on a shoestring budget, I decided that I would not upgrade my old laptop but would invest in the new tablet technology instead, and adapt it to both my classroom needs and my academic workflow. Mind you, I have a decent, up-to-date desktop computer that provides a way around some of the content creation issues that come up with tablet computing.




The next section is aimed at the professional user who wants to make the most out of using the iPad in the classroom. It does not cover classes in colleges that give everyone an iPad (we should only be so lucky), but rather how to make use of the iPad as your go-to technology.

    Issues:
The four biggest issues usually raised when we discuss using the iPad are: Content Creation vs. Consumption, Laptop vs. iPad, Device Integration, and College vs. High School teaching. When the original iPad was first released, it really functioned as a superb consumption device—great for personal use but lacking in many ways as a tool for creating content. Times have changed! You can create to your heart’s content, albeit with some limitations in a few areas; there isn’t much that you can’t do. For academics, probably the most serious limitations are in creating major presentations (PowerPoint and Keynote), developing large media projects, and other areas such as business applications (large Excel spreadsheets and such). You can do these things, but not with the same ease as on a laptop or desktop computer.

Of course this brings us to the next issue, Laptop vs. iPad. The iPad excels as a portable device, whether at college, in the classroom, at home, or for travel. In a classroom, the iPad can be connected to any monitor or projector with ease, and it can further be used as a whiteboard, making for an interactive class. The laptop may be preferable for data management, for content creation of presentations and media, or for research and data. If you need to make a decision, think in terms of what your needs are rather than in terms of what device to buy. I have a wonderful desktop machine, so I have given up my old laptop in favor of my iPad; when I retire, I will give up the desktop machine. If you do not have access to a good working computer, you might think about updating.

Once these first two issues get sorted out, you can then consider the third, Device Integration. NOT A PROBLEM. When the iPad first appeared, about the only way to get information in and out was through iTunes sync. Now, with the proliferation of cloud computing, integration is no longer a difficulty. I prefer to connect my iPad to my computer every few days and use the apps/file-sharing method in iTunes sync. However, many people prefer to use DropBox as their primary means of transferring information between their iPad and their Mac or PC. For specific types of documents, both Google and Microsoft have also introduced their own versions of the cloud for document syncing and collaboration.

Finally, high school Psychology teachers may have other responsibilities that college instructors don’t have to deal with, such as interfacing with an administrative network, putting together course lessons for five-day-per-week classes, and making lesson plans available to supervisors. There are now a number of apps to facilitate these functions.



    Fair Use Guidelines & Copyright Issues
We need to exercise great caution in what we download, copy, and/or display. Distribution of copyrighted materials is a serious issue, but simply displaying the material may not be. There are strict copyright guidelines regarding such matters, so understanding the fair use guidelines and their exceptions is very important. My experience has been that permission is easily obtained with a quick email, which avoids many hassles. For an overall view, the Center For Social Media has provided a “best practices” paper dealing with copyright, along with a FAQ review (http://centerforsocialmedia.org/fair-use/related-materials/codes/code-best-practices-fair-use-online-video).



    Accessories:
Some accessories are a must to make full use of the iPad. Choose among the categories based upon personal look, feel, and expense. Trying before you buy is always best, so speak to colleagues and friends to determine what works best for you. If you live near an Apple Store or Best Buy, then go play. If you cannot, then four reliable online sources for accessories are Amazon.com, Meritline.com, Buy.com, and Handhelditems.com. Must-have accessories include:

    • Bluetooth Keyboard (stand-alone or in a folio case, approximately $50)
    • Folio style case or iPad cover (approximately $35)
    • Stylus (approximately $20) and Screen Cleaner (approximately $10)
    • Auxiliary speakers & headphone (range in price from $5 to $200)
    • Extra charger for office or auto (approximately $20)       


    Resources:
    There are a few excellent websites that will be helpful for both workflow and classroom teaching with the iPad.




    What Do You Want To Do?
Probably the biggest question is “What do you actually want to do with your iPad?” This needs to be well thought out because it will entail investments of time, training, and some cash (for apps and accessories). I have arbitrarily divided the use of the iPad in both the workflow and the classroom into a number of areas. These overlap and are by no means exhaustive. I’ve also listed apps that are highly rated in each category; some are free and others are not. Check them out at the iTunes Store online or the App Store app on the iPad. Download the freebies and play. For those that cost, read the reviews and click the “most critical” link in the reviews before buying.



    The Workflow and Classroom Categories & Specific Apps
    Beginning and Ending the Workflow: Input and Output

    Getting your documents into the iPad is a fairly straightforward procedure called syncing.

The two most popular and efficient ways are through iTunes sync and DropBox. Simply drag a file to DropBox on your computer (PC/Mac), and it will show up on your iPad (assuming both devices are connected to the internet). Once you have the document on the iPad, use the “open in” command to move the file to the appropriate app. Reversing this process moves the document back to your computer.

    iTunes sync occurs when you attach your iPad to the computer. There is a window in iTunes that contains all of the apps that share your documents. Simply add your document into this window, and it will sync to your iPad. The reverse process updates the document which can then be saved.

    The advantage of DropBox is that you don’t have to attach the iPad to the computer; further, you can set up folders to share with other people over any network. iTunes sync’s advantage is better organization and control of your documents. I prefer iTunes sync.

Output from the iPad is pretty much the reverse of the processes listed. In addition, we can add email and printing as output methods. While I list presentation and communication apps later, printing is a special case because it can take several steps. Some apps are AIRPRINT enabled, meaning that they will, without any extra steps, print to an AIRPRINT-ENABLED printer. All of the major manufacturers make them, so if you are purchasing a new printer, look this up in the specs. For those of us who do not need a new printer, several apps are available in the iTunes store that will enable you to use a printer on the same wifi network. Choose apps that have two versions: a lite (free trial) as well as a paid version. Download the lite version and give it a try. If it works, then purchase the full paid version. Loading the app onto the iPad, and the computer version onto your Mac or PC, will enable you to print wirelessly over your network. There are several choices: I have used PrintCentral from Eurosmartz ($10) since the iPad came out (it was one of the first apps), and it works just fine for me.

    Project and Task Management
This category includes apps useful for project and event planning. The most popular apps are those that use the built-in Calendar and Reminders; those of you who use Google’s apps may want to integrate Google Calendar into your iPad use. Additionally, for those who really like to have more control, there are a number of To-Do apps (e.g., Wunderlist, which is free, and ToDo, which costs $5). If you want to do graphic layouts of projects, Popplet and Corkulous are quite good. For special presentations and projects, Exhibit A ($10) is worth investigating. (Costs of the apps below are listed with the app; free apps are denoted by “F.”)

    Project and Task Management Apps
    • Calendar (F)   
    • Corkulous (F + $5)   
    • Popplet Lite (F)  
    • ToDo ($5)  
    • Wunderlist (F)    

Writing, Collaboration, and Communication Tools and Apps
These include writing and note-taking apps, apps for grading papers, email, Skype, Google Docs, DropBox, podcast and screencast production, and the internet.

    Apps to Substitute for MS Office and Note Taking
    • CloudOn (F)
    • DocsToGo ($10)
    • Google Docs (F)
    • Notability ($1)
    • Pages ($10)
    • Penultimate ($1)
    • Smart Office ($5)
    • SoundNote ($5)
    Good Utilitarian Browsers
    • Chrome (F)
    • Life Browser ($1)
    • Safari (F)

    Browsers That Play Flash
    • Photon ($5)
    • Puffin (F)
    • SkyFire ($3)

    Utility Apps for Recording, Communications, Bar Code Reading
    • Dictate (F)
    • Display Recorder ($10) 
    • FaceTime (F)
    • i-nigma (F) (QR codes)
    • Skype (F)    
    • Twitter (F)       

    Utilities for Printing                 
    • PrintCentral ($10)  
                    
    Utilities for Displaying
    • Reflector ($15)    
    • Splashtop ($2)

    Finding WiFi
    • Wi-Fi Finder (F)

    Information Management
These apps include textbooks, readers, databases for information materials, lecture-note replacements, and PDF readers/annotators.

    Apps for information storage -- A personal file cabinet
    • DropBox (F)     
    • EverNote (basic app is free, there is also a premium version for $5/month)
    • Exhibit A ($10)
    • GoodReader ($5)   
    • Google Drive (F)           

    WebPage Storage Apps (Read webpages offline without an internet connection)
    • Instapaper ($4)
    • JotNot ($2)
    • Offline Pages ($5)
    • Pocket (F)
    • Safari (F)

    Research and Reading and Reference
    • APA Journals (F) (priced by subscription)
    • CourseSmart (F) (books – prices vary)
    • Inkling (F) (books – prices vary)
    • Mendeley Lite (F)   
    • Wolfram Alpha ($5)

PDF Annotation, PDF Readers, Book Readers
    • iAnnotate ($10)
    • iBooks (F)
    • Kindle (F)      
    • neu.Annotate+ ($2)
    • Nook (F)

    Presentations
Apps to use for Presentations, Whiteboard, Digital Jukebox, and Surveys and Polls (without clickers). For a digital jukebox, use GoodReader, Keynote, or any app that will play PowerPoint slides.
    • GoodReader ($5)
• Keynote ($10)
    • Lecture Tools (F)
    • Poll Everywhere (F+)
    • SlideShark (F)


    Classroom Management
This category includes apps that are used for organizing the class, such as calendars, grade books, and attendance (roll book). If working with these types of apps feels cumbersome, then setting up a spreadsheet grade book on your computer and transferring it to the iPad may be a good choice. (I personally use the spreadsheet method, but some faculty like an all-in-one app.)
    • Calendar (F)
    • Google Calendar (F)
    • Numbers ($10) (an office spreadsheet)
    • Reminders (F)
    • ToDo ($5)
    • Wunderlist (F)

    The following are specific apps to organize classrooms, attendance, and gradebooks.
    • Class Organizer Complete ($5; for students)
    • GradeBook Pro ($10)
    • InClass (F; for students)
    • TeacherKit (F)
    • Teacher’s Aide (F)
                               


    Demonstration Apps
This category includes specific psychology-related demonstration apps. These vary from those that can be used as “labs” to class A/V displays, digital jukeboxes (brain and body), and informational apps for both the professor and students. The list is by no means exhaustive.

    General Psychology Information Apps
    • Psych Drugs (F)
    • PsychExplorer (F)
    • PsychGuide (F)
    • PsychTerms (F)
    • PsycTest Hero ($4)
    • Psychology Latest (F)

    Lab Demos
    • Cardiograph ($2)  
    • PAR CRR ($4)
    • Puffin (APA OPL) (F)
    • Stroop Effect (F)   
    • TouchReflex (F)

     
    Anatomy & Physiology
    • 3D Brain (F)
    • Brain Tutor (F)
    • Cardiograph ($2)
    • EyesandEars ($1)
    • Grays Anatomy ($1)
    • iMuscle ($2)

    Sensation & Perception             
    • 3D illusions (F)
    • Eye Illusions ($2)
    • EyeTricks ($1)

    Audio/Visual Informational Resources
    • iTunes U (F)    
    • Podcasts (F)   
    • SoundBox ($1)

    DIY Presentations
    • Educreations (F)
    • Explain Everything ($3)

    Video Presentations
    • Apple Video (F)
    • NetFlix ($8 monthly subscription for streaming)
• YouTube (F)

    Social Media
    • FaceBook (F)
• Twitter (F)


You can find a digital version of this document, with LIVE internet links (where applicable), on my college webpage (http://faculty.ccp.edu/faculty/dsberg/); click on “TUTORIALS & DEMOS.”




    David Berg is Professor of Psychology at Community College of Philadelphia where he was the recipient of the Lindback Foundation Award for excellence in college teaching, and where he served as past chair of the Behavioral Sciences Department. He received his Ph.D. from Temple University in experimental psychology and completed postdoctoral training in family systems theory from Drexel University/Hahnemann Medical College. David has pioneered workshops focusing on “wellness in the workplace” and has presented these to government, business, and educational institutions. He trains other psychologists to enable them to perform similar workshops. Dr. Berg has presented a number of workshops that focus on the use of writing in Psychology courses, both at NITOP and at APA. Further, he has presented a number of NITOP workshops on use of technology in the classroom. Since the advent of laptop computers, David has consulted with academic teaching faculty to bring them up to the cutting edge in using technology in the classroom. He also serves as a resource for those who teach in institutions on a “shoestring budget” like his own. He views and uses technology as a means to heighten the standards of critical thinking and writing in teaching rather than as a mere adjunct to lecturing.

     


  • 01 Jun 2017 8:19 AM | Anonymous

    Flipping the Classroom Improves Performance in
    Research Methods in Psychology Courses

     Ellen Furlong
    Illinois Wesleyan University

Despite having taught it many times, Research Methods in Psychology remains one of the most challenging courses I teach. The difficulty arises primarily because Methods has two major goals: (1) to teach students the required concepts and (2) to train them to understand, evaluate, design, and conduct research. In short, we must teach both content (What is a hypothesis?) and skill (Where is the hypothesis in this article? Is it strong? What is my hypothesis?), usually in just one semester.

The first few times I taught Methods, I tackled this problem by covering content in class and relying on a semester-long APA-style research proposal for students to practice. On the surface this worked modestly well—students typically wrote interesting papers that showed at least a superficially solid ability to apply their knowledge.

One semester I challenged my students with something new: I assigned a very short, two-page article (Kille, Forest, & Wood, 2013) and asked questions about it (e.g., “True or False: One of Kille and colleagues’ (2013) hypotheses was a rating of the likelihood that marriages of four well-known couples would break up in the next 5 years”). This activity was a disaster. Although students readily defined a hypothesis or a dependent variable, almost none could correctly identify or differentiate them in the article. This revealed both a shallow understanding of the psychological concepts and a lack of practice applying and working with them.

I found this troubling not only for my students who would go on to graduate school or take upper-level seminars, but perhaps most of all for my students who would likely not receive more training in methods and might graduate without the ability to consume research critically. Successful consumers of research need not only to describe the concepts involved in research but also to apply them readily to the newspapers, blog posts, or Buzzfeed articles that they read. This is especially important in today’s age of ‘disinformation’ and false news.

In short, the problem with Research Methods is that to practice the skills involved in research, students first need to understand the concepts. And given the pressures of the semester, we often don’t have enough time for them to do both.

This is hardly a new problem; others with similar difficulties have often turned to flipped classrooms (see, for example, Peterson, 2016, and Wilson, 2013, who used flipped courses for similar reasons in statistics courses). A typical flipped classroom involves presenting traditional lecture-based material (i.e., the foundational concepts) in an online video that students watch on their own before coming to class. During class, students then work together under the guidance of the instructor to practice applying these concepts and honing skills (e.g., Lage, Platt, & Treglia, 2000). This allows students to do the “easy” parts of learning—listening to a professor lecture, memorizing material, etc.—at home, while doing the hard parts—actually thinking about and applying the material—in the classroom with the professor’s help.

Flipped classrooms have many advantages. First, students can learn the content at their own pace because they can watch the lectures as often as they need to in order to understand the content. Second, through classroom activities, students can assess their own knowledge early, so they know what they don’t know before the exam and can target their practice accordingly. Third, because students practice their research skills in the classroom, I can provide one-on-one time with them. I can offer instant feedback, see where they struggle, and scaffold them to success. I can correct their mistakes while they are making them and adjust activities in the moment to ensure they fully meet my course goals. When students practice their skills at home, I may have no idea where or how they struggle.

    In effect, flipping the classroom allows me to move from a “sage on the stage” to a “guide on the side”, emphasizing the skill involved in assessing and designing research rather than providing definitions and rote memorization of the jargon.

Implementing a flipped classroom is very time consuming and difficult—for every 10-20-minute video I made, I spent at least 3 hours writing a script (don’t think you can do this on the fly—you hem and haw, and students feel like you’re wasting their time), creating slides, recording the video, editing it, and posting it to our course management system. Sometimes I found other people’s work that was far better than what I could have done (see Ben Goldacre’s Battling Bad Science TED Talk: https://www.youtube.com/watch?v=h4MhbkWJzKk), and that saved me hours, but for the most part I made my own lectures. I wrote online quizzes and discussion forums to ensure that students watched the lectures, and on top of all that I had to create an entirely new set of in-class activities to help my students practice their skills—the entire point of this exercise (the Society for the Teaching of Psychology (http://topix.teachpsych.org/w/page/19980993/FrontPage), Teach Psych Science (http://www.teachpsychscience.org/), and others have excellent resources on their websites). Each of these activities took at least another 2-3 hours to prepare, many of them much longer. In short, between making your own videos, exploring other people’s work, writing quizzes, and developing new in-class exercises, this is a daunting undertaking, not to be assumed lightly.

However, despite the immense amount of time and effort it took to flip my course, the outcomes were phenomenal, and I hope they will be encouraging enough to motivate others to pursue flipping and, equally importantly, to motivate their students to give a flipped class a chance.

A brief word about what I will show you here: in the Fall of 2013 I taught Methods as a traditional lecture-based course, and in the Fall of 2014 I taught the same course flipped, with 16 video lectures spread throughout the semester. I chose to compare two fall semesters even though my first time flipping the course occurred in the Spring of 2014; I did not examine those data because students in fall and spring typically differ in systematic ways (e.g., more first-semester juniors in the fall and more second-semester sophomores in the spring).

I assessed three measures over the course of both semesters: (1) applied exam questions, (2) a large APA-style research paper, and (3) student evaluations of instruction. I chose exam questions that focused on particularly difficult foundational concepts and for which there were at least two questions per topic. For the APA-style research paper, I randomly selected 5 student papers per class for in-depth assessment. These were scored on a scale of 1 (absent) to 6 (exceeds expectations). There was a good correlation (r = .87) between these scores and the grading rubric I had initially used to grade the papers. Student evaluation of instruction scores ranged from 1 (strongly disagree) to 5 (strongly agree) and included a number of questions that I will discuss below. Finally, because the sample size was small, I accepted alpha values of .10.
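To make this analysis plan concrete, here is a minimal sketch in Python of the two calculations described above: the agreement check between the in-depth paper scores and the rubric grades, and a flipped-versus-traditional comparison evaluated at the more liberal alpha of .10. This is not the author's actual code, and all numbers are invented for illustration.

from scipy import stats

# Hypothetical in-depth scores (1-6 scale) and rubric grades for the same 5 papers
indepth = [5.0, 4.5, 3.5, 6.0, 4.0]
rubric = [90, 85, 70, 96, 78]
r, _ = stats.pearsonr(indepth, rubric)
print(f"scoring agreement: r = {r:.2f}")

# Hypothetical per-student proportions correct on one applied exam topic
flipped = [0.92, 0.88, 0.95, 0.85, 0.90]
traditional = [0.80, 0.75, 0.70, 0.72, 0.78]
t, p = stats.ttest_ind(flipped, traditional)
alpha = 0.10  # liberal criterion adopted because of the small samples
print(f"t = {t:.2f}, p = {p:.3f}; significant at alpha = .10: {p < alpha}")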

    T-tests revealed that students in the flipped course (F) and the traditional course (T) scored fairly similarly on most applied exam questions (Design: F: 88%, T: 90%, p = .82; Hypotheses: F: 81%, T: 76%, p = .69; Sampling/Assignment: F: 85%, T: 80%, p = .38; Reliability/Validity: F: 83%, T: 78%, p = .39), but for two of the hardest concepts, variables and causation, students in the flipped course greatly outperformed students in the traditional course (Variables: F: 90%, T: 79%, p = .06; Causation: F: 92%, T: 73%, p = .015).
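
    The between-course comparisons reported above can be reproduced on your own exam data with a few lines of code. A minimal sketch follows, again in Python with made-up scores; the essay does not say whether the t-tests assumed equal variances, so Welch’s variant is used here as a cautious default.

        # Minimal sketch of the flipped-vs-traditional exam comparison described above.
        # Scores are hypothetical placeholders, not the actual class data.
        from scipy.stats import ttest_ind

        flipped = [92, 95, 88, 97, 90, 94, 89, 96]        # % correct on causation items
        traditional = [70, 78, 65, 80, 72, 75, 68, 77]    # % correct on causation items

        t, p = ttest_ind(flipped, traditional, equal_var=False)  # Welch's t-test
        alpha = .10  # the liberal threshold adopted above given the small samples
        print(f"t = {t:.2f}, p = {p:.3f}, significant at alpha = {alpha}: {p < alpha}")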

    Though the exam results were impressive, the largest improvements showed up in the APA-style research papers. Students in the flipped course used evidence better (F: 5.2, T: 3.4, p = .02), had better argument organization (F: 4.8, T: 3.2, p = .05), stronger hypotheses (F: 6, T: 4.2, p = .03), better proposed methods (F: 5.13, T: 4.13, p = .03), discussed their predicted findings in more sophisticated ways (F: 5.6, T: 4.35, p < .01), and wrote overall better papers than students in the traditional course (F: 5.45, T: 4.5, p = .06). Students in the flipped course were also marginally better at synthesizing information across sources (F: 5, T: 3.8, p = .11). It wasn’t simply that students in the flipped course were better writers (Writing style: F: 4.54, T: 4.47, ns) or better at following directions (APA Style: F: 5.13, T: 4, ns), so their improvements in these areas seem targeted and important.

    Student evaluation of instruction scores also told an interesting tale: students in the flipped course were more likely to recommend the course (T: 4.13, F: 4.70, p = .10) even though they found it a greater intellectual challenge (T: 4.40, F: 4.90, p = .06) and rated the difficulty level as less appropriate (i.e., they reported that the course was too hard: T: 4.67, F: 4.10, p = .01). In short, even though students found the flipped course harder, they were more likely to recommend it to others than were students in the traditional course.

    While we’re on the topic of student evaluations, I will point out that my scores suffered a little the first semester I flipped the course (Spring 2014). Although they dropped in some areas (e.g., students found me less available for help and thought my comments were not as useful), the overall evaluation scores stayed fairly similar (4.58 vs. 4.59). Further, this ‘hit’ to my evaluations disappeared after one semester. My interpretation is that I was frantically writing lectures and prepping in-class activities that first semester and didn’t have as much time to spend with the students and on comments. Now that all that work is done, I have more time than ever to spend on my students. Since then, my evaluation scores have stayed the same or risen (average 2014-2015: 4.58; 2015-2016: 4.60; 2016-2017: 4.82). Open-ended student evaluations indicate that students very much valued the flipped experience and used it just as I would hope. For example, one representative comment read:

    Teaching this particular material in a “flipped course” was effective. The nature of the material is generally easy to understand with previous experience in psychology but it was not always as simple to apply it; therefore, practicing application in class was helpful. Overall this fostered the ability to apply the knowledge across useful areas both in this course and other courses.

    In summary, flipping a Research Methods course is hard, but it benefits the students. While this benefit may not necessarily show up on every exam, it shows where it counts: when students use their knowledge of methods to evaluate articles or design their own research. They become better able to think about important scientific controls, to design stronger experiments, and to keep their interpretations within the bounds of their data. In short, flipping improves their training as scientists and consumers of research, which we hope will persist throughout their lives. Though this work is hard (for both you and the students), it pays off.

    I’ll leave you with a few quick words of advice about flipping your own course. First, you don’t need to flip your entire course all at once. Consider flipping one day this semester and see how it goes; next semester, add another. Second, borrow from people who have done this already. Raid listservs and teaching websites, talk to your colleagues and share with them, or email me and I will happily send you my materials (scripts, videos, quizzes, activities, etc.) or give you a pep talk. Third, tell your students they will be in a flipped course and, importantly, why. Give them the data I’ve given you: reassure them that their papers will be stronger, their grades will be better, and they will be happier. They will get on board. Fourth, and perhaps scariest for junior faculty like me, accept that the first semester you flip, your teaching evaluations may take a hit. Know that you’re gambling, yes, but on a good bet: your evaluations will likely rise higher down the road once you’ve sold your students, once they know what they’re getting by enrolling in your course, and once you have mastered the flip.

     


     

    Biographical Sketch

    Ellen Furlong is an Assistant Professor in Psychology and Director of the Comparative Cognition Lab at Illinois Wesleyan University. She received her B.A. in Mathematics from Transylvania University and her Ph.D. in Psychology from The Ohio State University. Before joining the faculty at Illinois Wesleyan University in 2013, she served as a postdoctoral fellow at Yale University. Ellen has taught several courses with "flipped" components, including a survey-level fully online course, a writing-intensive research methods course with flipped lectures, and a team-taught, cross-institution (Illinois Wesleyan and Transylvania Universities) May Term travel course with flipped lectures and Skype class sessions.


  • 16 May 2017 8:57 AM | Anonymous

    Teaching with Affordable Technology to Increase Student Learning


    Judith Pena-Shaff (Ithaca College)

     Amber Gilewski (Tompkins Cortland Community College)

     

    Last year at the APA Convention in Orlando, we participated in a symposium about the use of Open Educational Resources (OER) to increase student learning. Judith had little familiarity with OER, while Amber had been using these resources in her classes for the past two years on the recommendation of her Provost, who was enthusiastic about them. A few days later, the president of Judith’s institution opened his all-faculty meeting by cautioning about the threat that the OER known as Massive Open Online Courses (MOOCs) pose to traditional institutions of higher education. As a current participant in an Introduction to Psychology class offered through Coursera, Judith began to wonder about the educational and learning value of these resources. Will OER increase students’ learning? And if so, how? In this essay, we discuss the value of open educational resources for increasing student learning opportunities, as well as their challenges and promise.

    Open Educational Resources (OER) are “teaching, learning, and research resources that reside in the public domain or have been released under an intellectual property license that permits their free use or re-purposing by others” (Atkins, Brown, & Hammond, 2007, p. 4). Inspired by the Open Source Software (OSS) and Open Access (OA) movements of the mid-1990s (Baraniuk, 2008; Wiley & Gurrell, 2009), OER are a relatively new phenomenon that aims to (1) provide free, or at least affordable, access to knowledge and digital educational and research resources, and (2) reduce the high cost of teaching materials. Philanthropically, it is hoped that OER will help to equalize worldwide access to knowledge and provide everyone with the opportunity to share, re-use, and re-conceptualize knowledge (Atkins et al., 2007; Baraniuk, 2008). OER include, but are not limited to, learning resources such as full online courses, courseware (e.g., syllabi, lectures, quizzes, and homework assignments), learning objects, assessment tools, software (e.g., the IHMC CmapTools program), learning management systems (e.g., Sakai), textbooks, encyclopedias (e.g., Wikipedia), simulations, and other resources or techniques used to support access to knowledge (Hylén, 2006; Downes, 2007). Some well-known open education projects are Connexions, which started in 1999; Wikipedia, launched in 2001; a series of OER projects sponsored by the Hewlett Foundation; MIT OpenCourseWare, which began in 2002; and, more recently, platforms such as Coursera, Udacity, and edX (a joint venture between Harvard and MIT), which offer MOOCs.

    There are many reasons why psychology instructors might decide to adopt OER in their traditional face-to-face or distance learning classes. First, OER allow us to provide students with affordable access to information and knowledge. For example, Amber provided students with the option to use an OER textbook in her general psychology community college classes (Gilewski, 2012). Students could either read the book online or print it for a small fee. She found that, compared with previous semesters, students spent less on their class materials, their grades improved, and fewer students withdrew from the course. However, without a controlled comparison, it is impossible to know whether these improvements were caused by students’ access to affordable reading material.

    Second, OER give instructors the opportunity to customize their course materials, providing students with different types of learning aids that better fit the course objectives and benefit different types of learners. For example, Audley-Piotrowski and Magun-Jackson (2012) used a custom-designed DVD with different types of learning resources to increase student preparation and involvement in a Developmental Psychology course. Their study revealed that different types of learning aids engaged different types of students: non-traditional students and students who defined themselves as independent learners benefited more from the ancillary materials the DVD offered than did more traditional and dependent learners.

    In addition, OER can be combined to help students develop shared knowledge through communities of practice. Draper (2012) explored how knowledge-building activities, such as individually and collaboratively creating concept maps, helped her students develop knowledge convergence. She used Moodle, a free course management system; an asynchronous online communication system for student collaboration; and IHMC CmapTools, a concept-mapping software package that can be downloaded for free at http://cmap.ihmc.us/download/. Integrating these learning resources with instructional activities increased student engagement and participation and fostered the development of complex knowledge structures in both online and blended classroom environments.

    So far, we have discussed the inclusion of OER in somewhat traditional course environments. MOOCs, however, are a different species of OER. Although the first course using the name MOOC was offered in 2008, the term became a buzzword at the beginning of 2012 with the creation of Coursera, an online platform that offers entire college courses for free. This company, started by two Stanford professors, now has contracts with well-known universities that offer free courses, although not yet for credit, through its platform. Judith’s experience taking an Introduction to Psychology class taught by University of Toronto professor Steve Joordens has been very positive so far, although not very challenging. The lectures are 15 minutes or less and are geared to introduce a few basic psychology concepts and theories to an audience that is very diverse in age, occupation, and geographical location. At the end of each lecture there are two ungraded multiple-choice items related to the lecture, links to free online videos (usually from YouTube), and additional readings. The online discussions are lively, and some participants have been promoted to the level of teaching assistants because of the feedback they often give to others. Other participants write lecture notes and share them with the class. Judith, like others, just watches the lectures. To obtain a certificate of completion, a student must complete two multiple-choice exams with a grade of 70 or higher. These tests permit students to review the lecture and retake items, correcting wrong responses (much like B. F. Skinner’s Programmed Instruction technique). In addition, a short, peer-reviewed argument paper can lead to a “certification of completion with distinction.”

    From these examples we can see that OER offer instructors and students certain advantages. Students find them more affordable than commercial sources; thus, if access to textbooks is an issue for our students, OER become very appealing. OER also provide equal access to learning resources worldwide. For example, in the Coursera Introductory Psychology course, all participants have access to the videos and readings, regardless of where they live or their level of education. Many of the resources can be customized by instructors (e.g., by editing the textbook or adding or simplifying information). OER also give instructors the flexibility to combine different learning resources to better serve their students, to support different pedagogical approaches (from memorization to knowledge construction), and to complement the textbook. They can be designed to follow a non-linear format, and instructors can link the course syllabus to readings, videos, and Internet resources to help students gain a better understanding of the course content. All these factors are very appealing.

    For faculty interested in infusing more OER into their own courses, helpful resources include, but are not limited to, the Community College Consortium for Open Educational Resources (http://oerconsortium.org), the Carnegie Mellon Open Learning Initiative (http://oli.cmu.edu), Saylor (www.saylor.org), and OpenStax College (http://openstaxcollege.org). Amber has been involved for the past few years with the Kaleidoscope Project (http://www.project-kaleidoscope.org), a cross-institutional collaboration for using the best existing OER; this grant-funded project is always looking for new adopters.

    However, there are also challenges in adopting OER. For example, increased access does not necessarily mean enhanced or increased learning or motivation. Research shows that fewer than 30% of psychology students read their textbooks before class, and fewer than 70% read them before an exam (Clump, Bauer, & Bradley, 2004). Of the 60,000 individuals who registered for the Coursera-based Introduction to Psychology class that Judith is observing, only 12,000 (20%) were still actively participating at the time we wrote this essay (class announcement, June 4, 2013), and this was before the first assessment took place. We wonder how many participants will actually complete all the course assignments and finish the course.

    Also, research on students’ perceptions of textbooks’ pedagogical aids (Marek, Griggs, & Christopher, 1999) shows that students tend to prefer aids that relate directly to test preparation (such as chapter glossaries, boldface definitions, chapter summaries, and self-tests) rather than aids that might lead to a deeper understanding of the course material. Therefore, it was not surprising that students in Audley-Piotrowski and Magun-Jackson’s (2012) case study focused only on the readings and concepts and not on the other resources, since the test focused mainly on the readings.

    Issues also arise from our lack of familiarity with OER and concerns over their quality. Of course, this is not much different from selecting textbooks in our own area. The main differences are that we can usually get feedback from colleagues about textbooks, whereas OER are not yet well known enough for such feedback, so we have to figure things out on our own; and that we must seek out OER ourselves, while textbooks usually come to our offices via publishers’ representatives.

    A major challenge relates to the sustainability of OER in terms of funding (so far, most OER funding has come from educational institutions’ or foundations’ grants), technical upkeep (e.g., What happens when a problem occurs? Who maintains the sites?), and content (updating the content, vetting the reliability of sources, and so on). Several models have been proposed, particularly for sustaining MOOCs, such as charging participants for certificates of completion, charging employers who might be given access to participants’ grades, and, of course, sponsorships.

    While we have different, affordable learning technologies available today, some of the problems we face as instructors are still the same. For example, Hammer (2012) discussed students’ lack of metacognitive skills and learning strategies. Basically, many of our students do not know how to study or which learning strategies work best for them. We need to teach students these strategies directly, and help them become more conscious and purposeful in their learning. One way to do this could be by creating assignments that make them reflect on how they learn, regardless of the type of learning resources or environment where learning takes place.

    Students also need to be active in their learning. To encourage more active learning in her Introduction to Psychology classes, Amber has been working with the Carnegie Mellon Open Learning Initiative, which provides a more interactive approach to learning the material. Students read material online, watch embedded videos, and engage in “Learn-By-Doing” and “Did-I-Get-This?” activities that provide immediate, targeted feedback before they go on to take graded Checkpoints after each module. She has seen a dramatic increase in her students’ success and interaction with course material, which she will present at a symposium at the APA’s 2013 Convention in Hawaii.

    In conclusion, OER provide affordable access to learning resources, and integrating OER with active learning strategies may help foster complex knowledge structures. Our role is to guide our students so that they use these resources and take full advantage of them.

    References

    Atkins, D. E., Brown, J. S., & Hammond, A. L. (2007). A review of the Open Educational Resources (OER) movement: Achievements, challenges, and opportunities (Report to the William and Flora Hewlett Foundation). Retrieved June 2013 from http://www.hewlett.org/uploads/files/ReviewoftheOERMovement.pdf

    Audley-Piotrowski, S. R., & Magun-Jackson, S. (2012, August). Textbook alternatives and student learning in a lifespan development course. In A. M. Gilewski & D. C. Draper (Chairs), Teaching with affordable technology to increase student learning: What works. Symposium presented at the annual convention of the American Psychological Association, Orlando, FL.

    Baraniuk, R. G. (2008). Challenges and opportunities for the open education movement: A Connexions case study. In T. Iiyoshi & M. V. Kumar (Eds.), The Collective Advancement of Education through Open Technology, Open Content, and Open Knowledge (pp. 229-246). Cambridge, MA: MIT Press.

    Clump, M. A., Bauer, H., & Bradley, C. (2004). The extent to which psychology students read textbooks: A multiple class analysis of reading across the psychology curriculum. Journal of Instructional Psychology, 31, 227-233.

    Downes, S. (2007). Models for sustainable open educational resources. Interdisciplinary Journal of Knowledge and Learning Objects, 3, 29-44. Retrieved June 2013 from http://www.ijklo.org/

    Draper, D. C. (2012, August). Instructional strategies to promote knowledge convergence in online communities of practice. In A. M. Gilewski & D. C. Draper (Chairs), Teaching with affordable technology to increase student learning: What works. Symposium presented at the annual convention of the American Psychological Association, Orlando, FL.

    Gilewski, A. M. (2012, August). Using open educational resources to improve student success in introduction to psychology courses. In A. M. Gilewski & D. C. Draper (Chairs), Teaching with affordable technology to increase student learning: What works. Symposium presented at the annual convention of the American Psychological Association, Orlando, FL.

    Hammer, E. Y. (2012, August). Meta-studying: Teaching metacognitive strategies to enhance student success. Paper presented at the annual convention of the American Psychological Association, Orlando, FL.

    Hylén, J. (2006, September). Open educational resources: Opportunities and challenges. Proceedings of Open Education 2006: Community, culture and context (pp. 49-63). Utah State University. Retrieved June 10, 2013 from http://library.oum.edu.my/oumlib/sites/default/files/file_attachments/odl-resources/386010/oer-opportunities.pdf

    Marek, P., Griggs, R. A., & Christopher, A. N. (1999). Pedagogical aids in textbooks: Do college students' perceptions justify their prevalence? Teaching of Psychology, 26(1), 11-19.

    Wiley, D., & Gurrell, S. (2009). A decade of development. Open Learning, 24(1), 11-21. doi:10.1080/02680510802627746


    Biographical Sketches

     

    Judith Pena-Shaff is an associate professor and chair of the psychology department at Ithaca College. She earned her Ph.D. in educational psychology from Cornell University in 2001. Dr. Pena-Shaff’s research interest is in instructional technology. Specifically, she is interested in the knowledge construction processes students use in computer-mediated learning environments with the purpose of creating a taxonomy to help instructors assess student learning.  In addition, Dr. Pena-Shaff is highly engaged in her community, often conducting evaluations of educational programs run by schools and local organizations.

     

    Amber Gilewski is an assistant professor of psychology at Tompkins Cortland Community College in upstate New York. She is a Psychology Fellow on the Kaleidoscope Project, a Next Generation Learning Challenges grant-funded collaboration of U.S. colleges devoted to improving student success and retention in general education courses through the use of OER. She earned her master’s degree in Clinical-Counseling Psychology from La Salle University in 2002 and has been teaching at community colleges since 2004.

     
