
Society for the Teaching of Psychology
Division 2 of the American Psychological Association

GSTA Blog

Welcome to the GSTA blog! 

In an effort to keep the Graduate Student Teaching Association (GSTA) blog current, we regularly welcome submissions from graduate students as well as full-time faculty. We have recently decided to expand and diversify the blog content, with submissions ranging from new research in the area of the Scholarship of Teaching and Learning (SoTL) and public-interest topics related to teaching and psychology to occasional book reviews, while continuing our traditional focus on teaching tips. Blog posts are typically short, about 500-1000 words, not including references. As this is an online medium, in-text hyperlinks, graphics, and even links to videos are strongly encouraged!

If you are interested in submitting a post, please email us at gsta@teachpsych.org. We are especially seeking submissions in the following five topic areas:

  • Highlights of your current SoTL research
  • Issues related to teaching and psychology in the public interest
  • Reviews of recent books related to teaching and psychology
  • Teaching tips and best practices for today's classroom
  • Advice for successfully navigating research and teaching demands of graduate school

We would especially like activities that align with the APA Guidelines for the Undergraduate Psychology Major (version 2.0)!

This blog is intended to be a forum for graduate students and educators to share ideas and express their opinions about tried-and-true modern teaching practices and other currently relevant topics regarding graduate students’ teaching.

If you would like any questions addressed, you can send them to gsta@teachpsych.org and we will post them as a comment on your behalf.

Thanks for checking us out,

The GSTA Blog Editorial Team:

Hallie Jordan, Sarah Frantz, Maya Rose, and Charles Raffaele


Follow us on Twitter @gradsteachpsych or join our Facebook Group.


  • 01 May 2019 10:00 AM | Anonymous member (Administrator)
    By Jennifer A. McCabe, Ph.D., Goucher College

Last year, as part of my portfolio for promotion to Full Professor, I wrote a teaching philosophy statement. Because such a statement is required for nearly every teaching position in academia, this was not my first draft. In fact, I had written a solid statement for my tenure case just six years earlier. At first I asked myself, did anything really change in that time? I soon realized that, in a way I could not articulate at earlier points in my career, I could now identify six core principles that guide every teaching-related decision I make. I hope that by sharing these principles I can encourage others to identify and develop their own set of guiding principles as higher education practitioners.

     

    1. Strategies for Durable Learning

    My scholarship focuses on learning strategies that benefit long-term memory, and I have become more intentional about my responsibility to integrate these evidence-supported memory principles into the structure and delivery of my courses. Early in my career I was worried that some of these choices would be unpopular with students. It took more time and confidence in the classroom to commit fully.

Now I do so transparently and unapologetically. This includes the use of frequent, effortful, low-stakes, cumulative, spaced (distributed) retrieval practice (a.k.a. quizzes), followed by discussion to encourage elaboration and connections, in all of my courses. It’s amazing how readily students get on board with these strategies even though they require more time and effort than traditional class practices.


2. Interest-First Approach

Memory researchers also know about the self-reference effect: we remember information more easily when it relates to ourselves. I enact this principle by adjusting the entry point to many topics and assignments, allowing student interest to be the guiding force. If they begin work on a complex topic using a question or a problem sparked by natural curiosity, I trust that deep and durable learning of the content will follow. I give students much more agency in their assignments and approach to learning than I did in my early years of teaching.

I also prioritize a narrative approach to course material: learning psychological science through stories. I assign popular press books and articles whenever possible to promote real-world applications of concepts, ensuring a connection between my students’ coursework and their lives. To further engage student interest, I make sure to include (and prioritize) interactive elements such as demonstrations and discussions during each class period.


3. Integration and Connection

    I intentionally structure my classes to encourage students to make meaningful connections, drawing on the principle of elaboration. Every day in class, I emphasize how current material links with past topics, and prompt them to continue this work in assignments outside of class.

For example, I ask students to connect various course topics at the end of the semester as part of a final exam take-home essay. They may identify connections between various aspects of the class and course themes, draw on course material in writing a narrative about each stage of cognition needed to play a game of their choosing, or write a story about how each component of memory would contribute to a day in a person’s life. I have also enjoyed the challenge of embracing college-wide Theme Semesters in my courses, including topics of Mindfulness and Storytelling.


4. Authentic and Shared Learning

    I strive to increase the types of activities and assignments that are situated in authentic real-world issues, and that are shared beyond a submission to the instructor. I have found that students are far more engaged—and submit higher-quality products—under these circumstances.

    For example, sharing can happen among classmates in the form of each student reading a different article on a certain topic and then coming to class ready to teach (and learn from) peers in small group discussions. Or they may present mini-TED Talks, consisting of an engaging 5-minute oral presentation on a course-related topic of their choice. There is also great value in having students share their work in other settings, such as presenting at the college’s student symposium or creating resources on course topics for the public. Each semester my students in a seminar on Cognition, Teaching, and Learning complete a “Translational Project for a General Audience.” Formats include podcasts, infographics, videos, games, and one time even a children’s book. Several students who wrote posts in the style of The Learning Scientists blog were subsequently published on this site. Talk about sharing the authentic work of learning!


5. Metacognitive Self-Reflection

My research program also focuses on metacognition, specifically the extent to which students know about and use effective strategies. This scholarly interest permeates all my courses, with the goal that students learn about, and reflect on, their own learning. The main idea is to embrace desirable difficulties: learning strategies that are initially slower and harder, but that produce more durable memories. Students experience in-class demonstrations showing the memory benefits of these strategies (e.g., spacing, elaboration, testing), followed by activities and assignments that encourage an examination of their own learning beliefs and misconceptions. Then they brainstorm the best ways to communicate this information to peers, and plan for how they will utilize these strategies.

Metacognitive development is also a natural side effect of frequent, low-stakes, cumulative quizzing. Testing is not only an effective learning strategy; it also provides metacognitive feedback about the state of one’s own knowledge. I encourage students to use testing for both purposes, knowing that the best way to avoid the fluency illusion (believing you have learned something because it seems familiar or easy) is to take a test that requires effortful retrieval from memory. Further, in classes with major tests, I administer a post-exam metacognitive debrief activity aimed at helping students understand whether they were overconfident, how they studied, and why they answered items incorrectly, and at developing a preparation plan for the next exam.


6. Transparent Course Design with Intentional Scaffolding

I have improved my communication with students regarding the goals and objectives in my courses, and the purpose behind how I structure the class and assess student learning. In other words, I think about course design in a more integrated way, connecting learning objectives to teaching strategies and assessments. I work to enhance the clarity and detail of my syllabi and assignment instructions, making them as purpose-driven and explicit as possible. To close the loop on this process, one of my favorite activities is to ask students on the last day of class to reflect on the course objectives, evaluate their progress, and identify components of the course that helped them improve.

    With regard to scaffolding, I remind myself that each of us is in a developing state of expertise (to borrow growth-mindset language). Some students in my class will need a lot of support and opportunities to get to the level of expertise I expect, and others will need less. The best remedy for this, in my opinion, is to offer early and frequent opportunities for formative feedback. Complex assignments can be broken down into scaffolded components, with feedback at every step. This approach leads to both higher-quality end products and a more positive learning experience for students. It is also a step toward a more inclusive classroom that allows for students of diverse backgrounds and abilities to grow.


    A central theme that unites all of the above is something I express to my students early and often: I care about your learning. Learning, by definition, is about change. And I am committed to nurturing that growth in my students, as well as in myself as an ever-evolving practitioner. In this way, I can maintain high standards in my courses while helping students feel informed and supported in their efforts to achieve success. In turn, they can (and should) have high standards for me, including expectations of preparation, availability, clear and consistent communication, prompt feedback, authentic and enthusiastic engagement, and—maybe most importantly—ongoing efforts to improve. After all, teaching is about change too.



    Jennifer A. McCabe is a Professor in the Center for Psychology at Goucher College in Baltimore, Maryland, where she has taught since 2008. She just completed her 15th year of full-time teaching, having also taught at Marietta College in Ohio. She earned her B.A. in Psychology from Western Maryland College (now McDaniel College), and her M.A. and Ph.D. in Cognitive Psychology from the University of North Carolina at Chapel Hill. She has taught courses including Introduction to Psychology, Cognitive Psychology, Human Learning and Memory, Statistics, Research Methods, and Seminar in Cognition, Teaching, and Learning. She has won teaching excellence awards from Marietta College and Goucher College. Her research interests include memory strategies, metacognition, and the scholarship of teaching and learning. She has been published in Memory and Cognition, Teaching of Psychology, Scholarship of Teaching and Learning in Psychology, Instructional Science, Frontiers in Psychology, Journal of Applied Research in Memory and Cognition, and Psychological Science in the Public Interest. Supported by Instructional Resource Awards from the Society for the Teaching of Psychology (STP), she has also published two online resources for psychology educators on the topics of mnemonics and memory-strategy demonstrations. She has served as a Consulting Editor for Teaching of Psychology, and is currently a Consultant-Collaborator for the Improve with Metacognition project.

  • 30 Mar 2019 12:00 PM | Anonymous member (Administrator)

    By Teresa Ober, Kalina Gjicali, Eduardo Vianna, and Patricia Brooks

To be informed and responsible citizens, students should be able to make sense of data—and in this day and age, we live in a world with an abundance of it! As such, developing students’ quantitative literacy (QL) has become one of the overarching goals of undergraduate education (Sons, 1994). QL is considered “an aggregate of skills, knowledge, beliefs, dispositions, habits of mind, communication, capabilities, and problem solving skills that people need in order to engage effectively in quantitative situations arising in life and work” (as cited in Steen, 2001, p. 7). QL often involves applying mathematical thinking skills to real-world data with the purpose of drawing informed conclusions about issues of personal and/or societal concern (Elrod, 2014). Students who possess strong QL skills need not have strong computational backgrounds, but should be able to identify and interpret quantitative relations (e.g., in visual graphs), organize quantitative information (e.g., in spreadsheets), and communicate effectively about the relevance of quantitative data in everyday life (Blair & Getz, 2011).

It has been argued that QL is most effectively taught when embedded across the curriculum, given that a critical component of its application involves identifying quantitative relations in varied real-world contexts (Hughes-Hallett, 2001). Embedding QL in college classes across disciplines helps students develop skills they will need to engage effectively in work and life beyond college. In this regard, QL instruction that uses psychology content can support the development and application of practical quantitative reasoning skills outside of the classroom. Hence, psychology departments are increasingly recognizing the need to teach QL as a core cross-curricular requirement (Lutsky, 2008). This means using data and mathematical thinking in all psychology courses, not just in statistics and research methods courses.

In this post, we offer some perspectives on how to promote QL across the psychology curriculum. Some of the tools and ideas for integrating QL activities into psychology courses were presented during a recent GSTA-sponsored workshop held at the Graduate Center of the City University of New York on March 6, 2019. During the workshop, Kalina Gjicali, PhD candidate in Educational Psychology, presented best practices for using visual graphs to help students develop quantitative concepts and skills in interpreting data. Dr. Eduardo Vianna of LaGuardia Community College shared resources developed through the Numeracy Infusion Course for Higher Education (NICHE) / Numeracy Infusion for College Educators (NICE), a consortium of educators who share a common mission of promoting quantitative reasoning across various college-level courses. He also shared information about a new CUNY-wide project to improve college students’ QL skills and his own experiences in teaching QL in an introductory (general) psychology course. Teresa Ober, PhD candidate in Educational Psychology, described sources of secondary data and free open-source statistical programs that can be used to develop students’ data analysis skills.

    Engaged Pedagogy and Quantitative Literacy

    Educational research suggests that QL is best taught through student-centered, progressive pedagogies that promote active and inquiry-based learning (Lowney, 2008). In particular, studies have demonstrated the following strategies to be especially effective for teaching QL (Carver et al., 2016): (a) active learning, in which students are engaged learners rather than passive recipients of information; (b) inquiry-based learning, which emphasizes conceptual thinking rather than rote skills and memorization of facts as well as the use of problems and examples that are relevant to real-life situations; and (c) the use of technology to analyze actual data in real-life situations. According to constructivist learning perspectives (Cobb, 1994; Fosnot, 1996; Keeling, 2004), students learn most effectively when they explore new concepts and ideas while working out solutions to meaningful problems and considering the implications of research findings. In other words, students should figure out for themselves how new information, concepts, and ideas relate to their existing systems of knowledge and beliefs, and have opportunities to revise and expand their views in response to new knowledge.

    Strategies for Teaching Quantitative Literacy

    Interpreting Graphs

One way to build QL skills is to focus on data and visual representations of data. Introducing effective graphic displays of data into college lectures can shift the focus away from text-heavy slides that summarize information as if it were established fact (as opposed to research findings that may be in need of replication) and toward student-centered learning and knowledge construction. When presented with appropriate visual representations of social science data, students can be expected to (Beaudrie et al., 2013):

    • Articulate their ideas

    • Express themselves with precision  

    • Ground their observations in evidence  

    • Test claims and hypotheses  

    • Participate in civil discourse

    • Represent what they are ill-equipped to see

    • Recognize and weigh uncertainty

    • Construct a context to attract interest and to inform critical thinking

You can build QL with your college students with the free online feature “What’s Going On in This Graph?” created by The New York Times Learning Network in partnership with the American Statistical Association. Updated on a weekly basis, this resource features graphs of different types and in different contexts, on varied topics from labor and automation to teen smoking habits, that can be used to ask students the following questions:

    1. What do you notice?

    2. What do you wonder?

    3. What’s going on in this graph?

    4. What are the implications for ________ (e.g., understanding health risks of teenagers)?


    All releases are archived, so instructors can use previous graphs anytime. Visit this introductory post and this article about how teachers use this powerful activity.
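Instructors who want graphs tailored to their own course content can also generate discussion-ready figures themselves. Below is a minimal sketch in base R (the free tool recommended later in this post); the numbers are invented for illustration, loosely echoing the teen-smoking example above, and would be replaced with real data in practice.

```r
# Hypothetical (invented) numbers for a discussion-ready trend graph,
# loosely echoing the teen-smoking example; swap in real data in practice.
year    <- c(2000, 2005, 2010, 2015)
percent <- c(15.2, 10.9, 7.3, 4.2)

plot(year, percent, type = "b", pch = 16,
     xlab = "Year", ylab = "Percent of teens reporting daily smoking",
     main = "What do you notice? What do you wonder?")
```

Projecting a graph like this with the discussion questions visible gives students an entry point before any formal statistics are introduced.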

Analyzing Data in Class

Hands-on opportunities to work with actual data can open many doors for students, especially those who have had limited experience with data analysis and feel anxious about it. The concept of using secondary data to teach students about psychological science is not a novel one (see Sobel, 1981), but it has received reinvigorated interest due to the vast amount of open-access data currently available. In thinking through an in-class data demo, it might be useful for instructors to consider these questions:

    1. How is this data source meaningful to students within the course?

    2. What tools are available to students to help them analyze the data?

    3. What strategies/resources can be made available to students to help them interpret the data?

    4. How can we apply the findings from the data to everyday life?

    The Data

Selecting a dataset for a data demo project is a crucial first step, and will depend on course content as well as the skills and reasoning abilities that you want your students to develop. Resources abound, with data sources including Kaggle, UNData, OECD, IES, and several amazing OSF repositories (e.g., EAMMi2), as well as data provided by local and regional government agencies. In choosing the dataset, consider what topics might be of interest to students, what problems/questions they can use the data to address, and whether there is sufficient documentation to support student learning. For example, for a course covering language development, the CHILDES database (part of TalkBank) is an invaluable resource. This database contains transcripts of parent-child conversations in a variety of languages, often with accompanying audio or video, and includes datasets for children growing up in multilingual environments as well as datasets with various clinical populations (e.g., developmental language disorder, autism spectrum disorder, hearing loss). This resource includes CLAN software for analyzing conversational interactions and manuals to help you get started.

    Another option for integrating data collection into instruction is to ask students to complete a brief survey during class time. GoogleForms is a very convenient way to collect and present such data quickly, but other survey programs can work just as well. Asking students to complete a short-form survey may be an effective way to introduce them to a dataset by helping them become familiar with the actual scales used in the original study.
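As a rough sketch of what the follow-up to such a survey might look like: Google Forms responses can be downloaded as a spreadsheet, and a few lines of R (the free tool recommended below) are enough to summarize a single item with the class. The file name and the Likert item name here are hypothetical placeholders.

```r
# Read a class survey exported from Google Forms; "survey.csv" and the
# 1-5 Likert item "stress" are hypothetical placeholders.
responses <- read.csv("survey.csv")

mean(responses$stress)     # class average
sd(responses$stress)       # spread of responses
table(responses$stress)    # frequency of each response option

# A quick bar chart of the response distribution for class discussion
barplot(table(responses$stress),
        xlab = "Response (1 = never, 5 = very often)",
        ylab = "Number of students")
```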

    The Tools

Considering which statistical software programs are available and accessible to students is critical. While many undergraduate psychology courses use proprietary programs like SPSS or STATA to teach statistics, whether students own a license or actually have access to such programs off-campus is often questionable. Thus, it might be more advantageous for students in the long term to consider free and open-source programs, such as JASP or R. Sometimes sophisticated statistical programs may not even be necessary for teaching QL. Rather, in many cases, a spreadsheet application such as GoogleSheets or Excel might be sufficient for teaching basic statistics (DiMaria-Ghalili & Ostrow, 2009). Many students have access to Excel on their personal computers, but benefit from instruction on how to use it to make pivot tables or charts.
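For instructors weighing the R option, one practical advantage is that base R ships with small built-in datasets, so a first demo requires no files, licenses, or internet access. A minimal sketch of such a demo:

```r
# An in-class demo using a dataset bundled with base R, so students
# need nothing beyond a standard R installation.
data(ToothGrowth)  # tooth growth in guinea pigs by vitamin C dose

# Group means and standard deviations by dose
aggregate(len ~ dose, data = ToothGrowth, FUN = mean)
aggregate(len ~ dose, data = ToothGrowth, FUN = sd)

# A boxplot students can interpret before any formal test is introduced
boxplot(len ~ dose, data = ToothGrowth,
        xlab = "Vitamin C dose (mg/day)", ylab = "Tooth length")
```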

    The Interpretation

    In preparing a lesson around the use of secondary data, it is important to consider students’ prior knowledge, skills, and interests to ensure that the instruction is developmentally appropriate. You might start by distinguishing research questions that relate to frequencies (How often?), associations (Are X and Y related?), or causal relationships (Does X cause Y?) as this can lead to a fruitful discussion of how to fit one’s analytic approach to the research question at hand. Students may need instruction to decide what sorts of graphs are appropriate for different types of data (e.g., line graphs, bar graphs, scatterplots). This can lead to further discussion of how to present findings in APA format, determine statistical significance, and interpret p-values.  Along the way, you might consider outliers, skewed distributions, and various threats to the validity of the research, such as the representativeness of the sample. As you guide your class in interpreting research findings, allow for spontaneity by offering students opportunities to test their own hypotheses and develop ideas for future research.
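To make the question-to-analysis mapping concrete, here is a minimal sketch of an association-type question ("Are X and Y related?") in R, again using a built-in dataset as a stand-in for whatever secondary data your class has adopted. It pairs the appropriate graph type with a test whose p-value the class can interpret together.

```r
# Association-type question illustrated with R's built-in mtcars data
# (car weight vs. fuel economy), standing in for course-relevant data.
data(mtcars)

# Scatterplot: an appropriate graph for two continuous variables
plot(mtcars$wt, mtcars$mpg,
     xlab = "Weight (1,000 lbs)", ylab = "Miles per gallon")

# Correlation test: an effect size (r) plus a p-value to interpret
cor.test(mtcars$wt, mtcars$mpg)
```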

    The Significance, … or rather, Relevance

    Finally, it might be helpful to consider what possible applications can come out of the findings presented in class. Even if the findings might seem intuitive, walking students through the process of analyzing and interpreting the findings should ultimately lead them to feel empowered in working with data. When findings are non-significant and hypotheses are not supported, students have opportunities to learn that this sort of “productive failure” is part of the research process. For this reason, using secondary data as opposed to artificially generated data can lead to a more practical learning experience, particularly when resources for conducting secondary data analysis are plentiful.

    Conclusions

    Strengthening QL has been recognized as an imperative of undergraduate education, with students best served when instructors use an “across-the-curriculum” approach to ensure they have sufficient opportunities to develop QL skills. Like any well-implemented curriculum, teaching QL necessitates planning. When datasets are analyzed or in-class demonstrations are conducted, instructors should take extra precautions to ensure that the lessons achieve their objectives. Lack of clarity during a demonstration, improper analyses, or technical problems can greatly interfere with learning opportunities when conducting in-class demonstrations. Nevertheless, we hope the resources described above may at the very least offer some initial inspiration for incorporating QL instruction into all of your courses.


    References

Beaudrie, B., Ernst, D., & Boschmans, B. (2013). First semester experiences in implementing a mathematics emporium model. In R. McBride & M. Searson (Eds.), Proceedings of SITE 2013-Society for Information Technology & Teacher Education International Conference (pp. 223-228). New Orleans, LA: Association for the Advancement of Computing in Education (AACE). Retrieved March 21, 2019 from https://www.learntechlib.org/primary/p/48098/.

Carver, R., Everson, M., Gabrosek, J., Horton, N., Lock, R., Mocko, M., … Wood, B. (2016). Guidelines for assessment and instruction in statistics education: College report. Alexandria, VA: American Statistical Association.

    Cobb, P. (1994). Where is the mind? Constructivist and sociocultural perspectives on mathematical development. Educational Researcher, 23(7), 13-20.

    DiMaria-Ghalili, R. A., & Ostrow, C. L. (2009). Using Microsoft Excel® to teach statistics in a graduate advanced practice nursing program. Journal of Nursing Education, 48(2), 106-110.

    Elrod, S. (2014). Quantitative reasoning: The next "across the curriculum" movement. Association of American Colleges & Universities Peer Review, 16(3), 4. Retrieved online: https://www.aacu.org/peerreview/2014/summer/elrod.

Fosnot, C. T. (Ed.). (1996). Constructivism: Theory, perspectives, and practice. New York, NY: Teachers College Press.

    Hughes-Hallett, D. (2001). Achieving numeracy: The challenge of implementation. Mathematics and democracy: The case for quantitative literacy, 93-98.

    Keeling, R. (Ed.). (2004). Learning reconsidered: A campus-wide focus on the student experience. Washington, DC: American College Personnel Association and National Association of School Personnel Administrators.

Lowney, K. S. (Ed.). (2008). Teaching social problems from a constructivist perspective. New York, NY: W.W. Norton.

    Lutsky, N. (2008). Arguing with numbers: Teaching quantitative reasoning through argument and writing. Calculation vs. context: Quantitative literacy and its implications for teacher education, 59-74.

Sons, L. R. (1994). Quantitative reasoning for college graduates: A complement to the standards. Mathematical Association of America. Retrieved online: https://www.maa.org/programs/faculty-and-departments/curriculum-department-guidelines-recommendations/quantitative-literacy/quantitative-reasoning-college-graduates.

Steen, L. A. (Ed.). (2001). Mathematics and democracy: The case for quantitative literacy. Report prepared by the National Council on Education and the Disciplines. Retrieved online: https://www.maa.org/sites/default/files/pdf/QL/MathAndDemocracy.pdf.

    Author Bios

    Patricia J. Brooks is Professor of Psychology at the College of Staten Island and the Graduate Center, CUNY and GSTA Faculty Advisor.  Brooks was recipient of the 2016 President’s Dolphin Award for Outstanding Teaching at the College of Staten Island, CUNY.  Her research interests are in two broad areas: (1) individual differences in language learning, (2) development of effective pedagogy to support diverse learners.​

    Kalina Gjicali is a doctoral candidate in Educational Psychology at The Graduate Center, CUNY and a Quantitative Reasoning Fellow for the University at the Quantitative Research & Consulting Center (QRCC).

    Teresa Ober is a doctoral candidate in Educational Psychology at the Graduate Center, CUNY. Teresa is interested in the role of executive functions in language and literacy. Her research has focused on the development of cognition and language skills, as well as how technologies, including digital games, can be used to improve learning.

Eduardo Vianna, Professor of Psychology, has taught at LaGuardia since 2005. He earned a Ph.D. in developmental psychology from the Graduate Center, CUNY after completing his medical studies in Brazil. Building on recent advances in Vygotskian theory, especially the Transformative Activist Stance approach, his work focuses on research with transformative agendas. His recent work includes applying critical-theoretical pedagogy to build the peer activist learning community (PALC), which was featured in the New York Times. In 2010 he received the Early Career Award in Cultural-Historical Research from the American Educational Research Association, and he is currently chief editor of Outlines Critical Practice Studies and Co-PI on the NSF grant “Building Capacity: A Faculty Development Program to Increase Students’ Quantitative Reasoning Skills.”
  • 13 Mar 2019 1:03 PM | Anonymous member (Administrator)

    By David Kreiner, Ph.D., University of Central Missouri

    On a cosmological level, time may be infinite, but we constantly run out of it in our daily lives. I have seen students and faculty struggle with it. I have certainly experienced it myself:

    • When my class time is up but I haven’t finished everything I wanted to.
    • When I’m planning a 16-week class and there’s just not enough time.
    • When I thought I could finish a draft of a paper in one afternoon but I didn’t even get close.

    Similarly, your students may:

    • have trouble meeting course deadlines;
    • fail to use effective study methods because they don’t have enough time;
    • be unrealistic about allocating time for the different components of a major project;
    • or find that they are about to graduate before they had a chance to accomplish all their goals.

    I propose that we look to the rich literature on the psychology of time in the same way that we have looked to the science of learning for more effective studying and teaching methods. I will describe one example to illustrate what I mean, but there is much more out there. If only we had time to explore it all!

    Kahneman and Tversky (1979) defined the planning fallacy as a tendency to underestimate how much time we need to complete larger tasks and overestimate the time we need for smaller tasks. We tend to be confident in these estimates – confident, but wrong (Buehler, Griffin, & Peetz, 2010). Think about how this affects your plans for a large project like your thesis or dissertation. Also think about how your students might struggle with finishing a project on time, or why they might run out of time and submit work that is less than their best.

Fortunately, there is research on how to estimate more accurately how much time it will take to do something. One strategy is to avoid anchoring effects, in this case anchoring on the present when making a time estimate. LeBoeuf and Shafir (2009) found that people could make better estimates if they identified a future date at which they thought they would finish, instead of estimating how many days from now they would finish.

    Another way to make more accurate time forecasts is to consider how much time similar tasks took in the past (König, Wirz, Thomas, & Weidmann, 2015). It also helps to think about possible obstacles that can cause delays (Buehler et al., 2010). When your student is estimating that she can knock out that paper in three hours, she may not be considering possible interruptions, technology issues, or finding out that the key article she needs is not available full-text.

    Imagining from the perspective of an observer can also improve accuracy (Buehler et al., 2010). What would your friend say about your plan to complete the literature review of your dissertation in one week?

    We might ask whether making better time estimates is that important. It doesn’t speed anything up or save time, right? But if our estimates are inaccurate, we will make mistakes in budgeting our time. Other things may fall through the cracks – sleep, for example – which could affect our well-being and success. One way to improve our relationship with time is to get a better handle on how much time we need. The research suggests that we can get better at it.

At the upcoming APS-STP Teaching Institute, I will share a few other examples of how we can make use of the literature on the psychology of time. I hope to see you there … if you can find the time!

    References

Buehler, R., Griffin, D., & Peetz, J. (2010). The planning fallacy: Cognitive, motivational, and social origins. In M. P. Zanna & J. M. Olson (Eds.), Advances in experimental social psychology (Vol. 43, pp. 1-62). New York, NY: Academic Press. doi: 10.1016/S0065-2601(10)43001-4

    Kahneman, D., & Tversky, A. (1979). Intuitive prediction: Biases and corrective procedures. TIMS Studies in Management Science, 12, 313-327.

    König, C.J., Wirz, A., Thomas, K.E., & Weidmann, R.Z. (2015). The effects of previous misestimation of task duration on estimating future task duration. Current Psychology, 34(1), 1-13. doi: 10.1007/s12144-014-9236-3

LeBoeuf, R. A., & Shafir, E. (2009). Anchoring on the “here” and “now” in time and distance judgments. Journal of Experimental Psychology: Learning, Memory, and Cognition. doi: 10.1037/a0013665

    David S. Kreiner is Professor and Chair of the School of Nutrition, Kinesiology, and Psychological Science at the University of Central Missouri, where he has taught since 1990. He earned his B.A. in Psychology and Ph.D. in Human Experimental Psychology from the University of Texas at Austin. He has taught courses including General Psychology, Orientation to Psychology, Research Design & Analysis I & II, History of Psychology, Advanced Statistics, Cognitive Psychology, and Sensation & Perception.  His research interests include language processing, memory, and the teaching of psychology.  He often collaborates with students on research projects and has coauthored publications and conference presentations with undergraduate and graduate students. 


  • 12 Feb 2019 3:32 PM | Anonymous member (Administrator)

    By Ashley Waggoner Denton, Ph.D., University of Toronto

    As an undergraduate student, I learned that being primed with the stereotype of professor could make me act smarter, that I might deplete my self-control if I refused the tempting cookies presented to me at a meeting, and that if I conducted a study whose findings were unexpected, I could just rewrite my introduction and tell a new story. Thankfully, I also learned how to learn, which prevented me from becoming trapped in a knowledge time warp. Psychological “facts” have an estimated half-life of seven years (Arbesman, 2013). Seven! This means that by the time you have completed graduate school, half of the psychological findings you learned as an undergraduate will have been updated, revised, or deemed outright wrong. Such is the nature of scientific progress. However, this helps drive home the point that one of our most important goals as teachers is to help our students develop into lifelong learners who will be able to continue learning (effectively and across a range of topics) long after they have left our classrooms. 

    The term learning how to learn comes from Fink’s taxonomy of significant learning (2013), which includes six major categories: foundational knowledge, application, integration, human dimension, caring, and learning how to learn. If you are not familiar with Fink’s model, I highly recommend checking it out (see recommended reading below). Learning how to learn takes a number of different forms, and in each of the courses I teach, at least one of these forms is emphasized. The first form is learning how to be a better student, the second form is learning how to construct new knowledge in a discipline, and the third form involves helping students become “self-directing learners” (Fink, 2013, p. 59), the key to which involves the ability to critically reflect on one’s own learning. Below I provide some examples of how I encourage these various forms of learning how to learn in different courses that I teach.

    Learning How to Be a Better Student

Without a doubt, this form of learning how to learn gets emphasized the most in my Introductory Psychology class. In order to encourage my students to adopt better learning strategies, I don’t just teach them what psychologists have learned about effective study strategies (see links to helpful resources from the Learning Scientists below). Instead, I first let the students tell me (via a survey or in-class response system) how they typically study, and then I frame the lesson around their responses. Specifically, I address the limitations of their common habits (e.g., cramming) and study strategies (e.g., re-reading), explain why these strategies seem appealing despite their limitations, and then provide the students with more effective replacement strategies (e.g., retrieval practice), including an overview of the research that has been done on each strategy and specific tips for how to implement these strategies in Intro Psych. Rather than presenting this information in a preachy way (“everything you are doing is wrong and I know better!”), I want the students to recognize that they are not alone in using these common strategies, and that I completely understand why they use them, but that I have good reason to believe they can learn even more effectively by adopting some new strategies.

    In a similar vein, I also present students with research on the effects of technology use on learning (both in the classroom and when they are studying on their own). Again, I ultimately leave it up to the students (as self-directing learners!) to make their own decisions, but I arm them with the information that will allow them to make informed decisions about whether they take notes with a laptop or on paper, where they should leave their phone during class or a study session, and so on. A detailed slide-deck that can be used for covering this material in your own classes is available via a link below.  

    Learning How to Construct New Knowledge

    We all know that students should practice writing and get hands-on experience doing research as much as possible. Encouraging this form of learning how to learn is standard in any research methods or laboratory class. But it’s worth spending a moment to reflect on the type of inquiry and knowledge construction students are engaging in across all of your courses. Are they being pushed enough? Are they being asked to truly write and think “like a [social/cognitive/clinical etc.] psychologist,” or are they simply getting practice using some new terms and theories? As an example, students in my Intro to Social Psychology course used to complete an assignment where they analyzed an event from a social psychological perspective. It was a perfectly good assignment, but what were the students actually learning? Application is important, don't get me wrong (it has its own category in Fink’s model), but I have since replaced this assignment with an observational study project where the students must develop a hypothesis; design a study; collect, analyze, and interpret their data; and write everything up in a final APA-style report. This new assignment obviously requires a lot more scaffolding and resources, but the students walk away from the course not just being able to apply the knowledge they’ve learned, but with the ability to potentially contribute to that knowledge base. Additionally, they are in a better position to recognize the limitations of drawing conclusions from single studies and the importance of replication and reproducibility.

    Learning How to Become Self-Directing Learners

    Most of what our students do, they do because we tell them to. For example, students in my Social Psychology Laboratory class complete a research proposal because that is what they are told to do. They develop their own research question and hypothesis and design their own experiment, which all seems perfectly “self-directed.” However, the task falls short of its goal if the students fail to engage in a critical reflection of their learning throughout this process. The way that I encourage this (in this class and others) is through the use of reflective learning journals. Reflection changes everything. When students are encouraged to reflect on their learning it can improve their self-monitoring and goal-setting capabilities as well as lead to changes in study habits and other skills. It encourages students to focus more on the how and why of their learning, rather than simply on what they are learning. Students who are able to critically reflect on their learning are much more likely to develop into self-directing learners, so I do whatever I can to give my students practice with reflection. More information on how I have implemented reflective learning journals into my statistics course can be found in the Waggoner Denton (2018) article listed below.

Self-directing learners are able to recognize gaps in their understanding and formulate plans for filling those gaps. As a developing teacher, you are likely to start noticing all sorts of gaps in your knowledge and skills (all those things that manage to go unnoticed until we actually have to explain them to someone!). The next time you go about filling in one of those gaps, take some time at the end to reflect on the process you just undertook. Who did you talk to? What did you read? Could you have done it better or more efficiently? And how did you know how to do these things? Would your students know what to do?

    Below are some resources that may be useful as you consider how to incorporate certain aspects of learning how to learn more fully within your own courses!

     

    Additional Reading/Resources:

    • Reflective Learning Journals in Statistics: Waggoner Denton, A. (2018). The use of a reflective learning journal in an introductory statistics course. Psychology Learning and Teaching, 17, 84-93. DOI: 10.1177/1475725717728676

     

    References

    Arbesman, S. (2013). The half-life of facts. New York: Penguin. 

    Fink, D.L. (2013). Creating significant learning experiences: An integrated approach to designing college courses.  San Francisco: Jossey-Bass.


    Ashley Waggoner Denton is an Associate Professor, Teaching Stream in the Department of Psychology at the University of Toronto. She received her Ph.D. in Social Psychology from Indiana University and completed her bachelor's degree at the University of Toronto. She teaches courses including Introductory Psychology, Social Psychology, Statistics, and the Social Psychology Laboratory. She also supervises undergraduate research projects that examine questions related to the social psychology of teaching and learning.

  • 17 Jan 2019 10:00 AM | Anonymous member (Administrator)

    By Teresa Ober, Elizabeth Che, and Patricia J. Brooks, GSTA Leadership

In Fall 2018, the GSTA distributed a short survey to gather informal input about the preferences of graduate students with regard to a possible mentorship program. We were specifically interested in gauging whether graduate students would be interested in a program in which they would be mentored by early career psychologists.

There have been past efforts to establish mentorship programs within the framework of existing professional organizations. The Society for the Teaching of Psychology recently formed a mentorship program pairing early career psychologists and advanced graduate students with more senior full-time faculty. The program was featured in a recent GSTA blog post by Dr. Diane Finley, which describes some of the history and benefits of mentorship. Mentorship is thought to encourage networking, collaboration, and sharing of instructional resources and ideas. In addition to these benefits, mentorship has also been shown to relate to decreased work-family conflict and increased job satisfaction in the long term (Tenenbaum et al., 2001).

    To date there has been relatively little systematic and quantitative research on mentorship as an evidence-based practice (Troisi, Leder, Stiegler-Balfour, Fleck, & Good, 2015), and virtually none on mentorship of graduate students in psychology. Existing research on professional mentorship between faculty and students indicates that it consists of two distinct components: instrumental and psychosocial help (Tenenbaum et al., 2001). “Instrumental help” involves coaching and training. “Psychosocial help” includes empathizing and counseling. In conducting this survey, we were particularly interested in the types of instrumental help that graduate students might seek in a mentorship program, as well as what types of mentorship models and modes of communication would be preferred. Research in this area is necessary to understand whether graduate students have unique needs and interests as potential mentees.

    Survey

    We sought to identify interests related to professional mentorship among graduate students, particularly those with a background in teaching. Last fall (October 12-November 7, 2018), the GSTA distributed a short survey to gather informal input about the preferences of graduate student instructors that would help to guide recommendations for a possible mentorship program. Graduate students were invited to participate in the survey through various STP channels of communication, including the STP and GSTA social media pages (e.g., Facebook, Twitter) and email (STP/DIV2 listserv). The survey received a total of 78 responses, summarized below.

    Sample Characteristics

Graduate student respondents were asked various questions about their areas of specialization and years in graduate school. Approximately one in four respondents indicated their field was social psychology (25.6%). There were equal proportions of respondents from clinical and cognitive psychology (14.1% each), followed by developmental psychology (9.0%) and neuroscience (7.7%). Nearly half of respondents were in the second (23.1%) or third (25.6%) year of their program, followed by those in the first year (16.7%). Respondents in their fourth (12.8%), fifth (14.1%), sixth (6.4%), or seventh or higher (1.3%) year represented about one in three respondents.

    When asked about their post-graduation plans, more respondents indicated an interest in working at a research-based institution (61.5%) than at a teaching-based institution (41.0%); note that respondents could indicate interest in both. Respondents indicated a preference to work at a public institution (59.0%) over a private institution (42.3%). There appeared to be a negligible difference in the preference for working at a large institution (46.2%) as opposed to a small institution (44.9%). A minority of respondents indicated an interest in working at a nonprofit organization post-graduation (2.6%).

    Interest in a Mentorship Program

Over 9 in 10 respondents indicated either a potential interest (51.3%) or a definitive interest (39.7%) in being mentored by an early career psychologist. The remainder (9.0%) indicated no interest and did not explain why.

The survey asked respondents about their preferred topics for mentorship; note that they could indicate interest in multiple topics. Half of the respondents indicated they would like mentorship to focus on how to prepare for the job market (50.0%). Others indicated they would also like mentorship around teaching (11.4%), how to prepare work for publication (10.0%), research advisement (10.0%), engagement in service (8.6%), innovation in the field (1.4%), and jobs outside of academia (1.4%). Some indicated they were open to and interested in mentorship on all of the above topics (5.7%).

    Respondents were asked what types of mentorship models they would most prefer. Over half of the respondents indicated an interest in dyadic mentorship (60.6%), while a minority indicated interest in a group mentorship model (35.2%). Other respondents were content with either option (4.2%).

The survey asked respondents how frequently they would like to communicate with their mentor(s). Most respondents preferred meeting about once a month (46.5%) or twice a month (36.6%). Communicating on a weekly basis was less popular, though preferred by some (11.3%). Even fewer respondents indicated an interest in communicating less frequently, about once every three months (5.6%).

Respondents also indicated their preferred channels of communication for a potential mentorship program and could select multiple options. The vast majority indicated a preference for email (88.7%) or in-person (85.9%) communication. About half also indicated a preference for video calls (47.9%). Others indicated phone (36.6%) or text messaging (36.6%) as preferred channels as well.

    Summary of Key Findings

Mentorship opportunities may be especially beneficial for graduate students as they try to gain a professional footing. Such opportunities can connect graduate students studying psychology to others in the field, possibly leading to long-term collaborations. Without a previous systematic investigation into the needs and interests of potential graduate student mentees, we distributed this survey to gather this information. The responses indicated a preference for a mentorship program structured around a dyadic mentor-mentee arrangement. The results also suggested that respondents preferred communicating about once or twice a month. The most popular means of communication appeared to be email and in-person meetings; however, over a third also indicated a preference for video calls, phone, or text messaging. These findings shed light on effective ways to organize a mentorship program.

    With regards to the focus of the mentorship, given that we recruited through STP and GSTA communication channels, we were surprised that fewer than half of the respondents (41.0%) indicated interest in a teaching-based position post-graduation, and even fewer (11.4%) indicated interest in mentorship around teaching. Most of the respondents were in the earlier years of their program (first to third), suggesting that there is demand for a mentorship program geared towards students in the earlier phase of their doctoral studies.

    Our findings pointed towards a greater interest and need among graduate students for mentoring on issues centrally related to preparing for the job market. Recent news articles have featured the many challenges associated with entering the job market (Smith, 2019), particularly for those who are pursuing careers in academia (Smith, 2017). Given the context of such a competitive job market even for highly skilled individuals, a successful mentorship for graduate students should incorporate both aspects of help described by Tenenbaum et al. (2001), with a focus on preparing students with the instrumental knowledge necessary for applying for jobs, and the psychosocial support to buffer the challenges and inevitable rejections they will experience in the process.

    Participation in mentorship may create expectations around the education and training of graduate students as a continuous endeavor (Epstein & Hundert, 2002). Such a perspective may be particularly helpful for advanced graduate students and recent post-graduates who anticipate preparing for a competitive job market, particularly in academia. Professional mentorship opportunities may be one way to better prepare recent graduates for a long-term career, rather than forcing them to abruptly recalibrate their job ambitions. Having such opportunities beyond the formal student-advisor relationship may be one means by which institutions and organizations can promote a culture where the continual development of professional competency is held in high regard.


    References

    Epstein, R. M., & Hundert, E. M. (2002). Defining and assessing professional competence. Journal of the American Medical Association, 287, 226–235.

Smith, N. (2017, Oct 4). Too many people dream of a charmed life in academia. Bloomberg. Retrieved from https://www.bloomberg.com/opinion/articles/2017-10-04/too-many-people-dream-of-a-charmed-life-in-academia

Smith, N. (2019, Jan 9). Burned-out millennials need careers, not just jobs. Bloomberg. Retrieved from https://www.bloomberg.com/opinion/articles/2019-01-09/millennial-burnout-young-adults-need-careers-not-jobs

    Tenenbaum, H. R., Crosby, F. J., & Gliner, M. D. (2001). Mentoring relationships in graduate school. Journal of Vocational Behavior, 59(3), 326-341.

    Troisi, J. D., Leder, S., Stiegler-Balfour, J. J., Fleck, B. K., & Good, J. J. (2015). Effective teaching outcomes associated with the mentorship of early career psychologists. Teaching of Psychology, 42(3), 242-247.


Teresa Ober is a doctoral candidate in Educational Psychology at the Graduate Center of the City University of New York. Teresa designed and created Manuscript Builder as part of the certificate program in Interactive Technology and Pedagogy at the Graduate Center. She is interested in the role of executive functions in language and literacy. Her research has focused on the development of cognition and language skills, as well as how technologies, including digital games, can be used to improve learning.

    Elizabeth S. Che is a doctoral student in Educational Psychology at the Graduate Center, CUNY and the GSTA Deputy Chair. Her research interests include individual differences in language development, creativity, and pedagogy.

    Patricia J. Brooks is Professor of Psychology at the College of Staten Island and the Graduate Center, CUNY and GSTA Faculty Advisor.  Brooks was recipient of the 2016 President’s Dolphin Award for Outstanding Teaching at the College of Staten Island, CUNY.  Her research interests are in two broad areas: (1) individual differences in language learning, (2) development of effective pedagogy to support diverse learners.​

  • 12 Dec 2018 11:09 AM | Anonymous member (Administrator)

By Carolyn Stallard, Ph.D. Student, The Graduate Center, CUNY

This past October, I had the privilege of volunteering for and presenting at the 9th Annual Pedagogy Day, held at the Graduate Center, CUNY and organized by members of the GSTA. At first I was concerned: would my non-psychology-focused presentation go over well at a conference hosted by the Psychology department? The conference was open to anyone, but as a music educator, would I benefit from the presentations?

As the proceedings began, it was immediately evident that my concerns were for naught; from the start I could see that this was a conference of great benefit to anyone interested not only in psychology but also in higher education pedagogy in general. I truly feel as if I learned something useful from every presentation I attended. In particular, I loved the message of keynote speaker Sue Frantz (click here to see a recording of the keynote address), who challenged the audience, when preparing to teach an Intro Psych course for undergraduates, to consider what their real-life neighbors might need to know about psychology. Though the topic at hand was purely psychology, I found the question to be relevant to any course taught to students who are not planning to major in a particular subject. When teaching an introductory course, it is important for educators to remember that the students enrolled will someday be our neighbors – construction workers, educators, dentists, pilots – so what do they need to know about the subject being taught?

My contribution to Pedagogy Day was a bit different from Sue Frantz’s. Rather than challenge the audience to think about what non-majors might need to know, I challenged them to think more creatively about the collection and retrieval of information, particularly to encourage and improve research and critical thinking skills. I shared information on a mod – a modification of a pre-existing game – of Superfight by Jack Dire. In the original version of this self-described “game of absurd arguments,” players draw three each of two kinds of cards: Characters and Attributes. For instance, a player may have a hand containing three characters – Abraham Lincoln, Superman, and Godzilla – and three attributes – the ability to shrink in size, a water gun, and a beard full of bees. Each player then chooses a character and an attribute from their hand and “battles” through debate against another player to determine who would emerge victorious in a fight. A third player judges the debate and decides who wins.

    Figure 1. Example of the original Superfight game by Jack Dire

    My version, called Music Melee, takes this debate concept and the accompanying game mechanic of randomization and applies it to music history. In the basic version, character cards contain information about a significant musician we’ve studied in class, and attribute cards are instruments. Students form groups of three (two to actively debate, one to judge), choose an artist and instrument from their hand, and make an argument for why their choice would outperform their opponent in a battle of the bands.

    Figure 2. Example of Superfight cards created by students

    To make the game run smoothly in my 50-student class, after each of the initial three players in a group has battled each other (creating a “best two out of three” scenario) the two losing players join the winner as support, resulting in a new team of three. This pod of three then finds two other pods to debate (with a larger arsenal of characters to choose from now) and again, the two losing pods join the winner, creating teams of nine. This continues until there are three massive teams debating in the room. At this time, for the final round, only the team champion (the single person on the team who has yet to lose any debate) can speak for their team in a live debate in front of the class.
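    For readers who like to see the bracket math, below is a minimal Python sketch of these pod-merging rounds. Everything in it is illustrative: the roster names and the run_melee function are invented, and random choices stand in for the judged debates (in a real class, the instructor balances leftover pods by hand).

    ```python
    import random

    def run_melee(n_students=50, seed=1):
        random.seed(seed)
        roster = [f"s{i}" for i in range(n_students)]
        # Round 0: pods of three (two debaters, one judge). A random pick
        # stands in for the pod's "best two out of three" winner.
        pods = [roster[i:i + 3] for i in range(0, n_students, 3)]
        teams = [{"champion": random.choice(p), "members": p} for p in pods]
        rnd = 0
        while len(teams) > 3:
            random.shuffle(teams)
            leftover = len(teams) % 3
            merged = []
            for i in range(0, len(teams) - leftover, 3):
                trio = teams[i:i + 3]
                winner = random.choice(trio)  # stand-in for the judged debate
                # The two losing pods join the winner as support; only the
                # winning pod's champion remains undefeated.
                merged.append({"champion": winner["champion"],
                               "members": [m for t in trio for m in t["members"]]})
            merged.extend(teams[len(teams) - leftover:])  # leftover pods carry over
            teams = merged
            rnd += 1
            print(f"Round {rnd}: {len(teams)} teams, sizes {[len(t['members']) for t in teams]}")
        return teams  # each surviving champion speaks for their team in the final

    run_melee()
    ```

    With 50 students, the sketch ends after two merge rounds with three large teams, matching the final-round setup described above.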

    This game is useful for a number of reasons:

    1. Students create the cards. In the week(s) leading up to the game, each student must choose a couple of artists to research. Thus, the students themselves create the playing deck, requiring very little preparation from the instructor. I give the students a number of specific points they must include on their cards, treating each as a small research project, but another professor might adjust this in a different way.
    2. It encourages not only information retrieval, but also critical thinking. Often, the instrument/musician pairings are not ideal; the best combo a student might produce from their randomized hand might be, for instance, Umm Kulthum with a didgeridoo. In this situation, students must get creative in their arguments, not only recalling information learned in class but also considering what factors might create a persuasive argument (in this example, a student might argue that because Umm Kulthum is considered such an important vocalist in Egypt and the broader Middle East, she may have the lung power to master a didgeridoo better than, say, Jimi Hendrix, who did not play any aerophone instruments).
    3. Students learn what does or does not make a strong argument, which can later be applied to research papers. In my class, we spend time before the game having a full class discussion to determine what does or does not make a strong argument. We create a sort of rubric on the board, which students can then refer to when making or judging an argument during gameplay.
    4. The game can be easily modified to up the ante. For instance, you may add in random situations/scenarios mid-debate: Suddenly the musicians are performing in 18th century Venice or can only perform acoustically; how does this affect the argument being made?
    5. Likewise, the game can be modified to teach a number of different subjects. At Pedagogy Day I asked the audience for ideas, and a number of instructors mentioned the idea of creating a deck of psychologists as the character cards, with various research variables/items (B.F. Skinner’s rats could be a card, for instance) as the attributes. 

    My interest in games as a tool for learning has led me to become involved in the CUNY Games Network. As part of the steering committee for the CUNY Games Network, I firmly believe that game-based learning is a useful method for teaching any subject in higher education. Music Melee is a simple game, but whenever I introduce it in class the students latch on; I am always surprised when students who have been quiet all semester suddenly come alive during their debates.

    This is just one example of game-based learning (GBL) and “modding.” To learn more, I highly encourage anyone interested to take advantage of the plethora of resources and fellow pedagogues interested in GBL in higher education here at CUNY. The CUNY Games Network is open to anyone (CUNY or non-CUNY, as long as your interest is GBL in higher education), and we welcome those with previous GBL experience as well as those just starting out. To sign up for the mailing list and read more about games in the classroom, visit cunygames.org. There, you will also find information about the upcoming CUNY Games Conference 5.0, which will be held January 18, 2019 at Borough of Manhattan Community College (click “Events” to find the conference info.).

    Carolyn Stallard is an Ethnomusicology student and Senior Teaching Fellow at the Graduate Center and an adjunct instructor at Brooklyn College.  She is a member of the steering committee for the CUNY Games Network and researches game-based learning in higher education.

  • 29 Nov 2018 3:30 PM | Anonymous member (Administrator)

    By Laura Freberg, Ph.D., California Polytechnic State University, San Luis Obispo

    Choosing the right materials to support your course is one of the most important decisions an instructor must make. Whether you choose your own materials independently, serve on a textbook decision committee, or administer a course for which materials are chosen for you, this decision will have significant implications for the quality of the course experience for you and your students.

    Today’s instructors face a bewildering array of choices, which has both an upside and a downside. On the positive side, a broad market means courseware can be tailored to a specific group of students with characteristics best understood by their professor. On the downside, reviewing the many available materials represents a significant commitment of instructors’ time, which is already in very short supply.

    The point of this article, then, is to help instructors focus on some of the key variables involved in courseware decisions. In the interest of transparency, I am actively authoring two traditional textbooks for Cengage as well as serving as lead author for a lower-cost electronic textbook for TopHat. I have also worked with the APS Wikipedia Initiative and even sat on a panel for APA on “Teaching Without Textbooks.” While I fully appreciate the success of open source software, the typical model for open education resources (OER), I am also willing to pay for outstanding proprietary products like those from Adobe. The point is to obtain the tools that best fit your needs.

    Pros and Cons

    Each type of courseware has its own set of strengths and weaknesses. By examining these, we can begin to identify areas where the materials differ.

    The Traditional Textbook              

    The traditional textbook provides the complete package. Not only do you get a heavily peer-reviewed document, which minimizes errors, but publishers generally provide testbanks, instructors’ manuals (e.g., lecture notes, activities, lists of TED talks and videos), online homework and enrichment activities, and PowerPoints. This option is “Doc in a Box.” The textbooks are also updated at regular intervals. This might not be essential in algebra, but it’s a must in sciences like psychology.

    On the negative side is the elephant in the room—cost. Many people do not know why the costs of traditional textbooks are high, which contributes to the mentally lazy vilification of traditional publishers as “evil corporations.” The actual printing cost of a book is relatively little. Most of the cost represents work by a fairly large group of people, not just the authors. We have development editors who help shape our content, copyeditors, photo researchers, indexers, and sales teams. Hundreds of paid peer reviewers scour our work for errors. Still others produce the testbanks and other ancillaries, which are also reviewed. The publisher must ensure that online materials present a positive user experience, leading to an ever-increasing need for expensive IT people and equipment. Traditional publishers are held to a very high standard of accessibility and ADA compliance, which is also expensive.

    Publishers of both textbooks and books for the general public face these same challenges. What makes life much harder for textbook publishers is the impact of the used book and rental markets. Who sells or rents their copy of Harry Potter? Sacrilege! The relatively tiny printing cost is the only variable that depends on the number of books produced. The remaining costs that I mention must be paid regardless of how many books are sold. If you spread these costs over single payers (each reader of Harry Potter or an assigned textbook), traditional textbooks would be as affordable as Harry. This doesn’t happen, of course, as the vast majority of students assigned a textbook will purchase used or rental copies. In spite of “don’t sell” notices adorning instructors’ desk copies, some instructors sell them anyway for extra spending money. Amazon affiliates and the campus bookstores are the main recipients of this largesse. They pay the student very little at buyback, store the book on the shelf for a few days, then sell it again at nearly new prices. None of this money, of course, goes back to the publisher to offset any production costs. What makes the traditional textbook expensive is the fact that new book sales represent a relatively small fraction of overall users.
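    To make the arithmetic concrete, here is a back-of-the-envelope sketch with entirely invented numbers. It only illustrates the cost logic described above: fixed production costs are recovered from new-copy sales alone, so the sticker price must rise as the new-sale fraction shrinks.

    ```python
    # Hypothetical figures, for illustration only: fixed production costs
    # (editorial, review, IT, compliance) are recovered solely from new-copy
    # sales; printing is the only per-copy cost.
    def breakeven_price(fixed_costs, printing_cost, total_users, new_sale_fraction):
        new_copies_sold = total_users * new_sale_fraction
        return printing_cost + fixed_costs / new_copies_sold

    # A made-up title: $2M in fixed costs, $10 to print a copy, and
    # 100,000 students using the book over its lifespan.
    for frac in (1.0, 0.5, 0.25):
        price = breakeven_price(2_000_000, 10, 100_000, frac)
        print(f"{frac:4.0%} buying new -> ${price:.2f} per new copy")
    ```

    Under these made-up figures, halving the share of students who buy new pushes the breakeven price from $30 to $50; at one new buyer in four, it reaches $90.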


    Image: The upper graph demonstrates the effects of the used book market on publisher sales, and the lower graph demonstrates the effects of both the rental and used book markets over the six-semester lifespan of a textbook (Benson-Armer, Sarakatsannis, & Wee, 2014). Publishers only recoup production costs from new sales, not total use of their intellectual property. 1 = Assumes percentage of students who do not acquire textbooks shifts from 20% to 8% due to introduction of the rental market (which reaches 30% penetration).

    Just as the music industry adopted the iTunes model to stem the hemorrhage that was Napster, publishers have moved to electronic versions of textbooks. The advantage to the publisher is not due to lower printing costs, which are quite small anyway, but rather to the spreading of production costs across more users because resale is limited. Electronic books are typically half or less of the cost of the print version, and prices will go lower as adoption of electronic books increases. Incidentally, the idea that students learn better from print than electronic books appears to be a myth, at least according to careful research presented by Regan Gurung (2017). For students who insist on something they can hold in their hands, publishers make loose-leaf versions available for a few extra dollars over the electronic book cost.

    Electronic materials have another advantage. Many students do not buy their assigned text. When instructors use electronic books and their associated homework, they know exactly who does and does not purchase a textbook. The analytics associated with the electronic books even show you how much time each student spends with the materials, which can be very helpful when advising a student doing poorly in your class.

    We still don’t know how well the revolutionary Cengage Unlimited model is going to work. This model (nicknamed the Netflix model) allows students access to ALL Cengage titles while paying a slightly higher fee than they would for a single electronic title. Students rarely pay attention to the publishers of their assigned textbooks, but this model might make them more sensitive to that. If successful, we can anticipate all of the major publishers will begin offering this service.

    Before jumping to conclusions that students ALWAYS want low-cost or free textbooks, consider the following. Many of the lowest income students are receiving federal grants that include the purchase price of new textbooks. Washington surely doesn’t want the textbooks back at the end of the term, so the student is free to sell the books. This provides a significant income for these students, who object strenuously to OER or electronic books with no resale value. Affluent students often follow a similar strategy. Their parents pay for books but do not consider their resale value, so the student gets a bit more discretionary income without having to ask for it.

    A final key aspect of the decision to use a traditional textbook is whether the instructor actually needs a textbook. If you provide students with comprehensive study guides and PowerPoints, and assess exclusively on that content, it should come as no surprise that students either do not purchase the text or complain about having to purchase the text. Texts are only valuable if they are used.

    Low-Cost Textbooks        

    All of us receive frequent emails from indie textbook publishers, both print and electronic, that promise a less expensive option of perhaps $40 or so. These options vary substantially in quality and in the support provided for the instructor.

    My all-electronic TopHat project probably represents the higher end of this classification in terms of quality. We have very capable development editors and were supported by a copy editor and a graphics designer. Costs are cut by using open-source photos; photo permissions for traditional textbooks can be very pricey. I actually purchased the “blue/black or gold/white” illusion dress from eBay and wore it for a photo for my Cengage books (and yes, I can attest to the fact that it really is blue and black) when the difficulties of obtaining permission to use the original photo were insurmountable. Another cost savings was the relatively sparse pre-publication review. We had one person per chapter review our work prior to publishing, compared to hundreds of reviewers in traditional text publishing, which I must say made me nervous. TopHat assumes that crowdsourcing will fix problems after publication, a point of view shared by many open educational resource (OER) advocates.

     

    Image 1 Caption: To save money, I actually purchased a version of the blue/black/white/gold dress on eBay so we could include a photo in my textbooks. The original photo is on the left and I am wearing the dress on the right. My husband, armed with his cell phone camera, and I walked around the house until we found lighting that allowed us to duplicate the illusion.

    In spite of these cost savings, TopHat charges $61 for our book as opposed to the $95 cost of the basic Cengage electronic books. So even when you do not have to worry about the effects of used and rental textbooks on your sales, there is an underlying truth about the costs of producing quality materials – they can only go so low.

    Open Education Resources (OER)                

    The largest advantage of OER is cost to the student. Who doesn’t like free stuff? Having paid for my own college education, I am not unsympathetic to this. Instructors can endear themselves to their students and administrations by using OER, resulting in higher evaluations. Their classes get special recognition in the registration process, in a not-so-subtle public shaming of instructors who prefer traditional or low-cost materials. Note that OER is not “free” at the institutional level. Colleges and universities spend considerable resources on grants and support personnel for OER that reduce support for other functions.  

    OER advocates tell me that cost is not the only advantage. You can bring in a multitude of materials in addition to a free text and instructors can adapt the material to fit their needs. I agree, but there’s nothing stopping you from doing these things WITH the electronic versions of traditional texts, which have the capability of embedding videos, assessments, activities, and documents seamlessly. In many cases, the publishing company staff will set up the course the way the instructor wants it.

    On the downside, if traditional textbooks are “Doc in a Box,” OER materials are stone soup. It might be possible to simply use materials as-is, but I know very few people who do that. Some people enjoy the revising and curating process, but others simply can’t fit additional course prep time into a heavy research and service load. Additionally, instructors might not have the necessary skillset for these tasks. Being a great teacher and researcher does not automatically make you a courseware expert.

    Continuity and updating of OER materials by entities such as OpenStax are somewhat uncertain. Foundation money might cover original production costs, but who takes ownership of the ongoing health of these materials?

    As mentioned previously, OER materials, unlike traditional text materials, are rarely assessed for ADA compliance. You don’t have to look at too many materials before finding some with glaring areas of inaccessibility. Bringing materials into compliance is expensive and time-consuming, and lawsuits are even more so. Until now, OER materials have been given a “pass” not enjoyed by traditional publishers, but that is not likely to last forever.

    Head-to-Head Comparisons

    In 2017, Regan Gurung undertook direct comparisons between OER materials in introductory psychology (NOBA) and three traditional textbooks (Hockenbury & Hockenbury, Cacioppo & Freberg, and Myers & DeWall). You might have heard of numerous studies that show that students using OER have the same level of achievement as students using traditional textbooks, but Gurung carefully points out and controls for the design flaws in those efforts. He concluded that traditional textbook “users enjoyed their classes less and reported learning less than OER users but still scored higher on the quizzes.” In other words, just because students seem happier with OER does not mean they are learning more.

    OER are usually presented on campus as a positive contributor to social justice. This claim might be tempered if in fact student outcomes are superior with traditional materials. If less affluent, less-prepared students are more likely to be offered materials that result in lower performance, this is actually working in the opposite direction of true equity.

    Making the Decision

    As anyone teaching heuristics knows, the human decision-making system is subject to flaws. We can possibly avoid some of those flaws by thinking more systematically. One such approach is a utility model, where we assign ratings and weights to variables of interest and let the math point us in the right direction. If you’d like to try that out, I’ve provided a model you can use or adapt to your own needs.


    Begin by considering each “Feature” and assigning it a “Weight,” with “5” being “very important” and “1” being “not important at all.” Next, examine your sample materials, and assign each a “Rating,” with “5” being “very good” and “1” being “not very good.” Then, all you have to do is multiply Weight by Rating and sum the results. Ideally, this should give you an idea about which type of materials is likely to bring you the greatest level of satisfaction.
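    If you would like to try the arithmetic in code rather than on paper, here is a minimal sketch of the weight-times-rating computation. Every feature name, weight, and rating below is a placeholder invented for illustration, not a recommendation; substitute the variables and judgments that matter to you.

    ```python
    WEIGHTS = {  # 5 = very important ... 1 = not important at all
        "cost to students":    5,
        "instructor support":  4,
        "currency of content": 3,
        "ADA accessibility":   5,
        "customizability":     2,
    }

    RATINGS = {  # 5 = very good ... 1 = not very good (invented examples)
        "traditional text": {"cost to students": 2, "instructor support": 5,
                             "currency of content": 5, "ADA accessibility": 5,
                             "customizability": 3},
        "low-cost text":    {"cost to students": 4, "instructor support": 3,
                             "currency of content": 4, "ADA accessibility": 4,
                             "customizability": 3},
        "OER":              {"cost to students": 5, "instructor support": 2,
                             "currency of content": 2, "ADA accessibility": 2,
                             "customizability": 5},
    }

    # Multiply each weight by the option's rating and sum; the highest total
    # suggests the best fit for *your* priorities.
    for option, ratings in RATINGS.items():
        score = sum(WEIGHTS[f] * r for f, r in ratings.items())
        print(f"{option:>16}: {score} / {5 * sum(WEIGHTS.values())}")
    ```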

    No one type of courseware is likely to meet the specific needs of all students and their instructors. As empiricists, we should be willing to experiment. If what we’re doing isn’t working, we should try other things. Ultimately, our feedback and the feedback from our students can help producers of content to develop even better materials.


    References

    Benson-Armer, R., Sarakatsannis, J., & Wee, K. (2014). The future of textbooks. Retrieved from https://www.mckinsey.com/industries/social-sector/our-insights/the-future-of-textbooks

    Gurung, R. A. R. (2017). Predicting learning: Comparing an open educational resource and standard textbooks. Scholarship of Teaching and Learning in Psychology, 3(3), 233-248. http://dx.doi.org/10.1037/stl0000092


    Laura Freberg is a Professor of Psychology at California Polytechnic State University, San Luis Obispo, and an adjunct instructor for Argosy University Online. Dr. Freberg received her bachelor’s, master’s, and Ph.D. degrees from UCLA and conducted her dissertation research with Robert Rescorla of Yale University. She is serving as the 2018-2019 President of the Western Psychological Association.

  • 21 Nov 2018 10:00 AM | Anonymous member (Administrator)
    By Charles Raffaele, Ph.D. Student, The Graduate Center, CUNY

    Writing abilities are among the most important skills for psychology students to develop for work in the “real world” after college, regardless of the area of higher education or employment they pursue. Writing effectively is necessary for tasks ranging from communicating with collaborators on a project to generating proposals that convince others to invest time or money in a plan, as well as for everyday situations in which writing with finesse and efficiency is essential. In addition, writing can be a method for students to perform higher-order thinking, using writing as a tool to put varied thoughts into a logical sequence, organized around a focus or in the service of a broader goal.

    To help professors implement writing in their coursework, Kaitlin Mondello and I recently gave a presentation at the GSTA’s 2018 Pedagogy Day event on how we teach writing skills to students and teach content to students by way of these writing skills. Our workshop was based in the Writing Across the Curriculum (WAC) movement, which addresses the importance of writing’s place in any and all academic disciplines and has developed over decades (see Northern Illinois University’s A Short History of WAC page for more information about the history of this movement). WAC has yielded various approaches to implementing writing in the classroom that may be utilized for this purpose. This blog post, following generally the structure of the workshop Kaitlin and I gave, will cover a few central ideas of WAC, advanced roughly in the following sequence: initial planning of writing → construction of writing assignments → use of rubrics for showing which competencies an assignment taps → how to leave effective feedback on students’ papers.


    Backwards Course Design (Starting with Objectives and Designing Assignments Accordingly)

    Start with your personal course objectives: What is it you want your students to get out of the course? Do you want them to remember a lot of theory or research findings? Apply psychological ideas to the real world? Attain a more general critical/questioning lens? By identifying what your learning goals for the students are, you will be better able to construct writing assignments that help your students to achieve those goals. See the American Society for Microbiology’s page Starting at the End: Using Backward Course Design to Organize Your Teaching for more information about this approach.

    Ensure you are constructing writing assignments all from a ‘writing to learn’ standpoint – this is not elementary school, where children are largely ‘learning to write.’ This is also not high school, where students are mainly already writers but are often engaged in rote or other low- to mid-cognitive-engagement writing tasks (e.g., summarization, regurgitation of facts). This is college, where the writing that matters most revolves around constructing knowledge (i.e., taking course content and giving their own analyses or making connections utilizing the content). Existing assignments that do not fully meet this criterion can often be modified to do so. For example, perhaps the assignment you already have asking students to summarize Bandura’s Social Cognitive Theory and its key elements could be modified to have students analyze a situation (either from their own experiences or one you provide to them) through the lens of the theory. This adjustment would lead students to put their own signature on the paper and thereby ingrain the material more firmly in memory.


    Incorporating Both Low- and High-Stakes Assignments

    Utilize both low-stakes and high-stakes assignments. Low-stakes assignments are small and have little impact on grades (e.g., 3–5 minutes of freewriting in response to a question), while high-stakes assignments are large, formal, and heavily weighted (e.g., a 4–6 page paper organized into paragraphs with APA style references). Used in tandem, low-stakes assignments provide scaffolding (e.g., trying out possible paper topics, or practicing how to cite relevant information from a journal article) that builds toward the higher-order thinking high-stakes papers demand, helping more of your students write well-organized, in-depth papers on important topics.


    Using Rubrics (Communicating Expectations to Students Systematically)

    It is very important that a rubric for an assignment doesn’t (a) just redundantly re-describe the assignment, duplicating the instructions in your initial written prompt, or (b) indicate arbitrary criteria that do not line up with your learning objectives or with the weighting system used for grading the assignment. Instead, a well-constructed rubric should help students gain further insight into the skills you’re asking them to practice/demonstrate before writing and, after receiving feedback, allow them to know better on which areas they performed well/poorly. It will also help you grade in an objective, standardized, and transparent manner. A few rubric formatting tips that may be useful:

    Use what Bean (2011) calls a task-specific, analytic rubric. The task-specific guideline recommends use of one rubric per assignment rather than one rubric for all assignments. Rubrics that highlight the assignment-specific elements make grading and feedback clearer for students. The analytic guideline recommends using a rubric with different sub-grades for each competency rather than a rubric that combines all competencies into a global evaluation or grade. See Figure 1 for an example of a task-specific, analytic rubric.


    Figure 1. Example of a task-specific, analytic rubric (Bean, 2011)

    In using an analytic rubric, keep the number of competency categories to 3–6. Fewer than 3 categories gives the student too little detail on the breakdown of their grade, while more than 6 can overburden both you, who must grade every category, and the student, who must interpret such a complex breakdown. (A sketch of such a rubric, represented as data, appears below.)
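    For the spreadsheet-inclined, a task-specific, analytic rubric is straightforward to represent as data. The sketch below uses competencies and weights invented for a hypothetical theory-application paper (it is not Bean’s 2011 example); the point is that each competency receives its own sub-grade and the weights, which here sum to 100, mirror the grading scheme.

    ```python
    RUBRIC = {  # competency -> points (3-6 categories, per the guideline above)
        "thesis connects the theory to the chosen situation": 30,
        "accurate use of course concepts":                    30,
        "quality and relevance of supporting evidence":       25,
        "organization and APA style":                         15,
    }

    def grade(scores: dict) -> float:
        """scores maps each competency to a 0.0-1.0 judgment of mastery;
        the student sees a sub-grade per competency, not one global mark."""
        assert scores.keys() == RUBRIC.keys(), "rate every competency"
        return sum(points * scores[c] for c, points in RUBRIC.items())

    # One student's sub-grades, reported per competency:
    print(grade({
        "thesis connects the theory to the chosen situation": 0.9,
        "accurate use of course concepts":                    0.8,
        "quality and relevance of supporting evidence":       0.7,
        "organization and APA style":                         1.0,
    }))  # -> 83.5 out of 100
    ```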


    Giving Effective Feedback on Students’ Papers (Minimal Marking, and in a Coach-Like Style)

    Have you ever received feedback on a paper and been discouraged by reviewer comments that only mentioned what was ‘wrong’ with your paper? Or received so many notes that you didn’t know where to begin? If you are willing to adjust how you look at students’ written work when giving feedback, these issues can be ameliorated for your students. In addition, making certain adjustments in how you give feedback may help your students improve their writing more efficiently, both in revision and in future writing they generate from scratch. The suggested adjustments are these: when you grade, point out only the few most important points for the student to be aware of, and give both positive feedback and feedback on areas that could benefit from modification, rather than on areas that are ‘wrong.’ This manner of feedback-giving is more like ‘coaching,’ in that a sports coach both encourages what the learner is doing right and suggests areas to modify as the learner continues practicing the skill, and gives only the feedback appropriate to helping the student reach the next level of ability. For example, a student writing a paper for your class may feel more encouraged to keep trying and to edit previously misunderstood theoretical aspects of Information Processing Theory if you also compliment the paper’s accurate description of the effects the theory had on the field of psychology. This student may likewise feel more encouraged to make those edits if you stick to mentioning just those and not every grammatical mistake. (Note: See Figures 2 and 3 for examples of both unsuccessful and successful feedback given on student work.)


    Figure 2. Example of unsuccessful feedback given: All on grammar and none on content/ideas (Bean, 2011)



    Figure 3. Example of successful feedback given: Concentration on content/ideas, and a combination of encouragement and suggestions for future revision (Bean, 2011)


    All in all, WAC’s approach to feedback helps us realize that feedback is not most crucial for justifying a grade – it is most crucial for helping students continue to develop their skills in the field. After all, did we become college instructors to only make sure students know why their grades were as far below an A as they were, or to provide students with manageable next steps they can take and inspire them to reach for those stars?

    Through the use of these foundational elements of WAC, applied with psychology instruction in mind, you may find substantial changes in your experience of teaching and in the work you receive from students. These changes may be achieved with only a few easy-to-implement (but highly significant) adjustments to your use of writing in teaching. I hope to experience these enrichments myself, as my semesters of college teaching thus far all came before my induction as a WAC Fellow at Queensborough Community College; I am excited to return to the classroom and possibly realize the benefits I have seen accrue to professors who are taught to incorporate the main WAC principles into their classrooms. In the end, we all recognize the importance of writing in college teaching, so why not make it a personal goal to improve the way writing is implemented in our courses to the greatest degree possible?


    References

    Bean, J. C. (2011). Engaging ideas: The professor's guide to integrating writing, critical thinking, and active learning in the classroom. San Francisco, CA: Jossey-Bass.


    Acknowledgments: A big thank you to Dr. Kaitlin Mondello, with whom, as mentioned in this post, I had the pleasure of presenting on WAC at the GSTA’s 2018 Pedagogy Day event; the Writing Intensive Training Program at Queensborough Community College, where I have had the opportunity to perform the bulk of my training and work in WAC; and Dr. John Bean, whose great ideas I draw on routinely through his book cited here.


    Charles Raffaele is a doctoral student in the Learning, Development, and Instruction specialization in Educational Psychology at The Graduate Center, The City University of New York. His research focuses on theoretical domains of second language acquisition, the use of multimedia and games in learning, and intersections between these. He is an editor of the GSTA Blog, webmaster for DE-CRUIT and the AERA SIG Studying and Self-Regulated Learning, and a member of the Child Interactive Learning and Development (CHILD) Lab.

  • 14 Nov 2018 12:36 PM | Anonymous member (Administrator)

    By Patricia J. Brooks, Ph.D., and Jessica E. Brodsky, Ph.D. Student, The College of Staten Island and The Graduate Center, CUNY

    In today’s media-saturated world, we are likely to encounter false or biased information in our news feeds, as well as images that have been altered or miscaptioned. It is challenging to determine who is behind the information that we consume, and we struggle to distinguish between credible and untrustworthy content (Wineburg & McGrew, 2017).

    At the 9th Annual Pedagogy Day Conference recently held at the Graduate Center, CUNY on October 26, 2018, we shared resources from the AASCU’s Digital Polarization Initiative  (DPI)—a national effort involving 11 colleges (see Figure 1) that aims to help students develop fact-checking skills and become more critical consumers of online information.


    Figure 1. The 11 colleges participating in the AASCU’s Digital Polarization Initiative. (Black Hills State University, CUNY College of Staten Island, Georgia College, Indiana University Kokomo, Metropolitan State University of Denver, Millersville University of Pennsylvania, San Jose State University, Texas A&M International University, Texas A&M University-Central Texas, University of North Carolina Charlotte, Washington State University Vancouver)

    The DPI, led by Mike Caulfield of Washington State University Vancouver, builds on the work of the Stanford History Education Group (SHEG), which published an influential study suggesting that students of all ages lack fact-checking skills (McGrew, Breakstone, Ortega, Smith, & Wineburg, 2018). The SHEG researchers developed a set of problems (available on their website) to test students’ civic online reasoning abilities, encompassing knowledge of how to determine who is behind information, what evidence supports their claims, and what other sources have to say about the information. They found that, unlike professional fact-checkers, students rarely engaged in lateral reading (see Wineburg & McGrew, 2017) by opening up multiple tabs on their browsers to find out what other trusted sites (such as Snopes.com, Wikipedia, or NPR’s fact-check website) have to say about a particular topic or information source.

    At the College of Staten Island, CUNY (see far right marker in Figure 1), we are pilot-testing the DPI’s web literacy curriculum in COR100—a required general education course for first-year students. COR100 focuses on contemporary American society and democracy, and its curriculum aligns well with the DPI’s emphasis on building students’ civic, information, and web literacy. In COR100, we teach students the four moves and a habit of expert fact-checkers (see Table 1) through a series of linked online homework assignments and assessments. To develop our assignments, we used online news stories and images from Caulfield’s blog, which he regularly updates with new materials. These examples are free, and you can use them to create lessons in web literacy for your students. More information about the four moves and a habit is also available in Caulfield’s free online book (2017).

    We have also begun incorporating these materials into PSY100, where we draw connections between media literacy and critical thinking skills. We use a dual-systems model of thinking (see Kahneman, 2011) to help students develop the habit of checking their emotions (a System 1 reaction) as they learn to investigate online sources of information (a System 2 response). Throughout the semester, our lesson plans build students’ metacognitive awareness of processing biases and shortcuts that influence how we take in information. In our first class, we use illusions to highlight the extent to which our information processing system generates and acts on representations of the world that may be inaccurate (see Table 2). We also contrast System 1 and System 2 thinking using the Cognitive Reflection Test (Frederick, 2005). See below for other examples of key terms that we discuss in relation to information processing and online media consumption.


    We encourage instructors, particularly those teaching general education courses, to consider ways that they can teach students to think critically about the online content they consume. The four moves and a habit of expert fact-checkers can be introduced to students across disciplines as efficient and effective strategies for evaluating online news stories and images. Additionally, instructors can identify opportunities in their courses to develop students’ metacognitive awareness of how their biases and mental shortcuts affect the ways they perceive and interpret information.

    References

    Caulfield, M. (2017). Web literacy for student fact-checkers… and other people who care about facts. Retrieved from https://webliteracy.pressbooks.com/

    Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19(4), 25–42. Retrieved from https://doi.org/10.1257/089533005775196732

    Kahneman, D. (2011). Thinking, fast and slow. New York, NY: Farrar, Straus and Giroux.

    Lewandowsky, S., Ecker, U. K., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13(3), 106–131. https://doi.org/10.1177/1529100612451018.

    Lilienfeld, S. O., Lynn, S. J., Ruscio, J., & Beyerstein, B. L. (2009). Fifty great myths of popular psychology: Shattering widespread misconceptions about human behavior. Chichester, UK: Wiley-Blackwell.

    McGrew, S., Breakstone, J., Ortega, T., Smith, M., & Wineburg, S. (2018). Can students evaluate online sources? Learning from assessments of civic online reasoning. Theory & Research in Social Education, 46(2), 165–193.

    Mitchell, A., Gottfried, J., Barthel, M., & Sumida, N. (2018, June 18). Distinguishing between factual and opinion statements in the news. Pew Research Center. Retrieved from http://www.journalism.org/2018/06/18/distinguishing-between-factual-and-opinion-statements-in-the-news/

    Pariser, E. (2011). The filter bubble: How the new personalized web is changing what we read and how we think. New York, NY: Penguin.

    Wineburg, S., & McGrew, S. (2017). Lateral reading: Reading less and learning more when evaluating digital information. Stanford History Education Group Working Paper No. 2017-A1. Retrieved from http://doi.org/10.2139/ssrn.3048994

    Patricia J. Brooks is Professor of Psychology at the College of Staten Island and the Graduate Center, CUNY and GSTA Faculty Advisor.  Brooks was recipient of the 2016 President’s Dolphin Award for Outstanding Teaching at the College of Staten Island, CUNY.  Her research interests are in two broad areas: (1) individual differences in language learning, (2) development of effective pedagogy to support diverse learners.​

    Jessica E. Brodsky is a doctoral student in Educational Psychology at the Graduate Center, CUNY and a member of the GSTA. Her research interests include assessing and fostering media literacy in adolescents and college students, as well as using games to train executive function skills in adolescents.



  • 14 Nov 2018 12:00 PM | Anonymous member (Administrator)

    By Teresa Ober, Ph.D. Candidate, The Graduate Center CUNY

    Many students struggle with writing tasks in college, for several reasons. College typically demands high-quality writing across many courses and disciplines (Oppenheimer, Zaromb, Pomerantz, Williams, & Park, 2017). In addition, for a student who is unfamiliar with a topic, writing may be particularly straining on attention and memory capacities (Kellogg, 2001; Kellogg, Olive, & Piolat, 2007). Although practice, along with adequate feedback, is considered essential for improving one’s writing (Graham & Perin, 2007), there may be few opportunities for college students to practice writing that is not tied to their grades. From the instructor’s perspective, finding the time to provide adequate feedback on students’ written work through detailed commentary on initial drafts is not always feasible. In light of these challenges, Manuscript Builder was designed to provide structure through a set of ordered prompts to support students’ writing. In its latest iteration, the tool also provides a platform for conducting peer reviews.

    Prior studies have shown that student-led peer review can be effective at improving students’ writing skills (Topping, 1998; Van Zundert, Sluijsmans, & Van Merriënboer, 2010), tends to provide valid and reliable assessment of student work (Cho, Schunn, & Wilson, 2006), and is viewed by students as both motivational and meaningful (Hanrahan & Isaacs, 2001). Student-led peer review is also effective when conducted through a computer-mediated format (Cho & Schunn, 2007).

    Manuscript Builder is an online tool available to instructors and students for conducting student-led peer review. As a pedagogical tool, it is designed to facilitate the planning and outlining process by providing a purpose, a text structure, and prompts to guide students in drafting a written research report. The user can navigate between pages, most of which represent a specific section of the final written report (e.g., Introduction, Methods, Results and Data Analysis, Discussion, References). Each page contains a set of prompts and an input text field. Users can create an account on the site and return to previously drafted work by logging in at any time. When users have finished writing responses to prompts, they can copy their responses and share them as a finalized post. The finalized post can then be used for further writing, editing, student-led peer review, and, finally, publishing on the site. The steps below outline the process for adding a manuscript and conducting a peer review.

    Instruction around writing, and communication more generally, remains an essential component of the undergraduate curriculum, regardless of students’ majors. Direct writing instruction for undergraduate students has benefits for learning (Graham & Perin, 2007), and writing stands as a core goal of the APA Guidelines for the Undergraduate Major, version 2.0 (APA, 2016). By leveraging peer feedback, Manuscript Builder facilitates student-led peer review, an engaging and collaborative way to promote students’ writing skills.


    Steps for Conducting a Peer-review Activity in Manuscript Builder

    Instructions for Creating Your Manuscript Builder Account

    1)    Log in to your computer and open a web browser.

    2)    Go to the URL: https://manuscriptbuilder.newmedialab.cuny.edu/register/


    3)    Enter your first and last name, the class code your instructor gave you, a username, your email, and a password that you will remember.


    Instructions for Publishing Your Report

    1)    Once you are logged in, go to the “Publish Your Work” page (URL: https://manuscriptbuilder.newmedialab.cuny.edu/post-form-page/).


    2)    Add the title for your report and copy-and-paste the main body of your report.

    3)    When you are ready, just click the button at the bottom of the page.

    4)    You can always return to your drafts by going to the “View Your Work” page (URL: https://manuscriptbuilder.newmedialab.cuny.edu/view/). On this page you can choose to continue editing your draft or delete it entirely.


    Instructions for Review: Conducting a Peer-Review

    1)    Go to the “Published Work” page (URL: https://manuscriptbuilder.newmedialab.cuny.edu/published/).


    2)    There you should see a list of published reports.

    3)    Click on one.

    4)    After you are redirected to that page, you should see a sidebar open up on the right side.

     

    5)    Log in to Hypothesis with your Hypothesis account information. (Note that this is a separate log-in from your Manuscript Builder account.)

     

    6)    Once logged in, you can begin commenting on your peer’s work!


    References

    Cho, K., & Schunn, C. D. (2007). Scaffolded writing and rewriting in the discipline: A web-based reciprocal peer review system. Computers & Education, 48(3), 409–426.

    Cho, K., Schunn, C. D., & Wilson, R. W. (2006). Validity and reliability of scaffolded peer assessment of writing from instructor and student perspectives. Journal of Educational Psychology, 98(4), 891.

    Graham, S., & Perin, D. (2007). A meta-analysis of writing instruction for adolescent students. Journal of Educational Psychology, 99(3), 445–476.

    Hanrahan, S. J., & Isaacs, G. (2001). Assessing self-and peer-assessment: The students' views. Higher Education Research & Development, 20(1), 53–70.

    Kellogg, R. T. (2001). Competition for working memory among writing processes. The American Journal of Psychology, 114(2), 175–191.

    Kellogg, R. T., Olive, T., & Piolat, A. (2007). Verbal, visual, and spatial working memory in written language production. Acta Psychologica, 124(3), 382–397.

    Oppenheimer, D., Zaromb, F., Pomerantz, J. R., Williams, J. C., & Park, Y. S. (2017). Improvement of writing skills during college: A multi-year cross-sectional and longitudinal study of undergraduate writing performance. Assessing Writing, 32, 12–27.

    Topping, K. (1998). Peer assessment between students in colleges and universities. Review of Educational Research, 68(3), 249–276.

    Van Zundert, M., Sluijsmans, D., & Van Merriënboer, J. (2010). Effective peer assessment processes: Research findings and future directions. Learning and Instruction, 20(4), 270–279.


    Teresa Ober is a doctoral candidate in Educational Psychology at the Graduate Center of the City University of New York. Teresa designed and created Manuscript Builder in completion of the certificate program in Interactive Technology and Pedagogy at the Graduate Center. She is interested in the role of executive functions in language and literacy. Her research has focused on the development of cognition and language skills, as well as how technologies, including digital games, can be used to improve learning.
