Welcome to the GSTA blog! 

In an effort to keep the Graduate Student Teaching Association (GSTA) blog current, we regularly welcome submissions from graduate students as well as full-time faculty. As a blog team, we advocate for and promote inclusion, equity, and anti-racism in pedagogy (see updated GSTA Position Statement from the Steering Committee). At this critical juncture in history, we have declared our solidarity with #BlackLivesMatter and are motivated to use this platform to feature voices for change in the following areas as outlined by the GSTA:

  • Suggestions relating to decolonizing syllabi by including the work of scholars and psychologists with diverse identities and backgrounds.

  • Tips on adopting anti-racist and culturally responsive teaching and assessment practices.

  • Recommendations on creating inclusive learning environments that celebrate diversity and do not tolerate discrimination.

  • Strategies for discussing with students and colleagues how discrimination and inequity have shaped the field of psychology and the world around us.

  • Tips on engaging with students and colleagues across disciplines in activism to create change in classrooms, institutions, and communities.

  • Input on being compassionate and supportive to students, colleagues, and ourselves during these times.

We are also still committed to diversifying blog content to include submissions ranging from new research in the area of the Scholarship of Teaching and Learning (SoTL), public interest topics related to teaching and psychology, occasional book reviews, as well as continuing our traditional aim by including posts about teaching tips. Example topic areas include:

  • Highlights of your current SoTL research

  • Issues related to teaching and psychology in the public interest

  • Reviews of recent books related to teaching and psychology

  • Teaching tips and best practices for today's classroom

  • Advice for successfully navigating research and teaching demands of graduate school

  • Activities that align with the APA Guidelines 2.0 (these are especially welcome!)

The blog posts are typically short, about 500–1,000 words, not including references. Because the blog is an online medium, in-text hyperlinks, graphics, and even links to videos are strongly encouraged!

As we focus the spotlight on inclusion and non-discrimination, we will continue to provide graduate students and faculty with an outlet to share their experiences, ideas, and opinions regarding graduate students’ teaching practices.

If you would like for any questions to be addressed, you can send them to and we will post them as a comment on your behalf. If you are interested in submitting a post, please email us at 

Thanks for checking us out,

The GSTA Blog Editorial Team:

Hallie Jordan, Sarah Frantz, Maya Rose, Raoul Roberts, Tashiya Hunter, Laura Mason, and Megan Nadzan

Follow us on Twitter @gradsteachpsych or join our Facebook Group.

  • 18 Nov 2017 9:00 AM | Anonymous

    By Beth Morling, Ph.D., University of Delaware

    Here’s a tale from my graduate course on the teaching of psychology. It was the second half of the semester and students were engaged in microteaching, preparing short lessons for each other. On her chosen week, a third-year Ph.D. student delivered an intro psych lesson on learning theory. She started with a mini-lecture with illustrated slides, then performed a short demonstration of the phenomenon. She conveyed a warm personal presence, used student names, and delivered responsive feedback. Her demo involved every student in the room; the audience loved it. There was only one problem: Her slides had introduced classical conditioning terms, but her demonstration involved only operant conditioning. She didn’t realize she had muddled the difference between the two types of learning.

    And here’s a tale from the National Institute on the Teaching of Psychology (NITOP). During a keynote presentation, distinguished developmentalist Dr. Nora Newcombe (2016) described the weak scientific support for Piaget’s stage theory and presented alternatives such as Vygotskian and information processing approaches. She openly wondered why textbooks persist in their focus on Piaget, given how the field has moved on. She speculated that Piaget remains in textbooks because his stages are simple to teach. Testbank authors can easily write multiple choice questions about Piaget’s stages and students feel mastery easily. While many in the audience were inspired to modernize their lessons, others seemed to resist. Why are some teachers and textbooks content with outdated research?

    It might seem obvious that we need both pedagogy and accurate, modern content to be effective psychology teachers. However, these two events illustrate how sometimes content can take a back seat. 

    Faculty used to complain that “nobody ever teaches you how to teach in graduate school!” sometimes adding, “I only learned how to conduct research and read journal articles.” Graduate students didn’t get trained in pedagogy because they focused on developing expertise in the field.

    Luckily, the pedagogical training of graduate students has been improving. More graduate students take courses on teaching, and psychology’s vibrant teaching culture engages both faculty and graduate students. Teaching pre-conferences are attached to APS, SPSP, and SRCD, and there are free-standing teaching events such as NITOP and ACT. We’re developing a body of knowledge about active learning, course design, feedback, and student engagement. It’s all good. But our new focus on pedagogy should never eclipse expertise. Teachers of psychology need to know their content deeply, they need to know where students struggle with it, and they need to constantly update their understanding.


    There’s a saying that goes: “Good teachers can teach anything!” Or perhaps you’ve heard, “Those who can’t do, teach.” Although we don’t have much data at the college teaching level, the K-12 literature disagrees. Students learn more from teachers who have high levels of content knowledge in their specific discipline. For example, Willingham (2013) blogged about a study of middle school science teachers (Sadler et al., 2013). It found a main effect such that students learn more from teachers who know their stuff. The pattern was also moderated by student ability. When teachers were low in subject-matter knowledge, their high-ability students could still learn something—presumably from the textbook. But their low-ability students learned… nothing. At the college level, we might reason that if students just use think-pair-share, just-in-time teaching, and writing-to-learn, they will be engaged enough that learning will just happen. But such techniques are empty pedagogical shells until they are filled with content.

    We have to convey content to our students because critical thinking—the skill we all value highly—cannot take place in a content-free space.  Content knowledge enables better learning and thinking in our students (Willingham, 2006).

    Ruth Ault raised a similar point in the context of the job market. In a chapter about teaching at a liberal arts college, she wrote:  

    “When candidates boast that they can teach anything in the discipline, our suspicions are aroused that the person does not understand the rigor of our courses or the caliber of our students.” (Ault, 2014, p. 167)

    I think Ault’s statement is exactly right. A teaching-focused academic career does not preclude being steeped in the nerdy details of one’s discipline.

    Ideally, content knowledge includes knowing what students struggle with.  Shulman writes:
    “content knowledge includes an understanding of what makes the learning of specific topics easy or difficult: the conceptions and preconceptions that students of different ages and backgrounds bring with them to the learning of those most frequently taught topics and lessons.” (1986, p. 9).

    Indeed, the Sadler et al. study (2013) introduced above also measured teachers’ knowledge of student misconceptions. Teachers were best able to produce learning when they were content experts and when they knew what students struggled with.


    Building and Sustaining Content Knowledge

    How can you ensure your preparation for college teaching includes both pedagogy and content? First, as you develop expertise in graduate school, track metacognition as well. My microteaching student got into trouble because she didn’t know what she didn’t know. Metacognitive accuracy comes from feedback (and probably humility, too). Put yourself in situations that answer, “What do I still need to learn?” Chart the course of your own misconceptions and learning because it’s likely your students will get snagged in similar spots.

    Second, let your excitement about mastering content as a graduate student transition into a sustainable career of learning new things. I estimate that up to 90% of what I use in the classroom is stuff I learned after graduate school. My graduate education never touched behavioral genetics, gene-culture coevolution, zero-acquaintance accuracy, learning science, or Bayesian statistics, but I’ve learned them (OK… the last one’s still a work in progress). A lifetime of learning is probably what attracted you to the professoriate, but it’s not always easy. I’ll admit that when there’s been a body of knowledge I’ve needed to learn, I’ve grumbled and tried to avoid it. It can be hard on the ego to be the amateur in the room (see: Bayesian statistics, above). Acknowledge your resistance, but then get yourself to the library.

    You can keep your learning going by regularly attending academic conferences, and not only the sessions on pedagogy. Even at NITOP, we take care to make sure our program includes content updates by subject-matter experts as well as pedagogical talks. We know that our attendees need both.

    Although there are no shortcuts, an enjoyable approach is to read (or listen to) trade books written by psychologists. I follow a rule that my audiobooks have to be nonfiction, so I’ve “read” eight psychology-related titles this year. If you’re about to point out that such books are not peer-reviewed and don’t dig into the research details—you’re right. But when it comes to introducing research I should know about and providing excellent real-world examples, they are invaluable.



    Shulman (1986) noted that 100 years ago, U.S. qualifying exams tested teachers’ knowledge of mathematics, spelling, grammar, penmanship, history, and so on—with only a few questions about pedagogy. But now, K-12 teaching standards focus on pedagogical topics such as organization, classroom management, and cultural awareness, not content. Shulman asked, “Where did the subject matter go? What happened to the content?” (p. 5). In our own enthusiasm for the latest pedagogical techniques for psychology, let’s not let our content knowledge stagnate: Keep the balance between the two.

    As a member of GSTA, you’re commended for supplementing your rigorous content training with pedagogical engagement. As you embark on your career, I hope you’ll also find sustainable ways to deepen your expertise so you can share the constantly-changing wonders of our field with your students.  


    Ault, R. L. (2014). Four desirable qualities for teaching at a small liberal arts college. In J. N. Busler, B. C. Beins, & B. Buskist (Eds.), Preparing the new psychology professoriate: Helping graduate students become competent teachers (2nd ed.). Retrieved from the Society for the Teaching of Psychology web site:

    Newcombe, N. (2016, January 4). New Ways of Thinking about Cognitive Development: Implications for Teaching. Keynote presentation at NITOP, St. Petersburg Beach, FL.

    Sadler, P. M., Sonnert, G., Coyle, H. P., Cook-Smith, N., & Miller, J. L. (2013). Student learning in middle school science classrooms. American Educational Research Journal, 50, 1020-1049.

    Shulman, L. S. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15(2), 4-14.

    Willingham, D. (2006). How knowledge helps: It speeds and strengthens reading comprehension, learning—and thinking. American Educator (online edition).

    Willingham, D. (2013). What science teachers need to know. Downloaded from

  • 16 Nov 2017 4:00 PM | Anonymous

    By Rachel J. Chapman, Ph.D. Student in Urban Education, The Graduate Center, CUNY, and Teaching Fellow in Elementary & Early Childhood Education at Queens College

    Now more than ever, it is pertinent to provide a space for students to voice their experiences of schooling and culture as they relate to identity development. Most school curricula reflect the dominant group’s culture, and non-dominant narratives are often silenced. Silencing can lead to shame, doubt, cultural and language loss, and a feeling of not belonging. The Cultural Identity Map exercise is intended to foster community, relationship building, and awareness of cultural identity formation within U.S. society, while providing opportunities for students to practice empathy.

    Within CUNY alone, we enroll 500,000 primarily working-class and immigrant students, who come from communities around the world. More than half come from low-income families earning less than $30,000 a year (Edelman, 2016). Many come from economically devastated and war-torn regions, fleeing for survival and the need for a new life. The year 2017 has been marked by increasing attacks from racist and nationalist regimes, including the Trump administration. The Muslim ban, the proposed expansion of the border wall, increased police brutality, immigrant deportations, the injustice at Standing Rock, the dismantling of environmental and economic regulations, and the push to defund healthcare all constitute escalating attacks on working and immigrant communities.

    Writing from prison under Mussolini’s fascist regime in Italy, Antonio Gramsci’s work (2011) on cultural hegemony helps us understand that attacks on working class and immigrant communities can become normalized through consent to the dominant values and cultural norms. Because school curriculum tends to reflect the knowledge and norms of the dominant culture, the Cultural Identity Map exercise can provide a space for counter-narratives within the classroom, while also building identity awareness and community.

    For the Cultural Identity Map, begin by writing your first name in the middle of a large piece of paper or small poster. Because I am an instructor in Teacher Education, part of the map includes students’ experiences within the K-12 system. However, you can tailor it to your course content, incorporating other possibilities such as students’ hobbies, childhood pastimes, the meaning of their first names, spirituality practices, etc. Using crayons or markers, I ask the students to fill their papers with the following, represented by drawings and symbols:

    1. Three or four aspects of your culture.
    2. One aspect of your culture you like.
    3. One aspect of your culture you dislike or would like to change.
    4. One positive memory from school.
    5. One negative memory from school.

    In order to encourage a variety of designs, I grade the maps based on creativity and following directions. I generally give students 15 minutes to work in class, and they finish the rest for homework to present at the following class.

    According to Tatum (1997), development in late adolescence and adulthood is circular as we face new physical, psychological and social challenges. For example, late adolescence and adulthood are often marked with greater responsibility and employment concerns, as well as increased family and community involvement. Fear and silence regarding one’s identity can lead to isolation and difficulty in social relations and communication. Evidence from research shows that incorporating multicultural methods in the classroom builds group and self-awareness (Banks & Banks, 2016). It can also create spaces for storytelling, community and relationship growth. Additionally, it can lead to practicing empathy in listening to peers’ similar struggles with identity.


    Banks, J. A., & Banks, C. A. M. (2016). Multicultural education: Issues and perspectives.

    Edelman, M. (2016). CUNY Faculty’s Lost Decade & The Risk Ahead. The Gotham Gazette. Retrieved from:

    Gramsci, A., Buttigieg, J. A., & Callari, A. (2011). Prison notebooks. New York: Columbia University Press.

    Tatum, B. D. (1997). "Why are all the Black kids sitting together in the cafeteria?" and other conversations about the development of racial identity. New York: BasicBooks.

  • 15 Nov 2017 2:59 PM | Anonymous

    By Valkiria Duran-Narucki, Ph.D., Patricia J. Brooks, Ph.D., & Elizabeth S. Che, College of Staten Island, CUNY

    With the proliferation of “fake news” and the ever-present need to “fact check” information, we all need to exercise the critical thinking skills that accompany scientific research in our everyday lives. Efforts to curb the amount of poor quality information on the Internet are futile, particularly if we want to live in an open society with a freeform Internet where everyone has the opportunity to craft content and express themselves. A more effective and just approach would be to help our students to become educated citizens who can apply scientific thinking and research skills to make sense of current affairs and become more discerning consumers of information. In this blog post, we describe an activity developed for a research methods course in psychology and how we adapted it for an honors section of Introductory Psychology to develop critical thinking and research skills.

    Valkiria Duran-Narucki first introduced the activity in one of the first classes of a semester-long research methods course. Although students often fail to appreciate a research methods course because they see little connection between research methods and everyday problems, Dr. Duran-Narucki sought to demonstrate how research and critical thinking skills could help students evaluate information relevant to a pervasive “fake news” claim: that vaccines cause autism. Students were asked to watch two videos in order to gather information about autism and vaccines.

    Video 1: CDC Whistleblower Confesses to Vaccine-Autism Fraud. In this video, Andrew Wakefield describes how the U.S. Centers for Disease Control and Prevention lied about the safety of immunizations; the video also includes a comparison to the Public Health Service’s involvement in the Tuskegee Study.

    Video 2: Vaccines: An Unhealthy Skepticism. In this video, The New York Times’ RetroReport describes how a measles outbreak in Disneyland brought attention to parents who chose not to vaccinate their children and the bias a growing number of parents have against vaccinations.

    After watching the two videos, students were asked to write answers to the following prompt:

    A Facebook friend posted that she doesn’t know whether she should vaccinate her baby.

    What advice would you give her based on the evidence from the videos? How do you know whether the information you tell her is from a reliable source? If you wanted to share one of these videos on Facebook, which one would you choose and why?

    Students were encouraged to talk to each other or use their cell phones to search for information connected to the topic, but they had to write out their responses individually and turn them in. The assignment was graded pass/fail and served as a demonstration of the kinds of in-class assignments that would be used throughout the semester to help students develop skills in locating scientific evidence and evaluating claims from the media.

    Patricia Brooks and Elizabeth Che adapted this activity for their honors section of Introductory Psychology. At the start of the semester, we administered a 25-item Myth Busters quiz that included items such as “We only use 10% of our brain” and “If students do not drink sufficient amounts of water, their brains shrink.” Although our first-year students did pretty well on the quiz overall, 40% of them endorsed the statement that “Vaccines can cause autism,” which suggested that they had heard this view and assumed it to be true.

    In a subsequent class we showed students the two videos about the presumed risks and benefits of vaccinating infants, and asked them to take notes while watching each video. We then demonstrated how to check the facts about vaccines and autism using Google Scholar. We discovered that many of our first-year students had never heard of Google Scholar and had never looked at primary-source research articles. We used the search terms “autism” and “vaccination” and pulled up numerous articles that disputed the myth that “Vaccines can cause autism,” as well as articles such as Venkatraman, Garg, and Kumar (2015) documenting a proliferation of anti-vaccination views on the Internet, as identified via searches on Google and YouTube. We used time in class to read the abstracts of the journal articles retrieved, which gave some students their first exposure to scientific discourse.

    We then looked up Andrew Wakefield on Wikipedia to learn more about his medical career, his 1998 paper in the Lancet that claimed evidence for an association between the measles, mumps, and rubella (MMR) vaccine and occurrence of autism, and the subsequent allegations of scientific fraud, retraction of his research papers by journal editors, and the loss of his license to practice medicine.

    The first-year students, for the most part, were open to changing their beliefs about vaccines and autism. We then discussed the psychological phenomenon of illusory truth as a way of understanding how pseudoscientific beliefs are established through exposure to fake news and false claims. We also introduced the concepts of confirmation bias and belief perseverance to explain how people have biases to notice things that confirm their preexisting beliefs and to discount evidence that contradicts their beliefs.

    In both courses, the controversy around vaccines and autism proved to be fertile grounds for discussing how fake information is spread via social media and our responsibility, as informed citizens, to fact check what we read before jumping to conclusions. Most importantly, through activities like the one described, it is possible to show the relevance of critical thinking to real “life and death” situations, and to the everyday challenges that students experience in their current and future lives.


    Venkatraman, A., Garg, N., & Kumar, N. (2015). Greater freedom of speech on Web 2.0 correlates with dominance of views linking vaccines to autism. Vaccine, 33(12), 1422-1425.

  • 27 Oct 2017 6:00 PM | Anonymous

    By James Christopher Head, Ph.D. Candidate, The Graduate Center CUNY

    Hello out there.

    How are you?

    Who are you?

    What interests you?

    What do you find meaningful?

    If you could make this space amenable for things you find meaningful, would you find it more engaging?

    If you could use this space to pursue your interests, would you?

    These are some questions that interest me and compel me to spend much of my time thinking about pedagogy. They are some of the questions that inspired the pedagogical approach that I presented at the 2017 Pedagogy Day conference at the CUNY Graduate Center. I will briefly discuss the approach near the end of this blog post.

    These questions also provide some insight as to why I hesitated in writing this blog post. Several times over the past few years, GSTA leaders have asked me to write a post for the GSTA blog, and throughout that time, I have never found the idea appealing. The fact that I grew up in a pre-internet world probably has something to do with it, but I think my aversion to blogging, more likely, relates to some relational practices inherent in the act – some that I do not find very alluring, and some that I do not understand. I’m sure someone could educate me as to why I should embrace blogging, but I don’t think this kind soul would be relating to me, but rather, to a hypothetical you. They might even write a blog post about it: “Why you should embrace blogging.” I would prefer a dynamic conversation based on mutual respect and some shared sense of purpose. I would hope that the ethical commitment to avoid de-facing the other as much as possible (see Levinas, 1969), and to honoring the otherness of the other, could be operative in our interaction – at least by me.

    I have, quite often, had the misfortune of operating in spaces where the questions at the top of this post were not asked – classes where I was rendered passive, organizations where I was manipulated, jobs where I was exploited. I do not find these types of spaces – and the relationships they engender - appealing, and have little desire to replicate them in my pedagogical practice.

    I have also been involved in spaces that allowed me to negotiate how I would be in the space. For example, when I was a senior in high school, my art teacher allowed me to film and edit a skateboard video for my cumulative assignment – provided that the project incorporated principles I learned in class. I put more care, effort, and love into that project than anything I had done in any class up to that point. Since then, I have tried to take the lessons I learned while engaging in that project and apply them to other relational spaces – other classes, other jobs, other organizations. I have found that the more I am able to use these types of spaces as sites for the active construction of meaning – or for the active negotiation of how meaning could best be made – the more productive I have been, the more compelled I have felt to participate in whatever function the space was intended to serve, and the more I have appreciated my experience.

    These experiences influence a pedagogical approach I have been cultivating for years – one in which I structure courses in a manner that facilitates students’ pursuit of meaning-making projects and their negotiation of course parameters. In brief, this approach is grounded in structuring courses as self-reflective qualitative research projects, and it builds upon the notion that reflexive and reflective engagement in iterative, interpretive processes that foreground analysis, synthesis, and evaluation is central to both the production of qualitative research and the facilitation of deep and meaningful learning. In other words, this approach transforms the institutional requirement for comprehensive assessment into an occasion to do meaningful pedagogical work – namely, the facilitation of meaningful modes of engagement, the cultivation of deep thinking and writing, and the development of useful analytic skills. This approach rests on a relational reorientation of conventional course designs. Instead of structuring courses around statements like “In this course, students will learn X, Y, and Z,” this approach prioritizes questions like “How can students use this course to engage in projects that stoke their passions, cultivate their talents, and scratch their intellectual and existential itches?” and “How can I, as instructor, best aid students with their projects?” That is to say, this approach is grounded in pedagogical accompaniment, and encourages students’ active construction of their learning experience. This type of learning, I think, has a practical utility for students that extends well beyond our time together.

    At the Pedagogy Day conference at the CUNY Graduate Center on October 27th, 2017, Joshua W. Clegg and I presented this approach in more detail.  In so doing, we demonstrated how we have applied this approach in our courses.  For example, we discussed how we structured one class around the production of autoethnographic research projects in which students examined their experiences as they navigated a powerful social institution.  Throughout the course, students engaged in a variety of research practices (establishing research questions, designing a study, gathering data, conducting an analysis, writing a report, etc.) that they developed in relation to a topic of their choice.  These practices helped students develop research skills, but also facilitated post-formal critical thinking by creating opportunities for students to engage in those practices.  In describing how we have applied this approach, our intent was not to distribute pedagogical technologies, but to invite others to adapt this approach for their own purposes – to engage in a negotiative process that facilitates students’ engagement in negotiative processes.  That is to say, we aimed to share and work with an approach grounded in structuring courses around relational practices more in line with dynamic conversations based on mutual respect and a shared sense of purpose.


    Levinas, E. (1969). Totality and infinity (A. Lingis, trans.). Pittsburgh, PA: Duquesne University.

  • 18 Oct 2017 10:30 AM | Anonymous

    By Peri Yuksel, Ph.D., New Jersey City University (Email:, Twitter: @drperi_)

     “Bodily exercise, when compulsory, does no harm to the body; but knowledge which is acquired under compulsion obtains no hold on the mind.” 

    -Plato (student of Socrates)

    By now it is no secret that very few students complete their required textbook readings before coming to class, and a large number of students only start to read their textbook when preparing for an exam (Clump, Bauer, & Bradley, 2004; Phillips & Phillips, 2007). So why do we still believe that assigning required textbook readings is an effective means of making learning stick? Reading a textbook is viewed as essential to building a factual knowledge base, especially for introductory classes. Without such content knowledge, we assume, it is not possible for students to develop critical thinking and writing skills. Students tend to think that instructors will explain the whole textbook and tell them what will be on the test. Yet great teachers inspire and teach their discipline beyond the textbook, allowing students to reflect and connect their academic context to real-life settings.

    Given the difficulty of motivating students to read the textbook, I suggest that you assign a textbook that is affordable to your students and complement assigned readings with homework, group presentations, and in-class activities that require students to utilize their textbook. For example, when teaching Developmental Psychology, I use an older edition of Berk’s Development Through the Lifespan textbook and organize activities and assessments around the text to improve students’ memory by spacing learning over time (Dunlosky, Rawson, Marsh, Nathan, & Willingham, 2013). When students spend more time with the course material, they attend more closely to their own learning and develop metacognition. Here are four simple techniques I have used to augment the textbook reading experience.

    a)    Complement the Textbook with TED Talks

    I ask my students to watch ten TED Talks that complement the assigned textbook readings, and I prompt them to draw connections between the TED Talks and textbook content/course discussions (Yuksel, 2017). TED speakers come from an array of diverse backgrounds and offer multifaceted perspectives and cutting-edge research findings on topics that are also covered by a course textbook. Each of these elements helps students to understand global issues and fosters their understanding of the subject they are studying. By going beyond the textbook, TED Talks can inspire students to think about their own passions and conceive of ways to develop their own paths.

    b)    Demonstrate and Use the Textbook for In-Class Activities

    Reading about research methodology and theory can be dry and daunting, especially for students who have not taken a psychology course before and who have never seen a research lab. Periodically, I ask my students to bring in their textbooks and use them to complete an activity package (Experiments with Infants and Toddlers) that I have designed. Students see the exact same textbook images demonstrating the research paradigm (e.g., violation-of-expectancy, deferred imitation) and are asked to fill in information about the research question, study design, age of children, overall findings, and developmental explanations. In class, students also watch short video clips that illustrate the relevant experiments. These clips go beyond the textbook, create memorable visual images from real lab settings, and foster deeper learning and understanding of hypothesis testing (Berk, 2009).

    c)     Encourage Group Presentations Targeting the Textbook

    From a list of topics selected from the textbook, students pick one and give a short group presentation. In addition to creating a set of PowerPoint slides, students also submit a one-page summary paper discussing the relevance of the chosen topic to their current or future professional goals. By doing so they are signaling that this topic has self-reference and is worth remembering (Wade, Tavris, & Garry, 2014). The group presentations go beyond the textbook and allow students to collaborate on a focused project and apply ideas from the textbook to important societal problems. Students also gain insights into socio-political issues and learn techniques that help them make healthy and ethical choices.

    d)    Let Students Create Their Own Mind Maps to Organize Textbook Content

    Especially at the beginning of the semester, when the first exam is approaching, students often remind me that we have not covered the entirety of each assigned textbook chapter. I give them a simple answer: it is not important that we covered everything, but that you discovered something. I provide them with simple learning strategies and tools for organizing information from the textbook, such as outlining the chapters with relevant vocabulary and creating mind maps, i.e., visual diagrams that organize, summarize, and highlight their notes.

    There are many reasons why students do not read the textbook. If you explicitly integrate the textbook into your course activities and assessments and make reading relevant to psychological discoveries that go beyond the classroom setting, then students will be inspired to read and expand their views on the everyday science of psychology. They will come to understand that knowledge is power and contributes to creativity and imagination.


    Berk, L. E. (2014). Development through the Lifespan. New York: Pearson.

    Berk, R. A. (2009). Multimedia teaching with video clips: TV, movies, YouTube, and mtvU in the college classroom. International Journal of Technology in Teaching and Learning, 5(1), 1-21.

    Clump, M. A., Bauer, H., & Bradley, C. (2004). The extent to which psychology students read textbooks: A multiple class analysis of reading across the psychology curriculum. Journal of Instructional Psychology, 31(3), 227-232.

    Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students’ learning and comprehension by using effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14, 4–58.

    Phillips, B. J., & Phillips, F. (2007). Sink or skim: Textbook reading behaviors of introductory accounting students. Issues in Accounting Education, 22(1), 21-44.

    Wade, C., Tavris, C., & Garry, M. (2014). The Nine Secrets of Learning. Psychology (11th ed.). Upper Saddle River, N.J.: Pearson Education, Inc.

    Yuksel, P. (2017). Ten TED Talk thinking tasks: Engaging college students in structured self-reflection to foster critical thinking. In R. Obeid, A. Schwartz, C. Shane-Simpson, & P. J. Brooks (Eds.), How We Teach Now: The GSTA Guide to Student-Centered Teaching. Retrieved from the Society for the Teaching of Psychology web site:

  • 16 Oct 2017 10:00 AM | Anonymous

    By Tanzina Ahmed, Ph.D., Department of Social Sciences, Bronx Community College CUNY (Email:

    Do you know who Betsy DeVos is?

    Chances are, if you’re reading a blog on education, you know that name and exactly why it’s so revered (or reviled) in American schools today. Furthermore, you probably know how she, as the current Secretary of the United States Department of Education, is shaping the policies and practices of schools across the entire country. You may or may not be enthused about her political priorities, but you know of the pivotal role she plays in the education and lives of millions of Americans enrolled everywhere from pre-K to graduate school.

    Yet, if you asked a typical undergraduate class on child psychology or human development to identify her... what would you hear? From my experience, a chorus of crickets. A few students might recognize the name, but most don't know who she is or how her decisions profoundly influence students and their families across a wide span of ages and institutions. Even once they know who she is, they’re often at a loss to understand how her educational policies affect the lives of hundreds of millions of American students. Such would also be the case for previous Secretaries of the Department of Education, such as Arne Duncan and John King.

    It's easy to overlook the influence of people like Betsy DeVos precisely because that influence is so widespread and diffuse. Yet given recent political events, it's more important than ever to help psychology students in undergraduate institutions understand how the Secretary of Education and the administration she represents wield their power over the lives of others. Professors teaching developmental psychology classes may have a special responsibility to help their students understand how matters discussed in political science classes might shape people’s academic and career trajectories over a lifetime.

    However, it can be difficult to bring politics into the study of psychology in a way that helps students understand the issues at stake. Lecturing students on this issue usually doesn't help, especially since the intersection between politics and psychology can be abstract and murky. Thus, to help students put some “skin in the game” and understand how families are affected by the decisions of the Department of Education, I’ve created an activity called “Being Betsy DeVos.”

    In this assignment, I break students out into small groups to work together on questions about the American education system. I tell them that they are now all Betsy DeVos and need to answer critical questions about funding and promoting education within our country. Their challenges include the following:

    1. President Trump’s 2017 budget plan would take away $2.4 billion for teacher training grants and $1.2 billion for funding summer-school and after-school programs, weaken or eliminate funding for 20 educational programs, and cut $200 million in federal programs that help low-income, first-generation and disabled students (Bendix, 2017). What recommendations would you make to President Trump about funding these programs? What arguments would you make on what programs to keep and what to cut?
    2. Under the 2017 budget, the Trump administration would like to spend $1.4 billion to expand use of vouchers in public and private schools, eventually spending $20 billion a year in funding vouchers (Bendix, 2017). Many of these funds will go toward private and/or charter schools that hold different educational standards from public schools (for instance, some of these private and charter schools may choose to teach creationism rather than evolutionary biology). Do you support the President’s proposal to pull money away from public schools and redirect it toward private and/or charter schools? What are the pros and cons of his proposal?
    3. Should charter schools funded with public money have the same academic accountability standards as other public schools when they are all competing for the same students and resources? Should charter schools have the right to teach material they want to teach (such as leaving out evolutionary biology to teach creationism) and to not publicly report their students' grade and test scores on statewide exams?
    4. The Trump administration has argued that the federal government should get rid of Obama-era restrictions on giving federal funding to for-profit colleges (like University of Phoenix and DeVry) that allegedly use predatory sales techniques and are more likely than non-profit institutions (like CUNY) to leave students with large debts but no degree (Mitchell, 2017). Do you believe that the federal government should allow students to use federal funding for whatever college institution they wish to go to, even if these institutions have poor track records?

    Once students are sorted out into small groups to answer these questions, they work together for 30 to 45 minutes. They must write a set of notes on the major ideas and examples that come up during their discussion. They must also collaborate to present a 5-minute oral report on their answers to the rest of their classmates at the end of the class hour.

    In the past I have asked students to produce both the written and oral reports in order to challenge them to come up with strong supporting arguments to their answers. These two activities help them sharpen their collaborating, note-taking, and presenting skills.

    From my personal experience, students often end up vigorously debating the policies and priorities of the United States Department of Education when asked to answer these questions. In doing so, they articulate their own policy priorities and end up questioning those of others. For instance, I have known several religious students who have argued for more funding for charter schools and for the right to decide what their children should learn. On one memorable occasion, a student proclaimed: “Parents should have the choice to send their children to whatever schools they want. If I want my child to have a Christian education, they should have one!”

    Needless to say, she got pushback from several other students in her group, many of whom were concerned about students who would have to remain in traditional public schools while money was hoovered away into private and charter schools. As another student said, “[the] Government has to make rules to help every child, not just the lucky children who win the lottery [for vouchers or placement in private/charter schools]!”

    It can sometimes be a struggle to keep the class both excited and civil—and to ask them to be respectful toward the diverse views of their fellow-classmates. Yet students who partake in this activity often become more interested in understanding how the Department of Education works after engaging with the debates on education that are raging in the country today. In having to argue for one side or another in these thorny debates, students are confronted with the understanding that the policies they – or Betsy DeVos – promote will affect the lives of millions of American students.

    Betsy DeVos will not always be the Secretary of Education, any more than Donald Trump will always be the president. Still, as the years go by and Secretaries of Education come and go, psychology professors can continue to modify this activity to engage students in discussions of how politics shapes the lives of students from pre-K to the college level. By offering students the chance to work on the thorny questions of education policy today from a vantage point of power, professors can help them better understand how politics intersects with everyone’s development.

    Furthermore, by contrasting students’ decisions about funding or promoting educational policies and programs against the actual decisions of the Secretary of Education and the President of the United States, students can better understand how their priorities are not necessarily the priorities of those in power. This realization can shock students, but it can also help them understand the ramifications of political progress (or lack thereof) in their lives and the lives of others. Ultimately, this activity can help students understand how they are similar to or different from the people whose policies rule their lives, and how contemporary politics affects their development in many ways.


    Bendix, A. (2017, March). Trump’s education budget revealed. The Atlantic. Retrieved from

    Mitchell, J. (2017, June). Trump administration scraps Obama-era rules on for-profit colleges. Wall Street Journal. Retrieved from

  • 05 Oct 2017 10:00 AM | Anonymous

    By Karyna Pryiomka, Doctoral Candidate, The Graduate Center, CUNY

    In four years of teaching statistical methods in psychology, I have noticed that students often experience difficulty recognizing the relationship between a hypothetical construct, its operational definition, and the interpretation of results. This often leads to over-generalization, incorrect inferences, and other interpretive mistakes. Operational definitions of hypothetical constructs represent an important component of research in psychology. Operationally defining constructs and understanding the implications of these definitions for data interpretation then constitute key competencies for a psychology student. While operationalism is widely taught in research methods courses, its discussion in statistics courses is often reduced to a few paragraphs in an introductory chapter. To help my students better understand the collaborative, iterative, and context-bound process of creating appropriate operational definitions, I employ a low-stakes group activity during which students work in groups of 3 or 4 to create operational definitions of hypothetical constructs, such as confidence, in two distinct contexts: individual-level decision making and research design. The learning objective of the activity is to demonstrate the role of context in deciding how to operationalize a given construct and to illustrate the process of developing consensus about the meaning of constructs and their operational definitions.

    Here are the steps that I take to implement this activity:

    1.  I begin by assigning students into groups of 3 or 4, depending on class size. Ideally, at least 2 groups should work on the same problem. Each group receives only one variation of the problem. Below are examples of the two prompts. I give students about 15 minutes to work on the task in their groups.

    Individual-Level Decision-Making Context: A growing cat food company, Happy Kibble, is expanding its sales department and asks you, a group of industrial-organizational and personality psychologists, to use your expertise to help them hire the best salespeople so that they can convince cat owners around the country to switch to Happy Kibble. You know from research that people who are confident often make good salespeople. How would you define and operationalize confidence in this context in order to select a good employee? What questions would you ask candidates? What behavior would you pay attention to during an interview? Assume that the human resources office has pre-selected the candidates, so they all qualify for the job based on the minimum education and professional experience requirements.

    Research Context: A growing cat food company, Happy Kibble, has partnered with your research team to investigate whether there is any relationship between the confidence level of a salesperson and their professional success. Happy Kibble wants to conduct a real scientific study to answer this question. The company needs your expertise in defining and measuring confidence; however, you are on a tight budget, so conducting individual interviews might not be an option if you want to collect a large enough sample to draw meaningful conclusions. How would you define and operationalize confidence in this context in order to measure this trait in as many people as you can?

    2.  Once groups have created their definitions, a representative from each group is invited to write their definitions and measurement/assessment plan on the board.

    3.  I like to begin the discussion by emphasizing the differences between the two contexts. We then focus on establishing consensus among groups that worked on the same problem. We discuss the similarities and differences between the operational definitions produced by these groups, discuss the strengths and limitations of the proposed measurement/assessment plan, and reconcile any differences. We then compare the consensus definition produced for the interview context with the consensus definition produced for the research context. We outline key differences in contexts, discuss what type of evidence can be collected in each, and how the context influences the interpretation of data.
    For example, students in both contexts often mention eye contact as one of the behaviors representing confidence. We then discuss how they would measure or observe eye contact in the context of a job interview compared to a research study. Students in the job interview context point out that they would make direct qualitative judgments as they engage with the interviewees. Students in the research context often say that they would use video equipment to observe how sales representatives establish eye contact with their customers. In this context, then, unlike their colleagues conducting job interviews, students are less likely to make direct qualitative judgments about individual people, but would rather observe, record, and quantify their behavior remotely.
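    To make the contrast concrete, here is a small, purely hypothetical sketch (the names and numbers are invented for illustration and are not part of the activity itself) showing how the two operational definitions of confidence can rank the same people differently:

```python
# Hypothetical illustration: the same construct ("confidence") under two
# operational definitions can order the same people differently.
candidates = {
    # name: (interviewer rating on a 1-5 scale, coded eye-contact episodes
    #        per minute from video in the research context)
    "Ana":   (5, 2.0),
    "Boris": (3, 6.5),
    "Chen":  (4, 4.0),
}

# Interview context: direct qualitative judgment made by the interviewer.
by_rating = sorted(candidates, key=lambda n: candidates[n][0], reverse=True)

# Research context: remotely recorded, quantified behavior.
by_eye_contact = sorted(candidates, key=lambda n: candidates[n][1], reverse=True)

print("Most confident by interview rating:", by_rating)
print("Most confident by coded eye contact:", by_eye_contact)
```

The two orderings disagree, which is exactly the interpretive gap the class discussion aims to surface: conclusions about "confidence" depend on how the construct was operationalized.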

    In my experience, students eagerly engage in the discussion, justifying their decisions and challenging those of others. They also begin to ask questions and think critically about the inferences that could be made based on the operational definitions they have proposed. For instance, a group once suggested that a particular speech pattern or the use of specific words could constitute a variable to assess confidence. This suggestion led to a discussion of the relationship between language and existing standardized assessments (such as IQ tests), and of potential bias against non-native English speakers, making students question whether the proposed operational definition would fairly and accurately reflect someone’s confidence rather than another, potentially related trait.

    Overall, I have found this activity to be a great way to engage students in the discussion of important principles of research design, while promoting critical thinking about the role of operational definitions and measurement procedures in data collection and subsequent interpretation.

  • 29 Sep 2017 5:00 PM | Anonymous

    By Teresa Ober, The Graduate Center CUNY

    Dr. Jon E. Grahe is Professor of Psychology at Pacific Lutheran University. His research interests include interpersonal perception, undergraduate research, and crowd-sourcing science. The GSTA editing team recently had a chance to talk with Dr. Grahe about his views on how innovations in undergraduate education can be used to address some of the current problems facing psychological science. In speaking with him, we learned quite a lot about Open Science, the Replication Crisis, and the Collaborative Replication and Education Project. Here are some of our burning questions about these topics and an edited summary of Dr. Grahe’s responses.

    Teresa Ober: Let’s start with a simple question. What exactly is “Open Science”?

    Jon Grahe: There are two levels here when we talk about “Open Science.” At one level, we might be referring to open-access resources, which is not my primary focus, but it refers to making sure everyone can read all publications. At another level, we are talking about transparency in the research process. Transparency in the research process can be manifested in at least three ways, including: 1) sharing hypotheses and analysis plans, 2) sharing information about the data collection procedures; and 3) sharing data and the results of the research process.

    There are certain tools available today that allow researchers to conduct open science at the second level mentioned. Many of these tools are being developed by the people at the Center for Open Science, which was formed by Brian Nosek and Jeffrey Spies to encourage more scientific transparency. One of their products is the Open Science Framework, an electronic file cabinet with interactive features that makes it easier for researchers to be transparent during the research process and serves as a hub where researchers can document, store, and share content related to their research projects.

    TO: Why is Open Science so important?

    JG: When we learn about science as children, we are taught that replication and reproducibility is a big part of the scientific process. To achieve the possibility of replicating research, accurate documentation and transparency are necessary parts of the methods. Transparency is mainly what open science is about, and it is important because it allows us to test and retest our hypotheses. It is just fundamental for the scientific process of iterative hypothesis testing and theory development.

    TO: There has been discussion of the transparency of “Open Science” as a kind of revolution in the philosophy of science. What are your thoughts about this? Do you view it as a radical transformation, or as a natural continuation, given technological advancements or changed world views that make people more disposed toward sharing information in ways not previously possible?

    JG: The recent interest in openness in the scientific process has likely emerged from calls for improved quality of science, which hit a critical juncture after the replication crisis. Transparency in science also became more feasible with advances in technology that allow researchers to document and share research materials with relative ease. Before digital storage was cheap, it was very difficult to share such information. Social networking platforms also encourage more distant connections and allow for better working relationships between people who never meet face to face. The digital age allows us to experience this revolution.

    TO: Tell us a little more about the “Replication Crisis.”

    JG: When we talk about the replication crisis, it is important that we recognize that it affects psychological science, but not exclusively. Though the field of psychology emerged as the center of attention for this issue, other scientific disciplines are likewise affected, and in some ways, the crisis of replication happened to affect psychology sooner.

    The Replication Crisis in psychology seemed to emerge around 2011 as a result of three events. The first was a set of serious accusations against a researcher who had reportedly fabricated data in multiple studies. The second was the publication of findings that seemed outrageous and reflected a misuse of statistical procedures. The third was a general swelling of the volume of research that had been shown to fail to replicate. In general, when many doctoral students and other researchers attempted to replicate published, and supposedly established, research findings, they were unable to do so. Since then, similar scrutiny has spread to other fields as well. These issues have led some researchers to speculate that as many as half of all published findings are false.

    TO: How are “Open Science” initiatives such as the Open Science Framework attempting to address this issue?

    JG: By promoting transparency in the scientific process, replication becomes more feasible. In my own experience, I approached the replication crisis as a research methods instructor who saw a wasted resource in the practical training that nearly all undergraduate students must undertake. Before the crisis, my colleagues and I had been arguing for large-scale collaborative undergraduate research that was practical and involved efforts on the part of students to replicate research findings that had previously been published (see Grahe et al., 2012; School Spirit Study Group, 2004).

    TO: We’ve talked about how “Open Science” is good for research, but I am wondering if you could elaborate on how such initiatives can be good for preparing undergraduate and graduate students as researchers?

    JG: Over 120,000 students graduate each year with a psychology degree, of whom approximately 90-95% must take a research methods class to fulfill their degree requirements. Of those, it is estimated that approximately 70-80% also complete a capstone or honors project, and about 50% collect actual data to complete the project. Thus, there are tens of thousands of such projects that involve data collection each year in the U.S. alone. As a research methods instructor, I am concerned about making sure that my students have practical training that will help them professionally and allow them to learn about the research process more meaningfully. Further, by having class projects contribute to science, my work as an instructor was more clearly valued in tenure and promotion. In my classes, participating in “authentic” research projects is always a choice, and in my experience, many students embrace the chance to conduct an actual study and collect data, and are also excited to receive training in conducting open science.
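    A rough back-of-the-envelope check of the "tens of thousands" figure, using midpoints of the percentage ranges quoted above (these are Dr. Grahe's estimates, not exact counts):

```python
# Back-of-the-envelope estimate of undergraduate projects involving
# data collection per year, using midpoints of the quoted ranges.
psych_grads = 120_000    # psychology degrees awarded per year (approximate)
take_methods = 0.925     # ~90-95% take a research methods class
do_capstone = 0.75       # ~70-80% of those complete a capstone/honors project
collect_data = 0.50      # ~50% of those collect actual data

projects = psych_grads * take_methods * do_capstone * collect_data
print(f"~{projects:,.0f} data-collection projects per year")  # roughly 42,000
```

Even with conservative choices within those ranges, the estimate stays in the tens of thousands, which supports the claim that student projects are a substantial untapped research resource.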

    TO: This sounds very interesting. Tell us more about the Collaborative Replication and Education Project (CREP).

    JG: CREP is actually the fourth project that I have undertaken to engage undergraduates in “authentic” research experiences within a pedagogical context. The CREP is specifically geared toward replication, whereas the earlier projects were oriented toward getting students to contribute to science while learning to conduct research.

    As far as I know, the first-ever crowd-sourced study in psychology was published in a 2004 issue of Teaching of Psychology (School Spirit Study Group, 2004). That project leader found collaborators by inviting them to measure school spirit at both an institutional level and an individual level. Students could use the individual data for their class papers, and the different units of analysis made for interesting classroom examples.

    The same year this was published, the same project leader, Alan Reifman, invited us again to collectively administer a survey, this time about emerging adulthood and politics (Reifman & Grahe, 2016). Because the primary hypothesis was not supported, the data sat unexamined from about 2005 until about 2012. However, when I was starting to focus on increasing participation in these projects, I saw this data set (over 300 variables from over 1,300 respondents at 10 different locations) as a valuable demonstration of the projects' potential. We organized a special issue of the journal Emerging Adulthood in which nine different authors each answered a distinct research question using the data set. A follow-up study, called the EAMMi2, collected similar data from over 4,000 respondents gathered by researchers at 32 different locations. Both of these survey studies demonstrate that students can effectively help answer important research questions.

    In another undergraduate-focused survey project that occurred just before CREP was launched, Psi Chi collaborated with Psi Beta on their National Research Project (Grahe, Guillaume, & Rudmann, 2013). For this project, contributors administered the research protocol from David Funder’s International Situations Project to respondents in the United States.

    In contrast to these projects, the CREP focuses on students completing experimental projects, and students take greater responsibility for the project management. While I had one earlier attempt at this type of project, it didn’t garner much interest until the Replication Crisis occurred. At that point, there was greater interest from others in the argument that students could help test the reproducibility of psychological science. Of note, one of the earliest contributors was a graduate student teaching research methods. As we have developed over the past 5 years and learned how best to manage the system, I’m now curious to see if there are potential partners in other sciences. There is nothing in the name that says psychology, and the methods should generalize well to other disciplines.

    TO: Why is the logo for the CREP a bunch of grapes?

    JG: The logo for CREP consists of a grape, which helps prime people to say the acronym as a rhyme for “grape,” but it is also a useful metaphor for replication studies in science. When you think of replications, you can think about a bunch of grapes. Even though each of the grapes consists of the same genetic material, there is some variation in the size and shape of each grape. Each grape in a bunch is like the result of a replication study. Just as grapes of the same genetics can differ in relative size, replications examining the same question will vary in sample size, yielding different-sized confidence intervals. And replications can’t be exact; they are only close. While grapes on the same vine may have slight differences in taste due to variability in ripeness, replication studies can have subtle differences in their conclusions, even while striving to test the same underlying phenomenon. Replication studies can only be close, never exact, because of differences in the participants or researchers conducting the study, research contexts of time and location, slight variations in materials, and so forth. These differences can produce vastly different results even if the effect is still there. Conducting a successful replication study doesn’t mean you’re guaranteed to find the same effect.

    And of course, there are varieties of grapes, just as there are varieties of replications. Close replications and conceptual replications address different questions, just as different varieties have different flavors. The CREP has a Direct+ option where contributors can add their own questions to the procedure, as long as the additions come after the original study or are collected as additional conditions. This more advanced option provides different flavors of research for the CREP. There are many comparisons that make grapes a good metaphor for replication science, and I hope that the CREP continues to help students contribute to science while learning its methods.
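    The point about sample sizes yielding different-sized confidence intervals can be sketched quickly (the standard deviation here is an arbitrary illustrative value): the half-width of a 95% confidence interval for a mean shrinks in proportion to the square root of the sample size.

```python
import math

# Half-width of a 95% confidence interval for a sample mean, assuming a
# known standard deviation. Quadrupling n halves the interval's width.
sd = 15.0   # illustrative standard deviation
z = 1.96    # critical value for a 95% confidence level

widths = {n: z * sd / math.sqrt(n) for n in (25, 100, 400)}
for n, hw in widths.items():
    print(f"n = {n:>3}: mean +/- {hw:.2f}")
```

Two replications of the same study run with different sample sizes will therefore draw legitimately different-sized "grapes" around the same underlying effect.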

    TO: If I can ask a follow-up question, then what could be considered a “successful replication”?

    JG: For researchers, a successful replication is one that, to the best of the researcher’s abilities, stays as close as possible to the methods of the original study. It is not about whether a finding comes out a certain way. For students, a successful replication is further demonstrated when the student understands the hypothesis and why the study was designed to test that hypothesis. Can they reproduce the results correctly, and can they interpret the findings appropriately? In other words, did they learn to be good scientists while generating useful data?

    TO: If you are working with students on a CREP replication study, do you allow them the freedom to choose a paper to replicate, or should instructors be a little more direct in this process?

    JG: The selection process for choosing replications is straightforward. Each year we identify a pool of highly cited articles, about 36 studies in total. We then code them for feasibility of undergraduate replication and select those that are most feasible. We do this not based on which materials are available, because the researchers are often willing to provide these, but rather to identify important studies that students can complete during a class.

    In my classes, students have complete choice on what studies they want to conduct, and often there are options beyond the CREP. However, I know others who provide students a list of studies that will be replicated or limit choice in other ways. There are many methods and the instructor should find a system they like the best.

    TO: How can graduate student instructors become more involved in the CREP initiative?

    JG: The CREP website gives instructions on what to do. In my chapter in the recent GSTA handbook, I talk about conditions for authentic research to be successful. If there is no IRB currently in place for conducting the research with undergraduates, then it simply cannot happen. The institution, department, and any supervising research staff need to be on board with it. When considering authentic research opportunities, it is always a good idea to talk to the department chair.

    For graduate students who would like to get involved with CREP, we are always looking for reviewers. The website contains some information about how to apply as a reviewer.

    Another thing that graduate student instructors can do is take the CREP procedures and implement them in a course. The Open Science Framework is a great tool, and even if an instructor cannot use the CREP for whatever reason, they could use the Open Science Framework to mimic the open science trajectory. Even if data never leave the class, there is information on the CREP website about workflows and procedures.

    TO: What sorts of protections are there for intellectual property under the Open Science Framework?  Can you briefly explain how the Creative Commons license protects the work of researchers who practice “Open Science”?

    JG: The Open Science Framework allows you to choose licenses for your work. In terms of further protections, the Open Science Framework itself doesn’t really provide protections on intellectual property, but rather the law itself does. If a research measure is licensed and published, there is still nothing that protects it except for the law. In any case, following APA guidelines and reporting research means that you are willing and interested in sharing what you do and your findings.

    TO: We see that you just recently came back from the “Crisis Schmeisis” Open Science Tour. Tell us how that went and about the music part.

    JG: Earlier this year, I agreed to conduct a workshop in southern Colorado. Because I’m on sabbatical, I decided to drive instead of fly and scheduled a series of stops throughout several states. These travels became the basis of the “Crisis Schmeisis” tour. In total, there were 13 meetings, workshops, or talks at 7 different institutions. I had the chance to speak with provosts and deans, as well as students in research methods classes or at Psi Chi social events. During these visits, I showed how to use the Open Science Framework for courses or research, or gave talks about the CREP or EAMMi2 project as demonstrations of ways to interface with the Open Science Framework.

    I somewhat jokingly called this the “Crisis Schmeisis” tour to help explain that even if someone doesn’t believe there is a replication crisis, the tools that emerged are beneficial and worthwhile to all. Throughout the year, I will continue the tour by extending existing trips to visit interested institutions.

    TO: The Crisis Schmeisis tour almost looks like a musical tour. Is that intentional?

    JG: It is. I am also planning to write a series of songs about scientific transparency. Because it is an “Open Science Album,” I’m putting the songs on the OSF. There is a rough track of the first song, titled “Replication Crisis.” The lyrics of the song convey the basic issues of the crisis, and I’m hoping that other open scientists will add their own tracks so that there is a crowd-sourced recording. I’m currently practicing “Go Forth and Replicate” and have a plan for a song about preregistration. My goal is to complete 12 songs and to play them live at the Society for the Improvement of Psychological Science conference next summer.

    TO: What happened in your career as a researcher or teacher that inspired you to become an advocate for the OSF?

    JG: During my first sabbatical, I was very unhappy with my place as a researcher and scholar. Did you know that the modal number of citations for all published manuscripts is exactly zero? That means that most published work is never cited, even once. As a researcher, I thought about my frustrations around working on something that would not matter, and as a teacher, I was concerned that students were not getting good practical training.

    At one point during my first sabbatical, I became frustrated while revising a manuscript after receiving feedback from an editor. Instead of being angry about a manuscript that might never get cited anyway, I thought about areas where I was passionate and might be able to make a difference. I decided there was a better way to involve undergraduates in science, and that there were likely many research methods instructors like me who were also feeling underused and undervalued. After that point, my career changed direction. At the time I was formulating these ideas, it was not about open science per se; it was really about undergraduates making contributions and gaining experience from them.

    TO: Beyond replicability--what is the next crisis facing psychological science and how can we prepare?

    JG: I would like to see an interest in more expressive behaviors rather than key-strokes that typically define the research process. So much of the research that is conducted in a psychological lab is pretty far removed from daily interactions and I would like to see psychologists work harder to demonstrate meaningful effect sizes in authentic settings. The size of some of the effects we find in research are quite small, and it seems that we spend a lot of time talking about effect sizes that explain less than 3% of the variability in a given outcome variable.
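    To put that “less than 3% of the variability” figure in perspective, variance explained is simply the square of the correlation coefficient, so the correlations involved are modest. A minimal sketch of the arithmetic (the numbers are illustrative, not taken from any particular study):

    ```python
    import math

    # Variance explained is the square of the correlation coefficient r.
    # An effect explaining 3% of the variance corresponds to:
    r = math.sqrt(0.03)
    print(f"r = {r:.2f}")  # about 0.17

    # Even a correlation conventionally labeled "medium" (r = .30)
    # explains less than a tenth of the variance in the outcome.
    print(f"variance explained at r = .30: {0.30 ** 2:.0%}")  # 9%
    ```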

    TO: Any final thoughts?

    JG: Just a note about the distinction between preregistration and preregistered reports, which often get confused in the Open Science discourse. Preregistration is the act of date-stamping hypotheses and research plans. Preregistered reports are a type of manuscript in which the author submits an introduction, methods, and preregistered analysis plan; the editors make a decision to publish based on that information, because the study is important regardless of how the results come out. There is also the possibility of writing and submitting an entire manuscript that includes a preregistration as part of it. I see a lot of confusion about this topic.


    Bhattacharjee, Y. (2013, April). The mind of a con man. The New York Times [Online]. Retrieved from

    Carey, B. (2011, January). Journal’s paper on ESP expected to prompt outrage. The New York Times [Online]. Retrieved from

    Grahe, J. E. (2017). Authentic research projects benefit students, their instructors, and science. In R. Obeid, A. Schwartz, C. Shane-Simpson, & P. J. Brooks (Eds.), How we teach now: The GSTA guide to student-centered teaching (pp. 352-368). Retrieved from the Society for the Teaching of Psychology web site:

    Grahe, J. E., Reifman, A., Hermann, A. D., Walker, M., Oleson, K. C., Nario-Redmond, M., & Wiebe, R. P. (2012). Harnessing the undiscovered resource of student research projects. Perspectives on Psychological Science, 7(6), 605-607.

    Hauhart, R. C., & Grahe, J. E. (2010). The undergraduate capstone course in the social sciences: Results from a regional survey. Teaching Sociology, 38(1), 4-17.

    Hauhart, R. C., & Grahe, J. E. (2015). Designing and teaching undergraduate capstone courses. John Wiley & Sons.

    Ioannidis, J. P. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124.

    Pashler, H., & Wagenmakers, E. J. (2012). Editors’ introduction to the special section on replicability in psychological science: A crisis of confidence? Perspectives on Psychological Science, 7(6), 528-530.

    School Spirit Study Group. (2004). Measuring school spirit: A national teaching exercise. Teaching of Psychology, 31(1), 18-21.

  • 28 Sep 2017 5:00 PM | Anonymous

    By Regan A. R. Gurung, University of Wisconsin-Green Bay

    There are many ways to learn. I like to think that armed with a curious mind and the right resources and motivation, anyone can learn by themselves. Of course, when we think of learning we don’t usually think of the solo pursuits of motivated individuals. We tend to think of schools and colleges. While master teachers can inspire with their passion and masterfully deliver content, most students rely heavily on the course materials faculty assign (though the students may not always read all of them) to solidify content acquisition. Consequently, the quality of course material is of paramount importance. For years, faculty required students to buy textbooks. Students mostly bought them (and sometimes read them). Now there are a variety of free resources available. How do they compare to the expensive versions? Are they all created equal?

    Once upon a time, you could rely on the simple heuristic that “pricey equals quality.” After all, standard textbooks (STBs) have the backing of major publishing companies that invest large sums of money to ensure quality products. The development editors, the slew of peer reviewers examining every draft of every chapter, and the focus groups should ensure a quality product. Then there are the bells and whistles. STBs are packed with pictures and cartoons, and come with a wide array of textbook technology supplements (online quizzes, etc.; Gurung, 2015). Many believe that if an STB is put out by a publisher with a recognizable name, it must be good; if a familiar author writes an STB, it must be good. In fact, these are all empirical questions that are never really tested. The market research that big publishers cite and the student and faculty endorsements peppering the back covers and promotional materials of STBs rarely (if ever) represent true empirical comparisons of learning. To be fair, true comparisons of learning are difficult: a variety of factors (the student, the teacher, the textbook) all influence learning, which makes such research hard to do.

    Are all STBs equal? In one study I did some years ago, students rated a number of the most-adopted textbooks in the introductory psychology market (Gurung & Landrum, 2012). Students did differentiate between texts, rating some books better than others, but does student preference matter? In a number of national studies, colleagues and I had students using different textbooks take a common quiz (ours), giving us a common measure of learning (Gurung, Daniel, & Landrum, 2012; Gurung, Landrum, & Daniel, 2012). Quiz scores did not vary: students seem to learn similarly from different textbooks regardless of the publisher. But now for the big question: given that STBs are extremely expensive (and students complain), what about textbooks that are free?

    Enter Open Educational Resources (OERs). OERs provide students and faculty with free electronic materials. For a great review of the growth of the OER movement, see Jhangiani and Biswas-Diener (2017). The OER movement sprouted from the creation of MERLOT by California State University in 1997. MERLOT provided access to curriculum materials for higher education, and Open Access and the Budapest Open Access Initiative further fueled the rise of the OER movement. OER strode into the public consciousness when MIT, with funding from the Mellon and Hewlett foundations, created OpenCourseWare, online courses designed to be shared for free. Are OERs better than STBs?

    The best studies using standardized or similar exams show no differences in exam scores between OER users and STB users. Sadly, the bulk of the studies available are fraught with limitations and validity issues. In an attempt to transcend the limitations of extant studies, I recently published a study (Gurung, 2017) comparing a group of OER users to STB users. In two large, multi-site studies, I compared students using OERs with students using STBs, and measured key student variables such as study techniques, time spent studying, ratings of the instructor, and rating of the quality and helpfulness of the textbook. All students completed a standardized test using a subset of items from a released Advanced Placement exam.

    In both studies, students using an OER scored lower on the test after controlling for ACT scores. Study 2 also compared book format (hard copy or electronic) and showed that OER hard-copy users scored lowest. Textbook type predicted significant variance in learning over and above ACT scores and student variables. The results provide insight into the utility of OERs and the limitations of current attempts to assess learning in psychology. On the upside, students using an OER rated the material as more applicable to their lives.
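    The logic of testing a textbook effect “over and above” ACT scores can be sketched as an analysis-of-covariance-style comparison: regress out the covariate, then compare what remains across groups. A minimal illustration with invented numbers (none of these data come from the study):

    ```python
    from statistics import linear_regression, mean

    # Hypothetical data: an aptitude covariate (e.g., ACT score) and a
    # common quiz score for students using two kinds of textbook.
    act   = [20, 24, 28, 22, 26, 30, 21, 25, 29, 23]
    quiz  = [61, 70, 82, 64, 75, 88, 66, 76, 87, 70]
    group = ["OER", "OER", "OER", "OER", "OER",
             "STB", "STB", "STB", "STB", "STB"]

    # Step 1: regress quiz scores on the covariate alone.
    slope, intercept = linear_regression(act, quiz)

    # Step 2: residualize -- what is left of each quiz score after
    # removing the part predicted by the covariate.
    resid = [q - (slope * a + intercept) for a, q in zip(act, quiz)]

    # Step 3: compare group means of the residuals; a nonzero gap is the
    # textbook-type difference over and above the covariate.
    gap = (mean(r for r, g in zip(resid, group) if g == "STB")
           - mean(r for r, g in zip(resid, group) if g == "OER"))
    print(f"adjusted STB-OER difference: {gap:+.2f} quiz points")
    ```

    A full analysis would of course fit both predictors simultaneously and test the increment in variance explained; the residualization above just makes the “controlling for” idea concrete.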

    When we talk about quality in higher education, we tend to rely on the credibility of authors and the peer review process. While my findings urge caution in using OERs, they also shed light on how little learning-outcome data exist for the use of STBs. Faculty still adopt these books, requiring students to pay thousands of dollars a year in textbook costs.

    Well-curated OERs, those whose writing and content are monitored, reviewed by peers, and contributed by credible sources, deserve to likewise bask in the reflected glory of STBs. While OERs are ready for their time in the spotlight, scholars of teaching and learning need to work to assess the true quality of all educational resources. OERs present the opportunity for every member of the public to learn at no cost. We all need to pay attention to what we can get for free, but also to ensure materials are tested for effectiveness.


    Gurung, R. A. R. (2015). Three investigations of the utility of textbook teaching supplements. Psychology of Learning and Teaching, 1, 48-59.

    Gurung, R. A. R. (2017). Predicting learning: Comparing an open educational resource and standard textbooks. Scholarship of Teaching and Learning in Psychology, 3, 233-248.

    Gurung, R. A. R., Daniel, D.B., & Landrum, R. E. (2012). A multi-site study of learning: A focus on metacognition and study behaviors. Teaching of Psychology, 39, 170-175. doi:10.1177/0098628312450428

    Gurung, R. A. R., & Landrum, R. E. (2012). Comparing student perceptions of textbooks: Does liking influence learning? International Journal of Teaching and Learning in Higher Education, 24, 144-150.

    Gurung, R. A. R., Landrum, R. E., & Daniel, D. B. (2012). Textbook use and learning: A North American perspective. Psychology of Learning and Teaching, 11, 87-98.

    Jhangiani, R. S., & Biswas-Diener, R. (Eds.). (2017). Open: The philosophy and practices that are revolutionizing education and science. Retrieved from

  • 28 Sep 2017 10:00 AM | Anonymous

    By Jessica Murray, The Graduate Center CUNY

    The relentless forward march of technology can be overwhelming at times, for students and teachers alike. It doesn't help that some public universities fall behind in keeping up with the latest technology because of limited financial resources, or choose proprietary tools that become familiar, only to replace them with cheaper options later on. The Futures Initiative started out a few years ago with a mission to reshape higher education. One of its key aims was to use network and communications tools to build community and foster greater access to technology. At the time, the CUNY Academic Commons (built in WordPress) was available only to graduate students, so the Futures Initiative created a new WordPress multisite, or network of sites, that was open for graduate students to develop course sites they could use with their undergraduate students. As part of my role as a fellow for the Futures Initiative, I maintain this website and teach people how to create their own sites on our network. Many schools now host platforms like ours and the CUNY Academic Commons, and if your school doesn't offer a place for you to create your own website, you can also create one yourself. This post offers a brief introduction to WordPress, but more importantly, encouragement, or what I'm calling my "pep-talking points." Hopefully by the time you finish reading, I will have convinced you to create your own course website on WordPress.

    WordPress is a free and open-source content management system that has grown from a small blogging platform in 2003 into the most commonly used website creation platform in the world, accounting for more than 25% of all the websites on the entire internet. For those unfamiliar with the lingo, open source means that the core software, the more than 50,000 plugins that extend its functionality, and the thousands of themes that control the look and feel of WordPress sites are developed by a community of programmers around the world. Content management system describes the very act of putting a website together (managing and displaying different types of content) and, more importantly, names a tool designed so people without coding experience can create and edit the content of their site with a web browser. Before WordPress and other content management systems, we had to create static web pages in HTML, including placing text, images, and hyperlinks into the appropriate spots, styling the pages with CSS, uploading all of the files via FTP, and testing to see how they displayed on different browsers. Back then, if your web designer went on vacation, you may have had to wait for their return so they could fix a typo; today, you can log in to your site, fix the error, and publish the changes in a few minutes without special software. This may be appreciated more by people who remember the old way of doing things (myself included), but it also demonstrates pep-talking point number one: technology is getting easier, not harder. Once you get started, you'll see how easy it really is.

    The Futures Initiative now hosts more than 50 course websites, some of which have more than 30 users, which illustrates pep-talking point number two: if hundreds of people at CUNY can create dozens of course websites in only a few years' time, you can, too! Here at CUNY, some teachers have chosen to use WordPress instead of Blackboard because it can do all of the same things. One major benefit is that teachers retain control over how they use the site once their class is finished. Sharing documents securely, having a place for your syllabus, and creating discussion forums are some of the functions that can be replicated on WordPress. There are also some things that WordPress can do that Blackboard can't, a major one being the opportunity for your students to write public posts. This is directly related to pep-talking point number three: creating content with WordPress is empowering! I have witnessed the undeniable look of satisfaction on the faces of many a workshop participant when they figure out how to add a header image to a page, publish their first test post, and see their changes happen in real time. Once that happens, they're hooked. Making a website doesn't have to be daunting, and it won't be once you start creating your content. And while you're doing it (pep-talking point number four): you and your students are learning valuable, marketable skills that will not only be a great addition to your CV, but also give you the tools to create your own online identity without costing you a dime. If I still haven't convinced you, and you don't know where to start, let me give you pep-talking point number five: start anywhere! WordPress has a fairly limited number of menu options. The key is to realize that you won't break anything you can't fix, and the very best way to learn any software is to try things and see what happens. If you get stuck, Google your question and you'll find countless resources from a massive online community.
There is a lot you can do with WordPress, but the most important thing is to publish that first post, bask in the glow of satisfaction that can only come from creating your own little sliver of the internet, and plan to inspire that confidence in your students by creating a shared course website in WordPress. 
