GSTA Interview: A Conversation with Jon Grahe about Open Science and How Practical Undergraduate Research Experiences Can Be Used to Address the Replication Crisis

29 Sep 2017 5:00 PM

By Teresa Ober, The Graduate Center CUNY

Dr. Jon E. Grahe is Professor of Psychology at Pacific Lutheran University. His research interests include interpersonal perception, undergraduate research, and crowd-sourcing science. The GSTA editing team recently had a chance to talk with Dr. Grahe about his views on how innovations in undergraduate education can be used to address some of the current problems facing psychological science. In speaking with him, we learned quite a lot about Open Science, the Replication Crisis, and the Collaborative Replication and Education Project. Here are some of our burning questions about these topics and an edited summary of Dr. Grahe’s responses.

Teresa Ober: Let’s start with a simple question. What exactly is “Open Science”?

Jon Grahe: There are two levels here when we talk about “Open Science.” At one level, we might be referring to open-access resources, which is not my primary focus, but it refers to making sure everyone can read all publications. At another level, we are talking about transparency in the research process. Transparency in the research process can be manifested in at least three ways: 1) sharing hypotheses and analysis plans; 2) sharing information about data collection procedures; and 3) sharing data and the results of the research process.

There are certain tools available today that help researchers practice open science at this second level. Many of these tools are being developed by the people at the Center for Open Science, which was founded by Brian Nosek and Jeffrey Spies to encourage more scientific transparency. One of their products is the Open Science Framework, an electronic file cabinet with interactive features that makes it easier for researchers to be transparent during the research process; it serves as a hub where researchers can document, store, and share content related to their research projects.

TO: Why is Open Science so important?

JG: When we learn about science as children, we are taught that replication and reproducibility are a big part of the scientific process. To make replication possible, accurate documentation and transparency are necessary parts of the methods. Transparency is mainly what open science is about, and it is important because it allows us to test and retest our hypotheses. It is fundamental to the scientific process of iterative hypothesis testing and theory development.

TO: There has been discussion of the transparency behind “Open Science” as a kind of revolution in the philosophy of science. What are your thoughts about this? Do you view it as a radical transformation, or as a natural continuation, given technological advancements or changed worldviews that make people more disposed toward sharing information in ways not previously possible?

JG: The recent interest in openness in the scientific process likely emerged from calls to improve the quality of science, which reached a critical juncture with the replication crisis. Transparency in science also became more feasible with advances in technology that allow researchers to document and share research materials with relative ease; before digital storage was cheap, it was very difficult to share such information. Social networking platforms also encourage more distant connections and allow for good working relationships between people who never meet face to face. The digital age allows us to experience this revolution.

TO: Tell us a little more about the “Replication Crisis.”

JG: When we talk about the replication crisis, it is important to recognize that it affects psychological science, but not exclusively. Though psychology emerged as the center of attention for this issue, other scientific disciplines are likewise affected; in some ways, the crisis simply surfaced in psychology sooner.

The replication crisis in psychology seemed to emerge around 2011 as a result of three events. The first was a set of serious accusations against a researcher who had reportedly fabricated data in multiple studies (see Bhattacharjee, 2013). The second was the publication of findings that seemed outrageous and reflected a misuse of statistical procedures (see Carey, 2011). The third was a general swelling in the volume of published research shown to fail to replicate (see Pashler & Wagenmakers, 2012): when doctoral students and other researchers attempted to replicate published, and supposedly established, findings, they were often unable to do so. Since then, similar scrutiny has spread to other fields as well. These issues have led some researchers to speculate that as many as half of all published findings are false (see Ioannidis, 2005).

TO: How are “Open Science” initiatives such as the Open Science Framework attempting to address this issue?

JG: By promoting transparency in the scientific process, replication becomes more feasible. In my own experience, I approached the replication crisis as a research methods instructor who saw a wasted resource in the practical training that nearly all undergraduate students must undertake. Even before the crisis, my colleagues and I had been arguing for large-scale collaborative undergraduate research that was practical and involved students replicating previously published findings (see Grahe et al., 2012; School Spirit Study Group, 2004).

TO: We’ve talked about how “Open Science” is good for research, but I am wondering if you could elaborate on how such initiatives can help prepare undergraduate and graduate students as researchers?

JG: Over 120,000 students graduate each year with a psychology degree, of whom approximately 90-95% must take a research methods class to fulfill their degree requirements. Of those, it is estimated that approximately 70-80% also complete a capstone or honors project (see Hauhart & Grahe, 2010, 2015), and about 50% of those collect actual data to complete the project. Thus, there are tens of thousands of such projects involving data collection each year in the U.S. alone. As a research methods instructor, I am concerned with making sure that my students get practical training that helps them professionally and lets them learn about the research process more meaningfully. Further, by having class projects contribute to science, my work as an instructor is more clearly valued in tenure and promotion. In my classes, participating in “authentic” research projects is always a choice, and in my experience, many students embrace the chance to conduct an actual study and collect data, and are excited to receive training in conducting open science.
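
To put rough numbers on the “tens of thousands”: taking midpoints of the estimates above, 120,000 × 0.925 × 0.75 × 0.50 ≈ 42,000 student projects involving data collection each year.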

TO: This sounds very interesting. Tell us more about the Collaborative Replication and Education Project (CREP).

JG: CREP is actually the fourth project I have undertaken to engage undergraduates in “authentic” research experiences within a pedagogical context. The CREP is specifically geared toward replication, whereas the earlier projects were oriented toward getting students to contribute to science while learning to conduct research.

As far as I know, the first-ever crowd-sourced study in psychology was published in a 2004 issue of Teaching of Psychology (School Spirit Study Group, 2004; http://www.tandfonline.com/doi/abs/10.1207/s15328023top3101_5). The project leader recruited collaborators by inviting them to measure school spirit at both an institutional level and an individual level. Students could use the individual-level data for their class papers, and the different units of analysis made for interesting classroom examples.

The same year that was published, the project leader, Alan Reifman, again invited us to collectively administer a survey, this time about Emerging Adulthood and Politics (Reifman & Grahe, 2016). Because the primary hypothesis was not supported, no one bothered to examine the data from about 2005 until about 2012. However, when I was starting to focus on increasing participation in these projects, I saw this data set (over 300 variables from more than 1,300 respondents at 10 different locations) as a valuable demonstration of the projects’ potential. We organized a special issue of the journal Emerging Adulthood in which nine different authors each answered a distinct research question using the data set. A follow-up study, the EAMMi2, collected similar data from over 4,000 respondents via researchers at 32 different locations. Both of these survey studies demonstrate that students can effectively help answer important research questions.

In another undergraduate-focused survey project, which occurred just before CREP was launched, Psi Chi collaborated with Psi Beta on their National Research Project (Grahe, Guillaume, & Rudmann, 2013). For this project, contributors administered the research protocol from David Funder’s International Situations Project to respondents in the United States.

In contrast to these projects, the CREP focuses on students completing experimental projects, and students take greater responsibility for project management. While I had one earlier attempt at this type of project, it didn’t garner much interest until the replication crisis occurred. At that point, there was greater interest in the argument that students could help test the reproducibility of psychological science. Of note, one of the earliest contributors was a graduate student teaching research methods. Now that we have developed the project over the past five years and learned how best to manage the system, I’m curious to see whether there are potential partners in other sciences. There is nothing in the name that says psychology, and the methods should generalize well to other disciplines.

TO: Why is the logo for the CREP a bunch of grapes?

JG: The logo for the CREP is a bunch of grapes, which primes people to pronounce the acronym so that it rhymes with “grape,” but it is also a useful metaphor for replication studies in science. Each grape in a bunch is like the result of a replication study: even though the grapes share the same genetic material, they vary somewhat in size and shape. Just as grapes with the same genetics can differ in size, replications examining the same question vary in sample size, yielding different-sized confidence intervals. And replications can never be exact, only close. Just as grapes on the same vine may differ slightly in taste due to variability in ripeness, replication studies can reach subtly different conclusions while striving to test the same underlying phenomenon. Replications differ because of the participants and researchers conducting the study, the research context of time and location, slight variations in materials, and so forth. These differences can produce vastly different results even if the effect is still there, so conducting a successful replication study doesn’t guarantee finding the same effect.

And of course, there are varieties of grapes, just as there are varieties of replications: close replications and conceptual replications address different questions, just as different varieties have different flavors. The CREP has a Direct+ option where contributors can add their own questions to the procedure, as long as they are administered after the original study’s materials or collected as additional conditions. This more advanced option provides different flavors of research within the CREP. There are many comparisons that make grapes a good metaphor for replication science, and I hope the CREP continues to help students contribute to science while learning its methods.
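
To return to the sample-size point: a standard 95% confidence interval for a mean is roughly x̄ ± 1.96 × s/√n, so a replication with four times the original sample size yields an interval about half as wide, even when both studies estimate the same underlying effect.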

TO: If I can ask a follow-up question, what could be considered a “successful replication”?

JG: For researchers, a successful replication is one that, to the best of the researcher’s abilities, hews as closely as possible to the methods of the original study. It is not about whether a finding comes out a certain way. For students, a successful replication is further demonstrated when the student shows an understanding of the hypothesis and of why the study was designed to test it. Can they reproduce the results correctly, and can they interpret the findings appropriately? In other words, did they learn to be good scientists while generating useful data?

TO: If you are working with students on a CREP replication study, do you allow them the freedom to choose a paper to replicate, or should instructors be a little more direct in this process?

JG: The selection process for choosing replication targets is straightforward. Each year, we pull a set of highly cited articles, about 36 studies in total. We then code them for the feasibility of undergraduate replication and select those that are most feasible. We base this not on whether materials are available, because researchers are often willing to provide them, but rather on identifying important studies that students can complete during a class.

In my classes, students have complete choice over which studies they conduct, and there are often options beyond the CREP. However, I know others who give students a list of studies to replicate or limit choice in other ways. There are many approaches, and each instructor should find the system they like best.

TO: How can graduate student instructors become more involved in the CREP initiative?

JG: The CREP website gives instructions on what to do. In my chapter in the recent GSTA handbook (Grahe, 2017), I discuss the conditions needed for authentic research to be successful. Without IRB approval in place for conducting the research with undergraduates, it simply cannot happen. The institution, the department, and any supervising research staff all need to be on board. When considering authentic research opportunities, it is always a good idea to talk to the department chair.

For graduate students who would like to get involved with CREP, we are always looking for reviewers. The website contains some information about how to apply as a reviewer.

Another thing graduate student instructors can do is take the CREP procedures and implement them in their own courses. The Open Science Framework is a great tool, and even if an instructor cannot use the CREP for whatever reason, they could use the Open Science Framework to mimic the open science workflow. Even if data never leave the class, there is information on the CREP website about workflows and procedures.

TO: What sorts of protections are there for intellectual property under the Open Science Framework? Can you briefly explain how the Creative Commons license protects the work of researchers who practice “Open Science”?

JG: The Open Science Framework allows you to choose a license for your work. In terms of further protections, the Open Science Framework itself doesn’t really protect intellectual property; the law does. Even if a research measure is licensed and published, nothing protects it except the law. In any case, following APA guidelines and reporting research means that you are willing and interested in sharing what you do and what you find.

TO: We see that you recently came back from the “Crisis Schmeisis” Open Science Tour. Tell us how that went and about the music part.

JG: Earlier this year, I agreed to conduct a workshop in southern Colorado. Because I’m on sabbatical, I decided to drive instead of fly and scheduled a series of stops throughout several states. These travels became the basis of the “Crisis Schmeisis” tour (https://osf.io/zgrax). In total, there were 13 meetings, workshops, or talks at 7 different institutions. I had the chance to speak with provosts and deans, as well as students in research methods classes or at Psi Chi social events. During these visits, I showed how to use the Open Science Framework for courses or research, or gave talks about the CREP or EAMMi2 projects as demonstrations of ways to interface with the Open Science Framework.

I somewhat jokingly called this the “Crisis Schmeisis” tour to help explain that even if someone doesn’t believe there is a replication crisis, the tools that emerged are beneficial and worthwhile to all. Throughout the year, I will continue the tour by extending existing trips to visit interested institutions.

TO: The Crisis Schmeisis tour almost looks like a musical tour. Is that intentional?

JG: It is. I am also planning to write a series of songs about scientific transparency. Because it is an “Open Science Album,” I’m putting the songs on the OSF (https://osf.io/y2hjc/). There is a rough track of the first song, titled “Replication Crisis.” The lyrics convey the basic issues of the crisis, and I’m hoping that other open scientists will add their own tracks so that there is a crowd-sourced recording. I’m currently practicing “Go Forth and Replicate” and have a plan for a song about preregistration. My goal is to complete 12 songs and to play them live at the Society for the Improvement of Psychological Science conference next summer (http://improvingpsych.org/).

TO: What happened in your career as a researcher or teacher that inspired you to become an advocate for the OSF?

JG: During my first sabbatical, I was very unhappy with my place as a researcher and scholar. Did you know that the modal number of citations for published manuscripts is exactly zero? That means the single most common outcome for a published paper is to never be cited, even once. As a researcher, I was frustrated about working on something that might not matter, and as a teacher, I was concerned that students were not getting good practical training.

At one point during my first sabbatical, I became frustrated while revising a manuscript after receiving feedback from an editor. Instead of being angry about a manuscript that might never get cited anyway, I thought about areas where I was passionate and might be able to make a difference. I decided there was a better way to involve undergraduates in science, and that there were likely many research methods instructors like me who were feeling underused and undervalued. After that point, my career changed direction. At the time I was formulating these ideas, it was not about open science per se; it was really about undergraduates making contributions to science and gaining experience from doing so.

TO: Beyond replicability, what is the next crisis facing psychological science, and how can we prepare?

JG: I would like to see more interest in expressive behaviors rather than the keystrokes that typically define the research process. So much of the research conducted in a psychological lab is far removed from daily interactions, and I would like to see psychologists work harder to demonstrate meaningful effect sizes in authentic settings. Some of the effects we find in research are quite small; we spend a lot of time talking about effect sizes that explain less than 3% of the variability in a given outcome variable.
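
For perspective, explaining 3% of the variability corresponds to r² = .03, a correlation of only about r = √.03 ≈ .17 between predictor and outcome.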

TO: Any final thoughts?

JG: Just a note about the distinction between preregistration and registered reports, which often get confused in the open science discourse. Preregistration is the act of date-stamping hypotheses and research plans. A registered report is a type of manuscript in which the author submits the introduction, methods, and a preregistered analysis plan, and the editors decide whether to publish based on that information, because the study is important regardless of how the results come out. There is also the possibility of writing and submitting an entire manuscript that includes a preregistration as part of it. I see a lot of confusion about this topic.


References

Bhattacharjee, Y. (2013, April). The mind of a con man. The New York Times [Online]. Retrieved from http://www.nytimes.com/2013/04/28/magazine/diederik-stapels-audacious-academic-fraud.html

Carey, B. (2011, January). Journal’s paper on ESP expected to prompt outrage. The New York Times [Online]. Retrieved from http://www.nytimes.com/2011/01/06/science/06esp.html

Grahe, J. E. (2017). Authentic research projects benefit students, their instructors, and science. In R. Obeid, A. Schwartz, C. Shane-Simpson, & P. J. Brooks (Eds.), How we teach now: The GSTA guide to student-centered teaching (pp. 352-368). Retrieved from the Society for the Teaching of Psychology web site: http://teachpsych.org/ebooks/

Grahe, J. E., Reifman, A., Hermann, A. D., Walker, M., Oleson, K. C., Nario-Redmond, M., & Wiebe, R. P. (2012). Harnessing the undiscovered resource of student research projects. Perspectives on Psychological Science, 7(6), 605-607.

Hauhart, R. C., & Grahe, J. E. (2010). The undergraduate capstone course in the social sciences: Results from a regional survey. Teaching Sociology, 38(1), 4-17.

Hauhart, R. C., & Grahe, J. E. (2015). Designing and teaching undergraduate capstone courses. John Wiley & Sons.

Ioannidis, J. P. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124.

Pashler, H., & Wagenmakers, E. J. (2012). Editors’ introduction to the special section on replicability in psychological science: A crisis of confidence? Perspectives on Psychological Science, 7(6), 528-530.

School Spirit Study Group. (2004). Measuring school spirit: A national teaching exercise. Teaching of Psychology, 31(1), 18-21.