Jennifer Samson
Queens University of Charlotte
I love teaching Research Methods. When I accepted my present position, I enthusiastically agreed that this would be my primary teaching responsibility. (I think they might have expected me to run away screaming instead.) Our year-long methods sequence is unusual in that every year, my 25 or so students propose, design, implement, and report on their own projects in any area of Psychology. Every year there are some unbelievably thoughtful, creative, and interesting projects. But also every year, I have to drag myself through grading the drafts. Even the “good” projects. Especially when I feel like I’m providing the same feedback on draft after draft with little to no improvement.
How can we help students understand the writing process as iterative instead of “one [draft] and done”? How can we as instructors help students become effective critics of their own writing? How can we keep our grading load manageable and still provide students with the feedback they need? This essay describes trials and errors, lessons learned, and lessons I’m still learning in my search for the elusive answers to these questions.
Spoiler: There’s not one easy solution.
Background:
The year-long research methods sequence I teach is required for all Psychology majors in their junior year at my small, liberal arts university. By the time they get to me, students have completed their first-year writing sequence as well as the specific course prerequisites, including Introduction to Psychology, Statistics, and a class we call Information Literacy, which covers reading and writing in a professional Psychology setting with a focus on the literature review. Ultimately, the majority of my students are relatively well prepared for college-level work but are still learning to write the type of professional academic paper required for the course. It is also worth noting that a sizable minority each year take Information Literacy concurrently with the fall semester of Methods and so need extra support in my course.
The research methods sequence is centered on students’ individual empirical projects. In the fall, students complete a four-credit class where we delve into the study of research, emphasizing design for association versus cause/effect and critiquing for different types of validity (designing and critiquing research is an explicit goal for undergraduate students set out by the American Psychological Association [APA], 2023). Concurrently with the class, they complete a two-credit lab where they write a proposal to identify a research question and propose methods to answer it. In the spring, students complete a second four-credit class where they collect and analyze data and revise and extend their proposal so it becomes a complete, journal-style empirical paper. They also present their work in a poster session at a local undergraduate conference. The completion of not only the proposal but the entire project, along with the opportunity for every student to present in a conference setting, is a hallmark of our program.
What I Tried:
At the beginning of last academic year, I knew I needed to do something different. An influx of late transfers caused the class size to swell by almost a third, and I knew that, short of learning to clone myself, there was no way I could keep up with marking everyone’s drafts in a timely manner (see Ambrose et al., 2010, on the importance of constructive, timely feedback for student motivation). Meanwhile, I had already been looking at ways to increase student buy-in for writing as an iterative process and to increase students’ metacognitive skills as evaluators of their own work (see Ambrose et al., 2010; Bain, 2004). Therefore, I implemented the following procedures.
I cancelled lab at key points in the semester (e.g., as outlines were coming together) to instead conduct 20- to 30-minute oral check-ins with each student individually during the pre-writing process. In these meetings, we discussed how the ideas were going to be organized within the paper. The meetings, and requiring outlines in the first place, hopefully got students thinking about their papers earlier in the semester than many would have otherwise and encouraged them to engage in prewriting organization rather than diving right into drafting, as many are prone to do. I returned minimal written feedback on these preliminary steps and graded primarily for completion; if students did them thoughtfully and in a timely manner, they earned all the available points for that preliminary step.
So far, what I was trying was not very different from what I’d done previously. But then, after students turned in their draft of each section (e.g., literature review, methods), instead of marking it and returning it, I asked them to complete a short self-evaluation. The open-ended questions on the self-evaluation prompted them to focus on content and organization, common mechanical issues, their time management in completing the draft, and their goals for revision. I then met with each student one-on-one to discuss their draft. In these 30-minute meetings, we read the draft together and marked some key suggestions using Word’s track changes and comments features. I often used their self-evaluation as a starting place, especially if they had identified strengths or areas for improvement similar to those I noticed. I made a point not to mark the whole paper, but to work through targeted examples. For instance, if my suggestion was for the student to use more parenthetical and fewer narrative (“___ found”) citations, we edited one or two paragraphs together to show them what that might look like.
At the end of these meetings, we completed the grading rubric (separate from the self-evaluation) together. Students left the meeting with their marked-up paper and the completed rubric (after my first round of meetings, where I learned it would be more efficient to record the score and send it on the spot). Working together to evaluate the drafts not only got them graded more efficiently but also gave students ownership of the learning process and therefore, theoretically, more buy-in (see Doyle, 2011). The assessment was now part of the learning process (see Bain, 2004).
At the end of the semester, students submitted not only their final paper but also a revision reflection (similar to a revise-and-resubmit letter to a journal editor) in which they described, section by section, the feedback they had received, how they incorporated it (or not), and why.
Conclusions So Far:
Overall, I would say my experiments were a success and that things are moving in a good direction, although they are maybe not there yet. (Will it ever be perfect? Probably not.) Many of the self-evaluations were thoughtful and, anecdotally, I believe more of the students at least registered and gave some thought to the feedback they received. By the end of the academic year, after meetings for the literature review, methods, and results, I noticed that students were doing more of the evaluating as we discussed the rubric for their discussion drafts, rather than waiting for me to tell them what score they had earned. I asked the class for their thoughts and, even on the anonymous evaluations, most of them chose not to comment (I’ll take that as “no complaints”). One student did tell me that they liked having meetings instead of a returned paper because they could ask clarifying questions.
From a professor workload point of view, this approach was exhausting during meeting weeks, when I often had 6-8 meetings per day for several days in a row, but it was generally much more efficient than grading and returning papers. A few students (fewer than 5%) delayed scheduling their meetings and/or needed special arrangements for an evening or weekend because athletics, jobs, or other outside commitments took up most of their days, but we made it work. In the future, I should probably be clearer that the onus is on students to take the initiative and get these meetings scheduled (in other words: your poor planning is not my emergency; schedule early so you have enough choices that will work for you).
In part because I was forced to stay on schedule, students got their feedback in a timelier manner, even though I spent about the same amount of time on each student (30 minutes marking a paper vs. a 30-minute meeting). I’m hopeful, although I have only anecdotal evidence thus far, that the feedback was clearer. For example, instead of writing a comment like “be sure to clarify the main idea of this paragraph,” I was able to ask students face to face, “What’s the main idea of this paragraph supposed to be? Yes. Write that.” I am also hopeful that the feedback was deeper. It’s easy when I’m reading a paper to get caught up in marking the details, but I found that in a one-on-one conversation, I could focus more on discussing the bigger picture of organization and what points the student was trying to make. One area where I saw a marked change was in reference formatting; in a face-to-face situation, I could point to a correctly formatted example and say, “This is correct. How is it different from this other one [that has an error]?”
In short, I will keep this self-evaluation and oral feedback approach, but with some tweaks. First, I will likely spend some time scaffolding useful self-evaluation so that more students will use it to their best advantage instead of (as I’m sure some did) seeing it as another box to check. For instance, I might have students do their first self-evaluations in class on the day after drafts are due, and I might model some of my own self-evaluation process on nearly complete papers. I might also add another meeting even earlier in the process, as students are collecting their potential sources. The biggest change, though, is timing. Last year I cancelled lab on the day the draft was due and held meetings then. This year, I’ll move meetings to the week after the draft due date. Meeting as a lab on the day the draft is due will let me get students started on next steps more efficiently, and having a gap between the due date and feedback will let me skim ahead and prepare for more effective one-on-one meetings.
Nothing I’ve written about here is groundbreaking or even particularly innovative. But sometimes it’s difficult to break from the way we were taught or the way we’re used to doing things. I hope that by sharing my journey so far, I might contribute to the conversation as we, as individuals and as a field, strive for that magic solution that will be sustainable for us but still provide our students with the best possible learning experience.
References:
Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., & Norman, M. K. (2010). How learning works: Seven research-based principles for smart teaching. Jossey-Bass.
American Psychological Association [APA]. (2023). APA guidelines for the undergraduate psychology major (Version 3.0). https://www.apa.org/about/policy/undergraduate-psychology-major.pdf
Bain, K. (2004). What the best college teachers do. Harvard University Press.
Doyle, T. (2011). Learner-centered teaching: Putting the research on learning into practice. Stylus Publishing.