
Putting Operational Definitions in Context: A Teaching Activity for a Statistics Course

05 Oct 2017

By Karyna Pryiomka, Doctoral Candidate, The Graduate Center, CUNY

In four years of teaching statistical methods in psychology, I have noticed that students often have difficulty recognizing the relationship between a hypothetical construct, its operational definition, and the interpretation of results. This difficulty often leads to over-generalization, incorrect inferences, and other interpretive mistakes. Operational definitions of hypothetical constructs are an important component of research in psychology, so operationally defining constructs, and understanding the implications of those definitions for data interpretation, are key competencies for a psychology student. Although operationalism is widely taught in research methods courses, its discussion in statistics courses is often reduced to a few paragraphs in an introductory chapter.

To help my students better understand the collaborative, iterative, and context-bound process of creating appropriate operational definitions, I use a low-stakes group activity in which students work in groups of 3 or 4 to create operational definitions of hypothetical constructs, such as confidence, in two distinct contexts: individual-level decision making and research design. The learning objective of the activity is to demonstrate the role of context in deciding how to operationalize a given construct and to illustrate the process of developing consensus about the meaning of constructs and their operational definitions.

Here are the steps that I take to implement this activity:

1.  I begin by assigning students to groups of 3 or 4, depending on class size. Ideally, at least two groups should work on the same problem, and each group receives only one variation of the problem. Below are examples of the two prompts. I give students about 15 minutes to work on the task in their groups.

Individual-Level Decision-Making Context: A growing cat food company, Happy Kibble, is expanding its sales department and asks you, a group of industrial-organizational and personality psychologists, to use your expertise to help them hire the best salespeople so that they can convince cat owners around the country to switch to Happy Kibble. You know from research that confident people often make good salespeople. How would you define and operationalize confidence in this context in order to select a good employee? What questions would you ask candidates? What behavior would you pay attention to during an interview? Assume that the human resources office has pre-selected the candidates, so they all qualify for the job based on the minimum education and professional experience requirements.

Research Context: A growing cat food company, Happy Kibble, has partnered with your research team to investigate whether there is a relationship between a salesperson's confidence level and their professional success. Happy Kibble wants to conduct a real scientific study to answer this question. The company needs your expertise in defining and measuring confidence; however, you are on a tight budget, so conducting individual interviews may not be an option if you want to collect a large enough sample to draw meaningful conclusions. How would you define and operationalize confidence in this context in order to measure this trait in as many people as you can?


2.  Once groups have created their definitions, a representative from each group is invited to write the group's definition and measurement/assessment plan on the board.

3.  I like to begin the discussion by emphasizing the differences between the two contexts. We then focus on establishing consensus among the groups that worked on the same problem: we compare the operational definitions these groups produced, weigh the strengths and limitations of their proposed measurement/assessment plans, and reconcile any differences. Next, we compare the consensus definition produced for the interview context with the consensus definition produced for the research context, outlining the key differences between the contexts, what type of evidence can be collected in each, and how the context influences the interpretation of the data.
For example, students in both contexts often mention eye contact as one of the behaviors representing confidence. We then discuss how they would measure or observe eye contact in the context of a job interview compared to a research study. Students working on the job interview prompt point out that their assessments would be direct qualitative judgments made as they engage with the interviewees. Students working on the research prompt often say that they would use video equipment to observe how sales representatives establish eye contact with their customers. In that context, then, unlike their colleagues conducting job interviews, students are less likely to make direct qualitative judgments about individual people; instead, they would observe, record, and quantify behavior remotely.
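For instructors who want to tie the research-context prompt back to the statistics content of the course, here is a minimal sketch, in Python, of how one hypothetical operationalization of confidence (the sum of a few self-report Likert items) could be scored and related to a sales outcome. This sketch is not part of the activity itself, and all names and numbers in it are invented for illustration.

    # Minimal, hypothetical sketch: score a survey-based operationalization of
    # "confidence" and relate it to a sales outcome. All values are invented.
    from statistics import mean, stdev

    # Each record: responses to three 1-5 Likert items and monthly units sold.
    respondents = [
        {"likert_items": [4, 5, 4], "monthly_sales": 132},
        {"likert_items": [2, 3, 2], "monthly_sales": 97},
        {"likert_items": [5, 5, 4], "monthly_sales": 151},
        {"likert_items": [3, 3, 3], "monthly_sales": 110},
        {"likert_items": [1, 2, 2], "monthly_sales": 84},
    ]

    # Operationalization step: "confidence" is defined here as the sum of the items.
    confidence_scores = [sum(r["likert_items"]) for r in respondents]
    sales = [r["monthly_sales"] for r in respondents]

    def pearson_r(x, y):
        """Pearson correlation computed from z-scores (sample standard deviations)."""
        mx, my, sx, sy = mean(x), mean(y), stdev(x), stdev(y)
        n = len(x)
        return sum((xi - mx) / sx * (yi - my) / sy for xi, yi in zip(x, y)) / (n - 1)

    print("Confidence scores:", confidence_scores)
    print("r(confidence, sales) =", round(pearson_r(confidence_scores, sales), 2))

Walking through a toy computation like this can help students see that the correlation they interpret is a statement about the operationalized variable (here, summed Likert items), not about "confidence" in the abstract.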

In my experience, students eagerly engage in the discussion, justifying their decisions and challenging those of others. They also begin to ask questions and think critically about the inferences that could be made based on the operational definitions they have proposed. For instance, a group once suggested that a particular speech pattern, or the use of specific words, could serve as a variable for assessing confidence. This suggestion led to a discussion of the relationship between language and existing standardized assessments such as IQ tests, and of the potential for bias against non-native English speakers, prompting students to question whether the proposed operational definition would fairly and accurately reflect someone's confidence rather than another, potentially related, trait.

Overall, I have found this activity to be a great way to engage students in discussing important principles of research design while promoting critical thinking about the role of operational definitions and measurement procedures in data collection and its subsequent interpretation.

