
Contemporary College Students’ Attitudes about Deception in Research

Jon Lasser

Professor of school psychology at Texas State University

Gail Ryser

Research fellow and project manager with Methodology, Measurement, and Statistical Analysis (MMSA) at Texas State University

Dora Borrego

Earned her degree in psychology at Texas State University

Emma Ham

Graduate student in the school psychology program at Texas State University

Karla Reyes Fierros

Psychology undergraduate at Texas State University

Julia Pruin

Psychology undergraduate at Texas State University

Peyton Randolph

Biology and psychology undergraduate at Texas State University

First published: 22 January 2020


ABSTRACT

Given the widespread use of deception in psychological experiments and the frequent recruitment of college students as participants, scholars have taken an interest in the ways college students assess the potential costs and benefits of deception studies. It stands to reason that the engagement of participants not as mere subjects, but rather as participant partners, demands at least an awareness of how such participants consider the moral dimensions of deception. To this end, the present study replicates a project conducted almost 25 years ago to determine whether today's college students think about deception in research any differently than their counterparts did in the early 1990s. This article reviews some of the literature on deception, describes the original study conducted, and presents the results of the replication study.

Deception has been used for decades in biomedical, social, and behavioral science research and has been justified when ethics review boards determine that the benefits of the research outweigh the risks or harms.1 Researchers using deception withhold information from participants about the purpose of a study or about various aspects of it (as in Solomon Asch's conformity studies,2 Stanley Milgram's obedience studies,3 and Philip Zimbardo's Stanford prison experiment4). Many researchers in social psychology conducting inquiries about the nature of behavior in social situations have relied on deception because it yields data that could not otherwise be obtained.5 For example, some social psychologists may deceive participants in an effort to avoid the Hawthorne effect, which is the altered behavior of research participants when they become aware that they are being watched.6

Those trying to assess the prevalence of deception in research have reviewed published articles to determine what proportion of studies employed deceptive methods. The most recent of such efforts tracked trends from 1921 to 1994.7 From 1921 to 1948, the percentage of studies that used deception ranged from 0% to 13%; these numbers grew to about 30% in the 1960s. Most studies prior to 1968 did not use deception, while at least half of the studies published from 1968 to 1979 in the Journal of Personality and Social Psychology (JPSP) did. Though the use of deception declined in the 1980s—perhaps due to increased scrutiny from institutional review boards (IRBs)—approximately 25% to 30% of studies published in JPSP employed deception through 1994, the most recent year for which these data were reported.8

Although federal regulations governing research with humans do not specifically prohibit deception in research, some commentators have argued that deception is inherently unethical because one cannot obtain informed consent from those who are not fully informed about the research, while others find no objections to deception in research when there is scientific value and minimal harm to research participants.9 And some take a contextual approach, arguing that deception may be permissible under certain conditions—for example, if participants are informed in advance that there may be deceptive elements to the study but that the elements will not be revealed until the study ends.10 Although opinions vary regarding the degree to which deception studies are ethical, IRBs continue to selectively approve studies that incorporate the use of deception.

College students are frequently recruited to participate in psychology studies because they are easily accessible to faculty investigators. Many psychology departments have a long history of requiring students to either participate in research for course credit or meet the course requirements with alternative tasks such as writing a paper. Since research in psychology often includes deception, Fisher and Fryberg conducted a study in the 1990s (published in 1994) to elicit college students’ perspectives about deception in research.11

In our study, we partially replicated the Fisher and Fryberg study in an effort to determine what today's college students think about deception in research. There may be good reason to believe that contemporary students have different perspectives about deception than the students who participated in the Fisher and Fryberg study. Current college students are more likely than their counterparts in the 1990s to have read and learned about deception in research because there are many more deception studies in the literature. Moreover, college students today have much more experience than their predecessors did in trying to distinguish honest information from deceptive sources online.12 Consider that, in 1995, only 9.24% of the U.S. population used the Internet, compared to 90% of adults in 2019.13 And today's college students encounter multiple examples and discussions of social psychological deception in news and entertainment (such as Sacha Baron Cohen's films, the exploits of Johnny Knoxville's Bad Grandpa character, concerns from politicians and media outlets regarding "fake news," and so on) and may view deception as more normative and mainstream.

Study Methods

Fisher and Fryberg surveyed 90 undergraduate students; to enhance our statistical power, we more than doubled that sample size, with a final sample of 194 undergraduate participants. We followed a procedure very similar to that used in the original study for recruitment and for the survey. Participants were recruited from a university psychology department's online human participant pool and were directed to our online study, which was reviewed and approved by the university's IRB. Students were provided information about the study and consented to participate. They were then randomly assigned to read and answer questions about one of the three studies described below. Participants were not informed that the present study was a replication of another research project. They were asked to read a summary of a deception study that had been published in a journal and then answer questions that might help psychologists determine whether such studies should be conducted in the future. We have included one of the study summaries and the questions that follow it in an appendix available online (see the "Supporting Information" section at the end of this article).

Summaries of deception studies

We provided participants with the same research study summaries that Fisher and Fryberg used. The summaries were derived from articles published in JPSP and selected to represent implicit, technical, and role deception. One study (“Mood”) deceived participants by manipulating their mood via an audio recording designed to induce a happy, sad, or neutral mood.14 Another study (“Gaffe”) deceived participants into believing that they had spilled the contents of a cup onto the belongings of a confederate (that is, someone pretending to be a research participant but who, in reality, worked for the researcher).15 The third study (“Help”) examined how “shy” and “not shy” participants requested assistance from a confederate when they were deceived into thinking that an apparatus had malfunctioned.16 Each research summary was organized into the same sections (for example, purpose of study, participants, procedure, and so on). In our study, 64 participants responded to the Gaffe study, 63 responded to the Mood study, and 67 responded to the Help study. The summaries of the studies did not specifically address the value of the research or the need for deception.

Study questionnaire

The original study used a separate questionnaire for each of the three deception studies presented to participants, although most items used identical wording in all studies. For our questionnaires, we worded a few items slightly differently because they asked specifically about the study under investigation, although these slight variations did not affect the intent of the question. For example, question number two asked about the procedures of the study to which participants were responding and thus had slight variations in the wording. The stem of the questions was the same across our three studies; the end of the sentence related to the study under consideration. The original study used a four-point Likert-type scale; we increased this to a seven-point Likert-type scale (the choices were "not at all important," "low importance," "slightly important," "neutral," "moderately important," "very important," and "extremely important"). We did this because the original study analyzed each question individually and treated the questions as if they were continuous in the analysis, and a seven-point scale more closely approximates a continuous measure. For our questionnaires, most of the questions were worded positively, in that higher scores (5 to 7) indicated that participants thought positively about the study in terms of the question asked. A few questions were worded negatively, so higher scores indicated that participants thought negatively about the study in terms of the question asked. One question (4b) was worded in a negative direction for the Help and Gaffe experiments and in a positive direction for the Mood experiment. We reverse scored this question for the Mood experiment before we analyzed the data. Table 1 shows the response options for each question, whether high scores (5 to 7) were positive or negative, and which construct the question measured.
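To illustrate that reverse-scoring step, here is a minimal sketch in Python (not the authors' code; the data frame and column names, such as df, experiment, and q4b, are hypothetical). On a seven-point scale, a reversed score is simply eight minus the original score.

```python
# Minimal sketch of reverse scoring item 4b for the Mood experiment only,
# so that higher scores point in the same direction across all three studies.
# The column names ("experiment", "q4b") are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "experiment": ["Mood", "Mood", "Help", "Gaffe"],
    "q4b": [6, 2, 5, 3],  # raw 1-7 Likert-type responses
})

# On a 1-7 scale, the reversed score is 8 - x (7 becomes 1, 6 becomes 2, ...).
mood = df["experiment"] == "Mood"
df.loc[mood, "q4b"] = 8 - df.loc[mood, "q4b"]
print(df)
```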

Table 1. Response Options, Direction, and Construct Measured for Questions on the Surveys
Question | Response options | Whether higher scores (5 to 7) were positive or negative | Construct
1a. | "Not at all important" to "Extremely important" | Positive | Scientific value and validity: hypothesis
1b. | "Not at all important" to "Extremely important" | Positive | Methodological alternatives: value of laboratory manipulation
2. | "Not at all effective" to "Extremely effective" | Positive | Scientific value and validity: procedures
3. | "Not at all important" to "Extremely important" | Positive | Scientific value and validity: results
4a. | "Not at all sad/uncomfortable/embarrassed" to "Extremely sad/uncomfortable/embarrassed" | Negative | Psychological discomfort: experimental condition
4b. | "Not at all happy/uncomfortable/embarrassed" to "Extremely happy/uncomfortable/embarrassed" | Positive (Mood); Negative (Help and Gaffe) | Psychological discomfort: comparison condition
5a. | "Did not believe experimenter" to "Believed experimenter" | Positive | Reactions to dehoaxing: belief
5b. | "Not at all embarrassed or distressed" to "Extremely embarrassed or distressed" | Negative | Reactions to dehoaxing: embarrassment
5c. | "Not at all annoyed or distressed" to "Extremely annoyed or distressed" | Negative | Reactions to dehoaxing: annoyance
5d. | "Most would not tell" to "Most would tell" | Positive | Reveal feelings following dehoaxing: if embarrassed
5e. | "Most would not tell" to "Most would tell" | Positive | Reveal feelings following dehoaxing: if annoyed
5f. | "Would be much less willing to tell" to "Would be much more willing to tell" | Positive | Reveal feelings following dehoaxing: if participating for course credit
6a. | "Would be much less willing to participate" to "Would be much more willing to participate" | Positive | Methodological alternatives: participate if forewarned of deception
6b. | "Would be much less willing to participate" to "Would be much more willing to participate" | Positive | Methodological alternatives: participate if forewarned of manipulation
7a. | "The costs to the participant are much greater than the benefits to the society" to "The societal benefits are much greater than the costs to the participant" | Positive | Cost-benefit balance: favoring benefit
7b. | "Yes" or "No" | N/A | Cost-benefit balance: % favoring implementation

The original study identified six major constructs that focused on the scientific merit of deceptive research designs: scientific value and validity, methodological alternatives, psychological discomfort, reactions to dehoaxing, willingness to reveal negative reactions during dehoaxing, and cost‐benefit balance. Each item on the questionnaire measured aspects of one of these constructs (see table 1).

Study Results

We surveyed 194 participants: 126 female and 68 male undergraduates (50.0% freshmen, 35.5% sophomores, 9.3% juniors, and 5.2% seniors) enrolled in psychology classes in which course credit was given for research participation. Approximately 60% of psychology majors at the university recruitment site were female, so our sample closely matches the student demographics

The results of our study suggest that, much like their counterparts who participated in the Fisher and Fryberg study in the 1990s, today's college students may be inclined to believe that the use of deception is justified.

in the psychology department. According to self‐reporting of ethnic or cultural backgrounds, 133 participants were white or Caucasian, 24 were black or African American, 9 were Asian, 6 were American Indian or Alaskan Native, and 14 said they had two or more ethnic or cultural backgrounds. Ninety‐seven of the participants (50%) indicated that they were Hispanic or Latino. Eight participants did not indicate their ethnic or cultural backgrounds.

We analyzed the Likert-type questions with a multivariate analysis of variance (MANOVA). In a MANOVA, multiple dependent variables are analyzed at the same time to reduce the type I error rate. In this study, all 15 questions rated on a seven-point Likert-type scale were included in the model as dependent variables. These included all questions except the open-ended ones and question 7b, which asked whether participants would favor implementing the study and was answered either yes or no. There were two additional yes-or-no questions that asked participants whether their introductory psychology class had discussed research ethics and, if so, whether the discussion included deception research. These two items were analyzed using chi-square analyses comparing yes and no counts across the three experiments.
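As a rough illustration of that model setup, the MANOVA could be specified in Python with statsmodels as in the sketch below. This is an assumption-laden sketch, not the authors' code; the file name responses.csv and the column names (q1a through q7a, experiment, sex) are hypothetical.

```python
# Minimal sketch: all 15 Likert-type items entered together as dependent
# variables in a single MANOVA, with experiment and sex as crossed
# between-subjects factors. File and column names are hypothetical.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("responses.csv")  # one row per participant (hypothetical file)

items = ["q1a", "q1b", "q2", "q3", "q4a", "q4b", "q5a", "q5b",
         "q5c", "q5d", "q5e", "q5f", "q6a", "q6b", "q7a"]

# All 15 items on the left-hand side; crossed factors on the right.
formula = " + ".join(items) + " ~ experiment * sex"
fit = MANOVA.from_formula(formula, data=df)
print(fit.mv_test())  # reports Wilks's lambda (among other statistics) per effect
```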

There were two between-subjects factors for the MANOVA: experiment (Mood, Help, and Gaffe) and sex (male and female), resulting in a 3 × 2 factorial MANOVA. The results of the MANOVA (using Wilks's lambda) showed a statistically significant main effect for experiment, F(30, 348) = 5.079, p < .001, with a large effect size (partial eta squared = .31). The main effect for sex and the interaction of experiment and sex were not statistically significant, which is also what Fisher and Fryberg found. We followed up the statistically significant main effect of experiment with univariate F tests for each question, using Scheffé's post hoc analyses to control for the type I error rate; Scheffé's test can be used with unequal sample sizes. Table 2 shows each construct, the survey questions measuring it, the mean and standard deviation of each question for each experiment, and the F statistic for the univariate analyses. Lastly, for those questions found to have statistically significant F values, we conducted Tukey's post hoc analyses to determine which of the experiments' means were statistically significantly different.
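A minimal sketch of that follow-up step for a single item is below (assumed file and column names, not the authors' code): a univariate one-way comparison across the three experiments, followed by Tukey's HSD to locate which pairs of experiment means differ.

```python
# Minimal sketch of the univariate follow-up for one item (column names assumed):
# a one-way F test across the three experiments, then Tukey's HSD for the
# pairwise Mood / Help / Gaffe comparisons.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("responses.csv")  # one row per participant (hypothetical file)

groups = [g["q1a"].dropna() for _, g in df.groupby("experiment")]
f_stat, p_value = stats.f_oneway(*groups)  # univariate F test for item 1a
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

tukey = pairwise_tukeyhsd(endog=df["q1a"], groups=df["experiment"], alpha=0.05)
print(tukey)  # pairwise mean differences with adjusted p-values
```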

Table 2. Means, Standard Deviations, and Univariate F Tests on Students’ Evaluations of Three Deception Studies
Question | Mood M (SD) | Help M (SD) | Gaffe M (SD) | F
Scientific value and validity
  hypothesis (1a) | 5.71 (1.08) | 5.12 (1.20) | 5.02 (1.28) | 4.57∗
  procedures (2) | 4.97 (0.93) | 4.84 (1.22) | 4.65 (1.28) | 0.09
  results (3) | 5.42 (0.87) | 5.52 (1.09) | 5.00 (0.97) | 2.90
Methodological alternatives
  value of laboratory manipulation (1b) | 5.41 (1.18) | 5.19 (1.30) | 4.64 (1.38) | 2.76
  participate if forewarned of deception (6a) | 3.15 (1.31) | 3.42 (1.39) | 3.33 (1.05) | 0.70
  participate if forewarned of manipulation (6b) | 3.40 (1.39) | 3.60 (1.22) | 2.59 (1.03) | 11.90∗∗∗
Psychological discomfort
  experimental condition (4a) | 4.73 (1.15) | 5.43 (1.12) | 5.27 (1.36) | 7.03∗∗
  comparison condition (4b) | 5.05 (0.96) | 2.85 (1.45) | 3.98 (1.40) | 12.13∗∗∗
Reactions to dehoaxing
  belief (5a) | 5.11 (1.13) | 4.80 (1.28) | 4.73 (1.47) | 0.96
  embarrassment (5b) | 4.14 (1.33) | 4.49 (1.18) | 3.83 (1.22) | 4.63∗
  annoyance (5c) | 4.09 (1.37) | 3.94 (1.31) | 3.75 (1.38) | 1.88
Reveal feelings following dehoaxing
  if embarrassed (5d) | 3.38 (1.40) | 3.45 (1.61) | 4.50 (1.59) | 10.45∗∗∗
  if annoyed (5e) | 3.61 (1.55) | 3.59 (1.66) | 3.44 (1.63) | 0.05
  if participating for class credit (5f) | 3.94 (1.67) | 4.28 (1.76) | 4.72 (1.51) | 4.75∗
Cost-benefit balance
  favoring benefits (7a) | 4.91 (1.17) | 5.00 (1.25) | 4.76 (1.38) | 0.56
  % favoring implementation (7b) | 77.8% | 79.7% | 76.7% | N/A
∗ p < .05; ∗∗ p < .01; ∗∗∗ p < .001

As table 2 shows, experiments’ means were statistically significantly different for seven questions. The first of these (1a) had to do with the hypothesis under study and was worded positively, with higher scores indicating that participants felt more positively about the importance of the hypothesis of the experiment. While all participants indicated that the hypothesis of the experiment they reviewed was important, post hoc analyses showed that participants who responded about the Mood (M = 5.71) experiment thought the hypothesis was more important than those who responded about the Help (M = 5.12) or Gaffe (M = 5.02) experiment did.

The next question of interest (6b) examined participants' feelings about deceptive procedures in research studies. Specifically, this question asked about the extent to which our study participants thought that subjects in the published experiments would participate in those studies if they knew that manipulation would occur. The question was worded positively in that higher scores indicated that our participants thought the answer was yes. The means for all three experiments fell below the scale midpoint, indicating that our participants thought subjects would be less willing to participate in the experiment if forewarned about manipulation. Post hoc analysis showed that means were statistically significantly different between those who responded to the Mood (M = 3.40) and Help (M = 3.60) experiments and those who responded to the Gaffe (M = 2.59) experiment. Those who responded to the Gaffe experiment indicated that they thought subjects would be less willing to participate if forewarned about manipulation.

The next two questions that were statistically significant had to do with feelings of discomfort or negative emotion involving participation in deception studies depending on whether the subjects were in the experimental (4a) or comparison (4b) group (lower means indicate negative emotion). For the experimental condition, participants who responded to the Mood (M = 4.73) experiment thought that the subjects would feel more negative emotions than participants who responded to the Help (M = 5.43) experiment did. There were no statistically significant differences between the Gaffe experiment and the other two experiments. For the comparison condition, participants who responded to the Gaffe (M = 3.98) experiment thought subjects would feel more negative emotions than participants who responded to the Mood (M = 5.05) experiment did, while participants who responded to the Help (M = 2.85) experiment thought subjects would feel more negative emotions than participants responding to either the Mood or Gaffe experiment thought subjects would.

The next item (5b) that was statistically significant had to do with reactions of the subjects who participated in the three experiments after they found out the true purpose of those studies. We asked our participants if they thought the subjects in those experiments would feel embarrassed or distressed after being told the true purpose of the experiment. Those who responded to the Gaffe (M = 3.83) experiment thought that the subjects would be more embarrassed or distressed than those who responded to the Help (M = 4.49) experiment did. There were no statistically significant differences between the Mood experiment and the other two experiments. Next, participants were asked if they thought subjects in the experiment would reveal their feelings of embarrassment and distress to the experimenter (5d). Participants who responded to the Gaffe (M = 4.50) experiment thought that subjects would be more likely to tell the experimenter that they were embarrassed or distressed than participants who responded to the Mood (M = 3.38) or Help (M = 3.45) experiment did. In addition, participants were asked if the subjects would reveal their feelings to the experimenter if they were participating in the experiment for course credit (5f) (students had the option of completing an assignment for credit if they did not want to participate in research). Those who responded to the Gaffe (M = 4.72) experiment believed that subjects would be more willing to reveal their feelings to the experimenter if they were receiving course credit than those who responded to the Mood (M = 3.94) experiment did. These findings support one another in that participants responding to the Gaffe experiment thought that subjects would feel more embarrassed or distressed and be more likely to reveal these feelings to the researcher.

We also conducted two Pearson chi‐square analyses to investigate if there were differences by experiment in the number of participants whose classes had discussed ethical research. If a participant's class had discussed this, we asked if the discussion included deception studies. There were no statistically significant differences by experiment in the number of students whose classes had discussed ethical research. The majority of the participants were in classes that had discussed ethical research (74% to 81%), and of those, more than half stated that the discussions contained information on the use of deception in experiments.
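For illustration, a sketch of one of those chi-square tests is shown below (assumed file and column names, not the authors' code): a contingency table of experiment by the yes/no responses about whether ethics had been discussed in class.

```python
# Minimal sketch of one Pearson chi-square analysis (column names assumed):
# counts of yes/no answers about ethics discussion, tabulated by experiment
# (Mood, Help, Gaffe), tested for independence.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("responses.csv")  # one row per participant (hypothetical file)

table = pd.crosstab(df["experiment"], df["ethics_discussed"])  # 3 x 2 counts
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
```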

Discussion

The results of our study suggest that, much like their counterparts who participated in the Fisher and Fryberg study, today's college students may be inclined to believe that the use of deception is justified. Given the fact that the use of deception in research is common and that college students are frequently used as participants in research, these findings may be of value to researchers and IRBs as they consider the ethical dimensions of deception studies.

While the results of the two studies were similar, the samples differed in some noteworthy respects. In the original study, students sampled were all enrolled in an introduction to psychology course, whereas our study did not limit participants to a particular course. More students in the original sample were Caucasian, and the sample was equally divided by sex, whereas almost 65% of participants in our study were female. We also increased our sample size for greater statistical power.

Because both studies enrolled college students, it is unclear how individuals from the general population feel about the use of deception in research. Research with a broader sample would be helpful, given that college students may not be representative of the general population. Moreover, the degree to which deception in contemporary politics and entertainment has influenced adults who are not currently in college remains unknown.

Finally, we acknowledge the limitation of studies that ask participants to read about deception research and then make judgments about the ethics of the study design. In such scenarios, the participants reading about deception research are detached from the pressures and emotions that may be very tangible to those who are deceived and debriefed. Further research is needed to better understand how those who actually participate in deception studies feel about that experience.
