Crowdsourced Research: Vulnerability, Autonomy, and Exploitation
The use of crowd workers as research participants is fast becoming commonplace in social, behavioral, and educational research, and institutional review boards are encountering more and more research protocols concerning these workers. In what sense are crowd workers vulnerable as research participants, and what should ethics reviewers look out for in evaluating a crowdsourced research protocol? Using the popular crowd‐working platform Amazon Mechanical Turk as the key example, this article aims to provide a starting point for a heuristic for ethical evaluation. The first part considers two reputed threats to crowd workers’ autonomy—undue inducements and dependent relationships—and finds that autonomy‐focused arguments about these factors are inconclusive or inapplicable. The second part proposes applying Alan Wertheimer's analysis of exploitation instead to frame the ethics of crowdsourced research. The article then provides some concrete suggestions for ethical reviewers based on the exploitation framework.
In this article, “crowdsourced research” refers to the practice of recruiting research participants from online platforms that connect people who post tasks with other people (crowd workers) who will perform these tasks for a small sum of money. The use of crowdsourced research subjects for filling out surveys is now ubiquitous in social, behavioral, and educational research. Reputable journals in psychology and other social sciences regularly publish research that is done using such subjects.1 The ubiquity can be attributed to convenience and speed of response and the fact that crowd workers are more diverse than college undergraduates, a traditionally popular source for human research subjects.2 Moreover, the representativeness of crowd workers as human research subjects has been demonstrated in some fields.3 The rise of crowdsourced research means that more research protocols for these studies are being submitted to institutional review boards (IRBs) for ethical review.
Crowd workers are a recent category of research subjects, and while they may overlap with some groups, like “economically impoverished” individuals, that are considered vulnerable by some ethics guidelines,4 how and to what extent members of the group are vulnerable is still an open question. Due to the fragility of human beings, we are all vulnerable to harm. However, many harms that we are vulnerable to are not ethically relevant. For example, a windsurfer's vulnerability to a lightning strike is not ethically relevant because the harm is not caused by an ethically accountable agent. One way to express ethically relevant vulnerability is by deeming an individual vulnerable if she is at a greater risk of being wronged relative to some baseline, rather than being harmed simpliciter.5 Vulnerable individuals have traditionally been identified by membership in preestablished vulnerable groups. In the last decade, attention has been drawn to the fact that vulnerability can also be contextual.6 A person who belongs to a vulnerable group, such as a prisoner, may not experience the degree of vulnerability traditionally attributed to her group if the prison system is respectful of prisoners, cares for their health, and provides them with paid work and education, with the goal of reintegrating them into society.7 A person who does not belong to a vulnerable group—say, an affluent, well‐educated, and otherwise healthy young man—could be considered vulnerable when waiting in the emergency unit of a hospital after being injured in a car accident. Major ethics codes take both the group and contextual aspects of vulnerability into consideration.8
Identifying vulnerabilities that research subjects have by virtue of being crowd workers affords ethics committees a heuristic for reviewing protocols for crowdsourced research. Here, I address concerns that participants in these types of studies may be a vulnerable group due to threats to their autonomy. I also show that in the ethical review of crowdsourced research, it might be more useful to consider aspects of participant exploitation, rather than threats to participant autonomy. Although I focus on Amazon's popular crowd working platform Mechanical Turk (MTurk), the points I make in this article are meant to generalize to other platforms that share the relevant attributes of MTurk. On the MTurk platform, “Workers” sign up for “human intelligence tasks” listed by “Requesters.” A human intelligence task consists of a question posed by a requester along with information that a worker requires to answer the question. Workers earn money by completing the tasks.9
In the first two sections below, I dismiss two factors commonly used to support claims that crowdsourced research participation is a threat to autonomy: undue inducements and dependent relationships. I then apply Alan Wertheimer's analysis of exploitation to crowdsourced research on MTurk and show that it is more useful to ethical review than is focusing on threats to individuals’ autonomy. Finally, I show how framing the problem as exploitation allows concrete IRB responses to be identified.
Economic Vulnerability and Autonomy
From the literature, it is possible to extract an argument that some crowd workers are economically vulnerable and that their economic vulnerability compromises their autonomy in consenting to performing crowd work. Discussing the fact that human intelligence tasks “can be part of a pool of tasks that accumulate to provide a low‐paid service income,” Ilka Gleibs associates fair pay with “the ethical principle of respect for autonomy.”10 WeAreDynamo, a community of MTurk workers and requesters concerned about the ethics of crowdsourced research on MTurk, has established Guidelines for Academic Requesters, which is intended for requesters using the platform. The “Fair Payment” section of the guidelines states, “Underpayment of crowd workers is anything less than the current federal minimum wage in the United States. Tasks paying less than this are likely to tap into a highly vulnerable work pool and constitutes [sic] coercion.”11 Yet how exactly is the principle of respect for autonomy compromised for people in dire economic situations who consent to undertake low‐paying human intelligence tasks?
Impeding a worker from consenting to participate in such a task can be seen as violating the principle of respect for autonomy. Ruth Faden and Tom Beauchamp propose three necessary conditions for an agent's (X's) autonomous action: “X acts autonomously only if X acts 1. intentionally, 2. with understanding, and 3. without controlling influences.”12 Faden and Beauchamp's conditions admit of degrees, however, given that the limitations of human knowledge and those imposed by environmental factors make it impossible to achieve fully autonomous action. A prospective consenter does not need to fully satisfy conditions 1, 2, and 3 to successfully give consent.
Wertheimer and Franklin Miller disagree with Faden and Beauchamp's model. One reason is that Wertheimer and Miller see the model as imposing too high an informational requirement on the autonomy of consent, a requirement that does not generalize across contexts. For example, consent to sexual relations does not require an understanding of the habits, social relations, and medical history of the prospective sexual partner; indeed, the request for such information “distorts the activity to which consent is given.”13 In addition, a prospective consenter could consent to a medical intervention while declining relevant information about the intervention because they think it is too time consuming or stressful to comprehend the information or because they trust the doctor to act in their best interest.14 For Miller and Wertheimer, consent is successful if it is given under conditions in which the prospective consenter is treated fairly, and it is reasonable for the consent seeker to believe that the consenter has given consent.15
While these two theories of informed consent are incompatible, they share a necessary condition for the autonomy of consent that can be deployed to assess whether a worker's autonomy is compromised when the worker consents to participate in human intelligence tasks. Consent is successfully given only if it is given voluntarily.16 This condition is ensconced in Miller and Wertheimer's fair transaction theory of consent: “One advantage of the fair transaction approach is that it offers a theoretically attractive unifying account of the various criteria of morally transformative consent—voluntariness, information, and competence.”17 This necessary condition sheds light on a coercion argument suggested above. For economically vulnerable people, the opportunity to earn even less than minimum wage can coerce them into performing human intelligence tasks. The coercion renders the consent that economically vulnerable workers give in performing the tasks defective because they did not voluntarily consent to performing them.
A potential worker who is in desperate economic need consents to perform low‐paying crowd work because she thinks that this, given the constraints she faces in her life at the moment, is the best means of making ends meet. On the face of it, the consent is voluntary: her taking on the crowd work is willed in accordance with a plan of negotiating those constraints and aimed at making ends meet. But a worker's action is less autonomous to the extent that an unwelcome offer is irresistible.18 An unwelcome offer to avoid highly unpleasant consequences, made with the aim of influencing action, is a threat.19 Coercion occurs when someone is influenced to act by an intended, credible, and irresistible threat.20 Coercion is incompatible with autonomy.21 The coercion argument claims that voluntariness is undermined by the worker's economic vulnerability: her desperate need to earn money makes her choice irresistible, just as a robber's inescapable and sufficiently grave threat makes the choice to hand over one's money irresistible.
The argument from economic vulnerability, resting as it does on coercion, is unconvincing for two reasons. First, while the above quotation from Guidelines for Academic Requesters suggests that workers are coerced into accepting low‐paying human intelligence tasks by their economic desperation, such a situation qualifies as what Faden and Beauchamp call a coercive situation, which does not deny voluntariness. The worker chooses voluntarily because there is no party who intentionally coerces the worker and whose removal takes away the constraint to the worker's freedom of action.22 Second, an option that holds out the hope of alleviating a worker's desperate economic situation is a welcome offer to the worker at that time. A worker's act of receiving and accepting a welcome offer is voluntary because it proceeds “from the dictates of his or her own will.”23 Hard circumstances alone do not detract from the proper functioning of one's will in consenting to welcome offers with the intention of improving one's position.24
However, an offer can be too welcome and can interfere with an agent's, or worker's, autonomy by affecting the weight that she places on risk. Such offers, known as undue inducements, can conflict with autonomy. Autonomy can be negative, in the form of freedom from relevant constraints, or positive, in the form of freedom to chart one's own course in life. Positive autonomy requires that one acts on what one considers good reasons, reasons that one arrives at by the application of one's rational capacities. Undue inducements hamper one's rational capacities or decision‐making processes by causing one to put less weight on the same risk than one would in the absence of such inducements.25
While this is a plausible line of thought for other contexts, its inapplicability in the case of crowd workers for human intelligence tasks is apparent when we consider a contrasting, paradigmatic case where the concept of undue inducement is applicable. Suppose a $20,000 payment26 for a first‐in‐human (phase 1) drug trial causes Billy, an economically vulnerable stay‐at‐home father, to apply even though he knows of deaths27 resulting from phase 1 trials and that the risks for this trial are necessarily unknown. He would not have signed up for such trials were the payment lower than $10,000. With $20,000, he can foresee comfortably affording his groceries, rent, and utilities for the year. In contrast, consider Willy, a stay‐at‐home father in a similar economic situation, thinking of participating in a human intelligence task. The wage for this task is $6 per hour, which is below the minimum wage. If he accepts the task, he may be passing up the chance for a more lucrative one. However, lucrative human intelligence tasks are hard to come by. He accepts the task, and it is one among the many that he completes so that he can buy groceries and pay for his monthly housing and other expenses.
The difference between Billy and Willy is that Billy's estimation of risk is compromised by the value of the high pay for his quality of life and the direct effect this consideration has on his estimation of a risk that would otherwise have led him to forgo the opportunity. In contrast, Willy's case is one in which his rational capacities are being exercised—the pay is adequate only in aggregation with the income from many other human intelligence tasks, each of which requires him to deliberate whether or not to accept it, but which do not on their own present him with an overwhelmingly welcome offer. The piecemeal offers with low pay that crowd workers need to accept to make enough money are incompatible with the idea that the money offered for a human intelligence task is undue inducement. How about the case in which a requester pays a lot for such a task, say, $20 per hour? No matter the payment, there is another relevant disanalogy from the phase 1 trial example: the risk of performing a human intelligence task is minimal. The sum, whether large or small, does not interfere with one's capacity to reason about risks because an individual task threatens neither life nor livelihood.
Worries about the riskiness of answering survey questions—the form that crowdsourced research participation takes—center on the mental distress that sensitive questions can invoke in a respondent. They may evoke traumatic memories that lead to suicide attempts. Having reviewed the literature in this area, Simon Whitney argues that evidence‐backed research is at best inconclusive about the harm that can ensue through mental distress provoked by survey questions.28 And Eve Carlson's study has shown that, while a proportion of people who were subjected to potentially distressing questions do agree that they find the questions distressing, they also welcomed the insights about themselves that they gained by completing the surveys.29
Stanley Milgram's infamous 1961 psychological experiment on obedience required participants to turn up a dial that they thought sent a current of electricity to a fellow participant in the next room who gave wrong answers in a word‐pair memory test. When participants followed instructions to administer even the highest voltage, the screaming and banging on the walls that they thought they heard from the next room would cease. The distress suffered by some of the participants who followed instructions was palpable. Milgram wrote, “Subjects were uncomfortable doing so, and displayed varying degrees of tension and stress. These signs included sweating, trembling, stuttering, biting their lips, groaning, digging their fingernails into their skin, and some were even having nervous laughing fits or seizures.”30 Elliot Aronson, who studied Milgram's subjects, remarked, “Not one of Milgram's subjects complained, and none reported having suffered any harm.”31 On the contrary, Milgram's subjects opined that participation in the study was useful to them.
While these cases do not show conclusively that answering distressing survey questions does not harm respondents, they temper catastrophizing about the effects of sensitive and potentially distressing survey questions. The Milgram case, while not about answering survey questions, serves as a paradigm of participant distress. If, even under such extreme distress from participation, the research subjects who were asked did not complain of harm, it is likely that adult human subjects are psychologically resilient enough to avoid ethically relevant harm in answering survey questions. Potential survey respondents are discerning enough to recognize that they have benefited from insights yielded by participating despite being distressed. The risk of harming potential respondents can be further mitigated by including a general cautionary note in the consent form or introductory page of the human intelligence task, explaining that some of the questions in the survey can be distressing to some respondents.
Power Imbalance and Autonomy
The power imbalance between requesters and the crowdsourcing platform on the one hand and workers on the other suggests another argument about potential ethical problems. Gleibs calls this the “issue of asymmetrical power relations, which are related, for example, to problems with ‘withdrawal‐without‐prejudice’ on crowdsourcing marketplaces.” She continues, “[W]e have to consider the ethical implications of making research participation a source of income, which are linked to the power differential created by an employer‐contractor relationship and the question of whether workers can ‘afford’ to reject tasks.”32 Power imbalances are found wherever there are hierarchies. Hierarchies serve to maintain coordination and division of labor, and they are particularly useful where rapid coordinated action is required, for example, in the armed forces or in health care. Hierarchies are also useful for the transmission of knowledge in schools, universities, or vocational institutes. Gleibs's remarks thus require interpretation in order to extract an argument about the unethicality associated with power imbalance.
MTurk categorizes workers as “independent contractors”: “Workers perform Tasks for Requesters in their personal capacity as an independent contractor and not as an employee of a Requester or Amazon Mechanical Turk.”33 The categorization denies to workers the legal protection of rights that are generally thought to accrue to employees, such as those concerning minimum wage, overtime pay, health and safety, employment antidiscrimination, family leave, and union organizing.34 For employees, but not for independent contractors, these rights are protected in the United States by laws like the National Labor Relations Act (1935), the Fair Labor Standards Act (1938), Title VII of the Civil Rights Act (1964), the Occupational Safety and Health Act (1970), and the Family and Medical Leave Act (1993). The rights confer corresponding powers, for example, the power to earn at least minimum wage or the power to collectively negotiate for better conditions. Without legal protection of the rights, the corresponding powers are diminished, accounting for the subordinate positions of crowdsourced workers in the power imbalance.35
The point of Gleibs's argument appears to be that making research participation a source of income puts workers in a position of dependence on requesters to pay and treat workers fairly. Is the problem then one of dependent relationships? Dependent relationships feature in ethical deliberations for institutional contexts where there is a standing power imbalance, where the superordinate power has influence over many important aspects of a subordinate's life. In such contexts, a dependent relationship indicates a position of ethical vulnerability wherein the subordinate's capacity for autonomous action is diminished. This proceeds either through the fear of reprisal by the authority or through the prospect of being favored by the authority. Examples of such contexts are the uniformed services, hospitals, schools, and prisons.36
The ethical significance of dependent relationships lies in the fact that an authority has the standing ability to exert unwelcome influence over some important aspect of a subordinate's life. The fact that the subordinate feels the constant possibility of unwelcome influence and the fact that the subordinate does not know over which important part of her life the influence will be exercised ground the threat to autonomy independently of any particular fear of unwelcome influence. This point applies, mutatis mutandis, to the prospect of gaining the favor of the authority—it is the standing ability of the authority to favor and reward that allows a “dependent relationship” to be invoked whether or not the prospect actualizes.
Dependent relationships do not apply to crowd workers who are considering enrollment in a research study because the relevant relationship has not been set up. And if dependent relationships do apply to crowd workers already enrolled in a crowdsourced research study, they are anomalous and better analyzed in another way, for two reasons. First, crowdsourced research participation is too fleeting to feature the prospect of a superordinate's favor as an autonomy‐overwhelming motivation for action. Second, in the short time span of crowdsourced research participation, the threat of a diminished reputation score could serve as an autonomy‐overwhelming motivation against withdrawing from what one otherwise would desire to withdraw from. However, the specificity of this motivation diverges from the usual application of a dependent relationship as a general (hence unspecified) tendency for autonomy to be threatened. As we shall see in the next section, for such specific complaints, analysis in terms of exploitation rather than dependent relationships frames the ethical problems more usefully.
I have thus far attempted to weaken two autonomy‐based reasons for identifying ethical wrongs in crowdsourced research—undue inducements and dependent relationships. Dire background conditions like economic vulnerability and sources of influence like power imbalance are commonly held to diminish research participants’ autonomy and to result in defective informed consent. It is thought that the individuals’ ability to give valid informed consent diminishes either because, under such conditions, they assign more weight to risky options in their decision‐making than they otherwise would or because the influence of dependent relationships leads them to make decisions that they otherwise would not make. I have attempted to show in the preceding two sections that economic vulnerability and power imbalance in the form of a dependent relationship do not detract from individuals’ making autonomous decisions and are not applicable to crowdsourced research, respectively. The weakening of the two autonomy‐based reasons also weakens the charge that the current practice of using crowd workers as research participants is unethical because of defects in their consent to performing tasks.
Exploitation
In this section, I accept Wertheimer's account of exploitation and show that, instead of considerations of autonomy and related assessments of informed consent, the ethical evaluation of crowdsourced research is clearer along his dimensions of the moral weight and moral force of exploitation. For Wertheimer, A exploits B in a transaction only if A gains unfairly from B in the transaction. A can exploit B in a transaction that is mutually advantageous to both A and B and to which B consents. There are two types of exploitation—harmful exploitation and mutually advantageous and consensual exploitation.37 Wertheimer focuses on the latter type because he thinks that, in cases of harmful exploitation, the harm or coercion distracts us from the morally relevant aspects of exploitation per se. This is demonstrated by how, even though a robber exploits her victim, our attention is drawn to the harm and deprivation of property that the robbery inflicts on the victim rather than to the exploitation of the victim.38
According to Wertheimer, the “moral weight” of a mutually advantageous and consensual exploitation refers to “the intensity of its wrongness”39 and admits of two assessment questions:
First, can it be seriously wrong for A to engage in mutually advantageous and consensual exploitative transaction with B, particularly if A has a right to refuse to transact with B? Second, is it wrong for B to allow him/herself to be exploited?40

The two questions direct our attention to whether A has acted wrongly in the transaction and whether B has acted wrongly in the transaction, respectively.
Let us consider the first question. For Wertheimer, the relevant concept of exploitation is a moralized concept. What this means is that, because the concept of exploitation is applicable to a transaction only if the transaction is unfair, transactions to which this concept applies are prima facie morally wrong.41 The first question takes the prima facie wrongness to be outweighed by the mutually advantageous nature of the transaction and the fact that B consents to it, and it then directs one's attention to factors aggravating the wrongness to a serious level in spite of the mutual advantage and the consent. Parties in A's position can answer the first question by invoking a “nonworseness claim” to deny that their transacting with B is seriously wrong: “Given that I have a right not to transact with B and that transacting with B is not worse than not transacting with B, it can't be seriously wrong for me to engage in an unfair transaction with B.”42 The claim applies to the type of exploitation we are looking at because mutually advantageous and consensual transactions are those in which transacting with B is not worse than not transacting with B.
Wertheimer undermines the truth of the nonworseness claim in two ways. First, he points out that the fact that transacting is not worse than not transacting does not rule out the possibility that it is still seriously wrong to transact. Domestic violence is not worse than murder, but it is still seriously wrong.43 Second, for some mutually advantageous and consensual exploitative transactions, the baseline against which to assess fairness changes. Wertheimer attempts to draw out our intuition on this point by an example of a mutually advantageous and consensual exploitative marriage. Let us visit his attempt in more detail.
Consider an exploitative but mutually advantageous and consensual marriage between A and B, where the marriage is unfair to B. The terms of the marriage are unfair to B in the allocation of money, childcare responsibilities, and domestic chores. Despite the unfair terms, being in this exploitative marriage is better for B than not, because B is truly in love with A, fulfills her life goal of bearing children, and enjoys economic stability. These advantages informed her consent to the marriage.44 To make the example starker, imagine that, at the time of the couple's engagement, B could have expected the unfair terms of the marriage to be exercised to a great extent: the terms stated that B would receive a subsistence level of money from A and toil every day before dawn until late at night, taking care of the five children the couple anticipated and doing the household chores. A was in a position to help B, for example, by hiring a nanny or a housekeeper, but had no intention of doing so. Against the charge that it was seriously wrong for A to marry B under such unfair terms, and where B had the right to refuse to marry A, A replies that it is not seriously wrong because B is better off in the marriage than not, and B agrees.
Resisting this reply, Wertheimer argues that the marriage transaction sets up a new moral baseline that overrides the premarital moral baseline that informed B's consent to marry A. The new baseline requires a higher level of welfare for B than that resulting from the exploitative marriage, even though being in the exploitative marriage resulted in a higher level of welfare for B than that with no marriage.45 This claim is plausible because the institution of marriage, indicated by marriage vows and legal obligations, brings with it higher standards for B's welfare than the terms of the exploitative marriage that B agreed to. A's reply takes the fact that B is better off in the exploitative marriage than not to be sufficient for defusing the seriousness of the wrong of the exploitative marriage. However, that baseline has been replaced by a more stringent one whereby B's welfare level in the exploitative marriage falls short.46
Now let us turn to the second question, “Is it wrong for B to allow him/herself to be exploited?” Wertheimer thinks that it may indeed be wrong for B to allow him‐ or herself to be exploited because, were many Bs to consistently reject the exploitative transactions, it would result eventually in many As transacting on fairer terms for that sort of transaction.47 Bs’ allowing themselves to be exploited is wrong in proportion to the number of attempts by others similarly situated to discourage such transactions by As: B is more wrong to allow him‐ or herself to be exploited if many others similarly situated refuse to be exploited in that way than if few others similarly situated refuse to be exploited in that way.48
This sort of wrongness on B's part can be so weighty that it justifies the state's prohibiting or otherwise interfering with a mutually advantageous and consensual agreement. According to Wertheimer, another ethically relevant aspect of exploitation is its “moral force.” The moral force of exploitation resides in the type of reasons that exploitation generates for society to prohibit or otherwise interfere with exploitative transactions.49 Prohibiting or otherwise interfering with a mutually advantageous and consensual exploitative transaction causes B to be worse off, all things considered. To insist that such an intervention is morally justified commits one to moral perfectionism or paternalism, both of which deny B's autonomy.50 To avoid the denial of B's autonomy, the moral force of the exploitation can be channeled through what Wertheimer calls “strategic intervention.”
Strategic intervention justifies prohibiting or otherwise interfering with a mutually advantageous and consensual exploitative transaction on the basis of the fact that doing so results in fairer transactions of that sort for Bs in the long run.51 Wertheimer makes the case with minimum wage law:
Suppose a society has a minimum wage law. If B's contracting with A to work for less than the minimum wage would leave everything as it is, there would be little reason to prevent B from so contracting with A if A would otherwise not hire B. […] But repealing the law “will not leave everything else unchanged.” If B and others similarly situated are willing to work for a subminimum wage, their agreements will alter the bargaining position of still other Bs. […] [We] prohibit B from being employed for a subminimum wage to ensure that other members of the class of Bs will not be employed for a subminimum wage.52
The moral force of exploitation—how it justifies intervention with the unfair transaction—is not ruled out by the facts of mutual advantage and consent. The force justifies intervention on the basis of fairness to others similarly situated.
As mentioned at the outset of this section, I assume Wertheimer's theory; a critical evaluation of his points is outside the scope of this article. Nonetheless, I motivate the use of his theory by enumerating the practical advantages that applying his analysis of exploitation to the ethical evaluation of crowdsourced research has over analyzing it in terms of threats to autonomy. I begin by preempting an objection: what about straightforward harms, like that of not being paid by unscrupulous requesters for work done? Transactions such as these fall outside the scope of ones that are mutually advantageous and consensual. These exploitations are to be considered harmful and can be straightforwardly assessed in terms of the harms incurred. By contrast, with penumbral cases of exploitation that involve mutual advantage and consent, the aforementioned autonomy arguments and Wertheimer's points about the moral weight and moral force of exploitation are useful in adjudication.
Next, the points made in the section above regarding economic vulnerability and autonomy can address a complaint raised by some crowdsourced workers. On MTurk, a worker's human intelligence task "approval rate" refers to the percentage of such tasks completed by that worker that requesters have approved. Concern about one's approval rate can affect the course of an individual's autonomous decision to enroll in a human intelligence task or to withdraw from it. The approval rate matters to a worker because it determines which, and how many, tasks are available to that worker. Requesters take a reputation score of 95% or above as indicative of quality work. Because it makes sense for a requester to set a worker-acceptance threshold at 95%, any worker with a reputation score below 95% suffers reduced access to human intelligence tasks. If one is economically vulnerable and relies on participating in crowdsourced research as one's primary source of income, any reduction in reputation score is a weighty setback. One's reputation score drops if one withdraws from ("returns") a human intelligence task in which one has enrolled and also if one's submitted work is rejected by the requester. The fear of such reductions can cause a worker to continue performing a human intelligence task from which, all things considered, she wants to withdraw.
This case is analogous to the one considered in the section on economic vulnerability and autonomy, which undermined the charge of undue inducement. There, the background condition was economic hardship. That condition does not detract from workers' autonomous consent to perform human intelligence tasks because its presence does not compromise the voluntariness of consent, and the riskiness of performing such a task is low. Here, the background condition is the high reputation-score threshold. The possibility of a reduced reputation score does not coerce workers into persisting in a task that they would rather not continue, because they can still decide to terminate the task after weighing the reasons against doing so. Furthermore, the consequence of a diminished reputation score is not so damaging that fear of it would cause a worker to downgrade the weight she would otherwise put on the objectively risky option of discontinuing a task: reputation scores are not diminished drastically by one aborted human intelligence task and can be earned back.
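The claim that one aborted task does not drastically diminish a reputation score can be illustrated with simple arithmetic. The sketch below assumes the scoring described above (approved tasks as a percentage of submitted tasks); the task counts are hypothetical, chosen only for illustration:

```python
def approval_rate(approved: int, submitted: int) -> float:
    """Reputation score: percentage of submitted tasks that requesters approved."""
    return 100 * approved / submitted

# A hypothetical worker with 1,000 submitted tasks, 970 of them approved.
before = approval_rate(970, 1000)      # 97.0%

# One additional task is returned or rejected.
after = approval_rate(970, 1001)       # just under 97%, still above 95%

# Approving a further run of tasks restores the score.
recovered = approval_rate(1000, 1031)

print(before, after, recovered)
```

On these (assumed) numbers, a single rejection moves the score by about a tenth of a percentage point, which is consistent with the claim that the setback is recoverable, though a worker near the 95% threshold has far less slack.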
Let us now apply Wertheimer's analysis of exploitation. As mentioned, under MTurk's participation agreement, workers are categorized as independent contractors. This categorization denies them rights that employees enjoy, such as paid leave, overtime pay, the right to organize unions, and a minimum wage. On the MTurk pricing page, requesters are told, "You decide how much to pay Workers for each assignment."53 The independent-contractor categorization in the participation agreement makes it possible for requesters to knowingly and legally pay below minimum wage for the completion of human intelligence tasks, and for workers to be cognizant of this fact when they participate on MTurk.
A range of unfair actions is compatible with the terms of the participation agreement. Requesters can give unrealistically short time estimates and limits for human intelligence tasks, take a long time to approve completed tasks, ignore workers' attempts at communication and malfunctions in completion codes, include long and unpaid screening surveys, and so on.54 Vague instructions can force a worker to return work on which he has wasted time because the instructions were too confusing to complete.55 On MTurk, requesters are not obliged to give reasons for rejections. Human intelligence tasks can be rejected for no reason or for generic reasons that do not provide workers with sufficient justification.56
On Wertheimer's analysis, these actions are exploitative because they are unfair to a party in a transaction. Suppose an economically impoverished worker accepts a human intelligence task from an unscrupulous requester who performs one or more of the actions described in the previous paragraph, so that the worker earns a lower hourly wage than he would like. Compared with earning nothing before the task, the worker's welfare level is marginally improved by completing it. Suppose, too, that he knew what he was in for because he had read MTurk's participation agreement and had experienced similar unscrupulous requester actions, which he understands not to violate the terms of the agreement. This scenario satisfies Wertheimer's characterization of the exploitation as "mutually advantageous and consensual."
We now have to expose the moral weight behind the unfairness of the transaction in order for it to meet Wertheimer's necessary condition for exploitation. Entering into the requester-worker transaction places both parties in a relationship in which the moral baseline has changed. Considered independently of the relationship, the worker's welfare is improved beyond his original level; but the moral baseline of the new relationship requires his welfare to be higher still. Beyond the improvement over the pretransaction welfare level, fairness in the new relationship requires that the worker be apprised of a reasonable time to complete the human intelligence task, that his legitimate queries be responded to, and that completion-code malfunctions be rectified. Fairness requires that workers not be obliged to waste their time completing unpaid screening questionnaires or deciphering unclear instructions. It requires that workers be told why their human intelligence tasks were rejected, in enough detail to be useful in their efforts to avoid future rejections.
We can also apply Wertheimer's "strategic intervention" characterization of the moral force of exploitation. Recall that mutually advantageous and consensual exploitation can generate a moral force that justifies societal intervention in such transactions. In the example above, mutually advantageous and consensual labor for less than minimum wage affects the bargaining positions of similarly situated others, resulting in disadvantage to the relevant class of workers. Beyond the prevention of subminimum-wage work that applies here, the moral force of exploitation can justify preventing other exploitative transactions that propagate unfairness—workers' performing human intelligence tasks with underreported completion times, tolerating requester rejections for no reason, filling out lengthy screening questionnaires for no pay, and so on—while retaining those exploitative transactions57 that allow workers an avenue of economic improvement.
As a guide to adjudication, Wertheimer's analysis of exploitation is superior to analysis in terms of threats to autonomy for three reasons. First, our basic intuitions about autonomy are preserved even though threats to autonomy are not invoked as a direct evaluation criterion. Second, the reasoning can accommodate a variety of possibly conflicting fundamental norms, rather than insisting on the primacy of a principle of respect for autonomy. This allows IRB members of different fundamental moral persuasions to come to agreement about protocols that use crowd workers as research participants. Third, the framework minimizes misidentifications of undue inducement by presenting a ready target for attributions of unethicality in worker-requester transactions. Below, I consider each of these reasons in turn.
First, informed consent can be characterized as waiving existing norms that determine an action to be unethical.58 The practice and efficacy of informed consent are grounded in respect for personal autonomy, the capacity of a competent individual to guide the course of her life by her decisions. Her consent to an action done to her that would otherwise be unethical waives the relevant norms that determine it to be unethical. For example, your unethical act of stealing my money can be transformed, with my consent, into an act of borrowing my money or receiving it as a gift. My consent waives the norm determining that an action violating my property rights is unethical.
Applying Wertheimer's analysis of exploitation to crowd workers as research participants allows a separate, prior assessment of whether particular deployments of crowd workers were unethical on the basis of obvious cases of defective or absent consent. The application thus does not marginalize the value of personal autonomy to ethical evaluation. Beyond the obvious cases, however, it is usually unclear whether autonomy was disrespected, because each transaction is mutually advantageous and there is evidence of consent. Wertheimer's analysis of exploitation is precisely suited to such cases because his discussion centers on mutually advantageous and consensual exploitation, thereby filtering out factors other than the exploitation itself that might occlude our ethical evaluation.
Second, while the unfairness of an exploitation grounds its prima facie unethicality, Wertheimer does not commit to a moral theory of unfairness. IRB members are chosen in part for their diversity of viewpoints and are often of different fundamental moral persuasions. Using Wertheimer's analysis of exploitation allows for agreement on unfairness based on different, more fundamental reasons, which may be incompatible.59 For example, IRB members can agree on the unfairness of an exploitation while disagreeing more fundamentally about whether the unfairness violates human dignity or reduces utility.
Third, Wertheimer's framework allows us to minimize the sort of misevaluation that informed the discussion above about economic vulnerability and autonomy. Recall that the charge of undue inducement to participate in research has been leveled at requesters. The basis of this charge is the fact that potential participants in dire economic straits can be influenced to enter into a transaction wherein they are poorly paid in an attempt to ameliorate their financial situations. Yet the charge is inconclusive because the competency of these potential participants has not been clearly compromised by the fact that the offer is made against the backdrop of economic exigency. It is not clear that one cannot autonomously opt to work for low pay in order to ameliorate one's situation.
What seems to be going on in these cases is that the ethics reviewer feels that there is something unethical about the offer, scans the case for an ethical scapegoat, and, because the economic background gives weight to the option of participating for low wages, identifies that weight as undue inducement. However, we commonly face situations in which background conditions cause an option to gain weight in our autonomous practical reasoning that it otherwise would not have. A common example involves running out of time: if one were delayed and then had to rush to an important appointment, one might consider an option one ordinarily would not, such as taking an expensive taxi instead of a bus. The difference in background conditions lends the option more weight. Just as we can autonomously choose the taxi over the bus in such situations, so a potential worker in economic need can autonomously choose to enroll in a crowdsourced study involving a human intelligence task.
Consider a typical crowdsourced research protocol in which harm to participants is minimal and informed consent is sought. Despite these features, an IRB member has ethical qualms about the protocol. The advantage of using Wertheimer's analysis of exploitation to evaluate the ethics of crowdsourced research is that his analysis has a ready correlate for such a qualm—the unfairness of the transaction. This removes the pressure to identify another source of unethicality and thus minimizes the chance that undue inducement is incorrectly identified as the locus of unethicality. There are genuine cases of undue inducement, in which an inordinate increase in incentives to participate in objectively very risky research diminishes one's perception of the risk. The significance of these cases is diluted when they are lumped together with misidentified cases of undue inducement. Also, as Wertheimer notes, disallowing exploitative transactions on grounds of autonomy-denying unethicality can have the consequence of denying people a way of alleviating their financial difficulties.60
Vulnerability and IRB Review
Robert Goodin categorizes strategies to counter exploitation into two types. The first is to empower the subordinate party to "defend itself" against the superordinate party with respect to the unfairness. The second is to "forestall the threat of exploitation" by denying the superordinate party discretionary control over the resources needed by the subordinate party.61 Both strategies are available to prospective crowdsourced research participants but may not be known to them.
Crowdsourced workers have adopted the first type of strategy by organizing themselves into online communities that share information and resources. Communities like the now-defunct Turkopticon (replaced and enhanced by TurkerView), TurkNation, MTurk Crowd, and Turker Hub62 rank requesters, blacklist errant ones, dispense advice about crowd working, and make available plug-ins that automate the sourcing of good human intelligence tasks, thus making the discovery of such tasks more efficient for workers. The sharing of such information acts as a countermeasure against power imbalances because requesters with bad track records will find it harder to enroll workers as research subjects. In addition, the information gives requesters an incentive to behave fairly toward workers who enroll in their human intelligence tasks, because being highly ranked increases enrollment by workers with high approval rates.
As previously noted, crowdsourced workers have also collaborated with academic requesters, as a group named WeAreDynamo, to produce the document Guidelines for Academic Requesters. These guidelines are pertinent to IRBs' assessments of and recommendations for research protocols that involve crowd workers as research subjects. Before this set of guidelines appeared, IRBs could rely only on the general principles, norms, and guidelines laid out by venerable documents like The Belmont Report, the Council for International Organizations of Medical Sciences guidelines, and the guidelines of particular disciplines, like those of the American Psychological Association.63 However, because most IRBs are unfamiliar with the activity of crowd working, they lack enough knowledge of its relevant aspects to apply those general principles, norms, and guidelines.
The company Prolific is an example of a dedicated crowdsourced research platform that is ethically mindful. Prolific sets a minimum payment threshold at £5 or $7.25 an hour, which conforms to minimum wage requirements in the United Kingdom and the United States, respectively.64 Should a participant want to query a researcher over rejected work, they can contact the researcher directly using the platform's messaging system.65 The platform also has a “Return and cancel reward” button that allows participants to withdraw from research that they have enrolled in without detriment to their “Prolific Score,” the equivalent of MTurk's reputation score.66 Finally, at the end of each research participation, participants are asked for feedback in the form of a thumbs up or thumbs down, which can alert the platform to technical issues or to studies that are problematic to participant experience.67
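Prolific's hourly floor translates into a per-study minimum reward that a reviewer (or requester) can sanity-check against a study's estimated completion time. A minimal sketch, assuming the £5 and $7.25 hourly figures quoted above; the 12-minute completion time is a hypothetical example:

```python
# Hedged sketch: smallest payment that keeps a study at or above an hourly
# pay floor. The hourly rates are the UK (£5) and US ($7.25) figures cited
# in the text; the study durations below are made up for illustration.

def min_reward(est_minutes: float, hourly_floor: float) -> float:
    """Minimum reward (same currency as hourly_floor) for a study of the
    given estimated completion time, rounded to two decimal places."""
    return round(est_minutes / 60 * hourly_floor, 2)

print(min_reward(12, 7.25))  # a 12-minute study against the US floor
print(min_reward(12, 5.00))  # the same study against the UK floor
```

A calculation like this is only as honest as the completion-time estimate, which is why underreported completion times (discussed above for MTurk) undermine nominally fair per-task pay.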
At the time of this writing, Prolific was developing a system that allows researchers to make partial payments for partially satisfactory work, as well as a requirement and mechanism for researchers to justify rejections to participants.68 Prolific appears to be the sole explicitly ethics-conscious crowdsourced research platform in existence.69 Making its existence known and encouraging participation on it instantiates Goodin's second strategy against exploitation, that of denying the superordinate party discretionary control over the resources needed by the subordinate party. Here, the superordinate, potentially exploitative, party is not the requester but the platform itself, which can profit at workers' expense through potentially unfair policies, such as categorizing workers as independent contractors so that minimum wage requirements can be circumvented.
From the foregoing discussion, there are three bulwarks against the threat of exploitation of crowdsourced research participants. These take the forms of collective action by the crowd workers themselves, ground‐up crowdsourced research ethics guidelines, and alternative ethics‐conscious crowdsourced research platforms. They guard against the threat of exploitation because the first falls under Goodin's self‐defense strategy and the third falls under his forestalling of threat strategy, while the second can fall under either strategy, depending on the guideline in question. The following points enumerate how these resources might be taken into consideration by an IRB in reviewing a crowdsourced research protocol:
- Recommend the inclusion of information on collective action resources, guidelines, and alternative platforms on the information page of the task. This need not be onerous and can consist of one link each, under the heading “Resources.”
- Publish the links to the WeAreDynamo Guidelines for Academic Requesters and to at least one ethics‐conscious alternative platform to MTurk in IRB protocol review submission instructions internal to the institution.
- Use the WeAreDynamo Guidelines in reviewing crowdsourced research protocols.
- Assess the relevant ethical aspects of a crowdsourced research protocol by threats of participant exploitation rather than by threats to participant autonomy.70
I end with a general remark about the role of vulnerable groups in ethics deliberations. As mentioned at the end of the first section, there has been a shift toward thinking about ethics-relevant vulnerability in terms of context dependence rather than only in terms of the static vulnerable groups of the past. Significant voices who argue for context-dependent vulnerability go on to dismiss vulnerable groups as having no role to play in ethics deliberations. Yet vulnerable populations tend to be wronged across a wide enough range of contexts, or to suffer wrongs of great enough intensity, to be identifiable as groups. One reason for identifying vulnerable groups is that, when invoked in ethics deliberation, a vulnerable group serves to focus attention on what is historically ethically salient about the case at hand by contributing a ready-made module of possible wrongs and ways of mitigation that are associated with such a group.
The WeAreDynamo Guidelines contain ways of treating crowdsourced research participants ethically. These are informed by the ground-up experiences of crowd workers and concerned researchers, and the real vulnerabilities the guidelines seek to mitigate are agreed upon as ethically relevant. Because there is a substantial set of real ways in which a crowdsourced research participant can be wronged, the existence of the guidelines implies that crowdsourced research participants constitute a vulnerable group. When crowd workers are deemed a vulnerable group in ethics deliberations, the content of the guidelines serves to focus attention on what is historically ethically salient about the case at hand by contributing a ready-made module of possible wrongs and ways of mitigation. This not only makes ethics deliberation more efficient but also enables IRB members who are unfamiliar with crowd working to contribute fruitfully to protocol review.
- 1 Gleibs, I. H., “Are All ‘Research Fields’ Equal? Rethinking Practice for the Use of Data from Crowdsourcing Market Places,” Behavior Research Methods 49, no. 4 (2017): 1334-35.
- 2 Ibid., 1335.
- 3 Ibid., 1334.
- 4 For example, National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research (Bethesda, MD: The Commission, 1979), 13; International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use, “Integrated Addendum to ICH E6(R1): Guideline for Good Clinical Practice,” current step 4, version dated November 9, 2016, https://www.ich.org/fileadmin/Public_Web_Site/ICH_Products/Guidelines/Efficacy/E6/E6_R2__Step_4_2016_1109.pdf.
- 5 Such a characterization of ethically relevant vulnerability is plausible for two reasons. First, it builds the ethical dimension into the characterization as wrongs instead of merely harms. This excludes the windsurfer's vulnerability from relevance. Second, it determines that the risk of being wronged has to be above a baseline before ethically relevant vulnerability can be ascribed. This guards against triviality because every human subject faces a minimal risk of being wronged in ordinary life. For an example of such a view, see Hurst, S. A., “Vulnerability in Research and Health Care: Describing the Elephant in the Room?,” Bioethics 22, no. 4 (2008): 191-202, at 195.
- 6 See, for instance, the seven types of vulnerability in Kipnis, K., “Seven Vulnerabilities in the Pediatric Research Subject,” Theoretical Medicine 24 (2003): 107-20; the discussion of layers of vulnerability in Luna, F., “Elucidating the Concept of Vulnerability: Layers Not Labels,” International Journal of Feminist Approaches to Bioethics 2, no. 1 (2009): 121-39; and the distinction between inherent, situational, and pathogenic vulnerability in Rogers, W., C. Mackenzie, and S. Dodds, “Why Bioethics Needs a Concept of Vulnerability,” International Journal of Feminist Approaches to Bioethics 5, no. 2 (2012): 11-38.
- 7, “Norway's Prisons Are Doing Something Right,” New York Times, December 18, 2012.
- 8 For three examples, see American Psychological Association (APA), Ethical Principles of Psychologists and Code of Conduct (APA, January 2017), https://www.apa.org/ethics/code/, “General Principles” section and section 8, “Research and Publication”; Council for International Organizations of Medical Sciences (CIOMS) in collaboration with World Health Organization, International Ethical Guidelines for Health-Related Research Involving Humans (Geneva, Switzerland: CIOMS, 2016), 57-59; and The British Psychological Society, Code of Human Research Ethics (Leicester, UK: BPS, 2014), 6-7, 31-32.
- 9“ Introduction to Amazon Mechanical Turk,” Amazon Web Services, application program interface version, July 1, 2017, https://docs.aws.amazon.com/AWSMechTurk/latest/AWSMechanicalTurkRequester/IntroductionArticle.html.
- 10 Gleibs, “ Are All ‘Research Fields’ Equal?,” 1336.
- 11 “Fair Payment,” Dynamo Wiki, last modified April 4, 2016, https://web.archive.org/web/20190609195656/http://wiki.wearedynamo.org/index.php?title=Fair_payment.
- 12 Faden, R. R., and T. L. Beauchamp, A History and Theory of Informed Consent (New York: Oxford University Press, 1986), 238.
- 13 Miller, F. G., and A. Wertheimer, “Preface to a Theory of Consent Transactions: Beyond Valid Consent,” in The Ethics of Consent: Theory and Practice, ed. F. G. Miller and A. Wertheimer (New York: Oxford University Press, 2010), 93.
- 14 Ibid.
- 15 Ibid., 94.
- 16For Faden and Beauchamp, this is demonstrated by the requirement for the absence of controlling influences (A History and Theory of Informed Consent, 238, 256– 61).
- 17 Miller and Wertheimer, “ Preface to a Theory of Consent Transactions,” 94.
- 18 Faden and Beauchamp, A History and Theory of Informed Consent, 341.
- 19 Ibid., 340.
- 20 Ibid., 339.
- 21 Ibid., 340.
- 22 Ibid., 344– 46.
- 23 Ibid., 357.
- 24 Wertheimer, A., Exploitation (Princeton, NJ: Princeton University Press, 1999), 270.
- 25 This is suggested by “[u]ndue inducements may be troublesome because: (1) offers that are too attractive may blind prospective subjects to the risks or impair their ability to exercise proper judgment” (Office for Human Research Protections, IRB Guidebook [Office for Human Research Protections, 1993], chapter 3, section G, at http://wayback.archive-it.org/org-745/20150930181805/http://www.hhs.gov/ohrp/archive/irb/irb_guidebook.htm). This is also a commonly held view, as suggested by Emily Largent and colleagues in “Money, Coercion, and Undue Inducement: A Survey of Attitudes about Payments to Research Participants”: “Virtually all respondents [who were IRB members and research ethics professionals] agreed that an offer constitutes undue influence if it ‘distorts a subject's ability to perceive accurately the risks and benefits of research’” (in IRB: Ethics & Human Research 12, no. 1: 1-8, at 6).
- 26, “Drug Test Cowboys: The Secret World of Pharmaceutical Trial Subjects,” Wired, April 24, 2007, https://www.wired.com/2007/04/feat-drugtest/. The upper limit for phase 1 payment appears to be $10,000. I've increased this to $20,000 to make the application of undue inducement starker. Being well aware of the study's dangers, Willy is risk averse enough to have a threshold of $10,000 before he considers participating.
- 27“ After 150-Plus Treatment-Related Deaths in Clinical Trials since 2014, Lawsuit Is Filed against FDA to Increase Protection of Human Subjects,” Center for Responsible Science, October 24, 2017, https://www.crs501.org/news/press-releases/53-after-150-plus-treatment-related-deaths-in-clinical-trials-since-2014-lawsuit-is-filed-against-fda-to-increase-protection-of-human-subjects.
- 28 Whitney, S. N., Balanced Ethics Review: A Guide for Institutional Review Board Members (Cham, Switzerland: Springer International Publishing, 2016), 63-64.
- 29, et al., “ Distress in Response to and Perceived Usefulness of Trauma Research Interviews,” Journal of Trauma Dissociation 4, no. 2 (2003): 131– 42, cited in Whitney, Balanced Ethics Review, 64.
- 30 Milgram, S., “Behavioral Study of Obedience,” Journal of Abnormal and Social Psychology 67, no. 4 (1963): 371-78, at 375.
- 31 Aronson, E., Not by Chance Alone: My Life as a Social Psychologist (New York: Basic Books, 2010), 148-49, cited in Whitney, Balanced Ethics Review, 62.
- 32 Gleibs, “ Are All ‘Research Fields’ Equal?,” 1336. For similar arguments, see Dickert, N., and C. Grady, “What's the Price of a Research Subject? Approaches to Payment for Research Participation,” New England Journal of Medicine 341, no. 3 (1999): 198– 203, and Grant, R., and J. Sugarman, “Ethics in Human Subjects Research: Do Incentives Matter?,” Journal of Medicine and Philosophy 29, no. 6 (2004): 717-38.
- 33 Amazon Mechanical Turk, “ Participation Agreement,” last updated, December 17, 2018, https://www.mturk.com/participation-agreement.
- 34, “ On the Use of Crowdsourcing Labor Markets in Research,” Perspectives on Politics 14, no. 2 (2016): 422– 31, at 424.
- 35 It is not clear that crowd workers do not meet any of the conditions that employees do. Alek Felstiner lists seven factors that the U.S. Department of Labor considers relevant to determining whether an individual is an employee and analyzes the application of each factor to MTurk workers. His analysis shows that the nature of human intelligence tasks is too varied to conclusively determine that MTurk workers are not employees (Felstiner, A., “Working the Crowd: Employment and Labor Law in the Crowdsourcing Industry,” Berkeley Journal of Employment and Labor Law 32, no. 1 (2011): 171-79).
- 36Dependent relationships appear often in IRB guidelines and institutional ethics codes, for example, Council for International Organizations of Medical Sciences in collaboration with World Health Organization, International Ethical Guidelines for Health-Related Research Involving Humans, commentary to guideline 9, 35-36, for dependent relationships in the biomedical research, and The British Psychological Society, Code of Human Research Ethics ( Leicester, UK: BPS, 2014), section 10.1.3, “Individuals in a dependent or unequal relationship,” 32-33, for dependent relationships in psychological research. See also American Psychological Association, Ethical Principles of Psychologists and Code of Conduct, section 3.08, “Exploitative Relationships.”
- 37 Wertheimer, Exploitation, 14.
- 38 Ibid., 15– 16.
- 39 Ibid., 28.
- 40 Ibid., 279.
- 41 Ibid., 6.
- 42 Ibid., 289.
- 43 Ibid., 290.
- 44 Ibid.
- 45 Ibid., 291.
- 46 Ibid.
- 47 Ibid., 294.
- 48 Ibid., 294– 95.
- 49 Ibid., 296.
- 50 Ibid., 300.
- 51 Ibid., 301.
- 52 Ibid.
- 53“ Amazon Mechanical Turk Pricing,” Amazon MTurk Requester, https://requester.mturk.com/pricing.
- 54 “Basics of How to Be a Good Requester,” Dynamo Wiki, last modified April 4, 2016, https://web.archive.org/web/20190609195656/http://wiki.wearedynamo.org/index.php?title=Basics_of_how_to_be_a_good_requester.
- 55, “The Internet Is Enabling a New Kind of Poorly Paid Hell,” Atlantic, January 23, 2018, https://www.theatlantic.com/business/archive/2018/01/amazon-mechanical-turk/551192/.
- 56, et al., “ Ethics and Tactics of Professional Crowdwork,” XRDS: Crossroads 17, no. 2 (2010): 39– 43, at 41.
- 57This refers to those in which a requester pays close to the lower limit of a range of pay that a worker is willing to accept for performing a human intelligence task. These transactions are unfair because the worker would be happier with more pay, but they are mutually advantageous and consensual.
- 58 See, for instance, Hurd, H. M., “The Moral Magic of Consent,” Legal Theory 2, no. 2 (1996): 121-46; Manson, N. C., and O. O'Neill, Rethinking Informed Consent in Bioethics (Cambridge: Cambridge University Press, 2007), 72.
- 59 This advantageous contribution to group ethical deliberation is noted in the context of casuistic reasoning in Jonsen, A. R., and S. Toulmin, The Abuse of Casuistry: A History of Moral Reasoning (Berkeley, CA: University of California Press, 1988), 16-19.
- 60 Wertheimer, Exploitation, 299.
- 61 Goodin, R. E., Protecting the Vulnerable (Chicago: University of Chicago Press, 1985), 202.
- 62 Pop-up window on the Turkopticon homepage, https://turkopticon.ucsd.edu/, October 23, 2018.
- 63 National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research ( Washington, DC: U.S. Government Printing Office, 1979); Council for International Organizations of Medical Sciences in collaboration with World Health Organization, International Ethical Guidelines for Health-Related Research Involving Humans; American Psychological Association, Ethical Principles of Psychologists and Code of Conduct.
- 64, and , “ So, You Want to Recruit Participants Online?,” Prolific blog, November 20, 2018, https://blog.prolific.ac/so-you-want-to-recruit-participants-online/.
- 65“ My Work Was Rejected, What Can I Do?,” Prolific, last updated January 21, 2019, https://support.prolific.ac/article/55-my-work-was-rejected-what-can-i-do.
- 66“ What Does the ‘Return and Cancel Reward’ Action in Manage Studies Do?,” Prolific, last updated February 11, 2019, https://support.prolific.ac/article/62-what-does-the-return-and-cancel-reward-action-in-manage-studies-do.
- 67“ Feedback,” Prolific, last updated January 7, 2019, https://support.prolific.ac/article/68-feedback.
- 68“ My Work Was Rejected, What Can I Do?,” Prolific.
- 69 Other crowdsourced research platforms include Qualtrics and SurveyMonkey. Turk Prime and Psi Turk are primarily improved extensions of MTurk. None of these have explicit statements of ethics consciousness or features that are explicitly aimed at improving the ethics of crowdsourced research.
- 70 This is not to be taken to be exhaustive of an ethics audit of a crowdsourced research protocol. For example, there should also be assessments of whether the privacy of participants and the confidentiality of their personal data receive adequate protection. These are also covered by the WeAreDynamo Guidelines for Academic Requesters, last modified on June 20, 2018, https://web.archive.org/web/20190609195657/http://wiki.wearedynamo.org/index.php?title=Guidelines_for_Academic_Requesters.