Shame on you
A cleansing fire: Moral outrage alleviates guilt and buffers threats to one’s moral identity
Zachary Rothschild & Lucas Keefer
Motivation and Emotion, April 2017, Pages 209–229
Abstract:
Why do people express moral outrage? While this sentiment often stems from a perceived violation of some moral principle, we test the counterintuitive possibility that moral outrage at third-party transgressions is sometimes a means of reducing guilt over one's own moral failings and restoring a moral identity. We tested this guilt-driven account of outrage in five studies examining outrage at corporate labor exploitation and environmental destruction. Study 1 showed that personal guilt uniquely predicted moral outrage at corporate harm-doing and support for retributive punishment. Ingroup (vs. outgroup) wrongdoing elicited outrage at corporations through increased guilt, while the opportunity to express outrage reduced guilt (Study 2) and restored perceived personal morality (Study 3). Study 4 tested whether these effects were due merely to downward social comparison, and Study 5 showed that guilt-driven outrage was attenuated by an affirmation of moral identity in an unrelated context.
Conservatives Are More Reluctant to Give and Receive Apologies Than Liberals
Matthew Hornsey et al.
Social Psychological and Personality Science, forthcoming
Abstract:
This article examines the proposition that conservatives will be less willing than liberals to apologize and less likely to forgive after receiving an apology. In Study 1, we found evidence for both relationships in a nine-nation survey. In Study 2, participants wrote an open-ended response to a victim of a hypothetical transgression they had committed. More conservative participants were less likely to include apologetic elements in their response. We also tested two underlying mechanisms for the associations: social dominance orientation (SDO) and entity beliefs about human nature. SDO emerged as a stronger and more consistent mediator than entity beliefs. Apologies are theorized to be a rhetorical vehicle for removing power inequities in relationships posttransgression. Consistent with this theorizing, it was those who are relatively high in commitment to equality (i.e., those high in liberal ideology and low in SDO) who are most likely to provide and reward apologies.
Paternalistic Lies
Matthew Lupoli, Emma Edelman Levine & Adam Eric Greenberg
University of California Working Paper, March 2017
Abstract:
Many lies that are intended to help the target of the lie require the deceiver to make assumptions about the target’s best interests. In other words, lying often involves a paternalistic motive. Across six studies (N = 2,064), we show that although targets appreciate lies that yield unequivocal benefits relative to honesty, they penalize paternalistic lies. We identify three mechanisms behind the harmful effects of paternalistic lies, finding that targets believe that paternalistic liars are (a) not acting in targets’ best interests, (b) violating their autonomy, and (c) inaccurately predicting their preferences. Importantly, targets’ aversion towards paternalistic lies persists even when targets receive their preferred outcome as a result of a lie. Additionally, deceivers can mitigate some, but not all, of the harmful effects of paternalistic lies by directly communicating their good intentions. These results contribute to our understanding of deception and paternalistic policies.
Consistent Vegetarianism and the Suffering of Wild Animals
Thomas Sittler-Adamczewski
Journal of Practical Ethics, December 2016, Pages 94–102
Abstract:
Ethical consequentialist vegetarians believe that farmed animals have lives that are worse than non-existence. In this paper, I sketch out an argument that wild animals have worse lives than farmed animals, and that consistent vegetarians should therefore reduce the number of wild animals as a top priority. I consider objections to the argument, and discuss which courses of action are open to those who accept the argument.
Clever enough to tell the truth
Bradley Ruffle & Yossef Tobol
Experimental Economics, March 2017, Pages 130–155
Abstract:
We conduct a field experiment on 427 Israeli soldiers who each rolled a six-sided die in private and reported the outcome. For every point reported, the soldier received an additional half-hour early release from the army base on Thursday afternoon. We find that the higher a soldier’s military entrance score, the more honest he is on average. We replicate this finding on a sample of 156 civilians paid in cash for their die reports. Furthermore, the civilian experiments reveal that two measures of cognitive ability predict honesty, whereas general self-report honesty questions and a consistency check among them are of no value. We provide a rationale for the relationship between cognitive ability and honesty and discuss its generalizability.
Increasing honesty in humans with noninvasive brain stimulation
Michel André Maréchal et al.
Proceedings of the National Academy of Sciences, forthcoming
Abstract:
Honesty plays a key role in social and economic interactions and is crucial for societal functioning. However, breaches of honesty are pervasive and cause significant societal and economic problems that can affect entire nations. Despite its importance, remarkably little is known about the neurobiological mechanisms supporting honest behavior. We demonstrate that honesty can be increased in humans with transcranial direct current stimulation (tDCS) over the right dorsolateral prefrontal cortex. Participants (n = 145) completed a die-rolling task in which they could misreport their outcomes to increase their earnings, thereby pitting honest behavior against personal financial gain. Cheating was substantial in a control condition but decreased dramatically when neural excitability was enhanced with tDCS. This increase in honesty could not be explained by changes in material self-interest or moral beliefs and was dissociated from participants' impulsivity, willingness to take risks, and mood. A follow-up experiment (n = 156) showed that tDCS only reduced cheating when dishonest behavior benefited the participants themselves rather than another person, suggesting that the stimulated neural process specifically resolves conflicts between honesty and material self-interest. Our results demonstrate that honesty can be strengthened by noninvasive interventions and concur with theories proposing that the human brain has evolved mechanisms dedicated to controlling complex social behaviors.
Moral alchemy: How love changes norms
Rachel Magid & Laura Schulz
Cognition, forthcoming
Abstract:
We discuss a process by which non-moral concerns (that is, concerns agreed to be non-moral within a particular cultural context) can take on moral content. We refer to this phenomenon as moral alchemy and suggest that it arises because moral obligations of care entail recursively valuing loved ones' values, thus allowing propositions with no moral weight in themselves to become morally charged. Within this framework, we predict that when people believe a loved one cares about a behavior more than they do themselves, the moral imperative to care about the loved one's interests will raise the value of that behavior, such that people will be more likely to infer that third parties will see the behavior as wrong (Experiment 1) and the behavior itself as more morally important (Experiment 2) than when the same behaviors are considered outside the context of a caring relationship. The current study confirmed these predictions.
Are Preferences for Allocating Harm Rational?
Alexander Davis, John Miller & Sudeep Bhatia
Decision, forthcoming
Abstract:
The allocation of nonmonetary harm is an important — yet understudied — domain of choice. Using a modified Dictator Game, we asked 27 participants to allocate a harmful event (time spent holding a hand in ice water) between themselves and an anonymous stranger. We found substantially less coherent, and more egalitarian, preferences compared to other studies that ask participants to allocate monetary endowments. Specifically, 26% of participants made choices inconsistent with utility maximization, and 78% of participants behaved in an egalitarian manner. In comparable studies of monetary gains, only 2% were inconsistent and 30% egalitarian. The results suggest that the focus on monetary gains likely overestimates the rationality of other-regarding preferences and underestimates egalitarianism.
The Paradox of Group Mind: “People in a Group” Have More Mind Than “a Group of People”
Erin Cooley et al.
Journal of Experimental Psychology: General, forthcoming
Abstract:
Three studies examine how subtle shifts in framing can alter the mind perception of groups. Study 1 finds that people generally perceive groups to have less mind than individuals. However, Study 2 demonstrates that changing the framing of a group from “a group of people” to “people in a group” substantially increases mind perception — leading to comparable levels of mind between groups and individuals. Study 3 reveals that this change in framing influences people's sympathy for groups, an effect mediated by mind perception. We conclude that minor linguistic shifts can have big effects on how groups are perceived — with implications for mind perception and sympathy for mass suffering.
The Surprising Costs of Silence: Asymmetric Preferences for Prosocial Lies of Commission and Omission
Emma Edelman Levine et al.
University of Chicago Working Paper, February 2017
Abstract:
Across five experiments (N = 2,624), we document a robust asymmetry between communicators' and targets' judgments of and preferences for deception. Communicators are more likely to focus on whether a particular communication tactic reflects a moral transgression, whereas targets are more likely to focus on whether a particular communication tactic helps or harms them. As a result, communicators often believe that omitting information is more ethical than telling a prosocial lie, whereas targets often believe the opposite. We document this asymmetry within the context of healthcare discussions, employee layoffs, and economic games, among both clinical populations (i.e., oncologists and cancer patients) and lay people. We identify moderators and downstream consequences of this asymmetry. We conclude by discussing psychological and practical implications for medicine, management, behavioral ethics, and human communication.
Not Taking Responsibility: Equity Trumps Efficiency in Allocation Decisions
Tom Gordon-Hecker et al.
Journal of Experimental Psychology: General, forthcoming
Abstract:
When allocating resources, equity and efficiency may conflict. When resources are scarce and cannot be distributed equally, one may choose to destroy resources and reduce societal welfare in order to maintain equity among society's members. We examined whether people are averse to inequitable outcomes per se or to being responsible for deciding how inequity should be implemented. Three scenario-based experiments and one incentivized experiment revealed that participants are inequity-responsibility averse: when asked to decide which of two equally deserving individuals should receive a reward, they would rather discard the reward than choose who gets it. This tendency diminished significantly when participants could use a random device to allocate the reward. The findings suggest that it is more difficult to be responsible for the way inequity is implemented than to create inequity per se.
Morality constrains the default representation of what is possible
Jonathan Phillips & Fiery Cushman
Proceedings of the National Academy of Sciences, forthcoming
Abstract:
The capacity for representing and reasoning over sets of possibilities, or modal cognition, supports diverse kinds of high-level judgments: causal reasoning, moral judgment, language comprehension, and more. Prior research on modal cognition asks how humans explicitly and deliberatively reason about what is possible but has not investigated whether or how people have a default, implicit representation of which events are possible. We present three studies that characterize the role of implicit representations of possibility in cognition. Collectively, these studies differentiate explicit reasoning about possibilities from default implicit representations, demonstrate that human adults often default to treating immoral and irrational events as impossible, and provide a case study of high-level cognitive judgments relying on default implicit representations of possibility rather than explicit deliberation.