Good and Bad
The paucity of morality in everyday talk
Mohammad Atari et al.
Scientific Reports, April 2023
Abstract:
Given its centrality in scholarly and popular discourse, morality should be expected to figure prominently in everyday talk. We test this expectation by examining the frequency of moral content in three contexts, using three methods: (a) Participants’ subjective frequency estimates (N = 581); (b) Human content analysis of unobtrusively recorded in-person interactions (N = 542 participants; n = 50,961 observations); and (c) Computational content analysis of Facebook posts (N = 3,822 participants; n = 111,886 observations). In their self-reports, participants estimated that 21.5% of their interactions touched on morality (Study 1), but objectively, only 4.7% of recorded conversational samples (Study 2) and 2.2% of Facebook posts (Study 3) contained moral content. Collectively, these findings suggest that morality may be far less prominent in everyday life than scholarly and popular discourse, and laypeople themselves, presume.
Moral Thin-Slicing
Julian De Freitas & Alon Hafri
Harvard Working Paper, December 2022
Abstract:
Given limits on time and attention, people increasingly make moral evaluations in a few seconds or less, yet it is unknown whether such snap judgments are accurate. The literature suggests that people form fast moral impressions once they already know what has transpired (i.e., who did what to whom, and whether there was harm involved), but how long does it take them to extract and integrate these ‘moral atoms’ from a visual scene in the first place to decide who is morally wrong? Using controlled stimuli, we find that people are capable of ‘moral thin-slicing’: they reliably identify moral transgressions from visual scenes presented in the blink of an eye (< 100 ms). Across four studies, we show that this remarkable ability arises because observers independently and rapidly extract the atoms of moral judgment: event roles (who acted on whom) and harm level (harmful or unharmful). In sum, despite the rapid rate at which people view provocative moral transgressions online, as when consuming viral videos on social media or negative news about companies’ actions toward customers, their snap moral judgments about visual events can be surprisingly accurate.
Choosing Money Over Meaningful Work: Examining Relative Job Preferences for High Compensation Versus Meaningful Work
Sarah Ward
Personality and Social Psychology Bulletin, forthcoming
Abstract:
People sometimes must choose between prioritizing meaningful work and high compensation. Eight studies (N = 4,177; 7 preregistered) examined the relative importance of meaningful work and salary in evaluations of actual and hypothetical jobs. Although meaningful work and high salaries are both perceived as highly important job attributes when evaluated independently, when presented with tradeoffs between these attributes, participants consistently preferred high-salary jobs with low meaningfulness over low-salary jobs with high meaningfulness (Studies 1-5). Forecasts of happiness and meaning outside of work helped explain condition differences in job interest (Studies 4 and 5). Extending the investigation to actual jobs, Studies 6a and 6b showed that people express stronger preferences for higher pay (vs. more meaningful work) in their current jobs. Although meaningful work is a strongly valued job attribute, it may be less influential than salary in evaluations of hypothetical and current jobs.
The AI Effect: People rate distinctively human attributes as more essential to being human after learning about artificial intelligence advances
Erik Santoro & Benoît Monin
Journal of Experimental Social Psychology, forthcoming
Abstract:
As news reports describing Artificial Intelligence (AI) proliferate, will people's perceptions of human nature change such that they rate distinctively human attributes as more essential? Five studies (N = 5,111) demonstrate this “AI Effect.” Study 1 first establishes a two-part classification of human attributes used in subsequent studies, distinguishing attributes that AI is perceived as capable of (“shared” attributes such as using logic or language) from ones that humans are seen as uniquely capable of (“distinctive” attributes such as having a personality or beliefs). Study 2 demonstrates the AI Effect: compared to reading an article about the attributes of trees, reading an article describing AI advances leads participants to rate distinctive attributes as more essential to being human. Study 3 tests whether this effect is due to anthropomorphizing a non-human entity. Study 4 considers the alternative that this effect is driven solely by demand effects. Study 5 shows that merely mentioning AI advances is enough to observe the effect. This research suggests that as people learn about increasingly sophisticated AI, conceptions of human nature may shift in response, with what makes humans unique coming to be regarded as more essential.
An Adversarial Collaboration on Dirty Money
Arber Tasimi & Ori Friedman
Social Psychological and Personality Science, forthcoming
Abstract:
Across four preregistered experiments on American adults (total N = 968), and five supplemental experiments (total N = 869), we examined four accounts that might explain people’s aversion to “dirty money” (i.e., money earned in immoral ways): (a) they think it is morally tainted, (b) they care about illicit ownership, (c) they do not wish to profit from moral transgressions, and (d) accepting dirty money might imply an endorsement of the immoral means by which the money was acquired. Participants were unwilling to accept or touch dirty money, but they were relatively willing to take dirty money when it was lost and found. Together, these findings suggest that people’s aversion to dirty money stems from concerns about both moral taint and endorsing the way in which the money was acquired.
Using the Veil of Ignorance to align AI systems with principles of justice
Laura Weidinger et al.
Proceedings of the National Academy of Sciences, 2 May 2023
Abstract:
The philosopher John Rawls proposed the Veil of Ignorance (VoI) as a thought experiment to identify fair principles for governing a society. Here, we apply the VoI to an important governance domain: artificial intelligence (AI). In five incentive-compatible studies (N = 2,508), including two preregistered protocols, participants choose principles to govern an AI assistant from behind the veil: that is, without knowledge of their own relative position in the group. Compared to participants who have this information, we find a consistent preference for a principle that instructs the AI assistant to prioritize the worst-off. Neither risk attitudes nor political preferences adequately explain these choices. Instead, they appear to be driven by elevated concerns about fairness: Without prompting, participants who reason behind the VoI more frequently explain their choice in terms of fairness, compared to those in the Control condition. Moreover, we find initial support for the ability of the VoI to elicit more robust preferences: In the studies presented here, the VoI increases the likelihood that participants continue to endorse their initial choice in a subsequent round, where they know how they will be affected by the AI intervention and have a self-interested motivation to change their mind. These results emerge in both a descriptive and an immersive game. Our findings suggest that the VoI may be a suitable mechanism for selecting distributive principles to govern AI.
Law and Norms: Empirical Evidence
Tom Lane, Daniele Nosenzo & Silvia Sonderegger
American Economic Review, May 2023, Pages 1255-1293
Abstract:
A large theoretical literature argues that laws exert a causal effect on norms, but empirical evidence remains scant. Using a novel identification strategy, we provide a compelling empirical test of this proposition. We use incentivized vignette experiments to directly measure social norms relating to actions subject to legal thresholds. Our large-scale experiments (n = 7,000), conducted in the United Kingdom, the United States, and China, show that laws can causally influence social norms. Results are robust across different samples and methods of measuring norms, and are consistent with a model of social image concerns in which individuals care about the inferences others make about their underlying prosociality.
Philosophy instruction changes views on moral controversies by decreasing reliance on intuition
Kerem Oktar et al.
Cognition, forthcoming
Abstract:
What changes people's judgments on moral issues, such as the ethics of abortion or eating meat? On some views, moral judgments result from deliberation, such that reasons and reasoning should be primary drivers of moral change. On other views, moral judgments reflect intuition, with reasons offered as post-hoc rationalizations. We test predictions of these accounts by investigating whether exposure to a moral philosophy course (vs. control courses) changes moral judgments, and if so, via what mechanism(s). In line with deliberative accounts of morality, we find that exposure to moral philosophy changes moral views. In line with intuitionist accounts, we find that the mechanism of change is reduced reliance on intuition, not increased reliance on deliberation; in fact, deliberation is related to increased confidence in judgments, not change. These findings suggest a new way to reconcile deliberative and intuitionist accounts: Exposure to reasons and evidence can change moral views, but primarily by discounting intuitions.
Minds of Monsters: Scary Imbalances Between Cognition and Emotion
Ivan Hernandez, Ryan Ritter & Jesse Preston
Personality and Social Psychology Bulletin, forthcoming
Abstract:
Four studies investigate the “fear of imbalanced minds” hypothesis: threatening agents perceived to be relatively mismatched in capacities for cognition (e.g., self-control and reasoning) and emotion (e.g., sensations and emotions) will be rated as scarier and more dangerous by observers. In ratings of fictional monsters (e.g., zombies and vampires), agents seen as more imbalanced between capacities for cognition and emotion (high cognition–low emotion or low cognition–high emotion) were rated as scarier than those with equally matched levels of cognition and emotion (Studies 1 and 2). Similar effects were observed in ratings of scary animals (e.g., tigers, sharks; Studies 2 and 3) and infected humans (Study 4). Moreover, these effects were explained by diminished perceived control over, and predictability of, the target agent. These findings highlight the role of balance between cognition and emotion in appraisals of threatening agents, who are seen as scarier in part because they appear more chaotic and uncontrollable.
Reactance to uncivil disagreement? The integral effects of disagreement, incivility, and social endorsement
Shuning Lu & Hai Liang
Journal of Media Psychology, forthcoming
Abstract:
This study extends psychological reactance theory by demonstrating that online political discussions, even without explicit social influence attempts, can arouse psychological reactance through certain message features. Based on a 2 (stance: agreement vs. disagreement) × 2 (tone: civil vs. uncivil) × 2 (social endorsement: low vs. high) between-subjects online experiment in the United States (N = 418), the present study found that both disagreement and uncivil comments led to psychological reactance, both directly and indirectly via perceived threat to freedom. Unexpectedly, uncivil disagreement had smaller effects on psychological reactance than civil disagreement. In addition, although social endorsement cues did not show any independent effects on psychological reactance, they exacerbated the direct effect of uncivil disagreement on psychological reactance. Overall, our study develops important theoretical connections between the political deliberation and psychological reactance literatures. It also yields practical implications for fostering an inclusive and healthy environment for online political discussion.