What is Right
People judge others to have more voluntary control over beliefs than they themselves do
Corey Cusimano & Geoffrey Goodwin
Journal of Personality and Social Psychology, forthcoming
Abstract:
People think other individuals have considerable control over what they believe. However, no work to date has investigated how people judge their own belief control, nor whether such judgments diverge from their judgments of others. We addressed this gap in 7 studies and found that people judge others to be more able to voluntarily change what they believe than they themselves are. This occurs when people judge others who disagree with them (Study 1) as well as others who agree with them (Studies 2–5, 7), and it occurs when people judge strangers (Studies 1, 2, 4, and 5) as well as close others (Studies 3 and 7). It appears not to be explained by impression management or self-enhancement motives (Study 3). Rather, there is a discrepancy between the evidentiary constraints on belief change that people access via introspection, and their default assumptions about the ease of voluntary belief revision. That is, people tend spontaneously to think about the evidence that supports their beliefs, which leads them to judge their beliefs as outside their control. But they apparently fail to generalize this sense of constraint to others, and similarly fail to incorporate it into their generic model of beliefs (Studies 4–7). We discuss the implications of our findings for theories of ideology-based conflict, actor–observer biases, naïve realism, and ongoing debates regarding people’s actual capacity to voluntarily change what they believe.
Truth, Lies, and Gossip
Kim Peters & Miguel Fonseca
Psychological Science, June 2020, Pages 702-714
Abstract:
It is widely assumed that people will share inaccurate gossip for their own selfish purposes. This assumption, if true, presents a challenge to the growing body of work positing that gossip is a ready source of accurate reputational information and therefore is welfare improving. We tested this inaccuracy assumption by examining the frequency and form of spontaneous lies shared between gossiping members of networks playing a series of one-shot trust games (N = 320). We manipulated whether gossipers were or were not competing with each other. We showed that lies make up a sizeable minority of messages and are twice as frequent under gossiper competition. However, this had no discernible effect on trust levels. We attribute this to the findings that (a) gossip targets are insensitive to lies and (b) some lies are welfare enhancing. These findings suggest that lies need not prevent — and may help — gossip to serve reputational functions.
The power of the majority: Social conformity in sexual harassment punishment selection
Leilani Goodmon et al.
Journal of Applied Social Psychology, forthcoming
Abstract:
In his famous social conformity experiments in the 1950s, Asch found 75% of participants conformed to confederates’ incorrect answers at least once, with an overall conformity rate of 32%, revealing that humans are highly likely to conform to group behavior even when that behavior is clearly wrong. The purpose of this study was to determine whether the social conformity effect generalizes to scenarios involving sexual harassment punishment selections in the workplace. Participants read various workplace sexual harassment scenarios and then watched four confederates choose one of three punishments (verbal warning, 1‐week suspension, or termination). The confederates stated their punishments aloud, choices that were either appropriate (i.e., similar to normative data) or inappropriate (i.e., deviating from normative data by being either too harsh or too lenient). Participants then provided their punishment selections aloud and confidentially rated their decision confidence. We found an overall conformity rate of 46%, with 82.67% of participants conforming at least once to harsh or lenient punishment selections. Participants who conformed to incorrect punishment selections exhibited lower levels of decision confidence, indicating that conformity may have been due more to social normative influence. The current results imply that the social responses of others (i.e., coworkers, supervisors, or HR) can affect responses to sexual harassment, and that social influence may be a significant contributing factor in mislabeling, misreporting, or inappropriately punishing sexual harassment in some organizations.
Harbingers of foul play: A field study of gain/loss frames and regulatory fit in the NFL
Evan Polman, Lyn Van Swol & Paul Hoban
Judgment and Decision Making, May 2020, Pages 353–370
Abstract:
Do people cheat more when they have something to gain, or when they have something to lose? The answer to this question isn’t straightforward, as research is mixed when it comes to understanding how unethical people will be when they might acquire something good versus avoid something bad. To wit, research has found that people cheat more in a loss (vs. gain) frame, yet research on regulatory focus has found that people cheat more in a promotion focus (where the focus is on acquiring gains) than in a prevention focus (where the focus is on avoiding losses). Through a large-scale field study containing 332,239 observations including 27,350 transgressions, we address the contradictory results of gain/loss frames and regulatory focus on committing unethical behavior in a context that contains a high risk of detecting unethical behavior (NFL football games). Our results replicated the separate effects of more cheating in a loss frame, and more cheating in a promotion focus. Furthermore, our data revealed a heretofore undocumented crossover interaction, in accordance with regulatory fit, which could disentangle past results: Specifically, we found promotion focus is associated with more cheating in a loss (vs. gain) frame, whereas prevention focus is associated with more cheating in a gain (vs. loss) frame. In gridiron football, this translates to offensive players fouling more when their team is losing (vs. winning) and defensive players fouling more when their team is winning (vs. losing).
The moral psychology of continuation decisions: A recipe for moral disengagement
Gary Sherman
Organizational Behavior and Human Decision Processes, May 2020, Pages 36-48
Abstract:
To what extent are decision-makers willing to impose costs on others as they pursue their goals? The current investigation tested the hypothesis that the answer depends on whether the decision is an initial decision (whether to start) or a reassessment (whether to continue or switch from a chosen path). Decision-makers should be more likely to experience moral disengagement — to perceive lower moral standards — when contemplating continuing an option relative to choosing that option initially. They will therefore be more willing to impose costs on others for the former than for the latter. In five experiments, which tested hypothetical decisions and real decisions with material stakes, participants were more likely to impose costs on others — and believed ignoring those costs would be more morally acceptable — if they were a side effect of continuation. These results indicate that the moral psychology of continuation is marked by moral disengagement.
“I’m just being honest.” When and why honesty enables help versus harm
Emma Levine & David Munguia Gomez
Journal of Personality and Social Psychology, forthcoming
Abstract:
Although honesty is typically conceptualized as a virtue, it often conflicts with other equally important moral values, such as avoiding interpersonal harm. In the present research, we explore when and why honesty enables helpful versus harmful behavior. Across 5 incentive-compatible experiments in the context of advice-giving and economic games, we document four central results. First, honesty enables selfish harm: people are more likely to engage in and justify selfish behavior when selfishness is associated with honesty than when it is not. Second, people are selectively honest: people are more likely to be honest when honesty is associated with selfishness than when honesty is associated with altruism. Third, these effects are more consistent with genuine, rather than motivated, preferences for honesty. Fourth, even when individuals have no selfish incentive to be honest, honesty can lead to interpersonal harm because people avoid information about how their honest behavior affects others. This research unearths new insights on the mechanisms underlying moral choice, and consequently, the contexts in which moral principles are a force of good versus a force of evil.
You Won’t Remember This: How Memory Efficacy Influences Virtuous Behavior
Maferima Touré-Tillery & Maryam Kouchaki
Journal of Consumer Research, forthcoming
Abstract:
The present article explores the effect of memory efficacy on consumer behavior — particularly on consumers’ likelihood to behave “virtuously,” that is, in line with standards such as ideals, values, morals, and social expectations. Memory efficacy refers to people’s general belief that they will be able to remember in the future the things they are experiencing or doing in the present. We hypothesize and find across five studies that when consumers have low memory efficacy (vs. control), they are less likely to behave virtuously because their actions seem less consequential for their self-concept (i.e., less self-diagnostic). Using two different experimental manipulations of memory efficacy, we examine its effect on virtuous behavior in the context of prosocial choices — i.e., charitable giving (study 1A) and volunteering (studies 1B and 2). We then explore our proposed underlying mechanism (perceptions of self-diagnosticity) using causal-chain mediation (studies 3A and 3B) and moderation approaches (studies 4 and 5) in the context of food choices. We conclude with a discussion of the practical and theoretical implications of our findings.