Biased
Overestimating the valuations and preferences of others
Minah Jung, Alice Moon & Leif Nelson
Journal of Experimental Psychology: General, forthcoming
Abstract:
People often make judgments about their own and others' valuations and preferences. Across 12 studies (N = 17,594), we find a robust bias in these judgments such that people overestimate the valuations and preferences of others. This overestimation arises because, when making predictions about others, people rely on their intuitive core representation of the experience (e.g., is the experience generally positive?) in lieu of a more complex representation that might also include countervailing aspects (e.g., is any of the experience negative?). We first demonstrate that the overestimation bias is pervasive for a wide range of positive (Studies 1-5) and negative experiences (Study 6). Furthermore, the bias is not merely an artifact of how preferences are measured (Study 7). Consistent with judgments based on core representations, the bias is significantly reduced when the core representation is uniformly positive (Studies 8A-8B). Such judgments lead to a paradox in how people see others trade off between valuation and utility (Studies 9A-9B). Specifically, relative to themselves, people believe that an identically paying other will get more enjoyment from the same experience, but paradoxically, that an identically enjoying other will pay more for the same experience. Finally, consistent with a core representation explanation, explicitly prompting people to consider the entire distribution of others' preferences significantly reduced or eliminated the bias (Study 10). These findings suggest that social judgments of others' preferences are not only largely biased but also ignore how others make trade-offs between evaluative metrics.
Do beliefs yield to evidence? Examining belief perseverance vs. change in response to congruent empirical findings
Stephanie Anglin
Journal of Experimental Social Psychology, May 2019, Pages 176-199
Abstract:
Research on belief perseverance often suggests that people maintain or even strengthen their beliefs in response to disconfirming evidence. However, many studies demonstrating belief perseverance have presented participants with mixed evidence. The present research investigated whether people maintain their beliefs in response to a single pattern of findings. Across four studies, participants consistently shifted their beliefs in response to the evidence, even when it challenged their views on religion (Studies 1-3), politics (Study 3), gun control (Study 3), and capital punishment (Studies 3-4). Participants were also receptive to mixed evidence, shifting their beliefs in response to the first study presented and shifting back in response to the second (and overall, depolarizing in response to the evidence). Participants not only updated their beliefs about the research question but also sometimes their position on the issue. In some cases, bias emerged in participants' evaluations of evidence, but not in others. Evaluations of evidence quality generally predicted belief change in response to the evidence and did not largely explain the degree of belief maintenance participants exhibited. These findings suggest that people may be receptive to counter-attitudinal evidence when the findings are clear, and highlight the importance of further examining the conditions under which biased assimilation and belief perseverance (vs. change) occur.
Ownership, Learning, and Beliefs
Samuel Hartzmark, Samuel Hirshman & Alex Imas
University of Chicago Working Paper, November 2019
Abstract:
We examine how owning a good affects learning and beliefs about its quality. We show that people have more extreme reactions to information about a good that they own compared to the same information about a non-owned good: ownership causes more optimistic beliefs after receiving a positive signal and more pessimistic beliefs after receiving a negative signal. This effect on beliefs impacts the valuation gap between the minimum owners are willing to accept to part with the good and the maximum non-owners are willing to pay to attain it, i.e. the endowment effect. We show that the endowment effect increases in response to positive information and disappears with negative information. Comparing learning to normative benchmarks reveals that people overreact to signals about goods that they own, but that learning is close to Bayesian for non-owned goods. In exploring the mechanism, we find that ownership increases attention to recent signals about owned goods, exacerbating over-extrapolation. We demonstrate a similar relationship between ownership and over-extrapolation in survey data about stock market expectations. Our findings have implications for any setting with trade and scope for learning, and provide a microfoundation for models of disagreement that generate volume in asset markets.
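The normative benchmark the authors compare against is Bayesian updating on quality signals. One simple way to sketch the reported pattern (near-Bayesian learning for non-owned goods, overreaction for owned goods) is to overweight the likelihood of a signal in the update. This is an illustrative model, not the paper's specification; all probabilities and the overweighting parameter are hypothetical:

```python
def posterior_high_quality(prior, p_pos_given_high, p_pos_given_low, weight=1.0):
    """Posterior that a good is high quality after one positive signal.

    weight = 1.0 gives the standard Bayesian update (the normative benchmark);
    weight > 1.0 overweights the signal, a stylized form of over-extrapolation.
    """
    like_high = p_pos_given_high ** weight
    like_low = p_pos_given_low ** weight
    numerator = prior * like_high
    return numerator / (numerator + (1 - prior) * like_low)

prior = 0.5  # hypothetical prior belief that the good is high quality
# Hypothetical signal likelihoods: P(positive | high) = 0.8, P(positive | low) = 0.2
bayesian = posterior_high_quality(prior, 0.8, 0.2)            # benchmark update
overreact = posterior_high_quality(prior, 0.8, 0.2, weight=2)  # stylized "owner" update

print(bayesian)   # 0.8
print(overreact)  # ~0.941, i.e., more extreme than the Bayesian benchmark
```

The same mechanism run on a negative signal produces an overly pessimistic posterior, mirroring the paper's finding that ownership amplifies reactions in both directions.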
Belief digitization: Do we treat uncertainty as probabilities or as bits?
Samuel Johnson, Thomas Merchant & Frank Keil
Journal of Experimental Psychology: General, forthcoming
Abstract:
Humans are often characterized as Bayesian reasoners. Here, we question the core Bayesian assumption that probabilities reflect degrees of belief. Across eight studies, we find that people instead reason in a digital manner, assuming that uncertain information is either true or false when using that information to make further inferences. Participants learned about two hypotheses, both consistent with some information but one more plausible than the other. Although people explicitly acknowledged that the less-plausible hypothesis had positive probability, they ignored this hypothesis when using the hypotheses to make predictions. This was true across several ways of manipulating plausibility (simplicity, evidence fit, explicit probabilities) and a diverse array of task variations. Taken together, the evidence suggests that digitization occurs in prediction because it circumvents processing bottlenecks surrounding people's ability to simulate outcomes in hypothetical worlds. These findings have implications for philosophy of science and for the organization of the mind.
Variability in the analysis of a single neuroimaging dataset by many teams
Rotem Botvinik-Nezer et al.
Stanford Working Paper, November 2019
Abstract:
Data analysis workflows in many scientific domains have become increasingly complex and flexible. To assess the impact of this flexibility on functional magnetic resonance imaging (fMRI) results, the same dataset was independently analyzed by 70 teams, testing nine ex-ante hypotheses. The flexibility of analytic approaches is exemplified by the fact that no two teams chose identical workflows to analyze the data. This flexibility resulted in sizeable variation in hypothesis test results, even for teams whose statistical maps were highly correlated at intermediate stages of their analysis pipeline. Variation in reported results was related to several aspects of analysis methodology. Importantly, meta-analytic approaches that aggregated information across teams yielded significant consensus in activated regions. Furthermore, prediction markets of researchers in the field revealed an overestimation of the likelihood of significant findings, even by researchers with direct knowledge of the dataset. Our findings show that analytic flexibility can have substantial effects on scientific conclusions, and demonstrate factors related to variability in fMRI results. The results emphasize the importance of validating and sharing complex analysis workflows, and demonstrate the need for multiple analyses of the same data. Potential approaches to mitigate issues related to analytical variability are discussed.