Findings

You decide

Kevin Lewis

August 13, 2019

Relevance insensitivity: A new look at some old biases
Christopher Hsee, Yang Yang & Xilin Li
Organizational Behavior and Human Decision Processes, July 2019, Pages 13-26

Abstract:
People show systematic biases in judgment and decision making. We propose that many seemingly disparate biases reflect a common underlying mechanism — insensitivity to the relevance of some given information — and that manipulating the relevance of the information can eliminate or even reverse the original bias. We test our theory in four experiments, each focusing on a classic bias (the sunk cost fallacy, non-regressive prediction, anchoring bias, or base rate neglect), and show that people over-rely on a given piece of information when it is irrelevant, thus exhibiting one bias, and under-rely on the same piece of information when it is highly relevant, thus showing a reverse bias. For example, when a past cost is irrecoverable and hence irrelevant to future cost, people over-rely on it when making a decision for the future, thus exhibiting the classic sunk cost fallacy, but when the past cost is fully recoverable and hence highly relevant to future cost, people under-rely on it, thus showing the reverse of the sunk cost fallacy. We also find that when people are made sensitive to the relevance of the information, both the original biases and their reverse biases are attenuated. This research offers a new look at these “old” biases, suggesting that each individual bias is not general because it can be reversed, but collectively, these biases are general because they all reflect relevance insensitivity.
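
Base rate neglect, the last of the four biases above, has a crisp arithmetic core worth keeping in mind while reading the abstract. As a refresher (a textbook illustration, not an analysis from the paper), here is the classic Bayes' rule calculation showing how far the correct answer sits from the intuitive one when the base rate is low:

    # Classic base-rate illustration with textbook numbers (not from the paper):
    # a test with 99% sensitivity and 95% specificity for a condition
    # that only 1% of the population has.

    def posterior(prior, sensitivity, specificity):
        """P(condition | positive test) via Bayes' rule."""
        p_pos_given_cond = sensitivity
        p_pos_given_healthy = 1.0 - specificity
        p_pos = prior * p_pos_given_cond + (1.0 - prior) * p_pos_given_healthy
        return prior * p_pos_given_cond / p_pos

    print(posterior(prior=0.01, sensitivity=0.99, specificity=0.95))  # ~0.167

People who neglect the 1% base rate tend to answer something near 0.99; Bayes' rule puts the true probability at about 17%.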


The 'Future Is Now' Bias: Anchoring and (Insufficient) Adjustment When Predicting the Future from the Present
Julian Givi & Jeff Galak
Carnegie Mellon University Working Paper, June 2019

Abstract:
In the present research, we document a novel forecasting bias, which we term the “future is now” (FIN) bias. Specifically, we show that people tend to believe that the future will mirror the present, even when such a belief is unfounded. That is, people overestimate the chances that whatever is happening now will happen in the future, even when the (known) explicit probabilities of future outcomes contradict such a belief. This appears to be driven by an anchoring and (insufficient) adjustment process, whereby initial beliefs about the future are heavily influenced by the present circumstances, and then subsequent beliefs are not sufficiently adjusted once the probabilities of future outcomes are learned. Across nine studies (employing over 3,800 participants), we demonstrate the FIN bias in a variety of forecasting contexts, show that it manifests in incentive compatible settings, and provide evidence in support of an anchoring and (insufficient) adjustment mechanistic account.
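
One way to make the proposed mechanism concrete is as a weighted average in which the forecast starts at the present state and adjusts only partway toward the stated probability. The sketch below is a toy formalization, not the authors' model, and the anchor weight is an illustrative made-up number:

    # Toy anchoring-and-(insufficient)-adjustment model; the weight w is
    # hypothetical, not an estimate from Givi & Galak's data.

    def fin_forecast(present_outcome, stated_prob, w=0.4):
        """Judged probability that the current outcome recurs.

        present_outcome: 1.0 if the event is happening now, else 0.0.
        stated_prob: the known objective probability of recurrence.
        w: fraction of the anchor retained after adjustment (0 = fully rational).
        """
        return w * present_outcome + (1 - w) * stated_prob

    # A fair coin just landed heads; the rational forecast of heads is 0.5,
    # but an anchored forecaster judges it higher:
    print(fin_forecast(present_outcome=1.0, stated_prob=0.5))  # 0.7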


Superhuman AI for multiplayer poker
Noam Brown & Tuomas Sandholm
Science, forthcoming

Abstract:
In recent years there have been great strides in artificial intelligence (AI), with games often serving as challenge problems, benchmarks, and milestones for progress. Poker has served for decades as such a challenge problem. Past successes in such benchmarks, including poker, have been limited to two-player games. However, poker in particular is traditionally played with more than two players. Multiplayer games present fundamental additional issues beyond those in two-player games, and multiplayer poker is a recognized AI milestone. In this paper we present Pluribus, an AI that we show is stronger than top human professionals in six-player no-limit Texas hold’em poker, the most popular form of poker played by humans.


Task-Dependent Algorithm Aversion
Noah Castelo, Maarten Bos & Donald Lehmann
Journal of Marketing Research, forthcoming

Abstract:
Research suggests that consumers are averse to relying on algorithms to perform tasks that are typically done by humans, despite the fact that algorithms often perform better. The authors explore when and why this is true in a wide variety of domains. They find that algorithms are trusted and relied on less for tasks that seem subjective (vs. objective) in nature. However, they show that perceived task objectivity is malleable and that increasing a task’s perceived objectivity increases trust in and use of algorithms for that task. Consumers mistakenly believe that algorithms lack the abilities required to perform subjective tasks. Increasing algorithms’ perceived affective human-likeness is therefore effective at increasing the use of algorithms for subjective tasks. These findings are supported by the results of four online lab studies with over 1,400 participants and two online field studies with over 56,000 participants. The results provide insights into when and why consumers are likely to use algorithms and how marketers can increase their use when they outperform humans.


The Voice of Cognition: Active and Passive Voice Influence Distance and Construal
Eugene Chan & Sam Maglio
Personality and Social Psychology Bulletin, forthcoming

Abstract:
English passages can be in either the active or passive voice. Relative to the active voice, the passive voice provides a sense of objectivity regarding the events being described. This leads to our hypothesis that passages in the passive voice can increase readers’ psychological distance from the content of the passage, triggering an abstract construal. In five studies with American, Australian, British, and Canadian participants, we find evidence for our propositions, with both paragraphs and sentences in the passive voice increasing readers’ felt temporal, hypothetical, and spatial distance from activities described in the text, which increases their abstraction in a manner that generalizes to unrelated tasks. As such, prose colors how people process information, with the active and passive voice influencing the reader in ways beyond what is stated in the written word.


Debiasing Training Improves Decision Making in the Field
Anne-Laure Sellier, Irene Scopelliti & Carey Morewedge
Psychological Science, forthcoming

Abstract:
The primary objection to debiasing-training interventions is a lack of evidence that they improve decision making in field settings, where reminders of bias are absent. We gave graduate students in three professional programs (N = 290) a one-shot training intervention that reduces confirmation bias in laboratory experiments. Natural variance in the training schedule assigned participants to receive training before or after solving an unannounced business case modeled on the decision to launch the Space Shuttle Challenger. We used case solutions to surreptitiously measure participants’ susceptibility to confirmation bias. Trained participants were 29% less likely to choose the inferior hypothesis-confirming solution than untrained participants. Analysis of case write-ups suggests that a reduction in confirmatory hypothesis testing accounts for their improved decision making in the case. The results provide promising evidence that debiasing-training effects transfer to field settings and can improve decision making in professional and private life.


Expert Decisions
Matthew Dobra & Christis Tombazos
Decision Sciences, forthcoming

Abstract:
National Research Assessments are used by countries to improve their research quality. They rely on experts to evaluate research, and use their assessments to reward publications. It is widely assumed that experts bring to the process the kind of judgment and knowledge that transcends their observable characteristics. We show that, for the most part, they do not. Using a unique uncensored dataset of expert deliberations, we find that 90% of expert decisions can be inferred from publicly available information, notably experts’ CVs. We also find that about half of their decisions are driven by objective quality standards and the other half by cognitive biases.


Harnessing the Wisdom of Crowds
Zhi Da & Xing Huang
Management Science, forthcoming

Abstract:
When will a large group provide an accurate answer to a question involving quantity estimation? We empirically examine this question on a crowd-based corporate earnings forecast platform (Estimize.com). By tracking user activities, we monitor the amount of public information a user views before making an earnings forecast. We find that the more public information users view, the less weight they put on their own private information. Although this improves the accuracy of individual forecasts, it reduces the accuracy of the group consensus forecast because useful private information is prevented from entering the consensus. To address endogeneity concerns related to a user’s information acquisition choice, we collaborate with Estimize.com to run experiments that restrict the information available on randomly selected stocks and to randomly selected users. The experiments confirm that “independent” forecasts result in a more accurate consensus. These findings convinced Estimize.com to switch to a “blind” platform from November 2015 onward. The findings suggest that the wisdom of crowds can be better harnessed by encouraging independent voices from among group members and that more public information disclosure may not always improve group decision making.
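
The mechanism at the heart of the paper, individual accuracy improving while consensus accuracy degrades, can be reproduced in a few lines of simulation. The noise levels and herding weight below are assumptions for illustration, not parameters estimated from Estimize data:

    import numpy as np

    rng = np.random.default_rng(0)
    truth = 1.00                       # true earnings per share
    n_users, n_trials = 50, 10_000

    errors = {"independent": [], "herding": []}
    for _ in range(n_trials):
        public_signal = truth + rng.normal(0, 0.10)             # shared noise
        private_signals = truth + rng.normal(0, 0.20, n_users)  # independent noise

        independent = private_signals                           # ignore the crowd
        herding = 0.8 * public_signal + 0.2 * private_signals   # over-weight public info

        errors["independent"].append(abs(independent.mean() - truth))
        errors["herding"].append(abs(herding.mean() - truth))

    for name, errs in errors.items():
        print(f"{name:>11}: mean |consensus error| = {np.mean(errs):.4f}")

Each herding forecast is individually closer to the truth (its variance is lower), but the shared error in the public signal does not average out across users, so the independent consensus beats the herding consensus by a wide margin.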


Unsupervised word embeddings capture latent knowledge from materials science literature
Vahe Tshitoyan et al.
Nature, 4 July 2019, Pages 95–98

Abstract:
The overwhelming majority of scientific knowledge is published as text, which is difficult to analyse by either traditional statistical analysis or modern machine learning methods. By contrast, the main source of machine-interpretable data for the materials research community has come from structured property databases, which encompass only a small fraction of the knowledge present in the research literature. Beyond property values, publications contain valuable knowledge regarding the connections and relationships between data items as interpreted by the authors. To improve the identification and use of this knowledge, several studies have focused on the retrieval of information from scientific literature using supervised natural language processing, which requires large hand-labelled datasets for training. Here we show that materials science knowledge present in the published literature can be efficiently encoded as information-dense word embeddings (vector representations of words) without human labelling or supervision. Without any explicit insertion of chemical knowledge, these embeddings capture complex materials science concepts such as the underlying structure of the periodic table and structure–property relationships in materials. Furthermore, we demonstrate that an unsupervised method can recommend materials for functional applications several years before their discovery. This suggests that latent knowledge regarding future discoveries is to a large extent embedded in past publications. Our findings highlight the possibility of extracting knowledge and relationships from the massive body of scientific literature in a collective manner, and point towards a generalized approach to the mining of scientific literature.
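
For readers curious what “unsupervised word embeddings” means operationally: the method is a word2vec-style skip-gram model that learns a vector for each token from its co-occurrence contexts. Below is a minimal sketch with gensim; the toy corpus is invented for illustration, and the real pipeline trains on millions of abstracts with domain-aware tokenization:

    from gensim.models import Word2Vec  # assumes gensim >= 4.0

    # Tiny stand-in for a corpus of tokenized materials science abstracts.
    corpus = [
        ["Bi2Te3", "is", "a", "well", "known", "thermoelectric", "material"],
        ["PbTe", "shows", "a", "high", "thermoelectric", "figure", "of", "merit"],
        ["SnSe", "exhibits", "low", "thermal", "conductivity"],
        ["low", "thermal", "conductivity", "favors", "thermoelectric", "performance"],
    ]

    # sg=1 selects the skip-gram objective; epochs is inflated for the tiny corpus.
    model = Word2Vec(corpus, vector_size=50, window=5, min_count=1, sg=1, epochs=200)

    # Materials whose embeddings sit closest to "thermoelectric" become
    # candidate functional materials -- the paper's recommendation signal.
    print(model.wv.most_similar("thermoelectric", topn=3))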


Patients with Lesions to Left Prefrontal Cortex (BA 9 and BA 10) Have Less Entrenched Beliefs and Are More Skeptical Reasoners
Vinod Goel et al.
Journal of Cognitive Neuroscience, forthcoming

Abstract:
The effect of prior beliefs on reasoning and decision-making is a robust, poorly understood phenomenon, exhibiting considerable individual variation. Neuroimaging studies widely show the involvement of the left prefrontal cortex (pFC) in reasoning involving beliefs. However, little patient data exist to speak to the necessity and role of the left pFC in belief-based inference. To address this shortcoming, we tested 102 patients with unilateral focal penetrating traumatic brain injuries and 49 matched controls. Participants provided plausibility ratings (plausible/implausible) to simple inductive arguments and (separately) strength of believability ratings of the conclusion to those arguments. A voxel-based lesion symptom mapping analysis identified 10 patients, all with lesions to the left pFC (BA 9 and BA 10), as rating significantly fewer arguments with highly believable conclusions as “plausible,” compared with all other patients. Subsequent analyses, incorporating patients with lesions to the right-hemisphere homolog of these regions (n = 12) and normal controls (n = 24), revealed that patients with lesions to the left pFC rated fewer arguments with highly believable conclusions as plausible than either of these groups, and there was no difference in the behavioral scores of the right pFC patients and normal controls. Further analysis, utilizing the belief ratings as the dependent measure, revealed a Group × Belief Rating interaction, with left pFC patients having less intense beliefs about the conclusions of moderately believable and highly believable arguments. We interpreted these results to indicate that lesions to left pFC (BA 9, BA 10) increase incredulity and make these patients more skeptical reasoners. The former can partially, but not fully, explain the latter. The other relevant factor may be that unilateral left pFC lesions disrupt hemispheric equilibrium and allow for an increased inhibitory role of the right pFC. We speculate that individual differences in belief bias in reasoning in the normal population may be a function of individual differences in the left and right pFC interactional dynamics.

