Best Guess
Seeing the subjective as objective: People perceive the taste of those they disagree with as biased and wrong
Nathan Cheek, Shane Blackman & Emily Pronin
Journal of Behavioral Decision Making, forthcoming
Abstract:
People think that they see things as they are in “objective reality,” and they impute bias and other negative qualities to those who disagree. Evidence for these tendencies initially emerged in the domain of politics, where people tend to assume that there are objectively correct beliefs and positions. The present research shows that people are confident in the correctness of their views, and they negatively judge those who disagree, even in the seemingly “subjective” domain of art. Across seven experiments, participants evaluated paintings and encountered others who agreed or disagreed with their evaluations. Participants saw others' evaluations as less objective when they clashed with their own, and as more influenced by biasing factors like conformity or financial incentives. These aesthetic preferences felt as objective as political preferences. Reminding people of their belief that artistic preferences are “matters of opinion” reduced this thinking, but did not eliminate it. These findings suggest that people's convictions of their own objectivity are so powerful as to extend to domains that are typically regarded as “subjective.”
Anger increases susceptibility to misinformation
Michael Greenstein & Nancy Franklin
Experimental Psychology, September 2020, Pages 202–209
Abstract:
The effect of anger on acceptance of false details was examined using a three-phase misinformation paradigm. Participants viewed an event, were presented with schema-consistent and schema-irrelevant misinformation about it, and were given a surprise source monitoring test to examine the acceptance of the suggested material. Between each phase of the experiment, they performed a task that either induced anger or maintained a neutral mood. Participants showed greater susceptibility to schema-consistent than schema-irrelevant misinformation. Anger did not affect either recognition or source accuracy for true details about the initial event, but suggestibility for false details increased with anger. In spite of this increase in source errors (i.e., misinformation acceptance), both confidence in the accuracy of source attributions and decision speed for incorrect judgments also increased with anger. Implications are discussed with respect to both the general effects of anger and real-world applications such as eyewitness memory.
The Perfection Premium
Mathew Isaac & Katie Spangenberg
Social Psychological and Personality Science, forthcoming
Abstract:
This research documents a perfection premium in evaluative judgments wherein individuals disproportionately reward perfection on an attribute compared to near-perfect values on the same attribute. For example, individuals consider a student who earns a perfect score of 36 on the American College Test to be more intelligent than a student who earns a near-perfect 35, and this difference in perceived intelligence is significantly greater than the difference between students whose scores are 35 versus 34. The authors also show that the perfection premium occurs because people spontaneously place perfect items into a separate mental category from other items. As a result of this categorization process, the perceived evaluative distance between perfect and near-perfect items is exaggerated. Four experiments provide evidence in favor of the perfection premium and support for the proposed underlying mechanism in both social cognition and decision-making contexts.
People Reject Algorithms in Uncertain Decision Domains Because They Have Diminishing Sensitivity to Forecasting Error
Berkeley Dietvorst & Soaham Bharti
Psychological Science, forthcoming
Abstract:
Will people use self-driving cars, virtual doctors, and other algorithmic decision-makers if they outperform humans? The answer depends on the uncertainty inherent in the decision domain. We propose that people have diminishing sensitivity to forecasting error and that this preference results in people favoring riskier (and often worse-performing) decision-making methods, such as human judgment, in inherently uncertain domains. In nine studies (N = 4,820), we found that (a) people have diminishing sensitivity to each marginal unit of error that a forecast produces, (b) people are less likely to use the best possible algorithm in decision domains that are more unpredictable, (c) people choose between decision-making methods on the basis of the perceived likelihood of those methods producing a near-perfect answer, and (d) people prefer methods that exhibit higher variance in performance (all else being equal). To the extent that investing, medical decision-making, and other domains are inherently uncertain, people may be unwilling to use even the best possible algorithm in those domains.
Artificial intelligence versus Maya Angelou: Experimental evidence that people cannot differentiate AI-generated from human-written poetry
Nils Köbis & Luca Mossink
Computers in Human Behavior, forthcoming
Abstract:
The release of openly available, robust natural language generation (NLG) algorithms has spurred much public attention and debate. One reason lies in the algorithms' purported ability to generate humanlike text across various domains. Empirical evidence using incentivized tasks to assess whether people (a) can distinguish and (b) prefer algorithm-generated versus human-written text is lacking. We conducted two experiments assessing behavioral reactions to the state-of-the-art natural language generation algorithm GPT-2 (Ntotal = 830). Using the identical starting lines of human poems, GPT-2 produced samples of poems. From these samples, either a random poem was chosen (Human-out-of-the-loop) or the best one was selected (Human-in-the-loop) and in turn matched with a human-written poem. In a new incentivized version of the Turing Test, participants failed to reliably detect the algorithmically generated poems in the Human-in-the-loop treatment, yet succeeded in the Human-out-of-the-loop treatment. Further, people revealed a slight aversion to algorithm-generated poetry, independent of whether participants were informed about the algorithmic origin of the poem (Transparency) or not (Opacity). We discuss what these results convey about the ability of NLG algorithms to produce humanlike text and propose methodologies to study such learning algorithms in human-agent experimental settings.
Enhancing the Wisdom of the Crowd With Cognitive-Process Diversity: The Benefits of Aggregating Intuitive and Analytical Judgments
Steffen Keck & Wenjie Tang
Psychological Science, forthcoming
Abstract:
Drawing on dual-process theory, we suggest that the benefits that arise from combining several quantitative individual judgments will be heightened when these judgments are based on different cognitive processes. We tested this hypothesis in three experimental studies in which participants provided estimates for the dates of different historical events (Study 1, N = 152), made probabilistic forecasts for the outcomes of soccer games (Study 2, N = 98), and estimated the weight of individuals on the basis of a photograph (Study 3, N = 3,695). For each of these tasks, participants were prompted to make judgments relying on an analytical process, on their intuition, or (in a control condition) on no specific instructions. Across all three studies, our results show that an aggregation of intuitive and analytical judgments provides more accurate estimates than any other aggregation procedure and that this advantage increases with the number of aggregated judgments.
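The aggregation procedure in this abstract is simple averaging. A minimal sketch (with invented numbers, not data from the study) illustrates why cognitive-process diversity can help: when intuitive and analytical judgments tend to err in opposite directions, their errors partly cancel in a mixed pool.

```python
from statistics import mean

# Hypothetical date estimates for a historical event; all values are invented.
truth = 1517
intuitive = [1475, 1502, 1490, 1521]    # pool biased low
analytical = [1530, 1544, 1528, 1546]   # pool biased high

def error(estimates):
    """Absolute error of the simple average of a pool of estimates."""
    return abs(mean(estimates) - truth)

# Compare same-process pools with a mixed pool of equal size.
mixed = intuitive[:2] + analytical[:2]
print(error(intuitive), error(analytical), error(mixed))  # 20.0 20.0 4.25
```

The mixed pool beats either homogeneous pool of the same size only because the two processes' biases point in opposite directions here; the numbers are chosen to make that cancellation visible, not to reproduce the studies' effect sizes.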
Taking Social Comparison to the Extremes: The Huge-Fish-Tiny-Pond Effect in Self-Evaluations
Ethan Zell & Tara Lesick
Social Psychological and Personality Science, forthcoming
Abstract:
People evaluate themselves more favorably when they are a big fish in a little pond than a little fish in a big pond. The present research demonstrates that this tendency is exacerbated in extreme social comparison conditions, explains why, and highlights practical implications. Study 1 participants were told that they were a big (little) fish in a little (big) pond or a huge (tiny) fish in a tiny (huge) pond. Results provided evidence for a huge-fish-tiny-pond effect and showed that it is significantly larger than the big-fish-little-pond effect. Study 2 demonstrated that the huge-fish-tiny-pond effect reflects a neglect of group rank information, and Study 3 suggested that it inflates self-views. These experiments are the first to document the huge-fish-tiny-pond effect, which was highly robust (overall d = 2.05), and suggest that extreme social comparisons magnify self-evaluation tendencies.
The Psychology of Second Guesses: Implications for the Wisdom of the Inner Crowd
Celia Gaertig & Joseph Simmons
Management Science, forthcoming
Abstract:
Prior research suggests that averaging two guesses from the same person can improve quantitative judgments, a phenomenon known as the “wisdom of the inner crowd.” In this article, we find that this effect hinges on whether people explicitly decide in which direction their first guess had erred before making their second guess. In nine studies (N = 8,465), we found that asking people to explicitly indicate whether their first guess was too high or too low prior to making their second guess made people more likely to provide a second guess that was more extreme (in the same direction) than their first guess. As a consequence, the introduction of that “Too High/Too Low” question reduced (and sometimes eliminated or reversed) the wisdom-of-the-inner-crowd effect for (the majority of) questions with non-extreme correct answers and increased the wisdom-of-the-inner-crowd effect for questions with extreme correct answers. Our findings suggest that the wisdom-of-the-inner-crowd effect is not inevitable, but rather that it depends on the processes people use to generate their second guesses.
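The "inner crowd" effect rests on averaging two guesses from the same person. A toy sketch with invented numbers shows the arithmetic, and why a second guess pushed further in the same direction as the first (as the "Too High/Too Low" prompt tends to produce) can erase the benefit for non-extreme answers.

```python
from statistics import mean

truth = 95  # illustrative quantity; all numbers here are invented

# Independent second guess: errors in opposite directions partly cancel.
first, second = 120, 80
inner_crowd = mean([first, second])                   # 100.0
print(abs(first - truth), abs(inner_crowd - truth))   # 25 5.0

# After a "Too High/Too Low" prompt, second guesses tend to be more
# extreme in the same direction as the first, so averaging helps less.
second_extreme = 135                                  # pushed past the first guess
anchored = mean([first, second_extreme])              # 127.5
print(abs(anchored - truth))                          # 32.5
```

In the first case the average beats the first guess alone; in the second, it is worse. The example only illustrates the mechanism the abstract describes, not the studies' actual data.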
When Not Choosing Leads to Not Liking: Choice-Induced Preference in Infancy
Alex Silver et al.
Psychological Science, forthcoming
Abstract:
The question of how people’s preferences are shaped by their choices has generated decades of research. In a classic example, work on cognitive dissonance has found that observers who must choose between two equally attractive options subsequently avoid the unchosen option, suggesting that not choosing the item led them to like it less. However, almost all of the research on such choice-induced preference focuses on adults, leaving open the question of how much experience is necessary for its emergence. Here, we examined the developmental roots of this phenomenon in preverbal infants (N = 189). In a series of seven experiments using a free-choice paradigm, we found that infants experienced choice-induced preference change similar to adults’. Infants’ choice patterns reflected genuine preference change and not attraction to novelty or inherent attitudes toward the options. Hence, choice shapes preferences—even without extensive experience making decisions and without a well-developed self-concept.
Knowledge before Belief
Jonathan Phillips et al.
Behavioral and Brain Sciences, September 2020, Pages 1–37
Abstract:
Research on the capacity to understand others’ minds has tended to focus on representations of beliefs, which are widely taken to be among the most central and basic theory of mind representations. Representations of knowledge, by contrast, have received comparatively little attention and have often been understood as depending on prior representations of belief. After all, how could one represent someone as knowing something if one doesn't even represent them as believing it? Drawing on a wide range of methods across cognitive science, we ask whether belief or knowledge is the more basic kind of representation. The evidence indicates that non-human primates attribute knowledge but not belief, that knowledge representations arise earlier in human development than belief representations, that the capacity to represent knowledge may remain intact in patient populations even when belief representation is disrupted, that knowledge (but not belief) attributions are likely automatic, and that explicit knowledge attributions are made more quickly than equivalent belief attributions. Critically, the theory of mind representations uncovered by these various methods exhibit a set of signature features clearly indicative of knowledge: they are not modality-specific, they are factive, they are not just true belief, and they allow for representations of egocentric ignorance. We argue that these signature features elucidate the primary function of knowledge representation: facilitating learning from others about the external world. This suggests a new way of understanding theory of mind — one that is focused on understanding others’ minds in relation to the actual world, rather than independent of it.