Clued in
The Curse of Expertise: When More Knowledge Leads to Miscalibrated Explanatory Insight
Matthew Fisher & Frank Keil
Cognitive Science, forthcoming
Abstract:
Does expertise within a domain of knowledge predict accurate self-assessment of the ability to explain topics in that domain? We find that expertise increases confidence in the ability to explain a wide variety of phenomena. However, this confidence is unwarranted; after actually offering full explanations, people are surprised by the limitations in their understanding. For passive expertise (familiar topics), miscalibration is moderated by education; those with more education are accurate in their self-assessments (Experiment 1). But when those with more education consider topics related to their area of concentrated study (college major), they also display an illusion of understanding (Experiment 2). This “curse of expertise” is explained by a failure to recognize the amount of detailed information that has been forgotten (Experiment 3). While expertise can sometimes lead to accurate self-knowledge, it can also create illusions of competence.
---------------------
Fast thinking: Implications for democratic politics
Gerry Stoker, Colin Hay & Matthew Barr
European Journal of Political Research, forthcoming
Abstract:
A major programme of research on cognition has been built around the idea that human beings are frequently intuitive thinkers and that human intuition is imperfect. The modern marketing of politics and the time-poor position of many citizens suggest that ‘fast’, intuitive thinking is ubiquitous in many contemporary democracies. This article explores the consequences that such fast thinking might have for the democratic practice of contemporary politics. Using focus groups with a range of demographic profiles, fast thinking about how politics works is stimulated and then followed by a more reflective and collectively deliberative form of slow thinking among the same participants. A consistent pattern emerges across all groups: in fast thinking mode, participants are noticeably more negative and dismissive about the workings of politics than when in slow thinking mode. A fast thinking focus among citizens may be good enough to underwrite mainstream political exchange, but at the cost of supporting a general negativity about politics and the way it works. Yet breaking the cycle of fast thinking – as advocated by deliberation theorists – might not be straightforward, given the grip of fast thinking. The fast/slow thinking distinction, if carefully used, offers valuable new insight into political science.
---------------------
Nicolas Kervyn et al.
Journal of Experimental Social Psychology, January 2016, Pages 17–23
Abstract:
Three experiments show that describing a person in mixed rather than consistently positive (or negative) terms on warmth and competence — the two fundamental dimensions of social perception — results in more extreme impressions. Given sparse information on one dimension, amplified (i.e., more extreme) judgments arise when the other dimension is clearly opposite in valence. In Experiment 1, a competent-and-cold target was perceived as more competent than a competent-and-warm target. Experiment 2 extends this amplification effect by manipulating either warmth or competence and adding consistently negative descriptions. Experiment 3 replicates amplification using more naturalistic behavioral descriptions. These findings extend the compensation effect — a negative functional relation between perceived warmth and competence, previously observed only in explicitly comparative contexts — to single-target impression formation. Implications for traditional person-perception models and distributed social cognition are discussed.
---------------------
Decision importance as a cue for deferral
Job Krijnen, Marcel Zeelenberg & Seger Breugelmans
Judgment and Decision Making, September 2015, Pages 407–415
Abstract:
A series of seven experiments found that people defer important decisions more than unimportant ones, and that this is independent of choice set composition. This finding persists even when deferral does not provide more flexibility (Experiment 2), when deferral has potential disadvantages (Experiment 3), and when deferral has no material benefits and is financially costly (Experiment 4). The effect of importance on deferral was also independent of potential choice conflict (Experiments 5 and 6). The only exception was a situation in which one alternative was clearly dominant; there, decision importance did not affect the likelihood of deferral (Experiment 7). These results suggest that people use decision importance as a cue for deferral: more important decisions should take more time and effort.
---------------------
Blinding Us to the Obvious? The Effect of Statistical Training on the Evaluation of Evidence
Blakeley McShane & David Gal
Management Science, forthcoming
Abstract:
Statistical training helps individuals analyze and interpret data. However, the emphasis placed on null hypothesis significance testing in academic training and reporting may lead researchers to interpret evidence dichotomously rather than continuously. Consequently, researchers may either disregard evidence that fails to attain statistical significance or undervalue it relative to evidence that attains statistical significance. Surveys of researchers across a wide variety of fields (including medicine, epidemiology, cognitive science, psychology, business, and economics) show that a substantial majority does indeed do so. This phenomenon is manifest both in researchers’ interpretations of descriptions of evidence and in their likelihood judgments. Dichotomization of evidence is reduced, though still present, when researchers are asked to make decisions based on the evidence, particularly when the decision outcome is personally consequential. Recommendations are offered.
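The dichotomous reading the authors describe is easy to illustrate with a toy example. The sketch below uses invented summary statistics (not the authors' survey materials) to show two studies whose evidence is nearly identical on a continuous reading yet falls on opposite sides of the p < .05 line:

```python
# Illustrative only: two hypothetical studies with nearly identical evidence
# that a dichotomous "p < .05" reading treats as categorically different.
from scipy import stats

def two_sided_p(mean_diff, se):
    """Two-sided p-value for an estimated difference given its standard error."""
    z = mean_diff / se
    return 2 * stats.norm.sf(abs(z))

p_a = two_sided_p(mean_diff=2.0, se=1.00)   # z = 2.00 -> p ~= .046
p_b = two_sided_p(mean_diff=2.0, se=1.04)   # z ~= 1.92 -> p ~= .054

for name, p in [("Study A", p_a), ("Study B", p_b)]:
    verdict = "significant" if p < .05 else "not significant"
    print(f"{name}: p = {p:.3f} ({verdict})")
# Continuously, the two studies carry almost the same evidence;
# dichotomously, only Study A "has an effect".
```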
---------------------
Mark Schaller
Journal of Experimental Social Psychology, forthcoming
Abstract:
Most discussions of rigor and replication focus on empirical practices (methods used to collect and analyze data). Typically overlooked is the role of conceptual practices: the methods scientists use to arrive at and articulate research hypotheses in the first place. This article discusses how the conceptualization of research hypotheses has implications for methodological decision-making and, consequently, for the replicability of results. The article identifies three ways in which empirical findings may be non-replicable, and shows how all three kinds of non-replicability are more likely to emerge when scientists take an informal conceptual approach, in which personal predictions are equated with scientific hypotheses. The risk of non-replicability may be reduced if scientists adopt more formal conceptual practices, characterized by the rigorous use of “if-then” logic to articulate hypotheses and to systematically diagnose the plausibility, size, and context-dependence of hypothesized effects. The article identifies benefits that are likely to arise from more rigorous and systematic conceptual practices, and ways to encourage their use so that it becomes more normative within the scholarly culture of the psychological sciences.
---------------------
Confidence Leak in Perceptual Decision Making
Dobromir Rahnev et al.
Psychological Science, forthcoming
Abstract:
People live in a continuous environment in which the visual scene changes on a slow timescale. It has been shown that to exploit such environmental stability, the brain creates a continuity field in which objects seen seconds ago influence the perception of current objects. What is unknown is whether a similar mechanism exists at the level of metacognitive representations. In three experiments, we demonstrated a robust intertask confidence leak — that is, confidence in one’s response on a given task or trial influencing confidence on the following task or trial. This confidence leak could not be explained by response priming or attentional fluctuations. Better ability to modulate confidence leak predicted higher capacity for metacognition as well as greater gray matter volume in the prefrontal cortex. A model based on normative principles from Bayesian inference explained the results by postulating that observers subjectively estimate the perceptual signal strength in a stable environment. These results point to the existence of a novel metacognitive mechanism mediated by regions in the prefrontal cortex.
---------------------
Deconstructing the seductive allure of neuroscience explanations
Deena Skolnick Weisberg, Jordan Taylor & Emily Hopkins
Judgment and Decision Making, September 2015, Pages 429–441
Abstract:
Previous work showed that people find explanations more satisfying when they contain irrelevant neuroscience information. The current studies investigate why this effect happens. In Study 1 (N=322), subjects judged psychology explanations that did or did not contain irrelevant neuroscience information. Longer explanations were judged more satisfying, as were explanations containing neuroscience information, but these two factors made independent contributions. In Study 2 (N=255), subjects directly compared good and bad explanations. Subjects were generally successful at selecting the good explanation except when the bad explanation contained neuroscience and the good one did not. Study 3 (N=159) tested whether neuroscience jargon was necessary for the effect, or whether it would obtain with any reference to the brain. Responses to these two conditions did not differ. These results confirm that neuroscience information exerts a seductive effect on people’s judgments, which may explain the appeal of neuroscience information within the public sphere.
---------------------
Distilling the Wisdom of Crowds: Prediction Markets Versus Prediction Polls
Pavel Atanasov et al.
Management Science, forthcoming
Abstract:
We report the results of the first large-scale, long-term, experimental test between two crowd sourcing methods – prediction markets and prediction polls. More than 2,400 participants made forecasts on 261 events over two seasons of a geopolitical prediction tournament. Some forecasters traded in a continuous double auction market and were ranked based on earnings. Others submitted probability judgments, independently or in teams, and were ranked based on Brier scores. In both seasons of the tournament, last day prices from the prediction market were more accurate than the simple mean of forecasts from prediction polls. However, team prediction polls outperformed prediction markets when poll forecasts were aggregated with algorithms using temporal decay, performance weighting and recalibration. The biggest advantage of prediction polls occurred at the start of long-duration questions. Prediction polls with proper scoring, algorithmic aggregation and teaming offer an attractive alternative to prediction markets for distilling the wisdom of crowds.
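The abstract names the aggregation ingredients (Brier scoring, temporal decay, performance weighting, recalibration) without the details. Below is a minimal sketch of how such a pipeline could look; the decay rate, skill weights, and extremizing exponent are illustrative assumptions, not the authors' fitted values:

```python
# A toy version of decay-and-skill-weighted poll aggregation with
# odds-scale recalibration; all parameter values are assumptions.
import numpy as np

def brier_score(p, outcome):
    """Brier score for a binary event: (forecast - outcome)**2; lower is better."""
    return (p - outcome) ** 2

def aggregate(probs, ages_in_days, skill_weights, decay=0.95, extremize=2.0):
    """Decay- and skill-weighted mean of poll forecasts, then recalibrated
    by extremizing (pushing the weighted mean away from 0.5 on the odds scale)."""
    w = skill_weights * decay ** ages_in_days       # downweight stale forecasts
    p = np.average(probs, weights=w)                # performance-weighted mean
    odds = (p / (1 - p)) ** extremize               # recalibration step
    return odds / (1 + odds)

p = aggregate(np.array([0.60, 0.70, 0.55]),
              ages_in_days=np.array([0, 3, 10]),
              skill_weights=np.array([1.5, 1.0, 0.8]))
print(f"aggregated forecast: {p:.2f}; Brier score if event occurs: {brier_score(p, 1):.3f}")
```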
---------------------
Certainty and Overconfidence in Future Preferences for Food
Linda Thunström, Jonas Nordström & Jason Shogren
Journal of Economic Psychology, December 2015, Pages 101–113
Abstract:
We examine consumer certainty of future preferences and overconfidence in predicting future preferences. We explore how preference certainty and overconfidence affect the option value of revising today’s decisions in the future. We design a laboratory experiment that creates a controlled choice environment, in which a subject’s choice set (over food snacks) is known and constant over time and the time frame is short: subjects make choices for themselves today and for one to two weeks ahead. Our results suggest that even for such a seemingly straightforward choice task, only 45 percent of subjects can predict their future choices accurately, while stated certainty of future preferences (one and two weeks ahead) is around 80 percent. We define overconfidence in predicting future preferences as the difference between stated certainty of future preferences and actual accuracy at predicting future choices. Our results provide strong evidence of overconfidence, and we find that overconfidence increases with the level of stated certainty of future preferences. Finally, we observe that the option value people attach to future choice flexibility decreases with overconfidence. Overconfidence in future preferences affects economic welfare because it implies that people have too strong an incentive to lock themselves into suboptimal future decisions.
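The overconfidence measure is simple arithmetic, and a toy computation makes it concrete. The subject data below are hypothetical; only the definition (stated certainty minus prediction accuracy) follows the paper:

```python
# Hypothetical subjects: each states certainty about next week's snack choice,
# predicts a snack today, and chooses a snack a week later.
import numpy as np

stated_certainty = np.array([0.8, 0.9, 0.7, 0.8])   # self-reported certainty
predicted = ["chips", "fruit", "nuts", "chips"]     # snack predicted today
chosen    = ["fruit", "fruit", "chips", "chips"]    # snack actually chosen later

accuracy = np.mean([p == c for p, c in zip(predicted, chosen)])
overconfidence = stated_certainty.mean() - accuracy
print(f"accuracy = {accuracy:.2f}, stated certainty = {stated_certainty.mean():.2f}, "
      f"overconfidence = {overconfidence:+.2f}")
# With the abstract's aggregate numbers (45% accuracy vs. ~80% certainty),
# overconfidence would come out to roughly +0.35.
```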
---------------------
Elizabeth Focella et al.
Journal of Experimental Social Psychology, forthcoming
Abstract:
Four studies tested the prediction that when highly identified group members observe another ingroup member behave hypocritically, they experience vicarious hypocrisy, which they reduce by bolstering their support for the ingroup hypocrite's message. Participants in Experiment 1 (N = 161) who witnessed a similar ingroup member act hypocritically about using sunscreen reported more positive attitudes toward using sunscreen than participants exposed to an outgroup hypocrite or to a dissimilar ingroup hypocrite. The effect of vicarious hypocrisy on attitude bolstering was attenuated in Experiment 2 (N = 68) when ingroup identity was affirmed. In Study 3 (N = 64), more highly identified participants acquired sunscreen when a fellow ingroup member's hypocrisy was attributed to high rather than low choice. Study 4 (N = 68) showed that a misattribution cue attenuated the effect of vicarious hypocrisy on sunscreen acquisition. The discussion focuses on the vicarious dissonance processes that motivate some observers to defend, rather than reject, a hypocritical ingroup member.
---------------------
Inferring Others' Hidden Thoughts: Smart Guesses in a Low Diagnostic World
Chris Street et al.
Journal of Behavioral Decision Making, forthcoming
Abstract:
People are biased toward believing that what others say is what they truly think. This effect, known as the truth bias, has often been characterized as a judgmental error that impedes accuracy. We consider an alternative view: that it reflects the use of contextual information to make the best guess when the currently available information has low diagnosticity. Participants learnt the diagnostic value of four cues, which were present during truthful statements between 20% and 80% of the time. Afterwards, participants were given contextual information: either that most people would lie or that most would tell the truth. We found that people were biased in the direction of the context information when the individuating behavioral cues were nondiagnostic. As the individuating cues became more diagnostic, context had little to no effect. We conclude that more general context information is used to make an informed judgment when diagnostic individuating cues are absent. That is, the truth bias reflects a smart guess in a low diagnostic world.
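One natural way to formalize this "smart guess" account (not necessarily the authors' own model) is Bayesian: the context base rate supplies the prior, and a cue's diagnosticity supplies the likelihood ratio. When the cue is nondiagnostic, the judgment simply tracks the context; as the cue grows diagnostic, it pulls the judgment off the base rate. The numbers below are illustrative assumptions:

```python
# Combine a context base rate (prior) with a cue's diagnosticity
# (likelihood ratio) via Bayes' rule; values are illustrative only.

def posterior_truth(base_rate, p_cue_given_truth, p_cue_given_lie):
    """P(statement is true | cue present), from prior and cue likelihoods."""
    prior_odds = base_rate / (1 - base_rate)
    likelihood_ratio = p_cue_given_truth / p_cue_given_lie
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Nondiagnostic cue (present 50% of the time either way): context dominates.
print(posterior_truth(0.8, 0.5, 0.5))   # 0.80 -> "most people tell the truth" wins
# Diagnostic cue (present 80% with truths, 20% with lies): cue overrides context.
print(posterior_truth(0.2, 0.8, 0.2))   # 0.50 -> judgment pulled off the base rate
```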
---------------------
Karolina Lempert & Elizabeth Tricomi
Journal of Cognitive Neuroscience, forthcoming
Abstract:
Whereas positive feedback is both rewarding and informative, negative feedback can be construed as either punishing (because it is indicative of poor performance) or informative (because it may lead to goal attainment). In this neuroimaging experiment, we highlighted the informational value of negative feedback by intermixing trials with and without feedback. When performance feedback is expected, positive feedback triggers an increase in striatal activity, whereas negative feedback elicits a decrease in striatal activity. We predicted that, in contrast, when feedback receipt is unpredictable, the striatal response to negative feedback would increase. Participants performed a paired-associate learning task during fMRI scanning. In one condition (“blocked feedback”), the receipt of feedback was predictable — participants knew whether or not they would receive feedback for their responses. In another condition (“mixed feedback”), the receipt of feedback was unpredictable — on a random 50% of trials, participants received feedback, and they otherwise received no feedback. Negative feedback in the mixed feedback condition elicited more striatal activity than negative feedback in the blocked feedback condition. In contrast, feedback omission evoked more striatal activity when feedback delivery was expected, compared to when it was unpredictable. This pattern emerged from an increase in caudate activity in response to negative feedback in the mixed feedback condition and a decrease in ventral striatal activity in response to no feedback in this condition. These results suggest that, by emphasizing the informational value of negative feedback, an unpredictable feedback context alters the striatal response to negative feedback and to the omission of feedback.
---------------------
David Hardisty & Jeffrey Pfeffer
Stanford Working Paper, September 2015
Abstract:
Three studies explored the effects of uncertainty on people’s time preferences for financial gains and losses. In general, individuals seek to avoid uncertainty in situations of intertemporal choice. With the expected value of payouts held constant, participants preferred immediate gains and losses when the future was uncertain, and preferred future gains and losses when the present was uncertain. This pattern of preferences is incompatible with current models of intertemporal choice, in which people should consistently prefer gains now and losses later. Nor is this pattern of uncertainty avoidance explained by Prospect Theory models, which predict risk seeking for losses. We discuss these findings in relation to previous literature.
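The benchmark the authors test against follows from simple discounting: a delayed payoff shrinks in magnitude, which hurts gains and helps losses. A toy computation with an assumed discount factor shows the logic:

```python
# Why standard discounting predicts "gains now, losses later": a delayed
# payoff delta**t * x shrinks toward zero, so delay erodes gains but
# softens losses. The discount factor and payoffs are assumptions.
delta = 0.9   # per-period discount factor

for x in (+100, -100):                       # a gain and a loss
    now, later = x, delta ** 4 * x           # immediate vs. four periods ahead
    prefer = "now" if now > later else "later"
    print(f"payoff {x:+}: value now = {now:+.1f}, later = {later:+.1f} -> prefer {prefer}")
# +100: now = +100.0 > later = +65.6 -> take gains now
# -100: now = -100.0 < later = -65.6 -> defer losses
```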
---------------------
Public perceptions of expert disagreement: Bias and incompetence or a complex and random world?
Nathan Dieckmann et al.
Public Understanding of Science, forthcoming
Abstract:
Expert disputes can present laypeople with several challenges, including trying to understand why such disputes occur. In an online survey of the US public, we used a psychometric approach to elicit perceptions of expert disputes for 56 forecasts sampled from seven domains. People with low education, or with low self-reported topic knowledge, were most likely to attribute disputes to expert incompetence. People with higher self-reported knowledge tended to attribute disputes to expert bias arising from financial or ideological motives. Those with more education and higher cognitive ability were most likely to attribute disputes to natural factors, such as the irreducible complexity and randomness of the phenomenon. Our results show that laypeople tend to use coherent — albeit potentially overly narrow — attributions to make sense of expert disputes, and that these explanations vary across different segments of the population. We highlight several important implications for scientists, risk managers, and decision makers.