Knowns and unknowns
When Self-Perceptions of Expertise Increase Closed-Minded Cognition: The Earned Dogmatism Effect
Victor Ottati et al.
Journal of Experimental Social Psychology, forthcoming
Abstract:
Although cultural values generally prescribe open-mindedness, open-minded cognition systematically varies across individuals and situations. According to the Earned Dogmatism Hypothesis, social norms dictate that experts are entitled to adopt a relatively dogmatic, closed-minded orientation. As a consequence, situations that engender self-perceptions of high expertise elicit a more closed-minded cognitive style. These predictions are confirmed in six experiments.
---------------------
Knowledge Does Not Protect Against Illusory Truth
Lisa Fazio et al.
Journal of Experimental Psychology: General, forthcoming
Abstract:
In daily life, we frequently encounter false claims in the form of consumer advertisements, political propaganda, and rumors. Repetition may be one way that insidious misconceptions, such as the belief that vitamin C prevents the common cold, enter our knowledge base. Research on the illusory truth effect demonstrates that repeated statements are easier to process, and subsequently perceived to be more truthful, than new statements. The prevailing assumption in the literature has been that knowledge constrains this effect (i.e., repeating the statement “The Atlantic Ocean is the largest ocean on Earth” will not make you believe it). We tested this assumption using both normed estimates of knowledge and individuals’ demonstrated knowledge on a postexperimental knowledge check (Experiment 1). Contrary to prior suppositions, illusory truth effects occurred even when participants knew better. Multinomial modeling demonstrated that participants sometimes rely on fluency even if knowledge is also available to them (Experiment 2). Thus, participants demonstrated knowledge neglect, or the failure to rely on stored knowledge, in the face of fluent processing experiences.
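A minimal multinomial-processing-tree sketch of the kind of account the abstract alludes to, assuming a simple tree in which stored knowledge is consulted with probability k and, otherwise, fluency drives a "true" response with probability f; the tree structure, parameter names, and numbers are illustrative, not the published model.

```python
# Illustrative multinomial-processing-tree sketch (structure and numbers are
# assumptions, not the published model): with probability k a participant
# retrieves and uses stored knowledge; otherwise, with probability f, they
# rely on fluency and call the statement true; otherwise they guess.
def p_true_response(k, f, statement_is_false=True, guess=0.5):
    knowledge_says_true = 0.0 if statement_is_false else 1.0
    return k * knowledge_says_true + (1 - k) * (f + (1 - f) * guess)

# Repetition raises fluency (f), so even a knowable falsehood is rated "true"
# more often when repeated: knowledge neglect occurring on the (1 - k) branch.
print(p_true_response(k=0.6, f=0.2))   # new false statement      -> 0.24
print(p_true_response(k=0.6, f=0.6))   # repeated false statement -> 0.32
```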
---------------------
The Influence of Control on Belief in Conspiracy Theories: Conceptual and Applied Extensions
Jan-Willem van Prooijen & Michele Acker
Applied Cognitive Psychology, forthcoming
Abstract:
Threats to control have been found to increase belief in conspiracy theories. We argue, however, that previous research observing this effect was limited in two ways. First, previous research did not exclude the possibility that affirming control might reduce conspiracy beliefs. Second, because of artificial lab procedures, previous findings provide little information about the external validity of the control threat–conspiracy belief relationship. In Study 1, we address the first limitation and find that affirming control indeed reduces belief in conspiracy theories as compared with a neutral baseline condition. In Study 2, we address the second limitation of the literature. In a large-scale US sample, we find that a societal threat to control that citizens actually experienced predicts belief in a range of common conspiracy theories. Taken together, these findings increase insight into the fundamental relationship between the human need for control and the tendency to believe in conspiracy theories.
---------------------
Some Dare Call It Conspiracy: Labeling Something a Conspiracy Theory Does Not Reduce Belief in It
Michael Wood
Political Psychology, forthcoming
Abstract:
“Conspiracy theory” is widely acknowledged to be a loaded term. Politicians use it to mock and dismiss allegations against them, while philosophers and political scientists warn that it could be used as a rhetorical weapon to pathologize dissent. In two empirical studies conducted on Amazon Mechanical Turk, I present an initial examination of whether this concern is justified. In Experiment 1, 150 participants judged a list of historical and speculative theories to be no less likely when they were labeled “conspiracy theories” than when they were labeled “ideas.” In Experiment 2 (N = 802), participants who read a news article about fictitious “corruption allegations” endorsed those allegations no more than participants who saw them labeled “conspiracy theories.” The lack of an effect of the conspiracy-theory label in both experiments was unexpected and may be due to a romanticized image of conspiracy theories in popular media or a dilution of the term to include mundane speculation regarding corruption and political intrigue.
---------------------
Expertise and decision-making in American football
Adam Woods et al.
Frontiers in Psychology, 13 July 2015
Abstract:
In American football, pass interference calls can be difficult to make, especially when the timing of contact between players is ambiguous. American football history contains many examples of controversial pass interference decisions, often with fans, players, and officials interpreting the same event differently. The current study sought to evaluate the influence of experience with concepts important for officiating decisions in American football on the probability (i.e., response criteria) of pass interference calls. We further investigated the extent to which such experience modulates perceptual biases that might influence the interpretation of such events. We hypothesized that observers with less experience with the American football concepts important for pass interference would make progressively more pass interference calls than more experienced observers, even when given an explicit description of the necessary criteria for a pass interference call. In a go/no-go experiment using photographs from American football games, three groups of participants with different levels of experience with American football (Football Naïve, Football Player, and Football Official) made pass interference calls for pictures depicting left-moving and right-moving events. More experience was associated with progressively and significantly fewer pass interference calls [F(2,48) = 10.4, p < 0.001], with Football Naïve participants making the most pass interference calls, and Football Officials the least. In addition, our data replicated a prior finding of spatial biases for interpreting left-moving images more harshly than identical right-moving images, but only in Football Players. These data suggest that experience with the concepts important for making a decision may influence the rate of decision-making, and may also play a role in susceptibility to spatial biases.
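The "response criteria" framing comes from signal detection theory; below is a small sketch, with made-up counts rather than the study's data, of how a more liberal criterion (more pass interference calls overall) can be separated from sensitivity.

```python
from statistics import NormalDist

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Signal detection sensitivity (d') and response criterion (c) from counts.
    A more liberal observer has a lower (more negative) c; groups can differ
    in c, the overall tendency to call interference, without differing in d'."""
    z = NormalDist().inv_cdf
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return z(hit_rate) - z(fa_rate), -0.5 * (z(hit_rate) + z(fa_rate))

# Hypothetical counts for a naive observer and an official (not the paper's data).
print(sdt_indices(hits=40, misses=10, false_alarms=25, correct_rejections=25))
print(sdt_indices(hits=35, misses=15, false_alarms=10, correct_rejections=40))
```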
---------------------
Collective Intelligence Meets Medical Decision-Making: The Collective Outperforms the Best Radiologist
Max Wolf et al.
PLoS ONE, August 2015
Abstract:
While collective intelligence (CI) is a powerful approach to increase decision accuracy, few attempts have been made to unlock its potential in medical decision-making. Here we investigated the performance of three well-known collective intelligence rules (“majority”, “quorum”, and “weighted quorum”) when applied to mammography screening. For any particular mammogram, these rules aggregate the independent assessments of multiple radiologists into a single decision (recall the patient for additional workup or not). We found that, compared to single radiologists, any of these CI-rules both increases true positives (i.e., recalls of patients with cancer) and decreases false positives (i.e., recalls of patients without cancer), thereby overcoming one of the fundamental limitations to decision accuracy that individual radiologists face. Importantly, we find that all CI-rules systematically outperform even the best-performing individual radiologist in the respective group. Our findings demonstrate that CI can be employed to improve mammography screening; similarly, CI may have the potential to improve medical decision-making in a much wider range of contexts, including many areas of diagnostic imaging and, more generally, diagnostic decisions that are based on the subjective interpretation of evidence.
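A rough sketch of the three aggregation rules named in the abstract, applied to binary recall decisions; the quorum threshold and per-reader weights are illustrative assumptions, since the abstract does not give the exact parameterization.

```python
from typing import Sequence

def majority(votes: Sequence[bool]) -> bool:
    """Recall the patient if more than half of the readers vote to recall."""
    return sum(votes) > len(votes) / 2

def quorum(votes: Sequence[bool], k: int) -> bool:
    """Recall if at least k readers vote to recall (k is a free threshold)."""
    return sum(votes) >= k

def weighted_quorum(votes: Sequence[bool], weights: Sequence[float], theta: float) -> bool:
    """Recall if the weighted sum of recall votes reaches theta; weights might
    reflect each reader's past accuracy (an assumption in this sketch)."""
    return sum(w for v, w in zip(votes, weights) if v) >= theta

# Five independent readings of one mammogram (hypothetical).
votes = [True, False, True, True, False]
weights = [0.9, 0.6, 0.8, 0.7, 0.5]
print(majority(votes))                       # True
print(quorum(votes, k=2))                    # True
print(weighted_quorum(votes, weights, 1.5))  # True (0.9 + 0.8 + 0.7 = 2.4)
```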
---------------------
Teaching critical thinking
N.G. Holmes, Carl Wieman & D.A. Bonn
Proceedings of the National Academy of Sciences, 8 September 2015, Pages 11199–11204
Abstract:
The ability to make decisions based on data, with its inherent uncertainties and variability, is a complex and vital skill in the modern world. The need for such quantitative critical thinking occurs in many different contexts, and although it is an important goal of education, that goal is seldom achieved. We argue that the key element for developing this ability is repeated practice in making decisions based on data, with feedback on those decisions. We demonstrate a structure for providing suitable practice that can be applied in any instructional setting that involves acquiring data and relating those data to scientific models. This study reports the results of applying that structure in an introductory physics laboratory course. Students in an experimental condition were repeatedly instructed to make and act on quantitative comparisons between datasets, and between data and models, an approach that is common to all science disciplines. These instructions were slowly faded across the course. After the instructions had been removed, students in the experimental condition were 12 times more likely to spontaneously propose or make changes to improve their experimental methods than a control group, who performed traditional experimental activities. The students in the experimental condition were also four times more likely to identify and explain a limitation of a physical model using their data. Students in the experimental condition also showed much more sophisticated reasoning about their data. These differences between the groups persisted into a subsequent course taken the following year.
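The quantitative comparisons the students practiced can be illustrated with a difference-over-uncertainty ratio of the sort common in introductory physics labs; treating this particular statistic as the course's instructed comparison is an assumption.

```python
import math

def compare(a, da, b, db):
    """Ratio of the difference between two measured values to its combined
    standard uncertainty. |ratio| well above 1 suggests a real disagreement
    worth acting on (revise the method or the model); |ratio| near or below 1
    suggests agreement within uncertainty."""
    return (a - b) / math.sqrt(da ** 2 + db ** 2)

# Hypothetical example: a measured pendulum period vs. a model prediction.
t_measured, dt_measured = 2.04, 0.01   # seconds
t_model, dt_model = 2.00, 0.00         # model treated as exact here
print(compare(t_measured, dt_measured, t_model, dt_model))  # 4.0 -> investigate
```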
---------------------
Why some surprises are more surprising than others: Surprise as a metacognitive sense of explanatory difficulty
Meadhbh Foster & Mark Keane
Cognitive Psychology, September 2015, Pages 74–116
Abstract:
Early theories of surprise, including Darwin’s, argued that it was predominantly a basic emotion. Recently, theories have taken a more cognitive view of surprise, casting it as a process of “making sense of surprising events”. The current paper advances the view that the essence of this sense-making process is explanation; specifically, that people’s perception of surprise is a metacognitive estimate of the cognitive work involved in explaining an abnormal event. So, some surprises are more surprising because they are harder to explain. This proposal is tested in eight experiments that explore (i) how the contents of memory can influence surprise, (ii) how different classes of scenarios retrieve more or less relevant knowledge from memory to explain surprising outcomes, (iii) how partial explanations constrain the explanation process and reduce surprise, and (iv) how, overall, any factor that increases the cognitive work of explaining a surprising event results in higher levels of surprise (e.g., task demands to find three explanations rather than one). Across the present studies, using different materials, paradigms and measures, it is consistently found that the difficulty of explaining a surprising outcome is the best predictor of people’s perceptions of the surprisingness of events. Alternative accounts of these results are considered, as are future directions for this research.
---------------------
Conformity in Groups: The Effects of Others’ Views on Expressed Attitudes and Attitude Change
Lindsey Levitan & Brad Verhulst
Political Behavior, forthcoming
Abstract:
Two experiments demonstrate the powerful influence of others’ views on individual attitudes and attitude expression. Those around us can influence our views through persuasion and information exchange, but the current research hypothesizes that exposure to alternative views, even without discussion or exchange of persuasive arguments, can also alter which attitudes are expressed, and even generate long-term shifts in attitudes. In an initial study, naïve participants were asked their attitudes on a range of standard survey items privately, publicly in a group with trained confederates, and again privately following the group setting. Findings indicate significant attitudinal conformity, which was most pronounced when participants were faced with a unanimous (versus non-unanimous) group. The group experience continued to influence participants’ views when they were again asked their views in private. A second experiment varied whether participants heard views from live confederates or via computer, demonstrating that these effects could not be attributed only to issue-relevant information provided by or inferred from group members, and that attitude change persisted long after participants had left the laboratory. In summary, when people are asked their attitudes publicly, they adjust their responses to conform to those around them, and this attitude change persists privately, even weeks later. Accordingly, such purely social processes of attitude change may be every bit as important as more traditional cognitive informational processes in understanding where people’s political attitudes come from, and how they may be changed.
---------------------
Blinded by Experience: Prior Experience, Negative News and Belief Updating
Bradley Staats, Diwas KC & Francesca Gino
Harvard Working Paper, August 2015
Abstract:
Traditional models of operations management involve dynamic decision-making assuming optimal (Bayesian) updating. However, behavioral theory suggests that individuals exhibit bias in their beliefs and decisions. We conduct both a field study and two laboratory studies to examine these phenomena in the context of healthcare. In particular, we examine how an individual’s prior experiences and the experiences of those around them alter the operational decisions that the individual makes. We draw on an exogenous announcement of negative news by the Food and Drug Administration (FDA) and explore how this affects an operational decision – production tool choice – of interventional cardiologists deciding between two types of cardiac stents. Analyzing 147,000 choices over 6 years, we find that individuals do respond to negative news by using the focal production tool less often. However, we find that both an individual’s own experience and others’ experience alter their responses in predictable ways. Moreover, although individual and other experience act as substitutes prior to negative news, the two types of experience act as complements following the negative announcement – leading to even greater use of the same production tool. Two controlled lab studies replicate our main findings and show that behavioral biases, not rational expectations, drive the effect. Our research contributes not only to operations management research, but also to the practice of healthcare and operations more generally.
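The normative (Bayesian) benchmark the abstract contrasts with biased updating can be sketched as a conjugate Beta-Binomial update of a belief about an adverse-event rate; the functional form and numbers are illustrative assumptions, not the paper's model.

```python
def update_beta(alpha: float, beta: float, events: int, trials: int):
    """Posterior Beta parameters after observing `events` adverse outcomes in
    `trials` uses of a tool (conjugate Beta-Binomial update)."""
    return alpha + events, beta + (trials - events)

alpha, beta = 2.0, 98.0                  # prior belief: roughly a 2% event rate
alpha, beta = update_beta(alpha, beta, events=3, trials=50)
print(alpha / (alpha + beta))            # posterior mean ~ 0.033
```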
---------------------
Learning from experience in nonlinear environments: Evidence from a competition scenario
Emre Soyer & Robin Hogarth
Cognitive Psychology, September 2015, Pages 48–73
Abstract:
We test people’s ability to learn to estimate a criterion (probability of success in a competition scenario) that requires aggregating information in a nonlinear manner. The learning environments faced by experimental participants are kind in that they are characterized by immediate, accurate feedback involving either naturalistic outcomes (information on winning and/or ranking) or the normatively correct probabilities. We find no evidence of learning from the former and modest learning from the latter, except that a group of participants endowed with a memory aid performed substantially better. However, when the task is restructured such that information should be aggregated in a linear fashion, participants learn to make more accurate assessments. Our experiments highlight the important role played by prior beliefs in learning tasks, the default status of linear aggregation in many inferential judgments, and the difficulty of learning in nonlinear environments even in the presence of veridical feedback.
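One way to see why the criterion is nonlinear: if success means outperforming every rival, the win probability depends on skill differences and on the number of rivals in a multiplicative, not additive, way. The competition structure below is an assumption for illustration, not the experiment's exact task.

```python
import random

def win_probability(mu_self, mu_rival, sigma, n_rivals, sims=100_000):
    """Monte Carlo estimate of the chance that one entrant's performance draw
    exceeds the draws of all rivals (independent normal noise, common sigma)."""
    wins = 0
    for _ in range(sims):
        own = random.gauss(mu_self, sigma)
        if all(own > random.gauss(mu_rival, sigma) for _ in range(n_rivals)):
            wins += 1
    return wins / sims

# Doubling the skill advantage does not double the win probability, and adding
# rivals shrinks it multiplicatively -- the aggregation is nonlinear.
print(win_probability(1.0, 0.0, 1.0, n_rivals=1))   # ~0.76
print(win_probability(2.0, 0.0, 1.0, n_rivals=1))   # ~0.92
print(win_probability(1.0, 0.0, 1.0, n_rivals=5))   # roughly 0.4
```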
---------------------
The precision of value-based choices depends causally on fronto-parietal phase coupling
Rafael Polanía et al.
Nature Communications, August 2015
Abstract:
Which meal would you like today, chicken or pasta? For such value-based choices, organisms must flexibly integrate various types of sensory information about internal states and the environment to transform them into actions. Recent accounts suggest that these choice-relevant processes are mediated by information transfer between functionally specialized but spatially distributed brain regions in parietal and prefrontal cortex; however, it remains unclear whether such fronto-parietal communication is causally involved in guiding value-based choices. We find that transcranially inducing oscillatory desynchronization between frontopolar and parietal cortex leads to more inaccurate choices between food rewards while leaving closely matched perceptual decisions unaffected. Computational modelling shows that this exogenous manipulation leads to imprecise value assignments to the choice alternatives. Thus, our study demonstrates that accurate value-based decisions critically involve coherent rhythmic information transfer between fronto-parietal brain areas and establishes an experimental approach to non-invasively manipulate the precision of value-based choices in humans.
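One way to picture "imprecise value assignments" is a softmax choice rule in which noise added to the subjective values flattens accuracy toward chance; the softmax form, noise placement, and parameters here are illustrative assumptions, not the paper's fitted model.

```python
import math, random

def choose_better(v_better, v_worse, value_noise, beta=8.0):
    """One simulated trial: both values are corrupted by Gaussian noise (the
    'imprecision'), then a softmax with inverse temperature beta picks an
    option. Returns True if the truly better option is chosen."""
    a = v_better + random.gauss(0, value_noise)
    b = v_worse + random.gauss(0, value_noise)
    p_better = 1 / (1 + math.exp(-beta * (a - b)))
    return random.random() < p_better

def accuracy(value_noise, trials=20_000):
    return sum(choose_better(0.65, 0.35, value_noise) for _ in range(trials)) / trials

print(accuracy(value_noise=0.05))   # precise values   -> roughly 0.9
print(accuracy(value_noise=0.50))   # imprecise values -> closer to chance (~0.65)
```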
---------------------
Interference effects of choice on confidence: Quantum characteristics of evidence accumulation
Peter Kvam et al.
Proceedings of the National Academy of Sciences, 25 August 2015, Pages 10645–10650
Abstract:
Decision-making relies on a process of evidence accumulation which generates support for possible hypotheses. Models of this process derived from classical stochastic theories assume that information accumulates by moving across definite levels of evidence, carving out a single trajectory across these levels over time. In contrast, quantum decision models assume that evidence develops over time in a superposition state analogous to a wavelike pattern and that judgments and decisions are constructed by a measurement process by which a definite state of evidence is created from this indefinite state. This constructive process implies that interference effects should arise when multiple responses (measurements) are elicited over time. We report such an interference effect during a motion direction discrimination task. Decisions during the task interfered with subsequent confidence judgments, resulting in less extreme and more accurate judgments than when no decision was elicited. These results provide qualitative and quantitative support for a quantum random walk model of evidence accumulation over the popular Markov random walk model. We discuss the cognitive and neural implications of modeling evidence accumulation as a quantum dynamic system.
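A minimal sketch of the contrast the abstract draws, assuming a small discrete evidence scale: amplitudes evolve unitarily in superposition, and an intermediate choice is implemented as a measurement that collapses the state before further evolution. The generator, timings, and number of evidence levels are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

n = 9                                        # discrete evidence levels (illustrative)
H = np.zeros((n, n))
for i in range(n - 1):                       # couple neighboring evidence levels
    H[i, i + 1] = H[i + 1, i] = 1.0
H += np.diag(np.linspace(-1.0, 1.0, n))      # mild drift toward high evidence

def evolve(psi, t):
    """Unitary evolution psi -> exp(-iHt) psi via eigendecomposition of H."""
    w, V = np.linalg.eigh(H)
    return V @ (np.exp(-1j * w * t) * (V.conj().T @ psi))

start = np.zeros(n, dtype=complex)
start[n // 2] = 1.0                          # begin at the neutral evidence level

# Path A: no intermediate choice; the superposition evolves undisturbed.
no_choice = evolve(start, 2.0)

# Path B: a choice at t = 1 measures "is evidence above the midpoint?", and the
# state collapses onto the 'yes' branch before evolving further.
mid = evolve(start, 1.0)
upper = np.arange(n) > n // 2
collapsed = np.where(upper, mid, 0)
collapsed = collapsed / np.linalg.norm(collapsed)
with_choice = evolve(collapsed, 1.0)

# The two confidence distributions differ: the intermediate measurement removes
# interference terms that are still present when no choice is elicited.
print(np.round(np.abs(no_choice) ** 2, 3))
print(np.round(np.abs(with_choice) ** 2, 3))
```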