Informed Decisions
Impatience for information: Curiosity is here today, gone tomorrow
Andras Molnar & Russell Golman
Journal of Behavioral Decision Making, forthcoming
Abstract:
Based on the curiosity-as-drive theory and the theory of information gaps, we argue that curiosity -- that is, the desire to seek out novel information for its own sake -- is highly transient, and while people may be tempted by immediate answers, they may be less motivated when they need to wait for information. Contrary to standard economic models, we predict an immediacy effect (or present bias) for information even in those cases when waiting does not affect the objective value of information. Furthermore, we argue that this immediacy effect is independent of motivated emotion management; that is, introducing delays makes people less willing to obtain information for its own sake even when information does not elicit strong anticipatory feelings. We test these hypotheses in two pre-registered experiments (N = 2406) featuring real effort and monetary incentives and find that introducing a delay in information provision significantly reduces participants' willingness to obtain information. In Study 1, we also show that people display a stronger immediacy effect for information than for monetary rewards. In Study 2, we demonstrate that people are impatient for information regardless of how they expect to feel after receiving the information, and even when the perceived instrumental value of information remains unaffected by the delay. The strong impatience for information in both studies is consistent with the notion that curiosity acts as a drive, and as such, is highly transient.
Inaccurate forecasting of a randomized controlled trial
Mats Ahrenshop et al.
Journal of Experimental Political Science, forthcoming
Abstract:
We report the results of a forecasting experiment about a randomized controlled trial that was conducted in the field. The experiment asks Ph.D. students, faculty, and policy practitioners to forecast (1) compliance rates for the RCT and (2) treatment effects of the intervention. The forecasting experiment randomizes the order of questions about compliance and treatment effects, as well as the provision of information that a pilot experiment had produced null results. Forecasters were excessively optimistic about treatment effects and unresponsive both to item order and to information about the pilot. Those who declare themselves expert in the area relevant to the intervention are particularly resistant to new information that the treatment is ineffective. We interpret our results as suggesting that we should exercise caution when undertaking expert forecasting, since experts may have unrealistic expectations and may be inflexible in altering these even when provided new information.
Not all bullshit pondered is tossed: Reflection decreases receptivity to some types of misleading information but not others
Shane Littrell, Ethan Meyers & Jonathan Fugelsang
Applied Cognitive Psychology, forthcoming
Abstract:
Across three studies (N = 659), we present evidence that engaging in explanatory reflection reduces receptivity to pseudo-profound bullshit but not scientific bullshit or fake news. Additionally, ratings for pseudo-profound and scientific bullshit attributed to authoritative sources were significantly inflated compared to bullshit from anonymous sources. These findings provide initial evidence that asking people to reflect on why they find certain statements meaningful (or not) helps reduce receptivity to some types of misinformation but not others. Moreover, the appeal of misleading claims spread by perceived experts may be largely immune to the putative benefits of interventions that rely solely on reflective thinking. Taken together, our results suggest that while encouraging the public to be more reflective can certainly be helpful as a general rule, the effectiveness of this strategy in reducing the persuasiveness of misleading or otherwise epistemically suspect claims is limited by the type of claims being evaluated.
Online searches to evaluate misinformation can increase its perceived veracity
Kevin Aslett et al.
Nature, forthcoming
Abstract:
Considerable scholarly attention has been paid to understanding belief in online misinformation, with a particular focus on social networks. However, the dominant role of search engines in the information environment remains underexplored, even though the use of online search to evaluate the veracity of information is a central component of media literacy interventions. Although conventional wisdom suggests that searching online when evaluating misinformation would reduce belief in it, there is little empirical evidence to evaluate this claim. Here, across five experiments, we present consistent evidence that online search to evaluate the truthfulness of false news articles actually increases the probability of believing them. To shed light on this relationship, we combine survey data with digital trace data collected using a custom browser extension. We find that the search effect is concentrated among individuals for whom search engines return lower-quality information. Our results indicate that those who search online to evaluate misinformation risk falling into data voids, or informational spaces in which there is corroborating evidence from low-quality sources. We also find consistent evidence that searching online to evaluate news increases belief in true news from low-quality sources, but inconsistent evidence that it increases belief in true news from mainstream sources. Our findings highlight the need for media literacy programmes to ground their recommendations in empirically tested strategies and for search engines to invest in solutions to the challenges identified here.
Looking on the (B)right Side of Life: Cognitive Ability and Miscalibrated Financial Expectations
Chris Dawson
Personality and Social Psychology Bulletin, forthcoming
Abstract:
It is a puzzle why humans tend toward unrealistic optimism, as it can lead to excessively risky behavior and a failure to take precautionary action. Using data from a large nationally representative U.K. sample (N=36,312), our claim is that optimism bias is partly a consequence of low cognition -- as measured by a broad range of cognitive skills, including memory, verbal fluency, fluid reasoning and numerical reasoning. We operationalize unrealistic optimism as the difference between a person’s financial expectation and the financial realization that follows, measured annually over a decade. All else being equal, those highest on cognitive ability experience a 22% (53.2%) increase in the probability of realism (pessimism) and a 34.8% reduction in optimism compared with those lowest on cognitive ability. This suggests that the negative consequences of an excessively optimistic mindset may, in part, be a side product of the true driver, low cognitive ability.
Exposure to the Views of Opposing Others with Latent Cognitive Differences Results in Social Influence -- But Only When Those Differences Remain Obscured
Douglas Guilbeault et al.
Management Science, forthcoming
Abstract:
Cognitive differences can catalyze social learning through the process of one-to-one social influence. Yet the learning benefits of exposure to the ideas of cognitively dissimilar others often fail to materialize. Why do cognitive differences produce learning from interpersonal influence in some contexts but not in others? To answer this question, we distinguish between cognition that is expressed -- one’s public stance on an issue and the way in which supporting arguments are framed -- and cognition that is latent -- the semantic associations that underpin these expressions. We theorize that, when latent cognition is obscured, one is more likely to be influenced to change one’s mind on an issue when exposed to the opposing ideas of cognitively dissimilar, rather than similar, others. When latent cognition is instead observable, a subtle similarity-attraction response tends to counteract the potency of cognitive differences -- even when social identity cues and other categorical distinctions are inaccessible. To evaluate these ideas, we introduce a novel experimental paradigm in which participants (a) respond to a polarizing scenario; (b) view an opposing argument by another whose latent cognition is either similar to or different from their own and is either observable or obscured; and (c) have an opportunity to respond again to the scenario. A preregistered study (n = 1,000) finds support for our theory. A supplemental study (n = 200) suggests that the social influence of latent cognitive differences operates through the mechanism of argument novelty. We discuss implications of these findings for research on social influence, collective intelligence, and cognitive diversity in groups.
Mathematical discoveries from program search with large language models
Bernardino Romera-Paredes et al.
Nature, forthcoming
Abstract:
Large Language Models (LLMs) have demonstrated tremendous capabilities in solving complex tasks, from quantitative reasoning to understanding natural language. However, LLMs sometimes suffer from confabulations (or hallucinations), which can result in them making plausible but incorrect statements. This hinders the use of current large models in scientific discovery. Here we introduce FunSearch (short for searching in the function space), an evolutionary procedure based on pairing a pre-trained LLM with a systematic evaluator. We demonstrate the effectiveness of this approach to surpass the best known results in important problems, pushing the boundary of existing LLM-based approaches. Applying FunSearch to a central problem in extremal combinatorics -- the cap set problem -- we discover new constructions of large cap sets going beyond the best known ones, both in finite-dimensional and asymptotic cases. These represent the first discoveries made for established open problems using LLMs. We showcase the generality of FunSearch by applying it to an algorithmic problem, online bin packing, finding new heuristics that improve upon widely used baselines. In contrast to most computer search approaches, FunSearch searches for programs that describe how to solve a problem, rather than what the solution is. Beyond being an effective and scalable strategy, discovered programs tend to be more interpretable than raw solutions, enabling feedback loops between domain experts and FunSearch, and the deployment of such programs in real-world applications.
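The core loop the abstract describes -- candidate programs proposed, scored by a systematic evaluator, and the best kept -- can be illustrated with a toy sketch on the online bin-packing problem it mentions. Here the "program space" is deliberately tiny (two coefficients of a priority function) and random Gaussian mutation stands in for the LLM's proposals; all function names and the parameterization are illustrative assumptions, not FunSearch's actual implementation.

```python
import random

CAPACITY = 100

def pack(items, priority):
    """Online bin packing: place each arriving item into the feasible bin
    whose remaining space the priority function scores highest, opening a
    new bin only when no existing bin fits the item."""
    bins = []
    for item in items:
        feasible = [i for i, load in enumerate(bins) if load + item <= CAPACITY]
        if feasible:
            best = max(feasible, key=lambda i: priority(item, CAPACITY - bins[i]))
            bins[best] += item
        else:
            bins.append(item)
    return len(bins)

def evaluate(params, datasets):
    """Systematic evaluator: score a candidate 'program' by the total
    number of bins it uses across the datasets (higher = fewer bins)."""
    a, b = params
    priority = lambda item, space: a * space + b * (space - item)
    return -sum(pack(items, priority) for items in datasets)

def evolve(datasets, generations=50, seed=0):
    """Evolutionary loop: repeatedly mutate the best candidate found so
    far and keep a mutant only if the evaluator scores it strictly higher.
    Random mutation plays the role the LLM plays in FunSearch."""
    rng = random.Random(seed)
    best = (0.0, -1.0)            # start at best-fit: minimize leftover space
    best_score = evaluate(best, datasets)
    for _ in range(generations):
        mutant = tuple(p + rng.gauss(0.0, 0.5) for p in best)
        score = evaluate(mutant, datasets)
        if score > best_score:
            best, best_score = mutant, score
    return best, -best_score      # best coefficients and total bins used
```

The key structural point carried over from the abstract: the search object is an executable scoring rule (how to pack), not a packing itself, so any candidate the evaluator accepts remains inspectable and reusable on new instances.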
A Heuristic for Combining Correlated Experts When There Are Few Data
David Soule, Yael Grushka-Cockayne & Jason Merrick
Management Science, forthcoming
Abstract:
It is intuitive and theoretically sound to combine experts’ forecasts based on their proven skills, while accounting for correlation among their forecast submissions. Simpler combination methods, however, which assume independence of forecasts or equal skill, have been found to be empirically robust, in particular, in settings in which there are few historical data available for assessing experts’ skill. One explanation for the robust performance by simple methods is that empirical estimation of skill and of correlations introduces error, leading to worse aggregated forecasts than simpler alternatives. We offer a heuristic that accounts for skill and reduces estimation error by utilizing a common correlation factor. Our theoretical results present an optimal form for this common correlation, and we offer Bayesian estimators that can be used in practice. The common correlation heuristic is shown to outperform alternative combination methods on macroeconomic and experimental forecasting tasks where limited historical data are available.
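A minimal sketch of the common-correlation idea, under stated assumptions: if expert i's forecast error has standard deviation sigma_i and every pair of experts shares a single correlation rho, the error covariance matrix is equicorrelated up to scaling, and the minimum-variance combination weights have a closed form. The function names are illustrative, and the paper's Bayesian estimators for sigma_i and rho are not reproduced here; only the weighting step is shown.

```python
def common_correlation_weights(sigmas, rho):
    """Minimum-variance combination weights when expert i's forecast error
    has standard deviation sigmas[i] and every pair of experts shares one
    correlation rho (valid for -1/(n-1) < rho < 1). Uses the closed-form
    inverse of an equicorrelation matrix, so no matrix solver is needed."""
    n = len(sigmas)
    u = [1.0 / s for s in sigmas]                   # inverse volatilities
    shrink = rho * sum(u) / (1.0 + (n - 1) * rho)   # shared shrinkage term
    raw = [(ui - shrink) / (si * (1.0 - rho)) for ui, si in zip(u, sigmas)]
    total = sum(raw)
    return [r / total for r in raw]                 # normalize to sum to 1

def combine(forecasts, sigmas, rho):
    """Aggregate point forecasts using the common-correlation weights."""
    weights = common_correlation_weights(sigmas, rho)
    return sum(w * f for w, f in zip(weights, forecasts))
```

Two sanity checks connect this to the abstract's framing: with rho = 0 the weights reduce to inverse-variance weighting, and with equal sigmas they reduce to the simple average -- the robust simple methods the heuristic is designed to improve upon. Estimating one shared rho rather than a full correlation matrix is what limits estimation error when historical data are scarce.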