Your Thoughts
(Why) Is Misinformation a Problem?
Zoë Adams et al.
Perspectives on Psychological Science, forthcoming
Abstract:
In the last decade there has been a proliferation of research on misinformation. One important aspect of this work that receives less attention than it should is exactly why misinformation is a problem. To adequately address this question, we must first look to its speculated causes and effects. We examined different disciplines (computer science, economics, history, information science, journalism, law, media, politics, philosophy, psychology, sociology) that investigate misinformation. The consensus view points to advancements in information technology (e.g., the Internet, social media) as a main cause of the proliferation and increasing impact of misinformation, with a variety of illustrations of the effects. We critically analyzed both issues. As to the effects, harmful behaviors have not yet been reliably demonstrated empirically to be the outcome of misinformation; mistaking correlation for causation may have a hand in that perception. As to the cause, advancements in information technologies enable, as well as reveal, multitudes of interactions that represent significant deviations from ground truths through people’s new way of knowing (intersubjectivity). This, we argue, is illusory when understood in light of historical epistemology. Both doubts we raise are used to consider the cost to established norms of liberal democracy that comes from efforts to target the problem of misinformation.
Do Survey Questions Spread Conspiracy Beliefs?
Scott Clifford & Brian Sullivan
Journal of Experimental Political Science, forthcoming
Abstract:
Conspiracy theories and misinformation have become increasingly prominent in politics, and these beliefs have pernicious effects on political behavior. A prominent line of research suggests that these beliefs are promoted by repeated exposure. Yet, as scholars have rushed to understand these beliefs, they have exposed countless respondents to conspiratorial claims, raising the question of whether researchers are contributing to their spread. We investigate this possibility using a pre-registered within-subjects experiment embedded in a panel survey. The results suggest that exposure to a standard conspiracy question causes a significant increase in the likelihood of endorsing that conspiracy a week later. However, this exposure effect does not occur with a question format that offers an alternative, non-conspiratorial explanation for the target event. Thus, we recommend that researchers reduce the likelihood of spreading conspiracy beliefs by adopting a question format that asks respondents to choose between alternative explanations for an event.
Boosting the Wisdom of Crowds Within a Single Judgment Problem: Weighted Averaging Based on Peer Predictions
Asa Palley & Ville Satopää
Management Science, forthcoming
Abstract:
A combination of point estimates from multiple judges often provides a more accurate aggregate estimate than a point estimate from a single judge, a phenomenon called “the wisdom of crowds.” However, if the judges use shared information when forming their estimates, the simple average will end up overemphasizing this common component at the expense of the judges’ private information. A decision maker could in theory obtain a more accurate estimate by appropriately combining all information behind the judges’ opinions. Although this information underlies the judges’ individual estimates, it is typically unobservable and thus cannot be directly aggregated by a decision maker. In this article, we propose a weighting of judges’ individual estimates that appropriately combines their collective information within a single estimation problem. Judges are asked to provide both a point estimate of the quantity of interest and a prediction of the average estimate that will be given by all other judges. Predictions of others are then used as part of a criterion to determine weights that are applied to each judge’s estimate to form an aggregate estimate. Our weighting procedure is robust to noise in the judges’ responses and can be expressed in closed form. We use both simulation and data from a collection of experimental studies to illustrate that the weighting procedure outperforms existing methods. An R package called metaggR implements our method and is available on the Comprehensive R Archive Network.
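The elicitation described above (each judge gives a point estimate plus a prediction of the average estimate of the other judges) also feeds a simpler estimator in the same family, the "minimal pivoting" rule associated with Palley & Soll (2019). A minimal sketch of that rule, with made-up numbers, gives intuition for why peer predictions help; note this is not the weighted-averaging procedure proposed in this paper, which is implemented in the metaggR package on CRAN.

```python
# Illustrative sketch: correcting a crowd average using peer predictions.
# This is the simple "minimal pivoting" rule, not the closed-form weighted
# average proposed by Palley & Satopää. All numbers below are invented.

def simple_average(values):
    return sum(values) / len(values)

def minimal_pivoting(estimates, peer_predictions):
    """estimates: each judge's point estimate of the quantity.
    peer_predictions: each judge's prediction of the average estimate
    that the OTHER judges will give."""
    mean_estimate = simple_average(estimates)
    mean_prediction = simple_average(peer_predictions)
    # Shared (public) information inflates both the estimates and the
    # predictions of others; pivoting the crowd average away from the mean
    # prediction recovers some of the judges' private information.
    return 2 * mean_estimate - mean_prediction

# Example: the judges share an optimistic public signal, so everyone also
# expects the rest of the crowd to guess high.
estimates = [110, 120, 115, 105]
predictions = [118, 122, 120, 116]
print(simple_average(estimates))                 # 112.5
print(minimal_pivoting(estimates, predictions))  # 2*112.5 - 119.0 = 106.0
```

The pivoted estimate lands below the simple average because the judges, on average, expected the crowd to overshoot, which is evidence of a shared upward-biasing signal.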
Cultural Breadth and Embeddedness: The Individual Adoption of Organizational Culture as a Determinant of Creativity
Yoonjin Choi, Paul Ingram & Sang Won Han
Administrative Science Quarterly, forthcoming
Abstract:
We propose that individuals differ in their ability to generate creative ideas as a function of the values, beliefs, and norms of their social group’s culture they have adopted and routinely use. To generate creative ideas, an individual needs to think differently from their group, producing novel ideas that others cannot, while understanding what the group will view as appropriate and practical. We view culture as a network of cultural elements and decompose individuals’ cultural adoption into two conceptually and empirically distinct dimensions. Cultural breadth, which reflects whether individuals have adopted a broad range of values, beliefs, and norms that span the organization’s culture, contributes to the novelty required for creativity. Cultural embeddedness, which reflects whether individuals have adopted the core values, beliefs, and norms entrenched in the organization’s culture, helps an individual generate ideas that others will view as useful. We predict that individuals with both high cultural breadth and high cultural embeddedness, whom we label integrated cultural brokers, will be most likely to generate creative ideas that are novel and useful. We test and find support for our theory in two contexts: an e-commerce firm in South Korea and MBA students at a U.S. university.
When alternative hypotheses shape your beliefs: Context effects in probability judgments
Xiaohong Cai & Timothy Pleskac
Cognition, February 2023
Abstract:
When people are asked to estimate the probability of an event occurring, they sometimes make different subjective probability judgments for different descriptions of the same event. This implies the evidence or support recruited to make these judgments is based on the descriptions of the events (hypotheses) instead of the events themselves, as captured by Tversky and Koehler’s (1994) support theory. Support theory, however, assumes each hypothesis elicits a fixed level of support (support invariance). Here, across three studies, we tested this support invariance assumption by asking participants to estimate the probability that an event would occur given a set of relevant statistics. We show that the support recruited about a target hypothesis can depend on the other hypotheses under consideration. Results reveal that for a pair of competing hypotheses, one hypothesis (the target hypothesis) appears more competitive relative to the other when a dud (a hypothesis dominated by the target hypothesis) is present. We also find that the target hypothesis can appear less competitive relative to the other when a resembler (a hypothesis that is similar to the target hypothesis) is present. These context effects invalidate the support invariance assumption in support theory and suggest that a similar process that drives preference construction may also underlie belief construction.
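Support theory's baseline rule is compact enough to write down: the judged probability of a focal hypothesis is its support divided by the total support of the hypotheses under consideration. A minimal sketch, with arbitrary illustrative support values (the specific numbers are not from the paper):

```python
def judged_probability(support_focal, supports_alternatives):
    """Support theory (Tversky & Koehler, 1994): judged probability of a
    focal hypothesis = s(focal) / (s(focal) + sum of alternatives' support)."""
    return support_focal / (support_focal + sum(supports_alternatives))

# The support-invariance assumption says s(.) assigns each hypothesis a
# fixed value no matter which other hypotheses are in the comparison set;
# the studies above suggest the support recruited for a target instead
# shifts when a dud or a resembler is added. Illustrative numbers only:
s_target, s_rival, s_dud = 3.0, 2.0, 0.5
print(judged_probability(s_target, [s_rival]))         # 0.6
print(judged_probability(s_target, [s_rival, s_dud]))  # ~0.545
```

Under invariance the same fixed support values should account for judgments in every context; the context effects reported here are evidence that they do not.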
Unlocking creative potential: Reappraising emotional events facilitates creativity for conventional thinkers
Lily Yuxuan Zhu, Christopher Bauman & Maia Young
Organizational Behavior and Human Decision Processes, January 2023
Abstract:
We examine the cognitive processes that underpin emotion regulation strategies and their associations with creativity. Building on theories of emotion regulation and creative cognition, we theorize that cognitive reappraisal of emotion-eliciting events is positively associated with creativity because both involve considering new approaches or perspectives. We also predict that reappraisal experience boosts creativity for people prone to thinking conventionally. Three studies support our theory by demonstrating that reappraisal improves cognitive flexibility and enhances creativity for individuals low in openness to experience, independent of the effects of emotions on creativity. Therefore, reappraisal is an effective tool to foster creativity among conventional thinkers. More broadly, the results indicate that emotion regulation processes have downstream consequences for behavior, above and beyond their effects on emotions.
Defending humankind: Anthropocentric bias in the appreciation of AI art
Kobe Millet et al.
Computers in Human Behavior, forthcoming
Abstract:
We argue that recent advances of artificial intelligence (AI) in the domain of art (e.g., music, painting) pose a profound ontological threat to anthropocentric worldviews because they challenge one of the last frontiers of the human uniqueness narrative: artistic creativity. Four experiments (N = 1708), including a high-powered preregistered experiment, consistently reveal a pervasive bias against AI-made artworks and shed light on its psychological underpinnings. The same artwork is preferred less when labeled as AI-made (vs. human-made) because it is perceived as less creative and subsequently induces less awe, an emotional response typically associated with the aesthetic appreciation of art. These effects are more pronounced among people with stronger anthropocentric creativity beliefs (i.e., who believe that creativity is a uniquely human characteristic). Systematic depreciation of AI-made art (assignment of lower creative value, suppression of emotional reactions) appears to serve a shaken anthropocentric worldview whereby creativity is exclusively reserved for humans.
Using cognitive psychology to understand GPT-3
Marcel Binz & Eric Schulz
Proceedings of the National Academy of Sciences, 7 February 2023
Abstract:
We study GPT-3, a recent large language model, using tools from cognitive psychology. More specifically, we assess GPT-3’s decision-making, information search, deliberation, and causal reasoning abilities on a battery of canonical experiments from the literature. We find that much of GPT-3’s behavior is impressive: It solves vignette-based tasks as well as or better than human subjects, is able to make decent decisions from descriptions, outperforms humans in a multiarmed bandit task, and shows signatures of model-based reinforcement learning. Yet, we also find that small perturbations to vignette-based tasks can lead GPT-3 vastly astray, that it shows no signatures of directed exploration, and that it fails miserably in a causal reasoning task. Taken together, these results enrich our understanding of current large language models and pave the way for future investigations using tools from cognitive psychology to study increasingly capable and opaque artificial agents.
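One of the canonical tasks mentioned, the multi-armed bandit, is easy to sketch. Everything below (arm means, horizon, the epsilon-greedy agent) is an illustrative assumption, not the authors' setup. The coin-flip exploration here is "random" exploration; "directed" exploration, the signature GPT-3 reportedly lacks, would instead add an information bonus favoring less-sampled arms.

```python
import random

# Minimal sketch of a two-armed Gaussian bandit task of the kind used to
# probe decision-making. Parameters are invented for illustration.

def run_bandit(arm_means=(0.0, 1.0), n_trials=20, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    counts = [0, 0]            # pulls per arm
    value_estimates = [0.0, 0.0]
    total_reward = 0.0
    for _ in range(n_trials):
        # With probability epsilon pick an arm at random (random
        # exploration); otherwise exploit the higher-valued arm.
        if rng.random() < epsilon:
            arm = rng.randrange(2)
        else:
            arm = max(range(2), key=lambda a: value_estimates[a])
        reward = arm_means[arm] + rng.gauss(0, 1)  # noisy payoff
        counts[arm] += 1
        # Incremental sample-mean update of the chosen arm's value.
        value_estimates[arm] += (reward - value_estimates[arm]) / counts[arm]
        total_reward += reward
    return total_reward, counts

total, counts = run_bandit()
print(total, counts)
```

An agent with directed exploration would sample the uncertain arm more often early on even when its estimated value is lower; an agent like the one above explores only by chance.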