People believe misinformation is a threat because they assume others are gullible
Sacha Altay & Alberto Acerbi
New Media & Society, forthcoming
Alarmist narratives about the flow of misinformation and its negative consequences have gained traction in recent years. While these fears are to some extent warranted, the scientific literature suggests that many of them are exaggerated. Why are people so worried about misinformation? In two pre-registered surveys conducted in the United Kingdom (Study 1: N = 300; Study 2: N = 300) and replicated in the United States (Study 1: N = 302; Study 2: N = 299), we investigated the psychological factors associated with perceived danger of misinformation and how it contributes to the popularity of alarmist narratives on misinformation. We find that the strongest and most reliable predictor of perceived danger of misinformation is the third-person effect (i.e., the perception that others are more vulnerable to misinformation than the self) and, in particular, the belief that “distant” others (as opposed to family and friends) are vulnerable to misinformation. The belief that societal problems have simple solutions and clear causes was consistently, but weakly, associated with perceived danger of online misinformation. Other factors, such as negative attitudes toward new technologies and higher sensitivity to threats, were inconsistently, and weakly, associated with perceived danger of online misinformation. Finally, we found that participants who report being more worried about misinformation are more willing to like and share alarmist narratives about misinformation. Our findings suggest that fears about misinformation tap into our tendency to view other people as gullible.
Insights into the accuracy of social scientists’ forecasts of societal change
The Forecasting Collaborative
Nature Human Behaviour, forthcoming
How well can social scientists predict societal change, and what processes underlie their predictions? To answer these questions, we ran two forecasting tournaments testing the accuracy of predictions of societal change in domains commonly studied in the social sciences: ideological preferences, political polarization, life satisfaction, sentiment on social media, and gender–career and racial bias. After we provided them with historical trend data on the relevant domain, social scientists submitted pre-registered monthly forecasts for a year (Tournament 1; N = 86 teams and 359 forecasts), with an opportunity to update forecasts on the basis of new data six months later (Tournament 2; N = 120 teams and 546 forecasts). Benchmarking forecasting accuracy revealed that social scientists’ forecasts were on average no more accurate than those of simple statistical models (historical means, random walks or linear regressions) or the aggregate forecasts of a sample from the general public (N = 802). However, scientists were more accurate if they had scientific expertise in a prediction domain, were interdisciplinary, used simpler models and based predictions on prior data.
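The statistical benchmarks the tournament compared scientists against (historical means, random walks, and linear regressions) can be sketched in a few lines. This is a minimal illustration under stated assumptions: the monthly series, function names, and error metric are invented for the example and are not the Forecasting Collaborative's actual code or data.

```python
import numpy as np

# Hypothetical monthly series for some societal indicator (illustrative only).
history = np.array([6.8, 6.9, 6.7, 6.6, 6.8, 6.9, 7.0, 6.9, 6.8, 6.7, 6.8, 6.9])

def historical_mean(series, horizon):
    """Forecast every future month at the mean of the observed series."""
    return np.full(horizon, series.mean())

def random_walk(series, horizon):
    """Forecast every future month at the last observed value."""
    return np.full(horizon, series[-1])

def linear_trend(series, horizon):
    """Extrapolate a degree-1 least-squares fit to the observed series."""
    t = np.arange(len(series))
    slope, intercept = np.polyfit(t, series, 1)  # coefficients: highest degree first
    future_t = np.arange(len(series), len(series) + horizon)
    return intercept + slope * future_t

def mean_abs_error(forecast, actual):
    """Score a forecast against realized values."""
    return float(np.mean(np.abs(forecast - actual)))
```

Baselines like these are hard to beat precisely because many societal indicators drift slowly and revert toward their recent history, which is the pattern these three models capture.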
Are economists overconfident? Ideology and uncertainty in expert opinion
Austin Kozlowski & Tod Van Gunten
British Journal of Sociology, forthcoming
Economics frequently serves as an advisory discipline to policymakers, bolstered in part by its claims to a unified intellectual framework and high disciplinary consensus. Recent research challenges this perspective, providing empirical evidence that economists' professional opinions are divided by ideological commitments to either free markets, on the one hand, or state intervention, on the other. We investigate the influence of ideology in economics by examining the relationship between economists' ideological commitments and the certainty with which they express their expert opinions. To examine this relationship, we analyze data from the Initiative on Global Markets Economic Experts Panel, a unique survey of 51 economists at seven elite American universities. Our results suggest that economists with ideologically patterned views report higher levels of certainty in their opinions than their less ideologically consistent peers, but this boost in confidence is limited to topics that closely pertain to the free market versus interventionism divide.
Human heuristics for AI-generated language are flawed
Maurice Jakesch, Jeffrey Hancock & Mor Naaman
Proceedings of the National Academy of Sciences, 14 March 2023
Human communication is increasingly intermixed with language generated by AI. Across chat, email, and social media, AI systems suggest words, complete sentences, or produce entire conversations. AI-generated language is often not identified as such but presented as language written by humans, raising concerns about novel forms of deception and manipulation. Here, we study how humans discern whether verbal self-presentations, one of the most personal and consequential forms of language, were generated by AI. In six experiments, participants (N = 4,600) were unable to detect self-presentations generated by state-of-the-art AI language models in professional, hospitality, and dating contexts. A computational analysis of language features shows that human judgments of AI-generated language are hindered by intuitive but flawed heuristics such as associating first-person pronouns, use of contractions, or family topics with human-written language. We experimentally demonstrate that these heuristics make human judgment of AI-generated language predictable and manipulable, allowing AI systems to produce text perceived as “more human than human.” We discuss solutions, such as AI accents, to reduce the deceptive potential of language generated by AI, limiting the subversion of human intuition.
Remembering and forgetting information about the COVID-19 vaccine on Twitter
Ezgi Bilgin & Qi Wang
Memory, February 2023, Pages 247-258
Social media selectively re-exposes people to information they have previously encountered. We conducted two laboratory studies to examine, in a simulated online context, the phenomenon of retrieval-induced forgetting, whereby information reposted on social media is likely to be remembered later while related but not reposted information may be forgotten. Specifically, we examined how exposure to selective information about the COVID-19 vaccine via tweets affected subsequent memory and whether people’s attitudes towards vaccination played a role in their memory for the information. Young adults (N = 119; Study 1) and community members (N = 92; Study 2) were presented with information about the COVID-19 vaccine that included both pro- and anti-vaccine arguments, organised in four categories (i.e., science, children, religion, morality). They then read tweets that repeated half of the arguments from two of the categories. In a subsequent memory test, participants remembered best the statements repeated in the tweets and remembered worst the statements from the same categories that were not repeated in the tweets, thus exhibiting retrieval-induced forgetting. This pattern of results was similar across pro- and anti-vaccine arguments, regardless of the participants’ level of support for vaccination. We discuss the findings in light of remembering and forgetting in the context of the pandemic and social media.
Complexity and Time
Benjamin Enke, Thomas Graeber & Ryan Oprea
NBER Working Paper, March 2023
We provide experimental evidence that core intertemporal choice anomalies (including extreme short-run impatience, structural estimates of present bias, hyperbolicity, and transitivity violations) are driven by complexity rather than time or risk preferences. First, all anomalies also arise in structurally similar atemporal decision problems involving the valuation of iteratively discounted (but immediately paid) rewards. These computational errors are strongly predictive of intertemporal decisions. Second, intertemporal choice anomalies are highly correlated with indices of complexity responses, including cognitive uncertainty and choice inconsistency. We show that model misspecification resulting from ignoring behavioral responses to complexity severely inflates structural estimates of present bias.
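For context, "present bias" in this literature is conventionally parameterized with the beta-delta (quasi-hyperbolic) model, in which immediate rewards escape the extra discount factor beta. A minimal sketch, with illustrative parameter values that are not estimates from the paper:

```python
def exponential_value(reward, delay, delta=0.9):
    """Standard exponential discounting: V = delta**t * x."""
    return delta ** delay * reward

def quasi_hyperbolic_value(reward, delay, beta=0.7, delta=0.9):
    """Beta-delta (present-bias) discounting: immediate rewards (t = 0)
    are undiscounted; all delayed rewards carry the extra factor beta."""
    if delay == 0:
        return reward
    return beta * delta ** delay * reward
```

The paper's point is that apparent gaps between these two valuations can reflect computational error on complex problems rather than a genuine taste for immediacy, so fitting beta without modeling complexity responses overstates present bias.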
The wisdom of many in few: Finding individuals who are as wise as the crowd
Mark Himmelstein, David Budescu & Emily Ho
Journal of Experimental Psychology: General, forthcoming
Is forecasting ability a stable trait? While domain knowledge and reasoning abilities are necessary for making accurate forecasts, research shows that knowing how accurate forecasters have been in the past is the best predictor of future accuracy. However, unlike the measurement of other traits, evaluating forecasting skill requires substantial time investment. Forecasters must make predictions about events that may not resolve for many days, weeks, months, or even years into the future before their accuracy can be estimated. Our work builds upon methods such as cultural consensus theory and proxy scoring rules to show that talented forecasters can be identified in real time, without requiring any event resolutions. We define a peer similarity-based intersubjective evaluation method and test its utility in a unique longitudinal forecasting experiment. Because forecasters predicted all events at the same points in time, many of the confounds common to forecasting tournaments or observational data were eliminated. This allowed us to demonstrate the effectiveness of our method in real time, as time progressed and more information about forecasters became available. Intersubjective accuracy scores, which can be obtained immediately after the forecasts are made, were both valid and reliable estimators of forecasting talent. We also found that asking forecasters to make meta-predictions about what they expect others to believe can serve as an incentive-compatible method of intersubjective evaluation. Our results indicate that selecting small groups of forecasters, or even individual forecasters, based on intersubjective accuracy can yield subsequent forecasts that approximate the actual accuracy of much larger crowd aggregates.
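The core idea of a peer similarity-based score can be illustrated with a toy version: score each forecaster by how close their probability forecasts sit to the leave-one-out crowd mean, which is available the moment forecasts are submitted. This is a simplified sketch of the general approach, not the authors' actual intersubjective scoring rule.

```python
import numpy as np

def intersubjective_scores(forecasts):
    """forecasts: (n_forecasters, n_events) array of probability forecasts.
    Returns one score per forecaster: negative mean squared distance to the
    leave-one-out crowd mean (higher = closer to peers). No event outcomes
    are needed, so scores are available before anything resolves."""
    forecasts = np.asarray(forecasts, dtype=float)
    n = forecasts.shape[0]
    total = forecasts.sum(axis=0)
    scores = np.empty(n)
    for i in range(n):
        crowd = (total - forecasts[i]) / (n - 1)  # peers' mean, excluding i
        scores[i] = -np.mean((forecasts[i] - crowd) ** 2)
    return scores
```

For example, with three forecasters on two events, `[[0.5, 0.5], [0.5, 0.5], [0.9, 0.1]]`, the outlier in the third row receives the lowest score. The bet such methods make, supported by the paper's longitudinal results, is that proximity to the crowd consensus is a valid proxy for eventual accuracy.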
Understanding Why Searching the Internet Inflates Confidence in Explanatory Ability
Emmaline Drew Eliseev & Elizabeth Marsh
Applied Cognitive Psychology, forthcoming
People rely on the internet for easy access to information, setting up potential confusion about the boundaries between an individual's knowledge and the information they find online. Across four experiments, we replicated and extended past work showing that online searching inflates people's confidence in their knowledge. Participants who searched the internet for explanations rated their explanatory ability higher than participants who read but did not search for the same explanations. Two experiments showed that extraneous web page content (pictures) does not drive this effect. The last experiment modeled how search engines yield results; participants saw (but did not search for) a list of hits, which included “snippets” that previewed web page content, before reading the explanations. Participants in this condition were as confident as participants who searched online. Previewing hits primes to-be-read content, in a modern-day equivalent of Titchener's (1921) example of a brief glance eliciting false feelings of familiarity.
“False Advertising, Fact-Checked”: Examining How Social Identification Affects Fact-Checking of False Advertisements
Greg Song, Natalie Brown-Devlin & Won-Ki Moon
University of Texas Working Paper, January 2023
For the first time, this study examines the application of existing fact-checking technology to the novel context of false advertising on social media. Using a social identity framework, the current study investigates two understudied consumer groups: vegans and vegetarians (Study 1) and moms (Study 2). The findings indicate that introducing a fact-checking tool on social media to alert consumers when an advertisement contains false information is beneficial: when notified by a fact-checking tool, consumers come to perceive the advertisement as deceptive. The crux of this study, however, is that a fact-checking feature backfires when consumers view persuasive content accentuating their social identity in-group characteristics. Even when consumers are aware that the advertisement contains false information, they continue to regard it as credible when it emphasizes the distinctive traits of their in-group. This study contributes to the literature on targeted advertising, social identity, message credibility, cognitive dissonance, and false information. Furthermore, the findings can guide managerial decisions about the use of fact-checking features in social media advertising.
Superhuman artificial intelligence can improve human decision-making by increasing novelty
Minkyu Shin et al.
Proceedings of the National Academy of Sciences, 21 March 2023
How will superhuman artificial intelligence (AI) affect human decision-making? And what will be the mechanisms behind this effect? We address these questions in a domain where AI already exceeds human performance, analyzing more than 5.8 million move decisions made by professional Go players over the past 71 years (1950 to 2021). To address the first question, we use a superhuman AI program to estimate the quality of human decisions across time, generating 58 billion counterfactual game patterns and comparing the win rates of actual human decisions with those of counterfactual AI decisions. We find that humans began to make significantly better decisions following the advent of superhuman AI. We then examine human players’ strategies across time and find that novel decisions (i.e., previously unobserved moves) occurred more frequently and became associated with higher decision quality after the advent of superhuman AI. Our findings suggest that the development of superhuman AI programs may have prompted human players to break away from traditional strategies and induced them to explore novel moves, which in turn may have improved their decision-making.