Marketing Strategy
Lower Artificial Intelligence Literacy Predicts Greater AI Receptivity
Stephanie Tully, Chiara Longoni & Gil Appel
Journal of Marketing, forthcoming
Abstract:
As artificial intelligence (AI) transforms society, understanding the factors that influence AI receptivity is increasingly important. The current research investigates which types of consumers are more receptive to AI. Contrary to expectations revealed in four surveys, cross-country data and six additional studies find that people with lower AI literacy are typically more receptive to AI. This lower literacy-greater receptivity link is not explained by differences in perceptions of AI's capability, ethicality, or feared impact on humanity. Instead, it arises because people with lower AI literacy are more likely to perceive AI as magical and to experience feelings of awe when AI executes tasks that seem to require uniquely human attributes. In line with this theorizing, the lower literacy-greater receptivity link is mediated by perceptions of AI as magical and is attenuated for tasks not assumed to require distinctly human attributes. These findings suggest that companies may benefit from shifting their marketing efforts and product development toward consumers with lower AI literacy. Additionally, efforts to demystify AI may inadvertently reduce its appeal, indicating that maintaining an aura of magic around AI could be beneficial for adoption.
"Or They Could Just Not Use It?": The Dilemma of AI Disclosure for Audience Trust in News
Benjamin Toff & Felix Simon
International Journal of Press/Politics, forthcoming
Abstract:
The adoption of artificial intelligence (AI) technologies in the production and distribution of news has generated theoretical, normative, and practical concerns around the erosion of journalistic authority and autonomy and the spread of misinformation. With trust in news already low in many places worldwide, both scholars and practitioners are wary of how the public will respond to news generated through automated methods, prompting calls for labeling of AI-generated content. In this study, we present results from a novel survey experiment conducted using actual AI-generated journalistic content. We test whether audiences in the United States, where trust is particularly polarized along partisan lines, perceive news labeled as AI-generated as more or less trustworthy. We find that, on average, audiences perceive news labeled as AI-generated as less trustworthy, not more, even when the articles themselves are not evaluated as any less accurate or any more unfair. Furthermore, these effects are largely concentrated among those with higher preexisting levels of trust in news and among those with greater knowledge about journalism. We also find that the negative effects on perceived trustworthiness are largely counteracted when articles disclose the list of sources used to generate the content. As news organizations increasingly look toward adopting AI technologies in their newsrooms, our results hold implications for how disclosure about these techniques may contribute to or further undermine audience confidence in the institution of journalism at a time when its standing with the public is especially tenuous.
Where A/B Testing Goes Wrong: How Divergent Delivery Affects What Online Experiments Cannot (and Can) Tell You About How Customers Respond to Advertising
Michael Braun & Eric Schwartz
Journal of Marketing, forthcoming
Abstract:
Marketers use online advertising platforms to compare user responses to different ad content. But platforms' experimentation tools deliver each ad to a distinct, undetectably optimized mix of users, even during the test. Because exposure to ads in the test is nonrandom, the estimated comparisons confound the effect of the ad content with the effect of algorithmic targeting. This means that experimenters may not be learning what they think they are learning from ad A/B tests. The authors document these "divergent delivery" patterns during an online experiment for the first time. They explain how algorithmic targeting, user heterogeneity, and data aggregation conspire to confound the magnitude, and even the sign, of ad A/B test results. Analytically, the authors extend the potential outcomes model of causal inference to treat random assignment of ads and user exposure to ads as separate experimental design elements. Managerially, the authors explain why platforms lack incentives to allow experimenters to untangle the effects of ad content from proprietary algorithmic selection of users when running A/B tests. Given that experimenters have diverse reasons for comparing user responses to ads, the authors offer tailored prescriptive guidance to experimenters based on their specific goals.
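The confound described here can be illustrated with a toy simulation (the segments, click propensities, and targeting probabilities below are illustrative assumptions, not the authors' data or model): two ads with identical true content effects can show very different observed response rates once the platform delivers them to different mixes of users.

```python
import random

random.seed(0)

# Two user segments with different baseline click propensities.
# Ad content has zero true effect: neither ad is actually better.
def click_prob(segment, ad):
    return 0.10 if segment == "low" else 0.30

# Divergent delivery: the platform's optimizer shows ad A mostly to
# high-propensity users and ad B mostly to low-propensity users.
def deliver(ad):
    p_high = 0.8 if ad == "A" else 0.2
    return "high" if random.random() < p_high else "low"

N = 100_000
rates = {}
for ad in ("A", "B"):
    clicks = sum(random.random() < click_prob(deliver(ad), ad) for _ in range(N))
    rates[ad] = clicks / N

# Ad A appears to "win" (~0.26 vs ~0.14) even though content effects are
# identical; the gap is entirely algorithmic targeting, not ad content.
print(rates)
```

The naive A/B comparison here measures the delivery mix, not the creative, which is the sense in which assignment of ads and exposure to ads are separate design elements.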
When corporate silence is costly: Negative consumer responses to corporate silence on social issues
Marco Shaojun Qin et al.
Strategic Management Journal, forthcoming
Abstract:
The growth of corporate activism on contentious social issues creates a puzzle as to why companies would risk engaging on divisive topics. Indeed, a mixed body of evidence indicates that such activism often reduces stakeholder support. We shed light on this puzzle by turning attention to the costs of not engaging in corporate activism. Grounded in the cognitive model of stakeholder behavior, we theorize about whether and when consumers respond negatively to corporate silence on a social issue, based on the visibility of that silence. Our theory also suggests that peer activism and market niche are pivotal contingencies that exacerbate or mitigate such negative responses. Using a rigorous within-company, cross-platform difference-in-differences econometric model, we find support for our theory and uncover substantial costs of corporate inaction.
Competition and Diversity in Generative AI
Manish Raghavan
MIT Working Paper, December 2024
Abstract:
Recent evidence suggests that the use of generative artificial intelligence reduces the diversity of content produced. In this work, we develop a game-theoretic model to explore the downstream consequences of content homogeneity when producers use generative AI to compete with one another. At equilibrium, players indeed produce content that is less diverse than optimal. However, stronger competition mitigates homogeneity and induces more diverse production. Perhaps more surprisingly, we show that a generative AI model that performs well in isolation (i.e., according to a benchmark) may fail to do so when faced with competition, and vice versa. We validate our results empirically by using language models to play Scattergories, a word game in which players are rewarded for producing answers that are both correct and unique. We discuss how the interplay between competition and homogeneity has implications for the development, evaluation, and use of generative AI.
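The paper's empirical validation rewards answers that are both correct and unique. A minimal sketch of such a scoring rule (the category, answers, and player names are hypothetical, not taken from the paper):

```python
from collections import Counter

def scattergories_scores(answers, valid):
    """Score one round: a player earns a point only if their answer is
    valid for the category and no other player gave the same answer."""
    counts = Counter(a for a in answers.values() if a in valid)
    return {p: int(a in valid and counts[a] == 1) for p, a in answers.items()}

# Hypothetical round: category "fruits starting with P".
valid = {"peach", "pear", "plum", "papaya"}
answers = {
    "model_1": "peach",   # correct but duplicated -> 0
    "model_2": "peach",   # correct but duplicated -> 0
    "model_3": "papaya",  # correct and unique     -> 1
    "model_4": "potato",  # not in the category    -> 0
}
print(scattergories_scores(answers, valid))
```

Under this payoff, homogeneous output is directly penalized: models that converge on the same high-probability answer split zero points, which is why the game serves as a testbed for the competition-diversity tradeoff.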
The Sound of Status: Product Volume as a Status Signal of Dominance or Prestige
Michael Lowe et al.
Journal of Marketing Research, forthcoming
Abstract:
Prior research on status signaling reveals that consumers may select subtle or overt signals to convey their social status. In this research, the authors build on this insight in the context of auditory signaling and show that consumers use product volume as a status signal. Using the Dominance-Prestige Account of Rank Allocation framework, the authors propose that the path consumers take to achieve status will influence their choice of loud or quiet products. A combination of quantitative models, a lab experiment, and a conjoint study demonstrates that consumers pursuing status via dominance are more likely to choose relatively louder products, while those who pursue a prestige path are more likely to choose relatively quieter products. The authors further demonstrate a boundary condition for prestige-driven consumers: the presence of other signals of social rank, such as education or profession. When no other markers of social rank are present, prestige-driven consumers reduce their preference for quiet signals (Study 4).