Findings

Managing to Succeed

Kevin Lewis

April 17, 2025

Promotion Decisions and the Adoption of Explicit Potential Assessment
Isabella Grabner, Judith Künneke & Frank Moers
Management Science, forthcoming

Abstract:
In this study, we leverage the transformation of a performance management system that shifts from only providing a performance rating to also providing an explicit assessment of potential. Although performance measures enable organizations to evaluate an employee’s past performance, they provide limited information regarding the employee’s potential to perform in a prospective job that requires a different skill set. Consequently, firms are gradually moving toward the incorporation of explicit potential assessments in their annual appraisal process to evaluate the employee’s promotability to a different task environment. Our data access allows us to provide evidence on the consequences of implementing a performance management system that adopted the explicit assessment of potential. We find that, on average, the performance of promoted employees is lower after implementation, suggesting that the potential assessment system is less effective in identifying candidates suitable for promotion. Additional analyses lead us to conclude that the difficulty of evaluating employee potential reduces performance upon promotion because of inaccurate recommendations of supervisors who do not sufficiently differentiate in their ratings. We thus identify variation in supervisors’ evaluation quality as an important source of the Peter Principle.


Measuring Human Leadership Skills with AI Agents
Ben Weidmann, Yixian Xu & David Deming
NBER Working Paper, April 2025

Abstract:
We show that leadership skill with artificially intelligent (AI) agents predicts leadership skill with human groups. In a large pre-registered lab experiment, human leaders worked with AI agents to solve problems. Their performance on this “AI leadership test” was strongly correlated (ρ=0.81) with their causal impact as leaders of human teams, which we estimate by repeatedly randomly assigning leaders to groups of human followers and measuring team performance. Successful leaders of both humans and AI agents ask more questions and engage in more conversational turn-taking; they score higher on measures of social intelligence, fluid intelligence, and decision-making skill, but do not differ in gender, age, ethnicity or education. Our findings indicate that AI agents can be effective proxies for human participants in social experiments, which greatly simplifies the measurement of leadership and teamwork skills.


The Cybernetic Teammate: A Field Experiment on Generative AI Reshaping Teamwork and Expertise
Fabrizio Dell'Acqua et al.
NBER Working Paper, April 2025

Abstract:
We examine how artificial intelligence transforms the core pillars of collaboration -- performance, expertise sharing, and social engagement -- through a pre-registered field experiment with 776 professionals at Procter & Gamble, a global consumer packaged goods company. Working on real product innovation challenges, professionals were randomly assigned to work either with or without AI, and either individually or with another professional in new product development teams. Our findings reveal that AI significantly enhances performance: individuals with AI matched the performance of teams without AI, demonstrating that AI can effectively replicate certain benefits of human collaboration. Moreover, AI breaks down functional silos. Without AI, R&D professionals tended to suggest more technical solutions, while Commercial professionals leaned towards commercially-oriented proposals. Professionals using AI produced balanced solutions, regardless of their professional background. Finally, AI’s language-based interface prompted more positive self-reported emotional responses among participants, suggesting it can fulfill part of the social and motivational role traditionally offered by human teammates. Our results suggest that AI adoption at scale in knowledge work reshapes not only performance but also how expertise and social connectivity manifest within teams, compelling organizations to rethink the very structure of collaborative work.


Collaborating with AI Agents: Field Experiments on Teamwork, Productivity, and Performance
Harang Ju & Sinan Aral
MIT Working Paper, March 2025

Abstract:
To uncover how AI agents change productivity, performance, and work processes, we introduce MindMeld: an experimentation platform enabling humans and AI agents to collaborate in integrative workspaces. In a large-scale marketing experiment on the platform, 2310 participants were randomly assigned to human-human and human-AI teams, with randomized AI personality traits. The teams exchanged 183,691 messages, and created 63,656 image edits, 1,960,095 ad copy edits, and 10,375 AI-generated images while producing 11,138 ads for a large think tank. Analysis of fine-grained communication, collaboration, and workflow logs revealed that collaborating with AI agents increased communication by 137% and allowed humans to focus 23% more on text and image content generation messaging and 20% less on direct text editing. Humans on Human-AI teams sent 23% fewer social messages, creating 60% greater productivity per worker and higher-quality ad copy. In contrast, human-human teams produced higher-quality images, suggesting that AI agents require fine-tuning for multimodal workflows. AI personality prompt randomization revealed that AI traits can complement human personalities to enhance collaboration. For example, conscientious humans paired with open AI agents improved image quality, while extroverted humans paired with conscientious AI agents reduced the quality of text, images, and clicks. In field tests of ad campaigns with ~5M impressions, ads with higher image quality produced by human collaborations and higher text quality produced by AI collaborations performed significantly better on click-through rate and cost per click metrics. Overall, ads created by human-AI teams performed similarly to those created by human-human teams. Together, these results suggest AI agents can improve teamwork and productivity, especially when tuned to complement human traits.


(Inaccurate) Beliefs about Skill Decay
Daniel Connolly, Samantha Horn & George Loewenstein
Carnegie Mellon University Working Paper, August 2024

Abstract:
Across five controlled experiments, we investigate the accuracy of beliefs about skill decay. Participants consistently underestimated their own skill decay by 28% to 59% across tasks. Even after directly experiencing skill decay, participants continued to underpredict its extent. We identify two mechanisms driving this underestimation: First, participants were more accurate in predicting others' skill decline than their own, suggesting ego-based motivations are at play. Second, both subgroup heterogeneity and variable importance analyses reveal an underappreciation of the adverse impact of age on skill decay. Together, these findings suggest systematic misjudgments of skill retention, with implications for human capital investment decisions.


Maintaining cooperation through vertical communication of trust when removing sanctions
Ann-Christin Posten et al.
Proceedings of the National Academy of Sciences, 25 March 2025

Abstract:
An effective way to foster cooperation is to monitor behavior and sanction freeriding. Yet, previous studies have shown that cooperation quickly declines when sanctioning mechanisms are removed. We test whether explicitly expressing trust in players’ capability to maintain cooperation after the removal of sanctions, i.e., vertical communication of trust, has the potential to alleviate this drop in compliance. Four incentivized public-goods experiments (N = 2,823) find that the vertical communication of trust maintains cooperation upon the removal of centralized (Study 1), third-party (Study 2a, 2b), and peer punishment (Study 3), and this effect extends beyond single interactions (Study 4). In all studies, vertical trust communication increases mutual trust among players, providing support to the idea that vertically communicating trust can be a self-fulfilling prophecy. Extrapolating our findings to natural environments, they suggest that authorities should carefully consider how they communicate the lifting of rules and sanctions.


Are Donors Watching? Nonprofit Rating Availability and Pay-to-Performance Sensitivity
Chen Zhao & Richard Dull
Journal of Business Ethics, April 2025, Pages 855-872

Abstract:
CEO compensation by nonprofit organizations is controversial. Higher-qualified CEOs should be compensated more than lesser-qualified individuals because of better performance regarding organizational goals and missions. Alternatively, an ethical issue may exist if CEOs are overcompensated resulting in a negative impact on the operations of their organizations. Donors have the incentive to monitor nonprofit organizations, but their role is limited to their ability to acquire nonprofit organization information. However, charity rating agencies make information more accessible and understandable, thus reducing information asymmetry between donors and nonprofit organizations. This study examines whether charity rating availability is associated with negative pay-to-performance sensitivity. Using a sample derived from IRS Form 990s and Charity Navigator ratings, this study provides evidence that rating availability is negatively related to pay-to-performance sensitivity in nonprofit organizations. Additional tests provide evidence that the overall rating score and its financial rating component are negatively associated with pay-to-performance sensitivity.

