Publish or perish
Networks and Productivity: Causal Evidence from Editor Rotations
Jonathan Brogaard, Joseph Engelberg & Christopher Parsons
Journal of Financial Economics, January 2014, Pages 251-270
Abstract:
Using detailed publication and citation data for over 50,000 articles from 30 major economics and finance journals, we investigate whether network proximity to an editor influences research productivity. During an editor's tenure, his current university colleagues publish about 100% more papers in the editor's journal, compared to years when he is not editor. Inconsistent with editorial nepotism, such "inside" articles have significantly higher ex post citation counts, even when same-journal and self-cites are excluded. Our results thus suggest that despite potential conflicts of interest faced by editors, personal associations are used to improve selection decisions.
----------------------
Prizes and Productivity: How Winning the Fields Medal Affects Scientific Output
George Borjas & Kirk Doran
NBER Working Paper, September 2013
Abstract:
Knowledge generation is key to economic growth, and scientific prizes are designed to encourage it. But how does winning a prestigious prize affect future output? We compare the productivity of Fields medalists (winners of the top mathematics prize) to that of similarly brilliant contenders. The two groups have similar publication rates until the award year, after which the winners' productivity declines. The medalists begin to "play the field," studying unfamiliar topics at the expense of writing papers. It appears that tournaments can have large post-prize effects on the effort allocation of knowledge producers.
----------------------
Just Post It: The Lesson From Two Cases of Fabricated Data Detected by Statistics Alone
Uri Simonsohn
Psychological Science, October 2013, Pages 1875-1888
Abstract:
I argue that requiring authors to post the raw data supporting their published results has the benefit, among many others, of making fraud much less likely to go undetected. I illustrate this point by describing two cases of suspected fraud I identified exclusively through statistical analysis of reported means and standard deviations. Analyses of the raw data behind these published results provided invaluable confirmation of the initial suspicions, ruling out benign explanations (e.g., reporting errors, unusual distributions), identifying additional signs of fabrication, and also ruling out one of the suspected fraudster's explanations for his anomalous results. If journals, granting agencies, universities, or other entities overseeing research promoted or required data posting, it seems inevitable that fraud would be reduced.
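The kind of check the abstract alludes to can be sketched in a few lines (a simplified illustration, not Simonsohn's exact procedure; the reported cell SDs below are hypothetical): simulate honest experiments and ask whether the standard deviations reported across independent conditions are more similar to one another than sampling variation would normally allow.

    import numpy as np

    rng = np.random.default_rng(0)

    def similarity_pvalue(reported_sds, n_per_cell, n_sims=20_000):
        # Share of simulated honest experiments whose condition SDs are at least
        # as similar (lower spread) as the reported ones, assuming normal data
        # with a common population SD equal to the pooled reported SD.
        reported_sds = np.asarray(reported_sds, dtype=float)
        observed_spread = reported_sds.std()
        pooled_sd = np.sqrt((reported_sds ** 2).mean())
        k = len(reported_sds)
        sims = rng.normal(0.0, pooled_sd, size=(n_sims, k, n_per_cell))
        sim_spreads = sims.std(axis=2, ddof=1).std(axis=1)
        return float((sim_spreads <= observed_spread).mean())

    # Hypothetical reported cell SDs from three conditions of n = 15 each;
    # a tiny value flags SDs that are "too similar" to be chance.
    print(similarity_pvalue([2.01, 2.02, 2.00], n_per_cell=15))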
----------------------
The Retraction Penalty: Catastrophe and Consequence in Scientific Teams
Ginger Zhe Jin et al.
NBER Working Paper, October 2013
Abstract:
What are the individual rewards to working in teams? This question extends across many production settings but is of long-standing interest in science and innovation, where the "Matthew Effect" suggests that eminent team members garner credit for great works at the expense of less eminent team members. In this paper, we study this question in reverse, examining highly negative events - article retractions. Using the Web of Science, we investigate how retractions affect citations to the authors' prior publications. We find that the Matthew Effect works in reverse - namely, scientific misconduct imposes little citation penalty on eminent coauthors. By contrast, less eminent coauthors face substantial citation declines to their prior work, especially when they are teamed with an eminent author. A simple Bayesian model is used to interpret the results. These findings suggest that a good reputation can have protective properties, but at the expense of those with less established reputations.
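The "simple Bayesian model" intuition can be sketched with made-up numbers (an illustration of Bayesian updating only, not the paper's actual model): the same retraction moves beliefs about a little-known coauthor far more than beliefs about an eminent one, because the eminent author's long track record acts as a strong prior.

    def posterior_good(prior_good, p_retract_if_good=0.01, p_retract_if_bad=0.20):
        # P(author is a careful "good type" | one retraction observed), by Bayes' rule.
        numerator = prior_good * p_retract_if_good
        denominator = numerator + (1 - prior_good) * p_retract_if_bad
        return numerator / denominator

    # Hypothetical priors: readers start far more confident in the eminent author.
    for label, prior in [("eminent coauthor", 0.99), ("junior coauthor", 0.80)]:
        print(label, round(posterior_good(prior), 2))  # prints 0.83 vs. 0.17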
----------------------
US studies may overestimate effect sizes in softer research
Daniele Fanelli & John Ioannidis
Proceedings of the National Academy of Sciences, 10 September 2013, Pages 15031-15036
Abstract:
Many biases affect scientific research, causing a waste of resources, posing a threat to human health, and hampering scientific progress. These problems are hypothesized to be worsened by lack of consensus on theories and methods, by selective publication processes, and by career systems too heavily oriented toward productivity, such as those adopted in the United States (US). Here, we extracted 1,174 primary outcomes appearing in 82 meta-analyses published in health-related biological and behavioral research sampled from the Web of Science categories Genetics & Heredity and Psychiatry and measured how individual results deviated from the overall summary effect size within their respective meta-analysis. We found that primary studies whose outcome included behavioral parameters were generally more likely to report extreme effects, and those with a corresponding author based in the US were more likely to deviate in the direction predicted by their experimental hypotheses, particularly when their outcome did not include additional biological parameters. Nonbehavioral studies showed no such "US effect" and were subject mainly to sampling variance and small-study effects, which were stronger for non-US countries. Although this latter finding could be interpreted as a publication bias against non-US authors, the US effect observed in behavioral research is unlikely to be generated by editorial biases. Behavioral studies have lower methodological consensus and higher noise, making US researchers potentially more likely to express an underlying propensity to report strong and significant findings.
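The core measurement can be sketched with hypothetical numbers (an assumed simplification of the authors' procedure): each primary outcome is compared with the summary effect of its own meta-analysis, signed so that positive deviations point in the direction the study's hypothesis predicted.

    def signed_deviation(effect, summary_effect, hypothesized_sign=1):
        # Distance of a primary effect from its meta-analytic summary, oriented
        # so that positive values mean "further in the hypothesized direction".
        return hypothesized_sign * (effect - summary_effect)

    # Hypothetical meta-analysis with summary effect d = 0.20:
    print([round(signed_deviation(d, 0.20), 2) for d in (0.45, 0.10, 0.60, -0.05)])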
----------------------
The life of p: 'Just significant' results are on the rise
Nathan Leggett et al.
Quarterly Journal of Experimental Psychology, December 2013, Pages 2303-2309
Abstract:
Null hypothesis significance testing uses the seemingly arbitrary probability of .05 as a means of objectively determining if a tested effect is reliable. Within recent psychological articles, research has found an over-representation of p values around this cut-off. The present study examined whether this over-representation is a product of recent pressure to publish or if it has existed throughout psychological research. Articles published in 1965 and 2005 from two prominent psychology journals were examined. Consistent with previous research, the frequency of p values at and just below .05 was greater than expected relative to p frequencies in other ranges. While this over-representation was found for values published in both 1965 and 2005, it was much greater in 2005. Additionally, p values close to but above .05 were more likely to be rounded down to, or incorrectly reported as, significant in 2005 compared to 1965. Modern statistical software and an increased pressure to publish may explain this pattern. The problem may be alleviated by reduced reliance on p values and increased reporting of confidence intervals and effect sizes.
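The over-representation the authors describe can be checked with a short tally (a minimal sketch, assuming reported p values have already been extracted from articles; the sample below is hypothetical): bin the values narrowly and compare the count just below .05 with neighboring bins.

    from collections import Counter

    def bin_pvalues(pvalues, width=0.005, lo=0.01, hi=0.10):
        # Count reported p values in [lo, hi) using bins of the given width.
        counts = Counter()
        for p in pvalues:
            if lo <= p < hi:
                # small epsilon guards against floating-point edge cases
                index = int((p - lo) / width + 1e-9)
                counts[round(lo + width * index, 3)] += 1
        return dict(sorted(counts.items()))

    # Hypothetical reported p values; a spike in the .045-.050 bin is the
    # pattern the study documents growing between 1965 and 2005.
    sample = [0.012, 0.030, 0.044, 0.045, 0.046, 0.047, 0.048, 0.049, 0.051, 0.070]
    print(bin_pvalues(sample))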
----------------------
Does Knowledge Accumulation Increase the Returns to Collaboration?
Ajay Agrawal, Avi Goldfarb & Florenta Teodoridis
NBER Working Paper, December 2013
Abstract:
We conduct the first empirical test of the knowledge burden hypothesis, one of several theories advanced to explain increasing team sizes in science. For identification, we exploit the collapse of the USSR as an exogenous shock to the knowledge frontier that suddenly released previously hidden research. We report evidence that team size increased disproportionately in Soviet-rich relative to Soviet-poor subfields of theoretical mathematics after 1990. Furthermore, consistent with the hypothesized mechanism, scholars in Soviet-rich subfields disproportionately increased citations to Soviet prior art and became increasingly specialized.
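The identification strategy amounts to a difference-in-differences comparison, sketched here with made-up numbers (the figures below are hypothetical, not the paper's estimates): the post-1990 change in average team size in Soviet-rich subfields net of the corresponding change in Soviet-poor subfields.

    def diff_in_diff(rich_pre, rich_post, poor_pre, poor_post):
        # Change in the treated group minus the change in the control group.
        return (rich_post - rich_pre) - (poor_post - poor_pre)

    # Hypothetical average authors per paper before and after 1990:
    print(round(diff_in_diff(rich_pre=1.40, rich_post=1.70,
                             poor_pre=1.45, poor_post=1.55), 2))
    # 0.2 extra coauthors per paper attributed to the sudden knowledge influx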
----------------------
Atypical Combinations and Scientific Impact
Brian Uzzi et al.
Science, 25 October 2013, Pages 468-472
Abstract:
Novelty is an essential feature of creative ideas, yet the building blocks of new ideas are often embodied in existing knowledge. From this perspective, balancing atypical knowledge with conventional knowledge may be critical to the link between innovativeness and impact. Our analysis of 17.9 million papers spanning all scientific fields suggests that science follows a nearly universal pattern: The highest-impact science is primarily grounded in exceptionally conventional combinations of prior work yet simultaneously features an intrusion of unusual combinations. Papers of this type were twice as likely to be highly cited works. Novel combinations of prior work are rare, yet teams are 37.7% more likely than solo authors to insert novel combinations into familiar knowledge domains.
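The notion of "combinations of prior work" can be sketched in simplified form (an assumed toy version; the paper's own measure scores journal pairs with z-scores against randomized citation networks): count how often each pair of journals is co-cited across a corpus, then summarize a focal paper by its most conventional and its rarest pair.

    from collections import Counter
    from itertools import combinations

    def pair_counts(reference_lists):
        # How often each pair of journals appears together in a reference list.
        counts = Counter()
        for refs in reference_lists:
            for pair in combinations(sorted(set(refs)), 2):
                counts[pair] += 1
        return counts

    def novelty_profile(refs, counts):
        # A paper mixing mostly common pairs with a few rare ones matches the
        # "conventional core plus an intrusion of novelty" pattern.
        freqs = sorted(counts[p] for p in combinations(sorted(set(refs)), 2))
        return {"most_conventional_pair_count": freqs[-1],
                "rarest_pair_count": freqs[0]}

    # Hypothetical corpus of reference lists (journals cited by each paper):
    corpus = [["J Finance", "JFE", "RFS"], ["J Finance", "JFE", "Science"],
              ["JFE", "RFS", "J Finance"]]
    counts = pair_counts(corpus)
    print(novelty_profile(["J Finance", "JFE", "Science"], counts))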
----------------------
Editorial Decisions May Perpetuate Belief in Invalid Research Findings
Kimmo Eriksson & Brent Simpson
PLoS ONE, September 2013
Abstract:
Social psychology and related disciplines are seeing a resurgence of interest in replication, as well as actual replication efforts. But prior work suggests that even a clear demonstration that a finding is invalid often fails to shake acceptance of the finding. This threatens the full impact of these replication efforts. Here we show that the actions of two key players - journal editors and the authors of original (invalidated) research findings - are critical to the broader public's continued belief in an invalidated research conclusion. Across three experiments, we show that belief in an invalidated finding falls sharply when a critical failed replication is published in the same - versus different - journal as the original finding, and when the authors of the original finding acknowledge that the new findings invalidate their conclusions. We conclude by discussing policy implications of our key findings.
----------------------
Editorial Bias in Legal Academia
Albert Yoon
Journal of Legal Analysis, Winter 2013, Pages 309-338
Abstract:
In academia, journals serve as a proxy for quality, where prestigious journals are presumed to publish articles of higher quality than their less prestigious counterparts. Concerns over editorial bias in selecting articles, however, challenge this claim. This article develops a framework for evaluating this bias in legal academia, examining over 25,000 articles from nearly 200 general interest law reviews. Examining published articles in law reviews - the dominant venue for scholarship - and subsequent citations to these articles, we find that, with few exceptions, law reviews publish more articles from faculty at their own institution than from faculty at other law schools. Law review publications of their own faculty are cited less frequently than publications of outside faculty. This disparity is more pronounced among higher-ranked law reviews, but occurs across the entire distribution of journals. We correspondingly find that law faculty publish their lesser-cited articles in their own law review relative to their articles published in other law reviews. These findings suggest that legal scholarship, in contrast to other academic disciplines, exhibits bias in article selection at the expense of quality.
----------------------
Willful Blindness: The Inefficient Reward Structure in Academic Research
Stan Liebowitz
Economic Inquiry, forthcoming
Abstract:
This article examines how economics departments judge research articles and assign credit to authors. It begins with a demonstration that only strictly prorated author credit induces researchers to choose efficiently sized teams. Nevertheless, survey evidence reveals that most economics departments only partially prorate authorship credit, implying excessive coauthorship. Indeed, a half-century increase in coauthorship may be better explained by incomplete proration than by any increased specialization among authors. A possible explanation for the reliance on incomplete proration is the self-interest of the economists most likely to engage in coauthorship: full professors. The self-interest of senior faculty may also explain the relatively small role given to citations in senior promotions. A rational response by economists to the under-proration of author credit is to engage in false authorship. Although false authorship is of dubious ethical status, it may have the perverse impact of improving the efficiency of team production. Grossly excessive coauthorship, where little attention is paid to most authors listed on a paper, as found in some other academic disciplines, may be the path down which economics is headed if the reward structure is not altered.
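The proration argument can be illustrated with a small sketch (assumed numbers and a simple linear blend between full and 1/n credit, not the article's formal model): unless credit is strictly prorated, the total credit a team collects grows with its size, so adding coauthors pays even when they contribute nothing.

    def per_author_credit(team_size, proration):
        # proration = 1.0 is strict 1/n credit; 0.0 gives every coauthor full credit.
        return (1 - proration) * 1.0 + proration / team_size

    def total_team_credit(team_size, proration):
        # Credit summed over all coauthors of a single paper.
        return team_size * per_author_credit(team_size, proration)

    for proration in (0.0, 0.5, 1.0):
        row = [round(total_team_credit(n, proration), 2) for n in (1, 2, 4)]
        print(f"proration={proration}: total credit for teams of 1, 2, 4 -> {row}")
    # Only under strict proration (1.0) does total credit stay at 1 regardless of
    # team size, removing the incentive to pad the author list.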