Analysis reveals AI’s impact on research, journals

Pierce sounds alarm about increasing strain on peer review system

Submissions to Organization Science have risen 42% since the late 2022 release of ChatGPT, but quality is down, according to a new study. (Image: Shutterstock)

ChatGPT and other generative artificial intelligence (AI) tools have been heavily marketed as productivity tools that boost creativity and accelerate workflows. In academia, that has led to more, but not better, research, according to a new analysis published April 27 by the editors of Organization Science, including Editor-in-Chief Lamar Pierce of Washington University in St. Louis.

The study is the first to document AI’s impact on submissions and reviews at a major academic journal. The authors used Pangram, an AI content detector, to analyze submissions to the journal over a five-year period. Altogether, the sample included 6,957 submissions by 11,887 authors, reviewed 10,389 times by 2,519 unique reviewers. The first two years of that window, which predate ChatGPT’s release, served as a control period for comparison.
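
That pre/post design is simple to illustrate. The Python sketch below shows a minimal version of such a comparison under stated assumptions: the file name (submissions.csv), the column names (submitted_on, ai_score) and the detector scores themselves are hypothetical placeholders for illustration, not the authors’ actual data or pipeline.

    # Minimal sketch of a pre/post-ChatGPT comparison, assuming a
    # hypothetical CSV with one row per submission. The "ai_score" column
    # stands in for an AI-likelihood score from a detector such as Pangram.
    import pandas as pd

    CHATGPT_RELEASE = pd.Timestamp("2022-11-30")

    subs = pd.read_csv("submissions.csv", parse_dates=["submitted_on"])
    subs["post_chatgpt"] = subs["submitted_on"] >= CHATGPT_RELEASE

    # Volume and mean detector score in the control (pre) and post windows.
    summary = subs.groupby("post_chatgpt").agg(
        n_submissions=("ai_score", "size"),
        mean_ai_score=("ai_score", "mean"),
    )
    print(summary)

    # The two windows differ in length, so compare monthly submission
    # rates rather than raw counts.
    months = subs.groupby("post_chatgpt")["submitted_on"].apply(
        lambda s: s.dt.to_period("M").nunique()
    )
    rate = summary["n_submissions"] / months
    print(f"Change in monthly volume: {rate.loc[True] / rate.loc[False] - 1:.0%}")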

According to their analysis, submissions to the journal have risen 42% since the late 2022 release of ChatGPT. At the same time, writing quality has declined rapidly. Most of these submissions are rejected, many during initial screening by deputy editors, and manuscripts with very low AI scores were the most likely to be published, whether in Organization Science or elsewhere.

The editors also found that more than 30% of reviews at Organization Science show some degree of AI use. These reviews are harder to read and focus more on theory and less on data. The authors concluded that AI-generated writing in reviews makes it harder for both editors and authors to act on reviewer feedback, which can ultimately affect manuscript quality.

“Having spoken with editors at other journals, I think we all had a sense that this was going on, but no one had the hard evidence on the extent of it or the implications,” said Pierce, the Beverly and James Hance Professor of Strategy at WashU Olin Business School. “This was an important motivation for us to take this project on. It’s hard for any of us to develop solutions if we don’t fully understand the extent of the problem.”

The authors warned that the current state of AI tools, amplified by existing publish-or-perish incentives, is placing unsustainable stress on the peer-review system at academic journals.

“It’s so easy for us to jump to conclusions on how to address AI in the research process, but what we crucially need is ongoing data and analysis to understand what’s actually happening. And what’s happening is changing constantly,” Pierce said.

“For those of us who are heavy users of AI, the speed of advancement is exciting but also frightening. So I hope those who are proposing policy and best practices are also immersed in using the technology. Without doing so, they’ll miss the dynamic nature of the technology and why static policies will be quickly outdated.”

The authors stressed that their study didn’t aim to determine appropriate levels of AI usage — the issue is far too complex. Rather, they hoped to start a conversation that includes journal editors, universities and authors. They’re off to a strong start: In the first week alone, the article was downloaded 10,000 times. Major news and academic outlets including Nature, Forbes and the Financial Times also joined in the conversation.

In the following Q&A, Pierce discusses the emerging crisis in peer review created by AI adoption and the “publish or perish” mentality.

In the article, you write that you believe “AI has the potential to transform research and our field,” but that we’re not yet realizing that potential. How should researchers use AI?

The key to improving research through AI is asking where humans need to be in the loop and where AI can reduce costs and time on tasks where it can match or exceed human capabilities. AI is extremely helpful in accelerating coding, advising on methods, conducting searches, and playing an adversarial role critiquing human arguments and logic. And these are just a few ways it can help. But the human authors still need to evaluate these contributions and fully understand what the AI agents are doing. Platforms such as Claude Code substantially help with coding, but we worry that human authors won’t understand what is actually being done.

How have universities’ productivity metrics and incentives contributed to the rapid adoption of AI in research?

The incentives in universities and other institutions will ultimately drive researcher behavior. It’s hard for me to blame junior scholars for focusing on more instead of better if that’s what they are rewarded for. Junior scholars face tremendous pressure for promotion and funding. Blaming them for quantity overriding quality is like shooting the horses after we steered them off a cliff.
 
Universities need to get past productivity “counts” of publications and instead focus on a scholar’s best few contributions. The challenge is that such evaluations are ultimately subjective, which raises concerns about equitable evaluation. Many public universities promote based on publication counts precisely to guard against biased evaluations of research records. And evaluating the quality of research takes far more time than counting lines on a CV.

When students cheat on their homework, they miss out on the opportunity to learn. What do researchers miss out on when they turn to AI to do their research?

Part of researchers’ natural career progression is learning new skills and knowledge with each project or paper. If AI is used in research without an understanding of how and what it’s producing, scholars don’t become experts. This is a huge concern for junior scholars’ development as writers, theorists and empiricists. They may not gain expertise the way we did a decade ago, and human expertise is still crucial for breakthrough scientific research.

How has this analysis changed your perspective as the editor-in-chief of Organization Science?

A lot of journals are thinking about how we adjust the existing peer review process to account for AI, but I’ve come to understand that this is the wrong question. The right question is: What is the best process for evaluating and promoting great research? We need to start from scratch in designing this system rather than tweaking the current one. This redesign will require a large collective effort. I’m hopeful this paper will engage others in finding better solutions.


C. Gartenberg, S. Hasan, A. Murray, L. Pierce. More Versus Better: Artificial Intelligence, Incentives, and the Emerging Crisis in Peer Review. Organization Science, published online April 27, 2026. DOI: https://doi.org/10.1287/orsc.2026.ed.v37.n3

Pierce’s co-authors are Claudine Gartenberg of the University of Pennsylvania’s Wharton School; Sharique Hasan of Duke University’s Fuqua School of Business; and Alex Murray of the University of Oregon’s Lundquist College of Business. The Wharton School and Fuqua School of Business provided funding.