RB 101 · 4 min · May 4, 2026

When AI Reviews AI: The Peer Review System Is Buckling

By Tarun Singhal


Last week, a philosophy journal had a small earthquake. A referee report, the thing that's supposed to be the human conscience of academic publishing, was found to have been written by AI. Not assisted by AI. Generated by it.

It's not an isolated case. At ICLR 2026, the most-read AI conference in the world, more than one in five peer reviews were found to be fully AI-written. Five years ago, that statistic would have been a scandal. Today, it's a Tuesday.

The peer review system isn't broken. It's something more uncomfortable. It's still working, but the trust contract underneath it is being quietly rewritten, and most researchers are finding out about it after the fact.

How We Got Here

Peer review was built for a slower world. A thoughtful expert reads a paper for a few hours, sends back questions, and a journal editor weighs the response. Reviewers are unpaid. Workloads have ballooned. Submissions across most fields have multiplied since 2020.

Then ChatGPT showed up. For a tired reviewer juggling teaching, grants, and three other reviews, "draft me a critique of this manuscript" is a tempting shortcut.

Publishers know it's happening. Springer Nature, Elsevier, and Wiley have all rolled out AI-detection systems and policies on AI-assisted reviewing this year. Some now require submissions to include an "AI Interaction Protocol": a mini methods section disclosing which model the authors used and on which databases. The trouble is that detectors aren't reliable. They flag mixed human-AI text as fully machine-generated, so a human reviewer who polished their grammar with a tool can be falsely accused of misconduct, while a fully AI-written report can sometimes slip through clean.

What This Costs the Average Researcher

Imagine you're a new PhD reading a paper in your subfield. You see "peer reviewed at a top journal" stamped on the front. You assume that means somebody senior, with the right expertise, took the work apart and rebuilt it before signing off.

In 2026, that assumption is starting to break. The label still appears, but the work behind it is increasingly opaque. Was your paper read by a human? Was it read at all? Were the comments shaped by an LLM that doesn't actually understand the experimental design?

This matters because peer review isn't just a quality stamp. It's how the field decides what counts as known. When that signal degrades, downstream effects spread quickly. Citations get inflated for work that was never truly vetted. Bad ideas survive longer. Good ideas don't get the sharp pushback they need to become great. And researchers, especially early-career ones, start to lose their bearings on what to trust.

Where Discovery Comes In

When the trust signal weakens at the journal level, researchers turn elsewhere. They turn to community signals like who's citing the work, who's discussing it, who's building on it. They turn to multimedia, where a 12-minute audio explanation from the author often beats a 28-page methods section for figuring out what was actually done. They turn to surfacing tools that can pull together diverse signals across language, format, and citation context, so readers can form their own judgement instead of relying on a single stamp. ResearchBunny's AI-curated discovery and multilingual audio briefs are built for exactly this moment, when the right paper isn't always the most-cited one, and the most-cited one isn't always the most trustworthy.

What Comes Next

Some changes are already under way. Open peer review, where reports are signed and published alongside the paper, is gaining ground because it makes both sides accountable. Publish-Review-Curate models are forcing the conversation about what review even means in a world of preprints and post-publication scrutiny. AI Interaction Protocols, awkward as they are, may become standard.

The peer review system won't collapse. Systems like that rarely do. They just get more layered, more checked, and more dependent on readers doing the work that institutions used to do for them.

Researchers in 2026 don't get to outsource judgement anymore. The label on the front of the paper isn't the whole story. It hasn't been for a while.

Tags: AI, Peer Review, Scholarly Publishing, Research Integrity, Academic Publishing
