The Pressure to Publish Is Challenging the Foundations of Academic Integrity

Andrei Mihai

According to an analysis in Science, around 1.92 million papers were indexed by the Scopus and Web of Science publication databases in 2016. By 2022, that number had risen to 2.82 million. The authors of the analysis asked whether this growth is eroding trust in science: can all this research be genuine and trustworthy?

At the 12th Heidelberg Laureate Forum, which took place in September 2025, a panel of mathematicians, publishers, and research integrity experts tried to address this thorny situation. Their discussion revealed a complex ecosystem in which academics are caught between the need for openness and the pressure to perform in the publishing arena.

Publish or Perish

Panel discussion at the Heidelberg Laureate Forum. © HLFF / Kreutzer

The modern academic machine runs fast. In the “good old days,” researchers could take their time and immerse themselves in their work, sometimes discussing directly with reviewers and engaging in a complex back-and-forth, says Yukari Ito, a mathematician and Professor at Nagoya University.

Nowadays, this is no longer the case. The “publish or perish” mantra has firmly gripped the academic world. Papers are expected to appear in rapid succession, each feeding the next grant application or promotion review.

“There are many papers published every day, which also puts pressure on us to publish many papers,” Ito says. “We have to be careful with this evaluation. Many people look at the number of papers and citations and so on,” she added.

Eunsang Lee, who works in the research integrity group at Springer Nature, also acknowledges the issue and says the pressure comes from research institutions and funding bodies.

“We understand that there is pressure to publish many papers, especially from the institution or funding bodies,” he said. “We are trying to work closely with institutions and funding bodies to relieve this kind of pressure, but there’s a very long way to go. It also depends on the culture and country. As a publisher, there’s not much we can do, but we try to provide a safer platform for researchers.”

Eunsang Lee (left) at the panel discussion on the state of academic integrity. © HLFF / Flemming

This pressure is not without consequences, and those consequences are felt most keenly by young researchers. Data consistently show that PhD students suffer severe mental strain and are at high risk of depression and anxiety. For many young researchers, mental health problems have become a “normal” part of research life.

At the same time, this approach can end up prioritizing quantity over quality, resulting in a flood of papers. Some of these papers are good or excellent; some are repetitive or inconsequential; and some are outright fraudulent.

How Bad Is Academic Fraud?

There is no way to tell just how widespread academic fraud is. Lee, whose main task is to analyze data from potentially problematic papers or authors, says he was not originally aware of this problem. “After joining the publishing industry, I realized that academic integrity is a serious problem.”

There are many types of breaches, Lee says. “Data fabrication is one example. For publishers, this is difficult because identifying data fabrication requires both expertise and in-depth analysis. The problem is that we are getting this at a massive scale, so it’s really hard to tackle.”

But aside from these classic issues, newer risks have emerged, particularly with the rise of AI. AI can mask plagiarism behind clever paraphrasing, and it can generate or edit images (sometimes with hilarious results). It can also facilitate chains of citations, connecting unrelated papers in an artificial web of self-reference.

“AI can generate text and images, which can be a big problem and result in obviously fake science,” Lee says. “We also see many cases of irrelevant references, because some people include self-citations of unrelated work or other irrelevant references to inflate their citation profile. It’s a big problem, and also hard to tackle as a publisher.”

There is no simple way to weed out these problems, but it is not impossible, either.

The Rise of the Scientific Sleuth

Lonni Besançon on the panel. © HLFF / Flemming

For Lonni Besançon, an Assistant Professor of Visualization at Linköping University in Sweden and an alumnus of the HLF, questions of integrity became personal during the pandemic.

“I got into misconduct finding and looking at it because some of the papers that I was reading during COVID were a bit problematic,” he says. “I have a methodological background also, so I started looking into this more, and I’ve now discovered quite a few problematic papers. Around a few thousand, in different fields, not in mine. I’ve been reporting on them for a while and talking about this.”

Besançon belongs to a growing group of volunteer “scientific sleuths” – researchers who spend their free time identifying falsified data, image duplications, and plagiarized manuscripts. Many of them have exposed widespread misconduct, including fake journals and so-called “paper mills” that mass-produce fraudulent articles for paying clients. Science benefits greatly from this work, yet it remains largely unpaid, underappreciated volunteer labor.

Assessing scientific discovery has never been easy, and the need for better metrics is clear. Yet, for Besançon, the core issue is the idea of metrics itself.

“The day something becomes a metric is the day people start gaming it. Personally, I’m against all kinds of metrics. I see the point of citations …, but I think if we make a metric, eventually people will game it. It’s always been the case.”

The French researcher also emphasizes that a problem, once spotted, is not always the result of bad intent. Honest mistakes happen. However, retractions and corrections have come to be seen as damning for scientists when, in fact, they are a normal part of the scientific process.

“We shouldn’t see the correction of papers or the retraction of papers as a problem. If there’s a mistake in one of our papers, we should correct it because this is the body of knowledge that we’ve created for the world,” says Besançon.

Lee echoed that sentiment: “Don’t be afraid of retractions and corrections caused by honest errors. Everyone can make a mistake. Publishers always try to distinguish between corrections caused by honest errors and breaches of research integrity. So don’t worry about this. Post-publication actions by authors are the healthiest way to achieve a high standard of research today. That’s something I want to mention especially for young researchers.”

So What Do We Do?

Yukari Ito speaking on the panel. © HLFF / Flemming

Maintaining trust in a system that produces nearly three million papers a year is an enormous challenge. Transparency is a good way to start. If research integrity is the foundation of science, then transparency is its scaffolding. It supports trust not only between researchers and their peers, but between science and the public.

As one panelist put it: “Transparency is key. When in doubt, share. That’s one thing I can say confidently. If you can share, share. If you can’t, for instance because of anonymization problems, just explain why you’re not sharing. Be transparent with everything as much as you can.”

This ethos of openness, honesty, and accountability was echoed across the panel. It is the best defense scientists have at their disposal to showcase the value of their work.

“It’s obvious that you have to ensure fully transparent authorship, including authorship changes, and you also have to declare any conflicts of interest and cite your sources. The next thing I would say, which is especially true in mathematics and computer science, is to share your code and data as much as you can,” says Lee.

Yet, even as researchers and publishers confront misconduct, the structure of academic incentives remains unchanged, pushing for more papers. Ito suggests that perhaps the entire system should rediscover the patience that once defined scholarship: to slow down, to prioritize originality over output, and to treat retractions and corrections as acts of integrity and not as failure.

“For young researchers, publishing your first paper is very important,” she said. “You have to find one big problem. But then, you should continue your research by being original and interesting.”

Ultimately, transparency and integrity alone will not solve a systemic problem rooted in incentives. Until universities, funders, and publishers work together to devise a system that truly rewards rigor and value over volume, the pressure cooker of modern academia will keep boiling.
