

We Risk a Deluge of AI-Written ‘Science’ Pushing Corporate Interests – Here's What to Do About It

Accepted submission by hubie at 2025-09-21 14:52:03
News

We risk a deluge of AI-written 'science' pushing corporate interests – here's what to do about it [theconversation.com]:

Back in the 2000s, the American pharmaceutical firm Wyeth was sued by thousands of women who had developed breast cancer [nytimes.com] after taking its hormone replacement drugs. Court filings revealed the role of "dozens of ghostwritten reviews and commentaries published in medical journals [plos.org] and supplements being used to promote unproven benefits and downplay harms" related to the drugs.

Wyeth, which was taken over by Pfizer in 2009, had paid a medical communications firm to produce these articles, which were published under the bylines of leading doctors in the field (with their consent). Any medical professionals reading these articles and relying on them for prescription advice would have had no idea that Wyeth was behind them.

The pharmaceutical company insisted [nytimes.com] that everything written was scientifically accurate and – shockingly – that paying ghostwriters for such services was common in the industry. Pfizer ended up paying out more than US$1 billion [fiercepharma.com] (£744 million) in damages over the harms from the drugs.

The articles in question are an excellent example of "resmearch" – bullshit science [princeton.edu] in the service of corporate interests. While the overwhelming majority of researchers are motivated to uncover the truth and check their findings robustly, resmearch is unconcerned with truth – it seeks only to persuade.

[...] Already, a slew of papers in the public health literature draw on data optimised for use with AI [nature.com] to report single-factor results – studies that link a single factor to some health outcome, such as a link between eating eggs and developing dementia.

These studies lend themselves to specious results. When datasets span thousands of people and hundreds of pieces of information about them, researchers will inevitably find misleading correlations that occur by chance.

A search of the leading academic databases [plos.org] Scopus and PubMed showed that an average of four single-factor studies were published per year between 2014 and 2021. In the first ten months of 2024 alone, a whopping 190 were published.

These weren't necessarily motivated by corporate interests – some could, for example, be the result of academics looking to publish more material to boost their career prospects. The point is more that with AI facilitating these kinds of studies, they become an added temptation for businesses looking to promote products.

[...] One issue is that research does not always go through peer review [shortform.com] prior to informing policy. In 2021, for example, US Supreme Court justice Samuel Alito, in an opinion [supremecourt.gov] on the right to carry a gun, cited a briefing paper by a Georgetown academic that presented survey data on gun use [supremecourt.gov].

The academic and gun survey were funded by the Constitutional Defence Fund [nytimes.com], which the New York Times describes as a "pro-gun nonprofit [nytimes.com]".

Since the survey data are not publicly available and the academic has refused to answer questions [nytimes.com] about this, it is impossible to know whether his results are resmearch. Still, lawyers have referenced his paper in cases across the US [nytimes.com] to defend gun interests.

One obvious lesson is that anyone relying on research should be wary of any that has not passed peer review. A less obvious lesson is that we will need to reform peer review as well. There has been much discussion in recent years about the explosion in published research and the extent to which reviewers do their jobs properly.

[...] In general, the current system seems ill-equipped to cope with the deluge of papers that AI will precipitate. Reviewers need to invest time, effort and scrupulous attention checking preregistrations, specification curve analyses, data, code and so on.

This requires a peer-review mechanism that rewards reviewers [arxiv.org] for the quality of their reviews [arxiv.org].

Public trust in science remains high [nature.com] worldwide. That is good for society because the scientific method is an impartial judge that promotes what is true and meaningful over what is popular or profitable.

Yet AI threatens to take us further from that ideal than ever. If science is to maintain its credibility, we urgently need to incentivise meaningful peer review.
