Producing more but understanding less: The risks of AI for scientific research

Accepted submission by Freeman at 2024-03-07 15:39:06 from the everything is fine dept.
News

https://arstechnica.com/science/2024/03/producing-more-but-understanding-less-the-risks-of-ai-for-scientific-research/ [arstechnica.com]

Last month, we witnessed the viral sensation [arstechnica.com] of several egregiously bad AI-generated figures published in a peer-reviewed article [arstechnica.net] in Frontiers, a reputable scientific journal. Scientists on social media expressed equal parts shock and ridicule at the images, one of which featured a rat with grotesquely large and bizarre genitals.

As Ars Senior Health Reporter Beth Mole reported [arstechnica.com], looking closer only revealed more flaws, including the labels "dissilced," "Stemm cells," "iollotte sserotgomar," and "dck." Figure 2 was less graphic but equally mangled, rife with nonsense text and baffling images. Ditto for Figure 3, a collage of small circular images densely annotated with gibberish.
[...]
While the proliferation of errors is a valid concern, especially in the early days of AI tools like ChatGPT, two researchers argue in a new perspective [nature.com] published in the journal Nature that AI also poses potential long-term epistemic risks to the practice of science.

Molly Crockett [princeton.edu] is a psychologist at Princeton University who routinely collaborates with researchers from other disciplines in her research into how people learn and make decisions in social situations. Her co-author, Lisa Messeri [yale.edu], is an anthropologist at Yale University whose research focuses on science and technology studies (STS), analyzing the norms and consequences of scientific and technological communities as they forge new fields of knowledge and invention—like AI.
[...]
The paper's tagline is "producing more while understanding less," and that is the central message the pair hopes to convey. "The goal of scientific knowledge is to understand the world and all of its complexity, diversity, and expansiveness," Messeri told Ars. "Our concern is that even though we might be writing more and more papers, because they are constrained by what AI can and can't do, in the end, we're really only asking questions and producing a lot of papers that are within AI's capabilities."
[...]
One concrete example: My team built a machine learning algorithm to predict moral outrage expressions on Twitter. It works really well. It does as well as showing a tweet to a human and asking, "Is this person outraged or not?" In order to train that algorithm, we showed a bunch of tweets to human participants and asked them to say whether this tweet contained outrage. Because we have that ground truth of human perception, we can be reasonably certain that our tool is doing what we want it to do.
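A minimal sketch of the kind of validation Crockett describes, assuming a scikit-learn text classifier; the tweets, labels, and model choice here are hypothetical stand-ins, not her team's actual pipeline. The point is that the held-out human annotations serve as the ground truth the tool is benchmarked against.

    # Sketch (hypothetical data and model, not Crockett's actual system):
    # train a text classifier on human-annotated outrage labels, then measure
    # agreement with held-out human judgments.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Hypothetical tweets with labels from human annotators (1 = judged outraged).
    tweets = [
        "This ruling is an absolute disgrace and everyone should be furious.",
        "Cannot believe they got away with this, utterly shameful.",
        "Lovely weather for a walk in the park today.",
        "Just finished a great book, highly recommend it.",
    ]
    human_labels = [1, 1, 0, 0]

    X_train, X_test, y_train, y_test = train_test_split(
        tweets, human_labels, test_size=0.5, stratify=human_labels, random_state=0
    )

    vectorizer = TfidfVectorizer()
    clf = LogisticRegression()
    clf.fit(vectorizer.fit_transform(X_train), y_train)

    # Agreement with the human annotators on held-out tweets is the benchmark.
    preds = clf.predict(vectorizer.transform(X_test))
    print("Agreement with human annotators:", accuracy_score(y_test, preds))

With a real dataset the held-out agreement score is what lets the researchers say the algorithm "does as well as showing a tweet to a human."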
[...]
Once you have multiple models interacting which are not interpretable and might be making errors in a systematic way that you are not able to recognize, that's where we start to get into dangerous territory. Legal scholar Jonathan Zittrain [medium.com] has called this concept "intellectual debt": As soon as you have multiple systems interacting in a complex environment, you can very quickly get to a point where there are errors propagating through the system, but you don't know where they originate because each individual system is not interpretable to the scientists.
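A toy illustration of that propagation, not taken from the paper: two opaque models chained together, each with a small hidden systematic bias. The end-to-end output drifts measurably, but nothing in the final numbers indicates which stage introduced the error.

    # Toy illustration (hypothetical models): errors compound across a chain of
    # black boxes, and the combined drift cannot be attributed to either stage
    # from the output alone.
    import random

    def model_a(x):
        # Black box with a hidden systematic bias of +0.05 plus noise.
        return x + 0.05 + random.gauss(0, 0.01)

    def model_b(y):
        # Second black box that silently amplifies its input by 3% plus noise.
        return y * 1.03 + random.gauss(0, 0.01)

    random.seed(0)
    true_values = [i / 10 for i in range(1, 101)]
    pipeline_outputs = [model_b(model_a(x)) for x in true_values]

    mean_error = sum(o - t for o, t in zip(pipeline_outputs, true_values)) / len(true_values)
    print(f"Mean systematic error after two stages: {mean_error:.3f}")
    # The drift is visible, but locating its source requires opening up each
    # stage, which is exactly what non-interpretable models do not allow.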
[...]
So much of the discourse around AI pushes this message of inevitability: that AI is here, it is not going away, it's inevitable that this is going to bring us to a bright future and solve all our problems. That message is coming from people who stand to make a lot of money from AI and its uptake all across society, including science. But we decide when and how we are going to use AI tools in our work. This is not inevitable. We just need to be really careful that these tools serve us. We're not saying that they can't. We're just adamant that we need to educate ourselves in the ways that AI introduces epistemic risk to the production of scientific knowledge. Scientists working alone are not going to engineer our way out of those risks.

Nature, 2024. DOI: 10.1038/s41586-024-07146-0 [doi.org]

Related stories on SoylentNews:
Widely Used Machine Learning Models Reproduce Dataset Bias: Study [soylentnews.org] - 20240220
Scientists Aghast at Bizarre AI Rat With Huge Genitals in Peer-Reviewed Article [soylentnews.org] - 20240218
AI in Medicine Needs to be Carefully Deployed to Counter Bias – and Not Entrench It [soylentnews.org] - 20230723
Algorithm Predicts Crime a Week in Advance, but Reveals Bias in Police Response [soylentnews.org] - 20220708
Machine Learning Can be Fair and Accurate [soylentnews.org] - 20211026
Detroit Man Sues Police for Wrongfully Arresting Him Based on Facial Recognition [soylentnews.org] - 20210414
Why So Much Science is Wrong, False, Puffed, or Misleading [soylentnews.org] - 20200927
Scientific Fraud Described in Comic Book Form [soylentnews.org] - 20200731
Don't Fool Yourself With Confirmation Bias...and 49 Other Cognitive Biases [soylentnews.org] - 20200215
AI Will be Biased Depending on the Dataset Used for Training [soylentnews.org] - 20170414


Original Submission