

posted by hubie on Friday February 03 2023, @01:51AM   Printer-friendly
from the can-ChatGPT-be-a-reviewer? dept.

But Springer Nature, which publishes thousands of scientific journals, says it has no problem with AI being used to help write research — as long as its use is properly disclosed:

Springer Nature, the world's largest academic publisher, has clarified its policies on the use of AI writing tools in scientific papers. The company announced this week that software like ChatGPT can't be credited as an author in papers published in its thousands of journals. However, Springer says it has no problem with scientists using AI to help write or generate ideas for research, as long as this contribution is properly disclosed by the authors.

"We felt compelled to clarify our position: for our authors, for our editors, and for ourselves," Magdalena Skipper, editor-in-chief of Springer Nature's flagship publication, Nature, tells The Verge. "This new generation of LLM tools — including ChatGPT — has really exploded into the community, which is rightly excited and playing with them, but [also] using them in ways that go beyond how they can genuinely be used at present."

[...] Skipper says that banning AI tools in scientific work would be ineffective. "I think we can safely say that outright bans of anything don't work," she says. Instead, she says, the scientific community — including researchers, publishers, and conference organizers — needs to come together to work out new norms for disclosure and guardrails for safety.

Originally spotted on The Eponymous Pickle.


Original Submission

 
This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Friday February 03 2023, @08:58PM (#1290102)

    The good papers won't need AI and the poor papers won't disclose it. After all, if the tech is good enough it won't show. They'll replace the tedious work by just having the AI write it up, then have a human give it a once-over or two to fix the idiocy and add the things they've agreed need to be there that the AI obviously missed or didn't understand. Then on to the "real" science. If it's just that AI can't be credited as an author, which makes sense since it didn't really have any actual ideas of its own, then some poor grad student will act as ChatGPT-goalie and take all the credit.

    I'm waiting for them to train the AI in such a way that it actually starts citing others properly. Then we'll have SEO-levels of jank as it tries to create citation-monsters, with AI papers citing each other in circle-jerk fashion to drive up their citation index. Then we'll have the real divider between the good papers and the truly shit ones.

    The only difference between this and what we have now is that cheap imported labor is being used instead of free AI labor.

    Science is being swamped by "high standards" for grad students, who need to churn out 3, 4, 5 pieces of flair in order to get their degree. Quality control is outsourced to journals in the form of peer-reviewed articles. It's massively vulnerable to corruption - and IMHO is actively exploited by certain nation states. Just go to any science department in the United States and take a random guess which one...

    Anyone trying to do diligent work has no hope of keeping up with the firehose of feces being flung out by these untrained armies. ChatGPT will hopefully bring the stupidity to a head, and students will be required to produce 500 articles, or 5000 for the bright ones.