But Springer Nature, which publishes thousands of scientific journals, says it has no problem with AI being used to help write research — as long as its use is properly disclosed:
Springer Nature, the world's largest academic publisher, has clarified its policies on the use of AI writing tools in scientific papers. The company announced this week that software like ChatGPT can't be credited as an author in papers published in its thousands of journals. However, Springer says it has no problem with scientists using AI to help write or generate ideas for research, as long as this contribution is properly disclosed by the authors.
"We felt compelled to clarify our position: for our authors, for our editors, and for ourselves," Magdalena Skipper, editor-in-chief of Springer Nature's flagship publication, Nature, tells The Verge. "This new generation of LLM tools — including ChatGPT — has really exploded into the community, which is rightly excited and playing with them, but [also] using them in ways that go beyond how they can genuinely be used at present."
[...] Skipper says that banning AI tools in scientific work would be ineffective. "I think we can safely say that outright bans of anything don't work," she says. Instead, she says, the scientific community — including researchers, publishers, and conference organizers — needs to come together to work out new norms for disclosure and guardrails for safety.
Originally spotted on The Eponymous Pickle.
(Score: 4, Interesting) by looorg on Friday February 03 2023, @08:34AM (3 children)
The good papers won't need AI and the poor papers won't disclose it. After all, if the tech is good enough it won't show. They'll replace the tedious work by having the AI write it up, then have a human give it a once-over or two, fix the idiocy, and add the things everyone agreed need to be there that the AI obviously missed or didn't understand. Then it's on to the "real" science. If it's just that the AI can't be credited as an author, which makes sense since it didn't really have any ideas of its own, then some poor grad student will act as ChatGPT-goalie and take all the credit.
I'm waiting for them to train the AI to actually cite others properly. Then we'll have SEO-levels of jank, as it starts building citation-monsters where AI papers cite each other in circle-jerk fashion to pump up their citation index. Then we'll have the real divider between the good papers and the truly shit ones.
(Score: 2) by takyon on Friday February 03 2023, @11:27AM (1 child)
https://www.cnet.com/science/meta-trained-an-ai-on-48-million-science-papers-it-was-shut-down-after-two-days/ [cnet.com]
https://en.wikipedia.org/wiki/Meta_(academic_company) [wikipedia.org]
Facebook is out of the running, I think.
(Score: 2) by looorg on Friday February 03 2023, @01:46PM
Sweet. Facebook created Science-Tay, the anti-vax bot. Somehow I don't see them unpausing it for the public anytime soon. I guess this is the big thing: AI can't, at the moment, hold and reason through an argument. It can only identify keywords and parrot them back. To some people, and apparently in some cases, this might pass and appear reasonable, at least at a quick glance. But once you actually read it, it's exposed for the complete mess that it is.
It's been tried here at work for the last month or so, as teachers were panicking about students cheating with ChatGPT. None of them has been able to produce a solid paper yet; there have been a lot of horrible ones and some that didn't seem so bad at a glance. A lot of them are just word rambling as the model tries to include everything it thinks is important and correct, but it can't set those things in context or in relation to each other. Papers that could hold an actual argument or compare a few things to each other? None. Actually good papers? None. Not a single one.

The current conclusion is that the best-case scenario so far is to use the ChatGPT paper as a base for digging up the basics or generic information, which you then work on and develop. But since it's so basic, you could have just gotten it by reading the books or going to the lecture. Perhaps the upside is that now the students actually have to read the books to check the work of their ChatGPT paper, so they don't get busted for obvious plagiarism and cheating.
(Score: 0) by Anonymous Coward on Friday February 03 2023, @08:58PM
The only difference between this and what we have now is that cheap imported labor is being used instead of free AI labor.
Science is being swamped by "high standards" for grad students, who need to churn out 345 pieces of flair in order to get their degree. Quality control is outsourced to journals in the form of peer-reviewed articles. It's massively vulnerable to corruption, and IMHO is actively exploited by certain nation states. Just go to any science department in the United States and take a random guess which one... Anyone trying to do diligent work has no hope of keeping up with the firehose of feces being flung out by these untrained armies. ChatGPT will hopefully bring the stupidity to a head, and students will be required to churn out 500 articles, or 5000 for the bright ones.