ChatGPT Can't be Credited as an Author, Says World's Largest Academic Publisher

posted by hubie on Friday February 03 2023, @01:51AM   Printer-friendly
from the can-ChatGPT-be-a-reviewer? dept.

But Springer Nature, which publishes thousands of scientific journals, says it has no problem with AI being used to help write research — as long as its use is properly disclosed:

Springer Nature, the world's largest academic publisher, has clarified its policies on the use of AI writing tools in scientific papers. The company announced this week that software like ChatGPT can't be credited as an author in papers published in its thousands of journals. However, Springer says it has no problem with scientists using AI to help write or generate ideas for research, as long as this contribution is properly disclosed by the authors.

"We felt compelled to clarify our position: for our authors, for our editors, and for ourselves," Magdalena Skipper, editor-in-chief of Springer Nature's flagship publication, Nature, tells The Verge. "This new generation of LLM tools — including ChatGPT — has really exploded into the community, which is rightly excited and playing with them, but [also] using them in ways that go beyond how they can genuinely be used at present."

[...] Skipper says that banning AI tools in scientific work would be ineffective. "I think we can safely say that outright bans of anything don't work," she says. Instead, she says, the scientific community — including researchers, publishers, and conference organizers — needs to come together to work out new norms for disclosure and guardrails for safety.

Originally spotted on The Eponymous Pickle.


Original Submission

Related Stories

Netflix Stirs Fears by Using AI-Assisted Background Art in Short Anime Film 15 comments

https://arstechnica.com/information-technology/2023/02/netflix-taps-ai-image-synthesis-for-background-art-in-the-dog-and-the-boy/

Over the past year, generative AI has kicked off a wave of existential dread over potential machine-fueled job loss not seen since the advent of the industrial revolution. On Tuesday, Netflix reinvigorated that fear when it debuted a short film called The Dog and the Boy that utilizes AI image synthesis to help generate its background artwork.

Directed by Ryotaro Makihara, the three-minute animated short follows the story of a boy and his robotic dog through cheerful times, although the story soon takes a dramatic turn toward the post-apocalyptic. Along the way, it includes lush backgrounds apparently created as a collaboration between man and machine, credited to "AI (+Human)" in the end credit sequence.

[...] Netflix and the production company WIT Studio tapped Japanese AI firm Rinna for assistance with generating the images. They did not announce exactly what type of technology Rinna used to generate the artwork, but the process looks similar to a Stable Diffusion-powered "img2img" process that can take an image and transform it based on a written prompt.

Related:
ChatGPT Can't be Credited as an Author, Says World's Largest Academic Publisher
90% of Online Content Could be 'Generated by AI by 2025,' Expert Says
Getty Images Targets AI Firm For 'Copying' Photos
Controversy Erupts Over Non-consensual AI Mental Health Experiment
Microsoft's New AI Can Simulate Anyone's Voice With Three Seconds of Audio
AI Everything, Everywhere
Microsoft, GitHub, and OpenAI Sued for $9B in Damages Over Piracy
Adobe Stock Begins Selling AI-Generated Artwork
AI Systems Can't Patent Inventions, US Federal Circuit Court Confirms


Original Submission

Robots Let ChatGPT Touch the Real World Thanks to Microsoft 15 comments

https://arstechnica.com/information-technology/2023/02/robots-let-chatgpt-touch-the-real-world-thanks-to-microsoft/

Last week, Microsoft researchers announced an experimental framework to control robots and drones using the language abilities of ChatGPT, a popular AI language model created by OpenAI. Using natural language commands, ChatGPT can write special code that controls robot movements. A human then views the results and adjusts as necessary until the task gets completed successfully.

The research arrived in a paper titled "ChatGPT for Robotics: Design Principles and Model Abilities," authored by Sai Vemprala, Rogerio Bonatti, Arthur Bucker, and Ashish Kapoor of the Microsoft Autonomous Systems and Robotics Group.

In a demonstration video, Microsoft shows robots—apparently controlled by code written by ChatGPT while following human instructions—using a robot arm to arrange blocks into a Microsoft logo, flying a drone to inspect the contents of a shelf, or finding objects using a robot with vision capabilities.

To get ChatGPT to interface with robotics, the researchers taught ChatGPT a custom robotics API. When given instructions like "pick up the ball," ChatGPT can generate robotics control code just as it would write a poem or complete an essay. After a human inspects and edits the code for accuracy and safety, the human operator can execute the task and evaluate its performance.

In this way, ChatGPT accelerates robotic control programming, but it's not an autonomous system. "We emphasize that the use of ChatGPT for robotics is not a fully automated process," reads the paper, "but rather acts as a tool to augment human capacity."
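To make that workflow concrete, here is a minimal sketch of the human-in-the-loop pattern described above. The ask_llm(), move_arm(), and grab() names are illustrative placeholders assumed for this example, not Microsoft's actual robotics API or a real ChatGPT client:

# Minimal sketch of the human-in-the-loop workflow described above.
# ask_llm(), move_arm() and grab() are illustrative placeholders, not
# Microsoft's actual robotics API or the ChatGPT client library.

API_DESCRIPTION = """\
You control a robot arm through two functions:
  move_arm(x, y, z)   # move the gripper to coordinates (x, y, z) in metres
  grab(closed)        # close (True) or open (False) the gripper
Respond only with Python code that calls these functions.
"""


def ask_llm(prompt: str) -> str:
    """Placeholder for a call to a language model such as ChatGPT."""
    raise NotImplementedError("connect this to a real LLM client")


def run_task(instruction: str) -> None:
    # 1. Give the model the API description plus the natural-language task.
    generated = ask_llm(f"{API_DESCRIPTION}\nTask: {instruction}")

    # 2. A human inspects the generated code before it touches hardware,
    #    since the paper stresses this is not a fully automated process.
    print("--- proposed control code ---")
    print(generated)

    # 3. Execute only after explicit human approval.
    if input("Run this on the robot? [y/N] ").strip().lower() == "y":
        exec(generated)


# Example: run_task("pick up the ball")

Keeping the execution step behind an explicit prompt mirrors the paper's point: the model proposes control code, but a human remains responsible for what actually runs.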

  • (Score: 4, Interesting) by looorg on Friday February 03 2023, @08:34AM (3 children)

    by looorg (578) on Friday February 03 2023, @08:34AM (#1289992)

    The good papers won't need AI and the poor papers won't disclose it. After all, if the tech is just good enough it won't show. They'll replace the tedious work with just having the AI write that up, then they'll have a human give it the once-over or two and fix the idiocy of it all and include the things that they have agreed need to be there that the AI obviously missed or didn't understand. Then onto the "real" science. If it's just that they can't be used as a citation, which makes sense since it didn't really have any actual ideas of its own, then some poor grad student will act as ChatGPT-goalie and take all the credits.

    I wait for them to train the AI in such a way that it will actually start to cite others properly. Then we'll have SEO-levels of jank as it will start to try and create citation-monsters where AI papers are citing each other in circle-jerk fashion to try and get its citation-index up. Then we'll have the real divider between the good papers and the truly shit ones.

    • (Score: 2) by takyon on Friday February 03 2023, @11:27AM (1 child)

      by takyon (881) <{takyon} {at} {soylentnews.org}> on Friday February 03 2023, @11:27AM (#1290000) Journal
      • (Score: 2) by looorg on Friday February 03 2023, @01:46PM

        by looorg (578) on Friday February 03 2023, @01:46PM (#1290011)

        Sweet. Facebook created Science-Tay, the anti-vaxx bot. Somehow I don't see them unpausing it for the public anytime soon. I guess this is the big thing: AI can't, at the moment, hold and reason around an argument. It can only identify key words and parrot them back. To some people, and apparently in some cases, this might go over as reasonable, at least at a quick glance. But once you actually read it, it is exposed for the complete mess that it is.

        It's been tried here at work for the last month or so, as teachers were having a panic about students cheating with ChatGPT. None of them have been able to produce a solid and good paper as of yet; there have been a lot of horrible ones and some that sort of didn't seem so bad at a glance. A lot of them are just word rambling as it tries to include everything it thinks is important and correct. But it can't set those things in context or in relation to each other. Papers that could hold an actual argument or compare a few things to each other? None. Actual good papers? None. Not a single one. The current conclusion is that the best-case scenario so far is to sort of have the ChatGPT paper as a base to dig up the basics or generic information that you then work on and develop. But then, since it's so basic, you should have just gotten it by reading the books or going to the lecture. Perhaps the upside is that now the students actually have to read the books so they can check the work of their ChatGPT paper and don't get busted for obvious plagiarism and cheating.

    • (Score: 0) by Anonymous Coward on Friday February 03 2023, @08:58PM

      by Anonymous Coward on Friday February 03 2023, @08:58PM (#1290102)

      > The good papers won't need AI and the poor papers won't disclose it. After all, if the tech is just good enough it won't show. They'll replace the tedious work with just having the AI write that up, then they'll have a human give it the once-over or two and fix the idiocy of it all and include the things that they have agreed need to be there that the AI obviously missed or didn't understand. Then onto the "real" science. If it's just that they can't be used as a citation, which makes sense since it didn't really have any actual ideas of its own, then some poor grad student will act as ChatGPT-goalie and take all the credits.
      >
      > I wait for them to train the AI in such a way that it will actually start to cite others properly. Then we'll have SEO-levels of jank as it will start to try and create citation-monsters where AI papers are citing each other in circle-jerk fashion to try and get its citation-index up. Then we'll have the real divider between the good papers and the truly shit ones.

      The only difference between this and what we have now is that cheap imported labor is being used instead of free AI labor.

      Science is being swamped out with "high standards" for grad students, who need to churn out 3, 4, 5 pieces of flair in order to get their degree. Quality control is outsourced to journals in the form of reviewed articles. It's massively vulnerable to corruption - and IMHO is actively exploited by certain nation states. Just go to any science department in the United States and take a random guess which one...

      Anyone trying to do diligent work has no hope of keeping up with the firehose of feces being flung out by these untrained armies. ChatGPT will hopefully bring the stupidity to a head, and students will be required to produce 500 articles, or 5000 for the bright ones.

  • (Score: 2) by JoeMerchant on Friday February 03 2023, @02:13PM (14 children)

    by JoeMerchant (3937) on Friday February 03 2023, @02:13PM (#1290017)

    ChatGPT is a tool, like Google, Baidu or Yandex. If you're taking the output of ChatGPT directly into your paper without editing, you might credit the ChatGPT developers... better still would be to publish your input queries that led to that output.

    --
    🌻🌻 [google.com]
    • (Score: 0) by Anonymous Coward on Friday February 03 2023, @04:40PM (1 child)

      by Anonymous Coward on Friday February 03 2023, @04:40PM (#1290040)

      > better still would be to publish your input queries that led to that output.

      This^^^ Perhaps ChatGPT could be configured to begin every answer with an echo of the question--"You asked....:"
      That could help establish it as a tool, not as a method of cheating.

      • (Score: 0) by Anonymous Coward on Friday February 03 2023, @09:03PM

        by Anonymous Coward on Friday February 03 2023, @09:03PM (#1290103)

        Who the fuck wants to read that shit tho? It's like burying the one original idea in hundreds of pages of boilerplate. It's training readers to skim. Let's cut out the middle man - just post the queries and not the answers.

    • (Score: 2) by DannyB on Friday February 03 2023, @05:11PM (11 children)

      by DannyB (5839) Subscriber Badge on Friday February 03 2023, @05:11PM (#1290050) Journal

      Chat GPT may be only a mere tool to you, but it is going to replace humans in all important fields of human endeavor. Especially the most important things, from software developers to reality TV show stars. This is because Chat GPT’s confidence in its answers more than makes up for any lack of accuracy in those answers. No new human creativity or insight will ever again be necessary because Chat GPT can simply re-hash the distilled wisdom of the materials it was trained on. Similarly, no new computer code will ever be needed because bits and scraps of existing code can be stitched together in meaningless ways to somehow perform all possible software tasks that may be necessary in the future. I know this to be true because a machine told me so. I always believe anything a machine tells me. And you should too!

      I am reminded of an ancient Star Trek pilot that was made at a time when people lived like 20th century savages without the internet and some people still had black and white TVs. In this episode there was an underground civilization that had forgotten how to repair the ancient machines they depended on that were built by their ancestors. Obviously their ancestors had elegant weapons from a more civilized age.

      --
      Don't put a mindless tool of corporations in the white house; vote ChatGPT for 2024!
      • (Score: 2) by JoeMerchant on Friday February 03 2023, @05:22PM (4 children)

        by JoeMerchant (3937) on Friday February 03 2023, @05:22PM (#1290054)

        Chat GPT is a mere tool; it is going to replace humans in all unimportant fields of human endeavor.

        If Chat GPT does it better, why are we wasting our lives trying to compete? Humans used to prepare the fields for planting by hand, then they made tools to make it easier, then they harnessed animal power to pull tools like a plough, then they employed the internal combustion engine in tractors to replace the animals, and now we have all manner of automated farm equipment. Growing food is an important endeavor, but the direct preparation of the field is not an important field of human endeavor.

        The real question is: social upheaval. How are the rich going to remain rich if they can't keep the poor masses working for a living?

        --
        🌻🌻 [google.com]
        • (Score: 2) by mcgrew on Friday February 03 2023, @08:56PM (3 children)

          by mcgrew (701) <publish@mcgrewbooks.com> on Friday February 03 2023, @08:56PM (#1290099) Homepage Journal

          Tools have ALWAYS replaced humans and animals, from the steam engine to the steam shovel to the computer (see Hidden Figures, about human computers and NASA). One steam shovel put a hundred laborers out of work.

          --
          Poe's Law [nooze.org] has nothing to do with Edgar Allen Poetry
          • (Score: 2) by JoeMerchant on Friday February 03 2023, @09:52PM (2 children)

            by JoeMerchant (3937) on Friday February 03 2023, @09:52PM (#1290118)

            And, is manually digging a hole that a steam shovel can dig "important" human endeavor?

            The problem I see is: people have flocked to "simple" jobs in droves. Job descriptions like: "All you have to do is memorize all this, then execute the decision tree we've laid out for you." Those jobs are toast.

            Problem solving, analytical thinking, coloring outside the lines, those are still "important" human endeavors... "Hello and thank you for calling MegaCorp, how may I direct your call?" not so much.

            --
            🌻🌻 [google.com]
            • (Score: 2) by mcgrew on Saturday February 04 2023, @04:17PM (1 child)

              by mcgrew (701) <publish@mcgrewbooks.com> on Saturday February 04 2023, @04:17PM (#1290249) Homepage Journal

              And, is manually digging a hole that a steam shovel can dig "important" human endeavor?

              It was before that steam shovel was invented. It's the only way they could build bridges and such for thousands of years, and to any human except the very rich, getting a paycheck is the most important of endeavors. That steam shovel cost fifty laborers their jobs and further enriched their employer, but its invention was still a worthy endeavor.

              The problem I see is: people have flocked to "simple" jobs in droves. ...Problem solving, analytical thinking, coloring outside the lines, those are still "important" human endeavors...

              When the simple job pays as well as the complex job, you'll have that. That's why pure communism can't work on a large scale, although it can in a tribal setting. Then there's the fact that some people are simply not smart enough to design that new bridge and have to dig the foundation.

              --
              Poe's Law [nooze.org] has nothing to do with Edgar Allen Poetry
              • (Score: 2) by JoeMerchant on Saturday February 04 2023, @07:55PM

                by JoeMerchant (3937) on Saturday February 04 2023, @07:55PM (#1290292)

                >any human except the very rich, getting a paycheck is the most important of endeavors.

                I think I started this off with social upheaval being the biggest challenge...

                >some people are simply not smart enough to design that new bridge and have to dig the foundation.

                For well over 100 years, most bridges have been selected from a catalog; all you need to do is provide proper foundations to set them on, and the steam shovel and its successors have dug those foundations.

                We do, however, have a lot more telephone customer assistants than in 1800; that may change with this latest advance.

                --
                🌻🌻 [google.com]
      • (Score: 2) by JoeMerchant on Friday February 03 2023, @05:27PM (5 children)

        by JoeMerchant (3937) on Friday February 03 2023, @05:27PM (#1290056)

        >No new human creativity or insight will ever again be necessary because Chat GPT can simply re-hash the distilled wisdom of the materials it was trained on.

        I don't know about you, but my 30+ year career in R&D has been predicated on "boldly doing things that have not been done before." Sure, we leverage and reuse existing knowledge wherever possible, but it's the application of that existing knowledge to do new things that's always been valuable to my employers.

        Looking at my current employer, with 100K+ employees, the big visible activity is humans constantly rearranging (proceduralizing, standardizing, etc.) how humans do things. ChatGPT should be much easier to manage, in that respect. But, in the fields of new endeavor, try asking ChatGPT how to create a practical stable exothermic fusion reaction.

        --
        🌻🌻 [google.com]
        • (Score: 2) by DannyB on Friday February 03 2023, @05:30PM

          by DannyB (5839) Subscriber Badge on Friday February 03 2023, @05:30PM (#1290058) Journal

          I did not use any <no-sarcasm> tags. :-)

          --
          Don't put a mindless tool of corporations in the white house; vote ChatGPT for 2024!
        • (Score: 2) by JoeMerchant on Friday February 03 2023, @05:42PM

          by JoeMerchant (3937) on Friday February 03 2023, @05:42PM (#1290061)

          Completion remains an exercise left to the humans:

          Creating a practical, stable exothermic nuclear fusion reaction for electricity generation is a challenging scientific and engineering task that is currently an active area of research. No viable commercial solution exists yet.

          The basic idea is to contain a plasma of fusion fuels (usually hydrogen isotopes such as deuterium and tritium) at high temperature and pressure so that the fusion reactions can occur. The most widely studied fusion reaction is deuterium-tritium fusion, which releases a large amount of energy in the form of high-speed neutrons. These neutrons can then be captured to produce heat, which can be converted into electricity using a steam turbine.

          To achieve fusion conditions, the plasma must be heated to tens of millions of degrees and maintained in a confined state for a sufficient length of time to allow a significant number of fusion reactions to occur. This is typically done using magnetic confinement, where magnetic fields are used to contain the plasma and prevent it from touching the walls of the containment vessel. One example of a magnetic confinement device is a tokamak, which uses magnetic fields to confine the plasma in a toroidal shape.

          While significant progress has been made in magnetic confinement fusion, many technical challenges remain, including maintaining a high plasma pressure and temperature for a sufficient length of time, developing materials that can withstand the harsh conditions of the plasma environment, and improving the overall efficiency of the energy conversion process.

          In summary, creating a practical, stable exothermic nuclear fusion reaction for electricity generation is a complex and ongoing research effort that requires significant scientific and engineering innovations.

          --
          🌻🌻 [google.com]
        • (Score: 0) by Anonymous Coward on Friday February 03 2023, @09:14PM (2 children)

          by Anonymous Coward on Friday February 03 2023, @09:14PM (#1290107)

          I don't know about you, but...

          Well let me tell you... ;) My 25+ year career in R&D has been characterized by people unable to come up with clever ideas or do serious experimental work being promoted into managerial positions, and then imposing the very thing that blocked them onto others. Namely: discipline, punishment, rules, hierarchy. All these things plainly fail to generate ideas or productive communities, but on the other hand they do generate feelings of superiority and narcissistic supply. Which is the point after all.

          • (Score: 3, Interesting) by JoeMerchant on Friday February 03 2023, @10:06PM (1 child)

            by JoeMerchant (3937) on Friday February 03 2023, @10:06PM (#1290121)

            >they do generate feelings of superiority and narcissistic supply. Which is the point after all.

            Yeah, I was in a place like that once. Had a two-year obligation, otherwise I'd have to pay them back for the (6+ months' salary) moving expenses they fronted to get me there. Not specifically because of that time limit, but kinda coincidentally, a whole lot of factors came together which saw me leaving that place 2 years and 5 months after I started.

            My favorite episode there: a proposed "improvement" to the product. I point out that the existing "unimproved" state presents an important test of a rare but potentially deadly reaction to the product, and that it currently performs that test in a well controlled environment with all kinds of support to recover if the 1/700 "bad thing" happens to be particularly bad in this case. In the "improved" product, that test now comes in a not-so-well-equipped location, which could lead to serious freakout and maybe someday a death which could have been prevented in the better equipped environment. So, that 1/700 phenomenon is extremely taboo to speak about within earshot of upper management, and several minions come at me with pre-prepared, if inappropriate, rebuttals. To which: I shrug. Y'all know: I came, I saw, I brought it up, you can listen or you can shut me down, but you can't say you never heard or thought of the possibilities.

            So, like a month later, one of those minions - of course the one who shouted the loudest that I didn't know what I was talking about, it's all irrelevant, nothing to see here - turns around, picks up all my arguments, and starts championing them all around the company as his own ideas. He pulls this shit to my boss right in front of me; my boss remembers very clearly that the whole thing was presented by me, to the core product team, a month ago, so we both sort of snort/grin at him and say: "Yeah, you run with that, sounds really solid to us."

            The people above Mr. Idea Smuggler were worse, orders of magnitude worse.

            --
            🌻🌻 [google.com]
            • (Score: 0) by Anonymous Coward on Saturday February 04 2023, @04:39AM

              by Anonymous Coward on Saturday February 04 2023, @04:39AM (#1290172)

              I think the abrupt about-face done in a convincing manner is what distinguishes them to the upper tier. If you can't lie to my face convincingly, then you aren't going to make it 'round here.
