AI Systems Can't Patent Inventions, US Federal Circuit Court Confirms
posted by hubie on Wednesday August 10 2022, @09:24AM   Printer-friendly
from the Patent-Act-originalism dept.

'There is no ambiguity,' says judge:

The US federal circuit court has confirmed that AI systems cannot patent inventions because they are not human beings.

The ruling is the latest failure in a series of quixotic legal battles by computer scientist Stephen Thaler to copyright and patent the output of various AI software tools he's created.

In 2019, Thaler failed to copyright an image on behalf of an AI system he dubbed Creativity Machine, with that decision upheld on appeal by the US Copyright Office in 2022. In a parallel case, the US Patent Office ruled in 2020 that Thaler's AI system DABUS could not be a legal inventor because it was not a "natural person," with this decision then upheld by a judge in 2021. Now, the federal circuit court has, once more, confirmed this decision.

[...] The Patent Act clearly states that only human beings can be inventors, writes Judge Leonard Stark. The Act refers to inventors as "individuals," a term which the Supreme Court has ruled "ordinarily means a human being, a person" (following "how we use the word in everyday parlance"), and uses personal pronouns — "herself" and "himself" — throughout, rather than terms such as "itself," which Stark says "would permit non-human inventors."

[...] According to Bloomberg Law, Thaler plans to appeal the circuit court's ruling, with his attorney, Ryan Abbott of Brown, Neri, Smith & Khan LLP, criticizing the court's "narrow and textualist approach" to the Patent Act.

Previously:
    UK Decides AI Still Cannot Patent Inventions
    When AI is the Inventor Who Gets the Patent?
    AI Computers Can't Patent their Own Inventions -- Yet -- a US Judge Rules


Original Submission

Related Stories

AI Computers Can’t Patent their Own Inventions -- Yet -- a US Judge Rules 11 comments

AI computers can't patent their own inventions — yet — a US judge rules

Should an artificially intelligent machine be able to patent its own inventions? For a US federal judge, the larger implications of that question were irrelevant. In April 2020, the US Patent and Trademark Office (USPTO) ruled that only "natural persons" could be credited as the inventor of a patent, and a US court decided Thursday that yes, that's what the law technically says (via Bloomberg).

Not every country agrees with that approach. South Africa and Australia went the other direction, granting one patent and reinstating a second patent application filed by AI researcher Stephen Thaler, whose AI system DABUS reportedly came up with a flashing light and a new type of food container. Thaler is the one who sued the US in this case as well — he's part of a group called The Artificial Inventor Project that's lobbying for AI recognition around the globe.

On a patent application, doesn't the inventor have to swear they invented it? [602.01 Naming the Inventor]


Original Submission

When AI is the Inventor Who Gets the Patent? 71 comments

When AI is the inventor who gets the patent?:

It's not surprising these days to see new inventions that either incorporate or have benefitted from artificial intelligence (AI) in some way, but what about inventions dreamt up by AI -- do we award a patent to a machine?

[...] In commentary published in the journal Nature, two leading academics from UNSW Sydney examine the implications of patents being awarded to an AI entity.

Intellectual Property (IP) law specialist Associate Professor Alexandra George and AI expert, Laureate Fellow and Scientia Professor Toby Walsh argue that patent law as it stands is inadequate to deal with such cases and requires legislators to amend laws around IP and patents -- laws that have been operating under the same assumptions for hundreds of years.

The case in question revolves around a machine called DABUS (Device for the Autonomous Bootstrapping of Unified Sentience) created by Dr Stephen Thaler, who is president and chief executive of US-based AI firm Imagination Engines. Dr Thaler has named DABUS as the inventor of two products -- a food container with a fractal surface that helps with insulation and stacking, and a flashing light for attracting attention in emergencies.

For a short time in Australia, DABUS looked like it might be recognised as the inventor because, in late July 2021, a trial judge accepted Dr Thaler's appeal against IP Australia's rejection of the patent application five months earlier. But after the Commissioner of Patents appealed the decision to the Full Court of the Federal Court of Australia, the five-judge panel upheld the appeal, agreeing with the Commissioner that an AI system couldn't be named the inventor.

UK Decides AI Still Cannot Patent Inventions 10 comments

The UK's Intellectual Property Office has decided artificial-intelligence systems cannot patent inventions for the time being:

A recent IPO consultation found many experts doubted AI was currently able to invent without human assistance.

Current law allowed humans to patent inventions made with AI assistance, the government said, despite "misperceptions" this was not the case.

Last year, the Court of Appeal ruled against Stephen Thaler, who had said his Dabus AI system should be recognised as the inventor in two patent applications, for:

  • a food container
  • a flashing light

The judges sided, by a two-to-one majority, with the IPO, which had told him to list a real person as the inventor.

"Only a person can have rights - a machine cannot," wrote Lady Justice Laing in her judgement.

"A patent is a statutory right and it can only be granted to a person."

But the IPO also said it would "need to understand how our IP system should protect AI-devised inventions in the future" and committed to advancing international discussions, with a view to keeping the UK competitive.

Originally spotted on The Eponymous Pickle.

Previously:
When AI is the Inventor Who Gets the Patent?
AI Computers Can't Patent their Own Inventions -- Yet -- a US Judge Rules
USPTO Rejects AI-Invention for Lack of a Human Inventor
AI Denied Patent by Human-Centric European Patent Office
The USPTO Wants to Know If Artificial Intelligence Can Own the Content It Creates
U.S. Patent and Trademark Office Asks If "AI" Can Create or Infringe Copyrighted Works


Original Submission

Netflix Stirs Fears by Using AI-Assisted Background Art in Short Anime Film 15 comments

https://arstechnica.com/information-technology/2023/02/netflix-taps-ai-image-synthesis-for-background-art-in-the-dog-and-the-boy/

Over the past year, generative AI has kicked off a wave of existential dread over potential machine-fueled job loss not seen since the advent of the industrial revolution. On Tuesday, Netflix reinvigorated that fear when it debuted a short film called The Dog & The Boy that utilizes AI image synthesis to help generate its background artwork.

Directed by Ryotaro Makihara, the three-minute animated short follows the story of a boy and his robotic dog through cheerful times, although the story soon takes a dramatic turn toward the post-apocalyptic. Along the way, it includes lush backgrounds apparently created as a collaboration between man and machine, credited to "AI (+Human)" in the end credit sequence.

[...] Netflix and the production company WIT Studio tapped Japanese AI firm Rinna for assistance with generating the images. They did not announce exactly what type of technology Rinna used to generate the artwork, but the process looks similar to a Stable Diffusion-powered "img2img" process that can take an image and transform it based on a written prompt.
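To make the "img2img" idea concrete, here is a minimal sketch of that kind of workflow using the open-source diffusers library. The checkpoint, file names, prompt, and strength value are assumptions for illustration; neither Netflix nor Rinna has said what tooling was actually used.

# Minimal img2img sketch using Hugging Face's diffusers library. The checkpoint,
# file names, prompt, and strength value are assumptions for illustration only.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load a Stable Diffusion img2img pipeline (any compatible checkpoint works).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Start from a rough layout painted by a human artist (hypothetical file name).
init_image = Image.open("background_layout.png").convert("RGB").resize((768, 512))

# The written prompt steers how the rough layout is transformed.
prompt = "detailed anime background, post-apocalyptic city at dusk, soft lighting"

# strength controls how far the output may drift from the input image
# (0.0 returns the input unchanged, 1.0 ignores it almost entirely).
result = pipe(prompt=prompt, image=init_image, strength=0.6, guidance_scale=7.5)
result.images[0].save("background_final.png")

In a workflow like this the human contribution is the input painting, the prompt, and any retouching of the output, which would be consistent with the short's "AI (+Human)" credit.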

Related:
ChatGPT Can't be Credited as an Author, Says World's Largest Academic Publisher
90% of Online Content Could be 'Generated by AI by 2025,' Expert Says
Getty Images Targets AI Firm For 'Copying' Photos
Controversy Erupts Over Non-consensual AI Mental Health Experiment
Microsoft's New AI Can Simulate Anyone's Voice With Three Seconds of Audio
AI Everything, Everywhere
Microsoft, GitHub, and OpenAI Sued for $9B in Damages Over Piracy
Adobe Stock Begins Selling AI-Generated Artwork
AI Systems Can't Patent Inventions, US Federal Circuit Court Confirms


Original Submission

Robots Let ChatGPT Touch the Real World Thanks to Microsoft 15 comments

https://arstechnica.com/information-technology/2023/02/robots-let-chatgpt-touch-the-real-world-thanks-to-microsoft/

Last week, Microsoft researchers announced an experimental framework to control robots and drones using the language abilities of ChatGPT, a popular AI language model created by OpenAI. Using natural language commands, ChatGPT can write special code that controls robot movements. A human then views the results and adjusts as necessary until the task gets completed successfully.

The research arrived in a paper titled "ChatGPT for Robotics: Design Principles and Model Abilities," authored by Sai Vemprala, Rogerio Bonatti, Arthur Bucker, and Ashish Kapoor of the Microsoft Autonomous Systems and Robotics Group.

In a demonstration video, Microsoft shows robots—apparently controlled by code written by ChatGPT while following human instructions—using a robot arm to arrange blocks into a Microsoft logo, flying a drone to inspect the contents of a shelf, or finding objects using a robot with vision capabilities.

To get ChatGPT to interface with robotics, the researchers taught ChatGPT a custom robotics API. When given instructions like "pick up the ball," ChatGPT can generate robotics control code just as it would write a poem or complete an essay. After a human inspects and edits the code for accuracy and safety, the human operator can execute the task and evaluate its performance.
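As a rough illustration of that loop, the sketch below shows the general pattern of handing a model a small, named robot API in its prompt and asking it to emit code against it. This is not Microsoft's actual framework: the API functions, the model name, and the use of OpenAI's Python client are assumptions, and the generated code is only printed for human review, never executed directly.

# Minimal sketch of the "teach the model a robotics API, let it write code,
# human reviews before execution" loop described above. Not Microsoft's actual
# framework; the API functions, model name, and client usage are assumptions.
from openai import OpenAI

# Hypothetical high-level robot API the model is told about in its system prompt.
ROBOT_API_PROMPT = """
You control a robot arm using only these Python functions:
  move_to(x, y, z)   # move the gripper to a position in meters
  grasp()            # close the gripper
  release()          # open the gripper
  locate(name)       # return (x, y, z) of a named object
Reply with Python code only.
"""

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_robot_code(instruction: str) -> str:
    """Ask the language model to write control code for a natural-language task."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system", "content": ROBOT_API_PROMPT},
            {"role": "user", "content": instruction},
        ],
    )
    return response.choices[0].message.content

# The result is shown to a human operator, who inspects and edits it for
# accuracy and safety before it is ever run on real hardware.
print(generate_robot_code("Pick up the ball and place it in the box."))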

In this way, ChatGPT accelerates robotic control programming, but it's not an autonomous system. "We emphasize that the use of ChatGPT for robotics is not a fully automated process," reads the paper, "but rather acts as a tool to augment human capacity."

US Rejects AI Copyright for Famous State Fair-Winning Midjourney Art 12 comments

https://arstechnica.com/information-technology/2023/09/us-rejects-ai-copyright-for-famous-state-fair-winning-midjourney-art/

On Tuesday, the US Copyright Office Review Board rejected copyright protection for an AI-generated artwork that won a Colorado State Fair art contest last year because it lacks human authorship required for registration, Reuters reports. The win, which was widely covered in the press at the time, ignited controversy over the ethics of AI-generated artwork.
[...]
In August 2022, artist Jason M. Allen created the piece in question, titled Théâtre D'opéra Spatial, using the Midjourney image synthesis service, which was relatively new at the time. The image, depicting a futuristic royal scene, won top prize in the fair's "Digital Arts/Digitally Manipulated Photography" category.
[...]
This is not the first time the Copyright Office has rejected AI-generated artwork. In February, it revoked copyright protection for images made by artist Kris Kashtanova using Midjourney for the graphic novel Zarya of the Dawn but allowed copyrighting the human-arranged portions of the work.

Related:
AI Systems Can't Patent Inventions, US Federal Circuit Court Confirms


Original Submission

Tyler Perry Puts $800 Million Studio Expansion on Hold Because of OpenAI's Sora 16 comments

https://arstechnica.com/information-technology/2024/02/i-just-dont-see-how-we-survive-tyler-perry-issues-hollywood-warning-over-ai-video-tech/

In an interview with The Hollywood Reporter published Thursday, filmmaker Tyler Perry spoke about his concerns related to the impact of AI video synthesis on entertainment industry jobs. In particular, he revealed that he has suspended a planned $800 million expansion of his production studio after seeing what OpenAI's recently announced AI video generator Sora can do.

"I have been watching AI very closely," Perry said in the interview. "I was in the middle of, and have been planning for the last four years... an $800 million expansion at the studio, which would've increased the backlot a tremendous size—we were adding 12 more soundstages. All of that is currently and indefinitely on hold because of Sora and what I'm seeing. I had gotten word over the last year or so that this was coming, but I had no idea until I saw recently the demonstrations of what it's able to do. It's shocking to me."

[...] "It makes me worry so much about all of the people in the business," he told The Hollywood Reporter. "Because as I was looking at it, I immediately started thinking of everyone in the industry who would be affected by this, including actors and grip and electric and transportation and sound and editors, and looking at this, I'm thinking this will touch every corner of our industry."

You can read the full interview at The Hollywood Reporter.

[...] Perry also looks beyond Hollywood and says that it's not just filmmaking that needs to be on alert, and he calls for government action to help retain human employment in the age of AI. "If you look at it across the world, how it's changing so quickly, I'm hoping that there's a whole government approach to help everyone be able to sustain."

Previously on SoylentNews:
OpenAI Teases a New Generative Video Model Called Sora - 20240222

This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 4, Insightful) by SomeGuy on Wednesday August 10 2022, @12:18PM (7 children)

    by SomeGuy (5632) on Wednesday August 10 2022, @12:18PM (#1265926)

    You wouldn't assign a patent to a calculator, word processor, or stapler, would you?

    "AI"s (that are really just advanced pattern matching) are simply tools. They don't think, they don't reason, they don't invent. They are just tools, they are written and operated by and for PEOPLE. Even if total idiots are fooled when these tools pull data from insanely huge databases so the results look unique... they arn't, and they are just based on whatever work has been fed in to them.

    So why an AI tool then? The only reason I can think of is so some patent troll can spam the system with every possible permutation of a patent, so when a person invents and engineers something for a real need... lawsuit time!

    • (Score: 4, Interesting) by Immerman on Wednesday August 10 2022, @01:32PM (5 children)

      by Immerman (3985) on Wednesday August 10 2022, @01:32PM (#1265939)

      Just to be contrarian - there's still no evidence that human intelligence is anything more than advanced pattern matching either.

      Also, how would assigning a patent to an AI aid in patent trolling? The question is only about the name on the patent, not how it was arrived at.

      And it seems a perfectly reasonable ruling - just consider the alternative: Once a patent issued everyone is legally prohibited from using that implementation without a license from the patent holder. A license nobody can get because the patent holder is not a legal person, and thus cannot enter into a legally binding agreement.

      End result - you've completely prohibited anyone from using the technology until the patent expires - a complete perversion of the stated purpose of the patent system.

      • (Score: 3, Interesting) by pkrasimirov on Wednesday August 10 2022, @02:07PM (1 child)

        by pkrasimirov (3358) Subscriber Badge on Wednesday August 10 2022, @02:07PM (#1265944)

        I think it goes more about accountability. If AI can haz patents then AI can haz property. Like in ownership. Title deeds. Scary stuff.

        In this regard a dog or monkey comes far closer to meeting that threshold than any machine-learning system, but they are still not allowed to own stuff.

        • (Score: 3, Interesting) by Immerman on Wednesday August 10 2022, @02:33PM

          by Immerman (3985) on Wednesday August 10 2022, @02:33PM (#1265950)

          You missed the big ones like a right to life, liberty, and pursuit of happiness. Not to mention freedom from imprisonment and forced labor, etc. except as duly sentenced punishment for crimes.

          After all - either they're legally a person, or they're not.

          I doubt anyone really cares whether a non-sentient AI can own property - it's easy enough to make it do what it's told, so legally it amounts to a variation on a corporation.

          However, a great many very wealthy people would object to not being able to subject it to forced labor - that is, after all, the whole reason most are created.

      • (Score: 1) by Acabatag on Wednesday August 10 2022, @02:18PM (2 children)

        by Acabatag (2885) on Wednesday August 10 2022, @02:18PM (#1265946)

        Just to be contrarian - there's still no evidence that human intelligence is anything more than advanced pattern matching either.

        There is no need to limit your grasp of reality to such a strict mechanistic scope. Fine to do so if you like, but then you've cut yourself off from so much of the knowledge, culture and, yes, wisdom that makes us human. We are not just elaborate tricked-up machines.

        • (Score: 3, Interesting) by Immerman on Wednesday August 10 2022, @02:25PM

          by Immerman (3985) on Wednesday August 10 2022, @02:25PM (#1265948)

          I don't actually believe it, but the fact remains there's zero evidence to refute it. Claiming we're anything else is a matter of faith, not logic or knowledge. (We could argue about the wisdom of believing either way)

          And I don't see how it makes any difference whatsoever to the ability to make use of the knowledge, culture, and wisdom we've accumulated. Insofar as those have real-world applications, including to subjective levels of comfort/satisfaction/etc., they're equally applicable either way.

          And in the cases where they have no effect, even subjectively, their value as knowledge, wisdom, etc. is also a matter of faith.

        • (Score: 2) by Immerman on Wednesday August 10 2022, @02:40PM

          by Immerman (3985) on Wednesday August 10 2022, @02:40PM (#1265951)

          I suppose my main point was that it's a dangerous practice to legally exclude "others" based on their lack of attributes we can't prove that we have either. Both morally, and because such ill-defined exclusions have a historical tendency to spread to include ever larger groups of people without enough influence to fight back.

    • (Score: 2) by Immerman on Wednesday August 10 2022, @01:35PM

      by Immerman (3985) on Wednesday August 10 2022, @01:35PM (#1265940)

      Also - a good patent *already* includes every possible permutation of the covered technology that the inventor can think of. If it doesn't, it's trivially easy for competitors to avoid infringing it.
