The EU's AI Act Could Have a Chilling Effect on Open Source Efforts, Experts Warn
posted by janrinok on Monday September 12 2022, @06:33AM
from the can-we-define-what-"AI"-is-first? dept.

The EU's AI Act could have a chilling effect on open source efforts, experts warn:

The nonpartisan think tank Brookings this week published a piece decrying the bloc's regulation of open source AI, arguing it would create legal liability for general-purpose AI systems while simultaneously undermining their development. Under the EU's draft AI Act, open source developers would have to adhere to guidelines for risk management, data governance, technical documentation and transparency, as well as standards of accuracy and cybersecurity.

If a company were to deploy an open source AI system that led to some disastrous outcome, the author asserts, it's not inconceivable that the company could attempt to deflect responsibility by suing the developers of the open source software on which they built their product.

"This could further concentrate power over the future of AI in large technology companies and prevent research that is critical to the public's understanding of AI," Alex Engler, the analyst at Brookings who published the piece, wrote. "In the end, the [E.U.'s] attempt to regulate open-source could create a convoluted set of requirements that endangers open-source AI contributors, likely without improving use of general-purpose AI."

In 2021, the European Commission — the EU's politically independent executive arm — released the text of the AI Act, which aims to promote "trustworthy AI" deployment in the EU. As they solicit input from industry ahead of a vote this fall, EU institutions are seeking to make amendments to the regulations that attempt to balance innovation with accountability. But according to some experts, the AI Act as written would impose onerous requirements on open efforts to develop AI systems.

In a recent example, Stable Diffusion, an open source AI system that generates images from text prompts, was released with a license prohibiting certain types of content. But it quickly found an audience within communities that use such AI tools to create pornographic deepfakes of celebrities.

[...] Not every practitioner believes the AI Act is in need of further amending. Mike Cook, an AI researcher who's a part of the Knives and Paintbrushes collective, thinks it's "perfectly fine" to regulate open source AI "a little more heavily" than needed. Setting any sort of standard can be a way to show leadership globally, he posits — hopefully encouraging others to follow suit.

"The fearmongering about 'stifling innovation' comes mostly from people who want to do away with all regulation and have free rein, and that's generally not a view I put much stock into," Cook said. "I think it's okay to legislate in the name of a better world, rather than worrying about whether your neighbour is going to regulate less than you and somehow profit from it."

To wit, as my colleague Natasha Lomas has previously noted, the EU's risk-based approach lists several prohibited uses of AI (e.g. China-style state social credit scoring) while imposing restrictions on AI systems considered to be "high-risk" — like those having to do with law enforcement. If the regulations were to target product types as opposed to product categories (as Oren Etzioni, founding CEO of the Allen Institute for AI, argues they should), it might require thousands of regulations — one for each product type — leading to conflict and even greater regulatory uncertainty.

[...] "Open innovation and responsible innovation in the AI realm are not mutually exclusive ends, but rather complementary ones," Delangue, Ferrandis and Solaiman said. "The intersection between both should be a core target for ongoing regulatory efforts, as it is being right now for the AI community."

That may well be achievable. Given the many moving parts involved in EU rulemaking (not to mention the stakeholders affected by it), it'll likely be years before AI regulation in the bloc starts to take shape.


Original Submission

Related Stories

You Can Now Run a GPT-3-Level AI Model on Your Laptop, Phone, and Raspberry Pi

https://arstechnica.com/information-technology/2023/03/you-can-now-run-a-gpt-3-level-ai-model-on-your-laptop-phone-and-raspberry-pi/

Things are moving at lightning speed in AI Land. On Friday, a software developer named Georgi Gerganov created a tool called "llama.cpp" that can run Meta's new GPT-3-class AI large language model, LLaMA, locally on a Mac laptop. Soon thereafter, people worked out how to run LLaMA on Windows as well. Then someone showed it running on a Pixel 6 phone, and next came a Raspberry Pi (albeit running very slowly).
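
For the curious, the recipe from llama.cpp's early README went roughly as follows. This is a sketch, not a current guide: the conversion scripts and flags have changed since March 2023, the LLaMA weights are not included and must be obtained separately, and the 7B model paths here are illustrative.

    # Build llama.cpp from source
    git clone https://github.com/ggerganov/llama.cpp
    cd llama.cpp
    make

    # Convert the original LLaMA 7B weights to ggml f16 format, then quantize to 4 bits
    python3 convert-pth-to-ggml.py models/7B/ 1
    ./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin 2

    # Run inference locally with a short prompt
    ./main -m ./models/7B/ggml-model-q4_0.bin -p "The first man on the moon was" -n 128

The 4-bit quantization step is what makes laptop- and phone-class hardware feasible: it shrinks the 7B model from roughly 13 GB at 16-bit precision to around 4 GB, at some cost in output quality.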

If this keeps up, we may be looking at a pocket-sized ChatGPT competitor before we know it.
[...]
For example, here's a list of notable LLaMA-related events, based on a timeline Simon Willison laid out in a Hacker News comment:

Related:
DuckDuckGo's New Wikipedia Summary Bot: "We Fully Expect It to Make Mistakes"
Robots Let ChatGPT Touch the Real World Thanks to Microsoft (Article has a bunch of other SoylentNews related links as well.)
Netflix Stirs Fears by Using AI-Assisted Background Art in Short Anime Film
Paper: Stable Diffusion "Memorizes" Some Images, Sparking Privacy Concerns
The EU's AI Act Could Have a Chilling Effect on Open Source Efforts, Experts Warn
Pixel Art Comes to Life: Fan Upgrades Classic MS-DOS Games With AI


Original Submission

A Turning Point for U.S. AI Policy: Senate Explores Solutions

Almost 20 years ago, Senator Ted Stevens was widely mocked for referring to the Internet as "a series of tubes," even though he led the Senate Commerce Committee, which was responsible for regulating it. And just a few years ago, members of Congress were mocked for their lack of understanding of Facebook's business model when Mark Zuckerberg testified about the Cambridge Analytica scandal.

Fast forward to this week, when the Senate Judiciary Committee held one of the most productive Congressional hearings in many years, taking up the challenge of how to regulate the emerging AI revolution. This time around, the senators were well-prepared, knowledgeable and engaged. Over at ACM, Marc Rotenberg, a former Staff Counsel for the Senate Judiciary Committee, has a good assessment of the hearing that notes the highlights and warning signs:

It is easy for a Congressional hearing to spin off in many directions, particularly with a new topic. Senator Blumenthal set out three AI guardrails—transparency, accountability, and limitations on use—that resonated with the AI experts and anchored the discussion. As Senator Blumenthal said at the opening, "This is the first in a series of hearings to write the rules of AI. Our goal is to demystify and hold accountable those new technologies and avoid some of the mistakes of the past."

Congress has struggled in recent years because of increasing polarization. That makes it difficult for members of different parties, even when they agree, to move forward with legislation. In the early days of U.S. AI policy, Dr. Lorraine Kisselburgh and I urged bipartisan support for such initiatives as the OSTP AI Bill of Rights. In January, President Biden called for non-partisan legislation for AI. The Senate hearing on AI was a model of bipartisan cooperation, with members of the two parties expressing similar concerns and looking for opportunities for agreement.

This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 4, Touché) by Rosco P. Coltrane (4757) on Monday September 12 2022, @10:18AM (#1271293)

    The nonpartisan think tank

    Ha ha ha that was funny.

    Okay next article.

  • (Score: 2) by PiMuNu (3823) on Monday September 12 2022, @10:41AM (#1271294)

    The AI folks can offer a job to EU officials on their "public policy advisory board"... seems to be the ticket to move EU policy in the right direction.

  • (Score: 3, Insightful) by khallow (3766) on Monday September 12 2022, @01:41PM (#1271314)

    Not every practitioner believes the AI Act is in need of further amending. Mike Cook, an AI researcher who's a part of the Knives and Paintbrushes collective, thinks it's "perfectly fine" to regulate open source AI "a little more heavily" than needed. Setting any sort of standard can be a way to show leadership globally, he posits — hopefully encouraging others to follow suit.

    "The fearmongering about 'stifling innovation' comes mostly from people who want to do away with all regulation and have free rein, and that's generally not a view I put much stock into," Cook said. "I think it's okay to legislate in the name of a better world, rather than worrying about whether your neighbour is going to regulate less than you and somehow profit from it."

    In other words, appearance over reality. I'd put more stock in the people advocating for more regulation if they would at least give the appearance of caring about whether the regulation would make a better world or not. When it's more important to have a precedent than viable, useful law...

    • (Score: 2, Insightful) by khallow (3766) on Monday September 12 2022, @03:40PM (#1271337)
      Also, I should have highlighted "I think it's okay to legislate in the name of a better world". I want that better world, not its name. We already have too many such places which exist in name only.
  • (Score: 3, Interesting) by DannyB (5839) on Monday September 12 2022, @02:53PM (#1271333)

    If a company were to deploy an open source AI system that led to some disastrous outcome, the author asserts, it's not inconceivable that the company could attempt to deflect responsibility by suing the developers of the open source software on which they built their product.

    If the disastrous outcome were the death of all humans, then the issue of regulation would be moot. There would be no human-run courts to hear any lawsuit against open source developers, and no open source developers left to sue. Thus regulation is not needed.

    If the disastrous outcome they imagine is that AI steers people to misinformation, and some people are stupid enough to believe it, then it is already too late.

    --
    Fact: We get heavier as we age due to more information in our heads. When no more will fit it accumulates as fat.
    • (Score: 2) by c0lo (156) on Monday September 12 2022, @07:45PM (#1271399)

      If the disastrous outcome they imagine is that AI steers people to misinformation, and some people are stupid enough to believe it...

      Heh, early 2008, "home prices never go down".
      No need for an AI, plenty exhibited this natural stupidity.

      , then it is already too late

      Ummm... you were saying...?

      --
      https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
      • (Score: 2) by DannyB (5839) on Tuesday September 13 2022, @04:10PM (#1271489)

        The age of men is over. The time of the Orc has come.

        --
        Fact: We get heavier as we age due to more information in our heads. When no more will fit it accumulates as fat.
  • (Score: 3, Interesting) by acid andy (1683) on Tuesday September 13 2022, @12:13AM (#1271413)

    What attributes does a piece of software need to be classified as "AI" under this act? Would it include the various forms of AI in computer games, for example? If it does, this could be a bigger problem.

    --
    Welcome to Edgeways. Words should apply in advance as spaces are highly limite—