
from the can-we-define-what-"AI"-is-first? dept.
The EU's AI Act could have a chilling effect on open source efforts, experts warn:
The nonpartisan think tank Brookings this week published a piece decrying the bloc's regulation of open source AI, arguing it would create legal liability for general-purpose AI systems while simultaneously undermining their development. Under the EU's draft AI Act, open source developers would have to adhere to guidelines for risk management, data governance, technical documentation and transparency, as well as standards of accuracy and cybersecurity.
If a company were to deploy an open source AI system that led to some disastrous outcome, the author asserts, it's not inconceivable the company could attempt to deflect responsibility by suing the open source developers on which they built their product.
"This could further concentrate power over the future of AI in large technology companies and prevent research that is critical to the public's understanding of AI," Alex Engler, the analyst at Brookings who published the piece, wrote. "In the end, the [E.U.'s] attempt to regulate open-source could create a convoluted set of requirements that endangers open-source AI contributors, likely without improving use of general-purpose AI."
In 2021, the European Commission — the EU's politically independent executive arm — released the text of the AI Act, which aims to promote "trustworthy AI" deployment in the EU. As they solicit input from industry ahead of a vote this fall, EU institutions are seeking to make amendments to the regulations that attempt to balance innovation with accountability. But according to some experts, the AI Act as written would impose onerous requirements on open efforts to develop AI systems.
In a recent example, Stable Diffusion, an open source AI system that generates images from text prompts, was released with a license prohibiting certain types of content. But it quickly found an audience within communities that use such AI tools to create pornographic deepfakes of celebrities.
[...] Not every practitioner believes the AI Act is in need of further amending. Mike Cook, an AI researcher who's a part of the Knives and Paintbrushes collective, thinks it's "perfectly fine" to regulate open source AI "a little more heavily" than needed. Setting any sort of standard can be a way to show leadership globally, he posits — hopefully encouraging others to follow suit.
"The fearmongering about 'stifling innovation' comes mostly from people who want to do away with all regulation and have free rein, and that's generally not a view I put much stock into," Cook said. "I think it's okay to legislate in the name of a better world, rather than worrying about whether your neighbour is going to regulate less than you and somehow profit from it."
To wit, as my colleague Natasha Lomas has previously noted, the EU's risk-based approach lists several prohibited uses of AI (e.g. China-style state social credit scoring) while imposing restrictions on AI systems considered to be "high-risk" — like those having to do with law enforcement. If the regulations were to target product types as opposed to product categories (as Etzioni argues they should), it might require thousands of regulations — one for each product type — leading to conflict and even greater regulatory uncertainty.
[...] "Open innovation and responsible innovation in the AI realm are not mutually exclusive ends, but rather complementary ones," Delangue, Ferrandis and Solaiman said. "The intersection between both should be a core target for ongoing regulatory efforts, as it is being right now for the AI community."
That may well be achievable. Given the many moving parts involved in EU rulemaking (not to mention the stakeholders affected by it), it'll likely be years before AI regulation in the bloc starts to take shape.
Related Stories
Things are moving at lightning speed in AI Land. On Friday, a software developer named Georgi Gerganov created a tool called "llama.cpp" that can run Meta's new GPT-3-class large language model, LLaMA, locally on a Mac laptop. Soon thereafter, people worked out how to run LLaMA on Windows as well. Then someone showed it running on a Pixel 6 phone, and next came a Raspberry Pi (albeit running very slowly).
If this keeps up, we may be looking at a pocket-sized ChatGPT competitor before we know it.
[...]
For example, here's a list of notable LLaMA-related events based on a timeline Willison laid out in a Hacker News comment:
- February 24, 2023: Meta AI announces LLaMA.
- March 2, 2023: Someone leaks the LLaMA models via BitTorrent.
- March 10, 2023: Georgi Gerganov creates llama.cpp, which can run on an M1 Mac.
- March 11, 2023: Artem Andreenko runs LLaMA 7B (slowly) on a Raspberry Pi 4, 4GB RAM, 10 sec/token.
- March 12, 2023: LLaMA 7B running on NPX, a node.js execution tool.
- March 13, 2023: Someone gets llama.cpp running on a Pixel 6 phone, also very slowly.
- March 13, 2023: Stanford releases Alpaca 7B, an instruction-tuned version of LLaMA 7B that "behaves similarly to OpenAI's text-davinci-003" but runs on much less powerful hardware.
Related:
DuckDuckGo's New Wikipedia Summary Bot: "We Fully Expect It to Make Mistakes"
Robots Let ChatGPT Touch the Real World Thanks to Microsoft (Article has a bunch of other SoylentNews related links as well.)
Netflix Stirs Fears by Using AI-Assisted Background Art in Short Anime Film
Paper: Stable Diffusion "Memorizes" Some Images, Sparking Privacy Concerns
The EU's AI Act Could Have a Chilling Effect on Open Source Efforts, Experts Warn
Pixel Art Comes to Life: Fan Upgrades Classic MS-DOS Games With AI
Almost 20 years ago, Senator Ted Stevens was widely mocked and ridiculed for referring to the Internet as a series of tubes, even though he led the Senate Commerce Committee, which was responsible for regulating it. And just several years ago, members of Congress were mocked for their lack of understanding of Facebook's business model when Mark Zuckerberg testified about the Cambridge Analytica scandal.
Fast forward to this week, when the Senate Judiciary Committee held one of the most productive hearings in Congress in many years, taking up the challenge of how to regulate the emerging AI revolution. This time around, the senators were well-prepared, knowledgeable and engaged. Over at ACM, Marc Rotenberg, a former Staff Counsel for the Senate Judiciary Committee has a good assessment of the meeting that notes the highlights and warning signs:
It is easy for a Congressional hearing to spin off in many directions, particularly with a new topic. Senator Blumenthal set out three AI guardrails—transparency, accountability, and limitations on use—that resonated with the AI experts and anchored the discussion. As Senator Blumenthal said at the opening, "This is the first in a series of hearings to write the rules of AI. Our goal is to demystify and hold accountable those new technologies and avoid some of the mistakes of the past."
Congress has struggled in recent years because of increasing polarization. That makes it difficult for members of different parties, even when they agree, to move forward with legislation. In the early days of U.S. AI policy, Dr. Lorraine Kisselburgh and I urged bipartisan support for such initiatives as the OSTP AI Bill of Rights. In January, President Biden called for non-partisan legislation for AI. The Senate hearing on AI was a model of bipartisan cooperation, with members of the two parties expressing similar concerns and looking for opportunities for agreement.
(Score: 4, Touché) by Rosco P. Coltrane on Monday September 12 2022, @10:18AM
Ha ha ha that was funny.
Okay next article.
(Score: 2) by PiMuNu on Monday September 12 2022, @10:41AM
The AI folks can offer a job to EU officials on their "public policy advisory board"... seems to be the ticket to move EU policy in the right direction.
(Score: 3, Insightful) by khallow on Monday September 12 2022, @01:41PM (1 child)
In other words, appearance over reality. I'd put more stock in the people advocating for more regulation, if they would at least give the appearance of caring about whether the regulation would make a better world or not. When it's more important to have a precedent than viable, useful law...
(Score: 3, Interesting) by DannyB on Monday September 12 2022, @02:53PM (2 children)
If some disastrous outcome were the death of all humans, then the issue of regulation becomes moot. There would be no human-run courts to hear any lawsuit against open source developers. There would be no open source developers to sue. Thus regulation is not needed.
If the disastrous outcome they imagine is that AI steers people to misinformation, and some people are stupid enough to believe it, then it is already too late.
Fact: We get heavier as we age due to more information in our heads. When no more will fit it accumulates as fat.
(Score: 2) by c0lo on Monday September 12 2022, @07:45PM (1 child)
Heh, early 2008, "home prices never go down".
No need for an AI, plenty exhibited this natural stupidity.
Ummm... you were saying...?
https://www.youtube.com/watch?v=aoFiw2jMy-0
(Score: 2) by DannyB on Tuesday September 13 2022, @04:10PM
The age of men is over. The time of the Orc has come.
(Score: 3, Interesting) by acid andy on Tuesday September 13 2022, @12:13AM
What attributes does a piece of software need to be classified as "AI" under this act? Would it include the various forms of AI in computer games, for example? If it does, this could be a bigger problem.