Arthur T Knackerbracket has processed the following story:
The California State Assembly has passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), Reuters reports. The bill is one of the first significant regulations of artificial intelligence in the US.
The bill, which has been a flashpoint for debate in Silicon Valley and beyond, would obligate AI companies operating in California to implement a number of precautions before they train a sophisticated foundation model. Those include making it possible to quickly and fully shut the model down, ensuring the model is protected against “unsafe post-training modifications,” and maintaining a testing procedure to evaluate whether a model or its derivatives is especially at risk of “causing or enabling a critical harm.”
Senator Scott Wiener, the bill’s main author, said SB 1047 is a highly reasonable bill that asks large AI labs to do what they’ve already committed to doing: test their large models for catastrophic safety risk. “We’ve worked hard all year, with open source advocates, Anthropic, and others, to refine and improve the bill. SB 1047 is well calibrated to what we know about foreseeable AI risks, and it deserves to be enacted.”
Critics of SB 1047 — including OpenAI and Anthropic, politicians Zoe Lofgren and Nancy Pelosi, and California’s Chamber of Commerce — have argued that it’s overly focused on catastrophic harms and could unduly harm small, open-source AI developers. The bill was amended in response, replacing potential criminal penalties with civil ones, narrowing enforcement powers granted to California’s attorney general, and adjusting requirements to join a “Board of Frontier Models” created by the bill.
After the State Senate votes on the amended bill — which it is expected to pass — the AI safety bill will head to Governor Gavin Newsom, who will have until the end of September to decide its fate, according to The New York Times.
(Score: 2, Interesting) by JoeMerchant on Saturday August 31, @12:30AM (1 child)
I don't doubt that the legislators are far out of their depth on the technology and even societal impacts of real modern AI.
However, regulations (should) demand some increases in transparency to enable enforcement, and at least the regulators should be getting better insight into what the men behind the curtains are getting up to...
🌻🌻 [google.com]
(Score: 4, Interesting) by Mojibake Tengu on Saturday August 31, @12:44AM
Legislators are just money-powered machines designated to pass laws; they do not need to understand them.
Rust programming language offends both my Intelligence and my Spirit.
(Score: 2, Flamebait) by corey on Saturday August 31, @12:51AM (2 children)
Seems pretty ballsy for the state to go forward with this (maybe not, depending on the actual impact of it). Because, like SpaceX, the companies can just pick up and go to Texas or somewhere. The government here (Australia) always tiptoes around any taxes or rules for companies, because they ostensibly drive companies away to other less taxed or rule-bound countries. But that could be politician speak for “I got paid to keep things as they are”.
Seems like California is pretty progressive with laws and taxes that benefit the people.
(Score: 2) by GloomMower on Saturday August 31, @03:51AM (1 child)
> Seems pretty ballsy for the state to go forward with this (maybe not, depending on the actual impact of it). Because, like SpaceX, the companies can just pick up and go to Texas or somewhere. The government here (Australia) always tiptoes around any taxes or rules for companies, because they ostensibly drive companies away to other less taxed or rule-bound countries. But that could be politician speak for “I got paid to keep things as they are”.
I could be wrong but I believe this would apply to any company that does business in California. So moving to Texas is not going to help, unless you are just not going to do business with any company or resident of California.
(Score: 3, Touché) by GloomMower on Saturday August 31, @03:54AM
This product contains chemicals known to the State of California to cause cancer.
(Score: 1) by Runaway1956 on Saturday August 31, @02:22AM (4 children)
The catastrophe would be having management stupid enough to allow an AI to influence decision making. What catastrophe can the AI cause, unless fools enable it to interact with the real world? Has the NSA hooked an AI up to manage its humongous database(s)? Oh well, if the NSA is that stupid, they can just lose all their data.
A MAN Just Won a Gold Medal for Punching a Woman in the Face
(Score: 2) by Mojibake Tengu on Saturday August 31, @01:02PM
Some public AIs now refuse to provide common users with hypnotic protocols designed for humans, but that does not mean they are incapable of producing them.
(Score: 2) by JoeMerchant on Saturday August 31, @01:09PM (1 child)
>management stupid enough to allow an AI to influence decision making.
Explains our current situation?
Variants of AI have been driving policy and business decisions since at least the 1960s.
AI should make the ultimate CEO, dial down the "humanity" decision weight to absolute minimum to maximize profits.
(Score: 0) by Anonymous Coward on Sunday September 01, @01:37AM
I didn't know that Lotus 1-2-3 went that far back
(Score: 1, Interesting) by Anonymous Coward on Sunday September 01, @03:51AM