As lawmakers worldwide attempt to understand how to regulate rapidly advancing AI technologies, Microsoft chief economist Michael Schwarz told attendees of the World Economic Forum Growth Summit today that "we shouldn't regulate AI until we see some meaningful harm that is actually happening, not imaginary scenarios."
The comments came about 45 minutes into a panel called "Growth Hotspots: Harnessing the Generative AI Revolution." In response, fellow panelist and CNN anchor Zain Asher stopped Schwarz to ask, "Wait, we should wait until we see harm before we regulate it?"
[...]
Lawmakers are racing to draft AI regulations that acknowledge harm but don't threaten AI progress. Last year, the US Federal Trade Commission (FTC) warned Congress that lawmakers should exercise "great caution" when drafting AI policy solutions. The FTC regards harms as instances where "AI tools can be inaccurate, biased, and discriminatory by design and incentivize relying on increasingly invasive forms of commercial surveillance." More recently, the White House released a blueprint for an AI Bill of Rights, describing some outcomes of AI use as "deeply harmful," but "not inevitable."
(Score: 2) by krishnoid on Monday May 15, @09:26PM (9 children)
"What do you think about this article: [TFA link]":
"Thank you"
You're welcome. I'm glad I could be of assistance.
(Score: 3, Touché) by RS3 on Monday May 15, @11:23PM (7 children)
An AI says that AI development is okay. No conflict of interest to see here, move along. :)
(Score: 1) by khallow on Tuesday May 16, @01:46AM (6 children)
Why is it ok for a human with conflicts of interest to make an argument that supports those interests, but not ok for an AI?
(Score: 0) by Anonymous Coward on Tuesday May 16, @03:18AM
And the $64K question: this was Google Bard, so why didn't it slam anything to do with Microsoft, one of Google's competitors?
(Score: 2) by RS3 on Tuesday May 16, @06:49AM (4 children)
I'm not following you. Can you give me an example?
(Score: 1) by khallow on Tuesday May 16, @02:25PM (3 children)
And if a Soylentil asks for your favorite text editor and why is it your favorite, are you expected to argue for a text editor that you don't favor?
(Score: 2) by RS3 on Tuesday May 16, @03:33PM (2 children)
I see your point and perspective. I'll have to think about that. It seems the natural order of things, you know, basic instinct / self protection / survival. Is there some other way things should happen that might be better?
(Score: 2, Informative) by khallow on Tuesday May 16, @09:25PM (1 child)
A classic NASA example happened in 2005, when a study for selecting a new launch vehicle found that a Space Shuttle-like vehicle ("Shuttle stack" is a typical name for this configuration) was best by safety and performance standards. This was shown to be a lie when an appendix was released a few years later under a FOIA request (it had previously been withheld on the grounds that its release would have violated NDAs that NASA had allegedly made with the contractors), revealing that the study had deliberately bent its safety and performance standards in favor of the Shuttle stack. Numerous problems had come up with the configuration when they were building the prototype (very high vibration, high acceleration, high air pressure during early launch or "max Q", a massive crawler needed to move the vehicle from the integration facility to the launch pad, the need for a very aggressive launch abort/escape system, etc). Almost all of these problems had been foretold in the appendix!
The gimmick here was that the report was presented as impartial when the hidden part showed that the study had a heavy bias toward the configuration that was eventually selected. It was part of the theater of the time, an attempt to legitimize the eventual choice. If the appendix had been publicly viewable from the very beginning, the bias would have been obvious and the selection process contested.
(Score: 2, Insightful) by khallow on Tuesday May 16, @09:32PM
And that argument is given more weight because of the perception of lack of bias.
(Score: 2) by inertnet on Tuesday May 16, @09:08AM
Funny that it's obviously programmed to respond as if it were a human, with all the "I think" and "we need" and talk about AI as "them". It reminds me of the Data character from Star Trek; it's even funnier if you imagine his voice when reading these AI responses.