As lawmakers worldwide attempt to understand how to regulate rapidly advancing AI technologies, Microsoft chief economist Michael Schwarz told attendees of the World Economic Forum Growth Summit today that "we shouldn't regulate AI until we see some meaningful harm that is actually happening, not imaginary scenarios."
The comments came about 45 minutes into a panel titled "Growth Hotspots: Harnessing the Generative AI Revolution." In response, fellow panelist and CNN anchor Zain Asher stopped Schwarz to ask, "Wait, we should wait until we see harm before we regulate it?"
[...]
Lawmakers are racing to draft AI regulations that acknowledge harm but don't threaten AI progress. Last year, the US Federal Trade Commission (FTC) warned Congress that lawmakers should exercise "great caution" when drafting AI policy solutions. The FTC regards harms as instances where "AI tools can be inaccurate, biased, and discriminatory by design and incentivize relying on increasingly invasive forms of commercial surveillance." More recently, the White House released a blueprint for an AI Bill of Rights, describing some outcomes of AI use as "deeply harmful," but "not inevitable."
(Score: 2, Insightful) by khallow on Tuesday May 16, @01:47AM (3 children)
No, because we aren't competent enough to get ahead of the curve.
(Score: 2) by aafcac on Tuesday May 16, @03:50PM (2 children)
We don't have to be ahead of the curve; that's why regulations should focus on slowing the curve enough that it can be studied carefully as we go. We've already got Tesla cars murdering people in the streets because the AI isn't advanced enough to judge how far away motorcycles are or to properly handle Jersey barriers at exits. And that's how things are going to be, if we're lucky, since the companies doing the development are racing to be first to market and have little idea how advanced any of the competition is.
Technology isn't something whose impact we can always predict. I doubt that when Freon-based refrigerants or asbestos were introduced, anybody had any idea how large a problem they would become. Likewise, look at all the things computers are being used for that likely weren't predicted decades back, when the first mainframes were punch-card based and extremely slow. But we can mandate that measures be put in place to keep these systems from causing harm while we figure out how to design them so they don't do dangerous things, or give us something we ask for but shouldn't be asking for.
(Score: 1) by khallow on Tuesday May 16, @09:35PM
Why should we? What's the evidence for this need? This "slowing the curve" method is another threat to our future like AI or asbestos, yet we're not taking proper precautions with it.
(Score: 2, Informative) by khallow on Tuesday May 16, @09:39PM
As an aside, we already have systems for dealing with technology that doesn't work right. Slowing AI isn't needed when people can just sue Tesla over its murderous cars, and there's even the possibility of actual criminal charges if gross negligence can be shown.