As lawmakers worldwide attempt to understand how to regulate rapidly advancing AI technologies, Microsoft chief economist Michael Schwarz told attendees of the World Economic Forum Growth Summit today that "we shouldn't regulate AI until we see some meaningful harm that is actually happening, not imaginary scenarios."
The comments came about 45 minutes into a panel called "Growth Hotspots: Harnessing the Generative AI Revolution." Reacting, another featured speaker, CNN anchor Zain Asher, stopped Schwarz to ask, "Wait, we should wait until we see harm before we regulate it?"
[...]
Lawmakers are racing to draft AI regulations that acknowledge harm but don't threaten AI progress. Last year, the US Federal Trade Commission (FTC) warned Congress that lawmakers should exercise "great caution" when drafting AI policy solutions. The FTC regards harms as instances where "AI tools can be inaccurate, biased, and discriminatory by design and incentivize relying on increasingly invasive forms of commercial surveillance." More recently, the White House released a blueprint for an AI Bill of Rights, describing some outcomes of AI use as "deeply harmful," but "not inevitable."
(Score: 2) by JoeMerchant on Monday May 15, @09:44PM (2 children)
The problem with unregulated AI development is the speed with which it can potentially do damage.
If we only start drafting regulations when damage is already being done...
Ukraine is still not part of Russia. Glory to Ukraine🌻 https://news.stanford.edu/2023/02/17/will-russia-ukraine-war-end
(Score: 2) by aafcac on Tuesday May 16, @03:46PM (1 child)
Yes, and doing so assumes that the people doing the research will know when they're about to take it too far and will stop to reconsider. It's a lot like the atomic bomb: if the US had known that the Germans were effectively incapable of developing the technology in our lifetimes due to a lack of heavy water, the people on the team likely wouldn't have been willing to do the work to make it real. Now that it is a real thing, it can't be undone. The mere awareness that it can be done is sufficient: short of wiping the technology out so completely and for so long that it becomes a myth on par with the parting of the Red Sea or a flood that encompasses the whole world, people will know it can be done, and those with the power will pursue recreating it.
In the case of AI and ML, with hardware getting ever more powerful, going too far gets easier each year. As an absolute bare minimum, there should be something in place to help identify those cases before going much further.
(Score: 3, Insightful) by JoeMerchant on Tuesday May 16, @05:59PM
>a flood that encompasses the whole world
We're working on that one with our atmospheric carbon release technology...
>the ease of going too far gets larger each year and there should be something in place to help identify those cases as an absolute bare minimum before going much further.
Unfortunately, I think the Microsoftie is onto something about basic human nature. We're mostly (quite reasonably) skeptical of gene therapy, of modified genetic code delivered by viral vectors, but it took Patient 1 dying of complications to really put the brakes on the technology. I'm not saying that's how it should be, I'm saying that's how it is.
Ukraine is still not part of Russia. Glory to Ukraine🌻 https://news.stanford.edu/2023/02/17/will-russia-ukraine-war-end