As lawmakers worldwide attempt to understand how to regulate rapidly advancing AI technologies, Microsoft chief economist Michael Schwarz told attendees of the World Economic Forum Growth Summit today that "we shouldn't regulate AI until we see some meaningful harm that is actually happening, not imaginary scenarios."
The comments came about 45 minutes into a panel called "Growth Hotspots: Harnessing the Generative AI Revolution." In response, fellow panelist and CNN anchor Zain Asher stopped Schwarz to ask, "Wait, we should wait until we see harm before we regulate it?"
[...]
Lawmakers are racing to draft AI regulations that acknowledge harm but don't threaten AI progress. Last year, the US Federal Trade Commission (FTC) warned Congress that lawmakers should exercise "great caution" when drafting AI policy solutions. The FTC regards harms as instances where "AI tools can be inaccurate, biased, and discriminatory by design and incentivize relying on increasingly invasive forms of commercial surveillance." More recently, the White House released a blueprint for an AI Bill of Rights, describing some outcomes of AI use as "deeply harmful," but "not inevitable."
(Score: 3, Touché) by RedGreen on Monday May 15, @09:33PM (11 children)
How about we get ahead of the curve for a change and be proactive in preventing harm before it gets established? Instead of chasing the tail and being too far behind to ever catch up to the garbage these tech companies foist on us in the name of "progress".
"I modded down, down, down, and the flames went higher." -- Sven Olsen
(Score: 2) by JoeMerchant on Monday May 15, @09:44PM (2 children)
The problem with unregulated AI development is the speed with which it can potentially do damage.
If we only start drafting regulations when damage is already being done...
Ukraine is still not part of Russia. Glory to Ukraine 🌻 https://news.stanford.edu/2023/02/17/will-russia-ukraine-war-end
(Score: 2) by aafcac on Tuesday May 16, @03:46PM (1 child)
Yes, and doing so assumes that the people doing the research will know when they're about to take it too far and will stop to reconsider. It's a lot like the atomic bomb: if the US had known that the Germans were effectively incapable of developing the technology in our lifetimes due to a lack of heavy water, people on the team likely wouldn't have been willing to do the work to make it real. Now that it is a real thing, it's not something that can be undone. The awareness that it can be done is sufficient to ensure that, short of wiping the technology out so completely and for so long that it becomes a myth on par with the parting of the Red Sea or a flood encompassing the whole world, people will know it can be done, and those with the power will pursue recreating it.
In the case of AI and ML with hardware getting ever more powerful, the ease of going too far gets larger each year and there should be something in place to help identify those cases as an absolute bare minimum before going much further.
(Score: 3, Insightful) by JoeMerchant on Tuesday May 16, @05:59PM
>a flood that encompasses the whole world
We're working on that one with our atmospheric carbon release technology...
>the ease of going too far gets larger each year and there should be something in place to help identify those cases as an absolute bare minimum before going much further.
Unfortunately, I think the Microsoftie is onto something about basic human nature. We're mostly (quite reasonably) skeptical of gene therapy and of modified genetic code delivered by viral vectors, but it took Patient 1 dying of complications to really put the brakes on the technology. I'm not saying that's how it should be; I'm saying that's how it is.
Ukraine is still not part of Russia. Glory to Ukraine 🌻 https://news.stanford.edu/2023/02/17/will-russia-ukraine-war-end
(Score: 0) by Anonymous Coward on Monday May 15, @11:09PM (3 children)
How about we let things play out instead of putting in place a bunch of burdensome regulations that will inevitably hurt the little guy? I don't really care to see stronger copyright laws or restrictions on how much performance a PC can have.
(Score: -1, Troll) by Anonymous Coward on Tuesday May 16, @01:05AM
Jesse? Is that you? https://www.legendsofamerica.com/james-gang/ [legendsofamerica.com]
(Score: 4, Interesting) by RedGreen on Tuesday May 16, @01:42AM (1 child)
"How about we let things play out instead of putting in place a bunch of burdensome regulations that will inevitably hurt the little guy. I don't really care to see stronger copyright laws or restrictions on how much performance a PC can have."
Fuck that and the horse it rode in on too. I am sick of all this letting slimy parasite corporations do whatever the hell they want, no matter who gets hurt by their newest scheme to steal more of our money, invade our privacy some more, and sell us toxic products, half of which destroy the planet. The attitude of "we are only beholden to making more money for our shareholders; fuck society with all the harm we do, that is for the idiot governments to pay for." Well, I for one say enough is enough: they get to be held accountable for the harm they cause, and the time for it has long since passed. We need the corporate death penalty for both the corporation and the people in charge of it. That will change how the cocksuckers go about messing us around and killing countless among us with their greed.
"I modded down, down, down, and the flames went higher." -- Sven Olsen
(Score: 1, Funny) by Anonymous Coward on Tuesday May 16, @02:15AM
Your proposal has been rejected due to Congressional inaction and China. Better luck next paradigm shift.
(Score: 2, Insightful) by khallow on Tuesday May 16, @01:47AM (3 children)
No, because we aren't competent enough to get ahead of the curve.
(Score: 2) by aafcac on Tuesday May 16, @03:50PM (2 children)
We don't have to be ahead of the curve; that's why regulations should be focused on slowing the curve enough that it can be studied carefully as we go forward. We've already got Tesla cars murdering people in the streets because the AI isn't advanced enough to figure out how far away motorcycles are or to properly handle Jersey barriers at exits. That's how things are going to go, if we're lucky, since the companies doing the development are racing to be first to market and have little idea how advanced any of the competition is.
Technology isn't something whose impact we can always predict. I doubt that when Freon-based refrigerants or asbestos were introduced, anybody had any idea how large a problem they would become. Likewise, look at all the things computers are being used for that likely weren't predicted decades back, when the first mainframes were punch-card based and extremely slow. But we can mandate that measures be put in place to keep these systems from causing harm while we figure out how to design them so they don't do dangerous things, or give us something we ask for but shouldn't be asking for.
(Score: 1) by khallow on Tuesday May 16, @09:35PM
Why should we? What's the evidence for this need? This "slowing the curve" method is itself another threat to our future, like AI or asbestos, yet we're not taking proper precautions with it.
(Score: 2, Informative) by khallow on Tuesday May 16, @09:39PM
As an aside, we already have systems for dealing with technology that doesn't work right. Slowing AI isn't needed when people can just sue Tesla over its murderous cars, and there's even the possibility of actual criminal charges if gross negligence can be shown.