
SoylentNews is people

posted by janrinok on Monday May 15, @08:19PM   Printer-friendly
from the nuke-it-from-orbit-hindsight-20/20 dept.

https://arstechnica.com/tech-policy/2023/05/meaningful-harm-from-ai-necessary-before-regulation-says-microsoft-exec/

As lawmakers worldwide attempt to understand how to regulate rapidly advancing AI technologies, Microsoft chief economist Michael Schwarz told attendees of the World Economic Forum Growth Summit today that "we shouldn't regulate AI until we see some meaningful harm that is actually happening, not imaginary scenarios."

The comments came about 45 minutes into a panel called "Growth Hotspots: Harnessing the Generative AI Revolution." Reacting, another featured speaker, CNN anchor Zain Asher, stopped Schwarz to ask, "Wait, we should wait until we see harm before we regulate it?"
[...]
Lawmakers are racing to draft AI regulations that acknowledge harm but don't threaten AI progress. Last year, the US Federal Trade Commission (FTC) warned Congress that lawmakers should exercise "great caution" when drafting AI policy solutions. The FTC regards harms as instances where "AI tools can be inaccurate, biased, and discriminatory by design and incentivize relying on increasingly invasive forms of commercial surveillance." More recently, the White House released a blueprint for an AI Bill of Rights, describing some outcomes of AI use as "deeply harmful," but "not inevitable."


Original Submission

 
  • (Score: 5, Insightful) by Runaway1956 on Monday May 15, @08:34PM (8 children)

    by Runaway1956 (2926) Subscriber Badge on Monday May 15, @08:34PM (#1306449) Homepage Journal

It isn't necessary that anyone be harmed by something before it can be regulated. All that is necessary for regulations to be formed is for people to disagree about how that thing can be used. It may or may not be necessary that someone imagines they might be harmed by the product, but I don't really think so.

    That may explain why the internet is so much like the Wild Wild West. Arse wipes in charge of $corporation and $project claim that you can't regulate them, until they harm someone? FFS, even if that were true, we can demonstrate that people are being tricked and fooled by AI, already today! The potential for harm is abundant.

    So, they think maybe no regulation should apply, until someone dies as a result of their product? Is one death enough, or do we need a thousand? How much harm is relevant?

    --
Abortion is the number one killer of children in the United States.
  • (Score: 2) by krishnoid on Monday May 15, @09:19PM

    by krishnoid (1156) on Monday May 15, @09:19PM (#1306455)

    Does artificial intelligence count as "arms"? Could it?

  • (Score: 4, Insightful) by Gaaark on Tuesday May 16, @12:46AM

    by Gaaark (41) Subscriber Badge on Tuesday May 16, @12:46AM (#1306478) Journal

    For 'them', it comes down to a definition of 'harm': I'd say Microsoft has harmed the computer industry and should be bankrupted. Obviously, Microsoft would have a problem with that.

    What kind of harm must be done?
    When do you regulate AI?
    When do you 'Push the button'?

    https://www.youtube.com/watch?v=o861Ka9TtT4 [youtube.com]
    (Actually foretells Russia's salami tactic as well!)

    --
    --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
  • (Score: 1) by khallow on Tuesday May 16, @01:40AM (4 children)

    by khallow (3766) Subscriber Badge on Tuesday May 16, @01:40AM (#1306486) Journal

It isn't necessary that anyone be harmed by something before it can be regulated. All that is necessary for regulations to be formed is for people to disagree about how that thing can be used. It may or may not be necessary that someone imagines they might be harmed by the product, but I don't really think so.

    Just get enough political power and you can make any sort of garbage regulation possible.

    That may explain why the internet is so much like the Wild Wild West. Arse wipes in charge of $corporation and $project claim that you can't regulate them, until they harm someone? FFS, even if that were true, we can demonstrate that people are being tricked and fooled by AI, already today! The potential for harm is abundant.

    So, they think maybe no regulation should apply, until someone dies as a result of their product? Is one death enough, or do we need a thousand? How much harm is relevant?

I strongly disagree. I think it would curb a vast amount of terrible regulation if no regulation were passed until there was evidence not only of true harm, including people dying, but of true harm that wasn't already addressed by existing regulation! This nuttery, where some program looking unsettling means we have to pass regulation that hurts our future, needs to stop. It neglects two things: first, regulation only affects progress where the law is applied - if whoever develops the program simply doesn't do it in the relevant jurisdictions, then there's no regulatory defense. Only those who can ignore the law benefit, and that means an edge for all the bad guys out there.

Second, nobody understands what's being regulated here, and hence it can't be regulated in a sane way.

    • (Score: 1, Interesting) by Anonymous Coward on Tuesday May 16, @02:41AM (1 child)

      by Anonymous Coward on Tuesday May 16, @02:41AM (#1306498)

I don't agree that we should let industries grow to such physical or financial sizes, and let the body counts grow to some threshold number, before we consider regulation. When you get to that point, it turns out that to make any change you need several decades' worth of "studies", then several more decades of "debate" to let the industry run its profitable course before doing something, by which time the industry has squeezed as much juice out of that lemon as it could. "We must not be too hasty; think of how much money is at stake! Think of how many jobs will be impacted." Basically, let them become "too big to fail" before doing anything about it. Why do you think there is this AI land grab going on? It is to grab as much market or mindshare before anything serious is considered. This is the tobacco companies, the oil companies with their leaded gas, and the social media companies with their data harvesting, all over again. I'll take terrible regulation that can be removed over no regulation that you try to put in after the fact, any day. Especially since a lot of the stuff put forth as "terrible regulation" is actually just companies complaining that they can't do whatever they want solely for their own (not their workers') benefit.

I do agree that if there are existing regulations that can be used, they should be used, and no new ones added, provided sufficient resources are put in place for enforcement. If certain lobbyists write legislation to choke off enforcement funding for some agency, then I'm all for new regulatory powers being given elsewhere to countermand that.

      • (Score: 1, Disagree) by khallow on Tuesday May 16, @03:15AM

        by khallow (3766) Subscriber Badge on Tuesday May 16, @03:15AM (#1306502) Journal

I don't agree that we should let industries grow to such physical or financial sizes, and let the body counts grow to some threshold number, before we consider regulation. When you get to that point, it turns out that to make any change you need several decades' worth of "studies", then several more decades of "debate" to let the industry run its profitable course before doing something, by which time the industry has squeezed as much juice out of that lemon as it could.

        Sounds like it wasn't worth regulating in the first place then, if no regulation leads to that long drawn out process.

        Think of how many jobs will be impacted." Basically, let them become "too big to fail" before doing anything about it.

I'm also thinking of how easy it would be to strangle useful, emerging technologies. When you have the above situation, where established interests were allowed to choose their own regulation, they are more than powerful enough to block competing new technologies via regulation. For example, that's the resistance to the gig economy in a nutshell.

        Why do you think there is this AI land grab going on? It is to grab as much market or mindshare before anything serious is considered.

        Given today's terrible regulatory environment, which you describe in part, of course they would do that.

        If certain lobbyists write legislation to choke off enforcement funding for some agency, then I'm all for new regulatory powers being given elsewhere to countermand that.

Even if those new regulatory powers just add to the power of the lobbyists? Remember, you already described severe regulatory dysfunction. What's the point of creating new regulation when it's going to be honored as well as the original regulation was?

    • (Score: 2) by aafcac on Tuesday May 16, @03:16PM (1 child)

      by aafcac (17646) on Tuesday May 16, @03:16PM (#1306548)

So, we should wait for something catastrophic to happen before taking any preventative steps? Just look at the ozone layer: I doubt that, when the technology that led to the hole was being developed, anybody had any idea it would destroy the ozone layer. OTOH, it was known that lead was a problem before it was added to fuel, but there wasn't any "harm" there until after it was put into the gas and emitted all over the place.

In other words, only the most ignorant of people would suggest that we need to wait until there's harm before taking steps to reduce or mitigate it. Clearly, we don't always get it right, and sometimes the measures we put into place at the time look goofy later, even though they are legitimate measures to address what's going on in the here and now.

Ultimately, we don't know what's going to come of the ML craze and as such, proceeding with caution is the only reasonable path forward. The companies working on the technology have already shown a shocking lack of responsibility that has killed people; it seems to me that even if we do require harm before regulating, that condition has already been met, thanks to irresponsible corporations like Tesla and Uber. Not to mention the various HR firms using ML-based software to screen job applicants.

      • (Score: 1) by khallow on Tuesday May 16, @09:28PM

        by khallow (3766) Subscriber Badge on Tuesday May 16, @09:28PM (#1306618) Journal

        So, we should wait for something catastrophic to happen before taking any preventative steps?

        Indeed. Because otherwise you don't have a clue what the problems are.

Just look at the ozone layer: I doubt that, when the technology that led to the hole was being developed, anybody had any idea it would destroy the ozone layer.

And we still don't know, because nobody was looking for ozone holes before the modern age. We don't actually know that there was destruction of the ozone layer. It's a reasonable model, but it's backed less by evidence than by observation bias.

        Ultimately, we don't know what's going to come of the ML craze and as such, proceeding with caution is the only reasonable path forward.

        Unless, of course, the "reasonable path" causes more harm.

  • (Score: 2) by mcgrew on Tuesday May 16, @07:09PM

    by mcgrew (701) <publish@mcgrewbooks.com> on Tuesday May 16, @07:09PM (#1306598) Homepage Journal

    That depends on what country and what circumstances. In the US, cannabis was outlawed on the basis of bald-faced lies.

The internet is the Wild West because it's international. It's kind of hard for Russia to arrest me for telling the world they invaded Ukraine, despite Russia's laws against it.

    To a corporation, "harm" is fewer profits.

    --
    Carbon, The only element in the known universe to ever gain sentience