
posted by janrinok on Monday May 15, @08:19PM   Printer-friendly
from the nuke-it-from-orbit-hindsight-20/20 dept.

https://arstechnica.com/tech-policy/2023/05/meaningful-harm-from-ai-necessary-before-regulation-says-microsoft-exec/

As lawmakers worldwide attempt to understand how to regulate rapidly advancing AI technologies, Microsoft chief economist Michael Schwarz told attendees of the World Economic Forum Growth Summit today that "we shouldn't regulate AI until we see some meaningful harm that is actually happening, not imaginary scenarios."

The comments came about 45 minutes into a panel called "Growth Hotspots: Harnessing the Generative AI Revolution." Reacting, another featured speaker, CNN anchor Zain Asher, stopped Schwarz to ask, "Wait, we should wait until we see harm before we regulate it?"
[...]
Lawmakers are racing to draft AI regulations that acknowledge harm but don't threaten AI progress. Last year, the US Federal Trade Commission (FTC) warned Congress that lawmakers should exercise "great caution" when drafting AI policy solutions. The FTC regards harms as instances where "AI tools can be inaccurate, biased, and discriminatory by design and incentivize relying on increasingly invasive forms of commercial surveillance." More recently, the White House released a blueprint for an AI Bill of Rights, describing some outcomes of AI use as "deeply harmful," but "not inevitable."


Original Submission

  • (Score: 1, Interesting) by Anonymous Coward on Tuesday May 16, @02:41AM (1 child)

    by Anonymous Coward on Tuesday May 16, @02:41AM (#1306498)

    I don't agree that we should let industries grow to such physical or financial sizes, and let the body counts climb to some threshold number, before we consider regulation. When you get to that point, it turns out that to make any change you need several decades' worth of "studies", then several more decades of "debate" while the industry runs its profitable course, by which time it has squeezed as much juice out of that lemon as it could. "We must not be too hasty! Think of how much money is at stake! Think of how many jobs will be impacted." Basically, let them become "too big to fail" before doing anything about it. Why do you think there is this AI land grab going on? It is to grab as much market or mindshare before anything serious is considered. This is the tobacco companies all over again, the oil companies with their leaded gas, the social media companies with their data harvesting. I'll take terrible regulation that can be removed over no regulation that you try to put in after the fact, any day. Especially since a lot of the stuff put forth as "terrible regulation" is actually just companies complaining that they can't do whatever they want solely for their own (not their workers') benefit.

    I do agree that if there are existing regulations that can be used, they should be used rather than new ones added, provided sufficient resources are put in place for enforcement. If certain lobbyists write legislation to choke off enforcement funding for some agency, then I'm all for new regulatory powers being given elsewhere to countermand that.

  • (Score: 1, Disagree) by khallow on Tuesday May 16, @03:15AM

    by khallow (3766) Subscriber Badge on Tuesday May 16, @03:15AM (#1306502) Journal

    I don't agree that we should let industries grow to such physical or financial sizes, and let the body counts climb to some threshold number, before we consider regulation. When you get to that point, it turns out that to make any change you need several decades' worth of "studies", then several more decades of "debate" while the industry runs its profitable course, by which time it has squeezed as much juice out of that lemon as it could.

    Sounds like it wasn't worth regulating in the first place, then, if no regulation leads to that long, drawn-out process.

    Think of how many jobs will be impacted." Basically, let them become "too big to fail" before doing anything about it.

    I'm also thinking of how easy it would be to strangle useful, emerging technologies. When you have the above situation, where established interests were allowed to choose their own regulation, they are more than powerful enough to block competing new technologies via regulation. That's the resistance to the gig economy in a nutshell.

    Why do you think there is this AI land grab going on? It is to grab as much market or mindshare before anything serious is considered.

    Given today's terrible regulatory environment, which you describe in part, of course they would do that.

    If certain lobbyists write legislation to choke off enforcement funding for some agency, then I'm all for new regulatory powers being given elsewhere to countermand that.

    Even if those new regulatory powers just add to the power of the lobbyists? Remember, you already described severe regulatory dysfunction. What's the point of creating new regulation when it's going to be honored as well as the original regulation was?