
posted by janrinok on Monday May 15, @08:19PM   Printer-friendly
from the nuke-it-from-orbit-hindsight-20/20 dept.

https://arstechnica.com/tech-policy/2023/05/meaningful-harm-from-ai-necessary-before-regulation-says-microsoft-exec/

As lawmakers worldwide attempt to understand how to regulate rapidly advancing AI technologies, Microsoft chief economist Michael Schwarz told attendees of the World Economic Forum Growth Summit today that "we shouldn't regulate AI until we see some meaningful harm that is actually happening, not imaginary scenarios."

The comments came about 45 minutes into a panel called "Growth Hotspots: Harnessing the Generative AI Revolution." In response, another featured speaker, CNN anchor Zain Asher, stopped Schwarz to ask, "Wait, we should wait until we see harm before we regulate it?"
[...]
Lawmakers are racing to draft AI regulations that acknowledge harm but don't threaten AI progress. Last year, the US Federal Trade Commission (FTC) warned Congress that lawmakers should exercise "great caution" when drafting AI policy solutions. The FTC regards harms as instances where "AI tools can be inaccurate, biased, and discriminatory by design and incentivize relying on increasingly invasive forms of commercial surveillance." More recently, the White House released a blueprint for an AI Bill of Rights, describing some outcomes of AI use as "deeply harmful," but "not inevitable."


Original Submission

  • (Score: 2, Informative) by khallow on Tuesday May 16, @09:25PM (1 child)

    I don't think so, nor do I see it as an actual problem. If I argue for something I want and it's clear that's what I'm doing, then it doesn't matter if I'm human or AI. The problem comes in when someone presents their argument as being unbiased.

    A classic NASA example comes from 2005, when a study for selecting a new launch vehicle concluded that a Space Shuttle-like vehicle ("Shuttle stack" is a typical name for this configuration) was best by safety and performance standards. This was shown to be a lie when an appendix was released a few years later under a FOIA request; it had previously been withheld on the grounds that its release would violate NDAs that NASA had allegedly made with the contractors. The appendix revealed that the study had deliberately compromised its own safety and performance standards precisely to favor the Shuttle stack. Numerous problems surfaced with the configuration when the prototype was being built (very high vibration, high acceleration, high aerodynamic pressure during early ascent or "max Q", a massive crawler needed to move the vehicle from the integration facility to the launch pad, the need for a very aggressive launch abort/escape system, etc.). Almost all of these problems had been foretold in the appendix!

    The gimmick here was that the report was presented as impartial when the hidden appendix showed that the study had a heavy bias toward the configuration that was eventually selected. It was part of the theater of the time, an attempt to legitimize the eventual choice. Had the appendix been publicly viewable from the start, the bias would have been obvious and the selection process contested.
  • (Score: 2, Insightful) by khallow on Tuesday May 16, @09:32PM


    The problem comes in when someone presents their argument as being unbiased.

    And that argument is given more weight because of the perception of lack of bias.