
posted by janrinok on Monday May 15, @08:19PM   Printer-friendly
from the nuke-it-from-orbit-hindsight-20/20 dept.

https://arstechnica.com/tech-policy/2023/05/meaningful-harm-from-ai-necessary-before-regulation-says-microsoft-exec/

As lawmakers worldwide attempt to understand how to regulate rapidly advancing AI technologies, Microsoft chief economist Michael Schwarz told attendees of the World Economic Forum Growth Summit today that "we shouldn't regulate AI until we see some meaningful harm that is actually happening, not imaginary scenarios."

The comments came about 45 minutes into a panel called "Growth Hotspots: Harnessing the Generative AI Revolution." Reacting, another featured speaker, CNN anchor Zain Asher, stopped Schwarz to ask, "Wait, we should wait until we see harm before we regulate it?"
[...]
Lawmakers are racing to draft AI regulations that acknowledge harm but don't threaten AI progress. Last year, the US Federal Trade Commission (FTC) warned Congress that lawmakers should exercise "great caution" when drafting AI policy solutions. The FTC regards harms as instances where "AI tools can be inaccurate, biased, and discriminatory by design and incentivize relying on increasingly invasive forms of commercial surveillance." More recently, the White House released a blueprint for an AI Bill of Rights, describing some outcomes of AI use as "deeply harmful," but "not inevitable."


Original Submission

 
  • (Score: 2) by aafcac (17646) on Tuesday May 16, @03:16PM (#1306548) (1 child)

    So, we should wait for something catastrophic to happen before taking any preventative steps? Just look at the ozone layer: I doubt that when the technology that led to the hole was being developed, anybody had any idea that it would destroy the ozone layer. OTOH, it was known that lead was a problem before it was added to fuel, but there wasn't any "harm" there until after it was put into the gas and emitted all over the place.

    In other words, only the most ignorant of people would suggest that we need to wait until there's harm before taking steps to reduce or mitigate it. Clearly, we don't always get it right, and sometimes the measures that we put into place at the time look goofy later, even though they are legitimate measures to address what's going on in the here and now.

    Ultimately, we don't know what's going to come of the ML craze, and as such, proceeding with caution is the only reasonable path forward. We already see that the companies working on the technology have a shocking lack of responsibility that has already killed people; it seems to me that even if we do require harm before regulating, that condition has already been met thanks to irresponsible corporations like Tesla and Uber. Not to mention the various HR firms using ML-based software to screen job applicants.

  • (Score: 1) by khallow (3766) Subscriber Badge on Tuesday May 16, @09:28PM (#1306618) Journal

    So, we should wait for something catastrophic to happen before taking any preventative steps?

    Indeed. Because otherwise you don't have a clue what the problems are.

    Just look at the Ozone layer, I doubt that when the technology that led to the hole was being developed that anybody had any idea that it would destroy the ozone layer.

    And we still don't know, because nobody was looking for ozone holes before the modern age. We don't actually know that there was destruction of the ozone layer. It's a reasonable model, but it's backed less by evidence than by observation bias.

    Ultimately, we don't know what's going to come of the ML craze and as such, proceeding with caution is the only reasonable path forward.

    Unless, of course, the "reasonable path" causes more harm.