
SoylentNews is people

posted by janrinok on Monday May 15, @08:19PM   Printer-friendly
from the nuke-it-from-orbit-hindsight-20/20 dept.

https://arstechnica.com/tech-policy/2023/05/meaningful-harm-from-ai-necessary-before-regulation-says-microsoft-exec/

As lawmakers worldwide attempt to understand how to regulate rapidly advancing AI technologies, Microsoft chief economist Michael Schwarz told attendees of the World Economic Forum Growth Summit today that "we shouldn't regulate AI until we see some meaningful harm that is actually happening, not imaginary scenarios."

The comments came about 45 minutes into a panel called "Growth Hotspots: Harnessing the Generative AI Revolution." Reacting, another featured speaker, CNN anchor Zain Asher, stopped Schwarz to ask, "Wait, we should wait until we see harm before we regulate it?"
[...]
Lawmakers are racing to draft AI regulations that acknowledge harm but don't threaten AI progress. Last year, the US Federal Trade Commission (FTC) warned Congress that lawmakers should exercise "great caution" when drafting AI policy solutions. The FTC regards harms as instances where "AI tools can be inaccurate, biased, and discriminatory by design and incentivize relying on increasingly invasive forms of commercial surveillance." More recently, the White House released a blueprint for an AI Bill of Rights, describing some outcomes of AI use as "deeply harmful," but "not inevitable."


Original Submission

This discussion was created by janrinok (52) for logged-in users only.
  • (Score: 1) by khallow (3766) Subscriber Badge on Tuesday May 16, @09:28PM (#1306618) Journal

    > So, we should wait for something catastrophic to happen before taking any preventative steps?

    Indeed. Because otherwise you don't have a clue what the problems are.

    > Just look at the ozone layer; I doubt that, when the technology that led to the hole was being developed, anybody had any idea it would destroy the ozone layer.

    And we still don't know, because nobody was looking for ozone holes before the modern age. We don't actually know that there was destruction of the ozone layer. It's a reasonable model, but it's backed less by evidence than by observation bias.

    > Ultimately, we don't know what's going to come of the ML craze, and as such, proceeding with caution is the only reasonable path forward.

    Unless, of course, the "reasonable path" causes more harm.