
SoylentNews is people

posted by janrinok on Monday May 15, @08:19PM   Printer-friendly
from the nuke-it-from-orbit-hindsight-20/20 dept.

https://arstechnica.com/tech-policy/2023/05/meaningful-harm-from-ai-necessary-before-regulation-says-microsoft-exec/

As lawmakers worldwide attempt to understand how to regulate rapidly advancing AI technologies, Microsoft chief economist Michael Schwarz told attendees of the World Economic Forum Growth Summit today that "we shouldn't regulate AI until we see some meaningful harm that is actually happening, not imaginary scenarios."

The comments came about 45 minutes into a panel called "Growth Hotspots: Harnessing the Generative AI Revolution." Reacting, another featured speaker, CNN anchor Zain Asher, stopped Schwarz to ask, "Wait, we should wait until we see harm before we regulate it?"
[...]
Lawmakers are racing to draft AI regulations that acknowledge harm but don't threaten AI progress. Last year, the US Federal Trade Commission (FTC) warned Congress that lawmakers should exercise "great caution" when drafting AI policy solutions. The FTC regards harms as instances where "AI tools can be inaccurate, biased, and discriminatory by design and incentivize relying on increasingly invasive forms of commercial surveillance." More recently, the White House released a blueprint for an AI Bill of Rights, describing some outcomes of AI use as "deeply harmful," but "not inevitable."


Original Submission

  • (Score: 2) by DadaDoofy (23827) on Tuesday May 16, @04:48PM (#1306562) (1 child)

    The real question is how much transparency we will see in terms of who is training it and with what information. It will most definitely be used to reinforce specific narratives and deride contradictory ones as "conspiracy theories" and "misinformation", while being sold as an intelligent, unbiased reference.

  • (Score: 3, Insightful) by Freeman (732) Subscriber Badge on Tuesday May 16, @07:28PM (#1306603) Journal

    Currently, as far as I can tell, the LLMs have been learning from a very broad range of data, which generally includes the likes of Reddit, 4chan, Twitter, and MySpace posts, along with vast troves of data from around the internet as a whole. When you think about what they were trained on, it suddenly becomes very clear why Microsoft's first iteration turned into a Nazi and their more recent attempt showed a real talent for gaslighting and attacking the user.

    --
    Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"