
posted by janrinok on Monday May 15, @08:19PM   Printer-friendly
from the nuke-it-from-orbit-hindsight-20/20 dept.

https://arstechnica.com/tech-policy/2023/05/meaningful-harm-from-ai-necessary-before-regulation-says-microsoft-exec/

As lawmakers worldwide attempt to understand how to regulate rapidly advancing AI technologies, Microsoft chief economist Michael Schwarz told attendees of the World Economic Forum Growth Summit today that "we shouldn't regulate AI until we see some meaningful harm that is actually happening, not imaginary scenarios."

The comments came about 45 minutes into a panel called "Growth Hotspots: Harnessing the Generative AI Revolution." In response, another featured speaker, CNN anchor Zain Asher, stopped Schwarz to ask, "Wait, we should wait until we see harm before we regulate it?"
[...]
Lawmakers are racing to draft AI regulations that acknowledge harm but don't threaten AI progress. Last year, the US Federal Trade Commission (FTC) warned Congress that lawmakers should exercise "great caution" when drafting AI policy solutions. The FTC regards harms as instances where "AI tools can be inaccurate, biased, and discriminatory by design and incentivize relying on increasingly invasive forms of commercial surveillance." More recently, the White House released a blueprint for an AI Bill of Rights, describing some outcomes of AI use as "deeply harmful," but "not inevitable."


Original Submission

 
This discussion was created by janrinok (52) for logged-in users only.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2) by krishnoid on Monday May 15, @09:26PM (9 children)

    by krishnoid (1156) on Monday May 15, @09:26PM (#1306458)

    "What do you think about this article: [TFA link]":


    The article you linked discusses the views of Microsoft chief economist Michael Schwarz on the regulation of artificial intelligence (AI). Schwarz argues that we should not regulate AI until we see some meaningful harm that is actually happening, not imaginary scenarios. He also says that we need to be careful not to over-regulate AI, as this could stifle innovation.

    I think Schwarz's views are reasonable. It is true that AI is a powerful technology, and it is important to be mindful of the potential risks. However, I also think it is important to be cautious about over-regulating AI. As Schwarz says, regulation could stifle innovation, and this could have a negative impact on society as a whole.

    I think the best approach is to strike a balance between regulation and innovation. We need to be mindful of the potential risks of AI, but we also need to be careful not to stifle innovation. I think the best way to do this is to work with the AI community to develop responsible AI practices. This means developing standards and guidelines for the development and use of AI, and it also means educating the public about AI and its potential risks and benefits.

    I believe that by working together, we can ensure that AI is used for good and not for harm.

    "Thank you"
    You're welcome. I'm glad I could be of assistance.

  • (Score: 3, Touché) by RS3 on Monday May 15, @11:23PM (7 children)

    by RS3 (6367) on Monday May 15, @11:23PM (#1306471)

    An AI says that AI development is okay. No conflict of interest to see here, move along. :)

    • (Score: 1) by khallow on Tuesday May 16, @01:46AM (6 children)

      by khallow (3766) Subscriber Badge on Tuesday May 16, @01:46AM (#1306489) Journal

      An AI says that AI development is okay. No conflict of interest to see here, move along. :)

      Why is it ok for a human with conflicts of interest to make an argument that supports those interests, but not ok for an AI?

      • (Score: 0) by Anonymous Coward on Tuesday May 16, @03:18AM

        by Anonymous Coward on Tuesday May 16, @03:18AM (#1306503)

        And the $64K question: this was Google Bard, so why didn't it slam anything to do with Microsoft, one of Google's competitors?

      • (Score: 2) by RS3 on Tuesday May 16, @06:49AM (4 children)

        by RS3 (6367) on Tuesday May 16, @06:49AM (#1306522)

        Why is it ok for a human with conflicts of interest to make an argument that supports those interests

        I'm not following you- can you give me an example?

        • (Score: 1) by khallow on Tuesday May 16, @02:25PM (3 children)

          by khallow (3766) Subscriber Badge on Tuesday May 16, @02:25PM (#1306541) Journal
          It's all over the place. The most extreme example is the typical court system, where two or more parties not only argue their clearly biased cases but are expected to do so.

          And if a Soylentil asks for your favorite text editor and why is it your favorite, are you expected to argue for a text editor that you don't favor?
          • (Score: 2) by RS3 on Tuesday May 16, @03:33PM (2 children)

            by RS3 (6367) on Tuesday May 16, @03:33PM (#1306551)

            I see your point and perspective. I'll have to think about that. It seems the natural order of things, you know, basic instinct / self protection / survival. Is there some other way things should happen that might be better?

            • (Score: 2, Informative) by khallow on Tuesday May 16, @09:25PM (1 child)

              by khallow (3766) Subscriber Badge on Tuesday May 16, @09:25PM (#1306617) Journal
              I don't think so, nor do I see it as an actual problem. If I argue for something I want and it's clear that's what I'm doing, then it doesn't matter if I'm human or AI. The problem comes in when someone presents their argument as being unbiased.

              A classic NASA example happened in 2005, when a study for selecting a new launch vehicle found that a Space Shuttle-like vehicle ("Shuttle stack" is a typical name for this configuration) was best by safety and performance standards. This was shown to be a lie when an appendix was released a few years later under a FOIA request (it had previously been withheld on the grounds that its release would have violated NDAs that NASA had allegedly made with the contractors), which revealed that the study had deliberately compromised its safety and performance standards precisely for the Shuttle stack. Numerous problems had come up with the configuration (very high vibration, high acceleration, high air pressure during early launch or "max Q", a massive crawler needed to move the vehicle from the integration facility to the launch pad, the need for a very aggressive launch abort/escape system, etc.) when the prototype was being built. Almost all of these problems had been foretold in the appendix!

              The gimmick here was that the report was presented as impartial, when the hidden part showed that the study had a heavy bias toward the configuration that was eventually selected. It was part of the theater of the time, an attempt to legitimize the eventual choice. If the appendix had been publicly viewable from the very beginning, the bias would have been obvious and the selection process contested.
              • (Score: 2, Insightful) by khallow on Tuesday May 16, @09:32PM

                by khallow (3766) Subscriber Badge on Tuesday May 16, @09:32PM (#1306620) Journal

                The problem comes in when someone presents their argument as being unbiased.

                And that argument is given more weight because of the perception of lack of bias.

  • (Score: 2) by inertnet on Tuesday May 16, @09:08AM

    by inertnet (4071) Subscriber Badge on Tuesday May 16, @09:08AM (#1306528) Journal

    Funny that it's obviously programmed to respond as if it were a human, with all the "I think" and "we need" and talk about AI as "them". It reminds me of the Data character from Star Trek; it's even funnier if you imagine his voice when reading these AI responses.