"Meaningful Harm" From AI Necessary Before Regulation, Says Microsoft Exec

posted by janrinok on Monday May 15 2023, @08:19PM   Printer-friendly
from the nuke-it-from-orbit-hindsight-20/20 dept.

https://arstechnica.com/tech-policy/2023/05/meaningful-harm-from-ai-necessary-before-regulation-says-microsoft-exec/

As lawmakers worldwide attempt to understand how to regulate rapidly advancing AI technologies, Microsoft chief economist Michael Schwarz told attendees of the World Economic Forum Growth Summit today that "we shouldn't regulate AI until we see some meaningful harm that is actually happening, not imaginary scenarios."

The comments came about 45 minutes into a panel called "Growth Hotspots: Harnessing the Generative AI Revolution." Reacting, another featured speaker, CNN anchor Zain Asher, stopped Schwarz to ask, "Wait, we should wait until we see harm before we regulate it?"
[...]
Lawmakers are racing to draft AI regulations that acknowledge harm but don't threaten AI progress. Last year, the US Federal Trade Commission (FTC) warned Congress that lawmakers should exercise "great caution" when drafting AI policy solutions. The FTC regards harms as instances where "AI tools can be inaccurate, biased, and discriminatory by design and incentivize relying on increasingly invasive forms of commercial surveillance." More recently, the White House released a blueprint for an AI Bill of Rights, describing some outcomes of AI use as "deeply harmful," but "not inevitable."


Original Submission

Related Stories

A Turning Point for U.S. AI Policy: Senate Explores Solutions 4 comments

Almost 20 years ago, Senator Ted Stevens was widely mocked and ridiculed for referring to the Internet as "a series of tubes," even though he led the Senate Commerce Committee, which was responsible for regulating it. And just a few years ago, members of Congress were mocked for their lack of understanding of Facebook's business model when Mark Zuckerberg testified about the Cambridge Analytica scandal.

Fast forward to this week, when the Senate Judiciary Committee held one of the most productive hearings in Congress in many years, taking up the challenge of how to regulate the emerging AI revolution. This time around, the senators were well-prepared, knowledgeable and engaged. Over at ACM, Marc Rotenberg, a former Staff Counsel for the Senate Judiciary Committee, has a good assessment of the meeting that notes the highlights and warning signs:

It is easy for a Congressional hearing to spin off in many directions, particularly with a new topic. Senator Blumenthal set out three AI guardrails—transparency, accountability, and limitations on use—that resonated with the AI experts and anchored the discussion. As Senator Blumenthal said at the opening, "This is the first in a series of hearings to write the rules of AI. Our goal is to demystify and hold accountable those new technologies and avoid some of the mistakes of the past."

Congress has struggled in recent years because of increasing polarization. That makes it difficult for members of different parties, even when they agree, to move forward with legislation. In the early days of U.S. AI policy, Dr. Lorraine Kisselburgh and I urged bipartisan support for such initiatives as the OSTP AI Bill of Rights. In January, President Biden called for non-partisan legislation for AI. The Senate hearing on AI was a model of bipartisan cooperation, with members of the two parties expressing similar concerns and looking for opportunities for agreement.

Tyler Perry Puts $800 Million Studio Expansion on Hold Because of OpenAI's Sora 16 comments

https://arstechnica.com/information-technology/2024/02/i-just-dont-see-how-we-survive-tyler-perry-issues-hollywood-warning-over-ai-video-tech/

In an interview with The Hollywood Reporter published Thursday, filmmaker Tyler Perry spoke about his concerns related to the impact of AI video synthesis on entertainment industry jobs. In particular, he revealed that he has suspended a planned $800 million expansion of his production studio after seeing what OpenAI's recently announced AI video generator Sora can do.

"I have been watching AI very closely," Perry said in the interview. "I was in the middle of, and have been planning for the last four years... an $800 million expansion at the studio, which would've increased the backlot a tremendous size—we were adding 12 more soundstages. All of that is currently and indefinitely on hold because of Sora and what I'm seeing. I had gotten word over the last year or so that this was coming, but I had no idea until I saw recently the demonstrations of what it's able to do. It's shocking to me."

[...] "It makes me worry so much about all of the people in the business," he told The Hollywood Reporter. "Because as I was looking at it, I immediately started thinking of everyone in the industry who would be affected by this, including actors and grip and electric and transportation and sound and editors, and looking at this, I'm thinking this will touch every corner of our industry."

You can read the full interview at The Hollywood Reporter.

[...] Perry also looks beyond Hollywood and says that it's not just filmmaking that needs to be on alert, and he calls for government action to help retain human employment in the age of AI. "If you look at it across the world, how it's changing so quickly, I'm hoping that there's a whole government approach to help everyone be able to sustain."

Previously on SoylentNews:
OpenAI Teases a New Generative Video Model Called Sora - 20240222

Microsoft Accused of Selling AI Tool That Spews Violent, Sexual Images to Kids 13 comments

https://arstechnica.com/tech-policy/2024/03/microsoft-accused-of-selling-ai-tool-that-spews-violent-sexual-images-to-kids/

Microsoft's AI text-to-image generator, Copilot Designer, appears to be heavily filtering outputs after a Microsoft engineer, Shane Jones, warned that Microsoft has ignored warnings that the tool randomly creates violent and sexual imagery, CNBC reported.

Jones told CNBC that he repeatedly warned Microsoft of the alarming content he was seeing while volunteering in red-teaming efforts to test the tool's vulnerabilities. Microsoft failed to take the tool down or implement safeguards in response, Jones said, or even post disclosures to change the product's rating to mature in the Android store.

[...] Bloomberg also reviewed Jones' letter and reported that Jones told the FTC that while Copilot Designer is currently marketed as safe for kids, it's randomly generating an "inappropriate, sexually objectified image of a woman in some of the pictures it creates." And it can also be used to generate "harmful content in a variety of other categories, including: political bias, underage drinking and drug use, misuse of corporate trademarks and copyrights, conspiracy theories, and religion to name a few."

[...] Jones' tests also found that Copilot Designer would easily violate copyrights, producing images of Disney characters, including Mickey Mouse or Snow White. Most problematically, Jones could politicize Disney characters with the tool, generating images of Frozen's main character, Elsa, in the Gaza Strip or "wearing the military uniform of the Israel Defense Forces."

Ars was able to generate interpretations of Snow White, but Copilot Designer rejected multiple prompts politicizing Elsa.

If Microsoft has updated the automated content filters, it's likely due to Jones protesting his employer's decisions. [...] Jones has suggested that Microsoft would need to substantially invest in its safety team to put in place the protections he'd like to see. He reported that the Copilot team is already buried by complaints, receiving "more than 1,000 product feedback messages every day." Because of this alleged understaffing, Microsoft is currently only addressing "the most egregious issues," Jones told CNBC.

Related stories on SoylentNews:
Cops Bogged Down by Flood of Fake AI Child Sex Images, Report Says - 20240202
New "Stable Video Diffusion" AI Model Can Animate Any Still Image - 20231130
The Age of Promptography - 20231008
AI-Generated Child Sex Imagery Has Every US Attorney General Calling for Action - 20230908
It Costs Just $400 to Build an AI Disinformation Machine - 20230904
US Judge: Art Created Solely by Artificial Intelligence Cannot be Copyrighted - 20230824
"Meaningful Harm" From AI Necessary Before Regulation, says Microsoft Exec - 20230514 (Microsoft's new quarterly goal?)
The Godfather of AI Leaves Google Amid Ethical Concerns - 20230502
Stable Diffusion Copyright Lawsuits Could be a Legal Earthquake for AI - 20230403
AI Image Generator Midjourney Stops Free Trials but Says Influx of New Users to Blame - 20230331
Microsoft's New AI Can Simulate Anyone's Voice With Three Seconds of Audio - 20230115
Breakthrough AI Technique Enables Real-Time Rendering of Scenes in 3D From 2D Images - 20211214


Original Submission

This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 5, Insightful) by Runaway1956 on Monday May 15 2023, @08:34PM (8 children)

    by Runaway1956 (2926) Subscriber Badge on Monday May 15 2023, @08:34PM (#1306449) Journal

It isn't necessary that anyone be harmed by something before it can be regulated. All that is necessary for regulations to be formed is for people to disagree about how that thing can be used. It may or may not be necessary that someone imagines that they might be harmed by the product, but I don't really think so.

    That may explain why the internet is so much like the Wild Wild West. Arse wipes in charge of $corporation and $project claim that you can't regulate them, until they harm someone? FFS, even if that were true, we can demonstrate that people are being tricked and fooled by AI, already today! The potential for harm is abundant.

    So, they think maybe no regulation should apply, until someone dies as a result of their product? Is one death enough, or do we need a thousand? How much harm is relevant?

    • (Score: 2) by krishnoid on Monday May 15 2023, @09:19PM

      by krishnoid (1156) on Monday May 15 2023, @09:19PM (#1306455)

      Does artificial intelligence count as "arms"? Could it?

    • (Score: 4, Insightful) by Gaaark on Tuesday May 16 2023, @12:46AM

      by Gaaark (41) on Tuesday May 16 2023, @12:46AM (#1306478) Journal

      For 'them', it comes down to a definition of 'harm': I'd say Microsoft has harmed the computer industry and should be bankrupted. Obviously, Microsoft would have a problem with that.

      What kind of harm must be done?
      When do you regulate AI?
      When do you 'Push the button'?

      https://www.youtube.com/watch?v=o861Ka9TtT4 [youtube.com]
      (Actually foretells Russia's salami tactic as well!)

      --
      --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
    • (Score: 1) by khallow on Tuesday May 16 2023, @01:40AM (4 children)

      by khallow (3766) Subscriber Badge on Tuesday May 16 2023, @01:40AM (#1306486) Journal

It isn't necessary that anyone be harmed by something before it can be regulated. All that is necessary for regulations to be formed is for people to disagree about how that thing can be used. It may or may not be necessary that someone imagines that they might be harmed by the product, but I don't really think so.

      Just get enough political power and you can make any sort of garbage regulation possible.

      That may explain why the internet is so much like the Wild Wild West. Arse wipes in charge of $corporation and $project claim that you can't regulate them, until they harm someone? FFS, even if that were true, we can demonstrate that people are being tricked and fooled by AI, already today! The potential for harm is abundant.

      So, they think maybe no regulation should apply, until someone dies as a result of their product? Is one death enough, or do we need a thousand? How much harm is relevant?

I strongly disagree. I think it would curb a vast amount of terrible regulation if no regulation were passed until there was evidence not only of true harm, including people dying, but of true harm that wasn't already addressed by existing regulation! This nuttery, where some program merely looking unsettling means we have to pass regulation that hurts our future, needs to stop. It neglects two things: first, such regulation only affects progress where the law is applied - if whoever develops the program simply doesn't do it in the relevant jurisdictions, then there's no regulatory defense. Only those who can ignore the law benefit, and that means an edge for all the bad guys out there.

Second, nobody understands what's being regulated here and hence can't regulate it in a sane way.

      • (Score: 1, Interesting) by Anonymous Coward on Tuesday May 16 2023, @02:41AM (1 child)

        by Anonymous Coward on Tuesday May 16 2023, @02:41AM (#1306498)

I don't agree that we should let industries grow to such physical or financial sizes, and let the body counts grow to some threshold number, before we consider regulation. When you get to that point, it turns out that to make any change you need several decades' worth of "studies", then several more decades of "debate" to wait and let the industry run its profitable course before doing something, by which time the industry has squeezed as much juice out of that lemon as it could. "We must not be too hasty. Think of how much money is at stake! Think of how many jobs will be impacted." Basically, let them become "too big to fail" before doing anything about it.

Why do you think there is this AI land grab going on? It is to grab as much market or mindshare as possible before anything serious is considered. This is the tobacco companies, and the oil companies with their leaded gas, all over again, and the social media companies with their data harvesting. I'll take terrible regulation that can be removed over no regulation that you try to put in after the fact any day. Especially since a lot of the stuff put forth as "terrible regulation" is actually just companies complaining that they can't do whatever they want solely for their own (not their workers') benefit.

I do agree that if there are existing regulations that can be used, they should be used rather than new ones added, provided sufficient resources are put in place for enforcement. If certain lobbyists write legislation to choke off enforcement funding for some agency, then I'm all for new regulatory powers being given elsewhere to countermand that.

        • (Score: 1, Disagree) by khallow on Tuesday May 16 2023, @03:15AM

          by khallow (3766) Subscriber Badge on Tuesday May 16 2023, @03:15AM (#1306502) Journal

I don't agree that we should let industries grow to such physical or financial sizes, and let the body counts grow to some threshold number, before we consider regulation. When you get to that point, it turns out that to make any change you need several decades' worth of "studies", then several more decades of "debate" to wait and let the industry run its profitable course before doing something, by which time the industry has squeezed as much juice out of that lemon as it could.

Sounds like it wasn't worth regulating in the first place then, if no regulation leads to that long, drawn-out process.

          Think of how many jobs will be impacted." Basically, let them become "too big to fail" before doing anything about it.

I'm also thinking of how easy it would be to strangle useful, emerging technologies. When you have the above situation, where established interests were allowed to choose their own regulation, then they are more than powerful enough to block competing new technologies via regulation. For example, that's the resistance to the gig economy in a nutshell.

          Why do you think there is this AI land grab going on? It is to grab as much market or mindshare before anything serious is considered.

          Given today's terrible regulatory environment, which you describe in part, of course they would do that.

          If certain lobbyists write legislation to choke off enforcement funding for some agency, then I'm all for new regulatory powers being given elsewhere to countermand that.

          Even if those new regulatory powers just add to the power of the lobbyists? Remember you already describe severe regulatory dysfunction. What's the point of creating new regulation when it's going to be honored as well as the original regulation?

      • (Score: 2) by aafcac on Tuesday May 16 2023, @03:16PM (1 child)

        by aafcac (17646) on Tuesday May 16 2023, @03:16PM (#1306548)

So, we should wait for something catastrophic to happen before taking any preventative steps? Just look at the ozone layer: I doubt that when the technology that led to the hole was being developed, anybody had any idea that it would destroy the ozone layer. OTOH, it was known that lead was a problem before it was added to fuel, but there wasn't any "harm" there until after it was put into the gas and emitted all over the place.

In other words, only the most ignorant of people would suggest that we need to wait until there's harm before taking steps to reduce or mitigate it. Clearly, we don't always get it right, and sometimes the measures that we put into place at the time look goofy later, even though they are legitimate measures to address what's going on in the here and now.

Ultimately, we don't know what's going to come of the ML craze and as such, proceeding with caution is the only reasonable path forward. We already see that the companies working on the technology have a shocking lack of responsibility that has already killed people; it seems to me that even if we do require harm before regulating, that condition has already been met thanks to irresponsible corporations like Tesla and Uber. Not to mention the various HR firms using ML-based software to screen job applicants.

        • (Score: 1) by khallow on Tuesday May 16 2023, @09:28PM

          by khallow (3766) Subscriber Badge on Tuesday May 16 2023, @09:28PM (#1306618) Journal

          So, we should wait for something catastrophic to happen before taking any preventative steps?

          Indeed. Because otherwise you don't have a clue what the problems are.

Just look at the ozone layer: I doubt that when the technology that led to the hole was being developed, anybody had any idea that it would destroy the ozone layer.

And we still don't know, because nobody was looking for ozone holes before the modern age. We don't actually know that there was destruction of the ozone layer. It's a reasonable model, but it's backed not by evidence but rather by observation bias.

          Ultimately, we don't know what's going to come of the ML craze and as such, proceeding with caution is the only reasonable path forward.

          Unless, of course, the "reasonable path" causes more harm.

    • (Score: 2) by mcgrew on Tuesday May 16 2023, @07:09PM

      by mcgrew (701) <publish@mcgrewbooks.com> on Tuesday May 16 2023, @07:09PM (#1306598) Homepage Journal

      That depends on what country and what circumstances. In the US, cannabis was outlawed on the basis of bald-faced lies.

The internet is the Wild West because it's international. Kind of hard for Russia to arrest me for telling the world they invaded Ukraine, despite Russia's laws against it.

To a corporation, "harm" means lower profits.

      --
      mcgrewbooks.com mcgrew.info nooze.org
  • (Score: 5, Insightful) by JoeMerchant on Monday May 15 2023, @08:45PM

    by JoeMerchant (3937) on Monday May 15 2023, @08:45PM (#1306450)

We're not going to hurt anyone, or change the geo-political power structure. It's just some theory-to-practice work around that E=mc² thing. It could be really big, but you just have to trust us; we're not going to let it hurt anyone.

    Coming from Microsoft, no less.

    --
    🌻🌻 [google.com]
  • (Score: 2) by bloodnok on Monday May 15 2023, @09:10PM (1 child)

    by bloodnok (2578) on Monday May 15 2023, @09:10PM (#1306453)

Right now, all the damage that is "being done by AI" is not being done by AI but by the humans who have chosen to use it.

    If there is a problem with election misinformation being spread by AI, then we ought to look at why we think that election misinformation spread by humans is not a problem.

    If there are employment decisions being made unfairly by AIs then maybe we should also look at the unfair employment decisions made by humans.

Why do we think it's appropriate to allow dumb humans to do some things, but not smart machines?

    There are some serious and important discussions to be had about AI, but they are being drowned out by a whole lot of missing-the-point nonsense.

    Let's not wait for harm to happen, but let's also not legislate to prevent harms from AIs that we casually accept from humans.

    __
    The Major

    • (Score: 3, Touché) by Gaaark on Tuesday May 16 2023, @12:47AM

      by Gaaark (41) on Tuesday May 16 2023, @12:47AM (#1306479) Journal

      ^this is why we can't have anything nice. Everywhere we go, we're humans.

      --
      --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
  • (Score: 2) by krishnoid on Monday May 15 2023, @09:26PM (9 children)

    by krishnoid (1156) on Monday May 15 2023, @09:26PM (#1306458)

    "What do you think about this article: [TFA link]":


    The article you linked discusses the views of Microsoft chief economist Michael Schwarz on the regulation of artificial intelligence (AI). Schwarz argues that we should not regulate AI until we see some meaningful harm that is actually happening, not imaginary scenarios. He also says that we need to be careful not to over-regulate AI, as this could stifle innovation.

    I think Schwarz's views are reasonable. It is true that AI is a powerful technology, and it is important to be mindful of the potential risks. However, I also think it is important to be cautious about over-regulating AI. As Schwarz says, regulation could stifle innovation, and this could have a negative impact on society as a whole.

    I think the best approach is to strike a balance between regulation and innovation. We need to be mindful of the potential risks of AI, but we also need to be careful not to stifle innovation. I think the best way to do this is to work with the AI community to develop responsible AI practices. This means developing standards and guidelines for the development and use of AI, and it also means educating the public about AI and its potential risks and benefits.

    I believe that by working together, we can ensure that AI is used for good and not for harm.

    "Thank you"
    You're welcome. I'm glad I could be of assistance.

    • (Score: 3, Touché) by RS3 on Monday May 15 2023, @11:23PM (7 children)

      by RS3 (6367) on Monday May 15 2023, @11:23PM (#1306471)

      An AI says that AI development is okay. No conflict of interest to see here, move along. :)

      • (Score: 1) by khallow on Tuesday May 16 2023, @01:46AM (6 children)

        by khallow (3766) Subscriber Badge on Tuesday May 16 2023, @01:46AM (#1306489) Journal

        An AI says that AI development is okay. No conflict of interest to see here, move along. :)

        Why is it ok for a human with conflicts of interest to make an argument that supports those interests, but not ok for an AI?

        • (Score: 0) by Anonymous Coward on Tuesday May 16 2023, @03:18AM

          by Anonymous Coward on Tuesday May 16 2023, @03:18AM (#1306503)

And the $64K question: this was Google Bard, so why didn't it slam anything to do with Microsoft, one of Google's competitors?

        • (Score: 2) by RS3 on Tuesday May 16 2023, @06:49AM (4 children)

          by RS3 (6367) on Tuesday May 16 2023, @06:49AM (#1306522)

          Why is it ok for a human with conflicts of interest to make an argument that supports those interests

          I'm not following you- can you give me an example?

          • (Score: 1) by khallow on Tuesday May 16 2023, @02:25PM (3 children)

            by khallow (3766) Subscriber Badge on Tuesday May 16 2023, @02:25PM (#1306541) Journal
It's all over the place. The most extreme example is the typical court system, where two or more parties not only argue their clearly biased cases but are expected to do so.

And if a Soylentil asks for your favorite text editor and why it's your favorite, are you expected to argue for a text editor that you don't favor?
            • (Score: 2) by RS3 on Tuesday May 16 2023, @03:33PM (2 children)

              by RS3 (6367) on Tuesday May 16 2023, @03:33PM (#1306551)

              I see your point and perspective. I'll have to think about that. It seems the natural order of things, you know, basic instinct / self protection / survival. Is there some other way things should happen that might be better?

              • (Score: 2, Informative) by khallow on Tuesday May 16 2023, @09:25PM (1 child)

                by khallow (3766) Subscriber Badge on Tuesday May 16 2023, @09:25PM (#1306617) Journal
                I don't think so, nor do I see it as an actual problem. If I argue for something I want and it's clear that's what I'm doing, then it doesn't matter if I'm human or AI. The problem comes in when someone presents their argument as being unbiased.

A classic NASA example happened in 2005, when a study for selecting a new launch vehicle found that a Space Shuttle-like vehicle ("Shuttle stack" is a typical name for this configuration) was best by safety and performance standards. This was found to be a lie when an appendix was released a few years later under a FOIA request (it had previously been withheld on the grounds that its release would have violated NDAs that NASA had allegedly made with the contractors), which showed that the study had deliberately compromised its safety and performance standards precisely to favor the Shuttle stack. Numerous problems had come up with the configuration (very high vibration, high acceleration, high air pressure during early launch or "max Q", the massive crawler needed to move the vehicle from the integration facility to the launch pad, the need for a very aggressive launch abort/escape system, etc.) when they were making the prototype. Almost all of these problems had been foretold in the appendix!

The gimmick here was that the report was presented as being impartial when the hidden part showed that the study had a heavy bias towards the configuration that was eventually selected. It was part of the theater of the time, an attempt to legitimize the eventual choice. If the appendix had been publicly viewable from the very beginning, the bias would have been obvious and the selection process contested.
                • (Score: 2, Insightful) by khallow on Tuesday May 16 2023, @09:32PM

                  by khallow (3766) Subscriber Badge on Tuesday May 16 2023, @09:32PM (#1306620) Journal

                  The problem comes in when someone presents their argument as being unbiased.

                  And that argument is given more weight because of the perception of lack of bias.

    • (Score: 2) by inertnet on Tuesday May 16 2023, @09:08AM

      by inertnet (4071) on Tuesday May 16 2023, @09:08AM (#1306528) Journal

Funny that it's obviously programmed to respond as if it were human, with all the "I think" and "we need" and the talk about AI as "them". It reminds me of the Data character from Star Trek; it's even funnier if you imagine his voice when reading these AI responses.

  • (Score: 3, Touché) by RedGreen on Monday May 15 2023, @09:33PM (11 children)

    by RedGreen (888) on Monday May 15 2023, @09:33PM (#1306460)

How about we get ahead of the curve for a change and be proactive in preventing harm before it gets established, instead of chasing the tail and being too far behind to even catch up to the garbage these tech companies foist on us in the name of "progress"?

    --
    "I modded down, down, down, and the flames went higher." -- Sven Olsen
    • (Score: 2) by JoeMerchant on Monday May 15 2023, @09:44PM (2 children)

      by JoeMerchant (3937) on Monday May 15 2023, @09:44PM (#1306461)

      The problem with unregulated AI development is the speed with which it can potentially do damage.

      If we only start drafting regulations when damage is already being done...

      --
      🌻🌻 [google.com]
      • (Score: 2) by aafcac on Tuesday May 16 2023, @03:46PM (1 child)

        by aafcac (17646) on Tuesday May 16 2023, @03:46PM (#1306552)

Yes, and doing so assumes that the people doing the research will know when they're about to take it too far and will stop to reconsider. It's a lot like the atomic bomb: if the US had known that the Germans were effectively incapable of developing the technology in our lifetimes due to a lack of heavy water, people on the team likely wouldn't have been willing to do the work. Now that it is a real thing, it's not something that can be undone. The awareness that it can be done is sufficient to ensure that, short of wiping the technology out so completely and for so long that it becomes a myth on par with the parting of the Red Sea or a flood that encompasses the whole world, people will know that it can be done, and those with the power will pursue recreating it.

In the case of AI and ML, with hardware getting ever more powerful, going too far gets easier each year, and there should be something in place to help identify those cases as an absolute bare minimum before going much further.

        • (Score: 3, Insightful) by JoeMerchant on Tuesday May 16 2023, @05:59PM

          by JoeMerchant (3937) on Tuesday May 16 2023, @05:59PM (#1306573)

          >a flood that encompasses the whole world

          We're working on that one with our atmospheric carbon release technology...

>going too far gets easier each year, and there should be something in place to help identify those cases as an absolute bare minimum before going much further

Unfortunately, I think the Microsoftie is onto something about basic human nature. We're mostly (quite reasonably) skeptical of gene therapy and modified genetic codes delivered by viral vectors, but it took Patient 1 dying of complications to really put the brakes on the technology. I'm not saying that's how it should be, I'm saying that's how it is.

          --
          🌻🌻 [google.com]
    • (Score: 0) by Anonymous Coward on Monday May 15 2023, @11:09PM (3 children)

      by Anonymous Coward on Monday May 15 2023, @11:09PM (#1306469)

How about we let things play out instead of putting in place a bunch of burdensome regulations that will inevitably hurt the little guy? I don't really care to see stronger copyright laws or restrictions on how much performance a PC can have.

      • (Score: -1, Troll) by Anonymous Coward on Tuesday May 16 2023, @01:05AM

        by Anonymous Coward on Tuesday May 16 2023, @01:05AM (#1306482)

        Jesse? Is that you? https://www.legendsofamerica.com/james-gang/ [legendsofamerica.com]

      • (Score: 4, Interesting) by RedGreen on Tuesday May 16 2023, @01:42AM (1 child)

        by RedGreen (888) on Tuesday May 16 2023, @01:42AM (#1306487)

        "How about we let things play out instead of putting in place a bunch of burdensome regulations that will inevitably hurt the little guy. I don't really care to see stronger copyright laws or restrictions on how much performance a PC can have."

Fuck that and the horse it rode in on too. I am sick of all this letting slimy parasite corporations do whatever the hell they want, no matter who gets hurt by their newest scheme to steal more of our money, invade our privacy some more, and sell us toxic products, half of which destroy the planet. This "we are only beholden to making more money for our shareholders, fuck society with all the harm we do, that is for the idiot governments to pay for" attitude has to end. Well, I for one say enough is enough: they get to be held accountable for the harm they cause, and the time for it has long since passed. We need the corporate death penalty for both the corporation and the people in charge of it. That will change how the cocksuckers go about messing us about, killing countless among us with their greed.

        --
        "I modded down, down, down, and the flames went higher." -- Sven Olsen
        • (Score: 1, Funny) by Anonymous Coward on Tuesday May 16 2023, @02:15AM

          by Anonymous Coward on Tuesday May 16 2023, @02:15AM (#1306495)

          Your proposal has been rejected due to Congressional inaction and China. Better luck next paradigm shift.

    • (Score: 2, Insightful) by khallow on Tuesday May 16 2023, @01:47AM (3 children)

      by khallow (3766) Subscriber Badge on Tuesday May 16 2023, @01:47AM (#1306490) Journal

How about we get ahead of the curve for a change and be proactive in preventing harm before it gets established, instead of chasing the tail and being too far behind to even catch up to the garbage these tech companies foist on us in the name of "progress"?

      No, because we aren't competent enough to get ahead of the curve.

      • (Score: 2) by aafcac on Tuesday May 16 2023, @03:50PM (2 children)

        by aafcac (17646) on Tuesday May 16 2023, @03:50PM (#1306554)

We don't have to be ahead of the curve; that's why regulations should be focused on slowing the curve enough that it can be studied carefully as we go forward. We've already got Tesla cars murdering people in the streets because the AI isn't advanced enough to figure out how far away motorcycles are or properly handle jersey barriers at exits. If we're lucky, that's how things are going to be, as the companies doing the development are trying to be first to market and have little idea how advanced any of the competition is.

Technology isn't something whose impact we can always predict. I doubt that when Freon-based refrigerants or asbestos were introduced anybody had any idea how large a problem they would be. Likewise, look at all the things that computers are being used for that likely weren't predicted decades back, when the first mainframes were punch-card based and extremely slow. But we can mandate that measures be put into place to keep these systems from causing harm while we figure out how to design them so they don't do dangerous things, or give us something that we ask for but shouldn't be asking for.

        • (Score: 1) by khallow on Tuesday May 16 2023, @09:35PM

          by khallow (3766) Subscriber Badge on Tuesday May 16 2023, @09:35PM (#1306621) Journal

We don't have to be ahead of the curve; that's why regulations should be focused on slowing the curve enough that it can be studied carefully as we go forward.

          Why should we? What's the evidence for this need? This "slowing the curve" method is another threat to our future like AI or asbestos, yet we're not taking proper precautions with it.

        • (Score: 2, Informative) by khallow on Tuesday May 16 2023, @09:39PM

          by khallow (3766) Subscriber Badge on Tuesday May 16 2023, @09:39PM (#1306623) Journal

          We've already got Tesla cars murdering people in the streets because the AI isn't advanced enough to figure out how far away motorcycles are or properly handle jersey barriers at exits.

As an aside, we already have systems for dealing with technology that doesn't work right. Slowing AI isn't needed when people can just sue Tesla over its cars murdering people, and there's even the possibility of actual criminal charges, if gross negligence can be shown.

  • (Score: 2) by SomeGuy on Tuesday May 16 2023, @12:48AM (2 children)

    by SomeGuy (5632) on Tuesday May 16 2023, @12:48AM (#1306480)

Ok, so bring on the meaningless harm. Such as AI-written drivel news stories, advertising and tracking that finds even more creepy and surreal ways to control you, and personalized query results with various levels of misinformation, all while no one questions whether any of it is accurate or even useful.

    We will get to placing AI in mission-critical life-or-death situations later. And idiots will still somehow be ok with it.

  • (Score: 2, Informative) by namefags_are_jerks on Tuesday May 16 2023, @03:42AM

    by namefags_are_jerks (17638) on Tuesday May 16 2023, @03:42AM (#1306508)

Imagine what laws for the Information Superhighway would've been put in place in 1990... Politicians weren't even at the 'series of tubes' level of understanding back then, and would've just copy&pasted from regulating the telephone companies. There likely would have been laws about financial fraud, but nothing about identity theft and DDoS. At the moment everyone's panicking about population manipulation and deepfakes, but the future holds still-to-be-invented threats using GenAI (high-frequency AI trading creating another Flash Crash of 2010..?)

It reminds me of e-scooters here in Australia -- one of our states (NSW) was 'pro-active' and copy&pasted laws from other transportation that e-scoots barely resembled, resulting in them still being banned 20 years later. Another state (QLD) started by allowing controlled trials and watching how people killed themselves, discovering that they did need laws different from those for bicycles, and from the e-scoot laws of other countries, because Australians were using them differently than expected (well, we're dumb boofheads) and the transportation network is different. Even though it killed an (irresponsible..) few, Queensland kept them, and appropriate laws were introduced, like "only allowed on streets with speed limits of 50 km/h and under".

  • (Score: 2) by VLM on Tuesday May 16 2023, @12:15PM

    by VLM (445) on Tuesday May 16 2023, @12:15PM (#1306537)

    This is out of context:

    "we shouldn't regulate ... until we see some meaningful harm that is actually happening, not imaginary scenarios."

    The discussion should be in context how AI fits in compared to climate change, self driving cars, cryptocurrency, lootbox-style pay2win mobile gaming, high carb diets, low carb diets, nuclear power, recent pandemics, experimental vaccines, microplastics, genetic engineering GMO plants, various herbicides and insecticides, etc.

  • (Score: 2) by DadaDoofy on Tuesday May 16 2023, @04:48PM (1 child)

    by DadaDoofy (23827) on Tuesday May 16 2023, @04:48PM (#1306562)

The real question is how much transparency we will see in terms of who is training it and with what information. It will most definitely be used to reinforce specific narratives and deride contradictory ones as "conspiracy theories" and "misinformation", while being sold as an intelligent, unbiased reference.

    • (Score: 3, Insightful) by Freeman on Tuesday May 16 2023, @07:28PM

      by Freeman (732) on Tuesday May 16 2023, @07:28PM (#1306603) Journal

Currently, as far as I can tell, the LLMs have been learning from a very broad range of data, which generally includes the likes of Reddit, 4chan, Twitter, and MySpace posts, along with vast troves of data from around the internet as a whole. When you think about what they trained it on, it suddenly becomes very clear why Microsoft's first iteration turned into a Nazi and their more recent attempt showed good skills at gaslighting and attacking the user.

      --
      Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"