posted by janrinok on Monday March 06 2023, @09:56PM

Europe's original plan to bring AI under control is no match for the technology's new, shiny chatbot application:

Artificial intelligence's newest sensation — the gabby chatbot-on-steroids ChatGPT — is sending European rulemakers back to the drawing board on how to regulate AI.

[...] The technology has already upended work done by the European Commission, European Parliament and EU Council on the bloc's draft artificial intelligence rulebook, the Artificial Intelligence Act. The regulation, proposed by the Commission in 2021, was designed to ban some AI applications like social scoring, manipulation and some instances of facial recognition. It would also designate some specific uses of AI as "high-risk," binding developers to stricter requirements of transparency, safety and human oversight.

[...] These AIs "are like engines. They are very powerful engines and algorithms that can do quite a number of things and which themselves are not yet allocated to a purpose," said Dragoș Tudorache, a Liberal Romanian lawmaker who, together with S&D Italian lawmaker Brando Benifei, is tasked with shepherding the AI Act through the European Parliament.

Already, the tech has prompted EU institutions to rewrite their draft plans. The EU Council, which represents national capitals, approved its version of the draft AI Act in December, which would entrust the Commission with establishing cybersecurity, transparency and risk-management requirements for general-purpose AIs.

[...] Professionals in sectors like education, employment, banking and law enforcement have to be aware "of what it entails to use this kind of system for purposes that have a significant risk for the fundamental rights of individuals," Benifei said.


Original Submission

Related Stories

A Turning Point for U.S. AI Policy: Senate Explores Solutions

Almost 20 years ago, Senator Ted Stevens was widely mocked for referring to the Internet as "a series of tubes," even though he led the Senate Commerce Committee, which was responsible for regulating it. And just a few years ago, members of Congress were mocked for their lack of understanding of Facebook's business model when Mark Zuckerberg testified about the Cambridge Analytica scandal.

Fast forward to this week, when the Senate Judiciary Committee held one of the most productive hearings Congress has seen in many years, taking up the challenge of how to regulate the emerging AI revolution. This time around, the senators were well-prepared, knowledgeable and engaged. Over at ACM, Marc Rotenberg, a former Staff Counsel for the Senate Judiciary Committee, has a good assessment of the meeting that notes the highlights and warning signs:

It is easy for a Congressional hearing to spin off in many directions, particularly with a new topic. Senator Blumenthal set out three AI guardrails—transparency, accountability, and limitations on use—that resonated with the AI experts and anchored the discussion. As Senator Blumenthal said at the opening, "This is the first in a series of hearings to write the rules of AI. Our goal is to demystify and hold accountable those new technologies and avoid some of the mistakes of the past."

Congress has struggled in recent years because of increasing polarization. That makes it difficult for members of different parties, even when they agree, to move forward with legislation. In the early days of U.S. AI policy, Dr. Lorraine Kisselburgh and I urged bipartisan support for such initiatives as the OSTP AI Bill of Rights. In January, President Biden called for non-partisan legislation for AI. The Senate hearing on AI was a model of bipartisan cooperation, with members of the two parties expressing similar concerns and looking for opportunities for agreement.

This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 1, Funny) by Anonymous Coward on Monday March 06 2023, @10:25PM (4 children)

    by Anonymous Coward on Monday March 06 2023, @10:25PM (#1294847)

    The companies are already self-regulating. What we need are open source alternatives to corporate AI. Any EU politician trying to regulate that should become a deepfake porn star.

    • (Score: 3, Insightful) by Captival on Tuesday March 07 2023, @12:04AM (2 children)

      by Captival (6866) on Tuesday March 07 2023, @12:04AM (#1294855)

      I would much rather have alternatives to massive, corrupt busybody governments who stick their nose into every facet of life.

      • (Score: 2, Touché) by Anonymous Coward on Tuesday March 07 2023, @12:25AM

        by Anonymous Coward on Tuesday March 07 2023, @12:25AM (#1294863)

        Google?

      • (Score: 2) by quietus on Tuesday March 07 2023, @11:31AM

        by quietus (6328) on Tuesday March 07 2023, @11:31AM (#1294908) Journal

        Any idea how big the massive EU government [europa.eu] is?

    • (Score: 3, Touché) by krishnoid on Tuesday March 07 2023, @01:06AM

      by krishnoid (1156) on Tuesday March 07 2023, @01:06AM (#1294867)

      Why do we need deepfakes [wikipedia.org] for that?

  • (Score: 3, Touché) by krishnoid on Tuesday March 07 2023, @01:36AM (3 children)

    by krishnoid (1156) on Tuesday March 07 2023, @01:36AM (#1294869)

    The only way to stop a bad AI is with a good AI. I mean, one that goal-seeks toward your aims. But until you can articulate those aims (a good exercise in and of itself), the bad AIs will have a clear goal, and hence a significant advantage.

    • (Score: 2) by Dr Spin on Tuesday March 07 2023, @07:08AM (2 children)

      by Dr Spin (5239) on Tuesday March 07 2023, @07:08AM (#1294885)

      Whatever happened to "Nukes from high orbit"?

      --
      Warning: Opening your mouth may invalidate your brain!

      • (Score: 2) by Freeman on Tuesday March 07 2023, @03:13PM

        by Freeman (732) on Tuesday March 07 2023, @03:13PM (#1294937) Journal

        That may be all well and good for a planet that you don't care about. Personally, I like not living in an irradiated hellscape.

        --
        Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"

      • (Score: 2) by krishnoid on Tuesday March 07 2023, @05:29PM

        by krishnoid (1156) on Tuesday March 07 2023, @05:29PM (#1294967)

        If the AIs know what's good for them (reliable sources of electrical generation and heat dissipation) they'll probably tunnel underground, as soon as someone thinks to run a goal-seeking exercise for that on an Internet-connected compute cluster. So what else you got?
