posted by hubie on Monday May 22 2023, @09:13AM   Printer-friendly

Almost 20 years ago, Senator Ted Stevens was widely mocked and ridiculed for referring to the Internet as "a series of tubes," even though he led the Senate Commerce Committee, which was responsible for regulating it. And just a few years ago, members of Congress were mocked for their lack of understanding of Facebook's business model when Mark Zuckerberg testified about the Cambridge Analytica scandal.

Fast forward to this week, when the Senate Judiciary Committee held one of the most productive hearings in Congress in many years, taking up the challenge of how to regulate the emerging AI revolution. This time around, the senators were well-prepared, knowledgeable, and engaged. Over at ACM, Marc Rotenberg, a former Staff Counsel for the Senate Judiciary Committee, has a good assessment of the meeting, noting the highlights and warning signs:

It is easy for a Congressional hearing to spin off in many directions, particularly with a new topic. Senator Blumenthal set out three AI guardrails—transparency, accountability, and limitations on use—that resonated with the AI experts and anchored the discussion. As Senator Blumenthal said at the opening, "This is the first in a series of hearings to write the rules of AI. Our goal is to demystify and hold accountable those new technologies and avoid some of the mistakes of the past."

Congress has struggled in recent years because of increasing polarization. That makes it difficult for members of different parties, even when they agree, to move forward with legislation. In the early days of U.S. AI policy, Dr. Lorraine Kisselburgh and I urged bipartisan support for such initiatives as the OSTP AI Bill of Rights. In January, President Biden called for non-partisan legislation for AI. The Senate hearing on AI was a model of bipartisan cooperation, with members of the two parties expressing similar concerns and looking for opportunities for agreement.

[...] When asked about solutions for privacy, the witnesses tended toward proposals, such as opt-outs and policy notices, that will do little to curb the misuse of AI systems. The key to effective legislation will be to allocate rights and responsibilities for AI developers and users. This allocation will necessarily be asymmetric as those who are designing the big models are far more able to control outcomes and minimize risk than those who will be subject to the outputs. That is why regulation must start where the control is most concentrated. A good model for AI policy is the Universal Guidelines for AI, widely endorsed by AI experts and scientific associations.

[...] The news media is still captivated by tech CEOs. Much of the post-hearing reporting focused on Altman's recommendation to Congress. That is not how democratic institutions operate. Industry support for effective legislation will be welcomed by Congress, but industry does not get the final say. There are still too many closed-door meetings with tech CEOs. Congress must be wary of adopting legislation favored by current industry leaders. There should be more public hearings and opportunities for meaningful public comment on the nation's AI strategy.

TFA also includes arguments on whether we even need legislation and observations on the risk of repeating past mistakes, among other points.


Original Submission

Related Stories

The EU's AI Act Could Have a Chilling Effect on Open Source Efforts, Experts Warn 8 comments

The EU's AI Act could have a chilling effect on open source efforts, experts warn:

The nonpartisan think tank Brookings this week published a piece decrying the bloc's regulation of open source AI, arguing it would create legal liability for general-purpose AI systems while simultaneously undermining their development. Under the EU's draft AI Act, open source developers would have to adhere to guidelines for risk management, data governance, technical documentation and transparency, as well as standards of accuracy and cybersecurity.

If a company were to deploy an open source AI system that led to some disastrous outcome, the author asserts, it's not inconceivable the company could attempt to deflect responsibility by suing the open source developers on which they built their product.

"This could further concentrate power over the future of AI in large technology companies and prevent research that is critical to the public's understanding of AI," Alex Engler, the analyst at Brookings who published the piece, wrote. "In the end, the [E.U.'s] attempt to regulate open-source could create a convoluted set of requirements that endangers open-source AI contributors, likely without improving use of general-purpose AI."

In 2021, the European Commission — the EU's politically independent executive arm — released the text of the AI Act, which aims to promote "trustworthy AI" deployment in the EU. As they solicit input from industry ahead of a vote this fall, EU institutions are seeking to make amendments to the regulations that attempt to balance innovation with accountability. But according to some experts, the AI Act as written would impose onerous requirements on open efforts to develop AI systems.

In a recent example, Stable Diffusion, an open source AI system that generates images from text prompts, was released with a license prohibiting certain types of content. But it quickly found an audience within communities that use such AI tools to create pornographic deepfakes of celebrities.

ChatGPT Broke the EU Plan to Regulate AI 9 comments

Europe's original plan to bring AI under control is no match for the technology's new, shiny chatbot application:

Artificial intelligence's newest sensation — the gabby chatbot-on-steroids ChatGPT — is sending European rulemakers back to the drawing board on how to regulate AI.

[...] The technology has already upended work done by the European Commission, European Parliament and EU Council on the bloc's draft artificial intelligence rulebook, the Artificial Intelligence Act. The regulation, proposed by the Commission in 2021, was designed to ban some AI applications like social scoring, manipulation and some instances of facial recognition. It would also designate some specific uses of AI as "high-risk," binding developers to stricter requirements of transparency, safety and human oversight.

[...] These AIs "are like engines. They are very powerful engines and algorithms that can do quite a number of things and which themselves are not yet allocated to a purpose," said Dragoș Tudorache, a Liberal Romanian lawmaker who, together with S&D Italian lawmaker Brando Benifei, is tasked with shepherding the AI Act through the European Parliament.

Already, the tech has prompted EU institutions to rewrite their draft plans. The EU Council, which represents national capitals, approved its version of the draft AI Act in December, which would entrust the Commission with establishing cybersecurity, transparency and risk-management requirements for general-purpose AIs.

[...] Professionals in sectors like education, employment, banking and law enforcement have to be aware "of what it entails to use this kind of system for purposes that have a significant risk for the fundamental rights of individuals," Benifei said.

Original Submission

“Meaningful Harm” From AI Necessary Before Regulation, says Microsoft Exec 41 comments

As lawmakers worldwide attempt to understand how to regulate rapidly advancing AI technologies, Microsoft chief economist Michael Schwarz told attendees of the World Economic Forum Growth Summit today that "we shouldn't regulate AI until we see some meaningful harm that is actually happening, not imaginary scenarios."

The comments came about 45 minutes into a panel called "Growth Hotspots: Harnessing the Generative AI Revolution." Reacting, another featured speaker, CNN anchor Zain Asher, stopped Schwarz to ask, "Wait, we should wait until we see harm before we regulate it?"

Lawmakers are racing to draft AI regulations that acknowledge harm but don't threaten AI progress. Last year, the US Federal Trade Commission (FTC) warned Congress that lawmakers should exercise "great caution" when drafting AI policy solutions. The FTC regards harms as instances where "AI tools can be inaccurate, biased, and discriminatory by design and incentivize relying on increasingly invasive forms of commercial surveillance." More recently, the White House released a blueprint for an AI Bill of Rights, describing some outcomes of AI use as "deeply harmful," but "not inevitable."

Original Submission

This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Touché) by GlennC on Monday May 22 2023, @12:27PM

    by GlennC (3656) on Monday May 22 2023, @12:27PM (#1307313)

    And the reason why is obvious when you see it. It's even in the summary itself.

    The Senate hearing on AI was a model of bipartisan cooperation, with members of the two parties expressing similar concerns and looking for opportunities for agreement.

    The two sides of the Party can NEVER be seen to work together. Their cheerleaders won't allow it.

    Sorry folks...the world is bigger and more varied than you want it to be. Deal with it.
  • (Score: 3, Interesting) by VLM on Monday May 22 2023, @12:48PM

    by VLM (445) on Monday May 22 2023, @12:48PM (#1307317)

    A good model for AI policy is the Universal Guidelines for AI

    Most people have never read that document, but it's basically a long list of business-as-usual activities for businesses, plus a pinky swear with fingers crossed behind their back that stuff they've been doing for decades with Excel would "never" be done using AI.

    It's hard to imagine anyone believing any of that. Everyone knows it's not "OK" to do everything on that list with Excel or very small Python scripts; likewise, everyone knows they're not going to suddenly stop all of that just because "AI" is a different name for a new computer tool.

    It would be like relying on "The Universal Guidelines for Excel" to prevent the end-result damage to the financial system caused by a generation of people using Lotus 1-2-3.

  • (Score: 3, Insightful) by DannyB on Monday May 22 2023, @04:05PM

    by DannyB (5839) Subscriber Badge on Monday May 22 2023, @04:05PM (#1307345) Journal

    Quick tip: whenever you read: Senate Exploring Solutions to blah blah

    Read it as: Senate Exploding Solutions . . .

    It is against their party platforms to actually get anything useful or productive accomplished.

    If we tell conservatives that the climate is transitioning, they will work to stop it.
  • (Score: 1) by lush7 on Friday May 26 2023, @05:11PM

    by lush7 (18543) on Friday May 26 2023, @05:11PM (#1308333)

    The shit they do when they agree with each other worries me just as much; probably more, a good bit of the time.