The nation’s biggest technology executives on Wednesday loosely endorsed the idea of government regulations for artificial intelligence at an unusual closed-door meeting in the U.S. Senate. But there is little consensus on what regulation would look like, and the political path for legislation is difficult.
Senate Majority Leader Chuck Schumer, who organized the private forum on Capitol Hill as part of a push to legislate artificial intelligence, said he asked everyone in the room — including almost two dozen tech executives, advocates and skeptics — whether government should have a role in the oversight of artificial intelligence, and “every single person raised their hands, even though they had diverse views,” he said.
Among the ideas discussed were whether there should be an independent agency to oversee certain aspects of the rapidly developing technology, how companies could be more transparent and how the United States can stay ahead of China and other countries.
“The key point was really that it’s important for us to have a referee,” said Elon Musk, CEO of Tesla and X, during a break in the daylong forum. “It was a very civilized discussion, actually, among some of the smartest people in the world.”
Schumer will not necessarily take the tech executives’ advice as he works with colleagues on the politically difficult task of ensuring some oversight of the burgeoning sector. But he invited them to the meeting in hopes that they would give senators some realistic direction for meaningful regulation.
Congress should do what it can to maximize AI’s benefits and minimize the negatives, Schumer said, “whether that’s enshrining bias, or the loss of jobs, or even the kind of doomsday scenarios that were mentioned in the room. And only government can be there to put in guardrails.”
Other executives attending the meeting were Meta’s Mark Zuckerberg, former Microsoft CEO Bill Gates and Google CEO Sundar Pichai. Musk said the meeting “might go down in history as being very important for the future of civilization.”
First, though, lawmakers have to agree on whether to regulate, and how.
Congress has a lackluster track record when it comes to regulating new technology, and the industry has grown mostly unchecked by government in the past several decades. Many lawmakers point to the failure to pass legislation addressing social media, such as stricter privacy standards.
Schumer, who has made AI one of his top issues as leader, said regulation of artificial intelligence will be “one of the most difficult issues we can ever take on,” and he listed some of the reasons why: It’s technically complicated, it keeps changing and it “has such a wide, broad effect across the whole world,” he said.
Sparked by the release of ChatGPT less than a year ago, businesses have been clamoring to apply new generative AI tools that can compose human-like passages of text, program computer code and create novel images, audio and video. The hype over such tools has accelerated worries over their potential societal harms and prompted calls for more transparency in how the data behind the new products is collected and used.
Republican Sen. Mike Rounds of South Dakota, who led the meeting with Schumer, said Congress needs to get ahead of fast-moving AI by making sure it continues to develop “on the positive side” while also taking care of potential issues surrounding data transparency and privacy.
“AI is not going away, and it can do some really good things or it can be a real challenge,” Rounds said.
The tech leaders and others outlined their views at the meeting, with each participant getting three minutes to speak on a topic of their choosing. Schumer and Rounds then led a group discussion.
During the discussion, according to attendees who spoke about it, Musk and former Google CEO Eric Schmidt raised existential risks posed by AI, and Zuckerberg brought up the question of closed vs. “open source” AI models. Gates talked about feeding the hungry. IBM CEO Arvind Krishna expressed opposition to proposals favored by other companies that would require licenses.
In terms of a potential new agency for regulation, “that is one of the biggest questions we have to answer and that we will continue to discuss,” Schumer said. Musk said afterward he thinks the creation of a regulatory agency is likely.
Outside the meeting, Google CEO Pichai declined to discuss specifics but generally endorsed the idea of Washington involvement.
“I think it’s important that government plays a role, both on the innovation side and building the right safeguards, and I thought it was a productive discussion,” he said.
Some senators were critical that the public was shut out of the meeting, arguing that the tech executives should testify in public.
Sen. Josh Hawley, R-Mo., said he would not attend what he said was a “giant cocktail party for big tech.” Hawley has introduced legislation with Sen. Richard Blumenthal, D-Conn., to require tech companies to seek licenses for high-risk AI systems.
“I don’t know why we would invite all the biggest monopolists in the world to come and give Congress tips on how to help them make more money and then close it to the public,” Hawley said.
While civil rights and labor groups were also represented at the meeting, some experts worried that Schumer’s event risked emphasizing the concerns of big firms over everyone else.
Sarah Myers West, managing director of the nonprofit AI Now Institute, estimated that the combined net worth of the room Wednesday was $550 billion and it was “hard to envision a room like that in any way meaningfully representing the interests of the broader public.” She did not attend.
In the United States, major tech companies have expressed support for AI regulations, though they don’t necessarily agree on what that means. Similarly, members of Congress agree that legislation is needed, but there is little consensus on what to do.
There is also division, with some members of Congress more worried about overregulation of the industry and others more concerned about the potential risks. Those differences often fall along party lines.
“I am involved in this process in large measure to ensure that we act, but we don’t act more boldly or over-broadly than the circumstances require,” said Republican Sen. Todd Young of Indiana, who helped organize the forum. “We should be skeptical of government, which is why I think it’s important that you got Republicans at the table.”
Some concrete proposals have already been introduced, including legislation by Sen. Amy Klobuchar, D-Minn., that would require disclaimers for AI-generated election ads with deceptive imagery and sounds. Schumer said they discussed “the need to do something fairly immediate” before next year’s presidential election.
Hawley and Blumenthal’s broader approach would create a government oversight authority with the power to audit certain AI systems for harms before granting a license.
Some of those invited to Capitol Hill, such as Musk, have voiced dire concerns evoking popular science fiction about the possibility of humanity losing control to advanced AI systems if the right safeguards are not in place. But the only academic invited to the forum, Deborah Raji, a University of California, Berkeley researcher who has studied algorithmic bias, said she tried to emphasize real-world harms already occurring.
“There was a lot of care to make sure the room was a balanced conversation, or as balanced as it could be,” Raji said. What remains to be seen, she said, is which voices senators will listen to and what priorities they elevate as they work to pass new laws.
Some Republicans have been wary of following the path of the European Union, which signed off in June on the world’s first set of comprehensive rules for artificial intelligence. The EU’s AI Act will govern any product or service that uses an AI system and classify them according to four levels of risk, from minimal to unacceptable.
A group of European corporations has called on EU leaders to rethink the rules, arguing that they could make it harder for companies in the 27-nation bloc to compete with rivals overseas in the use of generative AI.
Microsoft CEO Satya Nadella was also among the attendees.
The power of artificial intelligence - for both good and bad - has been the subject of keen interest from politicians around the world.
In May, Sam Altman, the CEO of OpenAI, the company behind ChatGPT, testified before a US Senate committee, describing the potential pitfalls of the new technology.
ChatGPT and other similar programs can create remarkably human-like answers to questions, but can also be wildly inaccurate.
"I think if this technology goes wrong, it can go quite wrong ... we want to be vocal about that," Altman said. "We want to work with the government to prevent that from happening."
There are fears that the technology could lead to mass layoffs, turbo charge fraud and make misinformation more convincing.
AI companies have also been criticised for training their models on data scraped from the internet without permission or payment to creators.
In April, Musk told the BBC: "I think there should be a regulatory body established for overseeing AI to make sure that it does not present a danger to the public."
In Wednesday's meeting, he said he wanted a "referee" for artificial intelligence.
"I think we'll probably see something happen. I don't know on what timeframe or exactly how it will manifest itself," he told reporters afterward.
Zuckerberg said that Congress "should engage with AI to support innovation and safeguards".
He added it was "better that the standard is set by American companies that can work with our government to shape these models on important issues".
Rounds said it would take time for Congress to act.
"Are we ready to go out and write legislation? Absolutely not," he said. "We're not there."
Democratic Sen. Cory Booker of New Jersey said all participants agreed "the government has a regulatory role," but that crafting legislation would be a challenge.
The clamor for legal guardrails around AI is rising worldwide: China already has AI regulations in place, and the European Union is expected to finalize its first broad AI law after more than two years of debate.