CEO behind ChatGPT Warns Congress AI Could Cause ‘Harm to the World’
16:30 JST, May 17, 2023
WASHINGTON – OpenAI chief executive Sam Altman delivered a sobering account of ways artificial intelligence could “cause significant harm to the world” during his first congressional testimony, expressing a willingness to work with nervous lawmakers to address the risks presented by his company’s ChatGPT and other AI tools.
Altman advocated for a number of regulations – including a new government agency charged with creating standards for the field – to address mounting concerns that generative AI could distort reality and create unprecedented safety hazards. The CEO tallied “risky” behaviors presented by technology like ChatGPT, including spreading “one-on-one interactive disinformation” and emotional manipulation. At one point he acknowledged AI could be used to target drone strikes.
“If this technology goes wrong, it can go quite wrong,” Altman said.
Yet in nearly three hours of discussion of potentially catastrophic harms, Altman affirmed that his company will continue to release the technology. He argued that rather than being reckless, OpenAI’s “iterative deployment” of AI models gives institutions time to understand potential threats – a strategic move that puts “relatively weak” and “deeply imperfect” technology in the world to help uncover the associated safety risks.
For weeks, Altman has been on a global goodwill tour, privately meeting with policymakers – including the Biden White House and members of Congress – to address apprehension about the rapid rollout of ChatGPT and other technologies. Tuesday’s hearing marked the first opportunity for the broader public to hear his message, at a moment when Washington is increasingly grappling with ways to regulate a technology that is already upending jobs, empowering scams and spreading falsehoods.
In sharp contrast to contentious hearings with other tech CEOs, including TikTok’s Shou Zi Chew and Meta’s Mark Zuckerberg, lawmakers from both parties gave Altman a relatively warm reception. They appeared to be in listening mode, expressing a broad willingness to consider regulatory proposals from Altman and the two other witnesses at the hearing, IBM executive Christina Montgomery and New York University professor emeritus Gary Marcus.
During the hearing of the Senate Judiciary subcommittee on privacy, technology and the law, members expressed deep fears about the rapid evolution of artificial intelligence, repeatedly suggesting that recent advances could be more transformative than the internet – or as risky as the atomic bomb.
“This is your chance, folks, to tell us how to get this right,” Sen. John Neely Kennedy (R-La.) told the witnesses. “Please use it.”
Lawmakers from both parties expressed openness to the idea of creating a government agency tasked with regulating artificial intelligence, though past attempts to build a specific agency with oversight of Silicon Valley have languished in Congress amid partisan divisions about how to form such a behemoth.
It’s unclear whether such a proposal would gain broad traction with Republicans, who are generally wary of expanding government power. Sen. Josh Hawley of Missouri, the top Republican on the panel, warned that such a body could be “captured by the interests that they’re supposed to regulate.”
Sen. Richard Blumenthal (D-Conn.), who chairs the subcommittee, said Altman’s testimony was a “far cry” from past outings by other top Silicon Valley CEOs, whom lawmakers have criticized for historically declining to endorse specific legislative proposals.
“Sam Altman is night and day compared to other CEOs,” Blumenthal told reporters. “Not just in the words and the rhetoric but in actual actions and his willingness to participate and commit to specific action.” The senator opened the hearing with an audio clip mimicking his voice, which he said was generated by artificial intelligence trained on his floor speeches.
Altman’s appearance comes as Washington policymakers are increasingly waking up to the threat of artificial intelligence, as ChatGPT and other generative AI tools have dazzled the public but unleashed a host of safety concerns. Generative AI, which powers chatbots like ChatGPT and the text-to-image generator Dall-E, creates text, images or sounds, often with human-seeming flair, and has prompted concerns about the proliferation of false information, data privacy, copyright abuses and cybersecurity.
The Biden administration has called AI a key priority, and lawmakers have repeatedly said they want to avoid the same mistakes they’ve made with social media.
Lawmakers expressed regret over their relatively hands-off approach to the tech industry before the 2016 elections. Their first hearing with Zuckerberg did not occur until 2018, by which time Facebook was a mature company already embroiled in scandal after the revelation that Cambridge Analytica siphoned the data of 87 million Facebook users.
Yet despite broad bipartisan agreement that AI presents a threat, lawmakers have not coalesced around rules to govern its use or development. Blumenthal said Tuesday’s hearing “successfully raised” hard questions about AI but had not answered them. Senate Majority Leader Charles E. Schumer (D-N.Y.) has been developing a new AI framework that he says would “deliver transparent, responsible AI while not stifling critical and cutting edge innovation.” But his office has not released any specific bills or commented on when it might be finished.
A group of Democrats – Sens. Amy Klobuchar of Minnesota, Cory Booker of New Jersey and Michael F. Bennet of Colorado, as well as Rep. Yvette D. Clarke of New York – introduced legislation to address the threats generative AI presents to elections. Their Real Political Ads Act would require a disclaimer on political ads that use AI-generated images or video.
Lawmakers displayed uneasiness about generative AI’s potential to influence elections. Hawley, who led the charge to object to the results of the 2020 election on the false premise that some states failed to follow the law, questioned Altman on how generative AI might sway voters, citing research suggesting large language models can predict human survey responses.
“It’s one of my areas of greatest concern – the more general ability of these models to manipulate, to persuade, to provide sort of one-on-one interactive disinformation,” Altman said.
Altman said OpenAI has adopted some policies to address these risks, which include barring the use of ChatGPT for “generating high volumes of campaign materials,” but asked policymakers to consider regulation around AI.
Altman’s rosy reception signals the success of his recent charm offensive, which included a dinner with lawmakers Monday night about artificial intelligence regulation and a private huddle after Tuesday’s hearing with House Speaker Kevin McCarthy (R-Calif.), House Minority Leader Hakeem Jeffries (D-N.Y.) and members of the Congressional Artificial Intelligence Caucus.
About 60 lawmakers from both parties attended the dinner with Altman, where the OpenAI CEO demonstrated different ways they could use ChatGPT, according to a person in the room who spoke on the condition of anonymity to discuss the private dinner.
Lawmakers were amused when Altman prompted ChatGPT to write a speech on behalf of Rep. Mike Johnson (R-La.) about introducing a pretend bill to name a post office after Rep. Ted Lieu (D-Calif.), the person said. Yet the dinner included more serious conversation about how policymakers can ensure the United States leads the world on artificial intelligence.
The sharpest critiques of Altman came from another witness: Marcus, the NYU professor emeritus, who warned the panel that it was confronting a “perfect storm of corporate irresponsibility, widespread deployment, lack of regulation and inherent unreliability.”
Marcus warned that lawmakers should be wary of trusting the tech industry, noting that there are “mind boggling” sums of money at stake and that companies’ missions can “drift.”
Marcus critiqued OpenAI, citing a divergence from its original mission statement to advance AI to “benefit humanity as a whole” unconstrained by financial pressures. Now, Marcus said, the company is “beholden” to its investor Microsoft, and its rapid release of products is putting pressure on other companies – most notably Google parent company Alphabet – to swiftly roll out products too.
“Humanity has taken a back seat,” Marcus said.
In addition to creating a new regulatory agency, Altman proposed creating a set of safety standards for AI models, testing whether they could go rogue and start acting on their own. He also suggested that independent experts could conduct audits, testing the performance of the models on various metrics.
However, Altman sidestepped other suggestions, such as requirements for transparency in the training data that AI models use. OpenAI has been secretive about the data it uses to train its models, while some rivals are building open-source models that allow researchers to scrutinize the training data.
Altman also dodged a call from Sen. Marsha Blackburn (R-Tenn.) to commit to not train OpenAI’s models on artists’ copyrighted works, or to use their voices or likenesses without first receiving their consent. And when Booker asked whether OpenAI would ever put ads in its chatbots, Altman replied, “I wouldn’t say never.”
But even Marcus appeared to soften toward Altman, saying toward the end of the hearing that sitting beside him, “his sincerity in talking about fears is very apparent physically in a way that just doesn’t communicate on the television screen.”
"News Services" POPULAR ARTICLE
-
G-Shock Watchmaker Casio Delays Earnings Release Due to Ransomware Attack
-
North Korea Long-Range Ballistic Missile Test Splashes Down between Japan and Russia (UPDATE 1)
-
Japan’s Nikkei Stock Closes at 2-week Peak as Tech Shares Track Nasdaq Higher (Update 1)
-
Nissan Plans 9,000 Job Cuts, Slashes Annual Profit Outlook
-
Iran Arrests Female Student Who Stripped to Protest Harassment
JN ACCESS RANKING
- Streaming Services Boost Anime Popularity Overseas; Former ‘Geeky’ Interest More Beloved Among Gen Z than 3 Major U.S. Sports
- G20 Sees Soft Landing for Global Economy; Leaders Pledge to Resist Protectionism as Trump Calls for Imported Goods Flat Tariff
- Chinese Rights Lawyer’s Wife Seeks Support in Japan; Sophie Luo Calls for Beijing to Free Ding Jiaxi, Xu Zhiyong
- 2024 POLLS: Ruling Camp Likely to Win Lower House Majority
- ‘Women Over 30 Would Have Uteruses Removed’; Remarks of CPJ Leader, Novelist Naoki Hyakuta Get Wide Attention