Examining Generative AI: Cutting Through the Chaos / Competition Accelerates AI Development, Overrides Caution; Warning Voices Left Behind In Latest Tech Gold Rush
The Yomiuri Shimbun
6:00 JST, November 25, 2024
This is the fourth installment in a series examining how society should deal with generative artificial intelligence (AI).
***
“AI systems with human-competitive intelligence can pose profound risks to society and humanity … Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
These lines are excerpted from an open letter issued in March last year by the U.S. nonprofit organization Future of Life Institute (FLI), as ChatGPT was sweeping the world.
The letter warns of the risk that humans will lose control of AI if development continues on its current path, and calls for a pause in the development of powerful AI systems. It was signed by prominent figures including entrepreneur Elon Musk and historian Yuval Noah Harari, among many others.
The signatory list also includes the names of Japanese researchers. Keio University Prof. Satoshi Kurihara, the president of the Japanese Society for Artificial Intelligence, said: “The hasty release of AI into the world will cause confusion. AI developers and users should stop for a moment and think about the various issues involved.”
University of Tokyo Prof. Emeritus Yoshihiko Nakamura, a robot researcher, said: “Conversations with AI may have a major impact on human intellectual activity. I think we all need to think about this.” Leaders in various fields were growing increasingly concerned about the rapid spread of generative AI.
‘Public will pay price’
However, the movement quickly petered out. OpenAI, Microsoft Corp. and Google LLC released new AI models and related services one after another. The development race intensified amid expectations of a huge future market, and the stock market was abuzz with the generative AI boom.
“In the midst of the huge movement of capital, the open letter was powerless,” Nakamura said.
Just four months after the letter’s release, Musk announced the formation of a new company, xAI, effectively throwing his own signature into the dustbin.
“Businesses can’t stop development as they’re up against rivals. Humans are creatures that can’t stop themselves after all,” Kurihara said, expressing his complex feelings about the development race.
AI developers themselves have expressed a sense of crisis that the rush to release new services has come at the expense of safety measures.
At OpenAI, which became one of the world’s largest startups with the explosive spread of ChatGPT, key executives left one after another, apparently in protest against the company’s shift toward a focus on profit.
“Over the past years, safety culture and processes have taken a backseat to shiny products,” Jan Leike, who was in charge of OpenAI’s safety measures, wrote in a post criticizing its management team on X (formerly Twitter) when he left the company in May. “OpenAI must become a safety-first AGI [artificial general intelligence] company.”
University of Toronto Prof. Emeritus and AI scientist Geoffrey Hinton criticized OpenAI at a press conference held on Oct. 8 when he won the 2024 Nobel Prize in Physics. “Over time, it turned out that [OpenAI Chief Executive Officer] Sam Altman was much less concerned with safety than with profits. And I think that’s unfortunate,” he said.
On Oct. 2, OpenAI announced that it had raised $6.6 billion (about ¥1 trillion) in new funding, a sign of further acceleration of generative AI development. It has also been reported that the company would be reorganized from its current nonprofit-oriented structure to a more profit-oriented one. In the face of business logic, the company’s founding mission to ensure “AGI benefits all of humanity” appears to be wavering.
The FLI’s open letter was endorsed by over 33,000 signatories. FLI Communications Director Ben Cumming is deeply concerned about the development race among AI corporations, including big tech firms.
“Intense competitive pressures will force them [AI corporations] to cut corners safety-wise, and it is the public that will pay the price,” Cumming said.