
News

Nearly Half Of OpenAI's AGI Safety Researchers Resign Amid Growing Focus On Commercial Product Development: Report

Author: Benzinga Neuro | August 28, 2024 06:47am

Nearly half of the artificial general intelligence safety researchers at OpenAI have resigned in recent months, according to a former researcher, Daniel Kokotajlo.

What Happened: OpenAI, known for its artificial intelligence assistant ChatGPT, has seen a significant number of its AGI safety staff leave. This includes notable figures like Jan Hendrik Kirchner, Collin Burns, Jeffrey Wu, and cofounder John Schulman, Fortune reported on Tuesday.

The exodus follows the May resignations of chief scientist Ilya Sutskever and Jan Leike, who led the “superalignment” team.

Leike announced his resignation on X, citing that safety concerns had been overshadowed by product development at the San Francisco-based company.

Kokotajlo, who left OpenAI in April 2024, attributed the resignations to OpenAI’s increasing emphasis on commercial products over safety research. He also noted a “chilling effect” on those attempting to publish research on AGI risks, with growing influence from the company’s communications and lobbying wings.

An OpenAI spokesperson stated the company remains committed to providing safe AI systems and engaging in rigorous debates about AI risks, according to the report.

See Also: Telegram Founder Pavel Durov’s Arrest In France Sparks Reactions From Ethereum Creator Vitalik Buterin And Tron Founder Justin Sun

Why It Matters: The mass exodus at OpenAI comes amid a broader debate on AI safety and regulation. Elon Musk, who co-founded OpenAI in 2015 and left in 2018, recently endorsed the SB 1047 AI safety bill in California. Musk’s support contrasts with OpenAI’s preference for federal regulation over state laws.

Adding to the turmoil, OpenAI has been criticized by former employees for its stance against California’s AI regulation bill. William Saunders and Kokotajlo wrote to Governor Gavin Newsom, warning of “catastrophic harm to society” if AI models are developed without adequate safety precautions.

In response to the growing exodus, OpenAI has appointed Irina Kofman, a former Meta Platforms Inc. executive, to drive strategy and focus on safety and preparedness. Kofman will report directly to CTO Mira Murati.

Meanwhile, California lawmakers are poised to vote on SB 1047, despite opposition from major tech companies and some Democrats. If approved, the bill will move to Governor Newsom, who must sign or veto it by the end of September.



This story was generated using Benzinga Neuro and edited by Kaustubh Bagalkote

