


Reported Earlier: 'Democrats Push Sam Altman On OpenAI's Safety Record; Senators Demanded Answers About Whistleblowers And Conflicts Of Interest' - The Verge

Author: Benzinga Newsdesk | August 08, 2024 02:06pm

https://www.theverge.com/2024/8/8/24216094/openai-sam-altman-warren-trahan-whistleblowers-safety-reviews


Sen. Elizabeth Warren (D-MA) and Rep. Lori Trahan (D-MA) are calling for answers about how OpenAI handles whistleblowers and safety reviews after former employees complained that internal criticism is often stifled.

"Given the discrepancy between your public comments and reports of OpenAI's actions, we request information about OpenAI's whistleblower and conflict of interest protections in order to understand whether federal intervention may be necessary," Warren and Trahan wrote in a letter exclusively shared with The Verge.

The lawmakers cited several instances where OpenAI's safety procedures have been called into question. For example, they said, in 2022, an unreleased version of GPT-4 was being tested in a new version of the Microsoft Bing search engine in India before receiving approval from OpenAI's safety board. They also recalled OpenAI CEO Sam Altman's brief ousting from the company in 2023 as a result of the board's concerns, in part, "over commercializing advances before understanding the consequences."

Warren and Trahan's letter to Altman comes as the company is plagued by a laundry list of safety concerns, which often are at odds with the company's public statements. For instance, an anonymous source told The Washington Post that OpenAI rushed through safety tests, the Superalignment team (which was partly responsible for safety) was dissolved, and a safety executive quit, claiming that "safety culture and processes have taken a backseat to shiny products." Lindsey Held, a spokesperson for OpenAI, denied the claims in The Washington Post's report, saying that the company "didn't cut corners on our safety process, though we recognize the launch was stressful for our teams."

Other lawmakers have also sought answers about the company's safety practices, including a group of senators led by Brian Schatz (D-HI) in July. Warren and Trahan asked for further clarity on OpenAI's responses to that group, including on its creation of a new "Integrity Line" for employees to report concerns.

Meanwhile, OpenAI appears to be on the offensive. In July, the company announced a partnership with Los Alamos National Laboratory to explore how advanced AI models can safely aid in bioscientific research. Just last week, Altman announced via X that OpenAI is collaborating with the US Artificial Intelligence Safety Institute and emphasized that 20 percent of computing resources at the company will be dedicated to safety (a promise originally made to the now-defunct Superalignment team). In the same post, Altman said that OpenAI has removed nondisparagement clauses for employees and provisions allowing the cancellation of vested equity, a key issue in Warren and Trahan's letter.

The letter signals a key policy interest for the lawmakers, who previously introduced bills to expand protections for whistleblowers, like the FTC Whistleblower Act and the SEC Whistleblower Reform Act. It could also serve as a signal to law enforcement agencies that so far have reportedly set their sights on OpenAI's possible antitrust violations and harmful data practices.

Warren and Trahan asked Altman to provide information about how OpenAI's new AI safety hotline for employees is being used and how the company follows up on reports. They also asked for "a detailed accounting" of all the times OpenAI products have "bypassed safety protocols" and in what circumstances a product would be allowed to skip a safety review. The lawmakers are also seeking information on OpenAI's conflicts policy. They asked Altman whether he's been required to divest from any outside holdings and "what specific protections are in place to protect OpenAI from your financial conflicts of interest." They asked Altman to respond by August 22nd.

Warren also notes how vocal Altman has been about his concerns regarding AI. Last year, in front of the Senate, Altman warned that AI's capabilities could be "significantly destabilizing for public safety and national security" and emphasized the impossibility of anticipating every potential abuse or failure of the technology. These warnings seemed to resonate with lawmakers — in OpenAI's home state of California, state Sen. Scott Wiener is pushing for a bill to regulate large language models, including restrictions that would hold companies legally accountable if their AI is used in harmful ways.

Posted In: MSFT
