Future Tech

OpenAI sets up safety group in wake of high-profile exits

Tan KW
Publish date: Wed, 29 May 2024, 07:51 AM

OpenAI has created a new safety group as it works on the successor to GPT-4 while grappling with the recent departure of high-profile members who criticized its commercial intent.

Named the Safety and Security Committee (SSC), the group is led by OpenAI CEO Sam Altman, Bret Taylor (who will serve as chair), Adam D'Angelo, and Nicole Seligman, all of whom also sit on the board of directors.

Other SSC members include various OpenAI team leaders, among them Jakub Pachocki, who has been chief scientist for just 13 days, having replaced co-founder Ilya Sutskever in the role.

OpenAI says the safety team will advise the board of directors on "critical safety and security decisions" from now on. Those decisions will, presumably, impact the development of the successor to GPT-4, which OpenAI briefly mentions in its announcement as its "next frontier model."

"While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment," the company said, without specifying what it expects to be discussed.

First on the docket is a 90-day period in which the committee will develop safety recommendations for the board's consideration, though the implication is that Altman and his fellow directors have the final say, since they review those recommendations. Naturally, the OpenAI CEO and the other committee leads will also have a chance to influence the recommendations before they even reach the board of directors.

The formation of the new safety committee is likely tied to two high-profile departures earlier this month: those of Sutskever and Jan Leike. Their exits from OpenAI were immediately followed by the dissolution of the company's Superalignment group, which existed to evaluate long-term AI safety concerns. Leike had led the Superalignment team until his departure.

OpenAI also lost Daniel Kokotajlo, a member of its governance team, earlier this month, and co-founder Andrej Karpathy quit in February.

While Sutskever and Karpathy have declined to delve too deeply into the reasoning behind their departures, Leike and Kokotajlo have made it clear that they resigned over disagreements on AI safety.

"Over the past years, safety culture and processes have taken a backseat to shiny products," Leike said the day before the Superalignment team was abolished. "We are long overdue in getting incredibly serious about the implications of AGI… OpenAI must become a safety-first AGI company."

Similarly, Kokotajlo said he "quit OpenAI due to losing confidence that it would behave responsibly around the time of AGI."

AI safety is undoubtedly a contentious issue at OpenAI, and that's probably a key reason why the new Safety and Security Committee was created. Whether the new group will actually satisfy safety advocates, though, remains an open question. ®

https://www.theregister.com/2024/05/28/openai_establishes_new_safety_group/
