TLDR
- Former OpenAI employee William Saunders quit due to concerns about the company’s approach to AI safety.
- Saunders compared OpenAI’s trajectory to that of the Titanic, saying the company prioritizes new products over safety measures.
- He expressed worry about OpenAI’s focus on achieving Artificial General Intelligence while also releasing commercial products.
- Saunders was part of OpenAI’s superalignment team, which was later dissolved.
- Other employees have also left OpenAI to found companies focused on AI safety.
William Saunders, a former member of OpenAI’s superalignment team, has spoken out about his decision to leave the company after three years.
Saunders, who worked on understanding the behavior of AI language models, says he quit because he felt OpenAI was prioritizing product development over implementing adequate safety measures.
In recent interviews, Saunders compared OpenAI’s trajectory to that of the Titanic, suggesting that the company is building impressive technology without enough safeguards in place.
“I really didn’t want to end up working for the Titanic of AI, and so that’s why I resigned,” Saunders stated in a podcast interview.
Saunders expressed concern about OpenAI’s dual focus on achieving Artificial General Intelligence (AGI), broadly defined as AI that can match or exceed human performance across most tasks, while also releasing commercial products.
He believes this combination could lead to rushed development and inadequate safety precautions.
“They’re on this trajectory to change the world, and yet when they release things, their priorities are more like a product company,” Saunders explained.
He added that while there are employees at OpenAI doing good work on understanding and preventing risks, he did not see sufficient prioritization of this work.
The former employee’s concerns extend to the broader AI industry. Saunders was one of 13 former and current employees from OpenAI and Google DeepMind who signed an open letter titled “A Right to Warn.”
The letter emphasized the importance of allowing people within the AI community to speak up about their concerns regarding rapidly developing technology.
Saunders’ departure from OpenAI is not an isolated incident. Other prominent figures have also left the company over similar concerns.
For example, Anthropic, a rival AI company, was founded in 2021 by former OpenAI employees who felt OpenAI wasn’t focused enough on trust and safety.
More recently, Ilya Sutskever, OpenAI’s co-founder and former chief scientist, left in May 2024 and the following month founded Safe Superintelligence Inc., a company dedicated to pursuing advanced AI while ensuring “safety always remains ahead.”
OpenAI has faced criticism and internal turmoil over its approach to AI development. In November 2023, CEO Sam Altman was briefly removed by the board, which said he had not been “consistently candid” in his communications.
Although Altman was reinstated days later, the incident highlighted ongoing tensions within the company.
Despite these concerns, OpenAI continues to push forward with its AI development. The company recently dissolved its superalignment team, the group Saunders was part of, which had been tasked with steering and controlling AI systems that could one day be smarter than humans.