Business / The Verge / Tue, 04 Jun 2024

Former OpenAI employees say whistleblower protections are not enough for AI safety


Several former OpenAI employees warned in an open letter that companies developing advanced AI, including OpenAI, stifle criticism and oversight, especially as concerns over AI safety have grown in recent months.

The open letter, signed by 13 former OpenAI employees (six of whom chose to remain anonymous) and endorsed by “Godfather of AI” Geoffrey Hinton, formerly of Google, says that in the absence of effective government oversight, AI companies should commit to principles of open criticism. These principles include avoiding the creation and enforcement of non-disparagement clauses, facilitating a “verifiably” anonymous process for reporting issues, allowing current and former employees to raise concerns with the public, and not retaliating against whistleblowers.

The letter’s authors say that while they believe in AI’s potential to benefit society, they also see risks, such as the entrenchment of inequalities, manipulation and misinformation, and the possibility of human extinction. And while there are important concerns about a machine that could take over the planet, today’s generative AI has more down-to-earth problems, such as copyright violations, the inadvertent sharing of problematic and illegal images, and concerns that it can mimic people’s likenesses and mislead the public.

The letter’s signatories claim that current whistleblower protections “are insufficient” because they focus on illegal activity rather than concerns that, they say, are mostly unregulated. The Department of Labor states that workers who report violations involving wages, discrimination, safety, fraud, or withheld time off are protected by whistleblower laws, meaning employers cannot fire, lay off, demote, or reduce the hours of whistleblowers. “Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry. We are not the first to encounter or speak about these issues,” the letter reads.

Recently, several OpenAI researchers resigned after the company disbanded its “Superalignment” team, which focused on addressing AI’s long-term risks, and after the departure of co-founder Ilya Sutskever, who had championed safety within the company. One former researcher, Jan Leike, said that “safety culture and processes have taken a backseat to shiny products” at OpenAI.
