Tuesday, June 4, 2024

 

DIGITAL LIFE


The dark side of big tech's AI

Current and former employees of artificial intelligence (AI) companies allege, in an open letter, that companies retaliate against people who raise concerns about the safety of the technology. Signatories include professionals who work, or have worked, at Anthropic, OpenAI, and Google DeepMind.


In the open letter available online, professionals claim that AI companies stifle criticism and oversight, especially as concerns about AI safety have increased in recent months.

The document bears the signatures of 13 professionals (seven named, six anonymous). Of these, seven are former OpenAI employees, four still work at OpenAI, one formerly worked at Google DeepMind, and one is a former Anthropic employee who now works at Google DeepMind.

Additionally, the letter is endorsed by Yoshua Bengio, Geoffrey Hinton, and Stuart Russell. Notably, Hinton formerly worked at Google and is widely known as the "Godfather of AI".

The text states that, in the absence of any effective government oversight, AI companies must commit to principles of open criticism. These principles include avoiding the creation and enforcement of non-disparagement clauses, facilitating a “verifiable” anonymous process for reporting problems, allowing current and former employees to raise concerns with the public, and not retaliating against whistleblowers.

The letter says that while the professionals believe in AI's potential to benefit society, they also see risks such as worsening inequality, manipulation and misinformation, and even the possibility of human extinction. In the United States, the Department of Labor states that workers who report wage, discrimination, safety, fraud, and time-off withholding violations are protected by whistleblower laws, meaning employers cannot fire, demote, or reduce the hours of whistleblowers.

“Some of us reasonably fear various forms of retaliation, given the history of such cases in the industry. We are not the first to encounter or speak about these issues,” the letter says.

The letter's signatories claim that current protections for whistleblowers “are insufficient” because they focus on illegal activities rather than concerns that, they say, are largely unregulated.

Recently, several OpenAI researchers resigned after the company dismantled its "Superalignment" team, which focused on addressing the long-term risks of AI.

Additionally, Ilya Sutskever, the OpenAI co-founder who had championed AI safety at the company, also left. OpenAI now has a new safety team, led by CEO Sam Altman.

Jan Leike, a former researcher, said that "safety culture and processes have taken a backseat to shiny products" at OpenAI.

mundophone

