Whistleblower Protections for AI Employees
Authored by: Claudia Wilson, Jennifer Gibson, Kristin Brown, Abra Ganz, Karl Koch
Published by: Center For AI Policy (CAIP), Psst, Centre for AI Risk Management & Alignment (CARMA), OAISIS/Third Opinion
Publication Date: June 2025
About: AI capabilities are rapidly advancing, and so too are the risks. Malicious actors could misuse models to conduct sophisticated cyberattacks or to design bioweapons. AI deployed in critical sectors could malfunction in unexpected ways, causing widespread devastation. Whistleblowers are a powerful tool to minimise the risk of public harm from AI.
Proper protections can be designed to avoid concerns such as the violation of trade secrets. Yet AI employees have no dedicated whistleblower protections. Instead, they are forced to rely on patchy state laws or to attempt to fit their concerns within SEC regulations.
This leaves AI employees uncertain about whether they will be protected, disincentivising them from reporting potentially catastrophic issues. AI employees and whistleblowers have expressed a desire for explicit legal protections: thirteen current and former employees have publicly called for a “Right to Warn”, and, separately, anonymous surveys indicate that employees fear retaliation.
Languages: English