AI and the Need for Whistleblowing in Tech

29/02/2024

Written by Ed Haswell, WIN's Admin & Communications Assistant


The announcement of Sora, the new text-to-video AI model from the creators of ChatGPT, is a development that some find exciting and full of promise, while others find worrying, largely because of the capacity of these new technologies to misinform and misrepresent in dangerous ways. These concerns sit alongside the industry's negative environmental and human impacts: the extraction of metals from countries with poor human rights records and little power to regulate the industry, and the vast resources (land, water and energy) required to run and maintain these tech systems. At the heart of the debate is the concern that the world has not yet set clear ethical and regulatory standards to guide those developing these new technologies, and that governments seem as yet powerless, whether unwilling or unable, to hold wealthy tech companies to account for the harms that Sora and other AI applications are already causing or are likely to cause.

In December 2020, Timnit Gebru, then co-lead of Google's Ethical AI team, was fired after co-authoring a research paper that highlighted the negative environmental impacts of large AI language models, the likes of which Google uses, and the difficulty of identifying the biases embedded within them.

To run the powerful machines needed to host and enable AI, components must become ever smaller and more efficient, and they increasingly rely on metals that are problematic to obtain. One such metal is cobalt, and the almost insatiable demand for it is aiding a silent genocide in the Democratic Republic of Congo (DRC). At least 25,000 children are forced to mine cobalt with few tools and little or no protective equipment against its toxicity. While the use of child labour is a serious breach of human rights, it is only one of many abuses surrounding this activity.

AI systems learn to make decisions based on the data on which they are trained. However, this data can, and does, include biased human decisions and reflect historical and social inequities. Digital activist and AI researcher Dr. Joy Buolamwini of the MIT Media Lab coined the term "the coded gaze" to describe the bias often seen in AI systems, which are shaped by the priorities and prejudices, both conscious and unconscious, of the people who design them. Research has shown that automated systems used to inform judicial sentencing decisions, for example, produce results that are biased against black people, and that systems used to select targets for online advertising can discriminate on the basis of race and gender (When the Robot Doesn't See Dark Skin, MIT Media Lab).

Such examples show how important whistleblowers and whistleblower protections are in guarding against abuses in the industry, and how valuable guidance is for those working in tech. We urge tech workers in the USA, especially in Silicon Valley, to read the US Tech Workers Guidebook created by Ifeoma Ozoma and funded by Luminate. For those working in the UK, a handbook has been created by TSN, WIN and Protect; Ireland's handbook was created by TSN and Dr. Lauren Kierans. Please click on the hyperlinks provided.

Jennifer Gibson, Legal Director of The Signals Network (a WIN member organisation), speaks expertly about the issue in the web post here.