
Companies must act now to ensure responsible development of artificial intelligence

There is a brutal double irony at play when over 350 artificial intelligence professionals proclaim that “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

First, the 30 May signatories – including the CEOs of Google DeepMind and OpenAI – warning about the end of civilization are the very people and companies responsible for creating this technology in the first place. Second, it is exactly these same companies that have the power to ensure artificial intelligence actually benefits humanity, or at the very least does not do harm.

The human rights community has developed an effective human rights due diligence framework to help companies identify, prevent, and mitigate the potential negative impacts of their products. It is essential that the companies developing new Generative AI products implement such human rights due diligence frameworks now, before it’s too late.  

Generative AI is a broad term describing “creative” algorithms that can themselves generate new content, including images, text, audio, video and even computer code. These algorithms are trained on massive real-world datasets and then use that training to create outputs that are often indistinguishable from “real” data – making it difficult, if not impossible, to tell whether a given piece of content was generated by a person or by an algorithm. To date, Generative AI products have taken three main forms: tools like ChatGPT that generate text, tools like Dall-E, Midjourney and Stable Diffusion that generate images, and tools like Codex and Copilot that generate computer code.

The sudden rise of new Generative AI tools has been unprecedented. The ChatGPT chatbot developed by OpenAI took less than two months to reach 100 million users. This far outpaces the initial growth of popular platforms like TikTok, which took nine months to reach as many people.  

Throughout history, technology has helped to advance human rights, but it has also created harm, often in unpredictable ways. When internet search tools, social media, and mobile technology were first released, and as they grew in adoption and accessibility, it was nearly impossible to predict many of the distressing ways in which these transformative technologies would themselves become drivers and multipliers of human rights abuses around the world. Meta’s role in the 2017 ethnic cleansing of the Rohingya in Myanmar, for example, or the use of almost undetectable spyware to turn mobile phones into 24-hour surveillance machines targeting journalists and human rights defenders, are both consequences of introducing disruptive technologies whose social and political implications had not been given serious consideration.

So what might a human rights-based approach to Generative AI look like, and how might we get there? Three early steps, based on evidence and examples from the recent past, offer an initial framework.

Three steps to more responsible development of artificial intelligence

First, to fulfill their responsibility to respect human rights, companies developing Generative AI tools must immediately implement a rigorous human rights due diligence framework, as laid out in the UN Guiding Principles on Business and Human Rights. This includes proactive and ongoing due diligence to identify actual and potential harms, transparency regarding those harms, and mitigation and remediation where appropriate.

Second, companies developing these technologies must take immediate steps to engage proactively with academics, civil society actors, and community organizations, especially those representing traditionally marginalized communities. Although we cannot predict all the ways in which this new technology may cause or contribute to harm, there is extensive evidence that marginalized communities are the most likely to suffer the consequences. Initial versions of ChatGPT exhibited racial and gender bias, suggesting, for instance, that Indigenous women are “worth” less than people of other races and genders. Active engagement with marginalized communities must be part of the product design and policy development processes, to better understand the potential impact of these new tools. It cannot be an afterthought once companies have already caused or contributed to harm.

Third, the human rights community itself needs to step up. In the absence of regulation to prevent and mitigate the potentially dangerous effects of Generative AI, human rights organizations should take the lead in identifying actual and potential harm. This means building a deep understanding of these tools and developing the research, advocacy, and engagement that anticipate their transformative power.

Complacency in the face of this revolutionary moment is not an option – but neither, for that matter, is cynicism. We all have a stake in ensuring that this powerful new technology is used to benefit humankind. Implementing a human rights-based approach to identifying and responding to harm is a critical first step in this process.

Opinion piece published on Al Jazeera, written by Eliza Campbell, Research, Tech and Inequality, Amnesty International USA, and Michael Kleinman, Director, Silicon Valley Initiative, Amnesty International.

What you can do

TAKE ACTION to ban the use of facial recognition technology. Sign and share our global petition calling for a total ban on the use, development, production, and sale of facial recognition technology for mass surveillance purposes by the police and other government agencies, and a total ban on exports of facial recognition technology systems.

Learn More

Ban dangerous facial recognition technology that amplifies racist policing 

Open Letter: Canadian Government Must Ban Use of Facial Recognition Surveillance by Federal Law Enforcement, Intelligence Agencies

TOP PHOTO: ©Getty Images