European parliament building, Brussels, Belgium © Getty Images/Image Source

EU must ban dangerous, AI-powered technologies in historic AI Act 

The European Union (EU) must ban dangerous, AI-powered technologies in the AI Act, Amnesty International said today, as the bloc aims to finalize the world’s first comprehensive AI rulebook this fall.  

Numerous states across the globe have deployed unregulated AI systems to assess welfare claims, monitor public spaces, or determine someone’s likelihood of committing a crime. These technologies are often branded as ‘technical fixes’ for structural issues such as poverty, sexism and discrimination. They use sensitive and often staggering amounts of data, which are fed into automated systems to decide whether or not individuals should receive housing, benefits, healthcare and education — or even be charged with a crime.  


Yet instead of fixing societal problems, many AI systems have flagrantly amplified racism and inequalities, and perpetuated human rights harms and discrimination.  

“These systems are not used to improve people’s access to welfare, they are used to cut costs. And when you already have systemic racism and discrimination, these technologies amplify harms against marginalized communities at much greater scale and speed,” said Mher Hakobyan, Amnesty International’s Advocacy Advisor on AI Regulation. 

“Instead of focusing disproportionately on the ‘existential threats’ posed by AI, EU lawmakers should formulate laws that address existing problems, such as the fact that these technologies are used for making grossly discriminatory decisions that undermine access to basic human rights.”  

Cruelly deprived of childcare benefits 

In 2021, Amnesty International documented how an AI system used by the Dutch tax authorities had racially profiled recipients of childcare benefits. The tool was supposed to ascertain whether benefit claims were genuine or fraudulent, but the system wrongly penalized thousands of parents from low-income and immigrant backgrounds, plunging them into exorbitant debt and poverty. 


Batya Brown, who was falsely accused of fraud by the Dutch childcare benefits system, said the Dutch tax authorities demanded she repay hundreds of thousands of euros. She became entangled in a web of bureaucracy and financial anxiety. Years later, justice remains out of reach.  

“It was so strange. I got a letter stating that I had wrongly been given childcare benefits. And I thought, ‘How can that be?’ I was in my early 20s. I didn’t know much about the tax authorities. I found myself in this world of paperwork. I just saw everything slipping away. Since we’ve been acknowledged as victims of what I call the ‘benefits crime’, even four years later, we’re still being treated as a number,” said Batya Brown. 
 
“The Dutch childcare benefits scandal must serve as a warning to EU lawmakers. Using AI systems to monitor the provision of essential benefits can lead to devastating consequences for marginalized communities. Social scoring, profiling and risk assessment systems must all be banned in the AI Act, whether they are used to police recipients of welfare protection, ‘predict’ the probability of committing a crime, or decide on asylum claims,” said Mher Hakobyan.  

Ban use and export of intrusive surveillance systems 

Under the guise of ‘national security’, facial recognition systems are becoming a go-to tool for governments seeking to excessively surveil individuals in society. Law enforcement agencies deploy these systems in public spaces to identify individuals who may have committed a crime, despite the risk of wrongful arrest. 

Amnesty International, as part of a coalition of more than 155 organizations, has called for a full ban on live and retrospective facial recognition in publicly accessible spaces in the EU, including border areas and around detention facilities, by all actors and without exception.  

In places including New York, Hyderabad, and the Occupied Palestinian Territories (OPT), Amnesty International has documented and exposed how facial recognition systems accelerate existing systems of control and discrimination.  

In the OPT, the Israeli authorities are using facial recognition to police and control Palestinians, restricting their freedom of movement and their ability to access basic rights.  

Amnesty International’s research has also revealed how cameras made by TKH Security, a Dutch company, are being used as part of the surveillance apparatus in occupied East Jerusalem.  

“Besides ensuring a full ban on facial recognition within the EU, lawmakers must ensure that this and other highly problematic technologies banned within the EU are not manufactured in the bloc, only to be exported to countries where they are used to commit serious human rights violations. The EU and its Member States have obligations under international law to ensure that companies within their jurisdictions do not profit from human rights abuses by exporting technologies used for mass surveillance and racist policing,” said Mher Hakobyan.  

AI technology facilitates abuse of migrants  

EU member states have increasingly resorted to using opaque and hostile technologies to facilitate abuses of migrants, refugees and asylum seekers at their borders.    

Lawmakers must ban racist profiling and risk assessment systems, which label migrants and asylum seekers as ‘threats’, as well as forecasting technologies, which are used to predict border movements and deny people the right to asylum.  

“Every time you pass through an airport, every time you cross a border, every time you apply for a job, you’re subject to the decisions of these models. We don’t have to get to the point of Terminator or the Matrix for these threats to be existential. For people, it’s existential if it’s taking away your life chances and your livelihoods,” said Alex Hanna, Director of Research at the Distributed AI Research Institute (DAIR).  

AI Act must not give Big Tech power to self-regulate 

Big Tech companies have also lobbied to introduce loopholes in the AI Act’s risk classification process, which would allow them to determine for themselves whether their technologies should be classified as ‘high risk’.  

“It is crucial that the EU adopts legislation on AI that protects and promotes human rights. Granting Big Tech companies the power to self-regulate seriously undermines the main aims of the AI Act, including protecting people from human rights abuses. The solution here is very simple – go back to the original proposal of the European Commission, which provides a clear list of scenarios where use of an AI tool would be considered high-risk,” said Mher Hakobyan. 

Background  

Amnesty International, as part of a coalition of civil society organizations led by the European Digital Rights Network (EDRi), has been calling for EU artificial intelligence regulation that protects and promotes human rights, including rights of people on the move. 

High-level trilateral negotiations, known as Trilogues, between the European Parliament, the Council of the EU (representing the 27 Member States of the EU), and the European Commission are set to take place in October, with the aim of adopting the AI Act by the end of the current EU mandate in 2024.  
