
The Maastricht Diplomat


The Human Cost of AI in Migration

With the G7's AI Code of Conduct, the UK AI Safety Summit, and the Biden administration's sweeping executive order on AI oversight, there is no doubt that the end of 2023 will be remembered as a turning point in the global regulatory response to Artificial Intelligence. The most striking event is undoubtedly December 8, 2023, which marks the (long-awaited) provisional agreement on what could be the most controversial digital legislation in EU history: yes, the EU AI Act. But before we dive into this Act and its implications for migration, let me take you through its story from its earliest origins.


Ever since its launch in November 2022, the rapid growth of ChatGPT has hogged the spotlight, further integrating AI into our daily lives. The rise of Artificial Intelligence, this sort of 'digital know-it-all genius', marks a new shift in the current 'Age of the Algorithm'. This new friend-of-all-humans allows us to compose music, generate restaurant reviews, or even answer exam questions (which I don't recommend). Yet, despite the excitement generated by its ground-breaking potential, concerns have arisen over its ethical implications. From potential violations of human rights to breaches of privacy, the risks are numerous. These technological developments raise puzzling questions about the kind of society we aspire to build: How far can this 'AI-driven' society reach? Are we heading towards a dystopian 1984-esque reality?


AI in the Migration Context

Migration and asylum are also affected by the rise of the "AI-driven society". EU member states increasingly rely on AI systems to regulate migration and strengthen border security, affecting millions of people fleeing their countries. These systems are integrated at every stage of the migration process: before entry into the European zone, during the entry procedure, throughout a stay, and during the return process. From profiling systems to predictive assessments, these technologies are used in several ways.


Among these various applications of AI in migration, let's first have a look at predictive assessments. If this term sounds unfamiliar, you might know more about it than you think: you have probably already encountered it while using a movie or music streaming platform. Predictive assessments function as a kind of digital crystal ball. They meticulously scrutinise our personal attributes and online decisions to identify patterns and broader behaviours. Through complex algorithms, they compare this information with data from previous users to make predictions about our choices. No surprise, then, that Netflix mostly suggests action movies if you've recently watched one! As far-fetched as it may sound, the use of AI in migration is very similar. By examining data and historical migration trends, predictive analytic tools can 'forecast' border movements. Non-governmental organisations and civil society have rung alarm bells about the dangers these predictive tools pose to the fundamental human rights of migrants, asylum seekers, and other marginalised communities. Certain predictions can perpetuate the idea that specific 'groups of people' represent a threat of irregular migration, potentially resulting in unlawful push-backs and pull-backs and, in some cases, preventing individuals from seeking asylum.
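To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python of the pattern-matching logic described above: group historical records by their attributes, then predict the most common past outcome for a new case. All names and data are invented; no real system is reproduced here.

```python
# A toy predictive assessment: match a new case against patterns in
# historical records and return the most frequent past outcome.
from collections import Counter, defaultdict

# Hypothetical "historical" records: (attribute pattern, observed outcome).
history = [
    (("route_A", "winter"), "high_flow"),
    (("route_A", "winter"), "high_flow"),
    (("route_A", "summer"), "low_flow"),
    (("route_B", "winter"), "low_flow"),
]

# Count past outcomes for each attribute pattern.
outcomes_by_pattern = defaultdict(Counter)
for attributes, outcome in history:
    outcomes_by_pattern[attributes][outcome] += 1

def forecast(attributes):
    """Return the most frequent past outcome for this pattern,
    or None if the pattern has never been seen before."""
    counts = outcomes_by_pattern.get(attributes)
    return counts.most_common(1)[0][0] if counts else None

print(forecast(("route_A", "winter")))  # -> "high_flow"
```

The sketch also makes the article's core worry visible: the forecast can only ever replay whatever patterns the historical records contain, so biased or incomplete records yield biased predictions.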


AI-based profiling systems merit particular attention. Based on predefined risks, AI can be employed to 'filter' regular from irregular migrants, with the ultimate goal of preventing the latter from gaining access to EU territory. Amnesty International has highlighted how these systems exacerbate racist and discriminatory law enforcement against racialised people. Assessing individuals based on predefined characteristics such as geographic location may reveal a person's likely ethnicity, and nationality can work as a 'proxy' for race and religion. The data used to build and develop these systems clearly mirror historical, systemic, institutional, and social biases. Due to these inherent biases, AI-based profiling systems tend to categorise irregular migrants and asylum seekers as "security threats", reflecting a broader "racialised suspicion against migrants".
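The 'proxy' problem is easy to see in a deliberately simplified sketch. The risk flags and weights below are entirely invented for illustration; the point is only that a single nationality-linked feature can dominate an otherwise identical file.

```python
# A hypothetical risk-scoring rule showing how a "neutral" feature can
# act as a proxy for race or religion. All flags and weights are invented.
RISK_WEIGHTS = {
    "incomplete_documents": 1.0,
    "prior_visa_refusal": 2.0,
    # Nationality never names race or religion directly, yet it can
    # correlate strongly with both -- a classic proxy variable.
    "nationality_flagged_region": 5.0,
}

def risk_score(case: dict) -> float:
    """Sum the weights of every flag present in a case file."""
    return sum(w for flag, w in RISK_WEIGHTS.items() if case.get(flag))

# Two applicants with identical files except for nationality:
applicant_a = {"incomplete_documents": True}
applicant_b = {"incomplete_documents": True,
               "nationality_flagged_region": True}
print(risk_score(applicant_a), risk_score(applicant_b))  # 1.0 vs 6.0
```

With one proxy flag, the second applicant's score jumps from 1.0 to 6.0, which is exactly how predefined characteristics end up encoding the biases described above.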


Another very dangerous use of AI in migration involves emotion recognition technologies. By scrutinising specific physical or behavioural characteristics, such as facial expressions or vocal tone, these technologies rest on the idea that reading an individual's emotional state can help assess their credibility. If you find yourself sceptical about this concept, let me assure you that you have every reason to be. Emotion recognition is scientifically dubious and carries serious risks of racial profiling. Several studies have shown that emotions are understood differently depending on the culture, and their meaning can vary from one society to another. Moreover, a 2019 study found no reliable link between facial expressions and inner emotional states. Consequently, the increasing use of these technologies in the migration context is, once again, deeply worrying.
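To see why the premise is fragile, consider a toy sketch of the underlying assumption: a fixed mapping from a measured feature to an emotion label. The feature name and threshold are invented; real systems are far more elaborate, but they rest on the same expression-to-inner-state leap.

```python
# The core assumption of emotion recognition, reduced to a toy rule:
# a measured expression feature maps to an inner emotional state.
def classify_emotion(smile_intensity: float) -> str:
    """Invented rule: above an arbitrary threshold, label as 'happy'."""
    return "happy" if smile_intensity > 0.5 else "neutral"

# The 2019 finding cited above undermines exactly this mapping: the
# same expression can signal different inner states across individuals
# and cultures, so any fixed rule will systematically mislabel people.
print(classify_emotion(0.7))  # "happy" -- only if the premise holds
```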


Now that we have investigated examples of dangerous uses of AI, there is no room for doubt: the regulation of AI, especially in migration, is an urgent matter.


The EU AI Act: A Blind Spot?  

To address this pressing concern regarding AI in general, the European Commission proposed the first-ever legal framework on AI, also known as the "AI Act", in April 2021. As it stands, this legislative proposal aims to regulate "high-risk" Artificial Intelligence and protect the fundamental rights enshrined in the EU Charter of Fundamental Rights, such as the rights to privacy and to the protection of personal data. To do so, it adopts a "risk-based" approach, categorising AI into different risk levels: unacceptable, high, and limited or minimal risk. It is guided by a straightforward principle: the "higher the perceived risk, the stricter the rules". On December 8th, the European Union's co-legislative bodies clinched a political deal on Artificial Intelligence. However, it is not yet time to declare "victory". Although they have agreed on the AI Act, the political agreement has not yet been formally adopted by the Parliament and the Council. Further "technical meetings" are expected in the coming months before the text is finalised. Countries such as France and Spain are determined to continue these discussions to ensure the legislation does not act as a 'brake on innovation'.


As we await the final version, let's have a look at the practices that negotiators have already decided to ban, such as: biometric categorisation systems that use sensitive characteristics (e.g. political, religious, or philosophical beliefs, sexual orientation, race); untargeted scraping of facial images from the internet; and emotion recognition in the workplace and educational institutions.


While this may seem like a step in the right direction, these prohibitions leave important gaps. Firstly, when it comes to emotion recognition, pressure from member states has shifted the European Parliament's initial position. After warnings from civil society about the dangers associated with emotion recognition, the Parliament voted in June 2023 to prohibit it across four contexts: education, the workplace, law enforcement, and migration. Nevertheless, under member states' pressure, the prohibition was dropped for law enforcement and migration. By shifting position, EU negotiators clearly fail to address the harmful impact AI emotion recognition systems have on people on the move, namely migrants and asylum seekers. This decision raises doubts about the legitimacy of the Act and, more specifically, about 'whose rights' the EU genuinely seeks to protect through this provision.


Delving deeper, the EU negotiators' disregard for the impact of AI in the migration context can also be observed in the original 2021 Proposal for an Artificial Intelligence Act. Here, there is no doubt that the "devil is in the details". For instance, Article 83(1) of the proposal specifies that the regulation should not apply to "AI systems part of the large-scale IT systems". If you're wondering what the issue is, let's take a closer look at the term "large-scale IT systems". It encompasses crucial EU migration databases such as Eurodac (European Asylum Dactyloscopy Database) and the upcoming ETIAS (European Travel Information and Authorisation System). The problem is that these large-scale IT systems use AI to collect and process asylum seekers' and refugees' personal and sensitive data, such as their digitised fingerprints, in a way the proposal itself treats, under Article 6(2), as "high risk" to safety and fundamental rights. Despite this evident danger, Article 83 exempts these harmful EU migration databases from regulation, once again highlighting the EU's oversight of the threats posed by AI in migration.



The Future Landscape 

Following the full assessment of the "technical drafts", the forthcoming weeks will shed light on the extent to which the EU AI Act protects people on the move from the worst excesses of surveillance. Updating the EU AI Act is urgent if AI-related harm in the context of migration is to be addressed effectively.


Specifically, a coalition of civil society organisations has called on the EU to:


  1. Ban harmful AI practices in the migration context. This ban would be a legal prohibition of the previously mentioned technologies, such as predictive analytic systems, automated risk assessments, and biometric surveillance.


  2. Regulate all high-risk AI systems in migration. Every AI system employed in migration, such as surveillance technology used in border control and identity checks, would be subject to clear oversight and accountability measures. All such systems should fall into the "high-risk" category.


  3. Ensure the AI Act applies to the EU's huge migration databases. This calls for amending Article 83 so that AI integrated into large-scale EU IT databases falls within the scope of the AI Act, and that essential safeguards apply to the use of AI in the EU migration context.


  4. Make the EU AI Act an instrument of protection. Lawmakers would ensure the EU AI Act empowers people to seek justice and provides public transparency and oversight when police, migration, and national security agencies deploy the most harmful AI systems.



In conclusion, the EU AI Act represents a crucial opportunity to stop the normalisation of AI systems built on racist and discriminatory structures targeting migrants, asylum seekers, and other marginalised groups. Since the European Union was founded on the core values of respect for peace, security, and human dignity, we should aspire for these ideals to apply universally, including in the broader context of migration. Without a system of accountability and transparency, the violence, deaths, and push-backs stemming from EU AI systems will persist and remain invisible. Consequently, EU institutions are urged to ban the use of harmful technology and effectively regulate all AI systems in migration. If the EU AI Act proves ineffective in preventing irreversible harm, it risks compromising its primary goal: safeguarding the fundamental rights and human dignity of all individuals impacted by the use of AI.


