DEC 3 — Human trafficking is the movement of human beings, mostly for forced labour, sexual exploitation, the organ trade or forced criminality, and it affects millions of people worldwide. With the rapid growth of technology and multimedia, traffickers have increasingly turned to social media and other online platforms to locate, groom and recruit potential victims, advertise and sell them, transfer illicit funds, and even monitor their victims.
Additionally, traffickers exploit internet technologies to directly harm victims, leading to the emergence of “cyber-trafficking”, a new form of trafficking where victims are exploited through online means. Cyber-trafficking is further exacerbated by advances in Artificial Intelligence (AI) technologies, which allow traffickers to manipulate appearances and identities, making it easier for them to evade detection and continue to exploit people.
Some traffickers use AI to automate and expand their scope of operations, targeting vulnerable individuals through online platforms and social media. For example, AI-powered social media algorithms enable traffickers to reach potential victims with deceptive ads (e.g., fake job offers, romantic schemes, etc.) while maintaining anonymity. They can also use deepfake technology to create falsified images or videos of victims for online commercial sex markets and pornography. These images are often graphic, disturbing, and non-consensual.
Traffickers also use deepfake images and videos to deceive victims by impersonating a prominent figure to gain their confidence and trust. This method is often used for job scams and sex trafficking.
While the rise of AI has been profoundly exploited by criminal organisations and traffickers to sustain their networks and operate with greater sophistication, it has also offered law enforcement agencies new tools to counter human trafficking. Hence, law enforcement agencies have increasingly adopted AI technologies in their quest to counter human trafficking.
This is a progressive step towards enhancing investigations and disrupting the modus operandi of trafficking operations.
For example, AI can analyse message histories shared in online chatrooms, or exchanged between traffickers and buyers or between traffickers and victims, and examine video footage and photos for signs of trafficking, tasks that have proven labour-intensive and cumbersome when carried out the traditional way.
With the advancement of technologies, AI can perform these tasks and reduce the strain on labour while minimising the psychological burden on law enforcers who are under intense pressure to rescue victims.
However, the effectiveness of using AI for countering human trafficking remains uncertain and raises ethical and legal concerns, risking potential harms to victims of trafficking.
These harms include, but are not limited to, potential violations of data privacy and AI biases that create discriminatory outcomes.
As of now, the Malaysian Parliament has yet to pass a bill addressing human trafficking cases that involve explicit images created through deepfake technology, or one that comprehensively addresses AI-enabled human trafficking.
Unlike the European Union, which published its legislation for AI on 12 July 2024, namely the European Union’s Artificial Intelligence Act (EU AI Act), which came into force on 1 August 2024, Malaysia’s anti-human trafficking laws and regulatory frameworks remain inadequate in the face of the rapid development of AI.
Existing laws such as the Anti-Trafficking in Persons and Anti-Smuggling of Migrants Act 2007 (ATIPSOM), the Penal Code, the Child Act 2001, the Sexual Offences Against Children Act 2017 and the Immigration Act 1959/63 focus primarily on traditional trafficking methods.
The ATIPSOM and the Penal Code, for example, do not address anonymised trafficking operations, AI-manipulated images and identities, or the exploitation of global digital platforms.
Even the Evidence Act 1950 does not adequately capture AI technology and its outputs under section 90A of the Act (which allows the admissibility of documents produced by computers, and of statements contained therein).
What we can rely on for now is the extra-territorial reach of the EU AI Act, which governs the development, deployment, supply and use of AI systems based on their risk level.
Article 2(1)(a) of the Act states that “the EU AI Act applies to providers placing on the market or putting into service AI systems or GPAI models in the EU, irrespective of whether those providers are established or located within the EU or in a third country.”
Malaysian entities may therefore be affected if, for example, they make the output of an AI system available for use in the EU, whether or not they have a physical presence there.
However, the EU AI Act does not apply to AI systems used for military, defence, national security, or personal use.
Given this lacuna in the law relating to the use of AI, it is imperative for the Malaysian government to decide on questions of culpability, the responsible entity, and methods of prosecution where AI is involved.
Who can we place the blame on if traffickers use chatbots, deepfake images, videos or other AI-related technologies to deceive, coerce or abuse the vulnerability of a person? Can we then call this an “offenderless” crime, or do we penalise the programmer for his or her inherent role, which would be tantamount to a breach of natural justice?
At present, laws are enforceable by and against legal persons only. Within this context, human beings, states, businesses, professional bodies, companies and corporations are legal persons and can be brought to court.
Can we now consider machines themselves as legal persons? How, then, can we establish the intention of a robot? Could a robot claim defences currently available to people, such as diminished responsibility, provocation, self-defence, necessity, mental disorder or intoxication, should it begin to malfunction or make flawed decisions? At the moment, there is no recognition of robots as legal persons, so they cannot currently be held liable or culpable for any wrongdoing or harm caused to anyone.
In conclusion, AI has become a double-edged sword in the context of human trafficking. While it offers traffickers new tools to expand their operations, it also empowers law enforcement with the ability to detect and dismantle trafficking networks more efficiently.
AI and algorithms also have the potential to enhance various aspects of criminal justice decision-making. However, lawmakers need to update and amend the current anti-human trafficking legal frameworks to counter human trafficking effectively.
New laws and regulations should focus on harnessing AI while guarding against privacy infringements and the discriminatory biases inherent in AI systems.
They should also cover “offenderless” crimes, where the traffickers themselves are difficult to identify. The law as it stands lacks clarity.
Therefore, it is imperative for states to not only embrace the use of AI in countering human trafficking but also address the culpability of AI when used in trafficking operations.
The author, Dr. Haezreena Begum Abdul Hamid is a criminologist and Deputy Dean (Higher Degree), Faculty of Law, Universiti Malaya, Kuala Lumpur.
* This is the personal opinion of the writer or publication and does not necessarily represent the views of Malay Mail.