Can AI’s easy-to-use deepfake voice and visual tools affect state elections in Malaysia? Experts weigh in
Experts in Malaysia call for legal measures to be put in place to deal with artificial intelligence deepfake photos. — Pictures by Yusof Mat Isa and AFP

KUALA LUMPUR, July 12 — The saying "seeing is believing” may no longer be as relevant in the era of artificial intelligence (AI) and its tools.

The technology now allows anyone to generate ultra-realistic images or sound bites, producing believable deepfake content that is starting to erode online trust.

And the scary part is that anyone can do that with automated AI tools.

Unlike the Photoshop era, when a certain level of technical expertise was required to produce a fake image, AI now makes it much easier for anyone to misuse the technology to generate fake content.

For instance, chatbots can be used to spit out falsehoods, while face-swapping apps can put an individual’s face on pornographic videos.

Voice cloning tools have also been used time and again to defraud individuals and corporations of millions through various scams.

But let us first understand what deepfakes are in today’s digital era.

Simply put, deepfakes attempt to simulate content using an individual’s likeness as a source of AI learning.

The key word here is "learning”.

The tools are designed to learn and improve over time, and therefore, we can only expect such fake content to get more realistic with time.

Deepfakes are created from a rich pool of media gathered from the internet, social media and third-party data exchanges.

For example, AI tools can use sound bites from your voice or footage of you to generate content resembling you.

While the use of such tools is becoming more rampant among cybercriminals who often operate scamming syndicates, many experts have warned of AI’s serious consequences on other issues such as elections.

In the US, debate is ongoing about the effects of AI on the upcoming 2024 presidential election.

As Malaysia prepares to hold six state elections on August 12, Malay Mail reached out to AI and legal experts to better understand the possible dangers of the technology if it is left unregulated.

Great technology but very dangerous

According to cybersecurity expert and lawyer Derek Fernandez, AI and its tools amplify human capabilities to a level that can become dangerous.

"It’s a powerful tool to amplify capabilities to write script and analyses weaknesses in applications.”

Fernandez said while such tools can do a lot of good, they can become a threat if they fall into the wrong hands.

"It’s like a knife; it can be used to stab people or perform surgery and save lives.

"The technology is evolving at such a fast pace that the laws and countermeasures aren’t able to catch up with it.”

Looking at the extent of AI’s capabilities, Fernandez said the tools have the potential to be used to manipulate election outcomes through deepfake content.

"If such content circulates online, they may be designed to orchestrate a narrative to change the political landscape of free and fair elections.

"Yes, it may not be happening yet today as the technology is fairly new, but rest assured that it will arrive.”

Therefore, Fernandez said there is an urgent need for relevant regulations to be put in place to tackle such challenges.

According to him, digitalisation has emboldened the criminals.

"The criminals today have vast resources available to pursue their wrongdoings.”

Meanwhile, Fernandez said there is a lack of the cybersecurity resources needed to prevent such crimes.

He said anything that can be digitalised can be replicated and modified; therefore, strong regulations and tools are needed to detect criminal activity and track down offenders.

"If you can turn anything into a byte of data, you will be able to reproduce and replicate it.”

How can it affect elections?

Senior research associate at Khazanah Research Institute Jun-E Tan said the use of AI for campaigning can get complicated as it may be used to generate deepfakes touching on sensitive topics.

"From the content generation point of view, the cost of running disinformation and smear campaigns is reduced remarkably [by using AI].

"Secondly, the recommender algorithms of social media platforms tend to curate and amplify sensational content that generates the most user engagement, which campaigners can strategise to do.”

Tan said campaigners may also micro-target different groups of voters with the specific types of messaging each group is most susceptible to.

She said that as Malaysia is a multicultural society with existing sensitivities, hate speech remains a major concern when it comes to elections.

Citing a recent report by the Centre for Independent Journalism on the 15th general election, she said the findings showed that discussions on race and religion were laced with disinformation and used to spread fear and distrust between communities.

"With the addition of deepfake technologies into the picture, there may be a danger of tensions escalating as the awareness and understanding of the capabilities of such technologies remain low within the wider population.”

Institute of Strategic and International Studies Malaysia senior analyst Farlina Said told Malay Mail that deepfake content may have societal implications, especially in an environment that has political divisions.

"Trust is the most difficult factor in an environment of voice cloning and visual modification.

"A person on the receiving end of content would or should question if the content is real or it isn’t.”

Farlina said consumers’ responses to deepfake content depend on how they interact with it.

"Thus, unpacking the potential impact of such modified content requires examining factors for Malaysia’s digital resilience such as digital literacy, social cohesion online and offline as well as any psycho-social factors.”

Earlier this year, deepfake arrest photos of former US president Donald Trump shocked social media before he faced indictment over the payment of hush money to a woman he allegedly had an affair with.

Regulation efforts need to start fast

According to Fernandez, technology can be a valuable servant but a bad master.

"If we empower it to be a master, it may be hazardous but if we empower it to be a servant, great good can come from it when used correctly.

"The question is how we effectively regulate good use as it remains the biggest and most difficult challenge for the government.”

He said active digitalisation had been allowed and pursued over the years without adequate consideration of security.

Therefore, he said it’s vital to regulate AI in the country to stop criminals from using the tools to their advantage.

"We need laws to regulate the use of AI from a legal and ethical standpoint to protect society.

"The European Union is now in the process of drafting laws to tackle AI use and it may be a good starting point for us as they have done a fairly good job.”

Fernandez said the laws must determine who should be allowed access to AI.

"If we want to open it up to every citizen to use AI, then we need to have ways to know who the users are.”

According to him, the system should be modified to allow user registration to enable the authorities to trace the users should there be any crimes.

"Just like Gmail, the users should sign up and be traceable if one commits cybercrimes.”

Secondly, Fernandez said there must be a condition making it mandatory for all AI content to carry an auto-generated watermark distinguishing it from original content.

He said the regulation has to be rigorous enough to offer a proactive response when it comes to dealing with cybercrimes.

"Imagine a malicious fake AI-generated TikTok video of a politician was made viral during the election campaign, by the time it’s proven the content was fake the damage has already been done.”

Fernandez said Malaysia already has existing laws to tackle some of the issues that may arise from deepfakes, but such legislation needs to be upgraded.

"We need to move fast to have policies that are suitable for the new technology.

"We also need to look into ways how to detect deepfakes.

Fernandez also said the government may use threat assessment technology to utilise data and detect possible threats, as an interim measure to combat cybercrimes committed using AI tools.

Additionally, he said that to ensure compliance, the new regulations should hold tech corporations accountable when they negligently or deliberately allow AI-generated deepfake content that breaks the law.

Malaysians in Selangor, Penang, Negeri Sembilan, Kedah, Kelantan and Terengganu will be heading to the polls on August 12.

The Election Commission has set nomination day for July 29, after which candidates can campaign officially.
