SINGAPORE, Feb 11 — Several information technology (IT) experts are warning that sophisticated chatbots such as OpenAI's ChatGPT and Google’s Bard could soon be used by criminals to scam, hack and carry out malicious attacks.
And it is only "a matter of time" before Singapore is the target of such an attack, one expert added.
These chatbots, which use "large language models" to produce more credible content, could make scams harder to detect, because poorly written text with bad grammar has long been a key giveaway of scam messages.
ChatGPT and Bard could fix grammatical errors and iron out awkward phrasing to make the scams more convincing, even in languages other than English.
On Wednesday, in response to TODAY's queries, Singapore police said they were monitoring the possible use of ChatGPT by criminals, noting that the tool was "a relatively new development".
Study: Cyberattack aided by ChatGPT likely within a year
A study released earlier this year by BlackBerry, a software and cybersecurity company headquartered in Canada, found that 51 per cent of the 1,500 IT professionals surveyed across multiple countries believe that a successful cyberattack will be credited to ChatGPT in less than a year.
BlackBerry's threat research and intelligence team identified Singapore as one of the 10 countries suffering the most cyberattacks.
In just 90 days between September 1 and November 30, 2022, the firm's AI cybersecurity software stopped about 10,300 malware-based cyberattacks in Singapore, which translates to about 113 a day, or almost five an hour.
That was out of a total of about 1.75 million such attacks worldwide in that period.
In an interview with TODAY, Jonathan Jackson, a BlackBerry director of engineering (Asia Pacific), said it is "only a matter of time" before Singapore suffers a cyberattack using ChatGPT.
"While there haven't been any known reported cases (of cyberattacks due to ChatGPT) in Singapore yet... popular underground forums are full of threat actors sharing samples of code to encrypt data, steal information and evade detection, all by using ChatGPT's capability."
Chatbots offer plenty of scope for scammers, hackers
One expert cited an example of ChatGPT’s use in an actual scam.
"There have been reports of ChatGPT being used to generate phishing emails... there have also been reports in underground forums where some attackers created a password stealer in Python with ChatGPT," said Stas Protassov, co-founder of Acronis, a cybersecurity and data protection company.
However, in general, the experts said that sophisticated AI tools such as ChatGPT and Bard are still in their early stages.
That means that cybercriminals are still figuring out how to use them, and that new and unforeseen devious techniques employing these chatbots could emerge in the months ahead. The experts flagged several possibilities:
• Phishing ― Sophisticated AI chatbots can be used to create highly convincing phishing messages, which mimic legitimate sources to trick people into revealing sensitive information, such as banking login details. These messages can even be personalised, as ChatGPT can generate text in a wide variety of languages.
• Impersonation ― As ChatGPT has been trained on large amounts of data, it could be used to impersonate specific individuals or organisations and trick people into falling for scams, such as an urgent request for money from a long-lost friend, or a request for a company password from your boss.
• Fake news ― AI language models can generate misleading or false information to manipulate public opinion or cause harm. For example, ChatGPT could produce dubious content or even create entire fake news sites within minutes.
• Hacking ― ChatGPT could analyse the source code of hacking targets, such as websites, to find weaknesses to exploit. It could also write automated scripts, for example to "brute force" stolen or leaked credentials by trying each until one works.
Research project: Identifying AI-written content a problem
A research project published in January this year by researchers from WithSecure, a cybersecurity company headquartered in Finland, concluded that identifying malicious content written by AI would be a problem.
The project explored how AI language-generation tools could be misused and tested ChatGPT on a variety of possible uses, such as creating phishing content and fake news articles.
The researchers conducted experiments showing that "large language models" such as ChatGPT could be used:
• To craft email threads suitable for phishing attacks
• To "deepfake", or very effectively mimic, a person's writing style
• To incorporate opinion into existing written content
• To craft convincing fake articles even if relevant information is not included in the model's training data
What can be done
While there is a high possibility that ChatGPT will be used in various ways to make scams and hacks more threatening, there is some good news.
"Arguably, we are still at the early-adopter stage; even cybercriminals are still figuring out how to use it."
"This means that there is a small window of opportunity for policy makers, tech developers... all parts of society to get ahead, to ensure that loopholes in such chatbots cannot be weaponised, and to put cybersecurity and policy-level safeguards in place," said Dr Lee.
The experts also pointed out that scams generally rely on individuals making rushed decisions, driven by panic or the fear of missing out. For example, someone claiming to know you may insist that you send them money urgently because of an emergency.
As such, they advised people to slow down and double-check: watch out for the sender names in email headers and the URLs of links, and verify the sources of any information.
Additionally, cybersecurity apps like ScamShield, which blocks scam messages and calls, can be downloaded.
Two-factor authentication, which allows users to access websites or applications only after verifying their identity through two separate factors, such as a password entered on a laptop plus a one-time code generated on a phone, can also be used to further increase security.
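For readers curious about how that second factor works under the hood, the sketch below is a minimal, illustrative implementation of the time-based one-time password (TOTP) algorithm used by many authenticator apps; the shared secret shown is a placeholder for illustration, and real services handle enrolment and verification on their own servers.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password (RFC 6238)."""
    # Both the server and the user's authenticator app hold this secret,
    # exchanged once at enrolment (often via a QR code).
    key = base64.b32decode(secret_b32, casefold=True)
    # The "moving factor" is the current 30-second time window.
    counter = int(time.time()) // interval
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: take 4 bytes from the digest, keep 6 decimal digits.
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Placeholder secret for illustration only; never reuse a published secret.
shared_secret = "JBSWY3DPEHPK3PXP"
print(totp(shared_secret))  # prints the same 6-digit code the app would show
```

Because the code depends on both the secret and the current time window, a scammer who steals a password alone cannot log in, and an intercepted code expires within about half a minute.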
Of the risks posed by ChatGPT, Protassov said: "On the other hand, OpenAI constantly updates and adjusts ChatGPT to reduce the likelihood of it being used to cause harm, both in physical and virtual worlds." ― TODAY