According to new research, cybercriminals are using OpenAI's ChatGPT — in which Microsoft is a major investor — to create Telegram bots that can write malware and steal user data.
Currently, ChatGPT's web interface refuses direct requests to generate a phishing email impersonating a bank or to write malware. However, hackers are working around these restrictions, and underground forums host active discussions on how to use the OpenAI API to bypass ChatGPT's barriers and limitations.
“The majority of this is accomplished by developing Telegram bots that use the API. These bots are advertised in hacking forums to increase their exposure,” according to Check Point Research (CPR).
Previously, the cyber-security firm discovered that cybercriminals were using ChatGPT to improve the code of basic infostealer malware.
Many discussions and studies have examined how cybercriminals are utilising the OpenAI platform, specifically ChatGPT, to generate malicious content such as phishing emails and malware.
The current version of OpenAI's API, as used by third-party applications, contains few anti-abuse safeguards. As a result, it enables the creation of malicious content such as phishing emails and malware code without the limitations or barriers that ChatGPT imposes through its user interface.
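To illustrate the gap described above, here is a minimal sketch of how any third-party tool talks to the OpenAI API directly over HTTPS rather than through the ChatGPT web interface. The endpoint, model name, and parameters below are illustrative assumptions for a generic chat-completion call, not details taken from the CPR report.

```python
import json

# Standard OpenAI chat-completions endpoint; authentication is just a
# bearer token, with none of the web UI's interactive guardrails.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt: str, model: str = "gpt-3.5-turbo") -> dict:
    """Build the JSON payload for a single chat-completion call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def build_headers(api_key: str) -> dict:
    """API access is authenticated with a bearer token, nothing more."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

# A real client would now POST the payload, e.g. with the requests library:
#   requests.post(API_URL, headers=build_headers(key), data=json.dumps(payload))
payload = build_request("Summarise this text ...")
print(json.dumps(payload, indent=2))
```

The point of the sketch is that nothing in this request path involves the conversational front end where OpenAI's content restrictions are most visible; the article's claim is that abusers exploit exactly that difference.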
CPR discovered a cybercriminal advertising a newly created service — a Telegram bot using the OpenAI API with no limitations or restrictions — in an underground forum.
“A cybercriminal wrote a simple script that uses the OpenAI API to circumvent anti-abuse restrictions,” the researchers wrote.
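The relay pattern CPR describes can be sketched abstractly: the bot forwards each incoming Telegram message to the API and returns the model's reply. The structure and function names below are illustrative assumptions, not code from the actual bot, and a stub stands in for the real API client.

```python
from typing import Callable

def make_handler(call_api: Callable[[str], str]) -> Callable[[str], str]:
    """Wrap an API client so each incoming chat message becomes one API call.

    `call_api` stands in for a real OpenAI client carrying the bot
    operator's API key; the bot's end users never touch the ChatGPT
    web interface at all.
    """
    def handle_message(text: str) -> str:
        # Note there is no moderation or filtering step in the relay --
        # that absence is the gap the article says is being exploited.
        return call_api(text)
    return handle_message

# Usage with a stub in place of the real API client:
echo = make_handler(lambda prompt: f"model reply to: {prompt}")
print(echo("hello bot"))
```

Because the bot is just a thin forwarding layer, whatever restrictions apply to the underlying API key are the only restrictions its users face.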
According to the cyber-security firm, Russian cybercriminals have also attempted to circumvent OpenAI’s restrictions to use ChatGPT for malicious purposes.
ChatGPT is gaining popularity among cybercriminals because its AI capabilities let a hacker produce malicious content faster and at lower cost.