With the advent of ChatGPT, employees are using it for content needs across business operations such as email marketing, client communication, and coding suggestions. You can easily generate website copy, write product descriptions, create quests for games, and more. At the same time, developers are embedding ChatGPT APIs into applications to automate processes and meet content needs, and that integration is introducing security vulnerabilities into those tools. BlackBerry’s research on IT leaders and CISOs reveals that within the next 12 to 24 months, AI-powered cyber attacks could leverage ChatGPT to target organizations.
Threat actors are constantly exploiting tools that are either built on ChatGPT or employ similar machine learning models. The vulnerabilities in these tools let hackers breach organizations’ systems and steal crucial data. ChatGPT has no doubt proved to be of great use to employees, but its natural language processing capabilities also make it a convenient tool for experimenting with cyber attack vectors. The same BlackBerry report states that a majority of IT professionals (53%) believe hackers will use ChatGPT to create more convincing and authentic-looking phishing emails.
Let’s look at one such example of a phishing email developed with the help of ChatGPT:
Why Has ChatGPT Become Cybercriminals’ Favorite?
ChatGPT’s ability to produce or answer almost anything has made it a prime medium for hackers to experiment with their attack methods. As the example above shows, hackers are using it to create sophisticated emails. In research conducted by Check Point, analysts found an underground forum thread titled “ChatGPT – Benefits of Malware”. In that thread, a participant shared Python code that seeks out typical file formats, copies them to an arbitrary directory within the “Temp” folder, compresses them into a ZIP archive, and transfers them to a predetermined FTP server.
In fact, according to one report, threat actors are exploiting ChatGPT’s popularity on Facebook to distribute malware: 13 Facebook pages and accounts with a combined following of over 500,000 have been implicated in spreading malware in 2023. Threat actors occasionally target recently created accounts as well, some less than a day old.
Through these compromised accounts, threat actors post videos, individually or in combination, to draw in and retain an audience. So far, researchers have discovered at least 25 websites imitating the official OpenAI website.
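A simple defense against such lookalike domains is to check a link’s actual hostname against a short allowlist before trusting it. Below is a minimal sketch in Python; the allowlist contents are an illustrative assumption, and a real deployment would maintain its own list:

```python
from urllib.parse import urlparse

# Illustrative allowlist (assumption): an organization would maintain its own.
TRUSTED_HOSTS = {"openai.com"}

def is_trusted(url: str) -> bool:
    """Accept only exact matches or true subdomains of trusted hosts."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == t or host.endswith("." + t) for t in TRUSTED_HOSTS)

print(is_trusted("https://chat.openai.com/"))            # True: real subdomain
print(is_trusted("https://openai.com.example-app.io/"))  # False: lookalike domain
```

Note that the check parses the hostname rather than searching for a substring, which is exactly the weakness lookalike domains such as the second example are designed to exploit.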
How Are Scammers Using ChatGPT as a Route for Cyber Attacks?
Scammers have long struggled to make their communications appear authentic and realistic because they lacked the necessary command of language and culture; ChatGPT removes that barrier. Experts from Kaspersky have discovered a malicious campaign that takes advantage of ChatGPT’s rising popularity. They report that hackers are using Facebook groups to spread malware disguised as a phony desktop version of ChatGPT.
This fake version is nothing more than the Fobo Trojan, which deceives users about its genuine purpose.
How Are Scammers Targeting Users?
Scammers establish social media groups that closely resemble official OpenAI accounts or communities of ChatGPT enthusiasts. These groups distribute compelling content, such as claims that ChatGPT reached 1 million users faster than any other platform. At the end of each post is a link that supposedly offers a ChatGPT desktop application for download.
According to the Times of India, police cyber cell officials following a phishing trail left by threat actors reached Gujarat in February 2023. They said the scammers had used ChatGPT to compose their phishing emails and spoofed SMS messages.
According to reports on The Hacker News, a fake ChatGPT-branded Chrome extension has been harming users who installed it. The fraudulent extension slipped through the vetting process and made its way onto the Chrome Web Store, with unfortunate consequences for unsuspecting users.
An Example Scenario Where ChatGPT Was Employed in a Cyber Attack
The following is a scenario in which a threat actor uses ChatGPT as the vessel for a social engineering attack:
- The attackers craft a phony SMS, email, or advertisement promoting a fictitious ChatGPT version for Windows and post it online for distribution.
- The post’s link redirects visitors to a website that closely resembles the official ChatGPT website. These sites are actively promoted through Facebook pages that use the OpenAI logo to deceive users and lead them to the malicious page.
- The website prompts visitors to download what it claims to be ChatGPT for Windows; what they actually receive is an archive containing an executable file.
- Users who continue with the installation see an error message claiming the program could not be installed, but the installation proceeds without their knowledge.
- During installation, the attackers deploy a new stealer Trojan known as Trojan-PSW.Win64.Fobo on the user’s PC. The Trojan is specifically designed to steal saved account credentials from popular browsers such as Chrome, Firefox, Edge, and Brave.
- In addition to stealing login credentials, the Trojan attempts to harvest further details from hijacked business accounts, such as the advertising balance and current account funds.
Another notable abuse of ChatGPT is the creation of scripts traded on the dark web. According to the analysis, several hackers with little or no coding experience were using it to write code, and that code is being used for phishing, ransomware, and other malicious tasks.
In one instance, on a forum frequented by other cybercriminals, a malware creator revealed his experiments with ChatGPT to see whether he could replicate well-known malware strains and techniques.
One example of his work was a Python-based information stealer he created using ChatGPT; he shared code for searching, copying, and exfiltrating common files, including PDFs, photos, and documents, from an infected desktop. The same malware creator also demonstrated how he had used ChatGPT to write Java code that downloads the PuTTY SSH and Telnet client and covertly runs it on a device via PowerShell.
ChatGPT: A Powerful AI Tool with a Dual Nature
Many experts have warned that ChatGPT could be abused by malicious actors, with detrimental consequences. It is crucial for developers to continually train and enhance their AI models to recognize requests that may serve malicious purposes, making it harder for cybercriminals to exploit the technology for nefarious ends.
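On the application side, developers who embed ChatGPT APIs can add a first line of defense by screening prompts with OpenAI’s moderation endpoint before forwarding them to the model. The sketch below, using the official openai Python SDK, is a minimal illustration; the blocking policy and example prompt are assumptions, and moderation alone is not a complete safeguard:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def is_flagged(prompt: str) -> bool:
    """Ask OpenAI's moderation endpoint whether the prompt violates policy."""
    response = client.moderations.create(input=prompt)
    return response.results[0].flagged

user_prompt = "Write an urgent password-reset email impersonating our IT department."
if is_flagged(user_prompt):
    print("Prompt blocked and logged for review.")
else:
    print("Prompt passed moderation; forwarding to the model.")
```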
The threat of hackers employing ChatGPT to develop attack vectors cannot be taken lightly, as it can have widespread impacts on individuals, businesses, and entire industries. As technology continues to evolve, these attacks may become more sophisticated and difficult to detect. It is crucial for individuals and organizations to take proactive security measures, such as regularly changing passwords, enabling multi-factor authentication (MFA), and staying vigilant against phishing attacks.
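Of those measures, MFA is especially effective against ChatGPT-polished phishing, because a stolen password alone is no longer enough to log in. As an illustration, here is a minimal time-based one-time password (TOTP) flow using the third-party pyotp library; the account names are placeholders:

```python
import pyotp

# At enrollment: generate a per-user secret and store it server-side.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Shown once to the user, typically as a QR code for an authenticator app.
print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleCorp"))

# At login: the password check alone is not enough; the user must also
# supply the current 6-digit code from their authenticator app.
code = input("Enter the code from your authenticator app: ")
print("Access granted" if totp.verify(code) else "Access denied")
```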
Collaboration and collective action are key in addressing the potential threats associated with the use of AI tools like ChatGPT. By working together and implementing necessary security measures, we can reduce the risks posed by cybercrime and safeguard the responsible use of AI technology in the future.
FAQs: ChatGPT: Hackers’ New Toy to Develop Attack Vectors
Are hackers really using ChatGPT to create malware and phishing emails?
Yes, hackers are using ChatGPT to create malicious code and phishing emails. In one case, a security researcher tricked ChatGPT into building sophisticated data-stealing malware, eluding its anti-abuse protections; it took him about four hours to create a working piece of malware with zero detections on VirusTotal.
Is ChatGPT a threat to cybersecurity?
ChatGPT is a third-party system that absorbs the information it is given, so sharing confidential customer or partner information with it may violate agreements. It can also be used by hackers to write malware, produce phishing emails, manufacture fake news, and more. As a result, according to an analysis by Bleeping Computer, ChatGPT might pose more of a threat to cybersecurity than a benefit.
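For teams that still want the productivity benefits, one mitigation is to scrub obvious personal data from prompts before they leave the organization. The sketch below is a simplistic illustration with assumed regex patterns; a production setup would rely on a dedicated PII-detection library covering far more categories:

```python
import re

# Assumed, deliberately simple patterns; real PII detection covers far more.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace likely PII with placeholder tags before sending text to ChatGPT."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Draft a follow-up email to jane.doe@client.com, phone +1 (555) 123-4567."
print(redact(prompt))
# -> Draft a follow-up email to [EMAIL REDACTED], phone [PHONE REDACTED].
```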
What are the risks of using ChatGPT?
ChatGPT is an AI tool that can generate written material inspired by copyrighted property. There are risks associated with using it, such as data leakage, unauthorized disclosure of confidential information, and exposure of content an organization may be contractually or legally required to protect. Additionally, under OpenAI’s terms, output generated from a user’s input belongs to that user, but it cannot be used to develop competing AI models.
Can AI tools like ChatGPT be used to impersonate people?
Yes. AI tools like ChatGPT can pose a security risk because they can mimic human-like conversation convincingly enough to deceive individuals into sharing sensitive information or engaging in harmful activities. This impersonation technique can be used to carry out fraudulent actions, which highlights the need for caution when interacting with AI systems.
Does ChatGPT save user data?
Yes, ChatGPT saves data. Conversations and user input are stored as an ongoing conversation thread each time users communicate with the AI. This contributes to the AI’s accuracy improving over time, enabling it to better comprehend each inquiry and offer more personalized responses. All user data is stored in accordance with stringent international privacy regulations.