
Com TW NOw News 2024

Revealing the Reality of WormGPT in Cybersecurity

COMMENTARY

WormGPT, ChatGPT’s Dark Web impersonator that quickly generates convincing phishing emails, malware, and malicious hacker recommendations, is worming its way into consumers’ consciousness and fears.

Fortunately, many of these concerns can be addressed.

As someone who has researched the back-end functionalities of WormGPT, I can say that much of the discourse surrounding this sinister tool is fueled by a common misunderstanding of AI-based hacking applications.

Currently, WormGPT chatbot assistants are largely uncensored GPT models with some prompt engineering — far less intimidating and advanced than they may be perceived. But that's not to say that these and other tools like them can't become far more threatening if left unchecked.

Therefore, it is important that cybersecurity stakeholders understand the difference between WormGPT's current capabilities and the threats it could pose as it evolves.

Getting the facts straight

A wave of questions from concerned customers set my investigation in motion. Initial Google searches led me to a mix of online tools, paid services and open source repositories, but the information on these was often fragmented and misleading.

Using various anonymity measures, I took my research to the Dark Web, where I found several variations of WormGPT across various Dark Web indexes, giving a much clearer picture of their usefulness. Each of the services offers a sleek and engaging user interface with preset interactions, using OpenAI's API or another uncensored large language model (LLM) running on a paid server.

However, their outward complexity is simply an elaborate ruse. Upon closer inspection, I discovered that WormGPT tools lack robust backend capabilities, meaning they are prone to crashes and exhibit high latency issues during peak user demand. At their core, these tools are merely sophisticated interfaces for basic AI interactions, not black-hat behemoths as they are touted to be.

The potential risks for the future

That said, gradual progress in generative AI (GenAI) technologies points to a future where AI can autonomously perform complex tasks on behalf of malicious actors.

It is no longer far-fetched to imagine advanced autonomous agents that can carry out cyberattacks with minimal human supervision: AI programs capable of leveraging chain-of-thought reasoning to enhance their real-time agility in carrying out cybercriminal tasks.

Cyberattack automation is well within the realm of possibility, thanks to the availability of advanced GenAI models. For example, during my research into WormGPT-like tools, I discovered that you can easily operationalize an uncensored model on freely available code-sharing platforms such as Google Colab.

This accessibility suggests that even individuals with minimal technical expertise would be able to anonymously launch sophisticated attacks. And as GenAI agents become increasingly adept at mimicking legitimate user behavior, standard security measures such as conventional regular expression-based filtering and metadata analysis become less effective at detecting the syntax of AI-borne cyberthreats.
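To see why static filtering struggles here, consider a minimal sketch of a conventional regex-based filter. The signature phrases below are hypothetical examples of the kind such filters match on; a template email trips the filter, while an LLM-reworded message with the same intent does not.

```python
import re

# Hypothetical static signatures of the kind a conventional filter relies on:
# fixed phrases that historically flagged template-based phishing.
PHISHING_PATTERNS = [
    re.compile(r"verify your account", re.IGNORECASE),
    re.compile(r"urgent action required", re.IGNORECASE),
    re.compile(r"click (here|the link) below", re.IGNORECASE),
]

def looks_like_phishing(text: str) -> bool:
    """Flag a message if it matches any static signature."""
    return any(p.search(text) for p in PHISHING_PATTERNS)

# A stock template trips the filter...
template = "URGENT ACTION REQUIRED: verify your account now."
# ...but an LLM-reworded message with the same intent slips through.
reworded = ("We noticed unusual activity on your profile; please confirm "
            "your sign-in details at your earliest convenience.")

print(looks_like_phishing(template))   # True
print(looks_like_phishing(reworded))   # False
```

Because a generative model can produce endless novel phrasings of the same lure, no fixed signature list can keep up — which is the core of the detection problem described above.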

Hypothetical attack scenario

Consider a scenario that illustrates how these AI-driven mechanisms could autonomously navigate through various stages of an advanced cyberattack at the request of an amateur hacker.

First, the AI could perform reconnaissance, scraping publicly available data about target companies from search engines, social media, and other open sources, or using the knowledge already embedded in the LLM. From there, it could venture out onto the Dark Web to gather additional ammunition, such as sensitive information, leaked email threads, or other compromised user data.

Using this information, the AI application could then initiate the infiltration phase, launching phishing campaigns against known company email addresses, scanning for vulnerable servers or open network ports, and attempting to crack the entry points.

Armed with the information it collects, the AI tool could begin launching Business Email Compromise (BEC) campaigns, spreading ransomware, or stealing sensitive data with complete autonomy. During this exploitation process, it could continuously refine its social engineering methods, develop new hacking tools, and adapt to countermeasures.

Using a retrieval-augmented generation (RAG) system, the AI tool could then update its strategies based on the collected data and report back to the attack orchestrator in real time. Additionally, RAG lets the AI keep track of conversations with different entities, allowing agents to build databases of sensitive information and manage multiple attack fronts simultaneously, operating as an entire department of attackers.

Raising the shield

The potential for WormGPT to become an even more dangerous tool is not far off. Companies may want to prepare viable AI-driven mitigation strategies now.

For example, organizations can invest in developing AI-driven defenses designed to predict and neutralize incoming attacks in advance. They can improve the accuracy of real-time anomaly detection systems and work to improve cybersecurity literacy at every level of the organization. A team of skilled incident response analysts will prove even more essential in the future.
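As a concrete illustration of the anomaly detection mentioned above, here is a minimal sketch: it flags observations that fall far outside a historical baseline. The metric (hourly login-failure counts) and the numbers are hypothetical; production systems would use far richer features and models.

```python
import statistics

def anomaly_flags(baseline, observed, threshold=3.0):
    """Flag observations more than `threshold` standard deviations
    from the baseline mean -- a minimal stand-in for the real-time
    anomaly detection systems described in the text."""
    mean = statistics.fmean(baseline)
    stdev = statistics.stdev(baseline)
    return [(x, abs(x - mean) / stdev > threshold) for x in observed]

# Hypothetical hourly login-failure counts: a stable baseline, then a spike
# that might indicate an automated credential-stuffing burst.
baseline = [4, 5, 6, 5, 4, 6, 5, 5]
observed = [5, 6, 48]

for value, is_anomaly in anomaly_flags(baseline, observed):
    print(value, "ANOMALY" if is_anomaly else "ok")
```

The point of the sketch is the design principle: rather than matching known attack signatures, the defense models normal behavior and alerts on deviations, which still works when the attack content itself is AI-generated and novel.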

While WormGPT tools may not be a major concern now, organizations should not let their guard down. AI-driven threats of this magnitude require a swift, immediate response.

As they say: the early bird catches the worm.