WormGPT: AI tool designed to help cybercriminals will let hackers develop attacks on a large scale, experts warn

WormGPT, which takes its name from OpenAI's popular ChatGPT chatbot, was built to help hackers launch phishing attacks.


A ChatGPT-style tool designed to assist cybercriminals will let hackers develop sophisticated attacks on a significantly larger scale, researchers have warned.

The creators of WormGPT have branded it as an equivalent to the popular AI chatbot developed by OpenAI to produce human-like answers to questions.

But unlike ChatGPT, it does not have protections built in to stop people misusing the technology.

The chatbot was discovered by cybersecurity company SlashNext and reformed hacker Daniel Kelley, who found adverts for the malware on cybercrime forums.

While AI offers significant advances across healthcare and science, the ability of large AI models to process massive amounts of data very quickly means they can also help hackers develop ever more sophisticated attacks.

ChatGPT racked up 100 million users within two months of its launch last November.

Its success prompted other technology giants to release their own large language models, such as Google's Bard and Meta's LLaMA 2.


How WormGPT works

Hackers use WormGPT by taking out a subscription via the dark web.

They are then given access to a webpage that allows them to enter prompts and receive human-like replies.

The malware is designed mainly to generate phishing emails and carry out business email compromise attacks.

Business email compromise is a form of phishing in which a hacker attempts to trick employees into transferring money or revealing sensitive information.

Tests run by researchers found the chatbot could write a persuasive email from a company's chief executive asking an employee to pay a fraudulent invoice.

It draws on a wide range of existing text written by humans, meaning the messages it creates are more believable and can be used to impersonate a trusted person within a business's email system.



'This could facilitate attacks'

Mr Kelley said there is no direct risk to personal data, but added: "[WormGPT] does pose an indirect risk to personal data because it can be used to facilitate attacks attackers might want to launch, which would target personal data, like phishing or business email compromise attacks."

The researchers have recommended businesses strengthen their email verification systems, for example by flagging messages containing words such as "urgent" or "wire transfer", which often appear in these attacks.
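As a rough illustration of that recommendation, the Python sketch below flags emails containing such keywords. The phrase list and function name are hypothetical examples rather than part of the researchers' guidance, and a real email gateway would combine this kind of check with sender verification and other controls.

    import re

    # Hypothetical phrase list, based on the researchers' examples of terms
    # ("urgent", "wire transfer") that often appear in phishing and business
    # email compromise messages.
    SUSPICIOUS_PHRASES = ["urgent", "wire transfer", "payment request", "invoice"]

    def flag_suspicious_email(subject: str, body: str) -> list[str]:
        """Return any suspicious phrases found in an email's subject or body."""
        text = f"{subject}\n{body}".lower()
        return [
            phrase for phrase in SUSPICIOUS_PHRASES
            if re.search(r"\b" + re.escape(phrase) + r"\b", text)
        ]

    if __name__ == "__main__":
        matches = flag_suspicious_email(
            "Urgent: outstanding invoice",
            "Please arrange a wire transfer today to settle the attached invoice.",
        )
        if matches:
            print("Flag for manual review - matched phrases:", matches)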

They added that training staff to understand how hackers can use AI could also help employees identify these attacks.