Is ChatGPT really good for business?
“Hi Fred,I’m in a meeting , but let me know if you got my text . Thanks [Boss’ name].”
If it’s not obvious, this is a phishing text message. There are a few telling signs giving it away: the odd spacing, and perhaps it’s unusual that Fred’s boss would send him a text at all.
Many scams are sent via email instead, where scammers can use personal details, like a boss’s name or a company’s tone, to build a more credible lure and get someone to click a link, download a file, or hand over sensitive information.
Phishing remains one of the top causes of cybersecurity breaches. That’s not because threat actors are nostalgic for old scams: it’s because phishing still works.
Troublingly, the launch of ChatGPT and other bots could make for smarter, more ubiquitous, more effective phishing lures.
What is ChatGPT?
For the last month, ChatGPT has been all over the news: teachers are worried about it, will.i.am referenced it at Davos, and there’s even advice on how to apply the technology to online dating.
ChatGPT is “a new cutting-edge A.I. chatbot” that one New York Times journalist calls “quite simply, the best artificial intelligence chatbot ever released to the general public.” Developed by OpenAI, it can generate human-like text.
While the tool may have a lot of kooky uses, some cybersecurity experts are worried about what ChatGPT will empower threat actors to do. CyberArk researchers reported using the bot to create “a new strand of polymorphic malware.”
Others are worried about a cruder—but still effective—use: deploying natural language capabilities to create more convincing phishing lures.
AI Goes Phishing
Human writers and chatbots each bring unique strengths to crafting an effective phishing lure.
Human writers understand social engineering and know how to craft messages that will appeal to and convince their intended targets. They can also draw on the context and cultural references that make a message more persuasive to its recipients. Chatbots understand and generate human language, which lets them mimic the style and tone of real organisations. They can also generate personalised messages at scale, which increases the odds of tricking a recipient.
Combine the two, and human writers can craft a phishing message that appeals to a target while chatbots generate and personalise it for each recipient at scale, making the lure more convincing and more likely to succeed.
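The worrying part is scale. As a minimal illustration, the Python sketch below does mail-merge-style personalisation over a template; every name and detail in it is invented, and the point is only that tailoring a message to hundreds of recipients takes almost no effort, so defenders can’t rely on spotting generic greetings.

```python
# Mail-merge-style personalisation. All names and details are
# hypothetical; the point is how little effort "at scale" takes.
template = "Hi {first_name}, I'm in a meeting, but I need you to review {doc} before {deadline}."

recipients = [
    {"first_name": "Fred", "doc": "the Q3 invoice", "deadline": "noon"},
    {"first_name": "Dana", "doc": "the vendor contract", "deadline": "5pm"},
]

for r in recipients:
    print(template.format(**r))
```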
Lower Barriers = More Scams
Anything that lowers the barrier to entry could be a significant threat. In time, threat actors could teach bots to leverage passwords or target email addresses exposed in data breaches.
“Although ChatGPT is an offline service, what I’m worried about is combining Internet access, automation, and AI to create persistent advanced attacks,” said RSA CISO Rob Hughes.
“We’ve seen how persistent prompt-bombing attacks eroded users’ attention,” Hughes continued. “With chatbots, you wouldn’t even need a spammer to craft the message any longer. You could write a script that says ‘Gain familiarity with Internet data and keep messaging so-and-so until they click on the link.’ Turning the operation over to a bot that doesn’t sign off, doesn’t give up, and works on hundreds of users simultaneously could really change the nature of phishing attacks by enabling easy-to-use distributed spear-phishing tools.”
Bots Controlling Bots
Chatbots, deepfake phone scams, or botnets: whether criminals are using an audio file, an email, or some other medium, the format and the tool behind it don’t really matter.
What matters is that we’re developing AI tools that might be on the verge of churning out new and progressively smarter threats faster than human criminals can produce them—and spreading those threats faster than cybersecurity personnel can respond.
The only way that organisations can keep up with the rate of change is to control bots by using bots: the same underlying principles that allow ChatGPT to write progressively better jokes or term papers can also train security systems to recognise and respond to suspicious behaviour.
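As a toy sketch of that defensive idea, the Python example below trains a simple text classifier to flag phishing-like wording. It assumes scikit-learn is installed, and the four example messages are invented stand-ins for a labelled corpus; a production system would learn from far larger datasets and richer signals (headers, URLs, sender reputation), but the principle, learning from examples, is the same.

```python
# A toy phishing-text classifier: TF-IDF features + logistic regression.
# The training messages are invented stand-ins for a real labelled corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Hi Fred, I'm in a meeting, click this link to confirm the payment",
    "Your account is locked, verify your password immediately",
    "Lunch tomorrow? The usual place at noon works for me",
    "Attached are the slides from this morning's review",
]
labels = [1, 1, 0, 0]  # 1 = phishing-like, 0 = benign

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

# Score a new message; on this toy data it should land on the phishing side.
print(model.predict(["Please verify your password at this link"]))
```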
Ultimately, these capabilities help security teams make faster, better-informed security decisions.
It’s Not Just Threat Actors
It’s not just malicious uses of AI that can pose a cybersecurity threat for organisations: deliberate enterprise uses of AI, including robotic process automation (RPA), can make far more decisions far faster than any individual human can keep up with.
Likewise, the perfect storm of multi-cloud environments and inadequate identity management is set to expose organisations to more risk, with identity failures projected to cause 75% of cloud security failures by 2023.
“Don’t focus on the headlines”
Always-on bots programmed to phish; RPA scripts running without anyone watching; lions, tigers, and bears: there’s always going to be another threat.
“Cybersecurity teams can’t always predict what the next threat is going to be or where it’s going to come from,” said Hughes.
“Don’t focus on the headlines. Focus on what you can control: educate your users, use multi-factor authentication, and move toward zero trust. They’re some of the best ways to protect yourself against the threats we know about today and stay protected against the ones that are coming tomorrow.”
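For teams acting on that advice, one control is easy to show in miniature: time-based one-time passwords (TOTP), a common form of multi-factor authentication. The sketch below uses the third-party pyotp library (pip install pyotp); the secret is generated on the fly purely for illustration.

```python
# Minimal TOTP flow with pyotp. In practice the secret is provisioned
# once per user (e.g. via a QR code) and stored server-side.
import pyotp

secret = pyotp.random_base32()  # illustrative only; never hard-code real secrets
totp = pyotp.TOTP(secret)

code = totp.now()               # what the user's authenticator app would display
print(totp.verify(code))        # True while the code is inside its time window
```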