Is cybercrime the next beneficiary of the AI revolution?

Regardless of the industry in which you operate, you’re likely hearing the same story: artificial intelligence is going to revolutionise the way you work. While the term itself can seem vague or oblique, it’s true that a rapidly growing number of tools and services with some element of AI or machine learning integration are becoming part of how we work – and more importantly, we’re starting to see concrete, repeatable benefits arise from these tools.

Of course, as with just about any technological advance, AI can as easily be used for ill as for good. You have probably already read about – or used – OpenAI’s ChatGPT, a large language model chatbot that can produce reasonably convincing text on just about any subject imaginable. A public version of ChatGPT was only made available late last year, but BlackBerry has already put out a report suggesting that the majority of “IT and cybersecurity decision-makers” believe we are less than a year away from a successful cyberattack being credited to the service.¹

It might seem like a bit of a leap to go from a chatbot you can use to generate a polite-but-firm complaint about your neighbour’s overgrown hedges to a significant cybersecurity breach, but there is sound reasoning behind the fears. The human element is still a factor in over 80% of breaches², and – according to BlackBerry – the area in which most professionals expect ChatGPT to be used is in helping to “craft more believable and legitimate-sounding phishing emails” that target this human element.

This is rather more sophisticated than inventing a long-lost benevolent relative who can actually spell your name correctly; ChatGPT is able to take samples of real emails and mimic their style and tone, in an attempt to better impersonate individuals you already know and interact with on a daily basis. What’s more, it can generate a fake thread of previous emails to lend legitimacy to an interaction that, combined with basic email spoofing, can fool the reader into believing they are corresponding with a co-worker, client, or superior.³
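
To make the defensive side concrete, here is a minimal sketch of one automatable check against this kind of basic spoofing: comparing the domain in the visible From header against the envelope sender and the SPF/DKIM verdict stamped by an upstream mail gateway. The function name and the assumption that a gateway adds an Authentication-Results header are illustrative only – this is a starting point, not a substitute for properly configured SPF, DKIM, and DMARC policies.

    # Illustrative sketch (Python standard library only): flag messages
    # where the visible From domain disagrees with the envelope sender,
    # or where the upstream gateway's SPF/DKIM verdict isn't a pass.
    from email import message_from_string
    from email.utils import parseaddr

    def looks_spoofed(raw_message: str) -> bool:
        msg = message_from_string(raw_message)

        _, from_addr = parseaddr(msg.get("From", ""))
        _, return_path = parseaddr(msg.get("Return-Path", ""))
        from_domain = from_addr.rpartition("@")[2].lower()
        envelope_domain = return_path.rpartition("@")[2].lower()

        # A mismatch between the visible and envelope domains is a
        # classic tell for basic spoofing.
        if from_domain and envelope_domain and from_domain != envelope_domain:
            return True

        # Treat anything short of an explicit SPF and DKIM pass as suspect
        # (assumes the gateway stamps an Authentication-Results header).
        auth_results = msg.get("Authentication-Results", "").lower()
        return "spf=pass" not in auth_results or "dkim=pass" not in auth_results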

Phishing might be the most promising area of ChatGPT’s budding criminal career, but it’s not the only skill on its CV. The service can also be used to generate working code in any number of languages, and although safeguards are in place to stop the bot from knowingly generating malicious code, researchers have already spotted cybercriminals ‘tricking’ it into writing code that could be employed in a cyberattack.⁴ While the code generated so far has been rudimentary and has frequently required tweaks before it could be used in a real-world scenario, it seems inevitable that we’re nearing a future where functional, bespoke malicious code can be created instantly with little-to-no technical know-how. That means a greater number of bad actors, and the lower level of effort required per attack could mean that the more meagre returns offered by individuals and smaller organisations are targeted more regularly.

Naturally, ChatGPT isn’t the only AI-related advancement encroaching on the borders of cybersecurity. In January, Microsoft researchers released a paper⁵ on VALL-E, a technology designed to mimic a person’s voice using just three seconds of sample audio as the training model. While it’s still early days for this specific technology, even more basic versions are already fooling the voice authentication systems used by major banks.⁶ Similarly, the technology behind deepfakes – AI-assisted videos that convincingly map one individual’s face onto another’s body – is starting to be used to circumvent other biometric checks, as fidelity and real-time processing have improved enough to defeat many facial recognition systems.⁷

What does all of this mean for those of us on the other side, looking to protect ourselves against AI-assisted cyberattacks? The primary thing to keep in mind is that the standard, tried-and-tested best practices continue to be effective against most of these nascent approaches. The phishing potential of ChatGPT is real, but it still relies upon the same structural weaknesses that traditional phishing took advantage of. A thorough implementation of multi-factor authentication, zero-trust architecture, and staff training will continue to be effective at stymying AI-assisted phishing, regardless of how convincing it may be to the human element. Similarly, even technology that can reliably defeat biometric checks will stall once the system requires the user to also authenticate with a hardware key or similar.
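
To illustrate why multi-factor authentication blunts even a perfectly worded phishing email, the sketch below shows server-side verification of a time-based one-time password using the open-source pyotp library. It’s a deliberately minimal example – the login function and its parameters are hypothetical – but it captures the key point: a phished password alone never satisfies the check, because the six-digit code is derived from a per-user secret the attacker doesn’t hold, and rotates every 30 seconds.

    # Minimal sketch of server-side TOTP verification (one common MFA
    # factor), using the open-source pyotp library.
    import pyotp

    # Generated once at enrolment, stored server-side, and shared with the
    # user via a QR code for their authenticator app.
    user_secret = pyotp.random_base32()
    totp = pyotp.TOTP(user_secret)

    def login(password_ok: bool, submitted_code: str) -> bool:
        # Both factors must pass; valid_window=1 tolerates slight clock
        # drift between server and authenticator.
        return password_ok and totp.verify(submitted_code, valid_window=1)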

And, of course, AI and machine learning tools are not only in the hands of those who wish to do us harm. At ITGL we’re already well-versed in implementing solutions with AI/ML integrations, helping clients to proactively identify malware in encrypted traffic, spot insider threats, uncover suspicious user behaviour, and much more – both in cybersecurity and beyond. If you’re interested in hearing more about how our team can help keep you and your organisation safe, reach out to us at security@itgl.com.
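
For a flavour of what uncovering suspicious user behaviour can look like in practice, the sketch below trains scikit-learn’s IsolationForest on simple per-user activity features and flags outliers. It’s a generic illustration on synthetic data – the feature choices are hypothetical, and it doesn’t describe any specific deployment – but the same principle underpins many behavioural analytics tools.

    # Generic illustration of behavioural anomaly detection: train an
    # IsolationForest on per-user activity features, then flag outliers.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Synthetic "normal" behaviour: daytime logins, modest transfer
    # volumes, and very occasional failed logins.
    normal = np.column_stack([
        rng.normal(10, 2, 1000),    # login hour
        rng.normal(200, 50, 1000),  # MB transferred
        rng.poisson(0.2, 1000),     # failed login attempts
    ])

    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    # A 3 a.m. login with a huge transfer and repeated failures stands out.
    print(model.predict([[3, 2000, 6]]))  # -1 means "anomaly"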

Published by Cybersecurity Practice

March 9, 2023