The impact of artificial intelligence on cyber offence and defence


To make cyberspace more defensible—a goal championed by Columbia University and called for in the 2023 US National Cybersecurity Strategy—innovations must do more than strengthen defences: they must give defenders a sustained advantage over attackers.

Artificial intelligence has the potential to be a game-changer for defenders. As a recent Deloitte report put it, ‘AI can be a force multiplier, enabling security teams not only to respond faster than cyberattackers can move but also to anticipate these moves and act in advance’.

Yet the reverse is no less true: AI can enable cyberattackers to move faster than defenders can respond.

Even the best defensive advances have been quickly overtaken by greater leaps made by attackers, who have long had the systemic advantage in cyberspace. As security expert Dan Geer said in 2014, ‘Whether in detection, control or prevention, we are notching personal bests, but all the while the opposition is setting world records’. Most dishearteningly, many promising defences—such as ‘offensive security’ to crack passwords or scan networks for vulnerabilities—have ended up boosting attackers more than defenders.

For AI to avoid this fate, defenders, and those who fund new research and innovation, must remember that AI is not a magic wand granting lasting invulnerability. For defenders to win the AI arms race in cybersecurity, investments must be constantly refreshed and well targeted to stay ahead of threat actors’ own innovative uses of AI.

It’s hard to assess which side AI will assist more, the offence or the defence, since the two face fundamentally different tasks. But such apples-to-oranges comparisons can be clarified using two widely used frameworks.

The US National Institute of Standards and Technology’s Cybersecurity Framework can be used to highlight the many ways AI can help defence, while the Cyber Kill Chain framework, developed by Lockheed Martin, can do the same for AI’s uses by attackers.

This more structured approach can help technologists and policymakers target their investments and ensure that AI doesn’t follow the path of so many other technologies, nudging along defenders but turbocharging the offence.
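To see how such a mapping might be operationalised, here is a minimal sketch in Python (the enums and the tagged capability are hypothetical illustrations, not official tooling from either framework) that encodes the NIST functions and kill-chain phases so a candidate AI investment can be assessed against both sides at once:

```python
from enum import Enum

class NistFunction(Enum):
    """The five core functions of the NIST Cybersecurity Framework."""
    IDENTIFY = "Identify"
    PROTECT = "Protect"
    DETECT = "Detect"
    RESPOND = "Respond"
    RECOVER = "Recover"

class KillChainPhase(Enum):
    """The seven phases of Lockheed Martin's Cyber Kill Chain."""
    RECONNAISSANCE = 1
    WEAPONISATION = 2
    DELIVERY = 3
    EXPLOITATION = 4
    INSTALLATION = 5
    COMMAND_AND_CONTROL = 6
    ACTIONS_ON_OBJECTIVES = 7

# Hypothetical record: tag a candidate AI capability with the defensive
# function it strengthens and the offensive phases it might also boost.
capability = {
    "name": "automated vulnerability discovery",
    "helps_defence": [NistFunction.IDENTIFY],
    "helps_offence": [KillChainPhase.RECONNAISSANCE,
                      KillChainPhase.WEAPONISATION],
}

if capability["helps_offence"]:
    print(f"{capability['name']} is dual-use; weigh the offence-side "
          "gains before funding it")
```

Tagging each proposed investment this way makes the dual-use question explicit: a capability that appears in both lists deserves the extra scrutiny that past ‘offensive security’ tools did not get.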

Gains from AI for the defence

The NIST framework is an ideal structure for organising the many ways AI might aid defenders. Table 1, while not meant to be a complete list, serves as an introduction.

Table 1: Using the NIST framework to categorise AI advantages for defenders

Identify
– Rapid automated discovery of an organisation’s devices and software
– Easier mapping of an organisation’s supply chain and its possible vulnerabilities and points of failure
– Identification of software vulnerabilities at speed and scale

Protect
– Reduced demand for trained cyber defenders
– Reduced skill levels needed by cyber defenders
– Automatic patching of software and associated dependencies

Detect
– Rapid detection of attempted intrusions by examining data at scale and speed, with few false-positive alerts (a minimal sketch follows the table)

Respond
– Vastly improved tracking of adversary activity by rapidly scanning logs and other behaviour
– Automatic ejection of attackers, wherever found, at speed
– Faster reverse-engineering and de-obfuscation, to understand how malware works and so defeat and attribute it more quickly
– Substantial reduction in false-positive alerts requiring human follow-up

Recover
– Automatic rebuilding of compromised infrastructure and restoration of lost data with minimal downtime
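To make the ‘Detect’ row concrete, the toy sketch below (purely illustrative, with invented features and parameters; real deployments use far richer telemetry) trains a stock anomaly detector on normal login behaviour and flags an out-of-pattern session:

```python
# Toy illustration of machine-assisted detection: learn what normal
# authentication behaviour looks like, then flag outliers for analysts.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Features per login event: [hour_of_day, megabytes_transferred, failed_attempts]
normal_logins = np.column_stack([
    rng.normal(13, 2, 5000),   # logins cluster around business hours
    rng.normal(20, 5, 5000),   # typical session data volume
    rng.poisson(0.2, 5000),    # the occasional mistyped password
])
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_logins)

# A 3 a.m. login that moves 400 MB after six failed attempts should stand out.
suspect = np.array([[3, 400, 6]])
print(model.predict(suspect))  # -1 means flagged as anomalous
```

The point is not the specific model but the economics: a system like this triages millions of events so that scarce human analysts see only the handful worth investigating.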

Even though this is just a subset, the gains are substantial, especially if AI can drastically reduce the number of highly skilled defenders required. Unfortunately, most of the other gains are directly matched by corresponding gains for attackers.

Gains from AI for the offence

While the NIST framework is the right tool for the defence, Lockheed Martin’s Cyber Kill Chain is a better framework for assessing how AI might boost the attacker side of the arms race, an idea earlier proposed by American computer scientist Kathleen Fisher. (MITRE ATT&CK, another offence-themed framework, may be even better but is substantially more complex than can be easily examined in a short article.)

Table 2: Using the Cyber Kill Chain framework to categorise AI advantages for attackers

Reconnaissance
– Automatically find, purchase and use leaked and stolen credentials
– Automatically sort data to find all targets with a specific vulnerability (broad) or information on a precise target (deep; for example, an obscure posting that details a hard-coded password)
– Automatically identify supply-chain or other third-party relationships that might be exploited to reach the primary target
– Accelerate the scale and speed at which access brokers can identify and aggregate stolen credentials

Weaponisation
– Automatically discover software vulnerabilities and write proof-of-concept exploits, at speed and scale
– Substantially improve obfuscation, hindering reverse-engineering and attribution
– Automatically write superior phishing emails, such as by reading an executive’s extensive correspondence and mimicking their style
– Create deepfake audio and video that impersonate senior executives in order to trick employees

Delivery, exploitation and installation
– Realistically interact in parallel with defenders at many organisations to convince them to install malware or do the attacker’s bidding
– Generate false attack traffic to distract defenders

Command and control
– Achieve faster breakout through automated privilege escalation and lateral movement
– Automatically orchestrate vast numbers of compromised machines
– Allow implanted malware to act independently, without communicating back to human handlers for instructions

Actions on objectives
– Automatically exfiltrate data covertly, in patterns that are harder to detect
– Automatically process stolen data to identify, translate and summarise whatever meets specified collection requirements

Again, even though this is just a likely subset of the many ways AI will aid the offence, it demonstrates the advantage AI can bring to attackers, especially when capabilities across phases are combined.

Analysis and next steps

Unfortunately, general-purpose technologies have historically advantaged the offence, since defenders are spread out within and across organisations, while attackers are concentrated. To deliver their full benefit, defensive innovations usually need to be implemented in thousands of organisations (and sometimes by billions of people), whereas focused groups of attackers can incorporate offensive innovations with greater agility.

This is one reason why AI’s greatest help to the defence may be in reducing the number of cyber defenders required and the level of skills they need.

The US alone needs hundreds of thousands of additional cybersecurity workers—positions that are unlikely ever to be filled. Those who are hired will take years to build the necessary skills to take on advanced attackers. Humans, moreover, struggle with complex and diffuse tasks like defence at scale.

As more organisations move their computing and network tasks to the cloud, the major service providers will be well placed to concentrate AI-driven defences. The scale of AI might completely revolutionise defence, not just for the few that can afford advanced tools but for everyone on the internet.

The future is not written in stone but in code. Smart policies and investments now can tip the balance towards the defence in the AI arms race. For instance, the US Defense Advanced Research Projects Agency—responsible for developing technologies for military use—is making transformative investments, apparently having learned from experience.

In 2016, DARPA hosted the final round of its Cyber Grand Challenge to create ‘some of the most sophisticated automated bug-hunting systems ever developed’. But these computers were playing offence as well as defence. To win, they ‘needed to exploit vulnerabilities in their adversaries’ software’ and hack them. Autonomous offensive systems may be a natural investment for the military, but unfortunately would boost the offence’s advantages.

DARPA’s new experiment, the AI Cyber Challenge, is purely defensive—with no offensive capture-the-flag component—‘to leverage advances in AI to develop systems that can automatically secure the critical code that underpins daily life’. With nearly US$20 million of prize money, and backed by leading companies in AI (Anthropic, Google, Microsoft and OpenAI), this DARPA challenge could revolutionise software security.

These two challenges encapsulate the dynamics perfectly: technologists and policymakers need to invest so that defensive AIs are faster at finding vulnerabilities and patching them and their associated dependencies within an enterprise than offensive AIs are at discovering, weaponising and exploiting those vulnerabilities.
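The race can be made concrete with a toy model (entirely illustrative; the exponential waiting times and all parameters are invented) in which each vulnerability is won by whichever side finishes first:

```python
# Toy Monte Carlo model of the patch-versus-exploit race: the defence
# 'wins' a vulnerability if its patch lands before a working exploit.
import random

def share_patched_first(mean_patch_days: float, mean_exploit_days: float,
                        trials: int = 100_000) -> float:
    """Fraction of vulnerabilities patched before they are exploited,
    with both delays modelled as exponential waiting times."""
    wins = sum(
        random.expovariate(1 / mean_patch_days)
        < random.expovariate(1 / mean_exploit_days)
        for _ in range(trials)
    )
    return wins / trials

random.seed(1)
# Today: patching often takes weeks while exploitation takes days.
print(f"slow patching:     {share_patched_first(30, 7):.0%} patched first")
# If defensive AI cut mean time-to-patch to a day, the balance would flip.
print(f"AI-speed patching: {share_patched_first(1, 7):.0%} patched first")
```

Under these assumptions the outcome hinges almost entirely on relative speed, which is why the investments described above target time-to-patch rather than any single defensive product.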

With global spending on AI for cybersecurity forecast to increase by US$19 billion between 2021 and 2025, the opportunity to finally give the defence an advantage over the offence has rarely looked brighter.

