🦋 Glasswing #10 - How AI Impacts the Economics of Information Security
Erosion of the sum-of-efforts defence and the rise of economically viable personalized attacks.
The internet is used by over five billion people, but only a fraction of those with weak security measures experience cyberattacks each year. This discrepancy can be explained by the economics of information security, which challenges the traditional view of cybersecurity as a weakest-link problem.
Traditionally, cybersecurity has been viewed as a weakest-link problem, where the security of a system is only as strong as its weakest component. Under this view, if an individual user has weak security measures, an attacker will exploit this vulnerability.
In reality, when an attacker targets many users at scale, they face a sum-of-efforts defence rather than a weakest-link defence. This means that even if some users have very weak security, the attack will not be profitable, and thus not widely pursued, so long as the cost of attacking the whole population exceeds the expected gain. The sum-of-efforts defence explains why the majority of internet users with weak security measures remain unaffected by cyberattacks each year.
There are two kinds of attacks that can occur online: scalable attacks and targeted attacks. Scalable attacks affect many users at a low cost to the attacker, while targeted attacks are labor-intensive and focus on high-value targets. The distribution of value among internet users follows a power-law distribution with a long tail, meaning that a small number of users hold a disproportionately large amount of value.
Historically, scalable attacks have been more profitable in expectation than targeted attacks, which explains why so many individuals with weak security measures remain unaffected by cyberattacks each year. However, LLMs are changing this dynamic: they enable attackers to scale targeted attacks in a way that was previously considered uneconomical.
In this post, I explain how LLMs, and increasingly capable foundation models more broadly, change the original model of the economics of information security. I do so to further motivate the widely agreed-upon claim that there is an urgent need for proactive measures to protect users from emerging security threats as highly capable foundation models continue to progress.
I am not making any new claims that are not already well understood. Instead, I am providing a formalism for a widely agreed-upon claim, one that I found personally useful, and I hope you do as well!
Model of information security
In one model of the economics of information security (that I lean heavily on in this post), a population of users (Alice(i), for i=0,1, …N-1) is attacked by a population of attackers (Charles(j), for j=0,1, … M-1).
To date, N ≫ M.
The attacker's objective is to maximize their expected gain, which can be expressed as:

$$\mathbb{E}[G_j] = \big(1 - \Pr(SP)\big) \sum_{i} \Pr\big(e_i(k)\big)\, G_i \;-\; C_j(N, k)$$

where Pr(SP) is the probability of the service provider detecting and blocking the attack, Pr(e_i(k)) is the probability of user i succumbing to attack k, G_i is the attacker's gain from user i, C_j(N, k) is the cost of attacking N users with attack k, and the summation is over all attacked users.
On the other hand, the user's goal is to minimize their expected loss:

$$\mathbb{E}[L_i] = \big(1 - \Pr(SP)\big) \Pr\big(e_i(k)\big)\, L_i \;+\; e_i(k)$$

where Pr(SP) is the probability of the service provider saving Alice(i) from harm, Pr(e_i(k)) is the probability of Alice(i) succumbing to attack k given her effort e_i(k), L_i is the loss Alice(i) endures when she succumbs to an attack, and e_i(k) is the effort Alice(i) devotes to defending against attack k.
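To make the model concrete, here is a minimal sketch in Python of the two objectives above. The function names and all numeric values are illustrative placeholders of mine, not part of the original model:

```python
# Minimal sketch of the attacker/defender objectives above.
# All numeric values are illustrative placeholders.

def attacker_expected_gain(pr_sp, pr_succumb, gains, cost):
    """Expected gain for attacker Charles(j) using attack k against N users.

    pr_sp      -- probability the service provider detects and blocks the attack
    pr_succumb -- list of Pr(e_i(k)): probability user i succumbs to attack k
    gains      -- list of G_i: attacker's gain from user i
    cost       -- C_j(N, k): cost of attacking N users with attack k
    """
    expected_haul = sum(p * g for p, g in zip(pr_succumb, gains))
    return (1 - pr_sp) * expected_haul - cost

def user_expected_loss(pr_sp, pr_succumb, loss, effort):
    """Expected loss for user Alice(i) facing attack k.

    pr_sp      -- probability the service provider saves Alice(i) from harm
    pr_succumb -- Pr(e_i(k)): probability Alice(i) succumbs given her effort
    loss       -- L_i: loss Alice(i) endures if she succumbs
    effort     -- e_i(k): effort Alice(i) devotes to defending against attack k
    """
    return (1 - pr_sp) * pr_succumb * loss + effort

# Example: 1,000 users, each 0.1% likely to succumb, worth $100 each.
print(attacker_expected_gain(0.5, [0.001] * 1000, [100.0] * 1000, cost=30.0))  # 20.0
print(user_expected_loss(0.5, 0.001, loss=100.0, effort=1.0))                  # 1.05
```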
Types of attacks
Traditionally, attacks could be categorized as scalable or targeted. Scalable attacks, such as automated phishing campaigns, have costs that grow sub-linearly with the number of targets. In contrast, targeted attacks, like spear-phishing, have costs that increase proportionally with the number of targets.

Scalable attacks: $C_j(N, k) \propto N^{\alpha}$, with $\alpha < 1$

Targeted attacks: $C_j(N, k) \propto N$
The viability of an attack has largely depended on its profitability. For targeted attacks to be more profitable than scalable attacks, the following condition must hold:

$$N_t\, Y_t\, V_t > N_s\, Y_s\, V_s$$

Here, Y represents yield, N the number of targets, and V the value per target. Subscripts s and t denote scalable and targeted, respectively.
In practice, this condition is rarely met. The high costs of targeted attacks (from personalization) often outweigh the potential gains, making them economically unviable. As a result, the majority of attacks to date have been scalable attacks.
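As a rough illustration of why this condition rarely held, consider a sketch with hypothetical numbers (every figure below is an assumption chosen for illustration): a targeted campaign against ordinary users must pay for human effort per target, while a scalable campaign's costs stay nearly flat:

```python
# Hypothetical comparison of net gains for scalable vs. targeted attacks
# against ordinary users. Every number here is an illustrative assumption.

# Scalable attack: many targets, tiny yield, near-fixed cost.
N_s, Y_s, V_s = 10_000_000, 1e-6, 100.0      # targets, yield, $ per success
cost_scalable = 500.0                        # assumed near-fixed infra cost

# Targeted attack: fewer targets, better yield, but human effort per target.
N_t, Y_t, V_t = 1_000, 0.01, 100.0
hours_per_target = 1.0                       # assumed manual effort per victim
wage = 7.25                                  # $/hour (2010 US minimum wage)
cost_targeted = N_t * hours_per_target * wage

net_scalable = N_s * Y_s * V_s - cost_scalable   # $1,000 - $500   =  $500
net_targeted = N_t * Y_t * V_t - cost_targeted   # $1,000 - $7,250 = -$6,250
print(f"scalable: ${net_scalable:,.0f}, targeted: ${net_targeted:,.0f}")
```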
How does AI impact this model of information security?
AI is increasing the scalability and effectiveness of each attack (N_t and Y_t increase), but the most distinctive change is that AI makes previously uneconomical targeted attacks viable.
In an older economics of information security paper, the authors show that for a personalized attack to be economically viable, the additional cost of personalization per user ($\beta$) must be less than the incremental gain in yield multiplied by the expected value per user:

$$\beta < (Y'_s - Y_s)\, V_s$$

where $Y'_s$ is the yield with personalization, $Y_s$ is the yield without personalization, and $V_s$ is the expected value per user.
The example used in the original paper is the spam campaign documented by Kanich et al.: N_s = 350e6, Y_s = 28/N_s, V_s = $100. The authors show:
“This gives that, to be an improvement, the targeted campaign must have β < $0.00002 … Using the US minimum wage of $7.25 [in 2010] an hour this translates to 0.01 seconds effort per-user. This arithmetic is orders of magnitude away from making economic sense. While personalization would certainly improve yield, the economies of scale [in a] scalable attack model overwhelm any advantage this might bring. Thus, scalable attacks must be entirely automated with no per-user intervention whatever.”
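The quoted arithmetic is easy to reproduce. The sketch below uses only the paper's numbers, plus an assumed 3.5x yield improvement from personalization (my assumption, chosen because it recovers the ~$0.00002 threshold):

```python
# Reproduce the break-even arithmetic from the quoted passage.
N_s = 350e6                  # emails sent (Kanich et al.)
Y_s = 28 / N_s               # yield without personalization
V_s = 100.0                  # $ per success

# beta < (Y'_s - Y_s) * V_s. Assuming (hypothetically) that personalization
# multiplies yield by 3.5 recovers the paper's ~$0.00002 threshold.
Y_s_personalized = 3.5 * Y_s
beta_max = (Y_s_personalized - Y_s) * V_s
print(f"break-even cost per user: ${beta_max:.6f}")      # $0.000020

wage = 7.25                                  # $/hour (2010 US minimum wage)
seconds = beta_max / (wage / 3600)           # affordable effort per user
print(f"affordable effort per user: {seconds:.3f} s")    # ~0.010 s
```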
But with advanced AI agents, we know that this will become economically sensible!
Why does AI make this economically sensible?
AI-powered personalization can significantly enhance the effectiveness and scalability of targeted attacks. By employing thousands of AI agents to automate tasks typically executed in personalization attacks, such as Open Source Intelligence (OSINT) gathering, attackers can dramatically increase their success rates.
For example, a study found that simply addressing the target by name (e.g., "Mr. John") in an attack results in a 4.5x higher success rate on average. AI agents will likely be able to systematically collect and leverage all searchable information about an individual, such as data from Pipl, Webmi, or social media platforms, and tailor attacks accordingly at scale.
As the cost of running AI models continues to decrease with improvements in inference efficiency, personalized attacks will become increasingly economically viable. This combines the personalization and effectiveness of targeted attacks with the scalability and cost-efficiency of traditional scalable attacks, leading to a significant increase in the expected gain (G) for attackers.
The sum-of-efforts defence, which once protected users with weak security, may no longer be sufficient in the face of AI-powered attacks. Open-source LLMs can be served at 800 tokens per second, with the most capable open-source models priced at $0.65 / 1M tokens. While cybersecurity evaluations like CyberSecEval from Meta exist, these evaluations do not focus on social engineering attacks. However, studies have examined how LLMs can exacerbate social engineering attacks. For example, this paper (albeit done only in simulation and not on real-world data) found that "certain human qualities - specifically naivety, impulsiveness, and carelessness - are particularly susceptible to manipulation in a phishing context."
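To get a feel for how the serving price quoted above interacts with the break-even condition, here is a rough back-of-the-envelope sketch. The token count per message and the personalized yield are hypothetical assumptions of mine, not measured figures:

```python
# Back-of-the-envelope: LLM cost per personalized message vs. break-even beta.
# Token count and personalized yield are hypothetical assumptions.

price_per_token = 0.65 / 1e6       # $0.65 per 1M tokens, as quoted above
tokens_per_message = 1_000         # assumed tokens to draft one tailored message
llm_cost_per_user = price_per_token * tokens_per_message
print(f"LLM cost per personalized message: ${llm_cost_per_user:.5f}")  # $0.00065

# Break-even: beta < (Y'_s - Y_s) * V_s. With the 2010 spam baseline
# (28 successes in 350M emails at $100 each), suppose personalization lifts
# yield to 1 in 10,000 -- a hypothetical figure.
Y_s, V_s = 28 / 350e6, 100.0
Y_personalized = 1 / 10_000
beta_max = (Y_personalized - Y_s) * V_s
print(f"break-even threshold: ${beta_max:.5f}")                 # ~$0.01
print(f"personalization pays: {llm_cost_per_user < beta_max}")  # True
```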
The case against AI changing the economics of personalized attacks
While AI undoubtedly makes social deception easier (according to Deloitte, as of 2020, 91% of all cyberattacks begin with a phishing email), thereby helping attackers get past the initial stage of an attack (i.e., authentication), this does not clearly explain how AI makes it easier to move the money once the attacker has gained access to the account.
There is a lack of concrete proposals or explanations for how AI can help attackers circumvent Anti-Money Laundering (AML) preventions, such as limits on the dollar amount that can be sent in a given international transaction. To strengthen the argument that AI changes the economics of information security, I would like to see more research on how AI can assist in the movement of funds and the evasion of AML measures (e.g., some measures require a human to come in person to a bank to authorize a transaction, which an AI, to date, cannot do). One argument against this is that an attacker may be able to learn what is more likely to get blocked by the banks and what isn't, thereby increasing the probability of success again!
What can we do?
The original analysis of internet security economics identified several reasons why attacks might fail, including low average success rates, low average value per user, attacker collisions, and high exogenous fraud detection. AI has the potential to impact each of these factors: it enables scalable targeted attacks, which can increase the average success rate, and it could help attackers identify and prioritize high-value targets, increasing the average value per successful attack.
Research that I think is going to be especially important for strengthening defence systems is identity research. This includes reputation systems, verifiable identities (for both humans and AI agents), and access management protocols. I think that relying on tools like AI-based anomaly detection is a losing game (it always assumes that you have the most powerful model).
Additionally, I would like to see a social engineering evaluation built. By monitoring models' social engineering capabilities while simultaneously monitoring the cost of launching personalized attacks, we can have an early warning signal for when AI might be used for scalable targeted attacks in a potentially dangerous way if corresponding countermeasures are not improved.
Thanks for reading. Views are my own.
Thanks to Kim Laine, Eddy Lazzarin, Tobin South, and Thomas Dhome-Casanova for helpful conversations and feedback.