Qualifying a "Cyber Arms Race"

Several recent pieces have looked at the prospects for a “cyberwarfare arms race” and a “cyber warfare gap” among the US, China, Russia, and other nations with an interest in offensive cyber capabilities and doctrines. Beyond the endemic refusal to distinguish between computer network attack and computer network exploitation, there is a larger problem looming over these pieces: metrics of effectiveness. It is important to get this right lest we end up with the cyber equivalent of what the “missile gap” was to the 1960 presidential election.

In conventional military assessment, forming a proper net assessment of a strategic competition involves more than lining Red and Blue forces up side by side; a number of different considerations come into play. First, there is the obvious data: military spending, plus quantitative and qualitative comparisons between respective platforms and units. Doctrine, training, and capabilities matter too, because they form the basis for thinking about force employment. The nature of the military competition is paramount: is it, say, a duel in the Persian Gulf or a battle to dominate the ultimate high ground of military space? What specific subsets of the competition (like the radar vs. counter-radar competition in World War II aerial warfare) are important? How does socio-bureaucratic behavior translate (or not) into military effectiveness? How do Red’s and Blue’s technical and economic bases count toward their long-term effectiveness? I could go on, but you probably get the point. And I’m sure that Andy Marshall’s folks are already on the case at the TS/SCI level.
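To see how little of this survives quantification, here is a deliberately crude sketch in Python of the bean-counting layer alone. The factors, weights, and scores are all invented for illustration; nothing here corresponds to any real assessment.

```python
# A crude sketch of the quantitative layer of a net assessment:
# weighted Red/Blue comparisons on measurable factors. All factors,
# weights, and scores are invented for illustration.

factors = {
    # factor: (weight, blue_score, red_score), scores on a 0-10 scale
    "military_spending": (0.25, 9, 6),
    "platform_quality":  (0.30, 8, 5),
    "platform_quantity": (0.20, 6, 8),
    "industrial_base":   (0.25, 7, 7),
}

def composite(side_index):
    """Weighted sum for one side (index 1 = Blue, index 2 = Red)."""
    return sum(f[0] * f[side_index] for f in factors.values())

print(f"Blue: {composite(1):.2f}  Red: {composite(2):.2f}")
```

Everything interesting in the paragraph above, doctrine, training, force employment, and the specific competition being contested, resists reduction to a stable scalar like this, and that is before the data problems peculiar to cyber are added.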

But as a matter of unclassified public policy debate, there are significant obstacles to building a cyber net assessment. The first is an absence of data. Krypt3ia has been blogging up a storm lately about the attribution problem and the lack of conceptual rigor inherent in our approaches:

First off, I would like to address inductive and deductive reasoning in this effort as one of the precepts core to these attribution attempts. By using both of these in a rigorous manner, we can attempt to shake out the truths to situations that may in fact seem clear on the face of them, but, once looked into further, may be discounted, or at the very least questioned. Much of this lately has been the hue and cry that APTs (Advanced Persistent Threats) are all pretty much originating from China. While many attacks have in fact been attributed to China, the evidence has not always been plainly clear nor, in many cases, has the evidence been anywhere in the open due to classification by the government and military. ….

It just so happens that there are many other nation states as well as other actors (private/corporate/individual) that may well be the culprits in many of the attacks we have seen over the years as well. Unfortunately, all too many times though, a flawed inductive or deductive process of determination has been employed by those seeking to lay the blame for attacks like GhostNet or Gh0st RAT etc. Such flawed thought processes can be shown by examples like the following:

This has pretty much been the mindset in the public and other areas where attacks in the recent past have been concerned. The attacks on Google for instance were alleged to have come from China; no proof was ever really given publicly to back this up, but, since the media and Google said so, well, they came from China then… Right? While the attack may have in fact come from China, there has been no solid evidence provided, but people are willing to make inductive leaps that this is indeed the truth of it and are willing to do so on other occasions where China may have had something to gain but proof is still lacking.

The same can be said with the use of deductive reasoning as well. We can deduce from circumstances that something has happened and where it may have originated (re: hacking) but, without using both the inductive method as well as the deductive with evidence to back this up, you end up just putting yourselves in the cave with the elephant trunk.

As Krypt3ia notes, there is a persistent idea that one can rely solely on the medium of the attack and ignore psychology, (geo)politics, and logic in combination with forensics. Attribution is only one symptom of a larger data problem around cyber exploitation and infiltration, one that can at times produce a misleading picture of adversary capabilities. Then add the more general problem that these activities largely take place at a covert and/or clandestine level. How are we really going to analytically build a picture of an adversary’s capabilities in reference to our own that is more than impressionistic? Talking about Chinese PLA or Russian cyberwarfare capabilities is going to be like discussions of Soviet paramilitary operations during the Cold War: lots of out-of-context readings of military doctrine, speculation, and occasional insights. I own many volumes from the 80s on Soviet special forces and military doctrine, and some of the most egregiously wrong sections dealt with Soviet strategic deception and covert warfare. Victor Suvorov’s corpus in particular stands out as an example to avoid.
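To make that concrete, here is a minimal sketch in Python of what attribution looks like when forensics is combined with priors drawn from psychology and (geo)politics rather than read off the medium alone. It is a toy Bayesian model: the actor categories, the probabilities, and the “Chinese-language toolkit” artifact are all invented for illustration.

```python
# Toy Bayesian attribution model. Every number here is invented;
# the point is structural, not empirical.

# Priors over candidate actors, shaped by (geo)politics, targeting,
# and opportunity: the context that forensics alone cannot supply.
priors = {
    "china_state":    0.30,
    "russia_state":   0.20,
    "criminal_group": 0.35,
    "insider":        0.15,
}

# P(evidence | actor): how likely each actor is to leave the observed
# artifact, say a Chinese-language toolkit. Toolkits leak and get
# reused, so other actors can plausibly leave the same trace.
likelihoods = {
    "china_state":    0.70,
    "russia_state":   0.15,
    "criminal_group": 0.25,
    "insider":        0.05,
}

# Bayes' rule: posterior is proportional to prior * likelihood.
unnormalized = {a: priors[a] * likelihoods[a] for a in priors}
total = sum(unnormalized.values())
posterior = {a: p / total for a, p in unnormalized.items()}

for actor, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{actor:15s} {p:.2f}")
```

With these made-up numbers the posterior on china_state comes out around 0.63: strengthened by the forensics, but nowhere near the certainty that “the tools were Chinese, so China did it” implies. The inductive leap Krypt3ia criticizes amounts to silently setting every other actor’s likelihood to zero.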

Finally, there is a need in these assessments to take into account some of the unique dynamics of cyber weapons. Here, I will build off Thomas Rid’s recent discussion:

  • Lightning-fast acquisition and development cycles and processes (at least compared to the F-35).
  • Vulnerabilities must pre-exist; they cannot be created. The weapon uses the system itself to create the destructive effect. At a banal level there are tons of existing vulnerabilities, but the opportunities lessen considerably once we get to those that matter.
  • Weapons must be customized for specific vulnerabilities. The tradeoff is that while individual weapons cannot be re-used and take considerable effort to develop, they can be generated faster than most conventional weapons (see the toy model after this list).
  • Weapons require a degree of intelligence preparation that suggests, absent heroic levels of social engineering, many threats to sensitive systems will be insider-based.
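Taken together, these dynamics suggest a cyber arsenal behaves less like a durable stock of platforms than like a perishable inventory. Below is a minimal sketch in Python of that intuition; the development rate, patch probability, and usage rate are invented numbers, there only to illustrate how the dynamics interact.

```python
# Toy inventory model of a stockpile of vulnerability-specific
# exploits. All parameters are invented; this illustrates the
# dynamics listed above, not any real arsenal.

dev_rate = 4        # new working exploits developed per month
patch_prob = 0.10   # monthly chance an unused exploit is patched away
uses_per_month = 2  # exploits employed (assumed disclosed and burned)

stock = 10.0
for month in range(1, 25):
    stock *= (1 - patch_prob)   # silent decay: vendors patch
    stock += dev_rate           # fast development replenishes
    stock -= uses_per_month     # employment burns weapons
    print(f"month {month:2d}: ~{stock:4.1f} usable exploits")
```

At these made-up rates the stockpile converges to (dev_rate - uses_per_month) / patch_prob, about 20 usable weapons, regardless of where it starts. Unlike an air wing, the arsenal is a flow rather than a durable stock, which is one more reason simple bean counting misleads here.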

Note that this is an explicitly military overview, and in such a context cyber will be only one part of a mesh of larger capabilities. Most cyber threats occur within the framework of cyber conflict, a larger and more holistic category. And as Sam Liles notes in his latest entry, information security is also very different conceptually and practically from war.

About Adam Elkus

Adam Elkus is a PhD student in Computational Social Science at George Mason University. He writes on national security, technology, and strategy at CTOvision.com, the new analysis-focused Analyst One, War on the Rocks, and his own blog Rethinking Security. His work has been published in The Atlantic, the Journal of Military Operations, Foreign Policy, the West Point Combating Terrorism Center’s Sentinel, and other publications.

Comments

  1. Adam, @aelkus thanks much for this. Great context. I have tried hard to think of metrics and have watched as DoD and others have struggled with this too. Some things are clearly measurable and I’m a huge advocate for things like the SANS consensus audit guidelines for those types of technical audits. Similar evals can be done on adversary networks. But I have never seen good metrics on mission impact of cyber operations or military readiness. I’m starting to think the best we can do is make well reasoned, informed assumptions/assessments on readiness. And I’m getting the same general conclusion on cyber net assessments. 
     
    For example, regarding attribution, I would like to point out that at a high level, there really is not an attribution problem. I can attribute any attack, I mean 100% of attacks, with 100% confidence that I have made an attribution. Of course I know we always imply that attribution needs to be accurate. But I am trying to make the point here that decision-makers should know you can make decisions based on assumptions. And if we have not been able to think through this well enough, we might have to come to grips with that. And maybe we should make our adversaries understand this as well. Maybe we should make it policy that we will make any attribution we want to make, and those we attribute attacks to will pay the consequences.
     
    Anecdote on attribution from the movies: ”I’m a superstitious man. And if some unlucky accident should befall him, if he should be shot in the head by a police officer, or if he should hang himself in his jail cell, or if he’s struck by a bolt of lightning… *then I’m going to blame some of the people in this room*… and that, I do not forgive. But, that aside, let’s say that I swear, on the souls of my grandchildren, that I will not be the one to break the peace we have made here today.”
