We’ve had quite an interesting sequence of recent events that shows how important and enduring the issue of federal IT procurement is likely to be, as concern about the need to move to some sort of cloud-based architecture spreads throughout the government. In the past couple of weeks, the intelligence community (IC) has published its intent to create a substantial initiative to evolve how it uses cloud-based architecture for its mission. The IC was a pioneer here: in 2013, it adopted a single-cloud approach. It has had quite a successful run with the cloud and has impressed other parts of the government. Now, however, the IC wants to move to a more hybrid cloud approach – a development that seems to parallel commercial practice, where medium- and large-sized companies use between five and a dozen different cloud service providers.
The cloud strategy the Department of Defense (DoD) has released, with its single-cloud solution for its Joint Enterprise Defense Infrastructure (JEDI), gives the impression that DoD started this process with a vendor in mind and then developed a strategy afterwards. The vendor in question is Amazon Web Services (AWS), the IC’s choice in 2013.
What you see in the DoD strategy is a valiant attempt to modernize and integrate the IT infrastructure for DoD as a whole through cloud technology – yet without fully understanding all of the challenges, risks and opportunities involved. The hope seems to be that by choosing a single operator – AWS, a very competent and very experienced one – DoD could leave it to that operator to solve all of the problems the department didn’t want to grapple with beforehand. In fact, there were many problems and issues DoD should have thought through before starting this process, and it is now beginning to realize it needs to go back and rethink some of them. So for DoD: an A for effort, but a much lower grade for results and for method.
Why hybrid will likely prevail
The second law of thermodynamics, which governs how all engines work, says that entropy must increase unless you’re prepared to expend energy. Entropy is disorder in systems. The second law has served physics quite well, and there seems to be an analogous law for software and systems: uniformity will erode unless you’re prepared to expend energy or resources. So unless it’s a closed system, there will be a tendency toward a diversity of platforms and a diversity of applications.
And it shouldn’t be surprising; people tend to take the shortest path, so they would use the application that is most available, or they would build an application for the environment that’s easiest and best suited. This is going to be a reality for the future.
And if you look back, it has always been a reality for our past. Legislating for Windows platforms didn’t work out all that well. Legislating for IBM systems didn’t work out. It’s not that the initial push for uniformity produced bad systems; but eventually, there was heterogeneity, and we had to accommodate it.
So that would suggest that hybrid clouds are the natural order of things because there is and will be a diversity of services. And even if DoD had sufficient control to create a uniform cloud, to be effective, especially if DoD becomes very cloud-centric or computing-centric, its system is going to have to interact with international partners over whom DoD doesn’t have authority and an industry base over which DoD likewise doesn’t have authority.
So there is going to be a natural heterogeneity. And that says that, although letting a single-cloud contract might be an interesting first step, it would be unwise to think that we wouldn’t evolve to some kind of a hybrid cloud.
The new intelligence community solicitation talks about a sequence of industry surveys taken to elicit the technical approaches industry would undertake to meet intelligence community needs. And this would be extended over a period of more than a year, which would then inform the process of running the solicitation. The documents that were distributed to the industry in early April indicate that the process would enable the intelligence community to capture and anticipate the pace of technological change as well as to recognize where certain providers would be able to offer better services for specific mission applications.
One of the issues we need to consider is the problem the Defense Department has, generally, with adopting new technology that originates primarily in the civil sector. The Defense Innovation Unit Experimental that Secretary Carter set up a couple of years ago was one attempt to address it. But one of the things about IT that makes it hard to square with DoD culture is the custom in software of selling products the producer knows are defective. The relationship between producer and consumer is such that the consumer finds the problems, and the producer makes the fixes and evolves the software in ways that become genuinely useful to users. By doing this at scale, producers achieve a very short development cycle compared to the customs of the Department of Defense, where there is a very high bar for reliability, a very high bar for technical maturity and so forth.
The experience the national security community in general has had with acquiring cloud services reflects that kind of difficulty. The way the intelligence community has proceeded has some interesting features worth studying – it has gone from a monoculture to a recognition that a hybrid architecture is necessary. In fact, DoD clearly has a similar view, in the sense that there are already 500 cloud service providers offering these services to DoD; what’s missing is the integration, which is the aspiration DoD has not yet been able to realize.
Security and compliance
One purported security advantage of a cloud monoculture is related to so-called attack surfaces — a metaphor for the surface area that’s available to the attacker. Think of a wall — a longer wall has a lot more places you might try to penetrate. If we were to build a hybrid cloud, then there would be lots more features, lots of duplication of function in different pieces of software, because each of the cloud providers would have had to build the same functionality. It only takes one vulnerability to get in, and so one argument against having a hybrid or a heterogeneous cloud is that it has an increased attack surface.
A monoculture would seem not to have this problem. But it has a different problem, which is that it’s much simpler, so that your attacker has an easier time analyzing it. Your attacker also has an easier time focusing attention on it. So if you wanted to do a supply chain attack, it would be easier because there are clear destinations for the things you have to compromise. So there’s this funny sort of tradeoff, and nobody has yet proffered an adequate analysis of what the terms of the tradeoff should be.
If you use metaphors, you can think of a hybrid cloud in two ways. One is as something with an inevitably vulnerable “weakest link,” which means you have to design for internal resilience – that is, if one piece is compromised, it should be hard to leapfrog into other pieces. The other is as an opportunity to inherit the best practices from any one of the components into the others. Both of those are really management issues; they concern how the clouds are composed. And there is a set of interesting design choices you have to make.
But you also have to make them in a monoculture, because there will be diverse components, different applications and so on. You want to make sure that if one of them gets compromised, it’s not possible to leapfrog into another. So it would seem that the costs of being hybrid over mono are not significant. The same sets of problems exist, and you need to confront them.
Another issue DoD is trying to get at with its movement into a disciplined defense-wide cloud environment is the issue of compliance: what do we require of these software systems and hardware systems to be trustworthy enough for use by DoD?
We have a really bad track record in the industry and in government of defining compliance for IT systems, dating back to the Orange Book of 1985, “Department of Defense Trusted Computer System Evaluation Criteria,” our first exercise in that.
There are two risks. One is that you require people to do things that don’t actually solve the real problem, and that may have the effect of causing players to remove themselves from the marketplace; the other is that compliance prescriptions impede innovation and progress. And certainly, when we’re at the beginning of the lifecycle of a technology, as we are for clouds, you don’t want to be impeding progress. And because security in a cloud is particularly problematic, there is likely to be a fair bit of innovation in security over the next five to 10 years.
DoD has the challenge that it needs to get onto a cloud soon, and what’s out there is probably not trustworthy enough for all of its needs. It also needs to figure out what the prescription is so that any cloud provider can contribute – to set the height of the bar, so to speak.
DoD hasn’t done a good job of this, but let’s be clear, nobody has. It’s a really hard problem. It’s not because DoD is incompetent; it’s because it’s a technically difficult challenge.
A dimension of technology acquisition that has changed just in the last year or less, in response to concerns about the security of the supply chain, has been the aspiration of DoD to require vendors to deliver their products uncompromised. “Deliver Uncompromised” adds a new level of burden to vendors.
We’ve seen the consequences already of a very limited imposition of cybersecurity discipline, in which prime contractors are now required to obtain an affirmation from their cascade of subcontractors that they have complied with the cybersecurity regulations DoD has promulgated.
That single contractual provision caused 20 percent of the DoD industrial base to drop out of the market. This is a problem for smaller companies, for which DoD work is only a small part of their business. They are generally not enthusiastic about the combination of a small business base, large liability and high cost to execute.
There’s a reorganization widely anticipated at the Defense Security Service – an expanded mandate to do government-wide personnel vetting, not just DoD’s, and also responsibilities for some form of surveillance over the supply chain because of this “deliver uncompromised” aspiration. So you have the underlying IT infrastructure that is going to be cloud-based, probably hybrid cloud-based, but interacting with the entire organism, from the pointy end of the spear with the operating forces to the small mom-and-pop shops that are delivering defense products and services to the department. So it’s a much more complex ecosystem that’s going to be integrated by this cloud-based architecture.
A number of smaller contractors are going to find themselves increasingly frozen out by the kind of requirements this new IT infrastructure will impose. This poses a risk to the industrial base, and it wasn’t quite captured in the recent White House document, which covered some of the more industrial aspects of the defense industrial base rather than the technology matters discussed here. But there’s no doubt that the management problems associated with this infrastructure, when the principal threat to the industrial base is now coming from cyber operations by adversaries, are much more difficult.
There was an interesting vignette in one of the defense trade journals recently on an exercise taking place between artillery units of the U.S. Marine Corps and their Australian counterpart. And some piece of information that was essential to the collaborative and integrated operation of the two countries’ artillery battalions was missing because of an oversight. The U.S. Marines were not allowed to share the data with their Australian counterparts. They went back and got a waiver on it so they were able to go share the data, but you can obviously see that, in a tactical situation, this would be totally destructive of the ability of allies to work together. And that’s why, with regard to compliance, we need the ability to manage the process so that we are able to interoperate. Getting this right will have a tremendous benefit of avoiding the kind of problems we’ve had in the Kosovo air campaign or the air campaign against Libya in 2011, where the allies really can’t interoperate because they’re not able to share data in a way that’s relevant for tactical purposes, even among allies that have the closest bonds of trust and collaboration.
All of these things are starting to get poured into this stew of data sharing. Eisenhower once said that the way to solve a hard problem is to make it bigger — you embed the hard problem in a much larger context, in which case the difficult but now smaller problem sometimes becomes easier to manage. If we start to recognize the tremendous benefits that can accrue to an ability to share data more routinely, it will enable us to solve some of these problems related to compliance.
Modern defense procurement has for the most part inverted the 20th Century paradigm, in which most of the cost of a defense system was in its procurement, not its development. Now, most of the costs are in the development, not the acquisition of the product. Many years ago, we were buying about 1,500 aircraft a year; now we’re buying a couple hundred aircraft per year because these systems are rendered much more capable by their interaction with the sensor network and so forth. And so the industrial aspects of it are changing, and that’s going to be reflected in the IT ecosystem that supports it.
Last spring, researchers announced a way to compromise any processor that uses speculative techniques to increase the performance of memory references. If that sounds technical, what it means is that processors from essentially every semiconductor manufacturer are vulnerable. In the press, the attacks were called Spectre and Meltdown.
So all the cloud providers have processors with this property, which means all cloud providers are currently vulnerable to this kind of attack. And it looks like a very hard thing to change; we don’t know how to build fast processors that don’t have this vulnerability. So whatever we adopt needs agility built in, so that as these problems are discovered we can deploy fixes, or at least interim mitigations. That is going to have to be an element of any compliance definition. And yet the DoD cloud strategy document doesn’t talk about it. Everybody is still in thrall to the traditional DoD viewpoint on procurement: you buy the fighter, and then you hop in and fly the fighter.
But buying a computing system is only the beginning. We need to count on changing it frequently as attacks get discovered and certainly as we want to add functionality.
One of the aspirations of cloud architecture is to be able to operate at the edge. The NORTHCOM commander, who has responsibility for the defense of U.S. territory, has noted that the homeland is no longer a sanctuary. And so there’s a need to be able to operate on a global basis wherever we are. The issues here aren’t completely new. But in the data-rich environment we have, it’s not something that we’ve been able to easily do, as our experience in Afghanistan and Iraq has shown.
This is really a national problem, not just a problem for DoD. The financial services industry and all of the basic infrastructures are subject to these vulnerabilities. And one problem continues to make it difficult to resolve: the sensitivity of information about the threat. There’s a consensus that you simply can’t protect the assets with some sort of perimeter defense. It’s a much more complicated thing.
DoD is going to let a contract for cloud services soon. Presumably, whoever gets this contract is going to be told about threat information, because if DoD understands the threat really well and doesn’t tell its partner, the chances are reduced that the partner will be able to defend against attacks. And it’s going to be tempting to treat the partner as privileged. But consider two premises: first, that the entire nation benefits if all cloud providers are more trustworthy – because, after all, DoD is not responsible for everything important to our survival – and second, that DoD itself benefits if its non-contractors are more trustworthy, both because the industrial base benefits and because DoD then has a bigger set of choices if it ever wants to add other clouds. If you accept both, it’s in DoD’s interest not to treat this first contractor as special, and instead to share detailed threat information with as many cloud providers as are interested. They will use it to secure their clouds in turn; they may even do so in ways that innovate, giving us a broader space of defensive innovation.
Of course, a cloud provider is only going to respond to this threat information with investments if it believes it has some incentive. And the incentive is business from DoD. Whatever happens, it’s important for DoD not to be parsimonious with threat information. The whole ecosystem will benefit if DoD at least thinks in terms of creating a very healthy cloud landscape. If done right, DoD could use this as an opportunity to raise the standard of cloud security for all operators, not just the ones it will be working with – an effect felt all the way through the system.
It’s difficult to do it in the classic way in which threat information is shared because of the risk posed to sources and methods, and the way in which information on cyber vulnerabilities is collected would jeopardize the sources and methods. However, there may be other ways in which the threat data can be shared. Instead of specifying how the data was collected to identify the threat, DoD could communicate more through prescriptive recommendations as to how to behave because a certain threat is out there.
There’s been a lot of talk about the promise of blockchain technology. But more generally, blockchains are today’s implementation of an age-old technology called the ledger, where you update in ink. We have been building systems that way forever, even in the paper days, so there is nothing radically new there. Using it as a currency is radically new, but that has spun off on its own separate thread. DoD’s use, and most enterprises’ use, is simply as a ledger – a way of recording facts and sequences of events.
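More concretely, the ledger idea can be sketched in a few lines of Python. This is a hypothetical illustration of the hash-chaining principle, not any particular blockchain or DoD system: each record carries the hash of the record before it, so past entries cannot be quietly rewritten without breaking the chain.

```python
import hashlib
import json

def add_entry(ledger, fact):
    """Append a fact, chaining it to the hash of the previous entry."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    payload = json.dumps({"fact": fact, "prev_hash": prev_hash}, sort_keys=True)
    ledger.append({
        "fact": fact,
        "prev_hash": prev_hash,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    })

def verify(ledger):
    """Recompute every hash; tampering with any past entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in ledger:
        payload = json.dumps(
            {"fact": entry["fact"], "prev_hash": prev_hash}, sort_keys=True)
        expected = hashlib.sha256(payload.encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

ledger = []
add_entry(ledger, "order issued 2019-04-01")
add_entry(ledger, "order acknowledged 2019-04-02")
assert verify(ledger)

ledger[0]["fact"] = "order issued 2019-03-31"  # attempt to rewrite history
assert not verify(ledger)
```

The "update in ink" property comes entirely from the chained hashes: changing an old fact invalidates its own hash and, transitively, every hash after it.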
People look for magic bullets from the point of view of cybersecurity. You know, if we just have this in play – if we just have blockchain or if we just have quantum-resistant algorithms, then that takes care of the problem. It’s a much more complicated evolutionary process.
But we don’t have the luxury of building systems like ledgers that only have data at rest. You also have to consider the very real vulnerability of the communications links over which information travels when it’s in motion. There seems to be great benefit in sharing data as a way of allowing separate systems to cooperate in solving a problem. The Navy has embarked on a new approach to dividing the battlespace up across a number of ships. You can imagine a naive division in which every ship worries about the area around itself, or a more sophisticated one in which certain ships specialize in certain functions and cooperate with each other: one has sensors, one shoots, and so on. This division of labor may be a way to get much cheaper solutions to many of our problems. But it only works if you can deal with data in motion.
As the millennium approached, there was a big fuss about rewriting our software to deal with the so-called Y2K problem. There was a concern that much software wouldn’t work well after New Year’s Eve. It turned out to be nothing – or we must have done a great job, because nothing failed. You can imagine all the rewriting that was done. On the other hand, the way to look at it is all that stuff needed to be rewritten anyway, and Y2K was as good an excuse as any to rewrite it. Blockchain may provide a similar excuse. There is a lot of crusty distributed software out there, and if blockchain is the sexy name that’s going to prompt companies to redo it and update it, then we’ll all be better off. It’ll be much cleaner software.
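The underlying Y2K bug is easy to reconstruct. A minimal sketch with hypothetical data: software that stored years as two digits could no longer order dates correctly once the century rolled over.

```python
def two_digit_year_sort(years):
    # Legacy convention: keep only the last two digits of the year,
    # so 2000 is stored as 00 and sorts before 98 and 99.
    return sorted(years, key=lambda year: year % 100)

# Across the century boundary the ordering breaks:
print(two_digit_year_sort([1998, 1999, 2000, 2001]))
# -> [2000, 2001, 1998, 1999], i.e., the year 2000 sorts before 1998
```

Any comparison, interval, or expiration check built on the two-digit convention failed the same way, which is why so much old code had to be found and rewritten.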
Making the hard problem harder
The aspiration reflected in the DoD strategy document is less to have a single vendor for all applications than to bring all of the cloud service providers together so that the organism effectively operates through a cloud-based architecture – rather than the 500 clouds that exist today. As the compliance issues indicate, that’s a formidable task. In addition, the old governing paradigm of the defense procurement environment – a prime contractor that takes essentially lifetime responsibility for maintaining the asset – no longer really works. The way the cloud will have to be attended to is almost certainly going to require some acquisition innovation, because DoD can’t do it the way it historically has.
IT is also a global industry, not a national one. It uses technologies from everywhere, and the history of consolidation in the industry suggests that this is likely to be a continuing process. We’ll need some means of managing the access to the intellectual property of contractors that come in and go out of the defense market. It’s a more complicated problem.
All of us — private citizens and defense folks alike — could have software that’s quite a bit more secure than what we have today. That is, we have in the labs the ways to do this. It would just cost considerably more. And the cost would be not only monetary, but also in reduced convenience. It might change some of our values. For example, we could monitor things to detect intrusions. Well, that end has a cost in terms of privacy and First Amendment rights and so on. So there needs to be a discussion, at the national level, to decide whether and how much we want to invest in more security, what we give up and what we want to get back from it. Until that discussion happens, we’re not going to make real progress.
And then the question is, who’s going to pay? And you might be thinking, oh, well, the government should pay. Or you could say, well, the industry should get less profit. Or you might say that the investors should get less. But look in the mirror. Those are you. However we apportion the cost among the participants in our society, it’s all of us. That apportionment won’t happen until there’s a broad discussion and agreement that we face an existential threat more important than — you can list 10 other issues to debate in comparative terms, from nuclear proliferation to climate change. And that’s the scale on which this has to happen. The national problem of technological security is far bigger than the JEDI procurement.