Hudson Institute

Transcript: The Kill Chain: A Book Discussion with Christian Brose

Bryan Clark
Senior Fellow and Director, Center for Defense Concepts and Technology


Following is the full transcript of the May 19th, 2020 Hudson Institute online livestream event titled The Kill Chain: A Book Discussion with Christian Brose.

Bryan Clark: Welcome to the Hudson Institute. I'm Bryan Clark, a senior fellow at the Institute. Today we're here to talk with Chris Brose, whose new book, The Kill Chain: Defending America in the Future of High-Tech Warfare, was just released. Chris is a former staff director for the Senate Armed Services Committee. Prior to that, he was the senior policy advisor for Senator John McCain, and previously held other positions in the government. Right now he is the chief strategy officer of Anduril Industries, which builds capabilities for the military, as well as the Department of Homeland Security, I believe. We can talk about that as we go through this interview, but welcome, Chris. Thank you very much for being with us today.

Christian Brose: Oh, thanks, Bryan. It's great to be here.

Bryan Clark: Just to start off with, what led you to do this book? I know you've been thinking about these issues for a while and obviously this is something you've encountered a lot during your several years, both on the Armed Services Committee and advising Senator McCain as a policy advisor, but what was the genesis that led to you really sitting down and writing an entire book?

Christian Brose: It's a great question. I asked myself that constantly, why did I get into this, every day I was working on it? I'd say I guess it was basically two things. Both of these were kind of a product of the many years that I spent working on these issues when I was in the Senate and just thinking about this stuff on a day to day basis. The first was just the growing realization in the years that I was there that we just had a fundamental problem that I felt was something that people didn't appreciate. That we were losing competitive advantage. That we weren't where we needed to be from a technological standpoint, and to a certain extent an operational standpoint. It was mostly just the sense that I didn't think there was the urgency to get after that problem that we needed, considering how fast that problem was closing on us, and in some respects, actually, I think how far behind that problem we actually are.

The second was really then, again, the thinking that I had been doing around, well, what do we do about this? On the one hand, there's a lot of talk about the threat and the operational problems presented by a peer competitor like China. There's a lot of talk about technology and the importance of emerging technologies, like artificial intelligence and autonomous systems, as being essential to enhancing America's competitive advantage from a military standpoint. It didn't feel to me, though, that there were concrete answers coming together. So, it's like, what do I do with this new technology? How does it allow me to think differently? Build new kinds of capabilities? Operate in different ways that are actually going to create competitive advantage for the United States military?

There's sort of a sense that we were talking about these technologies as if we're just going to layer them on top of the things we've always had and the way we've always operated and we're just going to be able to do it better. From my standpoint, look, I mean, I sort of was living at the nexus of these worlds. From the technological standpoint, really, I think, on the Hill, looking at the emergence of these technologies, the DoD's treatment of them, attempts to develop them, and then obviously looking at the operational problems and threat briefings and discussions with the Department of Defense, my view on it was, okay, well, is there a way to help bridge this divide or this gap between what it is we think we might be able to do with new technologies and what are the things that we are going to need to do or want to do differently from an operational standpoint? And, in a broader strategic context, where are we going? What is this next era going to look like? That was essentially the origins of it. And honestly, it was as much an attempt for me to get my own head around these problems as it was to say, here's my contribution to what I think the answers look like.

Bryan Clark: I totally agree that we are at kind of a juncture, where we need to make some decisions as a nation and as a Department of Defense on how we're going to deal with the problem that, in particular, China poses, but pretty much any high-tech competitor is going to pose, because we are still lugging around a very legacy military that hasn't quite changed how it operates or how it's postured to take advantage of or exploit new technologies that are becoming available. It seems like one thing you really focused on is the idea that it's not just about building these new technologies, because, arguably, they've been around for a few years, and Anduril even does some of this work, obviously. It's about changing the way that we're going to fight, the way we're going to use these technologies in terms of the operational concepts.

You talk a little bit in there about the idea of human command and machine control, which I think is an interesting way of describing what the Department often characterizes as manned/unmanned teaming, which kind of makes it seem like the manned system and the unmanned system are on par with one another and they're going to go out and act as a team, and if they get separated they can operate independently, and if they come back together they can team up again. Which is obviously not a correct way to characterize or consider the introduction of unmanned or autonomous systems. How do you think of that new way of operating that we're then going to need to embrace with the advent of autonomous systems? And then we can maybe talk about AI as an added element to that.

Christian Brose: Yeah, for sure. First off, to give credit where credit is due, “human command and machine control” was your phrase that you coined with Dan Patt, which I give credit to both of you for in the book.

Bryan Clark: Oh, thank you. Thank you.

Christian Brose: I thought it actually encapsulated the way I thought about it very nicely, which is why I sort of gave it pride of place. The reason I don't like manned/unmanned teaming is yeah, the sense that the manned and the unmanned system are equals and that they're somehow on equal footing, which I just dislike. I think the other piece of it is there is this tendency to believe that these new technologies are just so fundamentally new and different that the way that we've always thought about the control of military operations, all of the law, policy, norms, and procedures that had governed this in the past are somehow going to be thrown out the window because this stuff is just fundamentally different.

My own view is that it just isn't. It's going to be much more of a movement along that continuum from where we have been than some brand-new era. Ultimately, I think it really does come back to this question of command and control, which is, again, a very familiar military concept. I think it's helpful to take the systems out of it and really get to the question of what we're ultimately talking about, which is the performance of military tasks. Those tasks are going to continue to be performed. The question is who or what is going to be performing them. You're still going to have superior actors who are controlling subordinate actors. Traditionally, that's been human superiors commanding human subordinates, but I think as these systems become more intelligent and more autonomous, that will extend to some of those lower-level, more technical, repetitive, more mundane tasks that take an inordinate amount of human time in the US military right now.

I mean, we have tens of thousands of humans who are doing processing, exploitation, and dissemination of sensor information, just as one example. Increasingly, more of those tasks could be performed by more intelligent, more autonomous machines. That doesn't mean that they're going to just be off doing it on their own. It's still going to be, again, through the same architecture and framework of command and control, where humans set very clear parameters for the control of military tasks. You are going to test significantly and train significantly the subordinate actors who are going to perform those tasks. And in the process of training and testing, you're going to build trust that they can do the thing that you are giving them responsibility to do.

We talk about autonomous systems as if there is such a thing. The reality is, autonomy describes the relationship between a human who is delegating tasks and the someone or something performing them. So, in that respect, I think it's really more about what the standards are by which we are going to come to trust machines to perform tasks that currently or previously only humans could perform. But I don't think the way in which we're going to do that is going to be any different from the way that we evaluate humans in that respect, or evaluate less intelligent machines that we've been relying on for a very long time. We have processes in place to do this. And I think that that's actually something we should spend more time thinking through as a construct for how this can help us govern the emergence and use of these new technologies in the future.

Bryan Clark: Yep, absolutely. In terms of the autonomous systems, or the systems that are exerting some control over their own actions, there are a couple of different flavors, and you talk about it a little bit in the book. You've got, essentially, highly sophisticated systems. You've got Global Hawk and systems that are very expensive, with relatively small numbers of them. They can operate relatively independently, do some of their own mission planning, and respond to some of the stimuli in the environment. Then you've got cheap systems that are somewhat expendable, maybe even disposable, and they operate independently but their scope of action is very constrained. They're not really able to make a lot of decisions by themselves. Obviously there's a role for both of those. What I'm curious about is, when you think about new ways of operating that exploit autonomous systems and unmanned systems, how do you see the relationship, or how would you see both of those families of unmanned systems being used?

Are you trying to do a war of attrition like Ender's Game, where I'm going to throw a bunch of disposable, cheap things at somebody and just overwhelm them? Do you see that as being a component of a larger force that maybe uses that episodically, while the rest of the force is pursuing a set of more traditional maneuver actions and these expendable robots are just an element of that? I mean, how would you see the different types of unmanned systems being employed in military operations, as opposed to just kind of throwing a bunch of robot waves at people?

Christian Brose: It's a great question. I think perhaps the point that unifies the two, certainly in the present sense, is that, whether it's a Global Hawk or something smaller and cheaper, we talk about them as unmanned systems, but they're actually pretty exquisitely manned when you really look beneath the hood and see all of the different particular tasks that are being performed by human beings, remotely in the case of many of these unmanned systems, in order to make them operationally useful. I think to me the big change is going to be, rather than having one unmanned system or one manned system that requires an exquisite number of human beings behind it to make it operationally useful, it's actually the inversion of that command and control relationship, where you can now have a single human being in command of large quantities of systems.

To get at your question, I think the real opportunity here is getting mass back on our side. For many, many, many years we have made the choice around being qualitatively superior, even in the face of a quantitatively superior adversary. And we've been able to do that because we've had exquisite technological overmatch that's allowed us to hide, evade detection, penetrate into enemy spaces, fire a limited number of times but exquisitely accurately. So, I mean, to me, I think, the opportunity is flipping this back to say, all right, well, from an operational standpoint, hiding is going to become harder. I am going to have to confront larger waves of systems coming at me. And I think autonomy really opens up the possibility of being able to put mass back on our side, and to your point, yeah, fight some of these wars of attrition smarter and cheaper than maybe we had been expecting to. In terms of how that allows us to also then fight differently, other than just we're going to grind each other down and the last person standing wins, which I think there's something to be said for, I do think the ability to operate faster is going to be another critical component of this.

You've written eloquently about this in terms of a decision-centric model, and that's ultimately why I focused on the kill chain as an organizing concept of the book: it's ultimately not the particular platforms or the pieces of the systems that are interesting. It's ultimately the ability to understand what's going on, to rapidly make decisions and take relevant actions, and to increase the quantity and quality of that, the speed and the scale by which you can operate, where, again as you've written, you create so many different dilemmas for the adversary that it just fractures their ability to make decisions. I do think that is something that autonomous systems are going to provide us: a real capability advantage, sort of separate and apart from basically just we're going to grind each other down and at the end of the day we're going to have more systems on the battlefield than our competitor. Which, again, is nothing to scoff at, I think, as an operational outcome.

Bryan Clark: What we found in the war gaming that we did, looking at these kinds of concepts, is that the players liked the idea of being able to do the attrition attack and just throw a bunch of something at the adversary and overwhelm their defenses. But what they really liked is having the ability to do that as well as do some exquisite attacks while the adversary is busy dealing with the attrition battle that's happening elsewhere. They can focus a smaller number of platforms, which still might be autonomous, on the pinpoint strikes against the command and control nodes and the long-range sensors. Those capabilities are really the game changers in terms of the way the battle is going to proceed. So, getting that decision advantage, taking away his eyes while simultaneously keeping his hands busy, was something that in the war games we found very helpful.

Christian Brose: Well, just to build on that, I mean, I think the point that I try to stress in the book is that what's best is really putting the focus on the outcomes that we're trying to achieve rather than getting overly consumed with what type of system is actually going to be most relevant. Because, again, I'm prepared to believe that the best answers to these problems of how you build the sort of effective battle network that's going to solve these operational problems could be all legacy systems used in new ways, could be a mixture of old technologies and new technologies, or could be all brand new things. At the end of the day, it really shouldn't matter how you combine these things. But I think, again, and a point you've made so well, you have to be able to combine them in a more elegant, more dynamic way so that you can build these different battle networks that are not just entirely all brand new things or all exquisite, point-to-point connected old things, but really get those interesting synergies between a 30, 40, or 50 year old platform and some brand new autonomous system that was developed yesterday.

Bryan Clark: Exactly. Let's talk a little bit about where the US has a competitive advantage here. We could talk a little bit about how you would actually implement this kind of force and what we could leverage in terms of the US technological base, but also, where do you see the fundamental advantages that would allow the US to exploit these emerging technologies better than an adversary like China?

Christian Brose: I mean, I think in a lot of these technologies, we as a nation still have considerable advantages and considerable capability. I think the challenge is just aligning the advantages and capability that we do have with the actual military problems that we're facing. This is the sort of familiar conundrum of how you get companies and founders and others who are working in these technologies, but are really focused on commercial applications and are not interested in or actively opposed to working on military problems, to engage. I think that that is going to be a conundrum for us. I think one of the biggest advantages that the United States has is just the operational expertise and excellence that we have in the United States military, separate and apart from the technology areas. It's hard to replace just the amount of time that we have spent solving operational problems, actually dealing with these types of challenges in combat. It's not something that we should be overly reliant upon, because a lot of these problems are going to be new and different.

But, from the standpoint of really thinking about how you solve operational problems, how you bring the joint force together to do that, I think we have a lot of ability there. But at the same time, I think we need to be realistic that there are a lot of aspects of how China is going to develop and use these technologies that could very well give it a leg up over us when it comes to scale, when it comes to data collection and retention. Certainly when it comes to being, shall we say, less interested in some of the ethical concerns that I think we spend a lot of time rightly focused on. When you have a government that's sort of founded on a distrust of its own people, my sense is they're going to be a lot more willing to delegate these types of decisions to autonomous machines than the United States is.

So, I think it is going to be a long-term competition where we will have to look for areas of advantage, and we may not always be the leader in these areas. The question is how quickly can we bring these technologies in and integrate them into the force to make them operationally relevant? I think that's something that we actually have done quite well in recent years, but this is a very different type of challenge, and we need to be mindful of the fact that much of what we've learned over the past 20 years may not all be transferable to this great power competition era.

Bryan Clark: One interesting thing that comes out of the way you were describing how unmanned systems or autonomous systems might get used, and how some of the war gaming that we did played out, is that you would use your unmanned systems to try to gain a decision advantage, meaning you're going to use them to operate faster, faster in time but also faster by operating at scale and giving the adversary more things to look at. So, if you can speed up your decision cycle and improve its quality like that, and hopefully you're creating enough deception and enough confusion on the adversary side that he is slowing his own decision cycle, then it seems like one thing we may be able to rely on is mission command.

The idea is that a US force has been trained in such a way that they are willing to improvise and use their own initiative. When communications are lost, they're willing to adopt tactics that might not ordinarily be what they would turn to based on doctrine. It seems like the willingness of US leaders to take advantage of their own initiative and ability to improvise might be an advantage if you're looking at a decision-centric fight, where you're having to use your own unmanned systems that are under your command to come up with a tactic in the absence of a planning staff or some higher direction. It definitely seems like that might be a form of competitive advantage as well.

Christian Brose: I think that's right. I mean, I think the challenge is that the United States military is going to have to relearn a lot about mission command as well. But I think, to your point, we're much better positioned to do that than an adversary that's very top-down, where there's sort of an inherent distrust in the lower ranks. I think that is 100% an advantage that we have, but, you know, that's something that we're also going to have to relearn after 20 years where we certainly practiced a lot of mission command in a lot of places, but that wasn't necessarily the way a lot of these conflicts were structured.

Bryan Clark: Right. That's right. That brings me to a point that I think a lot of people will ask, which is how do we actually make this transition? You discussed, and we've discussed in our own writings on this subject, that you don't have to transition to this robot force of autonomous systems right away. This could be an element of the force that gets gradually built up over time, and even a 10% contribution to the force from unmanned systems or autonomous systems makes a big difference in your operational outcomes. But, other than just going back to the defense contractors and building a bunch of unmanned systems, are there better ways that DoD could be trying to take advantage of this enormous tech industrial base in the United States to field unmanned systems and AI-enabled command, control, and management tools more quickly than it would if it goes through the normal acquisition pipeline?

Christian Brose: To me, this is the $64 million question. It's certainly one thing to talk about all of this. I think the much harder challenge is how to do it. And again, something that really hit home for me and was really eye-opening in the course of doing the book is how so many of the things that we are now saying are things that we have said over the past 20 to 30 years, network-centric warfare and so on. I mean, it all sort of rings true and very similar to, I think, many of the things that are being said and written now. You have to go back and ask, well, why did we fail to do so many of the things that we said were so important for so many years?

Part of it, I think, is that we haven't gotten the incentives right, and that was a main emphasis that I put in the book. You know, I'm a big believer in incentives. I think to a large extent we've gotten exactly what we've paid for. I think the way that you begin to change that is you have to focus on the actual things that you're trying to buy. I'm a big baseball fan, a big fan of the shift to sabermetrics, where we're now measuring team outcomes rather than player inputs. In much the same way, I think we need to get into a position where we're actually competing out the things that we are trying to do, measured against the outcomes that we're trying to achieve, so that there's an actual process and a route, kind of a repetitive process every year, with a certain amount of money held in reserve at the beginning of the year by the senior leaders of the Department of Defense, with the Congress's support, to say: we are trying to reduce the time to close kill chains.

We're trying to enhance the decision-making advantage of US forces. We need to measure it against specific operational problems that those forces are going to have to confront. We have to get away from these kinds of broader buzzwords, like joint all-domain command and control or multi-domain operations, which we could have an informed debate about what they mean, but you have to really boil them down to the specific military problems that you're going to have to solve, under the conditions that you're going to have to solve them, against real-world adversaries, not generalized competitors. I think that if you actually begin competing that out every year, you have an ability to see what is performing best. I think that's the best way to navigate this transition, where initially, look, much of that force is going to be our legacy force.

And then the question is going to be how can these technologies enable that legacy force to be faster, to scale more significantly? That'll be the question of how technology enables current operations, current force. Eventually, you'll start to see areas where new technologies, new capabilities will replace legacy systems because they're capable of performing better as part of that integrated battle network. But, unless you're measuring the thing you're actually trying to do, then it's just sort of every man for himself and it doesn't really get you the kind of data driven outputs that you want so that you can direct what is ultimately, I think, going to be a decreasing amount of resources toward the force you're trying to build.

I think the other piece of that is that it begins to create the incentives for industry to really understand that if they put their own money toward solving these problems, they have a path into an actual merit-based competition, where if they go out and fund a new battle management system or a new aircraft or weapon, there's actually the prospect that the Department of Defense and the Congress have a mechanism for on-ramping that at scale very quickly. And by the way, if someone shows up with a better capability than you this year, don't worry, because you'll have the opportunity to come back and compete next year. This won't work for everything.

I mean, you're going to be limited with larger and more capital-intensive programs, like aircraft carriers and the like, but there should just be a lot more of an attempt to put competition back into the process, not in the sense that we need acquisition competition at the front end, but constant operational competition to determine what the systems are that you should actually be putting resources into and scaling pretty considerably, so that, again, you begin to see that the Department of Defense is moving money toward the things that they actually say are important. I think that's the thing that I look at, and certainly looked at from my time on the Hill: what senior leaders and senior members of Congress say is interesting, but what they spend money on is actually what is going to move the needle in terms of programmatic choices and then investment choices on the part of private industry and the investment community.

Bryan Clark: That brings up a couple of very interesting points. That was a fascinating discussion right there. One is requirements. The Department of Defense builds requirements today essentially using a systems engineering approach, where it determines how it thinks the force is going to be configured in the future. It determines what it thinks the future scenarios are likely to look like. And then they do, essentially, an analysis to figure out, well, what are the capability gaps, given the assumptions for how I'm going to fight, the assumptions for what the threat looks like, and the assumptions for what my available forces will look like 20 years from now. So, there's a bunch of assumptions built into it, and it's all to come up with a point solution.

What you're talking about is very different, which is not a point solution that you're driving toward, but instead more of a bottom-up attempt to improve mission outcomes. So, the Department of Defense would establish: here are the missions we think are important, here are the outcomes we want to have happen, here's the range of environments in which those outcomes are needed. So, China and the South China Sea, or the Baltic, or something. It sounds like that's what you're talking about here. It's much more of a model where the joint staff comes up with outcomes and missions or military problems they want to address, and then a lot of the job of the Department of Defense is to harvest ideas and assess their ability to improve those outcomes.

Christian Brose: Yeah, I think that's exactly right. You said it very well. I think that unless we are actually focusing on the joint outcomes that we're trying to achieve, we're going to end up buying a bunch of things that may or may not actually achieve those outcomes. I think part of my problem with the requirements process, quote unquote, is just the degree of hubris that's kind of baked into it. Which is somewhat befitting, I mean, given the experience that we've just had: 30 years post Cold War, top of the heap. But I just don't think that's really going to hold up for us in the future. It's kind of a hackneyed example, but if I had set my own requirements for my mobile device, I'd have the best flip phone in America right now.

We've got to get beyond this idea that if it's not invented in the defense establishment or cooked up inside of the Department of Defense, it's somehow no good. I'd be much more interested in every year being able to say, look, I have to be able to defend forward bases from large quantities of incoming weapons. I don't care how I do that. I don't care with what I do that. The question is, can we field a better solution that reduces the likelihood that my forward bases are smoking holes in the ground 48 hours into the start of the conflict? Focus on that outcome, and then the capabilities that come together to achieve it are the things that are really going to drive the expenditure of resources. Expenditure of resources, not expensive resources, though it will probably require that too. And just iterating on that, so that every year there's an understanding that whatever wins is going to get funded, and we're going to come right back and figure out if there's a better way to do this next year, and it's going to significantly move the needle on the money that we're spending.

Bryan Clark: That kind of raises the question of intellectual property rights and how we design software, or potentially hardware, so that companies can have that opportunity to compete and win the contract next year even if they didn't win it this year. Because you're going to have a system that's already been developed, or some parts of a system of systems, that you're going to need to introduce your capability into, and you want to incentivize companies to do this, so you don't want to tell them, well, you're going to have to give up all your IP in the process of competing in this effort to provide systems to the US government. So, there are lots of opportunities to try to create a model or an environment where companies can retain their intellectual property rights while also modifying systems on a very regular basis. Have you been thinking about that? I mean, this is probably something Anduril deals with, in terms of bringing in other people's systems and trying to integrate those with your own.

Christian Brose: Yeah, and I think, I mean, to me, this is one of the core problems that we're going to have to solve. I think the department is very right to criticize industry, or, I would argue, criticize itself, for too often in recent years becoming beholden to proprietary solutions from industry, where they've been locked into black boxes that they've been incapable of updating themselves and unable to move at the speed that technology is allowing them to move. That's all true and valid. My concern is that the backlash against that is going to lead toward the belief that, well, it should all just be government-owned. As if we would say, well, our experience of the F-35 has been a real downer, so the government is going to build its own high-performance aircraft from now on. I mean, it's just nonsense.

I think the real challenge is figuring out what are the parts of that architecture that the government is going to have to own and define to ensure that you do have openness, scalability, and extensibility in the future. So, things like the application programming interfaces, the reference architecture, and, to a certain extent, standards. Those are things that the government is going to have to, I think, define, but then really allow industry to be entrepreneurial and creative about how they bring solutions to bear. Again, I don't think it's terribly difficult. I mean, I think the way we saw this play out with the commercial internet was you had a handful of the major movers get together and hammer out a set of architectures and standards and then iteratively improve them as they went, which is why I have an Apple computer right now that's, I think, running a Google application while I'm writing Microsoft Word documents. Nobody mandated that that had to be so; it was mostly just creating the incentives for people to play together in a way that then lets people develop applications on top of it. These new things can be developed without a sense of I have to know exactly what the future is going to look like in 10 years and build toward that.

I mean, it hasn't worked well for us when we've tried to do that in the past, and it's only going to get worse if we keep trying to do it in the future. It's mostly, I think, trying to determine what are the core things that the government really has to define to really turn industry and the private sector loose on these problems in a way that you get the best capability, you get a rapidly evolving capability, but at the end of the day the government can still have confidence that all of this stuff is going to come together and cohere, the same way that when I buy a new sensor for my house I can plug it into the architecture that I'm running in my environment here. It's totally doable. And I mean, this is the thing that I come back to in the book: this isn't witchcraft. These are things that the United States military and service members are doing every day in their private lives. There's no reason why we can't do this in defense. Yet, we're 10 years behind where the commercial world is in this respect.

Bryan Clark: To close out here, it seems like one of the things we'll have to do is also incentivize industry from a financial perspective, so we can make it easier for new commercial players to enter and offer solutions to these military problems. But they're used to getting 20x returns. They're used to VC money being used to support 10x and 20x returns, and if you're getting 10% returns, that's probably not a very successful use of VC money. So, companies that are from that world are going to have difficulty seeing the value in trying to compete for DoD dollars. Is there a way that the DoD or the government can better incentivize those companies that are used to much higher returns on the commercial side?

Christian Brose: I think that they can definitely do better as far as creating better incentives, but I think the reality is that, look, I mean, working in the defense space you're not going to get the kind of returns that a commercial software startup is going to get. So, I think to a certain extent there's just going to need to be a baseline set of expectations that maybe you can do better than the 2% or 3% that traditional industry is returning, but you're not going to get to the 20% returns that commercial software is going to get. I think that's a doable proposition, but I think, again, from the government standpoint, they need to get out of this mentality that they so value cost certainty and controlling the profits of industry that they would rather pay a billion dollars for something and know that industry only got two and a half percent profit, as opposed to paying $400 million with industry getting 20% profit.

At the end of the day, we need to be aligned toward what's really important here. But, I think, from the standpoint of creating those incentives you're going to see a lot more companies and engineers and technologists and investors interested in being involved in national defense if the government is actually buying the emerging technologies that they say are important, that these companies and founders and investors want to build. Honestly, I mean, I think we overthink a lot of this from the standpoint of why is Silicon Valley, or why is the technology community not doing more with respect to the DoD. And I think a lot of it boils down to, look, if you were actually buying and deploying this technology at scale, you would see a lot more engineers who thought they could make a successful career doing national defense work, you'd see a lot more companies getting founded, and you'd see a lot more private investment going into modernizing national defense as opposed to optimizing advertising algorithms for social media.

Bryan Clark: Yeah, absolutely.

Christian Brose: There's a degree of supply and demand here, and the government needs to create that demand. I think if they do and actually put money behind what's important, you'll slowly, but nonetheless I think significantly, start to see industry respond, and traditional industry too. I mean, part of the thing that I raise in the book is that you look at a lot of these earlier attempts at unmanned systems, autonomous systems, aircraft, and weapons, things that struggled for funding for years and got canceled prematurely. That doesn't exactly send a strong incentive to traditional industry that this is something that they should really be prioritizing in their portfolio, when their traditional offerings are getting funded in considerably larger increments.

Bryan Clark: Right, right. Absolutely. Well, thank you, Chris. Thank you very much for being with us today. Chris Brose, chief strategy officer for Anduril Industries. His most recent book, The Kill Chain: Defending America in the Future of High-Tech Warfare, is available right now. I'm sure it's available many places in addition to Amazon, which is where I think I got my copy.

Christian Brose: Awesome.

Bryan Clark: Thank you very much, Chris, for being with us today, and good luck on the book.

Christian Brose: Excellent. Well, thanks for having me and thanks for everything that you're doing. It's a pleasure to chat with you. As I hope you saw in the book, there's a lot of that that's got your fingerprints and influence all over it.

Bryan Clark: Thank you.

Christian Brose: I will give you credit where credit is due, but I will take all of the blame for things that I mangled and got wrong. But, honestly, it's a pleasure to be with you and I appreciate the opportunity.

Bryan Clark: Thank you. It was great having you on. Thank you very much everyone for being with us today. This is Bryan Clark for the Hudson Institute signing out. Stay safe.
