The Best Defense Interview: Armitage on Pakistan's Tactical Nukes, Afghanistan's Future, and Why We Should Withdraw Now

Best Defense: Do you think Pakistan turned against the United States in Afghanistan in 2005? What makes you think that?

Richard Armitage: "When I was deputy secretary [of state], from 2001 to February 2005, I looked constantly for information that the Pakistanis were aiding the Taliban.... I did see liaison, but I could not find" strong evidence of more.

"2005, if you look at casualties [in the Afghan war]. There was the beginning of a sharp rise. I believe two things happened. The Talibs started digging up their weapons and the Pakistanis thought, Maybe the Americans will prove short of breath, and so maybe we should keep our hand in.

"There was a background to this. From our point of view, it was black and white. From a Pakistani point of view, it wasn't. In their view, we are a very unfaithful partner, with four or five divorces since 1947. So in the back of their minds is always, When are they going to cut and run?"

BD: How does that inform your view of the current situation?

Armitage: "My present view of the situation is that the Pakistan government is persuaded of the ultimate ability of the Taliban to form a deal with the Afghan government, with a rough return to corners -- the Tajik in the north, Pashtun in the south and east, the Hazaras in the middle getting kicked by everybody, and so on.

"I think in addition, Pakistan dramatically increased its nuclear arsenal after 2008-2009. They fear that we will swoop in and take them.

"With India, they now are looking at tactical nuclear weapons." [Their fear, Armitage said, is that if there is another Mumbai-like attack, India will respond with a corps-sized attack on Pakistan.] "Tactical nukes is what you'd use against a corps." [This might provoke India to escalate further.] "But Pakistan would say that its tactical nukes would deter that."

BD: I saw today (Monday) that three SAMs (surface-to-air missiles) were reported intercepted near the Pakistani border. What do you make of that?

Armitage: If it were true, "That would be seen as a very unfriendly act," one directed not against Afghan forces but against our airpower. Still, "I'd be skeptical of that" report -- the weapons were more likely MANPADS (man-portable air-defense systems) than larger SAMs.

BD: As the United States tries to draw down its presence in Afghanistan and turn over security to Afghan forces, what do you expect Pakistan to try to do?

Armitage: "I think they will remain on the trajectory they are on" -- that is, supporting Talibs in the south and east, and keeping an eye on Indian (and possibly Russian) dealing with the Tajiks.

If internal unrest grows in Pakistan, "they may have to spend a little more time at home," but still will likely remain on the same trajectory in Afghanistan.

BD: If you had lunch with President Obama today, what would you tell him about the Afghan war and about Pakistan?

Armitage: "Twenty-five years from now, Mr. President, I can assure you there will be a nation called Afghanistan, with much the same borders and the same rough demographic makeup. I probably couldn't say that about Pakistan."

On the Afghan war, "I would say, Mr. President, it is not worth one more limb." Perhaps leave just enough forces for counterterrorism missions, and maybe some trainers.

Rosa's Dystopia: The Moral Downside of Coming Autonomous Weapons Systems

By Brendon Mills

Best Defense guest columnist

Last Wednesday, Tom posted about one of the more provocative statements made during CNAS's fantastic annual conference. FP's Rosa Brooks, while discussing the morality of drones, implied that future drones with artificial intelligence would make better judgments than humans about when to kill in war. And if that's the case, she asked, how can we morally justify not using these drones? Brooks may be correct that drones one day will be better at making judgments about when to kill, yet the broader negative moral consequences of making AI drones the staple of our military would far outweigh the benefits of better tactical decisions.

Drones with artificial intelligence (commonly referred to as autonomous weapons systems) do have the potential to make better decisions than humans on the battlefield, because those systems would employ nearly perfect rational decision-making. Some may argue that we can never make a machine sophisticated enough to make all of the necessary decisions in an environment as complex as combat. However, Brooks reminded us that a decade ago the same was said about a computer's ability to drive a car. Yet Google's driverless cars have done exactly that -- indeed, by some measures they drive better than humans. Fellow panelist Ben FitzGerald agreed, saying that the technology for autonomous weapons systems will exist soon.

Such technology would bring real benefits. The most obvious is a massive decrease in casualties for U.S. forces, which would alleviate both the terrible human suffering associated with ground wars and some of the biggest long-term cost drivers of such conflicts. Autonomous weapons systems may also lead to fewer civilian casualties, because their enhanced rational decision-making would let them act free of the emotional stresses of combat.

However, more autonomous weapons systems on the battlefield would mean fewer humans there, reducing the costs of war and further insulating the public. The aforementioned benefit of fewer casualties and reduced human suffering is a double-edged sword: Some already argue that the American public is too sheltered from the costs and burdens of our current wars; imagine how little attention the public would pay to a war in which the only casualties were expensive erector sets that shoot. Ultimately, reducing the barriers to war makes war easier to choose. If war is easy to choose and the body politic doesn't care, there will be more wars.

Unfortunately, this isn't the only drawback. If we populate our military with autonomous weapons systems, our adversaries would adapt. States, and everyone else who fights these days, use war to force a policy on an adversary through violence, and our enemies couldn't change our policy by creating a scrap heap of our autonomous weapons systems on the battlefield. Instead, they would go asymmetric and target our noncombatants, because that would be the only way to truly make us hurt.

Although to some extent our enemies already do this, it's not their only option. We have people in uniform who have stood up and said, "me, not them." However, in a world where we only fight with autonomous weapons systems, targeting our civilians would represent our enemy's only hope for success.

And we're vulnerable.

In the age of cyberattacks and terrorism, we need policies that further insulate our noncombatants rather than serve them up as the only viable targets for enemies hoping to impose real costs on American society. As someone who wears the uniform, I would welcome a world in which my friends and I did not have to place ourselves in harm's way to protect the nation. But my friends and I signed up so that our enemies will fight us instead of our families. And I worry that if humans don't fight our wars, we'll have more wars and our families will be the enemy's primary targets.

Brendon Mills is a lieutenant in the Marine Corps and a graduate of both the U.S. Naval Academy and Harvard's Kennedy School of Government. He also worked as a research assistant on Tom's most recent book, The Generals. The views expressed here are his own and do not represent those of the Department of Defense or the United States Marine Corps.
