Killer Robots: from research labs to military contractors

John Markoff of the New York Times has a new article on the development of lethal autonomous robot weaponry: Fearing Bombs That Can Pick Whom to Kill. Markoff’s article is well worth a close read. It offers some great examples of the rhetoric behind no-holds-barred technological optimism:

Weapons of the future, the directive said, must be “designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”

The Pentagon nonetheless argues that the new antiship missile is only semiautonomous and that humans are sufficiently represented in its targeting and killing decisions. But officials at the Defense Advanced Research Projects Agency, which initially developed the missile, and Lockheed declined to comment on how the weapon decides on targets, saying the information is classified.

In the world of national security and defense, we often see guidance that is simply impossible to verify. This is one such example. So long as war is war, the true inner workings of such a system will be so classified that we will never have enough insight to verify whether humans really are in control at all. That is the nature of secrecy, and it means democratic oversight of the technological implementation is a pipe dream. If we think Congress can fine-tune policy so that we stay just on the ethically proper side of a fine line, then we are fooling ourselves.

Then there is the standard rhetorical move of appealing to a value hierarchy: remind us of the horror of war, of the paucity of human ethics in war, and then use this to motivate the move to robots. After all, they’re not human, so they won’t be unethical. Right?

Military analysts like Mr. Scharre argue that automated weapons like these should be embraced because they may result in fewer mass killings and civilian casualties. Autonomous weapons, they say, do not commit war crimes.

On Sept. 16, 2011, for example, British warplanes fired two dozen Brimstone missiles at a group of Libyan tanks that were shelling civilians. Eight or more of the tanks were destroyed simultaneously, according to a military spokesman, saving the lives of many civilians.

It would have been difficult for human operators to coordinate the swarm of missiles with similar precision.

“Better, smarter weapons are good if they reduce civilian casualties or indiscriminate killing,” Mr. Scharre said.

“Autonomous weapons do not commit war crimes.” Perhaps these folks are saying that, because robots are unemotional, they are more ethical? We know that, today, the capabilities of robots to wreak destructive havoc far exceed their ability to reason about culture, the ramifications of violence, innocence and guilt, you name it. Do we have a reason to believe this “abilities gap” is closing rather than growing wildly as robots become even more capable? I argue that we don’t.

In fact, we will always be able to say that war saves lives. But that argument has little to do with whether humans or robots are in charge, and we all know that arguments justifying the existence of war do not help us justify the existence of killer robots. You will always find pundits who can point to examples where high-tech weaponry apparently saved many lives. Every such case is highly selective, as one-sided evidence always is, and no such evidence will help us characterize just how the system of war-making and sacrifice changes in the highly unequal warfighting future we face, when legions of autonomous robots confront legions of desperate humans who have no access to similar technology. This isn’t Star Trek; this is a messy, error-filled, real-world scenario.
