CNN conducted a live interview with me today regarding robotics and underemployment. You can watch their edited version, which differs slightly from the broadcast. For some context: the broadcast piece began with b-roll of Baxter, and an interview with Rod Brooks, who stated that Baxter is designed to work right next to humans; therefore it doesn’t take human jobs, he explained, it merely works alongside humans. Their second recorded interview was with Red Whittaker, who stated that, to his knowledge, no robot he has ever developed has taken a human job. It was in the context of these two sound bites that they turned to me to either agree or disagree with robotics experts about just how robots may or may not threaten our jobs. The gap between how roboticists tend to talk about this technology and the concerns of ordinary citizens is both stark and frustrating. Often I hear roboticists explaining that robots do dull and dirty jobs that no human ought to do. My gentlest possible interpretation of this is that the ivory-tower researcher may not realize what a massive proportion of humanity works hard at difficult, manual jobs to make ends meet. Maybe they believe that, somehow, rid of all this work, we will find new and fulfilling roles for all. Or maybe researchers spend very little time thinking about societal consequences.
John Markoff of the New York Times has a new article on the development of lethal autonomous robot weaponry: Fearing Bombs That Can Pick Whom to Kill. Markoff’s article is well worth a close read. There are some great examples of the rhetoric behind no-holds-barred technological optimism:
Weapons of the future, the directive said, must be “designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”
The Pentagon nonetheless argues that the new antiship missile is only semiautonomous and that humans are sufficiently represented in its targeting and killing decisions. But officials at the Defense Advanced Research Projects Agency, which initially developed the missile, and Lockheed declined to comment on how the weapon decides on targets, saying the information is classified.
In the world of national security and defense, we often see guidance that is, simply, impossible to verify. This is one such example. So long as war is war, the true inner workings of such a system will be so classified that we will never have enough insight to verify whether humans really are in control at all. That is the nature of secrecy, and it means democratic oversight of the technological implementation is a pipe dream. If we think Congress can tweak regulations to keep us just on the ethically proper side of a fine line, then we are fooling ourselves.
Then there is the standard value hierarchy rhetorical approach: remind us of the horror of war, of the paucity of human ethics in war, and then use this to motivate the move to robots. After all, they’re not human, so they won’t be unethical. Right?
Military analysts like Mr. Scharre argue that automated weapons like these should be embraced because they may result in fewer mass killings and civilian casualties. Autonomous weapons, they say, do not commit war crimes.
On Sept. 16, 2011, for example, British warplanes fired two dozen Brimstone missiles at a group of Libyan tanks that were shelling civilians. Eight or more of the tanks were destroyed simultaneously, according to a military spokesman, saving the lives of many civilians.
It would have been difficult for human operators to coordinate the swarm of missiles with similar precision.
“Better, smarter weapons are good if they reduce civilian casualties or indiscriminate killing,” Mr. Scharre said.
“Autonomous weapons do not commit war crimes.” Perhaps these folks are saying that, because robots are unemotional, they are more ethical? We know that, today, the capabilities of robots to wreak destructive havoc far exceed their ability to reason about culture, the ramifications of violence, innocence and guilt: you name it. Do we have a reason to believe this “abilities gap” is closing rather than growing wildly as robots become even more capable? I argue that we don’t. In fact, we will always be able to say that war saves lives. But that argument has little to do with whether humans or robots are in charge, and we all know that arguments justifying the existence of war do not help us justify the existence of killer robots. You will always find pundits who will point out examples where high-tech weaponry apparently saves many lives. Every such case is highly selective, as with all one-sided evidence, and no such evidence will help you characterize just how the system of war-making and sacrifice changes in the highly unequal warfighting future we face, when legions of autonomous robots face legions of desperate humans who have no access to similar technology. This isn’t Star Trek; rather, this is a messy, error-filled real-world scenario.
By now you have heard chatter about Amazon’s new Echo device, and there is a somewhat low-key video on YouTube you can watch to see how Amazon thinks of this device in the home. The best quick read I have seen thus far on just why Amazon wants to be the portal in answering your questions and tracking your home habits is an article written by Patrick Moorhead at Forbes, entitled Amazon Echo: What you need to consider before buying. Multiple companies are vying to be the entry point for your behavioral interaction at home: Nest, Google, Amazon, Apple, Samsung and many others. All of them want to know what you do, what you buy, and how you behave at home. Simply put, your physical home space is the Mount Everest yet to be conquered. Companies know all about your behavior online, but this physical world is the one missing territory in their attempts to model our desires, our needs, and how we will respond to all possible stimuli. These companies’ business models depend on predicting our future behavior, and Echo is one example of the many competing products working hard to be the digital-physical glue in our lives. I have written often about this frontier, and about the fact that companies are desperate to cross the barrier, whether with sensors (computer vision) or with direct human-robot interaction (Echo). The interesting pattern to look for next: just how will regulation or social pressure define the boundaries of behavior mining, storage and exploitation? When the government does subpoena someone’s Echo diary because that person has shown an unhealthy interest in, say, revolution, will the public outcry over NSA surveillance connect the dots to the ever greater opportunities for such surveillance that convenience technologies afford? Unfortunately, these hard issues do not rise to the level of national debate until some egregious mishandling of private information occurs. Only after we’re scarred do we even begin to pay attention to the road we’re traveling.
Kevin Kelly has an article in the newest issue of Wired: The Three Breakthroughs That Have Finally Unleashed AI on the World. Thanks to Randy Sargent, and to David Brooks’ post in the New York Times, for making me aware of this excellent piece. Kelly spends far more time discussing where AI is headed than explaining what the three breakthroughs happen to be. His characterization of AI is very similar to the one I espouse for robotics: the AI coming right now is not some singularity-borne hyper-conscious entity that bests us all; it is the somewhat useful, only moderately intelligent AI subsystems that will be infused in everything around us, whether in the Net or the physical world:
The AI on the horizon looks more like Amazon Web Services—cheap, reliable, industrial-grade digital smartness running behind everything, and almost invisible except when it blinks off. This common utility will serve you as much IQ as you want but no more than you need.
In a way, you can think of AI as an IQ booster: a turbocharger that will be added to all sorts of products and services swirling around us. But this is not human assistance from a full-fledged natural intelligence standing next to us. It is code, with very limited capabilities. Consider how walking amongst robots in the park is simply unlike walking amongst humans: robots are alien, and we just don’t know as much about their provenance as we do about humans. Kelly says just the same thing about these AIs that will commonly deal with us:
The chief virtue of AIs will be their alien intelligence.
He quotes a medical diagnostics founder saying that a child born today will rarely see a (human) doctor for diagnoses in his adult life, because the AI systems will be far better at this. I have already experienced poor human bedside manner in my life. How will our relationship to medicine change in this possible future, where the very notion of bedside manner is altogether obsolete?
One final quote, because I have long argued that robotics will give us a crisis of identity regarding what we consider human. This problem is not just philosophical, but a problem of power relationships: after all, if robots consistently take over ever-greater swaths of physical and cognitive labor, what is it that we will do, as humans, to earn income? The same problem faces us with AI, as IQ supplied by non-humans steadily changes our relationship to our own sense of what makes humans special, and how we ought to treat the humans around us.
We’ll spend the next decade—indeed, perhaps the next century—in a permanent identity crisis, constantly asking ourselves what humans are for.
So, as David Brooks points out, all these trends point toward the concentration of knowledge and power in the hands of the few companies with the AIs in tow. We continue the never-ending slide from human labor to human-invented capital that replaces that very labor. Except now the labor in question must be construed broadly, covering ever more cognitive fields, from medicine and tutoring to friendship and companionship. I think it’s time to reread Player Piano.
In PBS’ Nova Next, Tim de Chant just published an excellent analysis of technological underemployment called Navigating the Robot Economy. This piece interleaves real-world examples with analysis by two excellent economists, David Autor and Erik Brynjolfsson, both of MIT. These economists take somewhat different views, positing a variety of mechanisms explaining just how automation may be a harbinger of displaced human labor, or alternatively how automation may help those (still) employed achieve even greater prosperity. Of course, both views of the future can come true, but then the question this raises is: just how will overall inequality be affected by the dynamics of ever-improving automation? De Chant does an excellent job in bringing together modern-day examples, including those of the auto industry and the medical industry, but also reminding us of the disruptive shifts in employment that the 1800s saw thanks to the mechanized loom. The article is worth a careful read, but will leave you mostly with a feeling of uncertainty. Even worse, Brynjolfsson makes the convincing point that the next ten years will be far more disruptive than the last ten. And that, coming at a time when the past ten years have seen a total decoupling of company productivity from worker wages, is depressing indeed.
I do not give such praise easily, but Benjamin Wallace-Wells just published a New York Magazine article about drones, and it is the very best in-depth article I have read on drones, ever: Drones and Everything After: the flying, spying, killing machines that are turning humans into superheroes. I suggest reading the whole thing, and clicking on many of the example links: the videos are both beautiful and horrifying, in all the right, provocative ways.
Much hay has been made about 3D-printed plastic guns, and now comes a new set of articles about the same folks, this time announcing that a milling machine can make the parts that really do need to be made of metal, as reported here. Of course, CNC machines have been around for a long, long time, so the idea that this is new is somewhat inaccurate. Yet as the price of automated milling machines falls over the years, it is true that regulation of the production of metal parts, such as the receiver in a gun, becomes ever more irrelevant. The Maker movement is a fabulous form of empowerment, but we will always see ramifications that are less inspiring and more threatening to our own freedom. This is a one-way street we are walking down.