Ben Way’s book, Jobocalypse, is subtitled “The End of Human Jobs and How Robots Will Replace Them.” The title summarizes the book’s attitude well, and while I agree that this issue is worthy of serious discussion, Way’s book demonstrates common fallacies that are worth identifying. Way starts with a chart showing employment slack, and here he is inspired by McAfee and Brynjolfsson at MIT. The interesting pattern is that employment following a recession recovers both less quickly and less fully with each successive recession, and this portends business recovery practices that are becoming ever less friendly toward the individual worker. Way explains how cautious behavior on the part of a recovering company leads it toward lower-cost routes to high productivity and profits rather than long-term commitments to newly hired full-time workers, even in the face of increasing consumer demand. Rightly, Way identifies increasingly inexpensive and flexible automation as an important enabler of this pattern, and I agree fully with this analysis.
However, in looking at automation itself and how it improves over time, Way’s argument repeats a mistaken trope so common that I believe we need to name it: Moore’s Leak (with due apologies to Gordon Moore). Way shows an oft-reproduced chart of computing power from 1900 through 2020. The chart plots MIPS per $1000 and shows a healthy doubling at least every 18 months, as suggested by Moore’s Law. Computers from various years are labeled on the graph, and the future looks bright for ever-faster computers. But the problem is the labeling: “Brain power equivalent” along the right lists bacteria, spiders, lizards, mice, monkeys and, of course, humans. And human equivalence is shown as easily achievable by 2020. That’s less than seven years from now, folks. Moore’s Law is a fine predictor (actually a milestone-setting device for Intel) of computing speed, but jumping over to animal equivalence forces mistaken conclusions from everyone but the computational biologists amongst us. Way’s point, based on the chart, is that robots will do everything humans can by 2020, and cheaply. For this conclusion the chart lends no support. Yes, singularists will argue that as soon as computers are fast enough, they will also be smart enough to design their own evolutionary successors, and this runaway chain reaction will yield so much intelligence that super-intelligent computers can then do what we humans have not been able to do: fully emulate a human being. But that is an indirect argument, and today it is mostly an article of faith.
In literal terms, raw computing speed simply does not approach humanity. Moore’s Leak happens when we use Moore’s Law to optimistically imagine a future breakthrough that doesn’t really have anything to do with computing speed. Way predicts that robots will be cheap and capable thanks to Moore: “Within the next generation, the humanoid robots that we see in films such as I, Robot will find their way into our homes and will be able to perform almost any task more efficiently and better than any human ever could.” I disagree strongly; with a statement this strong, Way is tapping levels of actuation, hardware innovation, perception and reasoning that are more than a generation away.
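The chart’s underlying arithmetic is easy to reproduce; the fallacy lives in the labels, not the math. A minimal Python sketch makes the point (the base year, base MIPS figure, and the “human equivalence” threshold below are illustrative assumptions, not figures taken from Way’s chart):

```python
def mips_per_1000_dollars(year, base_year=2000, base_mips=1_000.0,
                          doubling_months=18):
    """Project raw computing speed (MIPS per $1000) under a
    Moore's-Law-style doubling every `doubling_months` months.
    All baseline numbers here are illustrative assumptions."""
    months = (year - base_year) * 12
    return base_mips * 2 ** (months / doubling_months)

# The doubling itself is unremarkable exponential arithmetic:
for year in (2000, 2010, 2020):
    print(year, f"{mips_per_1000_dollars(year):.3g} MIPS/$1000")

# Moore's Leak is the unsupported extra step: picking some speed
# threshold (a hypothetical figure for "human brain equivalence")
# and relabeling the moment the curve crosses it as intelligence.
HUMAN_EQUIVALENT_MIPS = 1e8  # hypothetical threshold, for illustration only
crossed = mips_per_1000_dollars(2020) >= HUMAN_EQUIVALENT_MIPS
# Even if `crossed` were True, it would say nothing about whether
# that hardware could emulate a human being.
```

The code is honest about what the chart shows: a smooth exponential in speed. Everything beyond that, from speed to brainpower, is an assertion smuggled in through the axis labels.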
One problem with Moore’s Law is that it suggests that all technology becomes cheap quickly, and yet computing speed is an outlier on this front; motors, batteries and hardware advance in fits and starts, never along an exponential pathway, because they depend on breakthroughs in materials science and chemical engineering, not on smoothly shrinking dimensional constraints on a silicon chip. So when Way says “In another twenty years, I would suspect that for sex purposes you will not know the difference between a sexbot and a human” — well, there are too many engineering, non-computational challenges impeding this one.
Way’s book evaluates robotic job replacement industry by industry, from education and defense to farming, and this vertical analysis is a useful organization; however, Moore’s Leak muddies nearly every analysis in the book. There is one additional problem with this form of futurecasting, and it comes from an oversimplification of just what humans do. Way states that the farm of the future, with robots in the employ of the farmer, looks like a “centralized security office.” The farmer becomes a manager who simply watches robots do all the farmwork. I think part of the problem here is a misunderstanding of just what farmers do. Yes, they drive tractors and harvesters, and those have recently been automated. But the farmers I know also diagnose messy hay baler failures, problem-solve the intermittent breaker on their milk house vacuum pump, shovel a whole lot of hay into the barn and a whole lot of something else out, hand-feed the heifer, deal with deliveries, argue with hunters, overhaul the tractor transmission, et cetera.
Automation has a serious effect on the dynamics of employment, absolutely. But robots will not be perfect anytime soon, and, more importantly, robots need not become perfect to force ever-greater underemployment; mediocre technological advances are well capable of that on their own. Please try not to embrace Moore’s Leak when you extrapolate from Moore’s Law.