In The New Yorker two weeks ago, Gary Marcus wrote an article, Moral Machines, about the impending need for moral decision-making in both driverless vehicles and autonomous warcraft that make lethal decisions. He pushes a refrain often heard in the world of robotic weaponry–that one day, just maybe, robotic weapons might be more even-handed, and therefore more ethical, than human soldiers–and extends the concept, through some interviews, to cars.

While it is true that cars can be essentially lethal weapons (and thus we become quite willing to restrict their use when, for instance, the driver is inebriated), the landmines that Marcus identifies in this line of reasoning are very real: neither the rules of war nor the rules of driving are perfectly encodable in a logic system, and human intuition plays a crucial role whenever we operate at the boundary cases. And, of course, boundary cases often demand very fast reactions, so deferring just at that moment to the human occupant–who happens to be busily perusing The New Yorker instead of driving–is simply not practical.

Nevertheless, Marcus proposes that we are two to three decades away from computers that drive more safely than humans. My issue is that, as an artificial intelligence engineer, I am afraid that estimate might be way off. We may be safer in a great many situations–statistically. And yet, when strange boundary cases trigger autonomous cars to crash where ordinary humans never would, it will be quite extraordinarily unfair, even though it may be safer when rationalized at a statistical level. Robot Futures supposes that these inevitable boundary cases are precisely what make early robot artifacts in our social world surprisingly mediocre compared to the giddy, perfect instances of autonomy we see in so many science fiction movies. Tighten your seat belts.