One of the oddest aspects of imagining a post-Singularity world is that we often forget how tortuous the path can be between the present-day state of technology and some techno-nirvana (or techno-hell, depending on your perspective). It is always worthwhile diving into the authentic, current state of robotics to calibrate our position on the path toward fully fledged robot androids suitable for telepresence and identity-download. The Wall Street Journal’s Geoffrey Fowler recently tested the more expensive robotic vacuum cleaners, reporting the results in “In Battle of Robot Vacuums, No Clean Sweep.” At one level, the results are humorous because they show just how far robot vacuums still have to go after more than a decade on task. They wedge under chairs, they eat shag carpet until dead, and they get tangled in wires under the couch. Robot vacuum companies told Fowler that these machines are just accessories, not really meant to replace vacuuming, and that many owners will find happier times with their robot if they think of the relationship as a collaboration: organizing their house in such a way as to compensate for the robot’s shortcomings. This raises the question: do we really want robots that fail to replace our chores and, on top of that, demand extra maintenance and attention from us? Apparently we do, as these ‘bots do sell. I have good friends who swear by their Roomba, particularly because of the dog hair their pets leave all over the wood floors. Mediocre robots may not upend society or kill Dyson and Hoover’s traditional markets. But they certainly provide a new outlet for spending time and money on technology. Robot smog is not necessarily all high-quality material.
Now for the second part of this double-header: writing for Mashable, Lance Ulanoff asks, “Can You Teach a Robot How Not To Be a Jerk?” This is a great example of the gap between the semantics of our words and the reality of a robot implementation. The researchers are attempting to make robots more pro-social: for instance, yielding under all the right circumstances to humans who want to use the elevator first. They call this line of work robot ethics, and that term carries some weighty responsibility and baggage along with it. Will these robots consider utilitarian or Kantian ethics when deciding how to relate to humans? Do they weigh issues of the greater good, and would they be able, after the fact, to justify their actions with an ethical analysis? Urm, no. The researchers are hard-coding rules: recipes that make the robot’s behavior in public less objectionable. They are turning down the Jerk dimmer knob using simple rules, and Ulanoff admits that in some cases the robot, though not ethical, fakes it better than before. The researchers did discover something ethicists would smile about: there are no hard and fast rules you can code up for a robot that universally guarantee ethical behavior. No surprise there. But the resulting solutions are all underwhelming: if we create rules for particular situations that fake good, ethical robo-behavior, where does that leave us when the robot encounters a slightly different situation that doesn’t quite map to the same underlying issues? And can the robot even tell any of this when it is simply following its rules? Robots may have quite a hard time reasoning about the boundary of their own capabilities, and that may make faking it a hard act to tame.
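To make the brittleness concrete, here is a minimal sketch of the kind of hard-coded rule table described above. Every name here is hypothetical, invented for illustration; the article does not publish the researchers’ actual code. The point it demonstrates is the one in the text: the robot does no ethical reasoning at all, it just looks up a scripted behavior, and any situation outside the table silently falls through to a default.

```python
# Hypothetical "don't be a jerk" rule table: each recognized
# (location, observation) pair maps to a scripted polite behavior.
# There is no reasoning here -- only lookup.
POLITENESS_RULES = {
    ("elevator", "human_waiting"): "yield_and_wait",
    ("hallway", "human_approaching"): "move_to_right_side",
    ("doorway", "human_carrying_load"): "back_away",
}

def choose_behavior(location: str, observation: str) -> str:
    """Return the scripted behavior for a recognized situation.

    Unrecognized situations fall through to the default, and the
    robot has no way to notice it has left the territory its rules
    were written for.
    """
    return POLITENESS_RULES.get((location, observation), "proceed_as_planned")

# Covered case: the robot "politely" yields at the elevator.
print(choose_behavior("elevator", "human_waiting"))
# Slightly different case, not in the table: the robot barrels ahead.
print(choose_behavior("elevator", "human_on_crutches"))
```

The second call is exactly the failure mode the paragraph describes: a situation that a human would treat as morally identical to the first one maps to the default behavior, and nothing in the robot signals that its rules no longer apply.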