Let Them Make Robots

Thanks to Saman Amirpour for pointing me to a new IEEE story by Tekla Perry: Robotics Company Prepares to Take Responsibility for Displaced Workers. Perry explains that Momentum Machines is busy automating the preparation of hamburgers. The machine tops out at 6 burgers per minute, which I think outdoes the needs of even the most popular In-N-Out in San Francisco. Perry explains that Momentum recognizes its machines will eliminate jobs when existing restaurants adopt them. I have lectured in the past about Employment Impact Assessments, and the lack thereof when automation changes the employment landscape at a company. But the solution offered by Momentum is tone-deaf in so many ways. They specifically offer discounted technical training to the line cooks who are displaced. This seems like an idea born of startup brainstorming rather than of ethnographic study of line cooks' needs, of understanding just how pressured they are, day to day, between low-paying jobs and debt.

Then there is the fact that Momentum plans to build its own restaurants. The line cooks it will not hire are lost job opportunities rather than actual humans with pink slips in hand, so the displacement is easy to overlook. In the end, automation paves many paths to job loss and poverty, and I believe it is doing so more rapidly than it optimistically “unleashes job innovation.” Providing technical training is, obviously, a move to be applauded. But in this case it is a rhetorical move that offers no structural solution to the basic problem: Momentum is spending millions of dollars to make machines that do the work of many employees whose paychecks are critical to their quality of life.

Mediocracy: It’s been here for years.

The Guardian’s Alex Hern writes about the latest Internet behavior-experimentation drama, this time involving OKCupid. It turns out they, too, experiment on users: the rating you see is often accurate, and sometimes a lie, just to see what happens. I particularly enjoyed the rhetorical response from OKCupid itself (another example of value hierarchy, for the rhetoricians amongst you), according to co-founder Christian Rudder:

“if you use the internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work.”

Yes, indeed. The Internet gives us value, and tests us. It maximizes derived value for data owners, and if that means feeding us false information to build more valuable, more accurate behavioral models, well, that is a trade companies will happily make, so long as we keep visiting. How would we feel if Safeway or Whole Foods did this? Just imagine: what if they sold us ground beef that actually contained horse meat? I wonder how that would go down.

The trick with mediocracy, critically, is that it is a one-way path. It was easy to have drones that do surveillance and to argue that they will never, ever be armed. Yet decades later, drones carry weapons. The thresholds are crossed with some flapping of wings, but the squawks die down and the new becomes the new normal. So the Internet involves experimentation. It will not tend toward greater honesty and greater transparency, not automatically, and not by the nature of the economic logic of corporations. That I can promise you.

Alone With Silicon

In this past Sunday’s Review section of the New York Times, Louise Aronson, an associate professor of geriatrics at UCSF, writes about robot caregiving in The Future of Robot Caregivers. This short article is well worth a read because it offers a geriatrician’s-eye view of the role of robots in home care. Aronson points out that many older people are exceedingly alone at home, and she proposes robot caregiving as the solution: for safety (emergency response), for chores (folding your laundry, cleaning the bathroom), and for emotional companionship (conversation). The article is one compelling view, but there are alternatives worthy of discussion. It is easy, after all, to make a value hierarchy argument, as Aronson does:

But most of us do not live in an ideal world, and a reliable robot may be better than an unreliable or abusive person, or than no one at all.

Indeed, abusive people are terrible. But the interesting question is: what is the right role for robotics in the home? Should robotic technologies melt into the walls, providing a smart home that is safe and responsive, or should robots take tangible form in the home, depending on social expectations to engage the lonely in artificial conversation? And just how artificially emotional do we allow these conversations to become?

Sherry Turkle is indeed good reading on this subject, as Aronson points out. But in quoting Turkle on Paro, Aronson does not quite convey Turkle’s key thesis: that these relationships between robots and humans are wholly artificial, that they rely on forms of deception that convince the human of a depth that simply isn’t there, and that, in Turkle’s opinion, this is seriously questionable from an ethical point of view.

My own point of view comes down further on the smart-home side than the android side. As for companionship, while I realize there aren’t enough caregivers, I would remind the reader that the U.S. is not Japan. We have massive, chronic underemployment. We ought to spend real thinking energy figuring out how to turn caregiving into a viable service economy that engages our idled human populace before we replace the potential for that work with robotic androids.

Jibo goes public

I was interviewed last week about Cynthia Breazeal’s new company, Jibo, and the press release just came off embargo. So the stories are out as of 9 this morning.

The videos, such as the one at the NYT, are an interesting study in the idea of presenting technology as our social partner. One of the most interesting things about the design Breazeal has gone after is that it minimizes what is really tough in robotics (batteries and legs or wheels), focusing instead on a tabletop device with actuation that is elegantly simple, yet capable of displaying some emotion through physical change.

I am going to be very interested to see how the blogosphere analyzes the robot, both from the perspective of privacy and from that of sociality and humanity (think Her for one extreme view).