Alone With Silicon

In this past Sunday's Review section of the New York Times, Louise Aronson, an associate professor of geriatrics at UCSF, writes about robot caregiving in The Futures of Robot Caregivers. This short article is well worth a read because it is a geriatrician's-eye view of the role of robots in home care. Aronson points out that many elderly people are exceedingly alone at home, and proposes robot caregiving as the solution: for safety (emergency response), for chores (fold your laundry, clean the bathroom), and for emotional companionship (conversation). The article presents one compelling view, but there are alternatives worthy of discussion. It is easy, after all, to make a value-hierarchy argument, as Aronson does:

But most of us do not live in an ideal world, and a reliable robot may be better than an unreliable or abusive person, or than no one at all.

Indeed, abusive people are terrible. But the interesting question is: what is the right role for robotics in the home? Should robotic technologies melt into the walls, providing a smart home that is safe and responsive, or should robots take a tangible form in the home, depending on social expectations to engage the lonely in artificial conversation? And just how artificially emotional do we allow these conversations to become?

Sherry Turkle is indeed good reading on this subject, as Aronson points out. But in quoting Turkle regarding Paro, Aronson did not quite point out Turkle’s key thesis: that these relationships between robots and humans are wholly artificial, that they make use of forms of deception that convince the human of a depth that simply isn’t there, and that, in Turkle’s opinion, this is seriously questionable from an ethical point of view.

My own point of view comes down further on the smart-home side than the android side. As for companionship, while I realize that there aren't enough caregivers, I would remind the reader that the U.S. is not Japan: we have massive, chronic underemployment. We ought to spend real thinking energy figuring out how to turn caregiving into a sufficiently viable service economy for our idled human populace before we replace the potential for their work with robotic androids.

 

Jibo goes public

I was interviewed last week about Cynthia Breazeal’s new company, Jibo, and the press release just came off embargo. So the stories are out as of 9 this morning.

The video, such as the one accompanying the NYT story, is an interesting study in the idea of presenting technology as our social partner. One of the most interesting design choices Breazeal has made is to minimize what is really tough in robotics: batteries and legs/wheels. Instead she focuses on a tabletop device, implementing actuation that is elegantly simple yet capable of displaying some emotion through physical change.

I am going to be very interested to see how the blogosphere analyzes the robot, both from the perspective of privacy and from the perspective of sociality and humanity (think Her for one extreme perspective).

Facing Mediocracy

In Robot Futures I write about how computer algorithms can observe our behavior, then experiment on a massive scale with customized signals to each of us, to see just how we respond to each stimulus and, over time, to build a model with enough fidelity to approximately 'remote-control' individuals. I called this form of manipulation mediocracy: control by media rather than by people. Sounds far-fetched? The news machine is helping us all see the thin edge of the mediocracy wedge arriving, thanks to Facebook, Cornell and UCSF. Many know the details of the story by now: Facebook did human-subject research by manipulating the emotional content of users' newsfeeds, then studying how this affected the emotional content of each user's posts.
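The observe-experiment-model loop described above can be sketched, very roughly, as a per-user experiment. This is a toy illustration under my own assumptions (the class name, the stimuli, and the epsilon-greedy update rule are all hypothetical, not anything Facebook or any researcher has published):

```python
import random

class StimulusExperimenter:
    """Toy sketch of the loop: for each user, try customized stimuli,
    observe the response, and refine a per-user estimate of which
    stimulus moves that user most (a simple epsilon-greedy bandit)."""

    def __init__(self, stimuli, epsilon=0.1, seed=0):
        self.stimuli = stimuli
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        # per-user table: stimulus -> (running mean response, count)
        self.estimates = {}

    def choose(self, user):
        table = self.estimates.setdefault(
            user, {s: (0.0, 0) for s in self.stimuli})
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.stimuli)       # explore a new signal
        return max(table, key=lambda s: table[s][0])   # exploit best guess

    def observe(self, user, stimulus, response):
        mean, n = self.estimates[user][stimulus]
        n += 1
        # incremental running-average update of the per-user model
        self.estimates[user][stimulus] = (mean + (response - mean) / n, n)
```

After enough rounds of choose/observe per user, the table converges on whichever signal that particular person reacts to most strongly — which is exactly why the scale of the experimentation, not any single trial, is what matters.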

The most interesting analysis of this case that I have read so far is by Adrienne LaFrance of The Atlantic. Her story zeroes in on the ethical question of Institutional Review Board (IRB) regulation. We do IRB-approved research all the time, and what I find interesting about this case is that an IRB could have approved this particular study without the informed consent it obviously ought to require. Generally, any situation in which one wishes to manipulate a person's inputs requires definitively telling them that you are doing a study, explaining that they can opt out, and then asking their permission. LaFrance's story cuts to this issue, and to the role, or non-role, of three institutions in thinking through the ethics of a technological manipulation.

The ability of corporations to collect massive data, cut through that data using machine learning techniques, and present manipulations back to us will only grow over the years. As we happen to stumble upon evidence of such manipulation, expect the ethical case to become only more complex.

 

Bridging the digital and physical at home

In interviews, I often talk about the desire of digital property owners to cross over into the physical world, since it represents a fabulous new, uncharted territory. Robotic sensing and robotic actuation provide the means for that digital-physical jump, and the Nest thermostat is one such example, with sound and pyroelectric sensors that can detect whether or not you are home and active. With the Nest purchase, Google has taken on new physical information in the home, and the Wall Street Journal's Rolfe Winkler and Alistair Barr report today on Nest's announcement that it will share occupant-status information with Google's services, and with third-party companies as well. The upside of this fluid connection between your home movements, home sensors, internet-connected devices planted on your dining room wall, and Google's broad array of services has to do with convenience: your Android phone tells your house that you're unexpectedly heading home, so the thermostat turns on; your garage notices you have left, so the door you accidentally left open closes; your phone is off but you are home when you need to be at a meeting, so your house emails you and gets you out the door.

But of course, all this convenience comes along with an interesting profit-motive question: how can the behavioral information itself, or the convenience afforded by such interconnections, transform data across millions of households into revenue? Time will tell just how digital-physical bridges become moneymaking mints, and I guarantee some of the smartest minds are working on that.
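The conveniences described above boil down to simple condition-action triggers over occupancy signals. A minimal sketch, assuming entirely hypothetical state fields and rule wording (this is not Nest's or Google's actual API):

```python
from dataclasses import dataclass

@dataclass
class HomeState:
    # Hypothetical fields standing in for Nest-style occupancy signals.
    occupant_home: bool    # sensors say someone is home and active
    heading_home: bool     # phone reports an unexpected trip home
    garage_open: bool      # garage door sensor
    meeting_soon: bool     # calendar says a meeting starts shortly

def decide_actions(state: HomeState) -> list:
    """Evaluate the convenience rules and return the triggered actions."""
    actions = []
    if state.heading_home and not state.occupant_home:
        actions.append("turn thermostat on")   # pre-heat for arrival
    if state.garage_open and not state.occupant_home:
        actions.append("close garage door")    # left open on departure
    if state.occupant_home and state.meeting_soon:
        actions.append("send reminder email")  # nudge the occupant out
    return actions
```

The point of the sketch is how little logic the convenience requires; the hard part, and the profit-motive part, is the stream of occupancy data feeding it.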