Mediocracy: It’s been here for years.

The Guardian's Alex Hern writes about the latest Internet behavior experimentation drama, this time involving OKCupid. It turns out they, too, experiment on users. The rating you see is often accurate, and sometimes a lie, just to see what happens. I particularly enjoyed OKCupid's rhetorical response (another example of a value hierarchy argument, for the rhetoricians amongst you), delivered by co-founder Christian Rudder:

“if you use the internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work.”

Yes, indeed. The Internet gives us value, and tests us. It maximizes derived value for data owners, and if that means feeding us false information to build more valuable, more accurate behavioral models, well, that's a trade companies will happily make, so long as we keep visiting. How would we feel if Safeway or Whole Foods did this? Just imagine: what if they sold us ground beef that actually contained horse meat? I wonder how that would go down.

The trick with mediocracy, critically, is that it is a one-way path. It was easy to deploy drones that do surveillance and argue that they would never, ever be armed. Yet decades later, drones carry weapons. The thresholds are crossed, with some flapping of wings, but the squawks die down and the novel becomes the new normal. So the Internet involves experimentation on its users. It will not tend towards greater honesty and greater transparency, not automatically, and not by the nature of the economic logic of corporations. That much I can promise you.

Alone With Silicon

In this past Sunday's Review section of the New York Times, Louise Aronson, an associate professor of geriatrics at UCSF, writes about robot caregiving in The Future of Robot Caregivers. This short article is well worth a read because it's a geriatrician's-eye view of the role of robots in home care. Aronson points out that many older adults are exceedingly alone at home, and that robot caregiving is a solution to this problem: for safety (emergency response), for chores (folding your laundry, cleaning the bathroom), and for emotional companionship (conversation). The article is one compelling view, but there are alternatives worthy of discussion. It is easy, after all, to make a value hierarchy argument, as Aronson does:

But most of us do not live in an ideal world, and a reliable robot may be better than an unreliable or abusive person, or than no one at all.

Indeed, abusive people are terrible. But the interesting question is: what is the right role for robotics in the home? Should robotic technologies melt into the walls, providing a smart home that is safe and responsive, or should robots take a tangible form in the home, depending on social expectations to engage the lonely in artificial conversation? And just how artificially emotional do we allow these conversations to become?

Sherry Turkle is indeed good reading on this subject, as Aronson points out. But in quoting Turkle regarding Paro, Aronson did not quite convey Turkle's key thesis: that these relationships between robots and humans are wholly artificial, that they employ forms of deception to convince the human of a depth that simply isn't there, and that, in Turkle's opinion, this is seriously questionable from an ethical point of view.

My own point of view comes down further on the smart home side than the android side. As for companionship, while I realize that there aren't enough caregivers, I would remind the reader that the U.S. is not Japan. We have massive, chronic underemployment. We ought to spend real thinking energy figuring out how to turn caregiving into a sufficiently viable service economy to employ our idled human populace before we replace the potential for their work with robotic androids.

Jibo goes public

I was interviewed last week about Cynthia Breazeal’s new company, Jibo, and the press release just came off embargo. So the stories are out as of 9 this morning.

The video (the one at the NYT, for example) is an interesting study in presenting technology as our social partner. One of the most interesting aspects of the design Breazeal has pursued is that it minimizes what is really tough in robotics (batteries and legs/wheels) by focusing instead on a tabletop device, with actuation that is elegantly simple yet capable of displaying some emotion through physical change.

I am going to be very interested to see how the blogosphere analyzes the robot, both from the perspective of privacy and from the perspective of sociality and humanity (think of the film Her for one extreme perspective).

Facing Mediocracy

In Robot Futures I write about how computer algorithms can observe our behavior, then experiment on a massive scale with customized signals to each of us, to see just how we respond to each stimulus and, over time, to build a model with enough fidelity to approximately 'remote-control' individuals. I called this form of manipulation mediocracy: control by media rather than by people. Sounds far-fetched? The news machine is helping us all see the thin edge of the mediocracy wedge arriving, thanks to Facebook, Cornell and UCSF. Many know the details of the story now: Facebook did human-subjects research by manipulating the emotional content of users' newsfeeds, then studying how this affected the emotional content of each user's posts.
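
For the technically inclined, here is a minimal sketch of the loop I am describing, assuming a simple epsilon-greedy bandit as the learning machinery. The stimulus names and response numbers are invented for illustration; real systems are vastly more sophisticated:

    import random
    from collections import defaultdict

    # Per-user experimentation loop: try customized stimuli, observe the
    # response, and converge on whichever stimulus best steers each person.
    STIMULI = ["upbeat_feed", "anxious_feed", "neutral_feed"]  # hypothetical signals
    EPSILON = 0.1  # fraction of the time the system keeps experimenting

    counts = defaultdict(lambda: {s: 0 for s in STIMULI})
    values = defaultdict(lambda: {s: 0.0 for s in STIMULI})  # per-user model

    def choose_stimulus(user):
        """Mostly exploit the best-known stimulus; occasionally explore."""
        if random.random() < EPSILON:
            return random.choice(STIMULI)
        return max(STIMULI, key=lambda s: values[user][s])

    def record_response(user, stimulus, response):
        """Fold the observed response (0 to 1) into a running average."""
        counts[user][stimulus] += 1
        n = counts[user][stimulus]
        values[user][stimulus] += (response - values[user][stimulus]) / n

    # Simulated use: after enough rounds, the system mostly serves whichever
    # signal this particular person responds to most strongly.
    for _ in range(1000):
        s = choose_stimulus("user_42")
        true_pull = {"upbeat_feed": 0.2, "anxious_feed": 0.6, "neutral_feed": 0.3}[s]
        record_response("user_42", s, true_pull + random.uniform(-0.1, 0.1))

Run long enough, the exploration fades into the background and the system mostly serves whichever signal moved that particular person the most. That convergence, multiplied across millions of users, is the 'remote control.'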

The most interesting analysis of this case that I have read so far is by Adrienne LaFrance of The Atlantic. Her story zeroes in on the ethical question of Institutional Review Board (IRB) regulation. We do IRB-approved research all the time, and what I find interesting about this case is the notion that an IRB could have approved this particular study without the informed consent it obviously ought to require. Generally, any situation in which one wishes to manipulate a person's inputs requires definitively telling them that you are doing a study, explaining that they can opt out, and then asking their permission. LaFrance's story cuts to this issue, and to the role, or non-role, of three institutions in thinking through the ethics of a technological manipulation.

The ability of corporations to collect massive data, cut through that data using machine learning techniques, and present manipulations back to us will only grow over the years. As we stumble upon evidence of such manipulation, expect the ethical questions to become only more complex.