Facing Mediocracy

In Robot Futures I write about how computer algorithms can observe our behavior, then experiment on a massive scale with customized signals to each of us, to see just how we respond to each stimulus and, over time, to build a model with enough fidelity to approximately ‘remote-control’ individuals. I called this form of manipulation mediocracy: control by media rather than by people. Sound far-fetched? The news machine is helping us all see the thin edge of the mediocracy wedge arriving, thanks to Facebook, Cornell and UCSF. Many know the details of the story now: Facebook conducted human-subject research by manipulating the emotional content of users’ newsfeeds, then studying how this affected the emotional content of each user’s posts.
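The observe-experiment-model loop described above can be made concrete with a toy sketch. This is my own illustration, not how Facebook's actual system works: a simple epsilon-greedy bandit that tries candidate stimuli on a simulated user, records the responses, and converges on whatever moves that user most. All names here (StimulusModel, the stimulus labels) are hypothetical.

```python
import random

class StimulusModel:
    """Toy per-user model: track the average response to each candidate stimulus."""
    def __init__(self, stimuli):
        self.counts = {s: 0 for s in stimuli}
        self.means = {s: 0.0 for s in stimuli}

    def choose(self, epsilon=0.1):
        # Mostly exploit the stimulus with the best observed response;
        # occasionally explore to keep refining the model.
        if random.random() < epsilon:
            return random.choice(list(self.means))
        return max(self.means, key=self.means.get)

    def observe(self, stimulus, response):
        # Incrementally update the running mean response for this stimulus.
        self.counts[stimulus] += 1
        n = self.counts[stimulus]
        self.means[stimulus] += (response - self.means[stimulus]) / n

# A simulated user who happens to respond most strongly to "outrage" content.
true_response = {"outrage": 0.9, "uplift": 0.4, "neutral": 0.1}
random.seed(0)
model = StimulusModel(list(true_response))
for _ in range(2000):
    s = model.choose()
    model.observe(s, true_response[s] + random.gauss(0, 0.1))

# After enough trials, the model has learned which signal "remote-controls" this user.
print(max(model.means, key=model.means.get))
```

The unsettling part is how little machinery this takes: a few counters per user, run at newsfeed scale.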

The most interesting analysis of this case that I have read so far is by Adrienne LaFrance of The Atlantic. Her story zeroes in on the ethical question of Review Board regulation. We do IRB-approved research all the time, and what I find interesting about this case is the concept that an IRB could have approved this particular study without the obvious informed consent it ought to require. Generally, any situation in which one wishes to manipulate a person’s inputs requires definitively telling them that you are doing a study, explaining that they can opt out, and then asking their permission. LaFrance’s story cuts to this issue, and to the role, or non-role, of three institutions in thinking through the ethics of a technological manipulation.

The ability of corporations to collect massive data, cut through that data using machine learning techniques, and present manipulations back to us will only grow over the years. As we stumble upon evidence of such manipulation, expect the ethical case to become only more complex.


Bridging the digital and physical at home

In interviews, I often talk about the desire of digital property owners to cross over into the physical world, since it represents a fabulous new, uncharted territory. Robotic sensing and robotic actuation provide the means for that digital-physical jump, and the Nest thermostat is one such example, with sound and pyroelectric sensors that can detect whether or not you are home and active. With its purchase of Nest, Google has taken on new physical information in the home, and the Wall Street Journal’s Rolfe Winkler and Alistair Barr report today on Nest’s announcement that it will share occupant status information with Google’s services, and with third-party companies as well. The upside of this fluid connection between your home movements, home sensors, internet-connected devices planted on your dining room wall, and Google’s broad array of services has to do with convenience: your Android phone tells your house that you’re unexpectedly heading home, so the thermostat turns on; your garage door notices you have left home, so the door you accidentally left open closes itself; your phone is off but you are home when you need to be at a meeting, so an email nudges you out of the house. But of course, all this convenience will come along with the interesting profit-motive question: how can the behavioral information itself, or the convenience afforded by such interconnections, transform data across millions of households into revenue? Time will tell just how digital-physical bridges become moneymaking mints, and I guarantee some of the smartest minds are working on that.
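The conveniences listed above are, at bottom, just rules over a handful of occupancy signals. A minimal sketch, assuming hypothetical signal names rather than any vendor's real API, makes the point:

```python
from dataclasses import dataclass

@dataclass
class HomeState:
    occupant_home: bool   # inferred from the thermostat's sensors (illustrative)
    heading_home: bool    # inferred from phone location (illustrative)
    garage_open: bool
    meeting_soon: bool    # pulled from a calendar (illustrative)

def automation_actions(state: HomeState) -> list:
    """Toy rule set mirroring the conveniences described above.
    Every signal and action name here is hypothetical."""
    actions = []
    if state.heading_home and not state.occupant_home:
        actions.append("turn thermostat on")
    if not state.occupant_home and state.garage_open:
        actions.append("close garage door")
    if state.occupant_home and state.meeting_soon:
        actions.append("email reminder: leave for meeting")
    return actions

# Heading home with the garage left open:
print(automation_actions(HomeState(False, True, True, False)))
# -> ['turn thermostat on', 'close garage door']
```

Notice that every rule consumes behavioral data, which is exactly the asset whose monetization the paragraph above wonders about.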


The Best of Spectrum Edition

IEEE Spectrum has published its 50th anniversary issue this month, entitled The Future We Deserve. I have enjoyed reading the entire magazine, and I see many relevant connections between its prognostications about our human-machine future and those that I discuss in Robot Futures. There are, of course, several places where IEEE Spectrum’s text demonstrates various forms of naive thinking and, as ever, it is fun to point these out. Here are my top five excerpts for your digestion:

On the future of self-driving cars, Philip Ross says that in 30 years, cars won’t even need our advice. Then he makes a Ginger-level prediction:

Accident rates will plummet, parking problems will vanish, streets will narrow, cities will bulk up, and commuting by automobile will become a mere extension of sleep, work and recreation. With no steering column and no need for a crush zone in front of the passenger compartment (after all, there aren’t going to be any crashes), car design will run wild

This reminds me of when Ginger debuted and smart people said “this two-wheeled machine will change the way cities are designed and built.” No accidents, no bumpers and new versions of cities, all in thirty years? Well, at least, once people stop flying airplanes, they’ll stop crashing, right? Oh, wait, no, the Washington Post reports that drones crash too…

Eliza Strickland writes about a very exciting aspect of robo-empowerment: intelligent, active prosthetics that compensate far more fully for missing limbs and joints. This work is outstanding, and a well-rehearsed trope emerges regarding the day when cyborg limbs become preferable to the real thing: the day when folks start chopping off their legs. On purpose:

MIT’s Hugh Herr imagines that synthetic body parts could easily become more desirable than biological parts, especially as people age. ‘You wake up at the age of 50 and your joints are stiff, but your friend has bionic limbs that he upgrades every year that make him feel like an 18-year-old,’ Herr says. ‘What would you do?’

I must say, as someone approaching fifty, the thought of switching wholly to robot legs has far less appeal than doing exactly what many do now as their joints age: going in for joint replacement, which is an interesting side story not covered by this article. After all, the real bionics we are creating are not only part-machine, part-human; the vast majority of people with them look the same as the rest of us from the outside. The technology has melted into the interior, literally replacing the joint itself while leaving the nerves and muscles untouched. This is a much likelier story of robot-human merging for the masses.

Ariel Bleicher, writing about wearable computers, pushes past the edge of computer forethought, to where computers know us better than we know ourselves. Always a fun place to end up:

And when our computers know us better than we know ourselves, they will help us to communicate better with one another. They will monitor our conversations and inform us when others are bored, inspired, hurt, grateful, or just not on the same page. They will encourage us to speak up when we are shy, …

To the sci-fi writers out there among you: I encourage you to write some outstanding, dystopian stories about just what happens when AI puppetmasters control our every communicative act. Of course, the very concept of a computer knowing me better than I know myself raises a basic philosophical question, since it is not at all clear what this would even mean. Can anything or anyone know me better than I know me?

From the home robot department, Erico Guizzo suggests just how domestic robots may become affordable:

A free app will let your robot find all the socks in the bin; a paid version will find, pair and place them in your drawer.

Oh, joy. So our homes just might become locations of the upsell, with negotiations between us, our credit cards and our robots. I can make you Beef Stroganoff for dinner. It ranks three stars. But for an extra eight dollars tonight, I can make you a vegetable ragout and souffle that ranks five stars. Or, perhaps, I actually know where you misplaced your reading glasses. For only three dollars, I’ll bring them to you and give you the new glasses-tracking app, so you never lose them again. I can give you one hint for free: they’re not in the basement. Somewhere, somebody is brimming with new home-robot-upsell opportunities, and the rest of us are rolling our eyes.



Robot Smog for breakfast, anyone?

The Washington Post’s Craig Whitlock published an article yesterday, Crashes Mount as Military Flies More Drones in the U.S., that is fascinating as a read and also a useful catalyst for looking at just how experts respond to the trope of domestic drones crashing across our country. What is interesting here is not so much the specific accident rate of drones (I suspect it will evolve, as a value, until it is not that different from piloted aircraft) but the fact that they do, indeed, crash. And that, therefore, as we massively increase the number of flying aircraft thanks to the economies of scale applied to drones, we will also massively increase air-ground accidents.

My very favorite passage explaining this forthcoming robot smog follows:

Navy officials said the drone came no closer than 40 miles to the Capitol. Jamie Cosgrove, a Navy spokeswoman, said a software anomaly prevented the drone from flying its preprogrammed route in the event of a lost satellite link. The Navy denied a request from The Post for its investigative report on the incident.

I love the use of the phrase “software anomaly.” The correct term in engineering circles is: bug. Their software is buggy. As all software is. And this is the crux of the matter: we distance the human from the control loop and replace human judgement with software, and we ought to remember that all software, every last bit of it, has bugs. Forever. So our robot smog will not simply consist of well-functioning, autonomous flying craft. It will consist of flying, autonomous and buggy craft.

Enjoy your breakfast.


Here come the pre-lethal drones

I argue that one unusual side effect of our newly forged maker culture is that anyone can make anything, and they frequently will. Of course this applies to companies too. Enter South African firm Desert Wolf, which is apparently marketing a pepper-spray-equipped drone for just about anyone to purchase. Is this a prank announcement or the real deal? That is not yet totally clear, but watch for how the pundits and press cover and interpret this move.