In today’s New York Times, Daniel Simons and Christopher Chabris write about the safety of heads-up displays such as Google Glass in “Is Google Glass Dangerous?” The article notes that one motivation for a heads-up display is to stop us all from looking down at a smartphone screen while walking (or running, or driving, or eating dinner with a friend). Simons and Chabris correctly note that a distraction is a distraction, wherever it appears in the field of view, and that future studies must examine how we feed humans side information without endangering their main-line actions or causing too great a cognitive disruption. All fair statements, but the authors go further, warning that heads-up displays in airplanes can even cause crashes, and referencing the famous gorilla-suit “inattentional blindness” experiments, which have shown in many flavors that people ignore what their brain considers irrelevant detail (yes, sometimes gorillas are essentially irrelevant; sorry).
Human factors researchers study situation awareness and visual input with great care for operators of all sorts of machinery, including pilots. When we learn to pilot airplanes, numerous lessons concentrate on establishing a scan pattern: cognitively registering every instrument reading and the visual windscreen view, at the right speed, with the right minimum time between visual scans. So if we are all going to pilot our bodies while receiving real-time visual input via Glass, will we all take similar classes to refine our visual-cognitive behavior? I will argue no on that one. Instead, we are going to have a new crutch: intelligent machine interaction.

Take Glass and put it on the automobile driver. Dangerous? What if the car drives itself? Now it is less dangerous, although some pathological cases are still quite scary. Whether or not Glass is a hit, we are going to surface ever more non-local information to our brains through our devices. In Attention Dilution Disorder, I argue that this will make us both broader and shallower, just as Sherry Turkle and others have argued. The difference is that we will also start handing off some of our decision-making and action authority to our intelligent systems, from cars to Now accounts that manage our restaurant reservations, doctor’s appointments and, eventually, superficial conversations.

So don’t think of Google Glass in a vacuum. Consider a future in which the CEO of Me, Inc. takes hold: you will be less a first-person actor and more an information-rich manager of many AI systems. You will pilot not just your body but your persona, both online and in the real world. I wonder whether these future trips will be as authentic and enjoyable as today’s travels.