The final chapter of Robot Futures suggests that robotic technologies at their very best can help us be more human: they can bring deeper empowerment to our communities, enabling us to make smarter choices for the future. Just this morning TIME released their Timelapse story, which shows change across the earth, natural and man-made, over the past thirty years at a level of detail that was unthinkable before. TIME has done a fantastic job in researching the stories behind these moving, explorable images, then telling each story in enough detail to give the reader an eye for interactively exploring the data set themselves. The image content comes from NASA and USGS, and the heavy computational lifting is possible because of Google. The CREATE Lab had a role in creating the interface for zooming and exploring the massive data in time and space. Some related links that I’m happy to share:
TIME’s site, now live
Google Earth Engine
Our GigaPan Time Machine demonstration site
David Streitfeld writes in today’s New York Times about early privacy debates concerning Google Glass. Devices that make photography and videography essentially effortless change the boundaries between public capture and assumptions of privacy through transience- surely no one is capturing everything I’m doing? Streitfeld’s article rightly points out that new technology can test existing legal frameworks in unforeseen ways, in this case challenging First Amendment and fair use intuitions by unleashing new, slightly more uncomfortable scenarios. The article notes that one developer has enabled Google Glass to snap pictures with the blink of an eye rather than the more outwardly obvious tap of the eyeglass frame.

I would like to add a bit of Robot Futures-style analysis to the debate. First, note that many cities already have camera networks that record essentially everything that citizens do outdoors. Wearable cameras give such infrastructure a more tangible face, and move the power relationship from the state to individuals- from the uncontrollable to the unpredictable. Of course all such cameras only increase in resolution; what is hidden in the fog of lens limits today will be revealed tomorrow, and so the saturation of our spaces by recording devices will be a one-way march.

The second point I want to make is that automation, machine vision and AI push us away from user-gestured snapshots and toward capturing everything, all the time, effortlessly, because technology promises, one day, to make retrieval easy, even from a nearly infinite mountain of recorded data. This trend further changes our relationship to photography: what does it mean when we don’t actively choose what to frame and when to shoot, instead trusting that, since everything is captured, we can always retrieve any particular shot we could have taken? We become less active because technology promises us convenience over decision-making. The cameras are here, already, and there will be vibrant debates regarding privacy and fair use. But no matter where these legal policies settle, the road we are on seems to lead toward machine autonomy in lieu of individual empowerment.
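To make the capture-everything trend concrete, here is a minimal sketch of that inversion: a camera that records continuously and lets you retrieve, after the fact, any shot you never chose to take. Everything here- the ContinuousCapture class, the in-memory archive- is a hypothetical illustration, not any real Glass API.

```python
import time

class ContinuousCapture:
    """Hypothetical always-on wearable camera: every frame kept, indexed by time."""

    def __init__(self, camera):
        self.camera = camera      # any object with a read() method returning a frame
        self.archive = []         # (timestamp, frame) pairs; in practice, disk or cloud

    def run(self, duration_s):
        """Record everything for duration_s seconds, with no user gesture at all."""
        end = time.time() + duration_s
        while time.time() < end:
            self.archive.append((time.time(), self.camera.read()))

    def retrieve(self, t):
        """Return the frame nearest to time t: the shot you 'could have taken'."""
        return min(self.archive, key=lambda pair: abs(pair[0] - t))[1]
```

Note where the human judgment sits in this sketch: not at capture time, but in a retrieval query after the fact.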
IEEE Spectrum’s Ariel Bleicher reports, in Rise of the Eye Phones, about the incorporation of gaze tracking in future smartphones. LG and Samsung have both announced features such as gaze-based scrolling and video pause/resume. But the key line in Bleicher’s article is: “…analysts and researchers say their products only scratch the surface of what’s possible.” Outstanding interfaces await, where people seamlessly browse information without losing their train of thought. That’s the good bit. On the other side, as pointed out succinctly by John Villasenor of the Brookings Institution, there’s the question of just how much information about your attention is collected by all your devices, and how corporations will use that behavioral data to model you and your responses to every stimulus. As I describe in New Mediocracy, interactions will be customized uniquely to each and every one of us as machines learn to read our every gesture and as giant cloud-based learning systems become ever more adept at collecting all that massive data, then mining it to reveal actionable marketing intelligence.
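For a sense of how simple the first generation of these features can be, here is a minimal sketch of gaze-based scrolling, assuming a hypothetical tracker that reports where on the screen you are looking and for how long; none of this is LG’s or Samsung’s actual implementation.

```python
SCREEN_HEIGHT = 1920   # pixels; assumed portrait phone display
SCROLL_ZONE = 0.15     # gazing within the top/bottom 15% of the screen scrolls

def scroll_step(gaze_y, dwell_s):
    """Map a gaze position, and how long it has dwelled there, to a scroll delta."""
    if dwell_s < 0.3:                               # ignore brief glances
        return 0
    if gaze_y > SCREEN_HEIGHT * (1 - SCROLL_ZONE):  # reading near the bottom
        return +40                                  # scroll the page down
    if gaze_y < SCREEN_HEIGHT * SCROLL_ZONE:        # looking back near the top
        return -40                                  # scroll the page up
    return 0
```

The privacy point follows directly: the very stream of gaze positions and dwell times that drives the scrolling is also a moment-by-moment record of exactly what held your attention.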
This week Assemble Pittsburgh will host an evening event where we will discuss Robot Futures and current events in technology and empowerment. All are welcome, and information about the event is available at Assemble’s website. I will also have conversations about Robot Futures in Moscow and Philadelphia in June; I will post information about those as it becomes available.
Rebecca Morelle at the BBC reports on a new first from Defense Distributed and Cody Wilson, ironically a law student at the University of Texas at Austin, in Working Gun Made with 3D Printer. The idea of a printed firearm has been a trope in the press and in the technorati conversation for some time, and it was only a matter of time until someone claimed the mantle with their very own invention. The press has reported widely on the milestone, and the expert responses it has elicited are mostly superficial early reactions: it’s harder to make one than to buy one today; 3D printed guns will only come down in price and will proliferate; we need to ban these now; and of course this is impossible to ban effectively because the cat’s forever out of the bag.
There are a few lessons I think we ought to take away, both from the fact that Wilson made this gun and from the ecology of responses in the blogosphere. First of all, I see this not as a turning point but as early evidence of a frontal mass of robot smog, as I label it in Robot Futures. Guns are provocative because they are designed to do damage, and indeed Wilson intends maximum provocation in this case. But the number of people who will use plastic 3D guns rather than the conventional variety to kill people will be small for the time being. Note that Wilson is a law student- not a dedicated engineer- and this is a double whammy worth considering. Everyone can become an inventor, and their knowledge of law and society will not necessarily give them pause. The power of 3D printing isn’t in printing guns; it’s in printing anything – specifically including what you haven’t considered yet, and what you think no one will have the temerity to really build. The unintended side effects are what I find fascinating because, with good intentions, many will enter a complex ethical and legal landscape. When someone publishes the designs for a steering wheel dial that helps a Parkinson’s patient drive, and when that dial fractures and crashes happen because the plastic is just not an appropriate material, where are all the lines of accountability? A plastic gun is not very resilient, but it can be extremely light. When a super-lightweight gun that can fire no more than a single bullet is mated to a tiny flying drone, we open up a new frame of reference for remote-control violence.
When we respond to new technologies by simply imagining the replacement of existing devices with newly manufactured ones, we are not using our imagination nearly well enough. It is the new categories of devices born of new techniques that will threaten our understanding of law and ethics far more comprehensively- that is where our attention needs to lie. Reactive laws that ban specific devices, always after they’ve been demonstrated, do nothing to provide us with appropriate trajectories forward.
Salvador Rodriguez writes today in the LA Times about the release of Google Now for iPhone and iPad, following on the heels of its rollout last year for Android. The central idea in Google Now hasn’t changed: the software considers your context, from schedule information to GPS coordinates and recent actions, and uses that context to decide what information to provide to you, proactively. That information can be pure data, such as a weather forecast, or it can be more active, such as telling you that it’s time to end the meeting, walk out the door, and catch the bus that will arrive in eight minutes, because otherwise you will be late for the party downtown. Of course it offers to call up your next appointment for you, and to modify your schedule, too.
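A toy version of one such contextual rule makes the mechanics plain: combine the next calendar event with an estimated travel time, and decide when to interrupt the user. All names and numbers below are illustrative- a minimal sketch, not Google Now’s actual logic.

```python
from datetime import datetime, timedelta

def time_to_leave(event_start, travel, cushion=timedelta(minutes=5)):
    """The moment the user should walk out the door."""
    return event_start - travel - cushion

def maybe_alert(now, event_start, travel):
    """Interrupt proactively once the departure deadline arrives."""
    if now >= time_to_leave(event_start, travel):
        return f"Leave now to make your {event_start:%H:%M} appointment."
    return None   # stay quiet: silence is the default

# Example: a 19:00 party downtown, an 8-minute bus ride away.
print(maybe_alert(datetime(2013, 4, 29, 18, 47),
                  datetime(2013, 4, 29, 19, 0),
                  timedelta(minutes=8)))
```

Even this toy shows the inversion I describe below: what the system returns is not information but an imperative.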
Google Now is an example of technology at the tip of an iceberg that I try to portray in Attention Dilution Disorder. As software and robot systems track the complex constraints of our lives- locations, schedules, friendships, promises, reservations- and learn how to reason about them, the power relationship between the individual and their software ‘agents’ begins to invert. Information is the currency of today’s apps, but this will evolve into a currency of imperatives: our agents tell us when to get up, when to walk out, which way to turn, when we can slow down and order dessert and when there is no time for that. I have seen wealthy individuals who are managed by their executive assistants- there is a deep irony in the super-powerful being nearly remote-controlled by their staff. As our software becomes ever more proactive, those of us who are nowhere near that wealthy can experience the irony of these inverted relationships too- the good along with the bad.
A recent Wall Street Journal article on drones does a good job of demonstrating the clarity of drone photographs available to civilians today. Multiply that resolution by several factors to imagine just what will be possible for anyone with the wherewithal in a couple of years- a very short-term robot future. Happy visual prognosticating.