Great program on the downstream consequences of driverless cars…from
Here is an interesting Guardian story by Paul Lewis about how many of the technologists who helped design addictive features into our apps and phones are now turning away, realizing the many erosions in quality of life induced by their own innovations.
Thanks to my colleague for calling out this Washington Post article about Russian Facebook ads explicitly designed to render informed choice less effective in our democracy. In Mediocracy I wrote about the power of behavioral analytics plus advertising to usurp our agency and decision-making process. This is yet another example of that already happening:
We just published a brief article on the health of our cities and the usefulness of interactive visualization:
Classical marketing strategies call for a deep understanding of customers—deep enough to enable subtle changes in communication design that lead to modified customer behavior. In other words, companies have spent many decades stretching to find the right buttons to push in order to transform captive audiences into loyal purchasers. From sugar and salt in snack products to co-location of the right complementary goods in a supermarket, these classic approaches appeal to our near-Pavlovian tendency to find elegantly simple lines of causality from stimulus to response.
AI changes the rules of this game because, rather than evaluating large classes of customer archetypes (e.g. white male, forties, middle class), corporations can collect individual behavioral data from every consumer, and turn AI loose to experimentally arrive at just the right stimulus for each and every mark. The very notion that AI can turn data into money has, in turn, driven a massive inflation in the value of collected consumer behavior, catalyzing dozens of companies to position themselves explicitly as gatherers of human behavior across the Internet. Brick-and-mortar stores are now in on the very same land grab; your supermarket loyalty card generates revenue by empowering the supermarket to sell and resell your behavior for far more profit than is lost in the item discounts given back to you. The fundamental trade that AI requires in order to convert past behavior into future purchasing is individual loss of privacy. Yet studies in the wild have consistently shown that we will often trade away long-term rights, such as privacy and security, in the interests of short-term gains, such as convenience and monetary savings.
Yet AI as behavioral analyst is, at least initially, relatively passive. It watches, senses patterns, and learns how to understand consumer choice. The choice still resides with the consumer, albeit modified by bespoke stimuli that may be very hard to counteract. AI is the observer, the understander; it is a cognitive agent building a representational schema of each consumer.
But what happens when AI as observer is supplanted by AI as actor? When AI has observed us long enough, and when its model of our future behavior is highly accurate, will it be able to make our future decisions for us, saving us the time and effort required to decide? Gmail's Smart Reply feature, introduced in 2017, operates at the new boundary of just such decision systems. This learning-driven AI feature reads and parses the user's incoming email with sufficient fidelity to propose three alternative responses. Instead of actually replying to an email by writing a response, the Gmail user is offered the chance to simply click on AI-written responses, designed to suit the writing style of the user in the context of that particular sender. Consider the figure below, a screenshot from an actual Gmail interaction in May 2017:
Gmail's AI agent has learned that the user often writes a lowercase 'i,' and so the third suggested alternative is an informal response that tries to mimic the user's colloquial habits. These suggested responses interrupt the user's process of creating original messages, providing real savings in time as a trade for good-enough phrasing. But such reply options directly blur the boundary between human writer and autonomous reflex; the receiver of the message has no way of knowing that Gmail, and not her friend, actually wrote the message in question.
AI-based replies modify human-human connection in both directions: the message receiver becomes insulated and aliased from the sender, because the authenticity of any message becomes ever more questionable; and the message sender loses a fragment of their direct agency in the creation of a de novo message as a speech act.
Technophiles will respond that, as the AI-based observer becomes more capable of exact mimicry, the automatic response will match the intended response so well that authenticity is preserved. This attitude, a technological indoctrination into loss of personal control, misses the fact that such AI learning operates well only in the most nominal of spaces; give it a boundary case, and its suggestions will betray its true identity immediately. Below is another Gmail Smart Reply screenshot from the very same day, but this time responding to a spam scheme:
Here, Google's AI has managed to misconstrue the message entirely, replacing an aware and cynical human reader with a naïve robot that is truly clueless. The greatest irony, of course, lies in the fact that the message itself is created, not by a human Gina, but by a computer algorithm that is itself informed by an AI decision system. Two human simulacra, each operating in the space of a greatly oversimplified conception of humanity, engage in a vapid dialogue that exists only to lead a human mark to give up money: hook, line and sinker. It is worth wondering how written communication will evolve as the space of human writers is further hybridized by AI-based authors that appear, at varying levels of fidelity, to be operating as first-class members of society. If such AI systems effectively change human discourse, then AI has become the tail that wags the dog.
I participated in a series of interviews at the World Economic Forum on robotics and underemployment, and the WEF group has now published the interview here:
I just participated in a workshop on teaching Ethics and AI at the AAAI 2017 conference, full of students and faculty from universities all around. What was different, and striking, is that the audience was peppered with journalists assigned to cover even a professional AI workshop. In my last twenty years I have never seen this sort of coverage, and it is indicative of just how top-of-mind AI has become in the consciousness of the press and the public. This is a good thing, I will argue, so long as public discourse really gets at the social ramifications of these new technologies. In any case, two articles are already out on our little workshop, one in Fortune, the other in The Register.