- From behavioral analytics to monetization
Classical marketing strategies call for a deep understanding of customers—deep enough to enable subtle changes in communication design that lead to modified customer behavior. In other words, companies have spent many decades stretching to find the right buttons to push in order to transform captive audiences into loyal purchasers. From sugar and salt in snack products to co-location of the right complementary goods in a supermarket, these classic approaches appeal to our near-Pavlovian tendency to find elegantly simple lines of causality from stimulus to response.
AI changes the rules of this game because, rather than evaluating large classes of customer archetypes (e.g. white male, forties, middle class), corporations can collect individual behavioral data from every consumer, and turn AI loose to experimentally arrive at just the right stimulus for each and every mark. The very notion that AI can turn data into money has, in turn, driven a massive inflation in the value of collected consumer behavior, catalyzing dozens of companies to position themselves explicitly as gatherers of human behavior across the Internet. Brick-and-mortar stores are now in on the very same land grab; your supermarket loyalty card generates revenue by empowering the supermarket to sell and resell your behavior for far more profit than is lost in the item discounts given back to you. The fundamental trade that AI requires in order to convert past behavior into future purchasing is an individual loss of privacy. Yet studies in the wild have consistently shown that we will often trade away long-term rights, such as privacy and security, in the interests of short-term gains, such as convenience and monetary savings.
- AI as the decision-making gamechanger
Yet AI as behavioral analyst is, at least initially, relatively passive. It watches, senses patterns, and learns how to understand consumer choice. The choice resides still with the consumer, albeit modified by bespoke stimuli that may be very hard to counteract. AI is the observer, the understander; it is a cognitive agent building a representational schema of each consumer.
But what happens when AI as observer is supplanted by AI as actor? When AI has observed us long enough, and when its model of our future behavior is highly accurate, will it be able to make our future decisions for us, saving us the time and effort required to decide? Smart Reply, a feature Gmail introduced in 2017, operates at the new boundaries of just such decision systems. This machine-learning feature reads and parses the user’s incoming email with sufficient fidelity to propose three alternative responses. Instead of actually replying to an email by writing a response, the Gmail user is offered the chance to simply click on one of the AI-written responses, each designed to suit the writing style of the user in the context of that particular sender. Consider the figure below, a screen shot from an actual Gmail interaction in May 2017:
Gmail’s AI agent has learned that the user often writes a lowercase ‘i,’ and so the third suggested alternative is an informal response that tries to mimic the user’s colloquial habits. These suggested responses interrupt the user’s process of creating original messages, providing real savings in time in exchange for good-enough phrasing. But such reply options directly blur the boundary between human writer and autonomous reflex; the receiver of the message has no way of knowing that Gmail, and not her friend, actually wrote the message in question.
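In greatly simplified form, the mechanics of such a feature can be sketched as a retrieval problem: rank a pool of previously observed replies by how similar the incoming message is to the messages that historically preceded each reply, and surface the top three. The sketch below is purely illustrative; Google’s actual Smart Reply uses learned neural sequence models, and the message/reply data here is invented for the example.

```python
# Toy sketch of a Smart Reply-style suggester (illustrative only; NOT
# Google's actual method, which uses learned neural sequence models).
# Idea: rank historical replies by how similar the new incoming message
# is to the message that originally triggered each reply.

from collections import Counter
import math

# Hypothetical history of (incoming message, reply the user actually sent).
HISTORY = [
    ("are you coming to dinner tonight", "yes i'll be there"),
    ("can we reschedule the meeting", "sure, what time works for you?"),
    ("did you get my report", "got it, thanks!"),
    ("lunch tomorrow?", "sounds good, see you then"),
    ("are you free friday evening", "yes i'll be there"),
]

def bag(text):
    """Bag-of-words representation: word -> count."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def suggest(incoming, k=3):
    """Return up to k candidate replies, ranked by similarity of the
    incoming message to each reply's historical trigger message."""
    scored = [(cosine(bag(incoming), bag(msg)), reply) for msg, reply in HISTORY]
    scored.sort(key=lambda t: -t[0])
    seen, out = set(), []          # deduplicate, preserving rank order
    for _, reply in scored:
        if reply not in seen:
            seen.add(reply)
            out.append(reply)
    return out[:k]

print(suggest("are you coming friday?"))
```

Note how the informal, lowercase replies in the invented history propagate into the suggestions: a retrieval-style system can only echo phrasings it has already seen, which is exactly how a habit like a lowercase ‘i’ ends up mimicked.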
AI-based replies modify human-human interconnections in two directions: the message receiver becomes insulated from the sender, because the authenticity of any message becomes ever more questionable; and the message sender loses a fragment of their direct agency in the creation of a de novo message as a speech act.
Technophiles will respond that, as the AI-based observer becomes more capable of exact mimicry, the automatic response will match the intended response so well that authenticity is preserved. This attitude, a technological indoctrination into loss of personal control, misses the fact that such AI learning operates well only in the most nominal of spaces; give it a boundary case, and its suggestions will betray its true identity immediately. Below is another Gmail Smart Reply screen shot from the very same day, but this time responding to a spam scheme:
Here, Google’s AI has managed to misconstrue the message entirely, replacing an aware and cynical human reader with a naïve robot that is truly clueless. The greatest irony, of course, lies in the fact that the message itself was created not by a human Gina but by a computer algorithm that is itself informed by an AI decision system. Two human simulacra, each operating in the space of a greatly oversimplified conception of humanity, engage in a vapid dialogue that exists only to lead a human mark to give up money: hook, line and sinker. It is worth wondering how written communication will evolve as the space of human writers is further hybridized by AI-based authors that appear, at varying levels of fidelity, to be operating as first-class members of society. If such AI systems effectively change human discourse, then AI has become the tail that wags the dog.