
Hawking Gets it Right

Stephen Hawking has, in the past few years, made some statements about Artificial Intelligence that are oft-quoted and, to me, somewhat misleading. His voiced concern was existential: whether we are girding for the possibility (small ‘p’) that AI might decide that we humans are not so useful to have around, Skynet and all. But his commentary today in The Guardian, This is the most dangerous time for our planet, is outstandingly well done. He comments crisply on some of the most critical problems we face, weaving together the concerns we should all have regarding global balkanization, extremism, climate change, climate-refugee dynamics, technologically induced underemployment, and inequity. It’s all there, folks, ready for the reading. I heartily recommend Hawking’s newest commentary; share it with your friends. His final thoughts are that we must be humble even at the heights of the ivory tower. I’ll go one step further: we need to find a path toward global empathy, and this empathy in turn needs to totally reprogram how we think of externalities and the consequences of action and inaction both. It is time to be one team.

This is no Turing Test

Steve Connor of The Guardian reports today that Volvo has announced that their first wave of self-driving cars will be unmarked, so that drivers cannot distinguish them from human-driven cars. Wow. This is a remarkable design choice. In Robot Futures I spoke of the spectre that, in a dystopian future, if we don’t get the design right, it will be hard to know when robots are “backed” by human sensibility and when they are truly autonomous. Now we find out Volvo is going to do this. On purpose. Built into that notion, if we unpack it carefully, are two presumptions: (1) our first wave of cars is so awesome that people never need to treat them differently; (2) people are evil to robots, so let’s hide robots in people’s clothing.

This is chilling, actually. These cars will behave differently than people. They will have capabilities, in extreme circumstances, that are altogether different from those of people. If I step in front of one of these cars making eye contact with the fellow behind the steering wheel, thinking he’s driving, I will assume that he will not hit my little dog, which happens to have tarmac-colored, radar-absorbing fur. Pity the poor dog. I want intentional transparency and empowerment for us humans as robots pervade our space. What I don’t want is purposeful obscurity of the robot technologies around us, so that I cannot adjust to the robots’ shortcomings, or their strange surveillance-oriented ways, or, or…

Bot Pollution

A CNN Money report by Ivana Kottasova last week noted that Oxford University researchers determined that 20-30% of all tweets about Clinton and Trump are actually generated by bots. Thanks to Jason Campbell for forwarding this article. What is interesting about this machine-turbocharging of the automation echo chamber is the notion that our discourse may become increasingly polluted by algorithms that are not distinguishable from personally held human opinions. Whilst this has been true from time immemorial for the famous and the rich, their publicists and writers are a relatively small fraction of societal discourse. But automation replicates far more quickly, and I can easily imagine a day when the majority of discourse in all directions is automated, with us humans just scratching the surface. Not a pretty sight, as ownership of the messaging will be concentrated in the hands of the bot-makers. This is just like the concentration of wealth we see, but instead of concentrating wealth, we concentrate the generation of opinions and thought leadership.

AI, and not the military

John Markoff’s article today in the New York Times, Devising Real Ethics for Artificial Intelligence, describes how five major companies are working together on the question of the social ramifications of AI in the near future. Sounds familiar! Interestingly, they wish to disregard both the Singularity and military applications, for now. The article is good, but the upshot is that an opaque corporate process seems to be all we’ve got just now.

Even the social waiting job is at risk

Eater’s Matt Sedaca just published a piece about trends in automation for back-kitchen and waitstaff jobs in restaurants. The tropes are eye-opening, if only because some technologists are so very eager to have machines do anything they can:

Whenever a job can be done equally good by a machine, I say: Let them

Well, that’s one recipe for deciding where machines should operate, but not one that is necessarily cognizant of the externalities. There are technical considerations, to be sure; but there is more to the equation than that: dignity, purpose, social interaction. Heavy words that are more sociological than computational.

The Machine Employment Debate

The June 25 – July 1 issue of The Economist featured a special report on the future of Artificial Intelligence titled March of the Machines. The report features an array of articles but, more than anything else, it tries to accomplish three basic missions: (1) convince the reader that Artificial Intelligence has reached a tipping point where it is, finally, able to solve a slew of problems better than humans; (2) allay anxiety about future chronic underemployment due to automation of current job categories; and (3) tone down the concern that AI will rise up and exterminate us humans. I write ‘us’ for the human readers of this blog post, of course. Not you web robots and automatic parsers.

As for (1), the story is all about deep neural nets, and the amazing new results that have come to us just in the past decade. I agree with this editorial position; it is really quite extraordinary how quickly a number of previously human provinces are being bested by machine intelligence. Is it all Deep Learning? Not exactly. Some is, to be sure, but that is a simplification. Computational intelligence is also succeeding because of the Internet itself: in many cases, old learning algorithms have a new lease on life because millions of on-line examples, thanks to the likes of Facebook and YouTube, enable optimization systems to optimize like never before, separating examples and counter-examples with an efficiency previously unimaginable. The story is about neural nets, but also about massive increases in storage space, processor speed, and Internet repositories of examples contributed by millions of us human Internet users. Yet there are differences between what computers and humans do, because how we demonstrate and embody intelligence is still worlds apart. Why do autonomous cars crash, even though they are statistically safer? Because they are exposed to situations that their learning systems never quite encountered. That is a spooky situation for us humans because, in many situations where robots fail, we humans have a common sense that would almost never have caused that error. Alien species breed alien error forms.
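To make the data-scale point concrete, here is a minimal sketch in Python. The dataset, the model, and the sample sizes are all illustrative assumptions of mine, not anything drawn from the Economist report: a decades-old algorithm, logistic regression, separates examples from counter-examples better simply because it sees more of them.

```python
# A minimal sketch: the same old algorithm gets better purely
# because it sees more data. All numbers are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for "millions of on-line examples"
X, y = make_classification(n_samples=100_000, n_features=20,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Hold the algorithm fixed; vary only the supply of examples
for n in (100, 1_000, 10_000, 80_000):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train[:n], y_train[:n])
    print(f"trained on {n:>6} examples -> test accuracy "
          f"{model.score(X_test, y_test):.3f}")
```

The algorithm never changes; only the number of examples does. That, as much as any single architectural breakthrough, is the Internet’s contribution to the story.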

As for (3), I agree, again. The case here is less subtle. AI is simply nowhere near taking over our planet and killing us. But. As I often point out, the real issue is: as AI concentrates power, knowledge, and wealth massively in the hands of a few corporations, will they take over (more than they already have)? That is the question. AI is not an existential threat to humanity. But it just might be an existential threat to us anyway!
This brings us to (2). The Economist tries hard to be optimistic about automation and jobs. After all, people will tend to these AI systems, and there will be whole categories of jobs we don’t even know about yet! The Luddite example is brought up, yet again. As are statistics cherry-picked from specific examples of automation. A favorite: banks brought us ATMs, but now we have far more small branches. This is true, dear Economist, but it is not the whole story. Whole sets of interactive kiosks are making human beings redundant in all those tiny branches we are meant to celebrate. It is always interesting that, in the same article, writers can say that the progress of AI is disruptive now, nothing like before; and then, in the same breath, that we can extrapolate employment dynamics just as we did for prior improvements in machinery. Disruptive or not disruptive? Make up your mind!

Layers of Autonomy

Last year I gave an interview for the New York Times Magazine cover story on self-driving cars that hit too close to home:

“If they’re outside walking, and the sun is at just the right glare level, and there’s a mirrored truck stopped next to you, and the sun bounces off that truck and hits the guy so that you can’t see his face anymore — well, now your car just sees a stop sign. The chances of all that happening are diminishingly small — it’s very, very unlikely — but the problem is we will have millions of these cars. The very unlikely will happen all the time.”

Unfortunately, the recent fatal Tesla accident, which has been thoroughly reported, involved the side of a truck trailer. What we know so far is that the side of the trailer was white and thin, and that the sky was bright, possibly washing out the camera with glare or blooming from overexposure. There are so many responses in the blogosphere already. The techno-optimists say that Tesla will tweak their code so this particular case never happens again. But special case after special case, as any programmer will tell you, carries a tail of accidental side effects, and the baggage of special cases dragged along only grows until it blows up in the programmer’s face. Statisticians and demographers have already explained that these rare cases are acceptable, because the average risk of death from car accidents still goes down as cars automate. But of course this raises the question: just how are we measuring the mean? Am I the mean? If I pay attention when I drive, never text, and don’t drink, then which is safer, me or an autonomous car? Fundamentally, we can make the world safer whilst creating a lottery system for accidents. How does this redistribute error and harm in society, and when is that ethical or unethical? There is much, much more to this than statistics or bug-tweaking. There are underlying questions about interaction design: do we design autonomy to replace people in ways that surface new forms of error, or do we empower people to become incrementally safer, even if it means our technological trajectory is slower and more intentional? You know where I stand.
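The “very unlikely happens all the time” arithmetic is worth making concrete. Here is a back-of-the-envelope sketch in Python; every number in it is invented for illustration and is nobody’s real failure rate:

```python
# Back-of-the-envelope: rare per-mile failures at fleet scale.
# Every number below is an invented assumption for illustration.
p_failure_per_mile = 1e-8        # one-in-100-million-mile perception failure
miles_per_car_per_year = 10_000  # assumed annual mileage per car
fleet_size = 10_000_000          # assumed ten million autonomous cars

fleet_miles = fleet_size * miles_per_car_per_year
expected_failures = p_failure_per_mile * fleet_miles
print(f"Expected failures per year, fleet-wide: {expected_failures:,.0f}")
# -> Expected failures per year, fleet-wide: 1,000
```

A failure mode no individual driver would ever expect to witness becomes, at fleet scale, a steady drumbeat; the question is who ends up holding the ticket in that lottery.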

AI for Empathy?

Mark Blunden of London’s Evening Standard wrote yesterday about Amelia, an AI now replacing human council workers in Enfield, England. Two quotes are especially telling. The first is from James Rolfe of the council government there:

The customer shouldn’t see that they are interacting with a digital agent, it should be a seamless experience.

And from the president of IPsoft, the company selling the robot:

[not about AI] replacing labour with cheaper labour, but replacing labour with cognitive systems – to be able to answer a question as a human would understand it.

So we are saving money (not counting externalities of course), replacing human-facing high-touch verbal service with AI, and we are doing it whilst trying to ensure customers will not even realize they are speaking with a non-human entity. What is the formula here? AI – Humanity = Cost Savings – Empathy?

Here is the original article as printed:

[Photo: scan of the Evening Standard article as printed]