I just participated in a workshop on teaching Ethics and AI, this one at the AAAI 2017 conference, which was full of students and faculty from universities all around. What was different, and what I found striking, is that the audience was peppered with journalists, assigned to cover even a professional AI workshop. In my last twenty years I have never seen this sort of coverage, and it is indicative of just how top-of-mind AI has become in the consciousness of the press and the public. This is a good thing, I will argue, so long as the public discourse really gets at the social ramifications of these new technologies. In any case, two articles are already out on our little workshop, one in Fortune, the other in The Register.
Just published a blog with Khalid Koser and Achim Steiner on refugees in these crazy times:
Dan Shewan at The Guardian just published a story worth reading about robotics and underemployment: Robots will destroy our jobs – and we’re not ready for it.
I am getting a crazy number of invitations to events speaking to the future of our world with AI a-coming. So I just published a HuffPost piece on this topic to try to knock down some misconceptions. It's called Three Not-laws of AI.
Stephen Hawking has, in the past few years, made some statements about Artificial Intelligence that are oft-quoted and, to me, somewhat misleading. His voiced concern was existential: about whether we are girding for the possibility (small 'p') that AI might decide that we humans are not so useful to have around, Skynet and all. But his commentary today in The Guardian, This is the most dangerous time for our planet, is outstandingly well done. He comments crisply on some of the most critical problems we face, weaving together the concerns we should all have regarding global balkanization, extremism, climate change, climate refugee dynamics, technologically induced underemployment, and inequity. It's all there, folks, ready for the reading. I heartily recommend Hawking's newest commentary; share it with your friends. His final thought is that we must be humble even at the heights of the ivory tower. I'll go one step further: we need to find a path toward global empathy, and this empathy in turn needs to totally reprogram how we think of externalities and the consequences of action and inaction both. It is time to be one team.
Steve Connor of The Guardian reports today that Volvo has announced that its first wave of self-driving cars will be unmarked so that drivers cannot distinguish them from human-driven cars. Wow. This is a remarkable design choice. In Robot Futures I spoke of the spectre that, in a dystopian future, if we don't get design right, it will be hard to know when robots are "backed" by human sensibility and when they are truly autonomous. Now we find out Volvo is going to do this. On purpose. Built into that notion, if we unpack it carefully, are two presumptions: (1) our first wave of cars is so awesome that people never need to treat them differently; (2) people are evil to robots, so let's hide robots in people's clothing.
This is chilling, actually. These cars will behave differently than people do. They will have capabilities in extenuating circumstances that are altogether different from those of people. If I step in front of one of these cars while making eye contact with the fellow behind the steering wheel, I will assume that, because he is driving, he will not hit my little dog, which happens to have tarmac-colored, radar-absorbing fur. Pity the poor dog. I want intentional transparency and empowerment for us humans as robots pervade our space. What I don't want is purposeful obscurity of the robot technologies around us, so that I cannot adjust to the robots' shortcomings, or their strange surveillance-oriented ways, or, or…
A CNN Money report by Ivana Kottasova last week noted that Oxford University researchers determined that 20–30% of all tweets about Clinton and Trump are actually generated by bots. Thanks to Jason Campbell for forwarding this article. What is interesting about this machine-turbocharging of the automation echo chamber is the notion that our discourse may become increasingly polluted by algorithms whose output is not distinguishable from personally held human opinions. Whilst this has been true from time immemorial for the famous and the rich, their publicists and ghostwriters are a relatively small fraction of societal discourse. But automation replicates far more quickly, and I can easily imagine a day when the majority of discourse in all directions is automated, with us humans just scratching the surface. Not a pretty sight, as the ownership of the messaging will be concentrated in the hands of the bot-makers. This is just like the concentration of wealth we see, but instead of concentrating wealth, we concentrate the generation of opinions and thought leadership.