George Stetten recommended a book to me written by the late Joseph Weizenbaum: Computer Power and Human Reason, published in 1976. The book is an absolutely fascinating look at the ethics of Artificial Intelligence, written by an inventor who achieved a fame he never foresaw. Weizenbaum was the creator of ELIZA, the computer “psychotherapist” program, and he was not just stunned but horrified by the acceptance ELIZA received. He meant the program as a parody, and yet it was taken so seriously that his own secretary demanded privacy so she could have intimate conversations with the program. Psychologists wrote about how the program could be a serious tool for treating patients, and for Weizenbaum this was an abomination of the essence of human interaction. The book quotes and discusses the vision of some of the early, great founders of A.I., from Herb Simon and Allen Newell to John McCarthy and Ed Feigenbaum. In it Weizenbaum examines how the research enterprise itself approaches the approximation of human behavior, and how that pursuit changes our own comprehension of what defines our humanity. There is a quote from Simon and Newell that is stunning in its timelessness: “There are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until – in the visible future – the range of problems they can handle will be coextensive with the range to which the human mind has been applied.” The incredible part is that this quote is from 1958. It could be used identically today, and it does a great job of summarizing the enticement of the Singularity crowd.
In another section, Weizenbaum argues that there are some activities that robots and computers simply shouldn’t do, and that speech understanding is one of them. He points to the cultural common ground that is essential to conversation, and wonders: what could computers that understand human speech possibly do that is good for society? It’s a great section because it is in conflict with everything technological around us today, and yet at the same time it tickles ethical issues that are just as relevant now, in a world where we talk to computers with ever-greater frequency.
The book is well worth a read, and if you find the technology-heavy sections too overbearing, you can skip chapters 1 through 3 and enjoy the rest of the book just as much.