Last year I gave an interview for a New York Times Magazine cover story on self-driving cars that hit too close to home:
“If they’re outside walking, and the sun is at just the right glare level, and there’s a mirrored truck stopped next to you, and the sun bounces off that truck and hits the guy so that you can’t see his face anymore — well, now your car just sees a stop sign. The chances of all that happening are diminishingly small — it’s very, very unlikely — but the problem is we will have millions of these cars. The very unlikely will happen all the time.”
Unfortunately, the recent fatal Tesla accident, which has been thoroughly reported, involved the side of a truck trailer. What we know so far is that the side of the trailer was white and thin, and that the sky was bright, possibly washing out the camera with glare or blooming from overexposure.

There are already many responses in the blogosphere. The techno-optimists say that Tesla will tweak their code so this particular case never happens again. But that means special case after special case, and any programmer will tell you that each one brings a tail of accidental side effects, a tail that only grows as the baggage of accumulated special cases blows up in the programmer’s face.

Statisticians and demographers have already explained that these rare cases are acceptable, because the average risk of death from car accidents still goes down as cars automate. But of course this raises the question: just how are we measuring the mean? Am I the mean? If I pay attention when I drive, never text, and don’t drink, then which is safer, me or an autonomous car? Fundamentally, we can make the world safer whilst creating a lottery system for accidents. How does this redistribute error and harm in society, and when is this ethical, and when is it not?

There is much, much more to this than statistics or bug-tweaking. There are underlying questions about interaction design: do we design autonomy to replace people in ways that surface new forms of error, or do we empower people to become incrementally safer, even if it means our technological trajectory is slower and more intentional? You know where I stand.
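The statistical point can be made concrete with a toy calculation. All of the numbers below are invented purely for illustration; what matters is the shape of the argument: a falling population mean can coexist with rising risk for the drivers who were already safer than average.

```python
# Hypothetical fatal-accident rates (per 100 million miles), invented
# for illustration only. No real data is implied by these values.
attentive_risk = 2.0    # an attentive human driver who never texts or drinks
average_risk = 10.0     # the population average across all human drivers
autonomous_risk = 5.0   # an autonomous car, identical for every occupant

# Measured against the mean, automation looks like a clear win:
# the population-wide rate drops from 10.0 to 5.0.
print(autonomous_risk < average_risk)   # the "average risk goes down" claim

# But for the attentive driver, personal risk more than doubles:
# the harm has been redistributed, not simply reduced.
print(autonomous_risk > attentive_risk)
```

The mean improves while a careful driver becomes worse off, which is exactly the lottery-system concern: averages alone cannot tell us whether that redistribution of harm is acceptable.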