“Algorithm Aversion” and Scary Headlines
Author: Robert Fischer, GTiMA President | Date: February 21, 2017
If it bleeds, it leads.
That’s the long-accepted dictum of how news organizations turn their biggest profits. News outlets provide an essential public service, but they must survive as businesses as well. Psychology Today noted this trend, saying that “news is a money-making industry, one that doesn’t always make the goal to report the facts accurately.”
Perhaps that’s why the Washington Post’s science section recently ran an article — the meat of which ended up being an exploration of why public fears of autonomous vehicles (AVs) are irrational — under the headline “Will the Public Accept the Fatal Mistakes of Self-Driving Cars?”
The article explored the idea of “algorithm aversion,” which, according to the Post and its sources, is the idea that “people are … more inclined to forgive mistakes by humans than machines.” The story cites public anxieties about refrigerators in the 1920s as a parallel example to current concerns about AVs, noting that “although scientists understood that cold storage could cut down on food-borne illnesses, reports of refrigeration equipment catching fire or leaking toxic gas made the public wary.”
All true, and a pertinent parallel. So why did the article need to begin with the line “How many people could self-driving cars kill before we would no longer tolerate them?” It might grab a reader’s attention, but it represents the direct opposite of what the piece is about. The question is discarded and never addressed as the article proceeds. What purpose does it serve other than to stoke fear and get clicks?
Other recent articles in the pages of the Post follow this trend. When Uber rolled out its self-driving test back in September, the paper covered it. Buried in the article were some important facts: in the seventh paragraph, the writer mentioned that the vehicles “will have two trained safety drivers on each ride.” The twentieth paragraph — third from the bottom — briefly described Pittsburgh Mayor William Peduto’s first ride in one of the Uber vehicles, of which he said “There was no time I was fearful or worried … I’m more worried when I’m on the road with an 18-year-old who is learning how to drive.”
That all sounds pretty good — so why did that article need the headline “Why Uber is Turning The Streets of a U.S. City into its Laboratory”? Or, in another article, why refer to voluntary Uber AV passengers as “guinea pigs”? A whole city reduced to an experimental lab? Humans as powerless and expendable as test-subject “guinea pigs”? Sounds positively dystopian. But to anyone who knows about AV tech, it also sounds ludicrous and dramatically overstated.
Elon Musk weighed in on this issue back in October, saying “if, in writing some article that’s negative, you effectively dissuade people from using an autonomous vehicle, you’re killing people. Next question.” That’s a big statement — but unfortunately, it’s correct.
The fact of the matter is that AVs promise a level of safety that is currently impossible in our world of error-prone human drivers. The Post has quoted quite a few experts in many articles to that effect. But in order to address the logical fallacy of “algorithm aversion,” does it really need to sell the story with attention-grabbing headlines and scare quotes? The stakes are too high to be using frightening language.
Because the real “bleeding” that should “lead” is a set of facts that, while far more dramatic than the material being covered in the AV realm, are perhaps not “news,” since they’ve been known for many years now. Each year, over 35,000 people die in car accidents. A full third of these deaths involve intoxicated drivers. Another third result from reckless speeding. Distracted drivers account for another twelve percent. Human error is the cause of 94% of auto-related deaths. And across all modes of transport, these automobile deaths represent 95% of total fatalities, according to the U.S. Bureau of Transportation Statistics. Perhaps one reason airplane and watercraft deaths number in the hundreds rather than the tens of thousands is that each mode relies on some degree of automation to reduce human error.
If the Washington Post is actually concerned about safety, perhaps it should start covering the 92 people killed every day in this country as a result of human error behind the wheel. It wouldn’t even need clever turns of phrase or headlines that don’t correspond to the information below them: the number itself is scary enough on its own.
Apparently “algorithm aversion” leads us to forgive human error more than we do machine error. But does that make it right?