ONE FUN FACT I TELL my students about Daniel Kahneman: He won the Nobel Prize for economics without ever taking an economics course in college. Kahneman is a psychologist whose discoveries laid the foundation for the new science of behavioral economics.
One of his most important findings is that a loss feels roughly twice as painful as an equivalent gain feels good, so the emotional scales aren’t balanced when we make economic decisions. For instance, workers will wait years to join a 401(k) because contributions can feel like a loss in spending power. The bottom line: People aren’t the rational maximizers imagined by an earlier generation of economists like Milton Friedman. What followed was a new economic model built on how people actually behave, warts and all.
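The “twice as painful” finding can be sketched with the simple piecewise value function from Kahneman and Tversky’s prospect theory. The loss-aversion coefficient of 2.0 below reflects the rough factor mentioned above; the function itself is a deliberately stripped-down illustration, not the full prospect-theory formula.

```python
def subjective_value(change_in_dollars, loss_aversion=2.0):
    """Felt value of a monetary change: losses weigh about twice as much as gains."""
    if change_in_dollars >= 0:
        return change_in_dollars
    return loss_aversion * change_in_dollars

# A $100 gain and a $100 loss are objectively symmetric,
# but emotionally they are not:
print(subjective_value(100))   # 100
print(subjective_value(-100))  # -200.0
```

This asymmetry is why a 401(k) contribution, framed as lost spending power, can feel worse than the matching retirement gain feels good.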
Now Kahneman has published a new book on flaws in human decision-making. In Noise: A Flaw in Human Judgment, Kahneman and his co-authors, Cass R. Sunstein and Olivier Sibony, define noise as unwanted variability in professional judgment, which they find just about everywhere. Consider:
Results like these reveal a gaping hole in expert judgment, yet it usually goes unseen and uncorrected for three reasons, Kahneman and his colleagues argue. First, professional judgment often lacks a clearly right answer, so a wrong result isn’t immediately obvious. Second, professionals offer smooth, coherent reasons for their decisions, which sound awfully convincing to laypeople like us. Third, the great disparity in professional judgment is perceived only when many judgments are examined side by side. This statistical approach isn’t our natural way of thinking. It takes time, effort and training. Yet it tends to reveal thousands of errors that don’t cancel one another out. Rather, the mistakes add up.
Studies of federal judges show how this noise can be reduced—and how unpopular such efforts can be. In a 1974 study, judges were given identical hypothetical cases and asked to recommend prison sentences. The same heroin dealer could be sentenced to one year by one judge and 10 years by another. A bank robber might get five years or 18, depending on who was pronouncing sentence.
The study won the attention of Senator Edward M. Kennedy, who—after a decade of work—won passage of the Sentencing Reform Act of 1984. Once the severity of a crime and the criminal history of the defendant were tabulated, new sentencing guidelines held judges to a narrow range. The guidelines worked. The difference in sentence lengths between judges fell from 4.9 months to 3.9 months, on average.
The experiment showed that a “model of the judge” can deliver more reliable results than a living, breathing judge. The factors that went into sentences were cut-and-dried, so the human tics of the living judges were nullified. A similar success was achieved in bail hearings for defendants awaiting trial. A ridiculously simple, two-factor model more accurately predicted a defendant’s likelihood of jumping bail than most human judges could. (The factors were the defendant’s age and the number of prior court hearings the prisoner had missed.) The model lowered incarceration rates, as well as racial discrimination in jail populations, without an increase in the numbers of runaway defendants.
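The bail model above can be sketched in a few lines. The book reports only the two inputs, the defendant’s age and the number of prior court hearings missed, so the point values and threshold below are purely hypothetical, chosen to show the shape of such a cut-and-dried rule rather than its real calibration.

```python
def flight_risk_score(age, missed_hearings):
    """Higher score = higher predicted risk of jumping bail (hypothetical weights)."""
    youth_points = max(0, 40 - age)        # younger defendants score higher
    history_points = 10 * missed_hearings  # each missed hearing adds risk
    return youth_points + history_points

def recommend_detention(age, missed_hearings, threshold=30):
    """Apply the identical rule to every defendant -- no judge-to-judge noise."""
    return flight_risk_score(age, missed_hearings) >= threshold

print(recommend_detention(22, 3))  # True: young, with repeated no-shows
print(recommend_detention(55, 0))  # False: older, no missed hearings
```

Because the same two numbers produce the same answer every time, the model is noise-free by construction, which is precisely what a panel of human judges is not.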
Despite their evident success, many federal judges hated the new guidelines, saying they tied their hands and robbed them of professional discretion. I recall writing a story about a judge who cried from the bench when pronouncing sentence, saying he was unable to show mercy to a prisoner only tangentially involved in a crime. After many anecdotes like this, the mandatory guidelines were struck down and sentencing disparities began to rise.
Kahneman writes that people resist replacing professional judgment with models of the professional, even if they yield more reliable results. As humans, we know we make mistakes, and yet we expect perfection in a machine or an algorithm. More than 40,000 people die on U.S. roads annually. But until self-driving cars are 1,000 times safer than human drivers, Kahneman writes, people won’t trust their cars to drive them.
Buckle up. We’re in for a bumpy ride.
Greg Spears worked as a reporter for the Knight Ridder Washington Bureau and Kiplinger’s Personal Finance magazine. After leaving journalism, he spent 23 years as a senior editor at Vanguard Group on the 401(k) side, where he implored people to save more for retirement. Greg currently teaches behavioral economics at St. Joseph’s University in Philadelphia as an adjunct professor. The subject helps shed light on why so many Americans save less than they might. He is also a Certified Financial Planner certificate holder. Check out Greg’s earlier articles.
It’s that feeling of loss of control, which is why many people are scared of flying even though it’s safer than driving. Not being in control is unsettling.
One of the biggest technical challenges is the mix of self-driving and human-driven cars, since humans are much less predictable.
Eventually we will convert to self-driving cars, and driving will become a special privilege. The day may come when few, if any, will be allowed to drive….
In for a bumpy ride – imagine when we convert mostly to self-driving and folks discover that self-driving cars won’t run you over (unlike human drivers :)). One could walk out into the middle of I-95 and stop traffic completely, and the same goes for any city street! Control of the streets will transfer from cars to pedestrians…
Kahneman’s examples illustrate why it is risky to put your trust in a single financial advisor. The random variability in advisors’ judgments is also the overwhelming reason for the differences in their performance.
It is also worth noting that Kahneman did his groundbreaking work with Amos Tversky, who no doubt would have shared the Nobel Prize had he not died before it was awarded to Kahneman.
Whenever I read about a celebrity such as Tiger Woods getting in an accident, I am pretty sure a self-driving vehicle would have been safer.