
Saturday, May 29, 2021

Kahneman et al.'s "noise" metaphor

"A flaw in human judgment."

It's a metaphor.
A figure of speech here denoting the unhelpful, wasteful imprecision wrought by excessive (and mostly remediable) variability in judgment and decision-making.
"Noise" in the aural definition context is that random conglomeration of sound waves that masks useful and desired "signals."

Inter- or intra-personal cognitive "noise" masks truths, impairing accurate judgments.

They commence with the venerable "accuracy / precision" target analogy, wherein "accuracy" means centering on the bull's-eye and "precision" means clustering tightly; Team A's shots are both.

Team B is quite precise—and quite precisely "off" ("mis-calibrated," "biased"). Team C is simply randomly imprecise, and Team D is both imprecise and apparently biased.
This was a staple analogy for me during my university time in the early 2000s as a "Critical Thinking / Argument Analysis" adjunct. "You can be quite precise, and quite precisely wrong."
The focus of this book is "imprecision," not "bias," the latter of which is independent of variability per se. "Bias" has gotten way more ink than "noise." The authors, in addition to providing an extensive survey of the problem, posit a detailed method of analysis and mitigation.
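The target analogy maps cleanly onto a standard statistical identity: mean squared error decomposes into squared bias plus variance ("noise"). A minimal sketch of that decomposition, with four hypothetical teams of my own construction (the team labels follow the analogy, not any figure from the book):

```python
import random

random.seed(42)

def shots(bias, spread, n=1000):
    """Simulate n one-dimensional shots at a target at 0,
    with systematic offset `bias` and random scatter `spread`."""
    return [bias + random.gauss(0, spread) for _ in range(n)]

def decompose(xs):
    """Split mean squared error into squared bias plus variance (noise)."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n   # noise component
    mse = sum(x ** 2 for x in xs) / n            # total error vs. target at 0
    return mse, mean ** 2, var

# Team A: accurate and precise; Team B: precise but biased;
# Team C: unbiased but noisy; Team D: biased and noisy.
teams = {"A": shots(0, 1), "B": shots(5, 1), "C": shots(0, 5), "D": shots(5, 5)}
for name, xs in teams.items():
    mse, bias2, var = decompose(xs)
    print(f"Team {name}: MSE={mse:6.1f}  bias^2={bias2:6.1f}  noise^2={var:6.1f}")
```

Team B's error is nearly all bias; Team C's is nearly all noise, which is exactly the distinction the book builds on.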

An excerpt:
Matters of taste and competitive settings all pose interesting problems of judgment. But our focus is on judgments in which variability is undesirable. System noise is a problem of systems, which are organizations, not markets. When traders make different assessments of the value of a stock, some of them will make money, and others will not. Disagreements make markets. But if one of those traders is randomly chosen to make that assessment on behalf of her firm, and if we find out that her colleagues in the same firm would produce very different assessments, then the firm faces system noise, and that is a problem.

An elegant illustration of the issue arose when we presented our findings to the senior managers of an asset management firm, prompting them to run their own exploratory noise audit. They asked forty-two experienced investors in the firm to estimate the fair value of a stock (the price at which the investors would be indifferent to buying or selling). The investors based their analysis on a one-page description of the business; the data included simplified profit and loss, balance sheet, and cash flow statements for the past three years and projections for the next two. Median noise, measured in the same way as in the insurance company, was 41%. Such large differences among investors in the same firm, using the same valuation methods, cannot be good news.

Wherever the person making a judgment is randomly selected from a pool of equally qualified individuals, as is the case in this asset management firm, in the criminal justice system, and in the insurance company discussed earlier, noise is a problem. System noise plagues many organizations: an assignment process that is effectively random often decides which doctor sees you in a hospital, which judge hears your case in a courtroom, which patent examiner reviews your application, which customer service representative hears your complaint, and so on. Unwanted variability in these judgments can cause serious problems, including a loss of money and rampant unfairness.

A frequent misconception about unwanted variability in judgments is that it doesn’t matter, because random errors supposedly cancel one another out. Certainly, positive and negative errors in a judgment about the same case will tend to cancel one another out, and we will discuss in detail how this property can be used to reduce noise. But noisy systems do not make multiple judgments of the same case. They make noisy judgments of different cases. If one insurance policy is overpriced and another is underpriced, pricing may on average look right, but the insurance company has made two costly errors. If two felons who both should be sentenced to five years in prison receive sentences of three years and seven years, justice has not, on average, been done. In noisy systems, errors do not cancel out. They add up…

Kahneman, Daniel; Sibony, Olivier; Sunstein, Cass R. Noise (pp. 30-32). Little, Brown and Company. Kindle Edition.
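The excerpt doesn't give the formula behind the "median noise" of 41%, but one plausible reading is the median, across all pairs of judges, of the relative difference between the pair's two estimates (difference divided by the pair's average). A hedged sketch of that reading, on made-up valuation numbers:

```python
from itertools import combinations
from statistics import median

def noise_index(estimates):
    """Median, over all pairs of judges, of the relative difference
    between the pair's estimates (|a - b| / pair average).
    One plausible reading of the book's noise-audit measure;
    the excerpt does not spell out the exact formula."""
    rel_diffs = [abs(a - b) / ((a + b) / 2)
                 for a, b in combinations(estimates, 2)]
    return median(rel_diffs)

# Hypothetical fair-value estimates (dollars) from a handful of analysts
valuations = [90, 110, 72, 135, 100, 60, 145]
print(f"median pairwise noise: {noise_index(valuations):.0%}")
```

Identical estimates yield 0% by construction; the wider the spread of equally credentialed opinion, the larger the index.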
From the Conclusion:
Noise is the unwanted variability of judgments, and there is too much of it. Our central goals here have been to explain why that is so and to see what might be done about it. We have covered a great deal of material in this book, and by way of conclusion, we offer here a brisk review of the main points, as well as a broader perspective.

Judgments As we use the term, judgment should not be confused with “thinking.” It is a much narrower concept: judgment is a form of measurement in which the instrument is a human mind. Like other measurements, a judgment assigns a score to an object. The score need not be a number. “Mary Johnson’s tumor is probably benign” is a judgment, as are statements like “The national economy is very unstable,” “Fred Williams would be the best person to hire as our new manager,” and “The premium to insure this risk should be $12,000.” Judgments informally integrate diverse pieces of information into an overall assessment. They are not computations, and they do not follow exact rules. A teacher uses judgment to grade an essay, but not to score a multiple-choice test.

Many people earn a living by making professional judgments, and everyone is affected by such judgments in important ways. Professional judges, as we call them here, include football coaches and cardiologists, lawyers and engineers, Hollywood executives and insurance underwriters, and many more. Professional judgments have been the focus of this book, both because they have been extensively studied and because their quality has such a large impact on all of us. We believe that what we have learned applies to judgments that people make in other parts of their lives, too…
(pp. 336-337).
I enjoyed it. Highly recommended.


With respect to wildly divergent, increasingly acrimonious social media "judgment noise," I have a theory involving the technical stats term "kurtosis"—a distribution's relatively "skinny" or "fat" tails—wherein the lower and upper extremes of a platykurtic distributional curve contain excessive proportions of the data under study, e.g.:

My hypothesis is that the "democratization" of online discourse—materially resulting from the effective elimination of economic and editorial "barriers to entry" to opinion dissemination irrespective of truth-value—has resulted in a frenzied "click-bait" cacophony culture wherein the only way to attract attention is to be more outlandish than the prior person. A "populist" exacerbation of the old media "If-It-Bleeds-It-Leads" eyeballs-magnet ethos.

In fact, I would flip the allusive bell curve image on its head, where the bulk of noise is in the tails:

Eh? To wit:
Random screengrab today. 

Given our long-standing interest in self-awareness, it is surprising how little science has traditionally had to say about it. What features of our brains enable us to think about ourselves? What are our strengths and weaknesses in this respect and how do they influence how we decide, learn, and interact? Can we train self-awareness, and how does this improve our performance? In the past three decades, however, research addressing such questions has been picking up speed. In Know Thyself, cognitive neuroscientist Stephen Fleming synthesizes this multifaceted research into an admirably coherent narrative and outlines how the resulting knowledge may be applied to solve societal problems.
Bears on "judgment," no? (And, so-called "Deliberation Science.") We'll see. Just got it. Deep into the Sally Weintrobe book at the moment.
