Regression to the mean


[Scatter diagram: Galton's data on parent and child heights]

My colleague at IHI, Don Goldmann, posed a question last week about the phenomenon of regression to the mean.   How can we use statistical process control insights to help people understand regression to the mean?

Background

Francis Galton explored regression to the mean in the 19th century. The scatter diagram above represents data he used in his explanation: children of short parents tend to be shorter than the population average but not as short as their parents. Similarly, children of tall parents tend to be taller than the population average but not as tall as their parents.

An SPC explanation of regression to the mean

If you watch a stable process, one in a state of statistical control in Walter Shewhart's sense, the process will have a mean level useful for prediction. While you can compute an average of past values for any process, that average does not predict the mean of future values unless the process is stable.

Assuming stability, the best point prediction for the next value is the mean of past values, and the best interval prediction is the range between the control limits calculated from those values.
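As a concrete sketch, the individuals-chart arithmetic looks like this in R, using made-up values rather than the data from the example below; the center line is the point prediction, and the limits (mean plus or minus 2.66 times the average moving range) give the interval:

# Hypothetical past values from a process assumed to be stable
x <- c(14.2, 16.1, 15.3, 12.8, 17.0, 15.5, 13.9, 16.4, 14.7, 15.8)

center <- mean(x)                 # center line: the point prediction
mr_bar <- mean(abs(diff(x)))      # average moving range of successive values
ucl <- center + 2.66 * mr_bar     # upper control limit
lcl <- center - 2.66 * mr_bar     # lower control limit

c(prediction = center, lower = lcl, upper = ucl)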

Suppose you just saw a value above the mean on the control chart of that stable system. What can you say about the next value? Your point prediction is the mean of the control chart, which is lower than what you just saw, under the assumption that the process remains stable until the evidence suggests otherwise. If you just saw a low value, your bet about the next value is again the mean, which is higher than what you just saw. If you keep track of the predictions and values, point by point, and think of your predictions as bets, you will win most of the bets. That's regression to the mean in SPC terms.
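A quick simulation makes the point concrete. This is my own sketch in R, not the code posted with this piece: bet "back toward the mean" after every value of a stable Normal(15, 3) process and see how often you win.

set.seed(1)
x <- rnorm(10001, mean = 15, sd = 3)   # a long run from a stable process
prev <- head(x, -1)                    # each value ...
nxt  <- tail(x, -1)                    # ... and the value that follows it
# A bet wins when a value above the mean is followed by something lower,
# or a value below the mean is followed by something higher.
win <- (prev > 15 & nxt < prev) | (prev < 15 & nxt > prev)
mean(win)                              # about 0.75

For independent Normal values, the exact win probability works out to 3/4, consistent with the roughly 74% reported for the simulation below.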

On the other hand, if you intervene in the system on purpose, or the causal system changes for some reason other than your intervention, the system may no longer behave like the stable system of the past, and the betting strategy will no longer be as successful.

Suppose you plot 20 more values from the system and now all the points fall above the previous process mean. The stable-process betting strategy will be less successful because the values are not reverting to the previous process mean; you will lose a higher proportion of your bets than in the stable setting.
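The same sketch shows the damage a sustained shift does to the betting strategy. Here the shift is two standard deviations, an arbitrary size chosen for illustration, while the bets remain anchored to the old mean of 15:

set.seed(2)
x <- rnorm(10001, mean = 21, sd = 3)   # process mean has shifted from 15 to 21
prev <- head(x, -1)
nxt  <- tail(x, -1)
# Same betting rule as before, still anchored to the old mean of 15
win <- (prev > 15 & nxt < prev) | (prev < 15 & nxt > prev)
mean(win)                              # close to 0.5: no better than a coin flip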

Example

[Individuals control chart: 40 simulated values with center line and control limits]

The picture shows a plot of 40 values.  The first 20 values are the base values, drawn as a random sample from a Normal distribution with mean 15 and standard deviation 3.  These values yield the mean center line and the control limits.   No base values are outside the control limits and no other ‘pattern’ rules are violated.  If these values were generated by a process, the process appears to be stable.

The next 20 values, drawn from the same distribution, illustrate the betting rule: values shown as squares represent winning bets and values shown as triangles are losing bets. That is, a low value tends to be followed by a higher value and a high value tends to be followed by a lower value, where low and high are relative to the process mean. The bet is not a sure thing: 7 of the 20 bets are losses. Just as Galton observed, the regression is only a tendency.

Under the assumptions embedded in my simulation, the betting strategy wins on average about 74% of the time.   I’ve posted the R code used to create the control chart and the simulation arithmetic on GitHub here.
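Since that code isn't reproduced here, the following is a minimal sketch of the experiment as described: 20 base values set the center line and individuals-chart limits, and 20 further values are scored as bets against that center line. Scoring the first new value against the last base value is my assumption, not necessarily the posted code's choice.

set.seed(3)
base <- rnorm(20, mean = 15, sd = 3)          # base values
center <- mean(base)                          # center line of the chart
mr_bar <- mean(abs(diff(base)))               # average moving range
limits <- center + c(-1, 1) * 2.66 * mr_bar   # individuals-chart control limits

new <- rnorm(20, mean = 15, sd = 3)           # 20 further values, same process
prev <- c(tail(base, 1), head(new, -1))       # the value preceding each new value
win <- (prev > center & new < prev) | (prev < center & new > prev)
sum(win)                                      # winning bets out of 20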

Why care about regression to the mean?

Don noted that Daniel Kahneman has a compelling story to illustrate the pernicious effects that arise from not recognizing regression to the mean.

Here’s Kahneman and Tversky’s version of the story, from the 1982 collection they edited with Paul Slovic, Judgment under Uncertainty: Heuristics and Biases (Cambridge University Press, pp. 67-68):

“The instructors in a flight school adopted a policy of consistent positive reinforcement recommended by psychologists. They verbally reinforced each successful execution of a flight maneuver. After some experience with this training approach, the instructors claimed that contrary to psychological doctrine, high praise for good execution of complex maneuvers typically results in a decrement of performance on the next try.

Regression is inevitable in flight maneuvers because performance is not perfectly reliable and progress between successive maneuvers is slow. Hence pilots who did exceptionally well on one trial are likely to deteriorate on the next, regardless of the instructors’ reaction to the initial success. The experienced flight instructors actually discovered the regression but attributed it to the detrimental effect of positive reinforcement. This true story illustrates a saddening aspect of the human condition. We normally reinforce others when their behavior is good and punish them when their behavior is bad. By regression alone, therefore, they are most likely to improve after being punished and most likely to deteriorate after being rewarded. Consequently, we are exposed to a lifetime schedule in which we are most often rewarded for punishing others and punished for rewarding.”

Managers who work with an approximately stable system are at risk of attributing spurious causes to subsequent changes in performance, or of misunderstanding those changes altogether.

Does sending a critical email after you see a low value lead to better subsequent performance? How can you tell whether process performance has really improved? Control chart rules, or even more simply run chart rules, should guide process managers' words and actions. You will have fewer disappointments, less inappropriate reinforcement, and a greater chance of detecting meaningful changes in process performance.

Note about the flight instructor story

As far as I can tell, the flight instructor story first appeared in a 1974 article in Science (Amos Tversky and Daniel Kahneman, "Judgment under Uncertainty: Heuristics and Biases," Science, 27 Sep 1974, Vol. 185, Issue 4157, pp. 1124-1131). Chapter 17 of Kahneman's 2011 Thinking, Fast and Slow (Farrar, Straus and Giroux: New York) summarizes the history and core issues of regression to the mean and includes the flight instructor story, too.

