24-08-2016

Kano cycles

Dr. Noriaki Kano sketched a pair of linked cycles at a lecture on Total Quality Control (Tokyo, 1990; from my handwritten notes).



Kano’s picture applies to both services and products, hardware and software. At the heart of the sketch is some value-creating system, like a standard sequence of steps to deliver effective health care.



Kano showed the steps of problem-solving the TQC way—the QC Story—like the path of a satellite launched from Earth that conducts a scientific investigation and then returns home after accomplishing its mission. The picture shows an intimate connection between improvement (the outer cycle) and the core value-producing system (the inner cycle).



I’ve been thinking about the Kano picture because I am preparing to coach project teams in two healthcare improvement collaboratives next month.  (Collaboratives are an action/learning method developed by the Institute for Healthcare Improvement, http://www.ihi.org/Engage/collaboratives/Pages/default.aspx.)



One collaborative involves dental clinics in community health centers that aim to improve oral health of children and adolescents.  The other collaborative brings together health care organizations that seek to improve end-of-life planning and care, partnering with their patients.



Both collaboratives start with existing care systems that have opportunities to improve performance.



In past collaboratives, I have often asked teams to think about a pilot population of patients who should experience improved care. However, I haven’t explicitly asked teams to specify a unit, department, service line, or value stream that represents Kano’s inner cycle.



While the specification of a pilot population can imply a specific care system (e.g., “all patients served by the Downtown Clinic”), if teams don’t characterize the inner cycle, it is tough to land the improvement project safely. During the project, there isn’t good communication between mission control (management of the inner cycle) and the satellite improvement project. At the end of the project, teams struggle to integrate lessons from their scientific problem-solving into the core care system.



My coaching notes for the two upcoming collaboratives now include a reminder to have my teams describe their care system—the home planet—before launching improvement.



Notes



Kano’s Design-Use-Check-Act cycle maps exactly to the Plan-Do-Study-Act cycle in the Model for Improvement: Design to Plan, Use to Do, Check to Study, Act to Act.



In TQC shorthand, a problem is the undesirable result of a job—that is, a problem is a gap between what we expect to occur when we use the design and what actually occurs. (See, e.g., H. Kume, Introduction to Statistical Methods for Quality Improvement (1985), AOTS: Tokyo; Chapter 10: QC Story.)





17-08-2016

Stimulated by a project informed by Brian Maskell’s work on Lean Accounting (introduced here), I’ve been thinking about the number of measures needed to guide managers and improvers.

To manage a value-stream, Brian and his colleagues at BMA, Inc. recommend just six measures to get started:

“It is important to have few measurements. To focus people's attention and motivate continuous Lean improvement, we must select a few well-chosen measures. The measurements also provide a balance of information, are easy to understand and use, and reflect Lean issues.”

(Maskell, Brian H. et al. Practical Lean Accounting, 2nd Edition. Productivity Press, 2011-08-15. VitalBook file, p. 149. For his manufacturing clients, Brian starts with these measures: (1) Sales per person; (2) On-time delivery; (3) Dock-to-dock time; (4) First Time Through aka first pass yield; (5) Average cost per unit; (6) Accounts Receivable days outstanding.)

The API authors of The Improvement Guide (2nd edition) give similar advice for people applying the Model for Improvement:

“Multiple measures are almost always required to balance competing interests and to help ensure that the system as a whole is improved. Try to keep the list to six measures or fewer. Strive to develop a list that is useful and manageable, not perfect.” (p. 95)

Six measures take less work than 12 or 24

Maskell and my API colleagues both know a lot about measurement. Both recognize that measurement typically costs time and money.

First, there are operational costs to acquire data and maintain adequate measurement quality, so that this week’s dot on a chart means the same thing as last week’s. Somebody has to do that work and get paid to do so.

There’s also a psychological cost: each additional measure imposes a task burden on decision makers, taxing the mental and emotional capacity to integrate additional information, draw useful inferences, and make decisions.

Cost considerations aside, are there any reasons to believe that five or six measures may be enough to guide managers as they work to maintain and improve system performance?

Evidence from the Center for Adaptive Behavior and Cognition: Less is sometimes more

Researchers at the Center for Adaptive Behavior and Cognition summarized several of their early studies in Simple Heuristics That Make Us Smart (Evolution and Cognition), published in 1999.

They describe “fast and frugal” heuristics—rules of thumb, useful shortcuts or approximations—as particular methods to search for and generate answers to problems.

In situations that demand prediction with incomplete information, they demonstrate that methods using a relatively small number of cues or problem features can outperform methods that use many more.

“A computationally simple strategy that uses only some of the available information can be more robust, making more accurate predictions for new data, than a computationally complex, information-guzzling strategy that overfits.”

“Robustness [making accurate predictions for new data] goes hand in hand with speed, accuracy, and especially information frugality. Fast and frugal heuristics can reduce overfitting by ignoring the noise inherent in many cues and looking instead for the ‘swamping forces’ reflected in the most important cues. Thus, simply using only one or a few of the most useful cues can automatically yield robustness. Furthermore, important cues are likely to remain important. The informative relationships in the environment are likely to hold true when the environment changes…”

(Gerd Gigerenzer, Peter M. Todd, and the ABC Research Group, Simple Heuristics That Make Us Smart (Evolution and Cognition), Kindle edition, locations 336-339.)

Near the end of chapter 5, the authors observe:  “The fact that a heuristic can disobey the rational maxim of collecting all available information and yet be the most accurate is certainly food for thought.” (Kindle Locations 1512-1513)
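To make the overfitting point concrete, here is a minimal simulation sketch in R (my illustration, not the ABC Group’s code). It pits a regression that uses all ten available cues against a frugal rule that keeps only the single cue with the highest validity in the training data; the data-generating process, sample sizes, and cue names are all assumptions of the sketch.

    # Hypothetical illustration: with a small training sample and mostly
    # noisy cues, a one-cue rule can predict new cases better than an
    # all-cue regression.
    set.seed(42)
    n_cues <- 10
    make_data <- function(n) {
      cues <- matrix(rnorm(n * n_cues), nrow = n)
      y <- cues[, 1] + rnorm(n)   # only the first cue carries real signal
      data.frame(y = y, cues)     # cue columns are named X1 ... X10
    }
    train <- make_data(20)        # small training sample invites overfitting
    test  <- make_data(500)

    # Full model: regression on all ten cues.
    full_fit <- lm(y ~ ., data = train)

    # Frugal rule: keep only the cue with the highest validity (absolute
    # correlation with the outcome) in the training data; ignore the rest.
    validity   <- abs(cor(train[, -1], train$y))
    best_cue   <- rownames(validity)[which.max(validity)]
    frugal_fit <- lm(reformulate(best_cue, response = "y"), data = train)

    # Out-of-sample error: the frugal model typically wins here, because
    # the ten-cue model fits noise that does not recur in new data.
    rmse <- function(fit, d) sqrt(mean((d$y - predict(fit, d))^2))
    c(full = rmse(full_fit, test), frugal = rmse(frugal_fit, test))

The frugal rule stands in for the book’s take-the-best family of heuristics; the point is not that one cue is always enough, but that ignoring weak cues can protect against overfitting.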


Back to Measures

The problem environments described in chapter 5 of Simple Heuristics that Make Us Smart are all decisions comparing two instances. The decision task is to answer the question “Is A better than B?” given multiple cues or factors where there is uncertainty and incomplete information. Furthermore, a few cues are important and capture the main information in the system and are likely to remain important in the near future.

The chapter 5 decision task resembles the situation faced by a value-stream manager or improvement team.  The general question is a comparison of the value stream or system with itself (A versus B): is the value stream or system better or worse this week than last?  And managers and improvers in any real situation always have some level of uncertainty and incomplete information.  If the value stream or system is approximately stable (in a control chart sense), the causal structure will be about the same week to week.

So here's the question: can we use a small number of measures to see if the current state of our system is better than the recent past?

And prospectively: can we use a small number of measures to predict and manage improvement?

Maskell and API answer yes to both questions, provided the small set of measures balances competing forces (like cost, quality, delivery, and safety).
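As a sketch of what that might look like in practice (my own construction, not Maskell’s or API’s method), a unit-weight tally over a small balanced set of measures answers “better or worse than last week?” without any weighting scheme. The measure names and values below are illustrative, loosely following Maskell’s starter list:

    # Hypothetical sketch: compare this week with last week on a small
    # balanced measure set using a unit-weight tally.
    measures <- data.frame(
      name             = c("on_time_delivery", "first_time_through",
                           "dock_to_dock_days", "avg_cost_per_unit",
                           "ar_days_outstanding"),
      higher_is_better = c(TRUE, TRUE, FALSE, FALSE, FALSE),
      last_week        = c(0.92, 0.85, 6.0, 41.0, 48),
      this_week        = c(0.95, 0.83, 5.5, 40.0, 47)
    )

    # Score +1 where this week beats last week, -1 where it is worse.
    change <- sign(measures$this_week - measures$last_week)
    score  <- ifelse(measures$higher_is_better, change, -change)
    sum(score)  # positive tally: better overall; negative: worse

A tally like this is deliberately crude; its value is that five or six measures are few enough to inspect one by one whenever the tally is ambiguous.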

The ABC Group’s research provides a bit of theory on why those answers make sense.

 



Additional Note

The ABC research looks at heuristics in a way that differs from the research program initiated by Daniel Kahneman and Amos Tversky. The Kahneman-Tversky program examines the ways people typically reason and infer, identifying inconsistencies with decision rules derived from logic and probability theory; the inconsistencies are termed "biases." The ABC group views typical reasoning and inference as functional methods that may perform well in specific environments, where performance is judged by success in predicting outcomes, not by alignment with abstract rules of reasoning.







09-08-2016

My friend and colleague Kevin Kelleher, principal at BMGI, taught me a rule of thumb 25 years ago. I call it the Kelleher Rule:

Collect data on key system or process properties “one step” faster than your capacity to make decisions and interventions using those data.

Kelleher’s rule suggests that if you have time to reflect and act weekly—that is, you can carry out a Plan-Do-Study-Act cycle weekly in your management of the system or process—then you should collect some data every day.

The sketch shows typical time steps, each separated by a factor of about 4 as you read down the scale (for example, month to week, week to workday).
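A small R sketch of the rule as I use it (the ladder of steps is my reconstruction of the sketch, and the helper function is hypothetical):

    # The Kelleher Rule: collect data one step faster than your
    # decision-and-intervention cadence.
    cadence <- c("quarter", "month", "week", "day", "shift")
    kelleher_data_step <- function(decision_step) {
      i <- match(decision_step, cadence)
      cadence[min(i + 1, length(cadence))]  # one step faster down the ladder
    }
    kelleher_data_step("week")  # "day": weekly PDSA calls for daily data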

When you track numbers in a table or add dots to a plot, a daily value gives you a sense of trends or shifts within the week, especially if you have past weeks for reference. This is the logic behind the use of specific rules for run charts.
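One widely used rule, for example, flags a shift when six or more consecutive points fall on the same side of the median (points exactly on the median are skipped). A minimal R sketch, with made-up daily counts:

    # Run chart shift rule: a run of 6 or more consecutive points on the
    # same side of the median signals a non-random change.
    has_shift <- function(y, run_length = 6) {
      sides <- sign(y - median(y))
      sides <- sides[sides != 0]   # skip points exactly on the median
      any(rle(sides)$lengths >= run_length)
    }
    daily <- c(12, 13, 12, 14, 13, 12, 18, 19, 18, 20, 19, 18)
    has_shift(daily)  # TRUE: the last six points all sit above the median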

It’s also usually a good idea not to react to a single data point unless that point is very unusual. Control charts are one class of tools to judge whether an individual point is unusual enough to justify a rational response through detective work and action.
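One common choice is the individuals (XmR) chart, which sets limits about three sigma units from the mean using the average moving range. A hedged sketch of the usual limit arithmetic, with made-up baseline values:

    # XmR chart limits: a new point outside these limits is unusual
    # enough to justify detective work.
    xmr_limits <- function(y) {
      mr_bar <- mean(abs(diff(y)))        # average moving range
      center <- mean(y)
      list(lcl = center - 2.66 * mr_bar,  # 2.66 = 3 / 1.128
           center = center,
           ucl = center + 2.66 * mr_bar)
    }
    weekly <- c(23, 25, 22, 26, 24, 23, 25, 24)
    lim <- xmr_limits(weekly)
    new_point <- 31
    new_point > lim$ucl || new_point < lim$lcl  # TRUE: worth investigating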

Prospectively, if you make a change to the system or process, you’d like evidence relatively quickly about whether the change is working—and quickness, again, is relative to your management capacity. If you have a weekly management stand-up review meeting and make a change on Monday, Kelleher’s rule guides you to get data daily to assess the impact of your change and inform next week’s review.

There’s a psychological aspect of data and action, too. It seems that the frequency of observation can stimulate a response—several observations can overwhelm my inertia and denial or ignorance, prodding me to reflect and then maybe even act to make things better. That is, awareness and signals from the environment can prime the pump for change.

Sending the Wrong Message

I thought about Kevin’s rule this week as I built a reporting template for an improvement collaborative. The collaborative aims to improve oral health for kids; we’re working with 20 dental clinics in community health centers across the U.S.

Like many improvement collaboratives I’ve seen, the reporting template uses months as the time step: in a spreadsheet, we have several columns of measures and every row in the sheet is a month.

The participating teams will send in their templates every 30 days; we’ll use a web app I built in R and shiny that will collate the numbers and make sets of “small multiple” run charts. (I’ve posted examples of Shiny apps here and here.)
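The small-multiple display is straightforward with ggplot2 facets. A simplified sketch of the kind of plot the app produces (the clinic names, measure, and values are all made up, not the actual template):

    library(ggplot2)

    # Illustrative data in the shape of the reporting template:
    # one row per clinic per month.
    dat <- expand.grid(
      clinic = paste("Clinic", 1:6),
      month  = seq(as.Date("2016-01-01"), by = "month", length.out = 8)
    )
    set.seed(1)
    dat$pct_varnish <- round(runif(nrow(dat), 40, 90))  # made-up measure

    # Median center line per clinic, one run chart panel per clinic.
    meds <- aggregate(pct_varnish ~ clinic, dat, median)
    ggplot(dat, aes(month, pct_varnish)) +
      geom_line() +
      geom_point() +
      geom_hline(data = meds, aes(yintercept = pct_varnish),
                 linetype = "dashed") +
      facet_wrap(~ clinic) +
      labs(x = NULL, y = "% of visits with fluoride varnish applied")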

The web display will help us understand progress and identify teams that look like they’re making exceptional progress, our “bright spots.”

Now, we intend that improvement teams will test changes to improve system or process performance much more frequently than once a month. However, in my experience the monthly reporting cycle often superimposes a monthly rhythm on the project work that can delay progress.

The inverse of the Kelleher Rule—e.g., if I collect data once a month, then my management Plan-Do-Study-Act work only has to happen quarterly—is a disaster.

My interim conclusion: in the oral health collaborative, I have to help participants decouple the monthly rhythm represented in the project reporting template from the testing and management action cycles. The improvement teams will need daily and weekly data to learn and improve; the monthly report should not dictate project pace.

Connection to Lean Thinking

The initial application of Lean methods occurred in mechanical fabrication and assembly operations, starting with Toyota decades ago. Getting value to flow in fabrication and assembly translates directly to “one-piece” flow of a fabricated part or assembled component. Mass (batch) production—characterized by batches of materials and parts staged ahead of each operation, which requires picking up, putting down, waiting, and tracking where stuff is at any given point—has enormous levels of waste.

What would application of one-piece flow and elimination of batch processing thinking mean for my collaborative improvement project?

It looks like we should avoid batching our project management into monthly buckets. We should drive to smaller and smaller chunks of time for our Plan-Do-Study-Act cycles.

As a consequence, since we ought to be able to work to a weekly or twice-weekly management cadence in the dental clinics, we should seek daily or half-day data to inform progress and not be fooled by the monthly report.


