Benchmarking for Improvement

Analytical laboratories that produce measurements to inform commerce or health care require explicit methods and demonstrations of standard practice to support confident interpretation and application. Control charts of errors (deviations from defined standard samples) should show a state of statistical control, with acceptably low variation.
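For concreteness, here is a minimal sketch (not from the original post) of the arithmetic behind such a control chart, written as an individuals (XmR) chart in R. The deviation values are made up; the limits use the standard XmR moving-range constant 2.66.

```r
# Hypothetical deviations of a lab measurement from a reference standard
dev <- c(0.2, -0.1, 0.0, 0.3, -0.2, 0.1, 0.0, -0.3, 0.2, 0.1)

center_line <- mean(dev)                 # center line of the individuals chart
mr_bar <- mean(abs(diff(dev)))           # average moving range
ucl <- center_line + 2.66 * mr_bar       # upper control limit (standard XmR constant)
lcl <- center_line - 2.66 * mr_bar       # lower control limit

# Points outside (lcl, ucl) suggest the measurement system is not in
# statistical control
out_of_control <- dev < lcl | dev > ucl
data.frame(dev, out_of_control)
```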

Sixteen years ago, my colleague Jerry Langley taught me that some measurement systems can still be useful even if they fall short of what we expect to see in an analytical lab.

Here’s the situation Jerry had in mind: groups of teams that seek to improve performance on a specific topic. The teams agree to share methods and results so they can learn from each other, and they use a set of common measures. Typically, the conveners of the project distribute a table of measure definitions and offer guidance on extracting data from record systems; the aim is “good enough” measure alignment rather than the strict standardization attained by the analytical laboratory.

This past year, I helped two groups of health centers share performance data in oral health improvement projects. Each health center benchmarked itself against its group and against its own past performance. The health centers differ in electronic record systems and in levels of staff knowledge and training in measurement methods. These differences contribute to differences in local measurement systems among the health centers.

Can the project faculty and the health centers learn anything by comparing and contrasting performance?

According to Jerry, both faculty and health centers can learn from each other so long as the measurement system within each health center stays roughly the same, month to month. We don’t need the measurement system in every health center to match the other systems exactly.

Here’s why: in these projects, health centers seek to improve and sustain performance over time; a rank ordering of health centers in any given month does not provide much insight. We don’t benchmark by ranking to find winners or losers in any specific month.

On the other hand, health centers that show improvement or sustain good performance month after month deserve attention: how do people in these health centers organize work and care for their patients? This is benchmarking for improvement.

A “small-multiples” array of run charts provides a starting point for improvement benchmarking; the display helps the viewer focus on patterns within and across health centers.

The figure below shows a small-multiples display from one of the oral health projects, which began in August 2016. Generated by a Shiny web app developed in R, the display shows monthly data on the measure “Caries Risk Assessment” for 26 health centers, here labeled A through Z.

Teams see their own performance side by side with all the other health centers; the Shiny app shows the actual names of the health centers, not the disguised codes used here. Project faculty encourage teams to inquire about the work organization of health centers with consistently high results (health centers G and H). The faculty have particular interest in health centers that show a jump in performance (O, D, and Y) as well as those that show steady improvement (F, K, and S); these cases help us assess how well our teaching and support translate into improved performance.
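As a rough illustration of how such a jump might be flagged, one common run-chart rule treats six or more consecutive points above the baseline median as a signal of a shift. The helper below is a hypothetical sketch of that rule in R, not the project’s actual screening method.

```r
# Hypothetical helper: flag a sustained shift in a monthly run chart,
# using a common run-chart rule of six or more consecutive points
# above the median of a baseline period.
has_shift <- function(pct, n_baseline = 6, run_length = 6) {
  baseline_median <- median(pct[seq_len(n_baseline)])
  above <- pct > baseline_median
  runs <- rle(above)
  any(runs$values & runs$lengths >= run_length)
}

# Made-up example: flat performance, then a jump
x <- c(40, 42, 38, 41, 39, 40, 62, 65, 63, 66, 64, 67)
has_shift(x)  # TRUE
```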

[Figure: small-multiples display of monthly “Caries Risk Assessment” run charts for health centers A through Z (benchmarkimprovement2.jpg)]
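For readers who want to build a similar display, here is a minimal sketch using ggplot2’s facet_wrap. The data are simulated stand-ins, and the column names and plot labels are assumptions; the project’s actual Shiny app is not shown here.

```r
library(ggplot2)
library(dplyr)

# Simulated stand-in for the project data: monthly percentages for 26 centers
set.seed(42)
dat <- expand.grid(
  center = LETTERS,
  month  = seq(as.Date("2016-08-01"), by = "month", length.out = 18)
) |>
  mutate(pct = pmin(pmax(rnorm(n(), mean = 60, sd = 15), 0), 100))

# Median per center: the usual center line for a run chart
med <- dat |>
  group_by(center) |>
  summarise(m = median(pct))

# One run-chart panel per health center, with shared axes so the eye
# can compare patterns within and across centers
ggplot(dat, aes(month, pct)) +
  geom_line() +
  geom_point(size = 0.8) +
  geom_hline(data = med, aes(yintercept = m), linetype = "dashed") +
  facet_wrap(~ center, ncol = 6) +
  labs(title = "Caries Risk Assessment by health center",
       x = NULL, y = "Percent of patients assessed") +
  theme_minimal()
```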