Measurements Good Enough for Guidance
The second question in the Model for Improvement asks: How will you know that a change is an improvement?
The usual answer to this question involves one or more measures. A measure requires a method you invent or borrow: the steps someone or some system carries out to generate a number, a rank, or a choice like “Yes” or “No.” The method translates a concept like “cost of care” or “pain” or “waiting time” into a specific, repeatable operation.
(To say a change is an improvement requires a bit more structure: not only one or more measurements but also criteria for judging that there is improvement, and a decision about whether the observations meet those criteria. For example, we might use a control chart of the measurements and look for a shift in the measurements after the change, compared to the values before the change. In other words, we need an operational definition of improvement. See W.E. Deming (1986), Out of the Crisis, MIT Center for Advanced Engineering Studies, Cambridge, MA, Chapter 9: Operational Definitions, Conformance, Performance.)
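To make the idea of an operational definition concrete, here is a minimal sketch in Python. It assumes an individuals (XmR) chart whose center line and limits come from baseline data, judged with one common shift rule (a run of eight consecutive points on one side of the center line); the waiting-time numbers and the run length are illustrative assumptions, not data from any real project.

```python
def xmr_limits(baseline):
    """Center line and natural process limits from baseline measurements."""
    center = sum(baseline) / len(baseline)
    moving_ranges = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    # 2.66 is the standard XmR constant relating average moving range to limits.
    return center, center - 2.66 * avg_mr, center + 2.66 * avg_mr

def run_below(points, center, run_length=8):
    """Shift rule: True if `run_length` consecutive points fall below the
    center line (a sustained downward shift; for waiting time, lower is better)."""
    run = 0
    for x in points:
        run = run + 1 if x < center else 0
        if run >= run_length:
            return True
    return False

# Illustrative waiting-time measurements (minutes), before and after a change.
before = [42, 38, 45, 40, 44, 39, 43, 41, 46, 40]
after = [35, 33, 36, 34, 32, 35, 33, 34]

center, lcl, ucl = xmr_limits(before)
print(f"center = {center:.1f} min, limits = ({lcl:.1f}, {ucl:.1f})")
print("Improvement by the shift rule?", run_below(after, center))
```

A point outside the natural process limits would be another signal worth investigating, but the run rule is the one that matches “a shift in the measurements after the change.”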
Sometimes quick and dirty measures are good enough to provoke insights and add structure to your work.
I spoke with some colleagues in a government agency last week about a project that aims to improve planning of events—“summits.” The summits are periodic events that bring healthcare providers, payers, and regulators together to focus on better care for patients.
My colleagues have limited time to develop sophisticated measures; they are hard-pressed to get their work done while carving out a little time to make event planning better with each cycle.
Here’s the measurement tool they’ve been using to gauge whether their planning is getting better. For the most recent past summit and the summit now being planned, they asked colleagues on the team to rate their agreement with a short set of statements about the planning, on a scale from Strongly Disagree to Strongly Agree.
There are several criticisms you might raise about this tool: for example, people may avoid an honest appraisal and default to Neutral or Agree, and concepts like “good job” or “Attractive Topic” are left undefined.
Nonetheless, the tool provides a bit of structure as the planning team reflects on their process and is a start in measuring improvement.
(1) The set of statements is a prompt to review several aspects of planning, a hedge against forgetting to reflect on the team’s work in the press to “get things done.”
(2) A summary of responses provides weight in discussion with peers and managers about ways to make the planning process more effective and efficient.
For example, team members rated “enough time to plan the summit” as “Neutral” for both the past summit and the cycle now underway, while they rated the other statements “Agree” or “Strongly Agree.”
Given the usual bias toward answers at the Agree end of the scale, this Neutral response is an important signal.
Resolving the planning-time issue requires help from the managers who supervise the planning team. The consistent responses from team members can open the door to testing ways to relieve time pressure in the next round of summit planning.
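A mechanical footnote: summarizing agreement ratings like these across planning cycles takes only a few lines of code. The sketch below uses a hypothetical statement label and made-up responses purely to show the bookkeeping; nothing in it is the team’s actual data.

```python
from collections import Counter

# Fixed scale order so summaries line up across cycles.
SCALE = ["Strongly Disagree", "Disagree", "Neutral", "Agree", "Strongly Agree"]

# Hypothetical responses for one statement across two planning cycles.
# The statement label and every answer below are illustrative, not real data.
responses = {
    "Enough time to plan the summit": {
        "past summit":   ["Neutral", "Neutral", "Agree", "Neutral"],
        "current cycle": ["Neutral", "Neutral", "Neutral", "Agree"],
    },
}

for statement, cycles in responses.items():
    print(statement)
    for cycle, answers in cycles.items():
        tally = Counter(answers)
        row = ", ".join(f"{s}: {tally[s]}" for s in SCALE if tally[s])
        print(f"  {cycle}: {row}")
```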