Rereading Fisher: Experimental Design and Quality Improvement in Healthcare
My IHI colleague Gareth Parry and Maxine Power of Salford Royal NHS Foundation Trust, Salford, UK, recently published an editorial in BMJ Quality and Safety discussing the role of randomized controlled trials in improvement (Parry G, Power M. BMJ Quality and Safety 2015;0:1–3. doi:10.1136/bmjqs-2015-004862).
Parry and Power reviewed a specific study, also published in BMJ Quality and Safety: an experiment aimed at improving stroke care (Williams L, Daggett V, Slaven JE, et al. “A cluster-randomized quality improvement study to improve two inpatient stroke quality indicators.” BMJ Quality and Safety 2015. Published Online First 24 Aug 2015. doi:10.1136/bmjqs-2015-004188).
In the stroke study, hospitals were randomized (each hospital functioning as a cluster, with patients selected within it), so that each hospital received one of the two management interventions.
Parry and Power outline the limits of randomized controlled trials as a tool to help people choose beneficial management changes. Assessment of better management options seems to differ from the biomedical applications of RCTs that began with Austin Bradford Hill’s streptomycin trial, published in 1948. For example, Parry and Power advise experimenters to describe the heterogeneity of the hospitals in order to understand factors that may affect the impact of management changes.
Parry and Power go on to say: “Importantly, embracing heterogeneity does not rule out randomisation. Revisiting the work of Ronald A Fisher [pictured above], one of the pioneering figures in experimental design, suggests applying approaches such as factorial designs on a more local level.”
I appreciate their advice; I have a project coming up in early 2016 that is a candidate for a designed experiment where the treatments are management interventions. What kind of designs should we consider?
I pulled out my copy of Fisher's Design of Experiments (1st edition 1935; 8th edition 1966; Oliver and Boyd, Edinburgh) to begin to apply Fisher's perspective.
Here are my notes:
Addressing Variation among Hospitals: The Randomized Block Design (RBD)
In chapter IV of Design of Experiments, Fisher discusses in detail a design especially effective in agricultural research: the randomized block design.
In the randomized block design, treatments (like varieties of a crop) are applied to plots that sit side by side within a compact piece of land; each such piece is a block. Why does this arrangement shed light on important differences among the proposed treatments?
As Fisher describes in clear language, for a given level of experimental effort, the RBD typically allows you to estimate treatment effects with much greater precision than a simple randomized design in which each treatment is applied to one or more complete blocks. Blocks in agricultural experiments typically differ substantially from one another; within each block, on the other hand, conditions are relatively homogeneous. That local homogeneity is what drives the increase in precision: treatment comparisons are made within blocks, so block-to-block differences largely drop out of the comparison.
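To make Fisher's precision argument concrete for our setting, here is a minimal simulation sketch in Python. The numbers (20 hospitals, a treatment effect of 0.5, hospital-to-hospital spread four times the within-hospital spread) are made-up assumptions, not values from the stroke study; the point is only that comparing the interventions within hospitals removes hospital-to-hospital variation from the estimate.

```python
import numpy as np

rng = np.random.default_rng(42)
n_blocks, true_effect = 20, 0.5     # hypothetical values, not taken from the stroke study
block_sd, unit_sd = 2.0, 0.5        # large hospital-to-hospital spread, small within-hospital spread

def one_trial():
    block_means = rng.normal(0.0, block_sd, n_blocks)

    # Completely randomized design: two units per hospital, treatments assigned ignoring hospitals
    assign = rng.permutation(np.repeat([0, 1], n_blocks))
    blocks = np.repeat(np.arange(n_blocks), 2)
    y = block_means[blocks] + true_effect * assign + rng.normal(0.0, unit_sd, 2 * n_blocks)
    crd_est = y[assign == 1].mean() - y[assign == 0].mean()

    # Randomized block design: one unit per treatment inside every hospital
    y_a = block_means + rng.normal(0.0, unit_sd, n_blocks)
    y_b = block_means + true_effect + rng.normal(0.0, unit_sd, n_blocks)
    rbd_est = (y_b - y_a).mean()    # hospital effects cancel in the within-hospital difference
    return crd_est, rbd_est

est = np.array([one_trial() for _ in range(5000)])
print("SD of effect estimate, completely randomized:", round(float(est[:, 0].std()), 3))
print("SD of effect estimate, randomized block:     ", round(float(est[:, 1].std()), 3))
```

Under these assumed numbers, the blocked estimate should come out several times less variable than the unblocked one, which is Fisher's argument in miniature.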
How Would We Apply the Randomized Block Design to the Stroke Study?
Consider hospitals as blocks and then apply the two management interventions within each hospital. In the simplest case, variation among hospitals can be eliminated from the estimate of intervention effects.
The RBD focuses the analysis on the differing impacts of the management interventions; differences among hospitals are not of primary interest. In the RBD, you won't analyze the care of individual patients per se; you work with the aggregated results for the patients in each block x treatment combination. (By analogy with an agricultural trial, individual patients in the proposed RBD are like individual plants; in the agricultural trial, we work with an aggregated function of the plants, for example the total weight of the harvest.)
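Here is one way that aggregated analysis might look, as a sketch in Python with simulated data standing in for the real stroke-care indicators. The hospital labels, baseline rates, and the assumed five-point lift for intervention B are all invented for illustration; the structure (one aggregated value per hospital x intervention cell, then an additive two-way model) is the point.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated aggregated results: one value per hospital x intervention cell.
# All names and numbers are illustrative assumptions, not data from the study.
rng = np.random.default_rng(1)
rows = []
for h in [f"H{i}" for i in range(1, 11)]:
    base = rng.normal(0.70, 0.08)                  # hospital-specific baseline performance
    for trt, lift in [("A", 0.00), ("B", 0.05)]:   # assumed lift for intervention B
        rows.append({"hospital": h, "intervention": trt,
                     "pct_met": base + lift + rng.normal(0.0, 0.02)})
cells = pd.DataFrame(rows)

# Additive two-way model: hospital (block) effects plus the intervention effect.
# Hospital differences are modeled, so they do not inflate the intervention comparison.
model = smf.ols("pct_met ~ C(hospital) + C(intervention)", data=cells).fit()
print(sm.stats.anova_lm(model, typ=2))
```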
However, suppose you suspect an important hospital x intervention interaction, that is, the effect of a specific intervention depends on the hospital beyond a shift in the mean level of response associated with that hospital. Then we have to consider a generalized randomized block design (GRBD). A GRBD requires replication of the treatments within blocks; with that extra experimental information, you can assess the hospital x intervention interaction cleanly. For example, see Addelman, S. (1969), “The Generalized Randomized Block Design,” The American Statistician, 23, 35-36.
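And here is what the GRBD analysis could look like, again as a sketch with simulated data: two work units per hospital per intervention supply the replication, and the model adds a hospital x intervention interaction term. The unit counts, effect sizes, and the size of the hospital-specific response are all assumptions for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for h in [f"H{i}" for i in range(1, 11)]:
    base = rng.normal(0.70, 0.08)                      # hospital-specific baseline
    for trt in ["A", "B"]:
        # hospital-specific response to the intervention (the interaction we want to detect)
        cell_shift = (0.05 if trt == "B" else 0.0) + rng.normal(0.0, 0.03)
        for unit in [1, 2]:                            # replication: two work units per cell
            rows.append({"hospital": h, "intervention": trt, "unit": unit,
                         "pct_met": base + cell_shift + rng.normal(0.0, 0.02)})
grbd = pd.DataFrame(rows)

# With replication inside each cell, the hospital x intervention interaction is estimable.
model = smf.ols("pct_met ~ C(hospital) * C(intervention)", data=grbd).fit()
print(sm.stats.anova_lm(model, typ=2))                 # includes the C(hospital):C(intervention) term
```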
Is the RBD Feasible in Tests of Management Interventions?
By the structure of the RBD, you need multiple experimental units within each hospital, and each unit receives one level of the management intervention. You need at least as many units as levels of intervention, and you need multiple units per intervention within each hospital if you want to assess hospital x intervention interactions.
In the stroke experiment, if each hospital has only one unit that cares for stroke patients, then you can't apply the RBD.
Even if each hospital has at least two work units, one for each of the two interventions in the stroke study, you still face a challenge.
Assuming you can engage both work units in a study, you face this issue: will there be “cross-talk” between the units? Cross-talk means that instead of two distinct interventions, you may actually have two hybrid interventions that share bits and pieces of each other in unanticipated ways. In other words, the two interventions may not be as separate as you think. If you don't have a clear understanding of what each intervention actually was, your inferences are less solid than if you can maintain clean distinctions.
In agricultural experiments, researchers pay attention to something like the separation issue. For example, Fisher discusses “…the necessity of discarding a strip at the edge of each plot. The width of the strip depends on the competition of neighboring plants for moisture, soil nutrients and light…” (p. 61, 8th edition).
It is not yet clear to me how to assure this kind of separation in a designed experiment where the treatments are management interventions.
(In local quality improvement work you typically want cross-unit discussion and sharing of effective practices as you develop standard care processes common to both units. You want to find management structures and work practices that reliably deliver excellent care.)
Next Steps
I'll report on progress wrestling with experimental designs in the coming weeks. For now, I thank Parry and Power for the provocation to revisit Fisher.