Communicating causality
The ubiquity and importance of causal diagrams within epidemiology are evidenced by four articles presented in this issue of the European Journal of Epidemiology.
Sonja A. Swanson
Department of Epidemiology, Harvard T. H. Chan School of Public Health, 677 Huntington Avenue, Boston, MA 02115, USA
Department of Epidemiology, Erasmus Medical Center, P.O. Box 2040, 3000 CA Rotterdam, The Netherlands
Causal diagrams as (formal) story-telling

When and why are causal diagrams useful? One of the most evident successes of causal diagrams is in supplementing story-telling. With a few arrows and letters, an investigator can tell a story of a data-generating process.
For a reader fluent in causal diagrams, even a dauntingly complex story can now be quickly and fully digested. In this way, we have seen a series of "paradoxes" demystified, including proposed explanations for the so-called Berkson's [8], birth-weight [9], obesity [10], and Simpson's [11] paradoxes. Similarly, causal diagrams focused our attention on the structures of oft-overlooked potential biases, such as biases due to time-dependent confounding in stratification-based analyses [12], mediator-outcome confounding in mediation analyses [13], selecting on treatment in instrumental variable analyses [14], and naïve per-protocol restrictions in randomized trial analyses [15]. Readers familiar with causal diagrams will recognize that many of these examples can be described as collider-stratification biases, and that, while some encompass previously recognized threats to validity, these potential biases were infrequently mentioned until their associated causal diagrams were drawn.
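To make collider-stratification bias concrete, here is a minimal simulation sketch (not taken from the articles cited above; the variable names A, Y, and S and the effect sizes are invented for illustration). Two variables with no causal relation to each other both affect a third; restricting to a stratum of that common effect induces an association between them.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# A and Y are independent: no arrow between them in the assumed diagram.
A = rng.normal(size=n)
Y = rng.normal(size=n)
# S is a common effect (collider) of A and Y.
S = A + Y + rng.normal(size=n)

# Crude association between A and Y: essentially zero, as the diagram implies.
print(np.corrcoef(A, Y)[0, 1])
# Association within a stratum of the collider: clearly nonzero (negative here).
print(np.corrcoef(A[S > 1], Y[S > 1])[0, 1])
```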
Beyond demystifying perplexing patterns or
illuminating subtle problems that exist across many studies, causal
diagrams can also facilitate debates regarding a specific
study’s conclusions. Consider two investigators who are in
disagreement over whether a specific study’s analysis and
conclusions were appropriate. If these two investigators
"speak DAG" (directed acyclic graph), then they may
seamlessly convey their assumptions and ideas to one
another with little fear of miscommunication. Perhaps the
two investigators will realize they had different causal
diagrams in mind, and that favoring one analytic approach
over another depends on which causal diagram is drawn—
and thus on particular assumptions that, undrawn, might
have suggested favoring a different analysis. Perhaps they
will even be able to collect further data to help settle on
which causal diagram—which set of assumptions—is more
reasonable. Such discussions, which can be cumbersome
and confusing without a formal language, can take place
quickly and explicitly when supplemented with causal
diagrams.
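As a hypothetical version of such a disagreement, suppose the investigators dispute whether to adjust for a covariate L when estimating the effect of A on Y. Under one candidate diagram L is a common cause of A and Y, so adjustment removes bias; under the other L is a common effect, so adjustment introduces bias. The following sketch, with invented variable names and effect sizes, simply illustrates how the preferred analysis follows from the assumed diagram.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

def crude_and_adjusted(A, Y, L):
    """Crude slope of Y on A, and the L-adjusted slope from a linear regression."""
    crude = np.polyfit(A, Y, 1)[0]
    X = np.column_stack([A, L, np.ones(len(A))])
    adjusted = np.linalg.lstsq(X, Y, rcond=None)[0][0]
    return crude, adjusted

# Diagram 1: L is a common cause of A and Y (true effect of A on Y is 1).
L = rng.normal(size=n)
A = L + rng.normal(size=n)
Y = A + L + rng.normal(size=n)
print(crude_and_adjusted(A, Y, L))   # crude ~1.5 (confounded), adjusted ~1.0

# Diagram 2: L is a common effect of A and Y (true effect of A on Y is 1).
A = rng.normal(size=n)
Y = A + rng.normal(size=n)
L = A + Y + rng.normal(size=n)
print(crude_and_adjusted(A, Y, L))   # crude ~1.0, adjusted markedly biased (collider bias)
```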
In these ways, a causal diagram, like a picture, is worth
one thousand words. Unlike artwork, however, where the
"thousand words" convey a subjective perspective, a
causal diagram should convey exactly the thousand words
its creator and all other fluent readers would attribute to it.
Causal diagrams are useful because they facilitate precise
communication, but ignoring the formal rules that govern
them can lead to miscommunication. For examples of this, we can turn to an article in this issue of the European Journal of Epidemiology in which Greenland and Mansournia [3] caution that failing to read a causal DAG as encoding only structural (not random) confounding, or failing to be explicit when faithfulness is presumed, can lead readers of a causal diagram to perceive a different "thousand words" than intended.
As with any tool that can streamline communication, there is also a danger that causal diagrams provide a false sense of security when they are constructed without investigators applying deep thought and subject-matter knowledge. To see this, consider the use of causal diagrams
in the context of instrumental variable analyses. Many
epidemiology studies with instrumental variable analyses
redraw the same textbook instrumental variable causal
diagram to justify their analysis, yet the story is rarely as
straightforward as the one depicted in that causal diagram.
Hernán and Robins [16], Swanson et al. [14], and
VanderWeele et al. [17] have presented expanded versions of
this standard graph that illustrate relatively subtle yet
potentially common ways in which bias could arise. Thus,
redrawing the textbook version of a causal diagram may
oversimplify the likely data-generating process and even
offer false comfort when applied to a specific study. Of
note, some have argued that causal diagrams are not useful
in the context of instrumental variable analyses because
"the" DAG seems so simple that drawing it does not add to
our understanding of the process [18]. While causal
diagrams (arguably) add less to our understanding of what is a
true instrument, we have seen many examples of causal
diagrams adding substantially to our understanding of what
is not an instrument.
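To illustrate how a seemingly simple instrumental variable story can go wrong, here is a minimal simulation sketch (hypothetical variable names Z, A, Y, and U, and invented effect sizes; not drawn from the cited articles). Under the textbook diagram the proposed instrument Z affects the outcome Y only through treatment A, and the standard ratio (Wald) estimator recovers the effect of A on Y; if Z also affects Y directly, the same estimator is biased even though the analysis code looks identical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

def wald_estimate(direct_effect_of_z_on_y):
    U = rng.normal(size=n)                # unmeasured common cause of A and Y
    Z = rng.binomial(1, 0.5, size=n)      # proposed instrument
    A = 0.8 * Z + U + rng.normal(size=n)  # treatment
    Y = 1.0 * A + U + direct_effect_of_z_on_y * Z + rng.normal(size=n)
    # Ratio (Wald) estimator of the effect of A on Y.
    return (Y[Z == 1].mean() - Y[Z == 0].mean()) / (A[Z == 1].mean() - A[Z == 0].mean())

print(wald_estimate(0.0))   # ~1.0: textbook diagram holds, true effect recovered
print(wald_estimate(0.5))   # biased: Z is not an instrument (direct arrow from Z to Y)
```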
If two (...truncated)