[R] Diamond graphs, again.

Richard A. O'Keefe ok at cs.otago.ac.nz
Fri Sep 26 08:30:00 CEST 2003


I have now obtained copies of all three medical papers that
the "Diamond Graphs" article based its examples on.

Figures 4 and 5: "Blood Pressure and End-Stage Renal Disease in Men".

    The two predictor variables (systolic and diastolic blood pressure)
    are not only continuous, they are correlated.  Recoding as some kind
    of "size" (c1.diastolic + c2.systolic) and "shape" (maybe
    log(systolic/diastolic)) might have been interesting.
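
    A concrete version of that recoding (the readings below are invented,
    and the weights c1 = c2 = 1 are only placeholders; the suggestion
    leaves them open):

        systolic  <- c(118, 135, 150, 172)     # made-up readings
        diastolic <- c( 76,  88,  95, 104)
        size  <- 1 * diastolic + 1 * systolic  # c1 = c2 = 1, for illustration only
        shape <- log(systolic / diastolic)
        plot(size, shape)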
    
    The real summary that I think anyone reading that paper would rely
    on is not the 3d bar chart (figure 2) but a table (table 3) which
    relates blood pressure category (optimal, normal, high-normal,
    stage 1/2/3/4 hypertension) to adjusted relative risk (with 95%
    confidence interval).

    Reading the article, I see that other (listed) factors also affected
    relative risk, and it could have been useful to present some kind of
    multi-dimensional table.

    Comparing the original 3d bar plot and table with the diamond graph,
    two things stand out:
    (a) the higher the bar (= the bigger the hexagon), the *less* the
        amount of data it is based on.  This can be seen very clearly
        in the table; it cannot be seen at all in either the bar plot
        or the diamond graph.  If I'm reading the article correctly
        (hard, because the table and 3d bar plot don't use exactly the
        same categories), the lowest bar is based on 40 times as much
        data as the highest bar (and the relative risk has a suitably
        wide confidence interval).
    (b) one would expect the risk to increase monotonically with
        each predictor.  It doesn't.  This stands out very clearly
        in the 3d bar plot.  It is very hard to see at all in the
        diamond graph.  Once I saw it in the 3d plot, I could (just)
        detect it in the diamond graph, but the diamond graph would
        never have called my attention to it.

    In fairness to both the 3d bar plot and the diamond graph, they
    _could_ be made to show an equivalent of error bars.  Let the
    bar (or hexagon) be coloured black from 0 to the lower end-point
    of the confidence interval, then red (if colour is desired) or
    grey (if it is not) from the lower end-point of the confidence
    interval to the upper end-point, with a "black belt" at the nominal
    value.

    [Oh DRAT!  I could have patented that extension to diamond graphs!
     Tsk tsk.  I'll never get rich, I'll always be 'ard up.]
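
    Something like that could be done in R; a minimal sketch, with the
    relative risks and confidence intervals invented purely to show the
    colouring scheme:

        rr <- c(1.0, 1.6, 2.8, 4.1)    # nominal relative risks (made up)
        lo <- c(0.8, 1.1, 1.5, 1.3)    # lower 95% CI end-points (made up)
        hi <- c(1.3, 2.4, 5.2, 13.0)   # upper 95% CI end-points (made up)
        n  <- length(rr)
        plot(0, 0, type = "n", xlim = c(0.5, n + 0.5), ylim = c(0, max(hi)),
             xlab = "category", ylab = "relative risk")
        for (i in 1:n) {
            rect(i - 0.3, 0,     i + 0.3, lo[i], col = "black") # solid up to lower end-point
            rect(i - 0.3, lo[i], i + 0.3, hi[i], col = "grey")  # grey across the interval
            segments(i - 0.3, rr[i], i + 0.3, rr[i], lwd = 3)   # "black belt" at the estimate
        }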

Figure 7:  "Dual Effects of Weight and Weight Gain on Breast Cancer Risk".

    In my previous message, I commented that I found it hard to believe
    that weight *change* should be considered alone.  It wasn't.  In fact,
    that's part of the point of the article.

    I also commented that it seemed to me that the one categorical
    predictor was probably a surrogate for a continuous variable.
    Imagine me slapping my head and saying "but I _knew_ that!"

    The explanation is in the editor's comment, not cited in the Diamond
    Graphs paper, so here it is:
	Editorial, "Weight and Risk for Breast Cancer",
	Jennifer L. Kelsey & John Baron,
	JAMA, November 5, 1997--Vol 278, No. 17
    The point is that weight and hormone treatment are *both* surrogates
    for "lifetime estrogen dose profile".  In post-menopausal women,
    female hormones _are_ still produced, in fat (which is an active
    tissue).  I _knew_ that.  So in fact there is a _single_ explanatory
    variable (some kind of weighted cumulative exposure) which both
    hormone therapy and body mass index affect.  This raises the obvious
    point that adding a third predictor (typical hormone levels during
    years of fertility) might well be very informative.  But how would
    diamond graphs cope with that?

    But wait:  the abstract says "Higher [body mass index] was associated
    with LOWER breast cancer incidence before menopause" but a "positive
    relationship was seen among postmenopausal women who had never used
    hormone replacement".  It also says "Weight gain after the age of 18
    years was UNRELATED to breast cancer incidence before menopause but
    was POSITIVELY associated with incidence after menopause".  The
    editorial cited above makes this point also.

    That is, in order to see the results of that study, you need a
    display which
	- shows weight
	- shows weight change
	- shows hormone therapy use
	- distinguishes between breast cancer before menopause
	  and breast cancer after menopause.

    The first sentence in the body of the paper is "The relation of
    body weight to breast cancer is complex."

    If there is an easy way to produce an "equiponderant display" with
    three predictors on a two-dimensional piece of paper, I do not know
    what it may be.  It's certain that diamond graphs, as described in
    the TAS article, cannot do justice to the data from this study.

    In contrast, the tables in the paper made the difference between    
    pre- and post-menopausal outcomes clear, and above all, included
    confidence intervals.  Why do the confidence intervals matter?
    Well, table 2 of the paper shows that the "multivariate-adjusted
    relative risk" confidence intervals for premenopausal women all
    contain 1 (with a fairly high p for trend), so there _might_ not,
    on this evidence, be any effect at all, while the relative risk
    confidence intervals for postmenopausal women all contain 1 except
    for gains of 20kg or more (where the relative risk could be as low
    as 1.2).  Since the study was based on 1000 premenopausal women and
    1517 postmenopausal ones, the effect, while biologically plausible,
    doesn't appear to be anywhere near as strong as one might fear.

    Once again, BOTH 3d bar plots AND diamond graphs are at fault for
    not giving any indication of variability/noise/error bars/...,
    and BOTH could be fiddled with to improve this.  In this case,
    it is quite impossible to see from the diamond graphs in figure
    7 of the TAS article what is quite clear from the tables in the
    original source.


My background is AI, not medicine, so I came to these articles with a
"machine learning" bias.  I was expecting to see models trained on a
subset of the data and evaluated on another subset (cross-validation).
None of them did.  One of the many things to like about R is that it
makes it comparatively easy to do cross-validation.
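
A rough sketch of what I mean, using simulated data (the data frame and
model are made up; the cv.glm() function in the "boot" package does this
kind of thing more generally):

    set.seed(1)
    mydata <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
    mydata$y <- 2 * mydata$x1 - mydata$x2 + rnorm(100)

    k     <- 10                                     # 10-fold cross-validation
    folds <- sample(rep(1:k, length.out = nrow(mydata)))
    err   <- numeric(k)
    for (i in 1:k) {
        fit    <- lm(y ~ x1 + x2, data = mydata[folds != i, ])  # fit on k-1 folds
        pred   <- predict(fit, newdata = mydata[folds == i, ])  # predict held-out fold
        err[i] <- mean((mydata$y[folds == i] - pred)^2)
    }
    mean(err)    # cross-validated mean squared error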

What have we seen as common themes?

1.  The so-called "categorical" variables were (in 5 out of 6 cases)
    measured as continuous variables and then cut to quartiles or
    quintiles or the like.

2.  More than two explanatory variables were considered in the sources,
    and in each case more than two were actually important, or at least useful.

3.  Presenting information without "error bars" can be seriously misleading.

How does R help?

1.  R lets us do scatter plots, smoothing, density estimation, &c.

2.  R gives us "lattice" plots, amongst others.

3.  We can construct graphs with error bars in R.  (A sketch combining
    these three points follows.)
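
For instance (simulated data again, purely to show the machinery):

    set.seed(2)
    x <- runif(200, 90, 180)                          # a continuous predictor
    g <- factor(sample(c("pre", "post"), 200, replace = TRUE))
    y <- 0.02 * x + 0.5 * (g == "post") + rnorm(200, sd = 0.3)

    ## 1: keep the predictor continuous: scatter plot with a lowess smooth
    plot(x, y)
    lines(lowess(x, y), lwd = 2)

    ## 2: a lattice plot, conditioning on the grouping factor
    library(lattice)
    print(xyplot(y ~ x | g))

    ## 3: group means with rough error bars (+/- 2 standard errors);
    ##    the grouping by quartiles of x is only for the sake of the display
    grp <- cut(x, quantile(x), include.lowest = TRUE)
    m   <- tapply(y, grp, mean)
    se  <- tapply(y, grp, function(z) sd(z) / sqrt(length(z)))
    at  <- seq(along = m)
    plot(at, m, xaxt = "n", ylim = range(m - 2 * se, m + 2 * se),
         xlab = "quartile of x", ylab = "mean of y")
    axis(1, at = at, labels = names(m))
    arrows(at, m - 2 * se, at, m + 2 * se, angle = 90, code = 3, length = 0.05)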

The big challenge seems to be graphical presentation of higher-
dimensional data, things like spinning plots, grand tours, &c.
And for that, there's Rgobi.



