[R] When is *interactive* data visualization useful to use?

Claudia Beleites cbeleites at units.it
Fri Feb 11 20:21:55 CET 2011


Dear Tal, dear list,

I think the importance of interactive graphics has a lot to do with how visually your 
scientific discipline works. I'm a spectroscopist, and I think we are very 
visually oriented: if I think of a spectrum, I mentally see a graph.

So for that kind of work I need a lot of interaction (of the type: plot, change a bit, 
plot again).
One example is the removal of spikes from Raman spectra (caused e.g. by cosmic 
rays hitting the detector). It is fairly easy to compute a list of suspicious 
signals. It is already much more complicated to find the actual beginning and 
end of a spike. And it is really difficult to avoid false positives with a fully 
automatic procedure, because the spectra can look very different for different 
samples. It would take me far longer to find a computational description of 
what a spike is than to interactively accept or reject the automatically marked 
suspects. Even though it feels like drudgery ;-)
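
To give an idea of what I mean, here is a minimal, made-up sketch in base R (the 
synthetic spectrum, the second-difference criterion and the threshold are all 
illustrative assumptions, not my actual code):

## flag suspiciously sharp points in one spectrum, then confirm them by eye
## all data and thresholds below are made up for illustration
set.seed(1)
wl  <- seq(400, 1800, by = 2)                            # wavenumber axis / cm^-1
spc <- 50 * exp(-(wl - 1000)^2 / 2e4) + rnorm(length(wl))
spc[c(150, 151, 480)] <- spc[c(150, 151, 480)] + c(80, 40, 120)   # artificial spikes

d2      <- diff(spc, differences = 2)        # second difference reacts to sharp features
suspect <- which(abs(d2) > 6 * mad(d2)) + 1  # crude robust threshold

plot(wl, spc, type = "l", xlab = "wavenumber / cm^-1", ylab = "intensity")
points(wl[suspect], spc[suspect], col = "red", pch = 19)

## the interactive part: click the flagged points you accept as real spikes
## accepted <- identify(wl, spc)

The automatic part gets me the red candidates cheaply; accepting or rejecting them 
is where the eye (and the clicking) comes in.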

Roughly the same applies to the choice of pre-processing, such as baseline 
correction. A number of different physical causes can produce different kinds of 
baselines, and usually you don't know which process contributes to what extent. 
In practice, experience suggests a method; I apply it and check whether the 
result looks as expected. I'm not aware of any performance measure that would 
indicate success here.
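
Continuing the made-up example above, the "apply and check" step can be as simple as 
overlaying a couple of candidate baselines (the plain polynomial fits are an arbitrary 
assumption; in real life dedicated baseline routines would take their place):

## two crude baseline candidates for the toy spectrum above, judged by eye
bl2 <- lm(spc ~ poly(wl, 2))   # stiff baseline
bl5 <- lm(spc ~ poly(wl, 5))   # more flexible -- may already eat real signal
plot(wl, spc, type = "l", xlab = "wavenumber / cm^-1", ylab = "intensity")
lines(wl, fitted(bl2), col = "blue")
lines(wl, fitted(bl5), col = "red")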

The next point where interaction is needed pops up because my data has, e.g., both 
spatial and spectral dimensions. So, usually, do the models: e.g. in a PCA, the loadings 
would usually capture the spectroscopic direction, whereas the scores belong to 
the spatial domain. So I have "connected" graphs: the spatial distribution 
(intensity map, score map, etc.) and the spectra (or loadings).
As soon as I have such connections I wish for interactive visualization:
I go back and forth between the plots: what is the spectrum that belongs to this 
region of the map? Where on the sample are the high intensities of this band? What 
substance is behind that? If it is x, the intensities at that other spectral 
band should correlate. And then I want to compare this to the scatterplot (pairs 
plot of the PCA scores) or to a dendrogram from HCA...
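
A poor man's version of that back and forth, again as a made-up sketch (the toy map, 
the variable names and the use of prcomp()/identify() are my assumptions; dedicated 
linked-brushing tools do this far more comfortably):

## toy data: a 20 x 20 map of spectra built from two components
set.seed(2)
wl    <- seq(400, 1800, by = 5)
grid  <- expand.grid(x = 1:20, y = 1:20)
comp1 <- exp(-(wl -  800)^2 / 5e3)
comp2 <- exp(-(wl - 1400)^2 / 5e3)
conc1 <- exp(-((grid$x -  6)^2 + (grid$y -  6)^2) / 20)
conc2 <- exp(-((grid$x - 14)^2 + (grid$y - 14)^2) / 20)
spcmap <- conc1 %o% comp1 + conc2 %o% comp2 +
          matrix(rnorm(nrow(grid) * length(wl), sd = 0.01), nrow(grid))

pca <- prcomp(spcmap)

op <- par(mfrow = c(1, 2))
## loading: which spectral bands carry PC 1?
plot(wl, pca$rotation[, 1], type = "l",
     xlab = "wavenumber / cm^-1", ylab = "loading", main = "PC 1 loading")
## score map: where on the sample is PC 1 strong? (drawn last so identify() acts on it)
plot(grid$x, grid$y, pch = 15,
     col = grey(1 - (pca$x[, 1] - min(pca$x[, 1])) / diff(range(pca$x[, 1]))),
     xlab = "x", ylab = "y", main = "PC 1 score map")
par(op)

## the interactive part: click a map position, then look at the spectrum behind it
## i <- identify(grid$x, grid$y, n = 1); plot(wl, spcmap[i, ], type = "l")

That answers "which spectrum belongs to this region of the map?" for one model and one 
pair of plots; the real wish is to have all of those views linked at once.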

Also, exploration is not just a prerequisite for modelling; frequently it already is 
the proper scientific work itself (particularly in basic science). All the 
more so if you include exploring the models: now, which of the bands are 
actually used by my predictive models? Which samples get their predictions 
because of which spectral feature?
And the "statistical outliers" may very well be just the interesting part of 
the sample. Outlier statistics cannot interpret the data in terms of 
interesting vs. crap.

For presentation* of results, I personally think that most of the time a careful 
selection of static graphs is much better than live interaction.
*The kind where you talk to an audience far away from your work computer, as 
opposed to sitting down with your client/colleague and analysing the data together.

> It could be argued that the interactive part is good for exploring (For
> example) a different behavior of different groups/clusters in the data. But
> when (in practice) I approached such situation, what I tended to do was to
> run the relevant statistical procedures (and post-hoc tests)
As long as the relevant measure exists, sure.
Yet, being a non-statistician, I focus on the physical/chemical 
interpretation. Summary statistics are one set of tools for me, and interactive 
visualisation is another (though the two overlap).

I may want to subtract the influence of the overall, unchanging sample matrix 
(that would be the minimal intensity at each wavelength). But the minimum 
spectrum is too noisy, so I use a quantile. Which one? That depends on the data. I'll 
have a look at a series (say, the 2nd to 10th percentile) and decide, trading off 
noise against whether any new signals appear. I honestly think nothing is 
gained if I sit down and try to write a function that scores the similarity to the 
minimum spectrum and the noise level: all the more so as it just shifts the need for 
a decision (how much noise outweighs what intensity of real signal being 
subtracted?). It is a decision I need to take, whether by numbers or by eye. And 
after all, my professional training was supposed to enable me to take this 
decision, and I'm paid (also) for being able to take it efficiently 
(i.e. to make a reasonably good choice within a reasonable time).
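
In code, "have a look at a series of quantile spectra" is essentially one apply() call 
plus a plot. Sketch only, reusing the hypothetical spcmap/wl from the PCA toy example 
above, with the percentiles following the 2nd-to-10th range mentioned:

## one quantile spectrum per percentile, overlaid for visual comparison
probs <- seq(0.02, 0.10, by = 0.02)
qspc  <- apply(spcmap, 2, quantile, probs = probs)   # rows: percentiles, cols: wavelengths

matplot(wl, t(qspc), type = "l", lty = 1, col = seq_along(probs),
        xlab = "wavenumber / cm^-1", ylab = "intensity")
legend("topright", legend = rownames(qspc), col = seq_along(probs), lty = 1, bty = "n")

## once a percentile is chosen (say the 4th), subtract that spectrum from all spectra
## spcmap_corr <- sweep(spcmap, 2, qspc["4%", ])

Looking at that overlay and deciding is exactly the eye-versus-number trade-off above.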

It may also have to do with a complaint a colleague from a computational data 
analysis group once made: he said the trouble with us spectroscopists is that our 
problems are either so easy that there's no fun in solving them, or too hard to solve.

> - and what I
> found to be significant I would then plot with colors clearly dividing the
> data to the relevant groups. From what I've seen, this is a safer approach
> then "wondering around" the data (which could easily lead to data dredging
> (were the scope of the multiple comparison needed for correction is not even
> clear).
Sure, yet:
- Isn't that what validation was invented for (I mean with a proper, new, 
[double-]blind test set after you have decided on your parameters)?
- Summarizing a whole data set into a few numbers without having looked at the 
data itself may not be safe, either.
- Saving a few comparisons shouldn't come at the cost of risking a bad modelling 
strategy and badly fitted parameters because the data was not properly 
examined.

My 2 ct,

Claudia (who in practice warns far more frequently of multiple comparisons and of 
validation sets being compromised (not independent) than of too little data 
exploration ;-) )

-- 
Claudia Beleites
Dipartimento dei Materiali e delle Risorse Naturali
Università degli Studi di Trieste
Via Alfonso Valerio 6/a
I-34127 Trieste

phone: +39 0 40 5 58-37 68
email: cbeleites at units.it


