I need to assess the validity of an approach used by a third party.

They describe it as follows: "First, we did a paired t-test on each time point across the whole time course of [variable X] to determine statistically significant differences between conditions A and B [for our group of subjects]. A Shapiro-Francia normality test was performed before the paired t-test for each time point, and outliers were removed to ensure normal distribution."

In other words, they have subjects 1..n measured in conditions A and B, generating a time series per subject in each condition. They isolate each timepoint 1..k and test the difference with a paired t-test, obtaining:

t1 = t-test([s1A(1) ... snA(1)], [s1B(1) ... snB(1)])

...

tk = t-test([s1A(k) ... snA(k)], [s1B(k) ... snB(k)])

where siA(j) denotes subject i's measurement at timepoint j in condition A.

They then correct for family-wise error (FWE) across the k tests.
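As I understand it, their pipeline can be sketched like this. This is a minimal sketch in Python; the array layout and the choice of Holm's step-down correction for FWE are my assumptions, since the description does not say which correction was used:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, k = 20, 50  # n subjects, k timepoints (illustrative sizes)

# cond_A[i, j]: subject i's measurement at timepoint j in condition A
cond_A = rng.normal(size=(n, k))
cond_B = rng.normal(size=(n, k))

# One paired t-test per timepoint: t_j = t-test(A[:, j], B[:, j])
t_vals, p_vals = stats.ttest_rel(cond_A, cond_B, axis=0)

# FWE correction across the k tests (Holm-Bonferroni step-down, my assumption):
# sort p ascending, multiply the r-th smallest by (k - r + 1), enforce
# monotonicity with a running max, cap adjusted p at 1.
order = np.argsort(p_vals)
adj = np.empty(k)
running_max = 0.0
for rank, idx in enumerate(order):
    running_max = max(running_max, (k - rank) * p_vals[idx])
    adj[idx] = min(1.0, running_max)

significant = adj < 0.05
print(significant.sum(), "of", k, "timepoints survive correction")
```

With the toy null data above (both conditions drawn from the same distribution), typically nothing survives correction; with real data the significant mask picks out the timepoints they would report.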

So, they have two time series of the same variable, from two conditions, and they want to show a statistically significant difference between the conditions. My intuition says they should use something like confidence bands: a principled test that two time series are drawn from separate distributions can be obtained by simultaneous confidence band methods (e.g. Korpela et al., 2014 - https://dl.acm.org/citation.cfm?id=2664081).
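For concreteness, one way to build a simultaneous band is a max-t sign-flip permutation. To be clear, this is not Korpela et al.'s exact algorithm (they construct minimum-width envelopes); it is a simpler standard construction that illustrates the idea, and all sizes here are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 20, 50
diff = rng.normal(size=(n, k))  # paired differences A - B, one row per subject

mean = diff.mean(axis=0)
se = diff.std(axis=0, ddof=1) / np.sqrt(n)

# Permutation null: randomly flip the sign of each subject's WHOLE series.
# This preserves the within-series correlation structure, so the band
# automatically adapts to the non-iid data.
n_perm = 2000
max_t = np.empty(n_perm)
for b in range(n_perm):
    signs = rng.choice([-1.0, 1.0], size=n)[:, None]
    d = diff * signs
    t = d.mean(axis=0) / (d.std(axis=0, ddof=1) / np.sqrt(n))
    max_t[b] = np.abs(t).max()

# Simultaneous 95% band: the whole true mean curve lies inside with ~0.95 prob
crit = np.quantile(max_t, 0.95)
lower, upper = mean - crit * se, mean + crit * se

# Any timepoint where the band excludes zero is a simultaneous-level finding
print("band excludes 0 at", np.sum((lower > 0) | (upper < 0)), "timepoints")
```

Note that the critical value here is the quantile of the maximum over all timepoints, so the band is wider than pointwise intervals by construction, which is exactly the narrowing issue raised below.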

I would intuit that the pointwise t-test approach is undermined by non-iid data, even though they are testing each point separately. But I'm not certain about this. My intuition, I think, stems from the fact that error bars calculated independently at each time point are narrower than a confidence band calculated simultaneously across the whole time series, so pointwise testing understates the multiple-comparison problem created by temporal correlation. But I would wish to make this case more clearly, and without possibility of error!
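This intuition can be checked numerically. The simulation below (my own construction; sizes, AR coefficient, and simulation count are arbitrary) draws null data with strong within-subject autocorrelation and measures the family-wise false-positive rate of uncorrected pointwise tests versus Bonferroni-corrected ones. One caveat worth noting: Bonferroni-type FWE corrections remain valid under dependence (they are merely conservative), so the pointwise-plus-FWE pipeline is not invalid per se; the inflation appears when pointwise intervals are read without simultaneous correction:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, k, n_sim, rho = 20, 50, 500, 0.9

def ar1_series(n_subj, n_time):
    """Smooth null differences: AR(1) noise per subject, zero true mean."""
    x = np.zeros((n_subj, n_time))
    x[:, 0] = rng.normal(size=n_subj)
    for t in range(1, n_time):
        x[:, t] = rho * x[:, t - 1] + np.sqrt(1 - rho**2) * rng.normal(size=n_subj)
    return x

any_hit_uncorrected = 0
any_hit_bonferroni = 0
for _ in range(n_sim):
    d = ar1_series(n, k)  # null: no true condition difference anywhere
    # One-sample t-test on the differences == paired t-test on A vs B
    _, p = stats.ttest_1samp(d, 0.0, axis=0)
    any_hit_uncorrected += (p < 0.05).any()
    any_hit_bonferroni += (p < 0.05 / k).any()

fwer_unc = any_hit_uncorrected / n_sim
fwer_bon = any_hit_bonferroni / n_sim
print(f"FWER uncorrected: {fwer_unc:.2f}, Bonferroni: {fwer_bon:.2f}")
```

In runs like this the uncorrected family-wise rate is far above the nominal 5%, while Bonferroni stays at or below it (and under strong correlation, well below it, i.e. conservative), which is where band methods that adapt to the correlation structure gain power.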

All thoughts appreciated.
