Ranting on MVPA

Tags: Statistics, Machine Learning

The use of MVPA for signal detection/localization in neuroimaging has troubled me for a long time. Somehow the community refuses to acknowledge that, for the purpose of localization, multivariate tests (e.g. Hotelling’s \(T^2\)) are preferable. Why are multivariate tests preferable to accuracy tests? (A concrete sketch follows the list.)

  1. They are more powerful.
  2. They are easier to interpret.
  3. They are easier to implement.
  4. Because they do not require cross-validation:
    1. They are computationally faster.
    2. They do not suffer from biases in the cross-validation scheme.
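
To make the alternative concrete, here is a minimal sketch of a two-sample Hotelling’s \(T^2\) test, the kind of multivariate test I have in mind. The data shapes, the NumPy/SciPy implementation, and the multivariate-normality assumption behind the exact F reference distribution are mine for illustration; in practice you may need a regularized covariance estimate when the number of voxels approaches the number of trials.

```python
# A minimal sketch, assuming two conditions with multivariate
# (voxel-pattern) observations. Illustrative, not prescriptive.
import numpy as np
from scipy import stats

def hotelling_t2_two_sample(x, y):
    """Two-sample Hotelling's T^2 test.

    x : (n1, p) array of condition-A patterns
    y : (n2, p) array of condition-B patterns
    Returns the T^2 statistic and a p-value from the exact
    F distribution (valid under multivariate normality).
    """
    n1, p = x.shape
    n2, _ = y.shape
    diff = x.mean(axis=0) - y.mean(axis=0)
    # Pooled covariance; requires n1 + n2 - 2 > p (else regularize).
    s_pooled = ((n1 - 1) * np.cov(x, rowvar=False) +
                (n2 - 1) * np.cov(y, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(s_pooled, diff)
    # T^2 maps to an F statistic with (p, n1 + n2 - p - 1) d.f.
    f_stat = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * t2
    p_value = stats.f.sf(f_stat, p, n1 + n2 - p - 1)
    return t2, p_value

# Toy usage: 20 trials per condition, 5 voxels.
rng = np.random.default_rng(0)
t2, pval = hotelling_t2_two_sample(
    rng.standard_normal((20, 5)) + 0.5,  # condition A, shifted mean
    rng.standard_normal((20, 5)))        # condition B
print(f"T^2 = {t2:.2f}, p = {pval:.3f}")
```

Note that nothing here is fitted or cross-validated: one closed-form statistic, one p-value.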

I read and referee papers whose authors go to great lengths to interpret their “funky” results. To them I say: your cross-validation scheme is biased and your test statistic is leaving power on the table! Please consult a statistician and replace your MVPA with a multivariate test. For a more “scientific” explanation, read [1] and [2].

If you justify the use of prediction accuracy on the grounds that it doubles as an effect size, then please acknowledge that effect-size estimation is a different problem from localization, and read the multivariate effect-size literature (e.g. [3]).
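
To name one standard option (my pick, not one singled out in [3]): the Mahalanobis distance \(D^2 = (\bar{x}_1 - \bar{x}_2)' S^{-1} (\bar{x}_1 - \bar{x}_2)\) measures the separation between condition means in pattern space, and relates to the two-sample statistic via \(T^2 = \frac{n_1 n_2}{n_1 + n_2} D^2\), so that effect magnitude is decoupled from sample size.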

When would I really want to use prediction accuracy as a test statistic? When the scientific question is decoding itself and not localization, as in brain-computer interfaces, where out-of-sample prediction is exactly the quantity of interest.
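
For contrast, here is a minimal sketch of the cross-validated accuracy pipeline, the one I would reserve for decoding. The toy data shapes, the linear SVM, and the permutation count are illustrative assumptions, not anything prescribed above.

```python
# A minimal sketch of the cross-validated accuracy test, calibrated
# against a permutation null; appropriate for decoding questions.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import StratifiedKFold, permutation_test_score

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 50))  # 40 trials x 50 voxels (toy data)
y = np.repeat([0, 1], 20)          # two experimental conditions

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
accuracy, _, p_value = permutation_test_score(
    LinearSVC(), X, y, cv=cv,
    n_permutations=200, scoring="accuracy", random_state=0)
print(f"accuracy = {accuracy:.2f}, permutation p = {p_value:.3f}")
```

Compare the cost: 200 permutations times 5 folds of classifier fitting, versus one closed-form \(T^2\) above.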


[1] Rosenblatt, Jonathan, Roee Gilron, and Roy Mukamel. “Better-Than-Chance Classification for Signal Detection.” arXiv preprint arXiv:1608.08873 (2016).

[2] Gilron, Roee, et al. “What’s in a pattern? Examining the type of signal multivariate analysis uncovers at the group level.” NeuroImage 146 (2017): 113-120.

[3] Olejnik, Stephen, and James Algina. “Measures of effect size for comparative studies: Applications, interpretations, and limitations.” Contemporary Educational Psychology 25.3 (2000): 241-286.

Written on September 3, 2017