How Many Trials Should You Include in Your ERP Experiment?

Boudewyn, M. A., Luck, S. J., Farrens, J. L., & Kappenman, E. S. (in press). How many trials does it take to get a significant ERP effect? It depends. Psychophysiology.

One question we often get asked at ERP Boot Camps is how many trials should be included in an experiment to obtain a stable and reliable version of a given ERP component. It turns out there is no single answer to this question that can be applied across all ERP studies. 

In a recent paper published in Psychophysiology in collaboration with Megan Boudewyn, a project scientist at UC Davis, we demonstrated how the number of trials, the number of participants, and the magnitude of the effect interact to influence statistical power (i.e., the probability of obtaining p<.05). One key finding was that doubling the number of trials recommended by previous studies led to more than a doubling of statistical power under many conditions. Interestingly, increasing the number of trials had a bigger effect on statistical power for within-participants comparisons than for between-group analyses. 
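The intuition behind this interaction can be sketched with a quick Monte Carlo simulation. This is an illustrative toy model, not the paper's actual analysis: all of the numbers (effect size, noise levels, number of participants) are made-up assumptions. The key idea is that averaging N trials shrinks single-trial measurement noise by the square root of N, which in turn changes the effective effect size and therefore statistical power.

```python
import numpy as np

rng = np.random.default_rng(1)

def estimated_power(n_participants, n_trials, effect=1.0, subject_sd=1.0,
                    single_trial_sd=10.0, t_crit=2.093, n_sims=2000):
    # Monte Carlo power estimate for a one-sample t-test on subject-averaged
    # ERP amplitudes. All parameter values are illustrative assumptions.
    # t_crit = 2.093 is the two-sided alpha = .05 critical t for df = 19
    # (i.e., 20 participants); change it if you change n_participants.
    hits = 0
    for _ in range(n_sims):
        # Each subject's true effect varies around the population effect.
        true_effect = rng.normal(effect, subject_sd, n_participants)
        # Averaging n_trials trials shrinks single-trial noise by sqrt(n_trials).
        measured = true_effect + rng.normal(
            0.0, single_trial_sd / np.sqrt(n_trials), n_participants)
        t = measured.mean() * np.sqrt(n_participants) / measured.std(ddof=1)
        hits += abs(t) > t_crit
    return hits / n_sims
```

With these assumed parameters, `estimated_power(20, 80)` comes out substantially higher than `estimated_power(20, 20)`: quadrupling the trial count raises power by much more than a proportional amount in this regime, because the trial average is still dominated by single-trial noise. Whether that holds in a real experiment depends on the actual noise levels and effect magnitude, which is exactly the paper's point.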

The results of this study show that several factors must be considered when determining how many trials to include in a given ERP experiment, and that no single magic number of trials can yield high statistical power across studies. 

Replication, Robustness, and Reproducibility in Psychophysiology


Interested in learning more about issues affecting reproducibility and replication in psychophysiological studies? Check out this special issue of Psychophysiology, edited by Andreas Keil and me, featuring articles by many notable researchers in the field.

Andreas and I will be discussing these issues and more with other researchers at a panel on the opening night of the Society for Psychophysiological Research (SPR) annual meeting in Quebec City, October 3-7.

How to p-hack (and avoid p-hacking) in ERP Research

Luck, S. J., & Gaspelin, N. (2017). How to Get Statistically Significant Effects in Any ERP Experiment (and Why You Shouldn’t). Psychophysiology, 54, 146-157.


In this article, we show how ridiculously easy it is to find significant effects in ERP experiments by using the observed data to guide the selection of time windows and electrode sites. We also show that including multiple factors in your ANOVAs can dramatically increase the rate of false positives (Type I errors). We provide some suggestions for methods to avoid inflating the Type I error rate.
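The inflation from data-driven selection is easy to demonstrate with a simulation. The sketch below is a toy illustration, not the paper's method: the specific numbers (20 simulated participants, 10 candidate time windows, pure Gaussian noise) are assumptions. Because the true effect is zero everywhere, any "significant" result is a false positive, and picking whichever window shows the largest observed difference inflates the false-positive rate far above the nominal 5%.

```python
import numpy as np

rng = np.random.default_rng(2)

def false_positive_rates(n_sims=1000, n_subjects=20, n_windows=10, t_crit=2.093):
    # Simulate pure-noise condition differences (true effect = 0 in every
    # window), then run a one-sample t-test on either a single pre-registered
    # window or whichever candidate window shows the largest observed effect.
    # t_crit = 2.093 is the two-sided alpha = .05 critical t for df = 19.
    fixed_hits = picked_hits = 0
    for _ in range(n_sims):
        diffs = rng.normal(0.0, 1.0, (n_subjects, n_windows))
        t_vals = diffs.mean(axis=0) * np.sqrt(n_subjects) / diffs.std(axis=0, ddof=1)
        fixed_hits += abs(t_vals[0]) > t_crit       # a priori window: ~5% hits
        picked_hits += abs(t_vals).max() > t_crit   # data-driven window: inflated
    return fixed_hits / n_sims, picked_hits / n_sims
```

In this toy version the candidate windows are statistically independent; in real ERP data, adjacent time windows and nearby electrodes are correlated, which reduces the inflation somewhat but does not eliminate it.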

This paper was part of a special issue of Psychophysiology on Reproducibility edited by Emily Kappenman and Andreas Keil.