New Paper: Does the P3b component reflect working memory updating?

Carrasco, C. D., Simmons, A. M., Kiat, J. E., & Luck, S. J. (in press). Enhanced working memory representations for rare events. Psychophysiology. https://doi.org/10.1111/psyp.70038 [preprint]


For decades, many ERP researchers have believed that the P3b wave (sometimes called P300) is a scalp manifestation of a process that updates working memory. This idea originated with Manny Donchin’s context updating hypothesis of the P3b (Donchin, 1981). Donchin’s idea of context was pretty different from working memory, but as this hypothesis percolated through the field over time, it gradually morphed into the idea that the P3b reflects the updating of working memory.

Rolf Verleger mounted a major attack on the original context updating hypothesis in a classic review article in BBS (Verleger, 1988), which was followed by a vigorous rebuttal by Donchin and Coles (1988). These are interesting papers to read, but they did not settle the issue. In the ensuing years, as the field became more focused on working memory instead of context, I’m aware of no studies that directly tested the hypothesis that the P3b reflects working memory updating.

One reason for the lack of direct evidence is that the oddball paradigms typically used to elicit the P3b do not provide a sensitive assessment of working memory. In a typical paradigm, for example, participants would see a sequence of 90% Xs and 10% Os, and the task would be to press one button for X and another button for O. The responses are made immediately, so it is not necessary to store the stimuli in working memory. Even if participants were asked to make a delayed response, the Xs and Os are so easily discriminable that memory performance would likely be at ceiling.

Figure 1

A few years ago, my lab (especially Carlos Carrasco, Aaron Simmons, and John Kiat) got interested in trying to test the working memory encoding hypothesis. We ran a couple of experiments, but we couldn’t quite figure out the right design. Finally, we figured it out. We used a modified oddball paradigm in which a little dot appeared at one of many locations around a circle (see Figure 1).

Figure 2

On each trial, participants pressed one button if the dot was close to one of the four cardinal axes (left, right, top, and bottom) and a different button if it was close to one of the four diagonals (upper left, upper right, lower left, and lower right). One of these two categories was rare (the oddballs) and the other was frequent (the standards; counterbalanced across trial blocks). As is usual in oddball paradigms, the P3b was much larger for trials in the rare category than for trials in the frequent category (see Figure 2).
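Just to make the category rule concrete, here is a minimal sketch in Matlab (purely illustrative; the variable name, example value, and threshold layout are my assumptions, not the actual experiment code):

```matlab
% Minimal illustration (not the actual experiment code): classify a dot's
% angular position as "cardinal" or "diagonal" depending on whether it lies
% closer to a cardinal axis (0, 90, 180, 270 deg) or a diagonal axis
% (45, 135, 225, 315 deg). angleDeg is a hypothetical dot location in degrees.
angleDeg = 112;

distToCardinal = min(abs(mod(angleDeg, 90) - [0 90]));  % distance to nearest cardinal axis
if distToCardinal < 22.5
    category = 'cardinal';
else
    category = 'diagonal';
end
```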

Figure 3

The main question was whether the location of the dot was encoded in working memory better for the oddball trials than for the standard trials. To assess this, the experimental design included occasional probe trials on which participants used the mouse to click on the exact location of the dot (see Figure 3). That is, after participants made the cardinal/diagonal button-press response, they were sometimes asked to click on the remembered location of the dot. This happened on only 12.5% of trials, selected at random, so that participants would mainly focus on the oddball task.

Figure 4

We looked at the accuracy of these probe responses, calculated as the (absolute value of the) angular distance between the true location and the reported location. As shown in Figure 4, the response error of the probe responses was reduced for the oddball trials relative to the standard trials. In other words, working memory was better for the P3b-eliciting oddball trials than for the standards. Moreover, we found that participants with large P3b amplitudes on oddball trials had smaller response errors on oddball trials (whereas this correlation was not present for standards).
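For concreteness, here is a minimal sketch of that response-error calculation in Matlab (the variable names and values are made up for illustration, not taken from the paper’s analysis code):

```matlab
% Minimal sketch (not the paper's analysis code): absolute angular error
% between the true dot location and the reported (clicked) location, in
% degrees, taking the wraparound of the circle into account.
trueAngle     = [10 355 180  90];   % hypothetical true locations (deg)
reportedAngle = [20   5 170 120];   % hypothetical clicked locations (deg)

rawDiff       = abs(trueAngle - reportedAngle);   % raw angular difference
responseError = min(rawDiff, 360 - rawDiff);      % take the shorter way around the circle
meanError     = mean(responseError)               % average absolute error (deg)
```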

At first glance, these findings seem like support for the idea that the P3b reflects working memory updating. However, the story is not that simple. For example, when we looked at single-trial P3b amplitudes, response errors were not lower for trials with larger P3b amplitudes than for trials with smaller P3b amplitudes.

We also used ERP decoding to test whether the exact location of the dot was better stored in working memory on oddball trials than on standard trials (click here for information about how ERP decoding works and how you can decode your own data using ERPLAB Toolbox). As shown in Figure 5, we could decode the location of the dot better on oddball trials than on standard trials during the period following the P3b component. Note, however, that this was a pretty small difference that only barely crossed the threshold for statistical significance (p = .048). I would really like to see this effect replicated before fully believing it. However, the behavioral effect was rock solid (and replicated in a follow-up experiment).

What can we conclude from these findings? When we started the project, I knew that it would be difficult to draw any strong causal conclusions about the relationship between the P3b component and working memory updating. That is, even if we saw both a larger P3b and improved working memory on oddball trials, this would just be a correlation and could potentially be explained by a third variable such as attention. But if we saw a big difference in working memory between oddballs and standards, and if we found that working memory was better on trials with larger P3b amplitudes, this would be at least consistent with the idea that the process that produces the P3b on the scalp is also involved in working memory encoding.

However, although we saw an enormous difference in P3b amplitude between oddball trials and standard trials, we saw only small differences in working memory between oddballs and standards, whether measured via behavioral response errors on probe trials or EEG decoding accuracy. If the process that generates the scalp P3b voltage played a major role in working memory encoding, we would have expected a much larger working memory difference between oddballs and standards. Moreover, although we found that participants with larger P3b amplitudes had smaller response errors, we did not find any evidence that working memory was better on trials with larger P3b amplitudes (even though we looked very hard for such a relationship). The bottom line is that, although I was really hoping we would finally provide some direct evidence for the working memory encoding hypothesis, the results of this study have actually convinced me that the P3b is probably not related to working memory encoding.

What, then, explains the small but statistically significant differences in working memory accuracy between oddballs and standards, along with the subjectwise correlation between P3b amplitude and behavioral response errors? A very plausible explanation is that both the P3b component and working memory encoding are facilitated by increased attention. That is, there are several sources of evidence that rare events trigger increased attention, and this could independently produce a larger P3b and improved working memory.

Of course, this is just one experiment, so I wouldn’t say that the working memory encoding hypothesis is completely dead. But given our new findings and the general lack of direct evidence for the hypothesis, it’s on life support.

If the P3b doesn’t reflect working memory encoding, then what does it reflect? This seems like a significant question: the P3b is huge and is observed across a broad range of experimental paradigms, and it’s reasonable to assume that the underlying process must be important for the brain to devote so many watts of energy to it. In fact, I find it embarrassing that our field has not answered this question in the 60 years since the P3b was first discovered.

My best bet is that the P3b is related to the process of making decisions about actions (where the term actions is broadly construed to include the withholding of responses and mental actions such as counting). This idea fits with the fact that the amplitude of the P3b depends on the probability of a task-defined category, not the probability of a physical stimulus category. Rolf Verleger has a nice review of the evidence for this idea (Verleger, 2020). But it is still not clear to me why the brain devotes so many watts of energy to creating a large P3b when a rare task-defined category occurs. Verleger notes that several hypotheses about the P3b are compatible with the finding of a larger P3b for oddballs than for standards, but in my view these hypotheses have a hard time explaining the enormous size of the P3b observed for oddballs. This is a longstanding mystery in need of a solution!

New software package: ERPLAB Studio

We are excited to announce the release of a new EEG/ERP analysis package, ERPLAB Studio. We think it’s a huge improvement over the classic EEGLAB user interface. See our cheesy “advertisement” video to get a quick overview.

Rather than operating as an EEGLAB plugin, ERPLAB Studio is a standalone Matlab program that provides a more efficient and user-friendly interface to the most commonly used EEGLAB and ERPLAB routines.

With ERPLAB Studio, you automatically see the EEG or ERP waveforms as soon as you load a file. And as soon as you perform an operation, you see what the new EEG/ERP looks like. For example, when you filter the data, you immediately see the filtered waveforms.

You can even select multiple datasets and apply an operation like artifact detection on all of them in one step. And then you can immediately see the results, such as which EEG epochs have been marked with artifacts.

We give you access to EEGLAB’s ICA-based artifact correction tools, but with a nice bonus. You can plot the ICA activations in the same window with the EEG data, making it easy to see which ICA components correspond to specific artifacts such as eyeblinks.

The program has an EEG tab for processing continuous and epoched EEG data, and an ERP tab for processing averaged ERPs.

The automatic ERP plotting makes it easy for you to view the data laid out according to the electrode locations. And we have an Advanced Waveform Viewer that can make publication-quality plots.

ERPLAB Studio is mainly just a new user interface. Under the hood, we’re running the same EEGLAB and ERPLAB routines you’ve always used. And scripting is identical.

ERPLAB Studio is included in version 11 and higher of ERPLAB. You simply follow our download/installation instructions and then type estudio from the Matlab command line.

If you’re new to ERPLAB, we strongly recommend that you go through our tutorial before starting to process your own data.

If you already know how to use the original version of ERPLAB (which we now call ERPLAB Classic), you can quickly learn how to use ERPLAB Studio with our Transition Guide.

We also have a manual that describes every feature in detail.

New Paper: Using Multivariate Pattern Analysis to Increase Effect Sizes for ERP Amplitude Comparisons

Carrasco, C. D., Bahle, B., Simmons, A. M., & Luck, S. J. (2024). Using multivariate pattern analysis to increase effect sizes for event-related potential analyses. Psychophysiology, 61, e14570. https://doi.org/10.1111/psyp.14570 [preprint]

Multivariate pattern analysis (MVPA) can be used to “decode” subtle information from ERP signals, such as which of several faces a participant is perceiving or the orientation that someone is holding in working memory (see this previous blog post). This approach is so powerful that we started wondering whether it might also give us greater statistical power in more typical experiments where the goal is to determine whether an ERP component differs in amplitude across experimental conditions. For example, might we more easily be able to tell if N400 amplitude is different between two different classes of words by using decoding? If so, that might make it possible to detect effects that would otherwise be too small to be significant.

To address this question, we compared decoding with the conventional ERP analysis approach using the six experimental paradigms in the ERP CORE. In the conventional ERP analysis, we measured the mean amplitude during the standard measurement window from each participant in the two conditions of the paradigm (e.g., faces versus cars for N170, deviants versus standards for MMN). We quantified the magnitude of the difference between conditions using Cohen’s dz (the variant of Cohen’s d corresponding to a paired t test). For example, the effect size in the conventional ERP comparison of faces versus cars in the N170 paradigm was approximately 1.7 (see the figure).
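As a concrete illustration of that effect-size calculation, here is a minimal sketch in Matlab (hypothetical values and variable names, not the actual ERP CORE measurements):

```matlab
% Minimal sketch (hypothetical values, not the ERP CORE data): Cohen's dz for
% a paired comparison of mean amplitudes. Each vector holds one mean-amplitude
% score per participant, measured in the standard time window.
faceAmp = [-4.1 -5.3 -3.8 -6.0 -4.7];   % e.g., N170 window, face trials (microvolts)
carAmp  = [-2.0 -3.1 -2.5 -3.9 -2.8];   % same participants, car trials

diffScores = faceAmp - carAmp;              % within-participant differences
dz = mean(diffScores) / std(diffScores)     % Cohen's dz = mean diff / SD of diffs
```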

We also applied decoding to each paradigm. For example, in the N170 paradigm, we trained a support vector machine (SVM) to distinguish between ERPs elicited by faces and ERPs elicited by cars. This was done separately for each subject, and we converted the decoding accuracy into Cohen’s dz so that it could be compared with the dz from the conventional ERP analysis. As you can see from the bar labeled SVM in the figure above, the effect size for the SVM-based decoding analysis was almost twice as large as the effect size for the conventional ERP analysis. That’s a huge difference!
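To give a rough sense of what the decoding step involves, here is a minimal sketch in Matlab using simulated data (a toy example only, not the paper’s pipeline or ERPLAB’s decoding routine; it requires the Statistics and Machine Learning Toolbox):

```matlab
% Toy sketch (simulated data, not the paper's pipeline): classify single-trial
% scalp topographies from two conditions with a linear SVM and estimate
% decoding accuracy by k-fold cross-validation.
rng(1);
nTrials = 80;  nChans = 30;
X = [randn(nTrials/2, nChans) + 0.5; ...   % "face" trials (simulated signal)
     randn(nTrials/2, nChans)];            % "car" trials
y = [ones(nTrials/2, 1); 2*ones(nTrials/2, 1)];

Mdl   = fitcsvm(X, y, 'KernelFunction', 'linear');   % train a linear SVM
CVMdl = crossval(Mdl, 'KFold', 5);                    % 5-fold cross-validation
decodingAccuracy = 1 - kfoldLoss(CVMdl)               % proportion correct
```

The resulting accuracy can then be converted into the same Cohen’s dz metric used for the conventional analysis, which is what allows the direct comparison shown in the figure.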

We found a similar benefit for SVM-based decoding over conventional ERP analyses in 7 of the 10 cases we tested (see the figure below). In the other 3 cases, the ERP and SVM effects were approximately equivalent. So, there doesn’t seem to be a downside to using decoding, at least in terms of effect size. But there can be a big benefit.

Because decoding has many possible benefits, we’ve added it into ERPLAB Toolbox. It’s super easy to use, and we’ve created detailed documentation and a video to explain how it works at a conceptual level and to show you how to use it.

We encourage you to apply it to your own data. It may give you the power to detect effects that are too small to be detected with conventional ERP analyses.

Registration is now full for the 2024 ERP Boot Camp

The demand for the 2024 ERP Boot Camp was far beyond our expectations, and we reached our maximum registration of 30 people within one day. We already have a waiting list of over 30 people, so we have closed the registration site.

We realize that this is very disappointing to many people. We hope to offer another workshop like this next summer, or possibly earlier.

If you would like to get announcements about upcoming boot camps and webinars, you should join our email list.

You may also consider hosting a Mini ERP Boot Camp at your institution (in person or over Zoom).

Important Changes to the 2024 ERP Boot Camp

We are disappointed to announce that we will not be holding a regular 10-day ERP Boot Camp this summer.

We have held Boot Camps nearly every summer since 2007, supported by a series of generous grants from NIMH that allowed us to provide scholarships for all attendees. Unfortunately, although our recent renewal proposal received extremely positive reviews and scores, we were given the surprising and disappointing news that the renewal will not be funded this year. We believe that the ERP Boot Camp provides essential training to the field, and we will continue to pursue financial support so that we can hold 10-day ERP Boot Camps in the future.

In the meantime, we have partial funding that will allow us to hold a 5-day ERP Boot Camp this summer, July 8–12, 2024, in Davis, California. The workshop will include 5 days of lectures and activities on EEG and ERP measures, including practical and theoretical issues.

Unfortunately, we will not be able to provide scholarships to pay for travel and lodging costs, and we must charge a registration fee. We are very sorry if this causes a hardship.

We are no longer taking applications through our application portal. Instead of a competitive application process, we will simply accept the first 30 people who complete the registration process and pay the registration fee. This provides an opportunity to attend for individuals who might otherwise not make it through our ordinary application process, which is highly competitive.

The registration fee will be $1000 (or $900 for people who register by April 15). The registration fee will cover 6 nights in a single occupancy hotel room (arriving July 7 and departing July 13), daily breakfast at the hotel, a catered lunch for each day of the workshop, and a group dinner. You must pay the registration fee with a credit card when you register. There are no exceptions to the registration fee policy.

Registration is now open at https://na.eventscloud.com/793175.

Given that we will accept the first 30 registrants, we encourage you to register as soon as possible. Registration will close on May 20, but we anticipate that the workshop will be filled up long before then.

You must pay for your own transportation to Davis. Davis is approximately 20 minutes away from the Sacramento Airport (SMF). You can take the Davis Airporter shuttle service or a rideshare service from SMF to Davis. If you are coming from outside North America, you may want to fly into the San Francisco airport (SFO), which is 135 km (84 miles) from Davis. We recommend taking the Davis Airporter from SFO to Davis.

New Papers: Optimal Filter Settings for ERP Research

Zhang, G., Garrett, D. R., & Luck, S. J. (in press). Optimal filters for ERP research I: A general approach for selecting filter settings. Psychophysiology. https://doi.org/10.1111/psyp.14531 [preprint]

Zhang, G., Garrett, D. R., & Luck, S. J. (in press). Optimal filters for ERP research II: Recommended settings for seven common ERP components. Psychophysiology. https://doi.org/10.1111/psyp.14530 [preprint]

What filter settings should you apply to your ERP data? If your filters are too weak to attenuate the noise in your data, your effects may not be statistically significant. If your filters are too strong, they may create artifactual peaks that lead you to draw bogus conclusions.

For years, I have been recommending a bandpass of 0.1–30 Hz for most cognitive and affective research in neurotypical young adults. In this kind of research, I have found that this bandpass usually does a good job of minimizing noise while creating minimal waveform distortion.

However, this recommendation was based on a combination of informal observations from many experimental paradigms and a careful examination of a couple of paradigms, so it was a bit hand-wavy. In addition, the optimal filter settings depend on the waveshape of the ERP effects and the nature of the noise in a given study, so I couldn’t make any specific recommendations about other experimental paradigms and participant populations. Moreover, different filter settings may be optimal for different scoring methods (e.g., mean amplitude vs. peak amplitude vs. peak latency).

Guanghui Zhang, David Garrett, and I spent the last year focusing on this issue. First we developed a general method that can be used to determine the optimal filter settings for a given dataset and scoring method (see this paper). Then we applied this method to the ERP CORE data to determine the optimal filter settings for the N170, MMN, P3b, N400, N2pc, LRP, and ERN components in neurotypical young adults (see this paper and the table above).

If you are doing research with these components (or similar components) in neurotypical young adults, you can simply use the filter settings that we identified. If you are using a very different paradigm or testing a very different subject population, you can apply our method to your own data to find the optimal settings. We added some new tools to ERPLAB Toolbox to make this easier.

One thing that we discovered was that our old recommendation of 0.1–30 Hz does a good job of avoiding filter artifacts but is overly conservative for some components. For example, we can raise the low end to 0.5 Hz when measuring N2pc and MMN amplitudes, which gets rid of more noise without producing problematic waveform distortions. And we can go all the way up to 0.9 Hz for the N170 component. However, later/slower components like P3b and N400 require lower cutoffs (no higher than 0.2 Hz).
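If you want to experiment with different cutoffs outside of ERPLAB, here is a generic sketch of a zero-phase Butterworth bandpass in Matlab (this is not ERPLAB’s filtering code; the sampling rate and signal below are made up):

```matlab
% Generic sketch (not ERPLAB's implementation): a zero-phase Butterworth
% bandpass of 0.1-30 Hz applied to one channel of EEG. Requires the Signal
% Processing Toolbox. srate and eegChannel are hypothetical.
srate = 500;                          % sampling rate (Hz)
eegChannel = randn(1, 10 * srate);    % fake 10-second segment of EEG

[b, a] = butter(2, [0.1 30] / (srate/2), 'bandpass');   % 2nd-order design, normalized cutoffs
filteredChannel = filtfilt(b, a, eegChannel);            % forward-backward (zero-phase) filtering
```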

You might be wondering how we defined the “optimal” filter settings. At one level, the answer is simple: The optimal filter is the one that maximizes the signal-to-noise ratio without producing too much waveform distortion. The complexities arise in quantifying the signal-to-noise ratio, quantifying the waveform distortion, and deciding how much waveform distortion is “too much”. We believe we have found reasonably straightforward and practical solutions to these problems, which you can read about in the published papers.

ERP Decoding for Everyone: Software and Webinar

You can access the recording here.
You can access the final PDF of the slides here.
You can access the data here.

fMRI research has used decoding methods for over 20 years. These methods make it possible to decode what an individual is perceiving or holding in working memory on the basis of the pattern of BOLD activity across voxels. Remarkably, these methods can also be applied to ERP data, using the pattern of voltage across electrode sites rather than the pattern of activity across voxels to decode the information being represented by the brain (see this previous blog post). For example, ERPs can be used to decode the identity of a face that is being perceived, the emotional valence of a scene, the identity and semantic category of a word, and the features of an object that is being maintained in working memory. Moreover, decoding methods can be more sensitive than traditional methods for detecting conventional ERP effects (e.g., whether a word is semantically related or unrelated to a previous word in an N400 paradigm).

So far, these methods have mainly been used by a small set of experts. We aim to change that with the upcoming Version 10 of ERPLAB Toolbox. This version of ERPLAB will contain an ERP decoding tool that makes it trivially easy for anyone who knows how to do conventional ERP processing to take advantage of the power of decoding. It should be available in mid-July at our GitHub site. You can join the ERPLAB email list to receive an announcement when this version is released. Please do not contact us with questions until it has been released and you have tried using it.

On July 25, 2023, we will hold a 2-hour Zoom webinar to explain how decoding works at a conceptual level and show how to implement it in ERPLAB Toolbox. The webinar will begin at 9:00 AM Pacific Time (California), 12:00 PM Eastern Time (New York), 5:00 PM British Summer Time (London), and 6:00 PM Central European Summer Time (Berlin).

The webinar is co-sponsored by the ERP Boot Camp and the Society for Psychophysiological Research. It is completely free, but you must register in advance at https://ucdavis.zoom.us/meeting/register/tJUrc-CtpzorEtBSmZXJINOlLJB9ZR0evpr4. Once you register, you will receive an email with your own individual Zoom link.

We will make a recording available a few days after the webinar on the ERPinfo.org web site.

Please direct any questions about the webinar to erpbootcamp@gmail.com.

Applications now being accepted for UC-Davis/SDSU ERP Boot Camp, July 31 – August 9, 2023

The next 10-day ERP Boot Camp will be held July 31 – August 9, 2023 in San Diego, California. We are now taking applications, which will be due by April 1, 2023. Click here for more information.

We are currently planning to hold this workshop as an in-person event. However, these plans are subject to change as the COVID-19 pandemic evolves. If the event is held in person, we will require that everyone is fully vaccinated, and we will also implement any other safety measures that are warranted at the time of the workshop.

New Book: Applied ERP Data Analysis

I’m excited to announce my new book, Applied ERP Data Analysis. It’s available online FOR FREE on the LibreTexts open source textbook platform. You can cite it as: Luck, S. J. (2022). Applied Event-Related Potential Data Analysis. LibreTexts. https://doi.org/10.18115/D5QG92

The book is designed to be read online, but LibreTexts has a tool for creating a PDF. You can then print the PDF if you prefer to read on paper.

I’ve aimed the book at beginning and intermediate ERP researchers. I assume that you already know the basic concepts behind ERPs, which you can learn from my free online Intro to ERPs course (which takes 3-4 hours to complete).

Whereas my previous book focuses on conceptual issues, the new book focuses on how to implement these concepts with real data. Most of the book consists of exercises in which you process data from the ERP CORE, a set of six ERP paradigms that yield seven different components (P3b, N400, MMN, N2pc, N170, ERN, LRP). Learn by doing!

With real data, you must deal with all kinds of weird problems and make many decisions. The book will teach you principled approaches to solving these problems and making optimal decisions.

Side note: my approach in this book was inspired by Mike X Cohen’s excellent book, Analyzing Neural Time Series Data: Theory and Practice.

You will analyze the data using EEGLAB and ERPLAB, which are free open source Matlab toolboxes. Make sure to download version 9 of ERPLAB. (You may need to buy Matlab, but many institutions provide free or discounted licenses for students.) Although you will learn a lot about these specific software packages, the exercises and accompanying text are designed to teach broader concepts that will translate to any software package (and any ERP paradigm). The logic is much more important than the software!

One key element of the approach, however, is currently ERPLAB-specific. Specifically, the book frequently asks whether a given choice increases or decreases the data quality of the averaged ERPs, as quantified with the Standardized Measurement Error (SME). If this approach makes sense to you, but you prefer a different analysis package, you should encourage the developers of that package to implement SME. All our code is open source, so translating it to a different package should be straightforward. If enough people ask, they will listen!
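For readers who just want the gist of the SME, here is a minimal sketch for the simplest case, a mean-amplitude score (the values and variable name are hypothetical, and this is not ERPLAB’s implementation):

```matlab
% Minimal sketch (hypothetical values): for mean-amplitude scores, the
% analytic SME is the standard deviation of the single-trial mean amplitudes
% divided by the square root of the number of trials. singleTrialAmps holds
% the mean voltage in the measurement window for each trial at one electrode.
singleTrialAmps = [3.2 1.8 4.5 2.9 3.7 0.6 2.2 5.1];   % microvolts

sme = std(singleTrialAmps) / sqrt(numel(singleTrialAmps))
```

Other scoring methods (e.g., peak amplitude or peak latency) require a bootstrapping approach, which ERPLAB handles for you.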

The book also contains a chapter on scripting, plus tons of example scripts. You don’t have to write scripts for the other chapters. But learning some simple scripting will make you more productive and increase the quality, innovation, and reproducibility of your research.

I made the book free and open source so that I could give something back to the ERP community, which has given me so much over the years. But I’ve discovered two downsides to making the book free. First, there was no copy editor, so there are probably tons of typos and other errors. Please shoot me an email if you find an error. (But I can’t realistically provide tech support if you have trouble with the software.) Second, there is no marketing budget, so please spread the word to friends, colleagues, students, and billionaire philanthropists.

This book was also designed for use in undergrad and grad courses. The LibreTexts platform makes it easy for you to create a customized version of the book. You can reorder or delete sections or whole chapters. And you can add new sections or edit any of the existing text. It’s published with a CC-BY license, so you can do anything you want with it as long as you provide an attribution to the original source. And if you don’t like some of the recommendations I make in the book, you can just change it to say whatever you like! For example, you can add a chapter titled “Why Steve Luck is wrong about filtering.”

If you are a PI: the combination of the online course, this book, and the resources provided by PURSUE give you a great way to get new students started in the lab. I’m hoping this makes it easier for faculty to get more undergrads involved in ERP research.