New software package: ERPLAB Studio

We are excited to announce the release of a new EEG/ERP analysis package, ERPLAB Studio. We think it’s a huge improvement over the classic EEGLAB user interface. See our cheesy “advertisement” video to get a quick overview.

Rather than operating as an EEGLAB plugin, ERPLAB Studio is a standalone Matlab program that provides a more efficient and user-friendly interface to the most commonly used EEGLAB and ERPLAB routines.

With ERPLAB Studio, you automatically see the EEG or ERP waveforms as soon as you load a file. And as soon as you perform an operation, you see what the new EEG/ERP looks like. For example, when you filter the data, you immediately see the filtered waveforms.

You can even select multiple datasets and apply an operation such as artifact detection to all of them in one step. And then you can immediately see the results, such as which EEG epochs have been marked with artifacts.

We give you access to EEGLAB’s ICA-based artifact correction tools, but with a nice bonus. You can plot the ICA activations in the same window with the EEG data, making it easy to see which ICA components correspond to specific artifacts such as eyeblinks.

The program has an EEG tab for processing continuous and epoched EEG data, and an ERP tab for processing averaged ERPs.

The automatic ERP plotting makes it easy for you to view the data laid out according to the electrode locations. And we have an Advanced Waveform Viewer that can make publication-quality plots.

ERPLAB Studio is mainly just a new user interface. Under the hood, we’re running the same EEGLAB and ERPLAB routines you’ve always used. And scripting is identical.

ERPLAB Studio is included in version 11 and higher of ERPLAB. You simply follow our download/installation instructions and then type estudio from the Matlab command line.
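If Matlab is already open and ERPLAB version 11 or higher is installed and on your path, launching ERPLAB Studio looks like this (a minimal illustration, nothing more):

    % Launch ERPLAB Studio from the Matlab command window
    % (assumes EEGLAB with ERPLAB v11 or later is installed and on the Matlab path)
    estudio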

If you’re new to ERPLAB, we strongly recommend that you go through our tutorial before starting to process your own data.

If you already know how to use the original version of ERPLAB (which we now call ERPLAB Classic), you can quickly learn how to use ERPLAB Studio with our Transition Guide.

We also have a manual that describes every feature in detail.

New Paper: Using Multivariate Pattern Analysis to Increase Effect Sizes for ERP Amplitude Comparisons

Carrasco, C. D., Bahle, B., Simmons, A. M., & Luck, S. J. (2024). Using multivariate pattern analysis to increase effect sizes for event-related potential analyses. Psychophysiology, 61, e14570. https://doi.org/10.1111/psyp.14570 [preprint]

Multivariate pattern analysis (MVPA) can be used to “decode” subtle information from ERP signals, such as which of several faces a participant is perceiving or the orientation that someone is holding in working memory (see this previous blog post). This approach is so powerful that we started wondering whether it might also give us greater statistical power in more typical experiments where the goal is to determine whether an ERP component differs in amplitude across experimental conditions. For example, might we more easily be able to tell if N400 amplitude is different between two different classes of words by using decoding? If so, that might make it possible to detect effects that would otherwise be too small to be significant.

To address this question, we compared decoding with the conventional ERP analysis approach using the six experimental paradigms in the ERP CORE. In the conventional ERP analysis, we measured the mean amplitude during the standard measurement window from each participant in the two conditions of the paradigm (e.g., faces versus cars for N170, deviants versus standards for MMN). We quantified the magnitude of the difference between conditions using Cohen’s dz (the variant of Cohen’s d corresponding to a paired t test). For example, the effect size in the conventional ERP comparison of faces versus cars in the N170 paradigm was approximately 1.7 (see the figure).
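For readers who want to see the arithmetic, here is a minimal Matlab sketch of the Cohen’s dz calculation (the variable names are illustrative, not taken from the paper’s analysis code):

    % ampA, ampB: mean amplitude for each participant in the two conditions
    % (e.g., faces vs. cars in the N170 paradigm), each an nParticipants x 1 vector
    diffScores = ampA - ampB;                 % within-participant difference
    dz = mean(diffScores) / std(diffScores);  % Cohen's dz for a paired design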

We also applied decoding to each paradigm. For example, in the N170 paradigm, we trained a support vector machine (SVM) to distinguish between ERPs elicited by faces and ERPs elicited by cars. This was done separately for each subject, and we converted the decoding accuracy into Cohen’s dz so that it could be compared with the dz from the conventional ERP analysis. As you can see from the bar labeled SVM in the figure above, the effect size for the SVM-based decoding analysis was almost twice as large as the effect size for the conventional ERP analysis. That’s a huge difference!
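The full decoding pipeline is described in the paper (and is now built into ERPLAB Toolbox), but the basic logic for a single participant can be sketched in a few lines of Matlab using the Statistics and Machine Learning Toolbox. The variable names below are illustrative assumptions, not the paper’s actual code:

    % scalpA, scalpB: trials x electrodes matrices of scalp distributions for
    % the two conditions (e.g., faces vs. cars) for one participant
    X = [scalpA; scalpB];
    y = [ones(size(scalpA,1),1); 2*ones(size(scalpB,1),1)];

    mdl   = fitcsvm(X, y, 'KernelFunction', 'linear');  % train a linear SVM
    cvmdl = crossval(mdl, 'KFold', 5);                   % 5-fold cross-validation
    accuracy = 1 - kfoldLoss(cvmdl);                     % proportion correct

    % With one accuracy value per participant and chance = 0.5 for two classes,
    % the group-level effect size is dz = mean(accuracy - 0.5) / std(accuracy - 0.5)
    % computed across participants, which can then be compared with the dz from
    % the conventional mean-amplitude analysis.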

We found a similar benefit for SVM-based decoding over conventional ERP analyses in 7 of the 10 cases we tested (see the figure below). In the other 3 cases, the ERP and SVM effects were approximately equivalent. So, there doesn’t seem to be a downside to using decoding, at least in terms of effect size. But there can be a big benefit.

Because decoding has many possible benefits, we’ve added it to ERPLAB Toolbox. It’s super easy to use, and we’ve created detailed documentation and a video to explain how it works at a conceptual level and to show you how to use it.

We encourage you to apply it to your own data. It may give you the power to detect effects that are too small to be detected with conventional ERP analyses.

Registration is now full for the 2024 ERP Boot Camp

The demand for the 2024 ERP Boot Camp was far beyond our expectations, and we reached our maximum registration of 30 people within one day. We already have a waiting list of over 30 people, so we have closed the registration site.

We realize that this is very disappointing to many people. We hope to offer another workshop like this next summer, or possibly earlier.

If you would like to get announcements about upcoming boot camps and webinars, you should join our email list.

You may also consider hosting a Mini ERP Boot Camp at your institution (in person or over Zoom).

Important Changes to the 2024 ERP Boot Camp

We are disappointed to announce that we will not be holding a regular 10-day ERP Boot Camp this summer.

We have held Boot Camps nearly every summer since 2007, supported by a series of generous grants from NIMH that allowed us to provide scholarships for all attendees. Unfortunately, although our recent renewal proposal received extremely positive reviews and scores, we were recently given the surprising and disappointing news that the renewal will not be funded this year. We believe that the ERP Boot Camp provides essential training to the field, and we will continue to pursue financial support to continue holding 10-day ERP Boot Camps in the future.

In the meantime, we have partial funding that will allow us to hold a 5-day ERP Boot Camp this summer, July 8–12, 2024, in Davis, California. The workshop will include 5 days of lectures and activities on EEG and ERP measures, including practical and theoretical issues.

Unfortunately, we will not be able to provide scholarships to pay for travel and lodging costs, and we must charge a registration fee. We are very sorry if this causes a hardship.

We are no longer taking applications through our application portal. Instead of a competitive application process, we will simply accept the first 30 people who complete the registration process and pay the registration fee. This provides an opportunity to attend for individuals who might otherwise not make it through our ordinary application process, which is highly competitive.

The registration fee will be $1000 (or $900 for people who register by April 15). The registration fee will cover 6 nights in a single occupancy hotel room (arriving July 7 and departing July 13), daily breakfast at the hotel, a catered lunch for each day of the workshop, and a group dinner. You must pay the registration fee with a credit card when you register. There are no exceptions to the registration fee policy.

Registration is now open at https://na.eventscloud.com/793175.

Given that we will accept the first 30 registrants, we encourage you to register as soon as possible. Registration will close on May 20, but we anticipate that the workshop will be filled up long before then.

You must pay for your own transportation to Davis. Davis is approximately 20 minutes away from the Sacramento Airport (SMF). You can take the Davis Airporter shuttle service or a rideshare service from SMF to Davis. If you are coming from outside North America, you may want to fly into the San Francisco airport (SFO), which is 135 km (84 miles) from Davis. We recommend taking the Davis Airporter from SFO to Davis.

New Papers: Optimal Filter Settings for ERP Research

Zhang, G., Garrett, D. R., & Luck, S. J. (in press). Optimal filters for ERP research I: A general approach for selecting filter settings. Psychophysiology. https://doi.org/10.1111/psyp.14531 [preprint]

Zhang, G., Garrett, D. R., & Luck, S. J. (in press). Optimal filters for ERP research II: Recommended settings for seven common ERP components. Psychophysiology. https://doi.org/10.1111/psyp.14530 [preprint]

What filter settings should you apply to your ERP data? If your filters are too weak to attenuate the noise in your data, your effects may not be statistically significant. If your filters are too strong, they may create artifactual peaks that lead you to draw bogus conclusions.

For years, I have been recommending a bandpass of 0.1–30 Hz for most cognitive and affective research in neurotypical young adults. In this kind of research, I have found that filtering from 0.1–30 Hz usually does a good job of minimizing noise while creating minimal waveform distortion.

However, this recommendation was based on a combination of informal observations from many experimental paradigms and a careful examination of a couple of paradigms, so it was a bit hand-wavy. In addition, the optimal filter settings will depend on the waveshape of the ERP effects and the nature of the noise in a given study, so I couldn’t make any specific recommendations about other experimental paradigms and participant populations. Moreover, different filter settings may be optimal for different scoring methods (e.g., mean amplitude vs. peak amplitude vs. peak latency).

Guanghui Zhang, David Garrett, and I spent the last year focusing on this issue. First we developed a general method that can be used to determine the optimal filter settings for a given dataset and scoring method (see this paper). Then we applied this method to the ERP CORE data to determine the optimal filter settings for the N170, MMN, P3b, N400, N2pc, LRP, and ERN components in neurotypical young adults (see this paper and the table above).

If you are doing research with these components (or similar components) in neurotypical young adults, you can simply use the filter settings that we identified. If you are using a very different paradigm or testing a very different subject population, you can apply our method to your own data to find the optimal settings. We added some new tools to ERPLAB Toolbox to make this easier.

One thing that we discovered was that our old recommendation of 0.1–30 Hz does a good job of avoiding filter artifacts but is overly conservative for some components. For example, we can raise the low end to 0.5 Hz when measuring N2pc and MMN amplitudes, which gets rid of more noise without producing problematic waveform distortions. And we can go all the way up to 0.9 Hz for the N170 component. However, later/slower components like P3b and N400 require lower cutoffs (no higher than 0.2 Hz).
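As a purely generic illustration of applying such a bandpass (this is not ERPLAB’s own filtering routine, and different packages define their cutoffs differently), a zero-phase Butterworth filter in Matlab’s Signal Processing Toolbox looks roughly like this:

    % eegData: channels x samples matrix of continuous EEG; fs: sampling rate in Hz
    fs      = 500;     % assumed sampling rate
    lowCut  = 0.1;     % high-pass cutoff in Hz (e.g., 0.5 for N2pc/MMN amplitude scores)
    highCut = 30;      % low-pass cutoff in Hz
    [b, a] = butter(2, [lowCut highCut] / (fs/2), 'bandpass');  % Butterworth bandpass design
    filteredData = filtfilt(b, a, double(eegData)')';           % zero-phase (non-causal) filtering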

You might be wondering how we defined the “optimal” filter settings. At one level, the answer is simple: The optimal filter is the one that maximizes the signal-to-noise ratio without producing too much waveform distortion. The complexities arise in quantifying the signal-to-noise ratio, quantifying the waveform distortion, and deciding how much waveform distortion is “too much”. We believe we have found reasonably straightforward and practical solutions to these problems, which you can read about in the published papers.

ERP Decoding for Everyone: Software and Webinar

You can access the recording here.
You can access the final PDF of the slides here.
You can access the data here.

fMRI research has used decoding methods for over 20 years. These methods make it possible to decode what an individual is perceiving or holding in working memory on the basis of the pattern of BOLD activity across voxels. Remarkably, these methods can also be applied to ERP data, using the pattern of voltage across electrode sites rather than the pattern of activity across voxels to decode the information being represented by the brain (see this previous blog post). For example, ERPs can be used to decode the identity of a face that is being perceived, the emotional valence of a scene, the identity and semantic category of a word, and the features of an object that is being maintained in working memory. Moreover, decoding methods can be more sensitive than traditional methods for detecting conventional ERP effects (e.g., whether a word is semantically related or unrelated to a previous word in an N400 paradigm).

So far, these methods have mainly been used by a small set of experts. We aim to change that with the upcoming Version 10 of ERPLAB Toolbox. This version of ERPLAB will contain an ERP decoding tool that makes it trivially easy for anyone who knows how to do conventional ERP processing to take advantage of the power of decoding. It should be available in mid-July at our GitHub site. You can join the ERPLAB email list to receive an announcement when this version is released. Please do not contact us with questions until it has been released and you have tried using it.

On July 25, 2023, we will hold a 2-hour Zoom webinar to explain how decoding works at a conceptual level and to show how to implement it in ERPLAB Toolbox. The webinar will begin at 9:00 AM Pacific Time (California), 12:00 PM Eastern Time (New York), 5:00 PM British Summer Time (London), 6:00 PM Central European Summer Time (Berlin).

The webinar is co-sponsored by the ERP Boot Camp and the Society for Psychophysiological Research. It is completely free, but you must register in advance at https://ucdavis.zoom.us/meeting/register/tJUrc-CtpzorEtBSmZXJINOlLJB9ZR0evpr4. Once you register, you will receive an email with your own individual Zoom link.

We will make a recording available a few days after the webinar on the ERPinfo.org web site.

Please direct any questions about the webinar to erpbootcamp@gmail.com.

Applications now being accepted for UC-Davis/SDSU ERP Boot Camp, July 31 – August 9, 2023

The next 10-day ERP Boot Camp will be held July 31 – August 9, 2023 in San Diego, California. We are now taking applications, which will be due by April 1, 2023. Click here for more information.

We are currently planning to hold this workshop as an in-person event. However, these plans are subject to change as the COVID-19 pandemic evolves. If the event is held in person, we will require that everyone is fully vaccinated, and we will also implement any other safety measures that are warranted at the time of the workshop.

New Book: Applied ERP Data Analysis

I’m excited to announce my new book, Applied ERP Data Analysis. It’s available online FOR FREE on the LibreTexts open source textbook platform. You can cite it as: Luck, S. J. (2022). Applied Event-Related Potential Data Analysis. LibreTexts. https://doi.org/10.18115/D5QG92

The book is designed to be read online, but LibreTexts has a tool for creating a PDF. You can then print the PDF if you prefer to read on paper.

I’ve aimed the book at beginning and intermediate ERP researchers. I assume that you already know the basic concepts behind ERPs, which you can learn from my free online Intro to ERPs course (which takes 3-4 hours to complete).

Whereas my previous book focuses on conceptual issues, the new book focuses on how to implement these concepts with real data. Most of the book consists of exercises in which you process data from the ERP CORE, a set of six ERP paradigms that yield seven different components (P3b, N400, MMN, N2pc, N170, ERN, LRP). Learn by doing!

With real data, you must deal with all kinds of weird problems and make many decisions. The book will teach you principled approaches to solving these problems and making optimal decisions.

Side note: my approach in this book was inspired by Mike X Cohen’s excellent book, Analyzing Neural Time Series Data: Theory and Practice.

You will analyze the data using EEGLAB and ERPLAB, which are free open source Matlab toolboxes. Make sure to download version 9 of ERPLAB. (You may need to buy Matlab, but many institutions provide free or discounted licenses for students.) Although you will learn a lot about these specific software packages, the exercises and accompanying text are designed to teach broader concepts that will translate to any software package (and any ERP paradigm). The logic is much more important than the software!

One key element of the approach, however, is currently ERPLAB-specific. Specifically, the book frequently asks whether a given choice increases or decreases the data quality of the averaged ERPs, as quantified with the Standardized Measurement Error (SME). If this approach makes sense to you, but you prefer a different analysis package, you should encourage the developers of that package to implement SME. All our code is open source, so translating it to a different package should be straightforward. If enough people ask, they will listen!

The book also contains a chapter on scripting, plus tons of example scripts. You don’t have to write scripts for the other chapters. But learning some simple scripting will make you more productive and increase the quality, innovation, and reproducibility of your research.

I made the book free and open source so that I could give something back to the ERP community, which has given me so much over the years. But I’ve discovered two downsides to making the book free. First, there was no copy editor, so there are probably tons of typos and other errors. Please shoot me an email if you find an error. (But I can’t realistically provide tech support if you have trouble with the software.) Second, there is no marketing budget, so please spread the word to friends, colleagues, students, and billionaire philanthropists.

This book was also designed for use in undergrad and grad courses. The LibreTexts platform makes it easy for you to create a customized version of the book. You can reorder or delete sections or whole chapters. And you can add new sections or edit any of the existing text. It’s published with a CC-BY license, so you can do anything you want with it as long as you provide an attribution to the original source. And if you don’t like some of the recommendations I make in the book, you can just change it to say whatever you like! For example, you can add a chapter titled “Why Steve Luck is wrong about filtering.”

If you are a PI: the combination of the online course, this book, and the resources provided by PURSUE gives you a great way to get new students started in the lab. I’m hoping this makes it easier for faculty to get more undergrads involved in ERP research.

Representational Similarity Analysis: A great method for linking ERPs to computational models, fMRI data, and more

Representational similarity analysis (RSA) is a powerful multivariate pattern analysis method that is widely used in fMRI, and my lab has recently published two papers applying RSA to ERPs. We’re not the first researchers to apply RSA to ERP or MEG data (see, e.g., Cichy & Pantazis, 2017; Greene & Hansen, 2018). However, RSA is a relatively new approach with amazing potential, and I hope this blog inspires more people to apply RSA to ERP data. You can also watch a 7-minute video overview of RSA on YouTube. Here are the new papers:

  • Kiat, J.E., Hayes, T.R., Henderson, J.M., Luck, S.J. (in press). Rapid extraction of the spatial distribution of physical saliency and semantic informativeness from natural scenes in the human brain. The Journal of Neuroscience. https://doi.org/10.1523/JNEUROSCI.0602-21.2021 [preprint] [code and data]

  • He, T., Kiat, J. E., Boudewyn, M. A., Segae, K., & Luck, S. J. (in press). Neural correlates of word representation vectors in natural language processing models: Evidence from representational similarity analysis of event-related brain potentials. Psychophysiology. https://doi.org/10.1111/psyp.13976 [preprint] [code and data]

Examples

Before describing how RSA works, I want to whet your appetite by showing some of our recent results. Figure 1A shows results from a study that examined the relationship between scalp ERP data and a computational model that predicts the saliency of each location in a natural scene. Fifty different scenes were used in the experiment, and the waveform in Figure 1A shows the representational link between the ERP data and the computational model at each moment in time. You can see that the link onsets rapidly and peaks before 100 ms, which makes sense given that the model is designed to reflect early visual cortex. Interestingly, the link persists well past 300 ms. Our study also examined meaning maps, which quantify the amount of meaningful information at each point in a scene. We found that the link between the ERPs and the meaning maps began only slightly after the link with the saliency model. You can read more about this study here.

FIGURE 1

Figure 1B shows some of the data from our new study of natural language processing, in which subjects simply listened to stories while the EEG was recorded. The waveform shows the representational link between scalp ERP data and a natural language processing model for a set of 100 different words. You can see that the link starts well before 200 ms and lasts for several hundred milliseconds. The study also examined a different computational model, and it contains many additional interesting analyses.

In these examples, RSA allows us to see how brain activity elicited by complex, natural stimuli can be related to computational models, using brain activity measured with the high temporal resolution and low cost of scalp ERP data. This technique is having a huge impact on the kinds of questions my lab is now asking. Specifically:

  • RSA is helping us move from simple, artificial laboratory stimuli to stimuli that more closely match the real world.

  • RSA is helping us move from qualitative differences between experimental conditions to quantitative links to computational models.

  • RSA is helping us link ERPs with the precise neuroanatomy of fMRI and with rich behavioral datasets (e.g., eye tracking).

Figure 1 shows only a small slice of the results from our new studies, but I hope they give you an idea of the kinds of things that are possible with RSA. We’ve also made the code and data available for both the language study (https://osf.io/zft6e/) and the visual attention study (https://osf.io/zg7ue/). Some coding skill is necessary to implement RSA, but it’s easier than you might think (especially when you use our code and code provided by other labs as a starting point).

Now let’s take a look at how RSA works in general and how it is applied to ERP data.

The Essence of Representational Similarity Analysis (RSA)

RSA is a general-purpose method for assessing links among different kinds of neural measures, computational models, and behavior. Each of these sources of data has a different format, which makes them difficult to compare directly. As illustrated in Figure 2, ERP datasets contain a voltage value at each of several scalp electrode sites at each of several time points; a computational model might contain an activation value for each of several processing units; a behavioral dataset might consist of a set of eye movement locations; and an fMRI dataset might consist of a set of BOLD beta values in each voxel within a given brain area. How can we link these different types of data to each other? The mapping might be complex and nonlinear, and there might be thousands of individual variables within a dataset, which would limit the applicability of traditional approaches to examining correlations between datasets.

RSA takes a very different approach. Instead of directly examining correlations between datasets, RSA converts each data source into a more abstract but directly comparable format called a representational similarity matrix (RSM). To obtain an RSM, you take a large set of stimuli and use these stimuli as the inputs to multiple different data-generating systems. For example, the studies shown in Figure 1 involved taking a set of 50 visual scenes or 100 spoken words and presenting them as the input to a set of human subjects in an ERP experiment and as the input to a computational model.

As illustrated in Figure 2A, each of the N stimuli gives you a set of ERP waveforms. For each pair of the N stimuli, you can quantify the similarity of the ERPs (e.g., the correlation between the scalp distributions at a given time point), leading to an N x N representational similarity matrix.
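At a single time point, that step is just a correlation matrix. Here is a minimal Matlab sketch (illustrative variable names, using the corr function from the Statistics and Machine Learning Toolbox):

    % erpData: electrodes x stimuli matrix holding the averaged scalp
    % distribution for each of the N stimuli at one time point
    erpRSM = corr(erpData);   % N x N matrix of Pearson correlations between the
                              % scalp distributions for each pair of stimuli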

FIGURE 2

The same N stimuli would also be used as the inputs to the computational model. For each pair of stimuli, you can quantify the similarity of the model’s response to the two stimuli (e.g., the correlation between the patterns of activation produced by the two stimuli). This gives you an N x N representational similarity matrix for the model.

Now we’ve transformed both the ERP data and the model results into N x N representational similarity matrices. The ERP data and the model originally had completely different units of measurement and data structures that were difficult to relate to each other, but now we have the same data format for both the ERPs and the model. This makes it simple to ask how well the similarity matrix for the ERP data matches the similarity matrix for the model. Specifically, we can just calculate the correlation between the two matrices (typically using a rank order approach so that we only assume a monotonic relationship, not a linear relationship).

Some Details

The data shown in Figure 1 used the Pearson r correlation coefficient to quantify the similarity between ERP scalp distributions. We have found that this is a good metric of similarity for ERPs, but other metrics can sometimes be advantageous. Note that many researchers prefer to quantify dissimilarity (distance) rather than similarity, but the principle is the same.

Each representational similarity matrix (RSM) captures the representational geometry of the system that produced the data (e.g., the human brain or the computational model). With the similarity metrics described here, the lower and upper triangles of the RSM are mirror images of each other and are therefore redundant. Similarly, the cells along the diagonal index the similarity of each item to itself and are not considered in cross-RSM comparisons. We therefore use only the lower triangles of the RSMs. As illustrated in Figure 2A, the representational similarity between the ERP data and the computational model is simply the (rank order) correlation between the values in these two lower triangles.

When RSA is used with ERP data, representational similarity is typically calculated separately for each time point. That is, the scalp distribution is obtained at a given time point for each of the N stimuli, and the correlation between the scalp distributions for each pair of stimuli is computed at this time point. Thus, we have an N x N RSM at each time point for the ERP data. Each of these RSMs is then correlated with the RSM from the computational model. If the model has multiple layers, this process is conducted separately for each layer.
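Putting these pieces together, the RSA timecourse for a single participant can be sketched as follows (again with illustrative variable names, not code from our studies):

    % erpData:  electrodes x stimuli x timepoints array for one participant
    % modelRSM: N x N representational similarity matrix from the model
    N        = size(erpData, 2);
    nTimes   = size(erpData, 3);
    lowerTri = find(tril(true(N), -1));   % lower-triangle cells, excluding the diagonal
    rsaTimecourse = zeros(nTimes, 1);
    for t = 1:nTimes
        erpRSM = corr(erpData(:, :, t));  % N x N ERP RSM at this time point
        rsaTimecourse(t) = corr(erpRSM(lowerTri), modelRSM(lowerTri), ...
                                'Type', 'Spearman');  % rank-order correlation
    end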

For example, the waveforms shown in Figure 1 show the (rank order) correlation between the ERP RSM at a given time point and the model RSM. That is, each time point in the waveform shows the correlation between the ERP RSM for that time point and the model RSM.

ERP scalp distributions can vary widely across people, so RSA is conducted separately for each participant. That is, we compute an ERP RSM for each participant (at each time point) and calculate the correlation between that RSM and the model RSM. This gives us a separate ERP-model correlation value for each participant at each time point. The waveforms shown in Figure 1 show the average of the single-participant correlations.

The correlation values in RSA studies of ERPs are typically quite low compared to the correlation values you might see in other contexts (e.g., the correlation between P3 latency and response time). For example, all of the correlation values in the waveforms shown in Figure 1 are less than 0.10. However, this is not usually a problem for the following reasons:

  • The low correlations are mainly a result of the noisiness of scalp ERP data when you compute a separate ERP for each of 50-100 stimuli, not a weak link between the brain and the model.

  • It is possible to calculate a “noise ceiling,” which represents the highest correlation between RSMs that could be expected given the noise in the data. The waveforms shown in Figure 1 reach a reasonably high value relative to the noise ceiling.

  • When the correlation between the ERP RSM and the model RSM is computed for a given participant, the number of data points contributing to the correlation is typically huge. For a 50 x 50 RSM (as in Figure 1A), there are 1225 cells in the lower triangle, so 1225 values from the ERP RSM are being correlated with 1225 values from the model RSM. This leads to very robust correlation estimates.

  • Additional power is achieved from the fact that a separate correlation is computed for each participant.

  • In practice, the small correlation values obtained in ERP RSA studies are scientifically meaningful and can have substantial statistical power.

RSA is usually applied to averaged ERP waveforms, not single-trial data. For example, we used averages of 32 trials per image in the experiment shown in Figure 1A. The data shown in Figure 1B are from averages of at least 10 trials per word. Single-trial analyses are possible but are much noisier. For example, we conducted single-trial analyses of the words and found statistically significant but much weaker representational similarity.

Other Types of Data

As illustrated in Figure 2A, RSA can also be used to link ERPs to other types of data, including behavioral data and fMRI data.

The behavioral example in Figure 2A involves eye tracking. If the eyes are tracked while participants view scenes, a fixation density map can be constructed showing the likelihood that each location was fixated for each scene. An RSM for the eye-tracking data could be constructed to indicate the similarity between fixation density maps for each pair of scenes. This RSM could then be correlated with the ERP RSM at each time point. Or the fixation density RSMs could be correlated with the RSM for a computational model (as in a recent study in which we examined the relationship between infant eye movement patterns and a convolutional neural network model of the visual system; Kiat et al., 2022).

Other types of behavioral data could also be used. For example, if participants made a button-press response to each stimulus, one could use the mean response times for each stimulus to construct an RSM. The value for a given cell would be the absolute difference in mean RT between the two stimuli (a dissimilarity or distance measure in this case).
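As a minimal sketch of that idea (illustrative variable names):

    % meanRT: N x 1 vector of mean response times, one per stimulus
    rtRDM = abs(meanRT - meanRT');   % N x N matrix of |RT_i - RT_j| for each pair of
                                     % stimuli (a dissimilarity/distance matrix)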

RSA can also be used to link ERP data to fMRI data, a process called data fusion (see, e.g., Mohsenzadeh et al., 2019). The data fusion process makes it possible to combine the spatial resolution of fMRI with the temporal resolution of ERPs. It can yield a millisecond-by-millisecond estimate of activity corresponding to a given brain region, and it can also yield a voxel-by-voxel map of the activity corresponding to a given time point. More details are provided in our YouTube video on RSA.