# ERA Toolbox Wiki
## Description
The ERP Reliability Analysis (ERA) toolbox is an open-source MATLAB program that uses generalizability (G) theory to evaluate the reliability of ERP data. The purpose of the toolbox is to characterize the dependability (the G-theory analog of reliability) of ERP scores, facilitating its calculation on a study-by-study basis and increasing the reporting of these estimates.
The ERA toolbox provides information about the minimum number of trials needed for dependable ERP scores and describes the overall dependability of ERP estimates. All information provided by the ERA toolbox is stratified by group and condition to allow the user to directly compare dependability (e.g., a particular group may require more trials to achieve an acceptable level of dependability than another group).
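The intuition behind these trial-count recommendations can be sketched with the standard single-facet G-theory dependability coefficient. This is the general textbook form, shown here for illustration; the exact variance-component models implemented in the toolbox follow Baldwin, Larson, and Clayson (2015):

```latex
% Dependability of a k-trial average ERP score (single-facet design):
%   sigma^2_p : between-person (true score) variance
%   sigma^2_e : within-person (trial-to-trial error) variance
%   k         : number of trials averaged
\phi_k = \frac{\sigma^2_{p}}{\sigma^2_{p} + \sigma^2_{e}/k}
```

Because the error term shrinks as $k$ grows, $\phi_k$ increases with trial count; solving $\phi_k \geq$ some threshold (e.g., .80) for $k$ gives the minimum number of trials needed for dependable scores in a given group and condition.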
Instructions for downloading the toolbox can be found here.
### Why another toolbox?
Reliability is a property of scores (the data in hand), not of measures. This means that P3, error-related negativity (ERN), late positive potential (LPP), etc. (insert your favorite ERP component here) is not reliable in some "universal" sense. Since reliability is context dependent, demonstrating the reliability of LPP scores in undergraduates at UCLA does not mean LPP scores recorded from children at NYU will automatically be reliable. Measurement reliability needs to be demonstrated on a population-by-population, study-by-study, component-by-component basis.
The purpose of the ERA toolbox is to facilitate the calculation of dependability estimates to characterize the data in hand. Psychometric studies have been useful in suggesting cutoffs and characterizing the overall reliability of ERP components, but those cutoffs have been used to infer measurement reliability in other studies in other contexts. Why infer reliability from trial counts instead of just measuring reliability directly?
My hope is that the ERA toolbox will help researchers demonstrate the reliability of their data directly, so they don't rely on trial counts to infer reliability. Mismeasurement of ERPs leads to misunderstood phenomena and mistaken conclusions. Mismeasurement compromises validity. Improving measurement, by ensuring score reliability, improves our trust of inferences drawn from observed scores and the likelihood of our findings replicating.
## Citations
The formulas implemented in the ERA Toolbox were developed by Dr. Scott Baldwin. Information about the formulas can be found in the following paper:
Baldwin, S. A., Larson, M. J., & Clayson, P. E. (2015). The dependability of electrophysiological measurements of performance monitoring in a clinical sample: A generalizability and decision analysis of the ERN and Pe. Psychophysiology, 52, 790-800. doi: 10.1111/psyp.12401
## License
Copyright (C) 2016 Peter E. Clayson
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program (gpl.txt). If not, see http://www.gnu.org/licenses/.