Thesis defense Alex Brandmeyer (Donders Serie 146)
19 March 2014
Promotors: Prof.dr. P. Desain, Prof.dr. J. McQueen. Copromotor: Dr. M. Sadakata
Auditory perceptual learning via decoded EEG neurofeedback: a novel paradigm
Our experiences of sound are so much more than just sound: the notes, rhythms and instrumentation of our favorite songs, the familiar voices of colleagues, friends and loved ones during conversations, or the powerful words of a moving speech. We perceive these sounds effortlessly and transparently, unaware of the complex neurophysiological mechanisms that underlie our experiences. Using electroencephalography (EEG) and event-related potentials (ERPs), differences in the brain responses of native (L1) and non-native (L2) speakers to speech sound contrasts can be observed. Such differences are due to the long-term effects of auditory perceptual learning: our brains adapt to the specific linguistic and auditory environments in which we live. The application of multivariate pattern classification techniques to this type of neuroimaging data allows these brain responses to be analyzed in real time, and can serve as the basis for neurofeedback paradigms designed to enhance perceptual learning during the acquisition of a new language or during musical training. Furthermore, the presentation of this type of neurofeedback can enhance specific components of the auditory ERP underlying perceptual discrimination and categorization processes, suggesting that further development of this paradigm could lead to its eventual use in music and language education.