Oscillatory Dynamics During Gestural Enhancement of Degraded Speech
Face-to-face communication typically involves binding auditory input to visual input such as lip movements and co-speech gestures. Iconic gestures in particular, such as a drinking gesture, can help a listener understand speech in adverse listening conditions. In this project we investigate how such visual input can enhance speech comprehension, in both clear and adverse listening conditions. Within the Language in Interaction consortium, this project aims to investigate how different brain areas communicate with each other when a listener couples iconic gestures to speech.
Using magnetoencephalography (MEG) and electroencephalography (EEG), we investigate whether oscillatory synchronization within and between brain regions predicts the degree of integration and comprehension of speech coupled with co-speech gestures when speech is degraded. We study these enhancement effects, oscillatory dynamics, and ERPs/ERFs in both native and non-native listeners, to understand whether the two groups benefit similarly from these visual cues. Additionally, we investigate whether a general brain mechanism can account for these enhancement effects.
Dutch Science Foundation (NWO) Gravitation Programme - Language in Interaction Consortium (2014 - 2018)