Speech Perception in Audiovisual Communication (SPEAC)

How is it possible that we can easily hold a conversation with someone even if that person shouts over several other talkers, speaks in busy street noise, wears a face mask, talks very fast, has an unfamiliar accent, or says 'uhm' all the time?

The human brain is uniquely equipped to perceive the speech of those around us – even in quite challenging listening environments. At the SPEAC lab, we investigate the psychological and neurobiological mechanisms that underlie the remarkable human ability of spoken communication. We specifically focus on how humans integrate input from multiple modalities, including visual cues such as lip movements, facial expressions, and hand gestures.

The work we do contributes to a better understanding of how multimodal spoken communication usually takes place so smoothly. For instance, how do seemingly meaningless up-and-down hand movements, known as beat gestures, influence which words we hear? How do listeners manage to understand talkers in challenging listening conditions, such as in loud background noise or with competing talkers around? How do listeners 'tune in' to a particular talker with their own idiosyncratic pronunciation habits? What is the role of context (acoustic, semantic, and situational) in speech processing? Finally, we develop methodological tools to facilitate research in the speech sciences.

The behavioral experiments we run include (i) speech categorization tasks with artificially manipulated videos (what's this word?); (ii) speech-in-noise intelligibility experiments (what's this sentence?); and (iii) various psycholinguistic paradigms such as repetition priming (e.g., in lexical decision). We use eye-tracking to study the time course of speech processing on a millisecond timescale (e.g., the visual world paradigm). We also apply neuroimaging and neurostimulation techniques (EEG, MEG, tACS) to uncover the neurobiological mechanisms involved in the temporal decoding of speech, with a particular focus on oscillatory dynamics.

Want to know more? Check out our lab website at https://hrbosker.github.io, which includes demos of the kinds of experiments we run.

Research group information


Contact information

Postal address
Postbus 9104
6500 HE NIJMEGEN