Speech and Learning

Our group seeks to understand how humans learn and recognize the building blocks of spoken language: its sounds and words. How are the consonants and vowels of speech recognized? How are larger prosodic structures (syllables, lexical stress patterns) recognized? How are words learned, remembered, recognized and produced in first and second languages? How do children become and remain such expert listeners in their native language, and why are listening and speaking often so much harder in a nonnative language? A key concept that we investigate is plasticity in speech processing: language users are continuously tuning in to speech (e.g. to the characteristics of individual talkers and to the demands of different listening contexts) and learning new words throughout their lives.

Research in the group bridges across disciplines: cognitive psychology, linguistics, phonetics and neuroscience.  We use behavioral, neuroscientific and computational approaches to examine speech processing in adults and children.

In past research, group members have studied how spoken words are recognized, focusing for example on the lexical segmentation problem – the fact that speech lacks reliable word boundary cues (it has no equivalent to the spaces between words in print). Insights into segmentation came from many crosslinguistic comparisons (including between signed and spoken language). We have also gained insights into how listeners solve the variability problem – the fact that spoken words are acoustically hugely variable, not only segmentally but also prosodically. Our work on variability has advanced understanding of perceptual learning in speech (especially of how listeners tune in to new talkers). We have developed a computational model of continuous speech recognition, the Bayesian model Shortlist B. We have also made discoveries about how new sounds (i.e. in a nonnative language) are learned, and about how new words (in the native language and in a new language) are learned and remembered. Learning has also been used successfully as a tool to investigate the mental representation of sounds and words. Other past work has clarified the relationship between speech recognition and speech production. We have also found ways to apply knowledge about language learning to improve language education (e.g. learning nonnative speech sounds, literacy education).
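The Bayesian core of a model like Shortlist B can be caricatured as applying Bayes' rule over a set of candidate words. The sketch below is illustrative only – the lexicon and probability values are invented, and this is not the actual Shortlist B implementation:

```python
# Toy sketch of Bayesian word recognition (illustrative only, not the
# actual Shortlist B model). Each candidate word has a prior (e.g. from
# word frequency) and a likelihood of the acoustic evidence given that word.

def posterior(priors, likelihoods):
    """Return P(word | evidence) for every candidate word via Bayes' rule."""
    unnorm = {w: priors[w] * likelihoods[w] for w in priors}
    total = sum(unnorm.values())
    return {w: p / total for w, p in unnorm.items()}

# Invented example: an input that is ambiguous between "ship" and "sheep".
priors = {"ship": 0.6, "sheep": 0.4}        # e.g. relative word frequency
likelihoods = {"ship": 0.2, "sheep": 0.7}   # fit of each word to the acoustics

post = posterior(priors, likelihoods)
best = max(post, key=post.get)              # the recognized word
```

Here a less frequent word can still win recognition when the acoustic evidence favors it strongly enough, which is the sense in which such models weigh prior knowledge against the signal.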

In ongoing research, we are exploring inter-individual variability in several domains: learning sounds, words and grammar in new languages, pronunciation in a second language, and processing in the native language.  We are also continuing to develop theoretical models of spoken-word recognition and to discover more about the role of prosody in speech processing. 

An NWO-funded (SSH Open Competition) project on the variability problem has recently started. We will explore the hypothesis that listeners store knowledge about how individual talkers speak and 'plug in' that knowledge as they recognize those individuals' words. We will build a new computational model with 'plug-ins' (the Adaptive Bayesian Continuous speech [ABC] model) and run word-learning experiments to test it.
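A minimal sketch of the 'plug-in' idea under this hypothesis: talker-specific knowledge reweights how acoustic evidence is interpreted during Bayesian word recognition. All names and numbers below are hypothetical – the ABC model has yet to be built, and this is not its design:

```python
# Hypothetical illustration of talker-specific 'plug-ins' in Bayesian word
# recognition: the same acoustic input is evaluated against likelihoods
# adjusted for what the listener knows about the current talker.

def recognize(priors, likelihoods, talker_plugin=None):
    """Posterior over words; an optional talker plug-in reweights likelihoods."""
    if talker_plugin:
        likelihoods = {w: likelihoods[w] * talker_plugin.get(w, 1.0)
                       for w in likelihoods}
    unnorm = {w: priors[w] * likelihoods[w] for w in priors}
    total = sum(unnorm.values())
    return {w: p / total for w, p in unnorm.items()}

# Invented example: input acoustically ambiguous between "pin" and "pen".
priors = {"pin": 0.5, "pen": 0.5}
likelihoods = {"pin": 0.4, "pen": 0.4}

# Knowledge of a talker whose vowels make "pen" the more plausible parse.
talker_plugin = {"pen": 1.5}

without_plugin = recognize(priors, likelihoods)                # 50/50 split
with_plugin = recognize(priors, likelihoods, talker_plugin)    # favors "pen"
```

The contrast between the two calls captures the hypothesis in miniature: identical acoustics, different recognition outcomes, depending on stored talker knowledge.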

Research group information

Click on one of the links below for more information about this research group or contact one of the members of this group.

Contact information

Postal address
Postbus 9104
6500 HE Nijmegen