Cross-modal Language Processing in Real Time
In daily life, people communicate about events that are visual and spatial in nature. For example, you may see a woman running to catch a train at the station. The question we ask is how people translate such spatial events into language. In this project we will investigate sign language users, who exploit the visual modality and iconic structures to express space, and compare them with spoken language users, who do not make use of such visually motivated form-meaning mappings. Moreover, we will investigate whether typological differences between sign languages affect how spatial events are encoded.
We will use eye-tracking to address these questions. Eye-trackers have been used extensively to study interactions between vision, attention, and language processing, but they have never been used to investigate spatial cognition in deaf signers. This technique provides a close temporal link between gaze and language that could reveal how signers and spoken language users understand and express spatial events.
This project is supported by a Vici Grant (2015-2020) from the Netherlands Organisation for Scientific Research (NWO) for the project "Giving cognition a hand: Linking spatial cognition to linguistic expression in native and late learners of sign language and bimodal bilinguals", awarded to Prof. Aslı Özyürek.