The objective of our research group is to elucidate the computational and neural strategies underlying human sensorimotor processing, using modeling, psychophysical, and neuroimaging techniques. Our main focus is on how sensory information is transformed into spatial representations and motor actions, and how different spatial and motor representations are learned, updated, and maintained during self-motion.
Optimal control and Bayesian models are developed to guide our research. A vestibular motion platform, a haptic device (Phantom), and robotic manipulanda (vBot and 3Bot), all integrated with virtual reality and combined with kinematic recording of eye, head, and limb movements (Eyelink, Optotrak), facilitate our behavioral experiments. Neuroimaging (fMRI, EEG, and MEG) and perturbation techniques (TMS, tDCS, GVS) are used to identify the neural circuitry and neuronal communication underlying sensorimotor integration, particularly in cerebral cortex.
Our research objective is pursued from a systems neuroscience point of view, with research projects studying topics such as spatial perception, visuospatial updating, multisensory integration, effector selection, motor learning, and sensory-guided eye and limb movements. We also apply our experimental paradigms in clinical settings to understand the fundamental mechanisms underlying disorders in patients with sensorimotor deficits.
Recently, we have also started to use virtual reality and wearable technology, together with methods from data science and artificial intelligence, to connect the laboratory and the real world and to contribute to the field of Naturalistic Neuroscience. The group contributes to the Perception, Action, and Decision Making research theme of the Donders Institute.