Unveiling the computational nature of long-term memory is one of the greatest open challenges of neuroscience and artificial intelligence. Human beings effortlessly form vivid memories of individual and often inconsequential occurrences, which can be re-experienced decades after the event. Rather than being a faithful encoding of past events, our memories are largely reconstructed from contextual information, blurring the line between memory and imagination. Recent breakthroughs in generative machine learning, which led to the impressive capabilities of diffusion models and autoregressive transformers, have opened the door to a mechanistic and algorithmic understanding of this complex interplay between memory and creative generation.
In this lab, we work to understand and exploit these phenomena by combining generative machine learning methods with ideas from neuroscience, probability theory, and theoretical physics. We aim both to advance the state of the art in generative artificial intelligence and to use these insights to shed light on the continuum connecting human memory and imagination. In addition to this theoretical work, we conduct applied research in probabilistic inference, deep learning, and medical prediction and decision making.