DCCN Colloquium: Bradley Love

Thursday 10 October 2024, 2 pm - 3 pm
Taming the neuroscience literature with predictive and explanatory models

Models can help scientists make sense of an exponentially growing literature. 

In the first part of the talk, I will discuss using models as predictive tools. In the BrainGPT.org project, we use large language models (LLMs) to order the scientific literature. On BrainBench, a benchmark that involves predicting experimental results from methods, we find that LLMs exceed the capabilities of human experts. Because the confidence of LLMs is calibrated, they can team with neuroscientists to accelerate scientific discovery.

In the second part of the talk, I focus on models that can provide explanations bridging behaviour and brain measures. Unlike predictive models, explanatory models can offer interpretations of key results. I'll discuss work suggesting that intuitive cell types (e.g., place, grid, and concept cells) are of limited scientific value and naturally arise in complex networks, including random networks. In this example, the explanatory model serves as a baseline that should be surpassed before making strong scientific claims. I'll end by noting the complementary roles explanatory and predictive models play.

When
Thursday 10 October 2024, 2 pm - 3 pm
Speaker
Bradley Love (University College London)