In this course, students learn to use methods derived from computational complexity theory for analyzing the (in)tractability of cognitive models, and for identifying sources of complexity in a model. Students also learn how this knowledge can be used to make model revisions that yield tractability. As two competing models may differ in the nature of their sources of complexity, the analyses can also yield novel empirical predictions that can be used to test the models.

The functioning of the human brain can be studied and modeled at different levels of abstraction, ranging from the neural implementation level to a cognitive computational level. Ideally, models postulated at the computational level are consistent with the brain resources available at the neural level. Building computational models that fit with human brain resources can be quite challenging. This is illustrated by the fact that many computational models in Cognitive (Neuro)science postulate brain computations that are, on closer inspection, computationally intractable. Here 'computational intractability' means that the postulated computations require more resources (such as time, space, memory, hardware) than a human mind/brain or any computational mechanism has realistically available.
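As a toy illustration of what such resource demands look like, consider exhaustively scoring every subset of a set of candidate items, a pattern that arises in many formal models of choice and inference. This is a minimal sketch, not taken from the course materials; the subset-scoring task and all names are hypothetical. The point is only that the search space doubles with each added item, so the computation quickly outstrips any realistic resource budget:

```python
from itertools import combinations

def best_subset(items, value):
    """Exhaustively score every subset of `items` and return the best one.

    The loops visit all 2**len(items) subsets, so the runtime doubles with
    each item added -- the hallmark of an intractable (exponential-time)
    computation. (Hypothetical example; not an algorithm from the course.)
    """
    best, best_score = (), float("-inf")
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            score = value(subset)
            if score > best_score:
                best, best_score = subset, score
    return best

# Toy scoring rule (hypothetical): prefer subsets whose sum is closest to 10.
items = [2, 3, 5, 7, 8]
result = best_subset(items, lambda s: -abs(sum(s) - 10))
```

With 5 items the search visits 32 subsets; with 50 items it would visit over 10^15, which is the kind of scaling that makes a postulated computation implausible as a model of what brains actually do.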
Examples of intractable computational models can be found in almost all cognitive domains, including perception, learning, language, planning, decision-making, communication, and reasoning. Intractability makes these models psychologically and neurally implausible as cognitive computational level models of brain functioning. However, there are ways to deal with this problem: by identifying sources of complexity in these models and investigating whether they can be removed from the model without loss of explanatory power. This course covers several concepts and techniques that can be used to this end.