Dr J.H.P. Kwisthout (Johan)
Programme director - Artificial Intelligence
Associate professor - Artificial Intelligence
Principal Investigator - Donders Institute for Brain, Cognition and Behaviour
Thomas van Aquinostraat 4
6525 GD NIJMEGEN
I am internationally recognized as an expert on the computational complexity of Bayesian networks, which was also the topic of my PhD research. After my PhD I specialized in the complexity of approximating the most probable explanation for observed variables in such networks, and proposed a new heuristic, the Most Frugal Explanation, for this computationally intractable problem. In addition, I proposed new parameterized complexity classes that capture the notion of efficient randomized computation. In contrast to the traditional parameterized complexity class FPT, these new classes parameterize the probability of answering correctly, rather than the computation time.

In recent years I have become interested in (resource-bounded) probabilistic inference in the brain, in particular in the context of the "Predictive Processing" account in neuroscience. Together with researchers at the Donders Institute for Brain, Cognition and Behaviour, I have contributed computational models and mathematical formalizations to this account. In particular, I identified the level of detail of predictions, and the trade-off between making precise predictions and predictions that are likely to be correct, as crucial aspects of the theory. I have also contributed computational models of various mechanisms of prediction error minimization and studied the computational complexity of sub-processes in the Predictive Processing account.
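The Most Probable Explanation problem mentioned above can be illustrated on a toy network. The sketch below uses a two-cause "sprinkler" network with hypothetical conditional probabilities (chosen purely for illustration) and finds the MPE by brute-force enumeration:

```python
from itertools import product

# Toy network with hypothetical CPTs, for illustration only:
# Rain and Sprinkler are independent causes of WetGrass.
p_rain = {True: 0.2, False: 0.8}
p_sprinkler = {True: 0.1, False: 0.9}
p_wet = {(True, True): 0.99, (True, False): 0.90,
         (False, True): 0.80, (False, False): 0.05}  # P(wet | rain, sprinkler)

def joint(rain, sprinkler, wet):
    """Joint probability of a full assignment to the network."""
    pw = p_wet[(rain, sprinkler)]
    return p_rain[rain] * p_sprinkler[sprinkler] * (pw if wet else 1.0 - pw)

def mpe(wet_observed):
    """Most Probable Explanation: the assignment to the unobserved
    variables (Rain, Sprinkler) that maximises the joint probability
    together with the evidence."""
    return max(product([True, False], repeat=2),
               key=lambda rs: joint(rs[0], rs[1], wet_observed))

print(mpe(True))  # (True, False): rain alone best explains the wet grass
```

With n unobserved binary variables this enumeration visits 2^n assignments, which is exactly the intractability that motivates heuristics such as the Most Frugal Explanation and parameterized complexity analyses.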
My current research focuses on both conceptual aspects of Predictive Processing, such as how generative models develop in infancy, and computational aspects, such as how the proposed algorithms for these computations can be rendered tractable. I am also more generally interested in resource-bounded approximate computation in the brain and in neuromorphic hardware, in particular in computational (in)tractability relative to the brain's or chip's resources.
- Johan Kwisthout, Harold Bekkering, and Iris van Rooij (2017). To be precise, the details don't matter: On predictive processing, precision, and level of detail of predictions. Brain and Cognition, 112, 84-91.
- Johan Kwisthout (2015). Most Frugal Explanations in Bayesian Networks. Artificial Intelligence, 218, 56-73.
- Johan Kwisthout (2011). Most Probable Explanations in Bayesian Networks: Complexity and Tractability. International Journal of Approximate Reasoning, 52(9), 1452-1469.
- Johan Kwisthout and Iris van Rooij (2020). Computational resource demands of a predictive Bayesian brain. Computational Brain & Behavior, 3(2), 174-188.
- Iris van Rooij, Mark Blokpoel, Johan Kwisthout, and Todd Wareham (2019). Cognition and Intractability: A Guide to Classical and Parameterized Complexity Analysis. Cambridge, UK: Cambridge University Press.
Research grants and prizes
- 2022 - NWO TTW Perspectief Grant (as main applicant and project leader) "Personalised Care in Oncology".
- 2022 - NWO ENW M1 grant for a 4-year PhD project "From approximate weighted model counting to approximate Bayesian inference".
- 2020 - Internal DCC grant for a 4-year PhD project "How to grow an internal model: A toolbox for the computational modeller". Main applicant: Johan Kwisthout; co-applicant: Iris van Rooij.
- 2017 - Internal DCC grant for a 4-year PhD project "Understanding predictive processing in development: Modelling the generation of generative models". Joint applicants: Johan Kwisthout and Sabine Hunnius.
- 2016 - NWO EW TOP grant (compartment 2) for a 4-year PhD project "Parameterized complexity of approximate Bayesian inferences".
- 2020 - 2028 Neuromorphic computer architectures, inspired by the structure of the brain, co-locate computation and memory in artificial (spiking) neural networks. This allows for a new way of thinking about algorithms and data, and for designing energy-efficient algorithms that encode information in time-dependent spiking behaviour. Theory, however, is seriously lagging behind these hardware innovations: we do not yet fully understand, let alone utilize, the potential and limitations of the new architectures. In this research project we lay the foundations for an urgently needed computational complexity framework for neuromorphic hardware, focusing on energy as a vital resource. Foundational algorithmic and complexity-theoretic work is augmented by translating abstract algorithms on generic computing devices to actual implementations on Intel's Loihi architecture, allowing us to validate and refine the theoretical models.
- 2022 - 2026 Bayesian inference - changing a probability distribution in the light of new data - is a crucial computation in many AI systems which, unfortunately, is computationally costly, even if only an approximate solution is sought. We translate this problem into another one: weighted counting of the solutions to a logical formula. We then approximate the total count with a novel technique, dividing the solutions into groups of approximately equal size that can each be counted easily. We prove that this also yields a fast approximate solution to the original inference problem.
- 2020 - 2024 An influential theoretical position in cognitive neuroscience is that the brain is a 'prediction machine' that uses internal generative models to guide perception and action. Integrating new information with existing internal models is a crucial 'operation' that underlies learning and education, but also development, behaviour change, creativity, incident recovery, and drug rehabilitation. When cast in Bayesian terms (e.g., predictive coding), this integration translates into (non-parametric) changes to the structure of existing models, such as the introduction of new variables, the conditioning of an existing variable on a new contextual dependency, or the re-allocation of probability mass over variables. There is currently no formal framework that systematically addresses these operations, which seriously constrains scientists who aim to use Bayesian computational models to describe (neuro-)cognitive phenomena (and interpret data) that require structure changes in internal models.
- 2023 - 2028 Each year, more people are diagnosed with cancer. They often receive a standardized medical treatment that disregards the patient's expectations about their quality of life during and after treatment. The PersOn programme uses artificial intelligence to analyse all available data - about the type of cancer as well as the individual patient - to predict the expected outcomes of various treatment options. Using this information, the clinician and the patient will jointly choose the personalised care path that maximises the probability of the highest quality of life.
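As a rough illustration of the neuromorphic complexity project above, the sketch below simulates a leaky integrate-and-fire neuron, a standard abstraction in neuromorphic computing. The parameters and the use of the spike count as an energy proxy are assumptions made for illustration only, not the project's actual model:

```python
def lif_run(inputs, tau=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron (illustrative abstraction).
    Returns the spike train and the total number of spikes; on spiking
    hardware the spike count is a crude proxy for energy consumption."""
    v, spikes = 0.0, []
    for x in inputs:
        v = tau * v + x           # leaky integration of the input current
        if v >= threshold:        # threshold crossing emits a spike
            spikes.append(1)
            v = 0.0               # reset the membrane potential
        else:
            spikes.append(0)
    return spikes, sum(spikes)

spike_train, energy = lif_run([0.6, 0.6, 0.0, 0.9, 0.5])
print(spike_train, energy)  # [0, 1, 0, 0, 1] 2
```

Counting spikes per computation, rather than steps, is one way to make energy a first-class resource in complexity analyses of such hardware.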
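The weighted model counting project above builds on the correspondence between Bayesian inference and weighted counting of a formula's satisfying assignments. The sketch below makes that correspondence concrete with an exact, exponential-time counter on a hypothetical two-variable example; the project's contribution is precisely to avoid this exhaustive enumeration:

```python
from itertools import product

def wmc(formula, weights, n):
    """Exact weighted model counting by enumeration (illustrative only).
    `formula` is a predicate over a tuple of n Booleans; `weights` maps
    (variable index, truth value) to that literal's weight."""
    total = 0.0
    for assignment in product([False, True], repeat=n):
        if formula(assignment):
            w = 1.0
            for i, value in enumerate(assignment):
                w *= weights[(i, value)]  # multiply the literal weights
            total += w                    # sum over satisfying assignments
    return total

# Hypothetical example: P(A or B) with independent P(A)=0.3, P(B)=0.5.
weights = {(0, True): 0.3, (0, False): 0.7,
           (1, True): 0.5, (1, False): 0.5}
print(wmc(lambda a: a[0] or a[1], weights, 2))  # 0.65 = 1 - 0.7 * 0.5
```

Here the weighted count of the formula's models equals the probability of the corresponding event, which is why fast approximate counting translates into fast approximate Bayesian inference.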
- AI Research Colloquium
- AI for Healthcare
- AI in the Connected World
- Artificial Intelligence: Principles & Techniques
- Bachelor Thesis
- Cognitive Computational Neuroscience
- Introduction Artificial Intelligence
- Master Thesis Computing Science
- Master Thesis Information Sciences
- Multi Agent Systems
- Neuromorphic Computing
- Neuromorphic Engineering
- Reinforcement Learning
- Research Internship
- Research Seminar