Probabilistic Deep Learning
Course info
Course module: SOW-MKI69
Credits (ECTS): 6
Language of instruction: English
Offered by: Radboud University; Faculty of Social Sciences; Artificial Intelligence
Lecturer: dr. L. Ambrogioni
Contactperson for the course: dr. L. Ambrogioni
Academic year: 2021
SEM2 (31/01/2022 to 15/07/2022)
Starting block:
Course mode:
Registration using OSIRIS: Yes
Course open to students from other faculties: Yes
Waiting list: No
Placement procedure: -
In this course you will learn to implement deep probabilistic models that combine techniques from deep learning and Bayesian statistics. At the end of this course the student is able to: 
  • Understand the theoretical and computational principles behind variational Bayesian inference, normalizing flows and variational autoencoders.
  • Implement state-of-the-art probabilistic deep learning algorithms using PyTorch.
  • Understand the relationships between variational inference and probabilistic reinforcement learning.
  • Implement deep reinforcement learning agents capable of acquiring information (inference) and using it to collect reward in complex environments.
Deep learning has revolutionized the fields of machine learning and AI, achieving human-level performance in several domains. However, standard deep learning techniques cannot quantify the uncertainty of their predictions. As a consequence, agents powered by deterministic deep learning struggle to behave intelligently under uncertainty, as they cannot distinguish what is currently known from what is still unknown.

In this course you will learn how to model uncertainty by combining deep learning with Bayesian statistics. In the first part of the course you will learn the theoretical foundation of gradient-based stochastic variational inference, including advanced techniques such as VAEs and normalizing flows. 
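The core idea behind gradient-based stochastic variational inference is the reparameterization trick: writing a sample from the approximate posterior as a deterministic function of the variational parameters and an external noise source, so that Monte Carlo estimates of the ELBO gradient can flow through the sample. A minimal 1-D sketch (all numbers and the Gaussian target are made up for illustration; the course itself covers the PyTorch versions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target posterior: an unnormalized N(2, 1) density.
def log_target(z):
    return -0.5 * (z - 2.0) ** 2

# Variational family q(z) = N(mu, exp(log_sigma)^2); fit mu and log_sigma by
# stochastic gradient ascent on the ELBO = E_q[log p(z)] + log sigma + const.
mu, log_sigma = 0.0, 0.0
lr = 0.02
for step in range(5000):
    eps = rng.standard_normal()
    sigma = np.exp(log_sigma)
    z = mu + sigma * eps                      # reparameterized sample z ~ q
    dlogp_dz = -(z - 2.0)                     # gradient of log_target at z
    grad_mu = dlogp_dz                        # dz/dmu = 1
    grad_log_sigma = dlogp_dz * sigma * eps + 1.0  # + entropy gradient
    mu += lr * grad_mu
    log_sigma += lr * grad_log_sigma

# mu should approach 2 and exp(log_sigma) should approach 1 (the true posterior).
print(mu, np.exp(log_sigma))
```

The same pathwise-gradient construction underlies variational autoencoders, where the Gaussian parameters are produced by an encoder network instead of being free scalars.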
In the second part of the course you will use these probabilistic deep learning methods to build intelligent agents that can explore their environment, collect information and use it to reach their goal states. In this second part you will learn how to integrate modern deep reinforcement learning techniques such as deep Q-learning with variational inference.
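Deep Q-learning approximates the tabular Q-learning update with a neural network; the update rule itself can be sketched on a made-up 5-state chain environment (states, rewards, and hyperparameters here are illustrative, not from the course materials):

```python
import random

random.seed(0)

# Hypothetical chain: states 0..4, actions 0 (left) / 1 (right), reward 1 on
# reaching the terminal state 4. Tabular sketch of the Q-learning update that
# deep Q-learning approximates with a network:
#   Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
N_STATES, GAMMA, ALPHA, EPSILON = 5, 0.9, 0.5, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    done = s2 == N_STATES - 1
    return s2, (1.0 if done else 0.0), done

def greedy(s):
    # Greedy action with random tie-breaking.
    best = max(Q[s])
    return random.choice([a for a in (0, 1) if Q[s][a] == best])

for episode in range(300):
    s, done = 0, False
    while not done:
        # Epsilon-greedy exploration.
        a = random.randrange(2) if random.random() < EPSILON else greedy(s)
        s2, r, done = step(s, a)
        target = r if done else r + GAMMA * max(Q[s2])
        Q[s][a] += ALPHA * (target - Q[s][a])
        s = s2

# The learned greedy policy should move right in every non-terminal state.
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(N_STATES - 1)]
print(policy)
```

In the deep variant covered in the course, the table `Q` is replaced by a network Q(s, a; θ) trained on the same temporal-difference target.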

Presumed foreknowledge
The course requires proficiency in Python programming, deep learning and Bayesian statistics. Prior knowledge of the basics of reinforcement learning is highly recommended.
Test information
The final grade will be assigned based on the result of a final exam that will involve the detailed analysis of one or more recently published papers.
The course is a 6 EC semester course. The first part of the course introduces the framework of stochastic variational inference, variational autoencoders and normalizing flows. The second part focuses on deep reinforcement learning, in particular on the topic of goal-directed exploration; the techniques implemented in the first part will be used as components in reinforcement learning agents. The students will implement these techniques in Python using PyTorch in their take-home assignments.

The final exam comprises theoretical questions, implementation questions and the analysis of a recently published research paper.
Instructional modes
Attendance mandatory: Yes

Test weight: 1
Test type: Exam
Opportunities: Block SEM2, Block SEM2