After taking this course
- You know the three goals of inference and can give an example of each.
- You understand why in Bayesian inference some quantities (such as posterior distributions) cannot be computed exactly, and you know when and how to use approximate methods.
- You can model realistic problems using (hierarchies of) common probability distributions.
- You can use tools in R and JAGS to fit such a model to real data and draw conclusions from it.
- You can compare different models and find the model that best explains your data (for some definition of ‘best’).
In science, but also in our day-to-day life, we have to come to terms with the fact that we can never know everything. That means we are inherently uncertain about the way things are – whether it is recognizing the person across the street, or deciding which theoretical model best describes the results of our study. Once we acknowledge this uncertainty, the claims we make come with probabilities attached. These probabilities can either reflect the relative number of times some event occurs (e.g. the number of times a coin comes up heads, divided by the total number of flips), or our subjective belief in the event (e.g. 'I believe that for this coin, heads is twice as likely as tails').
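The frequency interpretation can be made concrete with a small simulation. The sketch below (in Python rather than the R used later in the course; the function name and probabilities are illustrative) estimates the probability of heads as a relative frequency, which settles near the true value as the number of flips grows:

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

def frequency_of_heads(n_flips, p_heads=0.5):
    """Estimate P(heads) as the relative frequency over n_flips simulated flips."""
    heads = sum(random.random() < p_heads for _ in range(n_flips))
    return heads / n_flips

# With more flips, the relative frequency gets closer to the true probability.
for n in (10, 1000, 100_000):
    print(n, frequency_of_heads(n))
```

Note that this only captures the first interpretation of probability; a subjective belief such as 'heads is twice as likely as tails' is a statement about credibility, not about repeated flips.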
The first interpretation of probability underlies frequentist statistics, which is the topic of course SOW-BKI107. Here, we explore the second interpretation, which underlies Bayesian statistics. You will learn how a few simple equations give rise to a powerful framework for distributing credibility over possible explanations of your data.
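The central one of these simple equations is Bayes' rule, which updates a prior belief p(θ) about a parameter θ in light of observed data y:

    p(θ | y) = p(y | θ) p(θ) / p(y)

The posterior p(θ | y) redistributes credibility over parameter values after seeing the data. The denominator p(y) is typically the quantity that cannot be computed exactly, which is where approximate methods (such as the sampling performed by JAGS) come in.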