Bayesian prerequisites | Bayes Rule
Bayes Rule
Bayes rule is given as:
\[p(\theta \mid X) = \frac{p(X \mid \theta)\, p(\theta)}{p(X)}\]where:
- $\theta$ are the parameters
- $X$ are the observations
- $p(X)$ is the evidence
- $p(X \mid \theta)$ is the likelihood (how well the parameters explain our data)
- $p(\theta)$ is the prior
- $p(\theta \mid X)$ is the posterior (the probability of the parameters after we observe the data)
Here we use lowercase $p$ for probability densities of continuous distributions and uppercase $P$ for probabilities of discrete distributions.
Probability tells us the chance of observing an outcome given a distribution of the data.
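As a quick illustration with hypothetical numbers (a disease test with 1% prevalence, 95% sensitivity, and a 5% false positive rate), Bayes rule gives the posterior probability of disease given a positive test:
\[P(D \mid +) = \frac{P(+ \mid D)\,P(D)}{P(+ \mid D)\,P(D) + P(+ \mid \neg D)\,P(\neg D)} = \frac{0.95 \times 0.01}{0.95 \times 0.01 + 0.05 \times 0.99} \approx 0.16\]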
Probabilistic model
Here we introduce the concept of a probabilistic model together with the concepts of:
- the likelihood
- prior
- posterior
- MLE (Maximum Likelihood Estimation) and
- MAP (Maximum A Posteriori Estimation)
A probabilistic model is the joint distribution of all its random variables.
The likelihood tells us, given some parameters, how well the model explains the observed data. The (negative log) likelihood is similar to the concept of a loss function in the classical approach.
The prior is similar to the concept of regularization in the classical approach.
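To make both analogies concrete: taking the negative log of Bayes rule and dropping the $\theta$-independent evidence term shows that the negative log likelihood plays the role of the loss and the negative log prior the role of the regularizer; for example, a zero-mean Gaussian prior yields an L2 penalty (the $\sigma$ here is an assumed prior scale, not something fixed by the model):
\[-\log p(\theta \mid X) = \underbrace{-\log p(X \mid \theta)}_{\text{loss}} \;\underbrace{-\,\log p(\theta)}_{\text{regularizer}} + \text{const}, \qquad -\log \mathcal{N}(\theta \mid 0, \sigma^2 I) = \frac{1}{2\sigma^2}\lVert\theta\rVert^2 + \text{const}\]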
The posterior is our more specific knowledge of the parameters after we combine the prior knowledge with the observed data using Bayes rule. The process of getting the posterior from the prior is called inference, which is a cute name for deriving conclusions after we have learned something.
The evidence, also called the marginal likelihood, is what you get when you integrate the likelihood over the prior:
\[p(X)=\int_\theta p(X \mid \theta) \,p(\theta)\, d\theta\]
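As a minimal sketch of this integral, assuming a Bernoulli likelihood with a Beta prior (hypothetical data of $k$ heads in $n$ flips), the evidence can be computed by numerically integrating the likelihood times the prior and checked against the closed form for this conjugate case:

```python
import numpy as np
from scipy import integrate, special, stats

# Hypothetical data: n coin flips with k heads (Bernoulli likelihood).
n, k = 10, 7
alpha, beta_ = 2.0, 2.0  # assumed Beta(2, 2) prior on theta

def likelihood_times_prior(theta):
    # p(X | theta) * p(theta): Bernoulli likelihood times Beta prior density.
    return theta**k * (1 - theta)**(n - k) * stats.beta.pdf(theta, alpha, beta_)

# Evidence p(X) = integral over theta of p(X | theta) p(theta) d theta.
evidence_numeric, _ = integrate.quad(likelihood_times_prior, 0.0, 1.0)

# Closed form in the conjugate Beta-Bernoulli case: B(alpha + k, beta + n - k) / B(alpha, beta).
evidence_exact = special.beta(alpha + k, beta_ + n - k) / special.beta(alpha, beta_)

print(evidence_numeric, evidence_exact)  # the two values should agree
```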
Similarly to the posterior over model parameters, we can compute a posterior over latent variables, but then we need a prior over the latent variables $p(z)$.
\[p(z \mid X) =\frac{p(X \mid z) p(z)}{p(X)}\]The problem with inference over latent variables is that we need the likelihood conditioned only on the latent variables, which requires integrating the parameters out:
\[p(X \mid z) = \int_\theta p(X \mid z,\theta)\,p(\theta)\,d\theta\]Sometimes it is easier to compute the posterior distribution over latent variables conditioned on the model parameters:
\[p(z \mid X, \theta) =\frac{p(X \mid z, \theta) p(z)}{p(X \mid \theta)}\]where the denominator $p(X \mid \theta)$ is obtained by marginalizing out the latent variables:
\[p(X \mid \theta) = \int_z p(X \mid z,\theta)\,p(z)\,dz\]
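For a discrete latent variable the integral becomes a sum. A minimal sketch, assuming a two-component Gaussian mixture where the latent $z$ picks the component with probabilities $p(z)$ and $\theta$ holds the component means and a shared standard deviation:

```python
import numpy as np
from scipy import stats

# Hypothetical model: z in {0, 1} picks a Gaussian component.
means = np.array([-2.0, 3.0])    # part of theta: component means
sigma = 1.0                      # part of theta: shared standard deviation
weights = np.array([0.4, 0.6])   # prior p(z) over the two components

x = np.array([0.5, -1.2, 2.8])   # observations X

# p(x_i | theta) = sum_z p(x_i | z, theta) p(z), computed per observation.
per_component = stats.norm.pdf(x[:, None], loc=means[None, :], scale=sigma)  # p(x_i | z, theta)
marginal = per_component @ weights                                           # p(x_i | theta)

# Assuming i.i.d. observations, p(X | theta) is the product over data points.
print(marginal.prod())
```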
MLE and MAP are point estimators of the parameters, and finding them is an optimization problem. MLE seeks the set of parameters that best fits the data; it uses the likelihood function (the minimum of the negative log likelihood) to measure the fit.
MAP uses the minimum of the negative log posterior to estimate the Maximum a Posteriori fit. We also say that MAP is a point summary of the posterior.
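As a minimal sketch of the two estimators, again assuming the hypothetical Bernoulli likelihood with a Beta prior from above, both minimizations can be done numerically and checked against the known closed forms; the MAP estimate is pulled from the data toward the prior:

```python
import numpy as np
from scipy import optimize, stats

n, k = 10, 7             # hypothetical data: 7 heads in 10 flips
alpha, beta_ = 2.0, 2.0  # assumed Beta(2, 2) prior

def neg_log_likelihood(theta):
    # Negative log of the Bernoulli likelihood p(X | theta).
    return -(k * np.log(theta) + (n - k) * np.log(1 - theta))

def neg_log_posterior(theta):
    # Negative log posterior up to a constant: NLL minus log prior.
    return neg_log_likelihood(theta) - stats.beta.logpdf(theta, alpha, beta_)

mle = optimize.minimize_scalar(neg_log_likelihood, bounds=(1e-6, 1 - 1e-6), method="bounded").x
map_ = optimize.minimize_scalar(neg_log_posterior, bounds=(1e-6, 1 - 1e-6), method="bounded").x

print(mle, k / n)                                       # MLE closed form: k / n = 0.7
print(map_, (k + alpha - 1) / (n + alpha + beta_ - 2))  # MAP closed form: 8 / 12 ~ 0.667
```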
Besides estimating parameters, probabilistic models are used for inference and prediction. The latter problems involve difficult integrals unless we have a conjugate prior, so that the posterior has a closed form.
A probabilistic model is specified by the joint distribution of all its random variables. The joint distribution alone does not show how the random variables are connected; to express the connections between random variables we use graphs.
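For example, the latent-variable model used above factorizes as below (assuming $z$ and $\theta$ are a priori independent), which corresponds to a graph with edges from $z$ and $\theta$ into $X$:
\[p(X, z, \theta) = p(X \mid z, \theta)\, p(z)\, p(\theta)\]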
Conjugate prior for a likelihood function
A prior is said to be conjugate for a likelihood function if the posterior stays in the same family of distributions as the prior.
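For example, in the standard Beta-Bernoulli case used in the sketches above, a $\mathrm{Beta}(\alpha, \beta)$ prior combined with a Bernoulli likelihood for $k$ heads in $n$ flips gives a posterior that is again a Beta distribution:
\[p(\theta \mid X) \propto \underbrace{\theta^{k}(1-\theta)^{n-k}}_{\text{likelihood}}\;\underbrace{\theta^{\alpha-1}(1-\theta)^{\beta-1}}_{\text{prior}} = \theta^{\alpha+k-1}(1-\theta)^{\beta+n-k-1} \;\Rightarrow\; \mathrm{Beta}(\alpha+k,\ \beta+n-k)\]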
Matches: