Thanks to Niall MacCrann (OSU) and Joe Zuntz (ROE) for triggering this discussion and pushing me to finally put this quick note online.

Here is a simple premise: we constrain the parameters of a model, for example by inferring their posterior distribution, and then consider extensions of this model, add parameters, and repeat the analysis. One obvious way to compare models in the Bayesian framework is to use Bayes factors, which involve evidence ratios.

In some cases, some of the extra parameters may be unconstrained. Do those unconstrained parameters affect the Bayesian evidence?

It is not trivial to answer this question by looking at the generic formulation of Bayes factors. But it can be addressed by using the Savage-Dickey Density Ratio (SDDR), which we will re-derive here.

Let us consider a base model $M_1$ consisting of a set of parameters $\theta$ that we constrain with data $d$. The Bayesian evidence is
$$Z_1 = p(d|M_1) = \int p(d|\theta, M_1)\, p(\theta|M_1)\, \mathrm{d}\theta,$$
where the two distributions in the integral are the likelihood and the prior, respectively.
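As a concrete illustration, here is a minimal numerical sketch of the evidence integral for a hypothetical toy model (not from this note): a single Gaussian measurement with a Gaussian prior on the mean, where the evidence is also known in closed form and can be used as a cross-check.

```python
import numpy as np

# Hypothetical toy model: one measurement d ~ N(theta, sigma^2),
# with prior theta ~ N(0, 1). All numbers are illustrative.
d, sigma = 1.0, 1.0

def norm_pdf(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

# Evidence Z = integral of likelihood x prior over theta (grid sum)
theta = np.linspace(-10, 10, 4001)
dth = theta[1] - theta[0]
Z = np.sum(norm_pdf(d, theta, sigma) * norm_pdf(theta, 0.0, 1.0)) * dth

# For this conjugate Gaussian case the evidence is N(d; 0, sigma^2 + 1)
Z_exact = norm_pdf(d, 0.0, np.sqrt(sigma**2 + 1.0))
print(Z, Z_exact)
```

The grid sum and the closed-form result agree to high precision because the integrand is smooth and negligible at the grid boundaries.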

We now consider a second model $M_2$, which extends $M_1$ in the sense that it includes the parameters $\theta$ as well as other parameters $\psi$. Importantly, $M_2$ reduces to $M_1$ when $\psi = \psi_0$, and we say that the models are nested.

We can write the Bayesian evidence as
$$Z_2 = p(d|M_2) = \int p(d|\theta, \psi, M_2)\, p(\theta, \psi|M_2)\, \mathrm{d}\theta\, \mathrm{d}\psi,$$
where we have introduced the likelihood and the prior for $M_2$. Let's also rewrite the evidence as
$$Z_2 = \frac{\int p(d|\theta, \psi, M_2)\, p(\theta, \psi|M_2)\, \mathrm{d}\theta}{p(\psi|d, M_2)},$$
which follows from applying Bayes' theorem to the joint posterior and marginalizing over $\theta$. Here we make use of the marginal posterior distribution $p(\psi|d, M_2)$, where $\theta$ has been marginalized over, but not $\psi$. Importantly, this is valid for any value of $\psi$, and in particular $\psi = \psi_0$.

Our final ingredient is to connect the likelihoods in the two models by noting that $p(d|\theta, \psi_0, M_2) = p(d|\theta, M_1)$.

The evidence ratio between the two models (connected to the posterior model odds via a multiplicative term $p(M_1)/p(M_2)$, the prior odds) reads
$$\frac{Z_1}{Z_2} = p(\psi_0|d, M_2)\, \frac{\int p(d|\theta, M_1)\, p(\theta|M_1)\, \mathrm{d}\theta}{\int p(d|\theta, \psi_0, M_2)\, p(\theta, \psi_0|M_2)\, \mathrm{d}\theta}.$$

We immediately see that if the priors on $\theta$ and $\psi$ are independent, i.e. $p(\theta, \psi|M_2) = p(\theta|M_2)\, p(\psi|M_2)$, and $p(\theta|M_2) = p(\theta|M_1)$, which is a very common situation, the Bayes factor simplifies to
$$\frac{Z_1}{Z_2} = \frac{p(\psi_0|d, M_2)}{p(\psi_0|M_2)}.$$

In other words, the Bayes factor is the ratio between the marginalized posterior and the prior on $\psi$, both evaluated at $\psi = \psi_0$ in the extended model $M_2$. This is the famous Savage-Dickey Density Ratio (SDDR). Another pedagogical derivation can be found in this paper, and some useful remarks here.
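The identity can be checked numerically. Below is a sketch for a hypothetical pair of nested Gaussian models (all numbers illustrative): the evidence ratio computed from the two evidence integrals is compared against the posterior-to-prior ratio at the nested point.

```python
import numpy as np

# Hypothetical nested toy models (illustration only):
# data: one measurement d ~ N(theta + psi, sigma^2)
# M1: psi fixed at psi0 = 0, prior theta ~ N(0, 1)
# M2: independent priors theta ~ N(0, 1) and psi ~ N(0, 1)
d, sigma, psi0 = 1.0, 1.0, 0.0

def norm_pdf(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

theta = np.linspace(-8, 8, 2001)
psi = np.linspace(-8, 8, 2001)
dth, dps = theta[1] - theta[0], psi[1] - psi[0]

# Z1: integrate likelihood x prior over theta, with psi held at psi0
Z1 = np.sum(norm_pdf(d, theta + psi0, sigma) * norm_pdf(theta, 0.0, 1.0)) * dth

# Z2: integrate likelihood x (independent) priors over theta and psi
TH, PS = np.meshgrid(theta, psi, indexing="ij")
integrand = (norm_pdf(d, TH + PS, sigma)
             * norm_pdf(TH, 0.0, 1.0) * norm_pdf(PS, 0.0, 1.0))
Z2 = np.sum(integrand) * dth * dps

# Marginal posterior of psi in M2 (theta integrated out), at psi = psi0
post_psi = np.sum(integrand, axis=0) * dth / Z2
post_at_psi0 = post_psi[np.argmin(np.abs(psi - psi0))]

# SDDR: posterior / prior at psi0 should equal the evidence ratio Z1/Z2
sddr = post_at_psi0 / norm_pdf(psi0, 0.0, 1.0)
print(Z1 / Z2, sddr)
```

Both quantities agree, and no explicit model comparison machinery is needed: one well-sampled run of the extended model suffices to read off the Bayes factor at the nested point.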

We now go back to our original question: we add one or multiple parameters $\psi$ to a model, with prior $p(\psi|M_2)$. If those parameters come out truly unconstrained by our data, then the marginalized posterior distribution will reduce to the prior, $p(\psi|d, M_2) = p(\psi|M_2)$, regardless of $\psi_0$. Hence, the evidence ratio is equal to one, and _these parameters have no effect on our inference or interpretation of the data_.
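This limiting case can also be made concrete with a quick sketch, again for a hypothetical Gaussian toy model: when the likelihood does not depend on $\psi$ at all, the $\psi$ integral in the extended evidence factorizes and simply integrates the (normalized) prior, so the evidence ratio is one.

```python
import numpy as np

# Hypothetical toy model (illustration only): the likelihood depends
# on theta but not on psi, so psi is completely unconstrained.
d, sigma = 1.0, 1.0

def norm_pdf(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

theta = np.linspace(-10, 10, 4001)
psi = np.linspace(-10, 10, 4001)
dth, dps = theta[1] - theta[0], psi[1] - psi[0]

# Base evidence: d ~ N(theta, sigma^2), prior theta ~ N(0, 1)
Z1 = np.sum(norm_pdf(d, theta, sigma) * norm_pdf(theta, 0.0, 1.0)) * dth

# Extended evidence: the psi integral factorizes out of the likelihood
# term and just integrates the prior psi ~ N(0, 1), which sums to 1
Z2 = Z1 * np.sum(norm_pdf(psi, 0.0, 1.0)) * dps

print(Z1 / Z2)  # the unconstrained parameter leaves the evidence unchanged
```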