Keywords: bayesian | MCMC | mcmc engineering | formal tests

Summary

The Geweke test takes two non-overlapping segments of a chain and compares their means to see how similar they are. The Gelman-Rubin test compares the within-chain variance to the between-chain variance.

So far, we have looked for stationarity visually. We have inspected traces to see if they look like white noise. We have made histograms over subsets of the chain, and trace plots of multiple chains with different starting points. Let us now look at some formal tests.

Geweke Test: Testing for a Difference in Means

Let $H_0$ be the null hypothesis that there is no difference in the means of the distributions of the two stationary chain segments. That is:
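In symbols, writing $\mu_A$ and $\mu_B$ for the means of the two segments (notation assumed for this sketch):

$$H_0: \mu_A - \mu_B = 0$$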

In other words, the difference of the two segment means is drawn from a zero-mean distribution.

The standard deviation of this distribution is:
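Assuming the two segments have sizes $n_A$ and $n_B$, sample variances $s_A^2$ and $s_B^2$, and are far enough apart to be treated as roughly independent, a standard form is:

$$\sigma_{\bar{\theta}_A - \bar{\theta}_B} = \sqrt{\frac{s_A^2}{n_A} + \frac{s_B^2}{n_B}}$$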

Now apply the usual rule of rejecting $H_0$ if the p-value falls below the 5% level: if $H_0$ is correct, there is only a 5% chance that the absolute value of the difference in means is larger than 2 standard deviations (1.96, to be precise). That is:
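Using the notation assumed above, the rejection rule reads:

$$\left| \frac{\bar{\theta}_A - \bar{\theta}_B}{\sqrt{\dfrac{s_A^2}{n_A} + \dfrac{s_B^2}{n_B}}} \right| > 1.96 \quad \Longrightarrow \quad \text{reject } H_0$$

As a concrete illustration, here is a minimal Python sketch of this check. The name `geweke_z` and the default segment fractions are assumptions of this sketch, and it uses plain sample variances rather than the spectral variance estimates used in the full Geweke diagnostic:

```python
import numpy as np

def geweke_z(chain, first=0.1, last=0.5):
    """Hypothetical helper: z-score comparing the mean of an early segment of a
    chain against the mean of a late segment (a sketch, not a library API)."""
    chain = np.asarray(chain, dtype=float)
    n = len(chain)
    a = chain[: int(first * n)]           # early segment (first 10% by default)
    b = chain[int((1.0 - last) * n):]     # late segment (last 50% by default)
    # standard error of the difference in segment means
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    return (a.mean() - b.mean()) / se

# usage sketch: flag possible non-convergence if |z| > 1.96
# z = geweke_z(trace_of_one_parameter)
```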

Gelman-Rubin Test

This test also uses multiple chains. It compares the between-chain variance to the within-chain variance. If these are very different, we haven't converged yet.

Let's assume that we have $m$ chains, each of length $n$. The sample variance of the $j$th chain is:
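Writing $\theta_{ij}$ for the $i$th draw of chain $j$ and $\bar{\theta}_j$ for that chain's mean (notation assumed here), this is the usual unbiased sample variance:

$$s_j^2 = \frac{1}{n-1} \sum_{i=1}^{n} \left(\theta_{ij} - \bar{\theta}_j\right)^2$$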

Let $w$ be the mean of the within-chain variances. Then:
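$$w = \frac{1}{m} \sum_{j=1}^{m} s_j^2$$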

Note that we expect the within-chain variances to all be equal asymptotically as $n \to \infty$, since we have then reached stationarity.

Let $\mu$ be the mean of the chain means:
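$$\mu = \frac{1}{m} \sum_{j=1}^{m} \bar{\theta}_j$$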

The between chain variance can then be written as:
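$$B = \frac{n}{m-1} \sum_{j=1}^{m} \left(\bar{\theta}_j - \mu\right)^2$$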

This is the variance of the chain means multiplied by the number of samples in each chain.

We use the weighted average of these two to estimate the variance of the stationary distribution:
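A commonly used weighting (the one assumed in the rest of this section) is:

$$\widehat{\operatorname{Var}}(\theta) = \left(1 - \frac{1}{n}\right) w + \frac{1}{n} B$$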

Since the starting points of our chains are likely not drawn from the stationary distribution, this overestimates the variance, but it is unbiased under stationarity. As $n \to \infty$, only the first term survives and gives us $w$.

Let's define $\hat{R}$ as the ratio of the estimated distribution variance to the asymptotic (within-chain) one:
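$$\hat{R} = \frac{\widehat{\operatorname{Var}}(\theta)}{w}$$

(Some references instead quote the square root of this ratio; either way, values near 1 indicate stationarity.)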

Stationarity would imply that this value is 1. A departure from stationarity, i.e. the overestimation, then shows up as a ratio larger than 1.
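To make this concrete, here is a minimal NumPy sketch of the formulas above. The name `gelman_rubin` and the $(m, n)$ input layout are assumptions of this sketch, not a library API:

```python
import numpy as np

def gelman_rubin(chains):
    """Hypothetical helper: Gelman-Rubin ratio from an (m, n) array of
    m chains, each of length n (a sketch of the formulas above)."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    s2 = chains.var(axis=1, ddof=1)           # per-chain sample variances s_j^2
    w = s2.mean()                              # within-chain variance w
    b = n * chain_means.var(ddof=1)            # between-chain variance B
    var_hat = (1.0 - 1.0 / n) * w + b / n      # pooled variance estimate
    return var_hat / w                         # ~1 at stationarity, > 1 otherwise

# usage sketch, with hypothetical traces of one parameter from four chains:
# chains = np.array([trace1, trace2, trace3, trace4])
# print(gelman_rubin(chains))
```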

Autocorrelation and Mixing: Effective Sample Size

As we have seen, autocorrelation and stationarity are related but not identical concepts. A large autocorrelation may arise from strong correlations between parameters (which can be measured), or from step sizes that are too small to let us explore the distribution well; in other words, our mixing is poor. Unidentifiability also typically causes large correlations (and autocorrelations), as two parameters may carry the same information.

The general observation is that problems in sampling often cause strong autocorrelation, but autocorrelation by itself does not mean that our sampling is wrong. It is, however, something we should always investigate, and we should only use our samples once we are convinced that the autocorrelation is benign.

A good measure that depends on autocorrelation is the effective sample size (ESS). With autocorrelation, the 'iid'-ness of our draws decreases, and the ESS quantifies by how much. The exact derivation involves time-series theory that we do not have the time to go into here; instead, we shall just write the result out:
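For $m$ chains of length $n$, with $\rho_t$ the autocorrelation of the chain at lag $t$, a standard statement of the result is:

$$n_{\text{eff}} = \frac{mn}{1 + 2\sum_{t=1}^{\infty} \rho_t}$$

In practice the infinite sum must be truncated. Below is a minimal single-chain sketch; `effective_sample_size` is a hypothetical helper, and it truncates at the first non-positive autocorrelation estimate, a simple rule of thumb rather than the more careful truncations used in libraries:

```python
import numpy as np

def effective_sample_size(chain, max_lag=None):
    """Hypothetical helper: ESS of a single chain via the formula above,
    truncating the lag sum once the estimated autocorrelation dies out."""
    chain = np.asarray(chain, dtype=float)
    n = len(chain)
    x = chain - chain.mean()
    # autocorrelation estimates for lags 0..n-1
    acf = np.correlate(x, x, mode="full")[n - 1:] / (np.arange(n, 0, -1) * x.var())
    if max_lag is None:
        max_lag = n - 1
    rho_sum = 0.0
    for t in range(1, max_lag):
        if acf[t] <= 0:            # stop at the first non-positive estimate
            break
        rho_sum += acf[t]
    return n / (1.0 + 2.0 * rho_sum)

# usage sketch: compare against the nominal number of draws
# print(effective_sample_size(trace_of_one_parameter), len(trace_of_one_parameter))
```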