Abstract
Multilevel linear models allow flexible statistical modelling of complex data with different levels of stratification. Identifying the most appropriate model from the large set of possible candidates is a challenging problem. In the Bayesian setting, the standard approach is a comparison of models using the model evidence or the Bayes factor. Explicit expressions for these quantities are available for the simplest linear models with unrealistic priors, but in most cases, direct computation is impossible. In practice, Markov Chain Monte Carlo approaches, such as sequential Monte Carlo, are widely used, but it is not always clear how well such techniques perform. We present a method for estimating the log model evidence, via an intermediate marginalisation over non-variance parameters. This reduces the dimensionality of any Monte Carlo sampling algorithm, which in turn yields more consistent estimates. The aim of this paper is to show how this framework fits together and works in practice, particularly on data with hierarchical structure. We illustrate this method on simulated multilevel data and on a popular dataset containing levels of radon in homes in the US state of Minnesota.
Citation: Edinburgh T, Ercole A, Eglen S (2023) Bayesian model selection for multilevel models using integrated likelihoods. PLoS ONE 18(2): e0280046. https://doi.org/10.1371/journal.pone.0280046
Editor: Alessandro Barbiero, Università degli Studi di Milano, ITALY
Received: July 26, 2022; Accepted: December 20, 2022; Published: February 15, 2023
Copyright: © 2023 Edinburgh et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All data and source files are available from the accompanying repository at Zenodo (https://doi.org/10.5281/zenodo.7314381). The working version of the repository is available at GitHub (https://github.com/tedinburgh/model-evidence-with-integrated-likelihood).
Funding: TE is funded by Engineering and Physical Sciences Research Council (EPSRC) National Productivity Investment Fund (NPIF) EP/S515334/1, reference 2089662. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. https://gow.epsrc.ukri.org/NGBOViewGrant.aspx?GrantRef=EP/S515334/1.
Competing interests: The authors have declared that no competing interests exist.
Abbreviations: ABC, Approximate Bayesian computation; AIC, Akaike information criterion; MCMC, Markov Chain Monte Carlo; SMC, Sequential Monte Carlo
Introduction
Multilevel models provide a generalisation of linear models to settings in which the model parameters (e.g. regression coefficients) are in some way stratified by groups within the population [1]. For example, individuals in the population may belong to a much smaller set of groups or clusters, and data may be available on the level of the individual and the level of the group. Such hierarchical data structures occur naturally in a wide array of scientific applications, examples of which include phylogenetics, education, healthcare and medicine [2, 3]. This concept can be arbitrarily extended to any number of groupings that exist within the population, either hierarchically or without nesting. A simple linear model that does not include the multilevel structure is generally regarded as an inferior model choice in such situations, as it neglects information inherent in the group structure. Instead, multilevel models explicitly model at each level of granularity. A wide variety of structures are possible, which raises an important question: how may we identify an optimal model structure from a number of competing hypothesised models? For example, should we include hierarchical structure, and should we prefer a multilevel model with varying intercepts or both varying slopes and intercepts? The answer to this question is generally context-specific, relating to the overarching goals of an analysis, e.g. inference or prediction, and to any prior knowledge the researcher has about the problem. In conjunction with this, there exists an array of criteria that can be used to compare the suitability of two separate models. For example, in the frequentist setting, the most widely used is the Akaike information criterion (AIC) [4], though other approaches include false-discovery rate [5] and likelihood ratio tests [6, 7].
In this work, we instead focus on Bayesian approaches to model selection, where the usual strategy is to calculate the Bayes factor of two competing models. This is defined as the ratio of the model evidence for each model, where the model evidence is the likelihood integrated over all model parameters with respect to the prior. A key advantage of using the model evidence for model comparison is that it implicitly discourages overfitting by penalising model complexity, since including additional parameters will increase the dimension of the parameter space to be integrated over. By way of contrast, the penalty on model complexity has to be artificially introduced in the AIC framework.
Direct calculation of the model evidence and Bayes factor is well-established for linear models under a normal-inverse-gamma prior (e.g. [8]), but cannot be obtained analytically for multilevel models, as the integral is intractable. As a result, the Bayes factor must be estimated, either by directly approximating the integral as a sum, for instance using importance sampling [9] or sequential Monte Carlo [10], by jointly estimating posterior probabilities of proposed models through approximate Bayesian computation (ABC) methods, or by numerical methods [11, 12]. In ABC methods, a hierarchical Markov Chain Monte Carlo (MCMC) sampling scheme alternates between two sampling steps, first across the indices denoting each model and then for the model parameters of the currently chosen model. This requires specification of prior probabilities for the individual models, in addition to priors for the parameters of each model. The relative acceptance frequencies in the chain for the model index then provide an approximation to the posterior probabilities for the models. This, alongside the given priors, allows an estimation of the Bayes factor that bypasses the need to estimate the model evidence for each model. A key challenge in such an approach, though, is to ensure sufficient mixing in the MCMC chain for the model index, since if the MCMC spends too long exploring only one model, the resulting extreme autocorrelation biases the posterior probability estimates. There are several approaches to this ABC framework, including reversible-jump MCMC [13] and product-space MCMC [14]. In contrast to this imposed hierarchical structure of models, sequential Monte Carlo (SMC) can be run separately on each model, as a by-product of the algorithm is a direct estimate of the log model evidence. This is achieved by a combination of Metropolis-Hastings and importance sampling, in which the likelihood is optimised using a simulated annealing process. Whilst these MCMC approaches are widely used, the estimates tend to suffer dramatically in high-dimensional settings, due to challenges in adequately sampling the associated complex high-dimensional parameter spaces.
This motivates the approach to the estimation of Bayes factors that we take here, using partially-integrated likelihoods instead of full likelihoods. We treat (potentially high-dimensional) non-variance parameters, such as the regression coefficients in the model, as nuisance parameters, and we analytically integrate these out with respect to conjugate Gaussian priors, since this reduces the dimension of the problem. This reduces the full likelihood on all parameters to an integrated likelihood on only variance parameters. We can then estimate the model evidence by returning to sequential Monte Carlo (or any of the aforementioned estimation methods), which yields improved results, reduces the bias and variance in estimates, and typically improves computational efficiency.
We illustrate our technique using both simulated data and the Minnesota radon contamination dataset introduced by [1]. For the former, we simulate four datasets with multilevel structure that correspond to the four models described in Methods and then estimate the model evidence for each model and each dataset. For the latter, we estimate the model evidence for various multilevel models proposed by the authors in [1]. This dataset contains measurements of the radon level in houses in the US state of Minnesota, as well as predictors at the individual house level and at the county level. The grouping of houses within counties provides an inherent hierarchical structure. As radon is a carcinogen, identifying areas with higher concentrations of radon may be an important consideration in decision-making for homeowners and county authorities. The Minnesota radon contamination dataset has been used by several software packages to illustrate multilevel modelling approaches, such as in the Python module PyMC [15, 16].
Methods
As multilevel linear models are a generalisation of linear models, we can also view a simple linear model as the single-level case within the multilevel framework. For both clarity and computational reasons, we consider linear models and multilevel linear models separately, first summarising notation and then providing the integrated likelihoods in each case, given suitable priors. Multilevel linear models are often interchangeably described as mixed models, where fixed and random effects are equivalent to the population-wide and group-specific variables. We use the vocabulary of multilevel linear models, to mirror the work of [1], as this allows for higher-level generalisation. We provide open-access code for our work at [17]. In this code, we use PyMC for SMC sampling, given the full likelihoods and the integrated likelihoods that we have derived.
Definitions and notation
We first describe a linear model, in a setting with no multilevel structure. We use 𝒟 to denote the data, which contains the observations (yi, xi), for i = 1, …, n. The independent variables xi are generally assumed to be vector-valued, with dimension d, and we denote the corresponding regression coefficient vector as β. We assume this contains an intercept term (i.e. the first element of xi is 1 for all i). We focus on the subset of generalised linear models with normal distribution and identity link, which may be considered to be the simplest case for continuous observations, yi. In a Bayesian setting, we require prior distributions for each model parameter, in this case the coefficient β and variance parameter σ2, to fully define a model. For a fixed-variance multivariate normal distribution likelihood, the conjugate prior for the mean is another multivariate normal distribution. Therefore, we choose to assign a prior of this form for β. In this section, we will not need to specify a distributional form of the prior for σ2 (though we later use an inverse-gamma prior). A linear model, denoted ℳ, is:
yi | β, σ2 ~ N(βTxi, σ2),  β ~ N(μ, Σ),  σ2 ~ π(σ2). (1)
The parameters of this linear model are denoted by θ = (βT, σ2)T. In addition to θ, we have various hyperparameters, which include μ, Σ and any belonging to the unspecified prior distribution π(σ2). Given our choice of multivariate normal prior for β, we could arbitrarily eliminate the mean μ by translating the data, yi ↦ yi − μTxi (though we would need to factor this into the interpretation of any results). However, we generally assume the data have been normalised or centred and scaled for computational reasons. MCMC sampling tends to be more efficient with such preprocessed data, because of reduced auto-correlation in sampling chains, and a translation of the data to eliminate μ may conflict with this. We assume independence of priors, so the prior for θ is the product of the individual priors for β and σ2. An alternative specification sets a normal-inverse-gamma distribution for the joint prior distribution of β and σ2, i.e. (β, σ2) ~ NIG(μ, Σ, a, b), or equivalently β | σ2 ~ N(μ, σ2Σ), σ2 ~ IG(a, b). Assuming such a relationship between β and σ2 has computational benefits, as this is conjugate for both parameters, meaning it is possible to fully integrate out over both parameters to get an analytic expression for the model evidence. However, while this prior is convenient, it is generally not useful or realistic in practice [8, 18], and the conjugate nature of the prior does not extend to multilevel models. Given a model ℳ with parameters θ, we define the following:
- Likelihood, given parameters θ: p(𝒟 | θ, ℳ)
- Prior distribution function for θ: π(θ | ℳ)
- Integrated likelihood, with integration over β: p(𝒟 | σ2, ℳ) = ∫ p(𝒟 | β, σ2, ℳ) π(β | ℳ) dβ
- Model evidence (or marginal likelihood): p(𝒟 | ℳ) = ∫ p(𝒟 | θ, ℳ) π(θ | ℳ) dθ
- Akaike information criterion: AIC = 2k − 2 log maxθ p(𝒟 | θ, ℳ), where k is the number of unconstrained parameters.
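As a point of reference for the last of these definitions, the following is a minimal sketch (not part of the accompanying code) of the AIC computation for the normal-identity linear model; the function name and the default convention for k are our own choices.

```python
import numpy as np

def gaussian_linear_aic(X, y, k=None):
    """AIC = 2k - 2 log(maximised likelihood) for the Gaussian linear model.

    Sketch only: the likelihood is maximised at the OLS estimate of beta
    and the MLE of sigma^2.  By default k counts the d regression
    coefficients (the convention used for the linear models here); pass
    k explicitly for other conventions, e.g. d + 1 to include sigma^2.
    """
    n, d = X.shape
    if k is None:
        k = d
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS estimate = MLE of beta
    resid = y - X @ beta_hat
    sigma2_hat = resid @ resid / n                    # MLE of sigma^2
    max_loglik = -0.5 * n * (np.log(2 * np.pi * sigma2_hat) + 1.0)
    return 2 * k - 2 * max_loglik
```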
We now extend this notation to a multilevel linear model. We denote the data, 𝒟, as (yij, xij, zj), for the ith observation in group j, with i = 1, …, nj, j = 1, …, J and ∑j nj = n. As before, we focus on the normal-identity case, and we assume variables at the individual-level, xij, and group-level, zj, are vector-valued, with corresponding d-dimensional individual-level and m-dimensional group-level regression coefficients, β and α respectively. The multilevel framework contains a model for each level of the data, as below:
yij | β, uj, σ2 ~ N(βTxij + uj, σ2),  uj | α, ση2 ~ N(αTzj, ση2). (2)
We could straightforwardly introduce higher-level groups in an analogous manner, though we will not elaborate on this here. It is also worth mentioning that the multilevel linear model can be rewritten as a single-level linear model with correlated errors [19]. For our purposes, it is more convenient to retain the multilevel formulation and, furthermore, to absorb the group-level variables, zj, and group-level regression coefficients, α, into their individual-level counterparts, i.e. (xijT, zjT) ↦ xijT and (βT, αT) ↦ βT. The prior for the combined regression coefficient has mean (μβT, μαT)T = μ and block diagonal covariance matrix diag(Σβ, Σα) = Σ. Instead of the group-level uj, we now model ηj as a group-level deviation from the ‘population average’, and we consider the J-dimensional vector η = (η1, …, ηJ)T as an additional nuisance variable to integrate out. We then rewrite the above model as:
yij | β, ηj, σ2 ~ N(βTxij + ηj, σ2),  ηj | ση2 ~ N(0, ση2),  β ~ N(μ, Σ). (3)
We now have model parameters θ = (βT, ηT, σ2, ση2)T, and model hyperparameters μ, Σ and those from the unspecified prior distributions π(σ2) and π(ση2). The remaining quantities introduced above are much the same, except:
- Model evidence: p(𝒟 | ℳ) = ∫ p(𝒟 | θ, ℳ) π(θ | ℳ) dθ, with integration over β, η, σ2 and ση2
- Integrated likelihood, with integration over β and η: p(𝒟 | σ2, ση2, ℳ) = ∬ p(𝒟 | β, η, σ2, ση2, ℳ) π(β | ℳ) π(η | ση2, ℳ) dβ dη
Finally, we introduce a more general multilevel model, in which any regression coefficient may also vary by group, with a higher-level model for that variability, as opposed to just an intercept term. This model is:
yij | β, ηj, σ2 ~ N(βTxij + ηjTzij, σ2),  ηj | Ση ~ N(0, Ση(ν)),  β ~ N(μ, Σ). (4)
A key distinction here, which differentiates this from a linear model, is that Ση is an inherent variable of the model, rather than a fixed Bayesian hyperparameter. Also, as Ση is a symmetric positive-definite matrix, we parameterise this through a vector ν instead of specifying the full matrix, but we do not make any assumption about the form of Ση beyond this. As an example, it could be Ση(ν) = νI, where I is the identity matrix, though this assumption of independence is restrictive. As before, we centre the group-level coefficients ηj by absorbing the ‘average’ into the β coefficient and, therefore, there can be an overlap between the variables included in zij and xij.
To implement this integrated likelihood approach under any MCMC sampling scheme, such as SMC or reversible-jump MCMC, it is useful to calculate beforehand all products and sums involving just the data that are used within the log integrated likelihoods, which are defined in Eqs 6, 7 and 8. Then, the MCMC sampling scheme samples any variance components of the model, accepting or rejecting the proposed state by evaluating the log integrated likelihood at this state. For example, in the simple linear model, first compute terms such as ∑i xixiT and ∑i xiyi, then sample σ2 and accept or reject the proposed value via the log integrated likelihood log p(𝒟 | σ2, ℳ) (Eq 6). Computations involved in the log integrated likelihood are all included in the accompanying code, and this is agnostic to the MCMC sampling scheme chosen. After completing the MCMC sampling, the model evidence can be estimated as appropriate [20–22].
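As an illustration of this workflow, the following PyMC sketch runs SMC on σ2 alone for the linear model. It is not the authors' implementation: instead of the precomputed-sums form of Eq 6, it uses the mathematically equivalent marginal form y | σ2 ~ N(Xμ, σ2I + XΣXT), and the inverse-gamma hyperparameters are illustrative.

```python
import numpy as np
import pymc as pm

def smc_with_integrated_likelihood(X, y, mu0, Sigma0, a=2.0, b=1.0, draws=2000):
    """SMC over the variance parameter only, with beta integrated out.

    Sketch under our assumptions: the integrated likelihood is expressed
    as the marginal multivariate normal y | sigma2 ~ N(X mu0,
    sigma2 I + X Sigma0 X^T), equivalent to Eq 6 but O(n^3) per
    evaluation, so only suitable for moderate n.
    """
    n = X.shape[0]
    fixed_cov = X @ Sigma0 @ X.T          # part of the covariance not depending on sigma2
    with pm.Model():
        sigma2 = pm.InverseGamma("sigma2", alpha=a, beta=b)
        pm.MvNormal("y_obs", mu=X @ mu0,
                    cov=sigma2 * np.eye(n) + fixed_cov, observed=y)
        idata = pm.sample_smc(draws=draws, progressbar=False)
    # Recent PyMC versions store the SMC estimate of the log model
    # evidence in idata.sample_stats["log_marginal_likelihood"].
    return idata
```

The dense n × n covariance makes this direct form expensive for large n, which is why the precomputed-sums expression described above is preferable in practice.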
We can compare two competing models, ℳ1 and ℳ2, for the data using the Bayes factor: B12 = p(𝒟 | ℳ1) / p(𝒟 | ℳ2). These may, for instance, contain different subsets of the independent variables or have different prior beliefs for the hyperparameters, though the data 𝒟 must remain fixed, i.e. both models contain the same n individuals. The value of the Bayes factor indicates the strength of evidence for one model over the other. Interpretation is generally provided via tables proposed by [23] or [24].
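For instance, given two estimated log model evidences (the numbers below are hypothetical), the Bayes factor and its interpretation on the scale of [24] follow directly:

```python
import numpy as np

# Hypothetical log model evidence estimates for two competing models
log_evidence_1 = -1412.3
log_evidence_2 = -1420.8

log_bf_12 = log_evidence_1 - log_evidence_2   # log Bayes factor, log B_12
print(f"B_12 = {np.exp(log_bf_12):.3g}, 2 log B_12 = {2 * log_bf_12:.1f}")
# On the scale of Kass and Raftery [24], 2 log B_12 > 10 is
# 'very strong' evidence for the first model over the second.
```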
Integrated likelihood for the linear model
The integrated likelihood for linear models can often be found in Bayesian textbooks, such as [8], though for clarity, we include this using the notation above. Writing X for the n × d matrix with rows xiT and y = (y1, …, yn)T, the integrated likelihood is:
p(𝒟 | σ2, ℳ) = ∫ (2πσ2)−n/2 exp(−(y − Xβ)T(y − Xβ)/(2σ2)) (2π)−d/2 |Σ|−1/2 exp(−(β − μ)TΣ−1(β − μ)/2) dβ
Rearranging the integrand and integrating out β, we get:
p(𝒟 | σ2, ℳ) = (2πσ2)−n/2 (|Σ̃|/|Σ|)1/2 exp(−(yTy/σ2 + μTΣ−1μ − μ̃TΣ̃−1μ̃)/2)
where we define:
Σ̃ = (Σ−1 + XTX/σ2)−1,  μ̃ = Σ̃(Σ−1μ + XTy/σ2). (5)
In practice, we work with the logarithm of the integrated likelihood, particularly for computational reasons. This is:
log p(𝒟 | σ2, ℳ) = −(n/2) log(2πσ2) + (1/2) log(|Σ̃|/|Σ|) − (1/2)(yTy/σ2 + μTΣ−1μ − μ̃TΣ̃−1μ̃). (6)
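A direct numerical check of this expression against the equivalent marginal multivariate normal form y | σ2 ~ N(Xμ, σ2I + XΣXT) can be written in a few lines of numpy and scipy; the variable names and synthetic data below are ours.

```python
import numpy as np
from scipy import stats

def log_integrated_likelihood(X, y, mu0, Sigma0, sigma2):
    """Log integrated likelihood of the linear model, beta marginalised,
    using the Eq 5 quantities (mu_tilde, Sigma_tilde).  Sketch only."""
    n, d = X.shape
    prec0 = np.linalg.inv(Sigma0)
    Sigma_t = np.linalg.inv(prec0 + X.T @ X / sigma2)      # Eq 5
    mu_t = Sigma_t @ (prec0 @ mu0 + X.T @ y / sigma2)      # Eq 5
    _, logdet0 = np.linalg.slogdet(Sigma0)
    _, logdet_t = np.linalg.slogdet(Sigma_t)
    return (-0.5 * n * np.log(2 * np.pi * sigma2)
            + 0.5 * (logdet_t - logdet0)
            - 0.5 * (y @ y / sigma2 + mu0 @ prec0 @ mu0
                     - mu_t @ np.linalg.inv(Sigma_t) @ mu_t))

# Sanity check against the marginal form y | sigma2 ~ N(X mu0, sigma2 I + X Sigma0 X^T)
rng = np.random.default_rng(0)
n, d = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, d - 1))])
beta, sigma2 = rng.normal(size=d), 0.5
y = X @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)
mu0, Sigma0 = np.zeros(d), np.eye(d)
direct = stats.multivariate_normal(X @ mu0, sigma2 * np.eye(n) + X @ Sigma0 @ X.T).logpdf(y)
assert np.isclose(log_integrated_likelihood(X, y, mu0, Sigma0, sigma2), direct)
```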
Integrated likelihood for the linear model with normal-inverse-gamma conjugate prior
For a linear model with conjugate normal-inverse-gamma prior, β | σ2 ~ N(μ, σ2Σ) and σ2 ~ IG(a, b), we define the following:
Σ̃ = (Σ−1 + XTX)−1,  μ̃ = Σ̃(Σ−1μ + XTy),  ã = a + n/2,  b̃ = b + (yTy + μTΣ−1μ − μ̃TΣ̃−1μ̃)/2
The posterior distribution for β and σ2 is then also normal-inverse-gamma, with β | σ2, 𝒟 ~ N(μ̃, σ2Σ̃) and σ2 | 𝒟 ~ IG(ã, b̃). The log integrated likelihood and full log model evidence are (see [8], but note this uses a different reparameterisation):
log p(𝒟 | σ2, ℳ) = −(n/2) log(2πσ2) + (1/2) log(|Σ̃|/|Σ|) − (b̃ − b)/σ2
log p(𝒟 | ℳ) = −(n/2) log(2π) + (1/2) log(|Σ̃|/|Σ|) + a log b − ã log b̃ + log Γ(ã) − log Γ(a)
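This analytic log model evidence is the point of comparison for the normal-inverse-gamma case in the simulation study below. A minimal numpy/scipy sketch of the standard conjugate result, under the parameterisation above (our naming):

```python
import numpy as np
from scipy.special import gammaln

def nig_log_model_evidence(X, y, mu0, Sigma0, a, b):
    """Analytic log model evidence for the linear model with a
    normal-inverse-gamma prior: beta | sigma2 ~ N(mu0, sigma2 Sigma0),
    sigma2 ~ IG(a, b).  Sketch of the standard conjugate result."""
    n, d = X.shape
    prec0 = np.linalg.inv(Sigma0)
    Sigma_t = np.linalg.inv(prec0 + X.T @ X)
    mu_t = Sigma_t @ (prec0 @ mu0 + X.T @ y)
    a_t = a + 0.5 * n
    b_t = b + 0.5 * (y @ y + mu0 @ prec0 @ mu0
                     - mu_t @ np.linalg.inv(Sigma_t) @ mu_t)
    _, logdet0 = np.linalg.slogdet(Sigma0)
    _, logdet_t = np.linalg.slogdet(Sigma_t)
    return (-0.5 * n * np.log(2 * np.pi)
            + 0.5 * (logdet_t - logdet0)
            + a * np.log(b) - a_t * np.log(b_t)
            + gammaln(a_t) - gammaln(a))
```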
Integrated likelihood for a simple multilevel linear model
For the simple multilevel linear model (Eq 3):
Note that
We first consider the integral in square brackets, completing the square in ηj in the expression:
This gives:
Then, rearranging for β as in the linear model case:
where we define:
Finally, we get the integrated likelihood for the simple multilevel linear model:
As before, a version of this integrated likelihood derivation can also be found in [8], but, in this case, it is given in simplified matrix algebra form, where the dependence on the variance parameters is left unspecified. The log integrated likelihood is:
(7)
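For moderate n, the same quantity (the integrated likelihood of Eq 3 over both β and η, whose logarithm is Eq 7) can also be evaluated through an equivalent marginal multivariate normal form; a minimal sketch, with 0-based group labels and variable names of our choosing:

```python
import numpy as np
from scipy import stats

def multilevel_integrated_loglik(X, y, groups, mu0, Sigma0, sigma2, sigma2_eta):
    """Log integrated likelihood of the simple multilevel model (Eq 3),
    with both beta and the group deviations eta marginalised.

    Sketch only: uses the equivalent marginal form
        y ~ N(X mu0, sigma2 I + sigma2_eta Z Z^T + X Sigma0 X^T),
    where Z is the n x J group-indicator matrix.  The dense covariance
    makes this suitable only for moderate n.
    """
    n = X.shape[0]
    J = int(groups.max()) + 1
    Z = np.zeros((n, J))
    Z[np.arange(n), groups] = 1.0                 # one-hot group membership
    cov = sigma2 * np.eye(n) + sigma2_eta * Z @ Z.T + X @ Sigma0 @ X.T
    return stats.multivariate_normal(X @ mu0, cov).logpdf(y)
```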
Integrated likelihood for a general multilevel linear model
In the more general case (Eq 4), the steps are almost identical:
Then:
where we now have additional definitions:
Finally, we get the log integrated likelihood for the more general multilevel linear model:
(8)
Example: Simulation study
We illustrate this approach first on simulated datasets based on the Prophet model of [25], which seeks to model a variable y as a non-linear function of time t. By specifying a suitable flexible multi-dimensional transform of t, which includes piece-wise linear and Fourier transform terms, we convert this problem to a linear model (or by extension a multilevel model). The piece-wise linear component requires pre-specified points sn, n = 1, …, d1, at which the function is continuous but not smooth, i.e. the gradient changes. The Fourier transform component requires a specified periodicity P and is truncated at 2d2 terms. The dimension of x is then d = 1 + d1 + 2d2. The basic model structure is then as follows, where E[⋅] denotes expectation:
Though this describes a non-linear relationship between y and t, the model itself is linear because it is linear in the coefficient β. For each model type we have described in previous sections (linear, simple multilevel, general multilevel and linear model with fully conjugate normal-inverse-gamma prior), we generate a simulated dataset corresponding to that model, i.e. as if this is the ‘true’ model. We then evaluate each model on all four datasets, estimating the model evidence via the integrated likelihood and the full likelihood.
To generate the data, we first simulate multilevel group structure and tij, which in turn generates covariates xij that have the form above. The covariates zij, which are those that vary by group within a general multilevel model, are defined as a (centred) subset of the xij. Both group structure and covariates, including zij, are shared across all datasets. The underlying structure is not explicitly used in the linear model or the associated linear dataset, as there is no relationship between the group membership and outcome variables. For each dataset, we sample ‘true’ model coefficients, which are then regarded as fixed, and we then compute the outcome variable, yij, as defined by the form of the corresponding model. Together with the multilevel structure and the covariates, this outcome variable forms the dataset, 𝒟. We describe the datasets in more detail below.
In all datasets, we set J = 15 and n = 1000, which are the number of groups and of observations respectively. To assign multilevel group membership within the data, we sample the integers j = 1, …, J with replacement with probability pj. In order to generate unequal group sizes, we sample pj from a Dirichlet distribution with parameter α = (2, …, J + 1), such that ∑jpj = 1 and that E[pj] = (j + 1)/J2. We also have:
- For all datasets:
The constants added to zij are such that E[zij] = (1, 0, 0). We also specify a covariance hyperparameter S for simulating ‘true’ coefficients b from a multivariate Gaussian distribution, where S is a d × d positive-definite matrix:
The elements of S1 were chosen to allow flexibility in the gradients of t in each interval [sn, sn+1], with larger values as δn represents only a change in the gradient from the interval [sn−1, sn] to the interval [sn, sn+1]. Similarly, λ was set to a small value so that the Fourier element did not dominate the piece-wise linear component.
- Linear dataset:
- Simple multilevel dataset:
- General multilevel dataset:
- Linear dataset using normal-inverse-gamma distribution:
The joint distribution for b and s2 is
, which is a conjugate prior for the Gaussian linear model, with E[β] = 0, Cov[β] = S.
As the expected value of the inverse-gamma distribution IG(a, b) is b/(a − 1), and ηj and ϵij are independent, in each case the outcome variable yij should have similar expected value and variance, if the entire data generation were repeated multiple times, i.e. Eθ[E[yij|θ]] = 0 and Eθ[var(yij − bTxij|θ)] = 0.2, where θ here indicates both covariate values and coefficients, e.g. θ = (tij, b, s2). This is important as it means the choice of model priors should contribute less to the model evidence than the model structure, when we evaluate each model against each dataset. All four simulated datasets are shown in Fig 1 and available in the accompanying repository.
Each dot represents a datapoint, with the model covariates x = g(t) a deterministic and multi-valued non-linear function of t. In addition, the line bTx is shown for all values of t ∈ [0, 1] for each dataset. For the simple multilevel dataset, the lines bTx + hj are also included, for j = 1, …, J = 15. Similarly, for the general multilevel dataset, the lines bTx + hjTz are also included, where z is also a deterministic multi-valued function of t.
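To make the group-assignment step described above concrete (J = 15 groups, Dirichlet-distributed membership probabilities), a minimal sketch follows; the random seed is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
J, n = 15, 1000

# Membership probabilities p_j drawn from a Dirichlet distribution with
# parameter alpha = (2, ..., J + 1), then group labels sampled with
# replacement, giving unequal group sizes.
alpha = np.arange(2, J + 2)
p = rng.dirichlet(alpha)
group = rng.choice(J, size=n, p=p)            # labels 0, ..., J - 1
print(np.bincount(group, minlength=J))        # realised group sizes n_j
```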
We now regard the ‘true’ coefficients, e.g. b and s, as fixed but unknown, and we specify models corresponding to each dataset. We specify priors for each model, which are similar to the distributions from which we generated the ‘true’ coefficients. In place of the covariance matrix S used to generate data, the prior for β has covariance Σ, which shares the diagonal terms of S but is zero elsewhere. The models are as follows:
- For all models:
- Linear model:
- Simple multilevel model:
- General multilevel model:
- Linear model using normal-inverse-gamma distribution prior:
For each dataset, we expect the model with the ‘true’ structure to have the largest model evidence. Table 1 shows the performance of the models on the respective datasets, illustrating how our approach not only leads to a decrease in the uncertainty of the estimated model evidence, but can also prevent model misspecification where sampling using the full likelihood is unable to do so, correctly identifying the ‘true’ model for each dataset. For the linear, simple multilevel and general multilevel models, we cannot evaluate the bias of the log model evidence estimates, as there is no direct solution to compare to. However, for the linear model with normal-inverse-gamma prior, direct computation shows that estimates using the integrated likelihood are unbiased and closer to the analytic solution than the estimates using the full likelihood. Furthermore, in every instance within this example the computational cost of running the SMC sampling algorithm on the variance parameters with the integrated likelihood was reduced compared to sampling all parameters with the full likelihood.
This was computed with sequential Monte Carlo (SMC), using the integrated likelihood and separately using the full likelihood. This was repeated for 8 random initialisations with 2000 draws at each step in SMC, and we present the mean and standard deviation of the model evidence from each run. For each approach, the model with the strongest evidence is marked with * (where rounding makes this unclear, see the repository [17] for full results). For each model, we also report the time taken to complete the sampling for all initialisations, averaged across the different datasets.
We can also compare the posterior distributions for the model coefficients with the ‘true’ coefficients. Table 2 shows the Mahalanobis distance between b and the posterior distribution for β, for each combination of dataset and model. In every case, the ‘true’ coefficient is closer to the posterior distribution from the MCMC using the integrated likelihood than it is to the posterior from the MCMC using the full likelihood. The Mahalanobis distance is defined as the following, where the posterior for β has mean μ and covariance Σ:
dM(b) = ((b − μ)TΣ−1(b − μ))1/2
This was computed with sequential Monte Carlo (SMC), separately using the integrated likelihood and the full likelihood.
For the integrated likelihoods, the posterior mean and covariance of β can be recovered by averaging μ̃ and Σ̃ (or their multilevel counterparts) over all values of σ2 (or of σ2 and ση2) in the MCMC posterior trace. For the full likelihoods, the posterior mean and covariance are computed directly as the sample mean and sample covariance from the MCMC trace for β. For example, for the linear model, the Mahalanobis distances for the integrated and full likelihoods are as follows, where μ̃ and Σ̃ are the expressions in Eq 5:
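For the full-likelihood case, the distance reported in Table 2 can be computed directly from the MCMC trace for β; a minimal sketch (names are ours):

```python
import numpy as np

def mahalanobis_to_posterior(b_true, beta_samples):
    """Mahalanobis distance between the 'true' coefficients b and the
    posterior for beta, summarised by the sample mean and covariance of
    the MCMC trace (the full-likelihood case described above).

    beta_samples: array of shape (n_draws, d).  Sketch only.
    """
    mu = beta_samples.mean(axis=0)
    cov = np.cov(beta_samples, rowvar=False)
    diff = b_true - mu
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))
```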
Example: Minnesota radon contamination
We next investigate real-world data, the Minnesota radon contamination dataset [19]. We describe various models that fit within the framework outlined above, as proposed for this dataset in [19]. We deviate from their notation (e.g. renaming coefficients) for consistency with our notation above. The hierarchical structure in this dataset is given in decreasing geographic granularity, with 919 individual measurements grouped within 85 counties, and data are available at the individual measurement-level and the county-level. The maximum number of measurements per county is 116 and the minimum is 1. The outcome variable is the radon measurement on a logarithmic scale, which has mean (standard deviation) 1.265 (0.819). Comparing across counties, the minimum and maximum values for the average log radon level were 0.410 and 2.606. The covariates considered here are an individual-level indicator variable identifying the floor the measurement was taken on (0 for basement, 1 for first floor), and county-wide uranium levels on a logarithmic scale. 83% of the measurements were taken in the basement, and the mean (standard deviation) of the log uranium levels was 0.014 (0.384), with a minimum and maximum across all counties of −0.882 and 0.528.
We denote the standardised (mean 0 and standard deviation 1) log radon measurements by yij, the indicator floor variable by tij, and the county-wide log uranium levels (also standardised) by vj. Unless otherwise specified, we include an intercept term in each model, but adjust the model matrix xij so that it becomes (1 − tij, tij, …)T. We could equivalently rewrite this as (1{tij = 0}, 1{tij = 1}, …)T, where the indicator function, 1A, is equal to 1 if condition A is True and 0 otherwise. This means that we index by tij rather than including it as a binary variable. The primary reason for this is that we then express the same prior uncertainty for measurements that come from the basement floor and from the first floor, instead of increased uncertainty when tij = 1, as discussed in [16]. In the context of linear model notation, we discard the group-level j index, so that the index i runs over all individuals, i = 1, …, n. To denote the group j ownership for a particular individual i, we instead use the index notation j[i]. For example, if individual 10 belongs to group 4, then j[10] = 4. The models suggested by Gelman and Hill include the following single-level linear models:
- Complete pooling: all counties are pooled to a single group, with a single intercept and gradient used for all counties, whilst the county-wide uranium levels are not included in the model. By ‘averaging’ the intercept term, this completely ignores any variation in the radon levels across counties. The model is:
- Complete pooling, with county-level variables: as above, but with county-wide log uranium measurements included in the model. This at least contains some county-wide information, but does not directly model at the level of counties, as in the multilevel models.
- Unpooled intercept: each county has a separate intercept term. Although the county-level data is included via indicator variables that identify group membership, there is again no explicit model at the county-level. This is referred to as no pooling in the PyMC multilevel modelling notebook [16], though the coefficient for the floor/basement indicator variable is pooled across counties. We could also include the county-wide log uranium measurements here, but this will result in a non-identifiable model with collinear predictors.
- No pooling: each county is modelled completely independently of others, with separate intercepts and gradients. This will usually overfit the data, and perform relatively poorly for counties with limited data. In practice, 25 out of 85 counties have no measurements from the first floor, and we exclude those components in the vectors β and xi. The dimension of β is then 85 + 60 = 145.
The multilevel models are the following:
- Partial pooling: county-wide variability is modelled directly as ηj, a deviation from the ‘average’ intercept. This uses the first multilevel model formulation, as described in Eq 3.
- Varying slopes and intercepts: in this model, we allow variability in both intercept and slope (i.e. the floor the measurement was taken on) across counties. This uses the more general multilevel model (described in Eq 4). We evaluate a version of this that includes an off-diagonal (correlation) term in the Ση prior.
Where complete pooling and no pooling represent two extremes in model dimension within the linear model framework, Gelman and Hill [19] describe the multilevel model as akin to partial pooling, in which there is natural shrinkage of the non-pooled parameters (e.g. those featuring the index j[i]) to the mean (the ‘average’ in the complete pooling case). This can be seen as a compromise between the two linear model extremes.
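For reference, a standard full-likelihood specification of the partial pooling model in PyMC looks as follows. This is a sketch in the spirit of the PyMC multilevel example [16], not the integrated-likelihood implementation used here; the prior scales are illustrative and the county-level uranium predictor is omitted.

```python
import pymc as pm

def partial_pooling_model(floor_idx, county_idx, log_radon, n_counties):
    """Varying-intercept (partial pooling) model for the radon data.

    floor_idx: 0/1 array indicating basement or first floor;
    county_idx: integer county labels 0, ..., n_counties - 1.
    Sketch only: full-likelihood specification with illustrative priors.
    """
    with pm.Model() as model:
        # Floor-indexed 'population average' intercepts (basement, first floor)
        beta = pm.Normal("beta", mu=0.0, sigma=1.0, shape=2)
        # County-level deviations from the average, eta_j ~ N(0, sigma_eta^2)
        sigma_eta2 = pm.InverseGamma("sigma_eta2", alpha=2.0, beta=1.0)
        eta = pm.Normal("eta", mu=0.0, sigma=pm.math.sqrt(sigma_eta2), shape=n_counties)
        # Observation noise variance sigma^2
        sigma2 = pm.InverseGamma("sigma2", alpha=2.0, beta=1.0)
        pm.Normal("y", mu=beta[floor_idx] + eta[county_idx],
                  sigma=pm.math.sqrt(sigma2), observed=log_radon)
    return model
```

The model evidence for such a specification can then be estimated with pm.sample_smc, as in the simulation example.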
In each of these models, we set a multivariate normal prior on β and an inverse-gamma prior on each univariate variance component (σ2, ση2 and the diagonal terms of Ση). In the varying slopes and intercepts model, Ση was parameterised by a vector ν, where the first two components were the diagonal terms, which had inverse-gamma priors, ρν had a truncated normal prior on the interval [−1, 1] with mean 0 and variance 1, and the non-zero off-diagonal term was ρνσν,1σν,2. The number of unconstrained model parameters, k, in linear models is equal to the number of independent variables, which is the same as the dimension of the model evidence integral. In multilevel models, integration also happens over latent variables, while k is just the number of independent variables plus the number of variance components. Table 3 compares these models, in terms of the model evidence and the AIC, where we use SMC to estimate the model evidence using the full likelihoods and the derived integrated likelihoods. S1–S6 Figs show measurements and model fits for a subset of counties.
The model evidence was computed with sequential Monte Carlo (SMC), separately using the integrated likelihood presented above and the full likelihood. This was repeated for 8 random initialisations with 2000 draws at each step in SMC, and we present the mean and standard deviation of the model evidence from each run. The table also shows the number of model parameters, k, and the ranking (where smaller is better) of the models for each approach.
Discussion
Multilevel structure within data unlocks an increasing number of modelling choices for statisticians, though this additional modelling flexibility presents a challenge in deciding what and how to model the data. We present an approach to Bayesian model selection for multilevel models that estimates the model evidence using integrated likelihoods instead of full likelihoods. We treat a subset of variables (regression coefficients with Gaussian priors) as nuisance variables that we analytically integrate out, which reduces the dimensionality of the model, as is standard in conjugate analysis. By converting the problem in this manner, we limit the impact of issues surrounding high-dimensional sampling, a key difficulty in sampling schemes for estimation of the desired quantities. As both examples show, estimates of the model evidence using the integrated likelihood are more consistent and robust. For the simulated data, this approach correctly identifies the ‘true’ model for each dataset and the ‘true’ coefficients are more closely described by the posterior distribution when using the integrated likelihood than when using the full likelihood. For a linear model with normal-inverse-gamma prior, we can also compute the log model evidence directly, and the estimates using the integrated likelihood have less bias and variance than those using the full likelihood. We believe the bias in the model evidence estimates using the full likelihood is likely shared by other models, particularly the multilevel models, because of higher dimensionality, though there is no gold standard to confirm this. These observations extend to the Minnesota radon contamination dataset, where the discrepancy between estimates and their variance is significant. The integrated likelihood is more consistent with the frequentist AIC, following a similar ranking, though this does not measure exactly the same thing. Although static SMC is asymptotically unbiased in the data size n [20], it is sometimes unclear, when dealing with high-dimensional models, what constitutes an unbiased estimate in practice when there is no analytic solution available. In high-dimensional settings, methods that directly estimate the model evidence integral may easily accumulate errors, leading to poor estimates. In Table 1, we notice some improvement in computational cost for the simulated datasets, but we believe the computational cost depends on a number of factors, such as dimensions n, d and m and the number of groups J, and so are reluctant to make a general statement about this. Sampling using a highly-nonlinear low-dimensional integrated likelihood may in some instances be more computationally challenging than using the high-dimensional product of simpler likelihoods. In the second example, both full and integrated likelihood methods were broadly similar in terms of computational cost for the linear model and simple multilevel linear model, but the integrated likelihood was more expensive for the general multilevel model, as this involved repeated computation and inversion of a large number of covariance matrices.
Bayesian model selection can be extended to a wide range of related problems fairly straightforwardly, such as variable selection and nested models (i.e. a comparison of two models where one is entirely contained within the other, as opposed to a nested structure in the data). It is worth emphasising a distinction between the model that best describes the data and the model that best achieves the research objective, which may not always coincide. For example, if the goal is to make inference on parameters associated with specific variables, then we should not exclude these variables on the basis of an evaluation of some model selection criterion. As George Box stated in one of the most well-known aphorisms in statistics [26]: ‘All models are wrong, but some are useful’. In a model selection problem, Bayesian approaches are particularly advantageous, because they factor prior uncertainty about model parameters in a way that naturally imposes a penalty on model complexity to prevent overfitting to the data. An important consideration is the choice of suitable priors for a given problem, to adequately balance previous scientific knowledge and the information that the new data provides. Informative or weakly informative priors are typically preferred to non-informative priors, which will often not be suitable if there is insufficient data available. For linear models, the Bayes factor is a monotonic function of the classical F statistic in the limit as the prior variances tend to infinity [8]. Similarly, the posterior distribution of β given σ2 also aligns with classical frequentist inference in this limit. It is worth emphasising that a posteriori maximisation of the model evidence over prior hyperparameters is generally not appropriate in the context of an inference question about a particular dataset. Some caution should be taken to avoid ‘retro-fitting’ priors based on the data, as this can be viewed as converting a priori fixed hyperparameters (part of the model definition) into tunable parameters, under which the inference question may not remain as initially intended. However, empirical Bayes [27], which performs such an optimisation on a wider dataset (for example, using radon contamination from other states to estimate sensible priors), can be viewed as a shrinkage approach and a bridge from frequentist estimation to the fully Bayesian approach. With a priori justification, it is certainly possible to compare a discrete set of models that are identical except for different prior hyperparameters. For example, two statisticians may have wildly different prior beliefs based on previous research, and therefore propose separate prior distributions, which in turn can influence the inference they make on model parameters, and we could ask whose model best describes the data. As always, prior predictive checking should be used to ensure priors give a reasonable coverage of predicted values.
We have limited this work to the simplest generalised linear models, with normal distribution and identity link (the canonical link function), choosing normal priors to mirror conjugate priors for a Gaussian likelihood. As any likelihood from the exponential family has a conjugate prior distribution, this analytic marginalisation can be similarly extended to generalised linear models under similarly chosen priors; for example, logistic regression (generalised linear model with Bernoulli distribution and logit link) with a beta conjugate prior on the Bernoulli parameter p. This allows the approach we have presented to be generalised to a much larger class of data and models.
Data and source code availability
The source code and data is available in the following repository:
- Project name: Bayesian model selection for multilevel models using integrated likelihoods
- Project home page: https://github.com/tedinburgh/model-evidence-with-integrated-likelihood
- Operating system(s): Platform independent
- Programming language: Python 3.9.12
- Other requirements: Python modules—numpy 1.21.5 or higher, pandas 1.4.2 or higher, pymc 4.1.5 or higher, arviz 0.12.1 or higher, scipy 1.7.3 or higher, argparse 1.1 or higher, statsmodels 0.13.2, palettable 3.3.0.
- License: MIT License
The current version of the repository has a permanent DOI at Zenodo [17]. The Minnesota radon dataset is contained within the module PyMC and can be opened directly from there, and the simulated datasets can be reproduced exactly when running the relevant Python script from the repository above. Additionally, all datasets are available as .csv files on the repository above.
Supporting information
S1 Fig. Model fits for a subset of counties.
Each dot represents a measurement at either basement or ground floor level. The format of the figure follows [16], with the same counties represented, though we have standardised the log radon and uranium levels, so the y-axis scale is slightly different. The model fit is from the integrated likelihood sampling, and is shown as a gradient line from basement to ground floor, with one standard deviation from the mean in dotted lines.
https://doi.org/10.1371/journal.pone.0280046.s001
(TIF)
Acknowledgments
We would like to thank Dr Torben Sell (University of Edinburgh) for his insight and advice about challenges in MCMC sampling. A CC BY or equivalent licence is applied to the AAM arising from this submission.
References
- 1. Gelman A, Hill J. Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press; 2006.
- 2. Goldstein H. Multilevel Models in Educational and Social Research. Charles Griffin & Co; Oxford University Press; 1987.
- 3. Leyland AH, Goldstein H. Multilevel Modelling of Health Statistics. Wiley series in probability and statistics. Chichester, UK: Wiley; 2001.
- 4. Akaike H. Information Theory and an Extension of the Maximum Likelihood Principle. In: Petrov BN, Csáki F, editors. 2nd International Symposium on Information Theory. Budapest, Hungary: Akadémiai Kiadó; 1973. p. 267–281.
- 5. Benjamini Y, Hochberg Y. Controlling the false discovery rate: A practical and powerful approach to multiple testing. J R Stat Soc. 1995;57(1):289–300.
- 6. Neyman J, Pearson ES, Pearson K. IX. On the problem of the most efficient tests of statistical hypotheses. Philosophical Transactions of the Royal Society of London Series A, Containing Papers of a Mathematical or Physical Character. 1933;231(694-706):289–337.
- 7. Wilks SS. The Large-Sample Distribution of the Likelihood Ratio for Testing Composite Hypotheses. The Annals of Mathematical Statistics. 1938;9(1):60–62.
- 8. O’Hagan A. Kendall’s Advanced Theory of Statistics, Vol 2B: Bayesian Inference. 2nd ed. Arnold; 2004.
- 9. Kloek T, van Dijk HK. Bayesian Estimates of Equation System Parameters: An Application of Integration by Monte Carlo. Econometrica. 1978;46(1):1–19.
- 10. Liu JS, Chen R. Sequential Monte Carlo Methods for Dynamic Systems. J Am Stat Assoc. 1998;93(443):1032–1044.
- 11. Foulley JL, San Cristobal M, Gianola D, Im S. Marginal likelihood and Bayesian approaches to the analysis of heterogeneous residual variances in mixed linear Gaussian models. Comput Stat Data Anal. 1992;13(3):291–305.
- 12. Heagerty PJ, Zeger SL. Marginalized Multilevel Models and Likelihood Inference. Stat Sci. 2000;15(1):1–19.
- 13. Green PJ. Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika. 1995;82(4):711–732.
- 14. Carlin BP, Chib S. Bayesian Model Choice via Markov Chain Monte Carlo Methods. J R Stat Soc Series B Stat Methodol. 1995;57(3):473–484.
- 15. Salvatier J, Wiecki TV, Fonnesbeck C. Probabilistic programming in Python using PyMC3. PeerJ Computer Science. 2016;2:e55.
- 16. Salvatier J, Wiecki TV, Fonnesbeck C. A Primer on Bayesian Methods for Multilevel Modeling; 2020. https://docs.pymc.io/en/v3/pymc-examples/examples/case_studies/multilevel_modeling.html.
- 17. Edinburgh T, Ercole A, Eglen SJ. Source code for “Bayesian model selection for multilevel models using integrated likelihoods”; 2022. https://doi.org/10.5281/zenodo.7444054.
- 18. Barthelmé S. Priors of convenience; 2012. https://dahtah.wordpress.com/2012/08/22/priors-of-convenience/.
- 19. Gelman A. Multilevel (Hierarchical) Modeling: What It Can and Cannot Do. Technometrics. 2006;48(3):432–435.
- 20. Kantas N, Doucet A, Singh SS, Maciejowski JM. An Overview of Sequential Monte Carlo Methods for Parameter Estimation in General State-Space Models. IFAC Proceedings Volumes. 2009;42(10):774–785.
- 21. Chib S. Marginal Likelihood from the Gibbs Output. J Am Stat Assoc. 1995;90(432):1313–1321.
- 22. Chib S, Jeliazkov I. Marginal likelihood from the metropolis–Hastings output. J Am Stat Assoc. 2001;96(453):270–281.
- 23. Jeffreys H. The Theory of Probability. OUP Oxford; 1998.
- 24. Kass RE, Raftery AE. Bayes Factors. J Am Stat Assoc. 1995;90(430):773–795.
- 25. Taylor SJ, Letham B. Forecasting at Scale. Am Stat. 2018;72(1):37–45.
- 26. Box GEP. Science and Statistics. J Am Stat Assoc. 1976;71(356):791–799.
- 27. Casella G. An Introduction to Empirical Bayes Data Analysis. Am Stat. 1985;39(2):83–87.