Overview of Meta-Analysis, Part 5a (of 7): Primary Meta-Analyses
Posted: April 12, 2012
The previous four parts of this seven-part overview of meta-analysis focused on obtaining data and preparing them for the central task addressed in this fifth part: meta-analyzing effect-size (ES) estimates, which I’ll cover in three subparts focused on meta-analytic models (Part 5a) and procedures for fitting them to ESs (Parts 5b and 5c). In the last two parts (6 and 7) I’ll address follow-up techniques to assess potential problems with these primary analyses, as well as useful ways to report these analyses’ results. (Topics for all seven parts of this overview are listed in Part 1.)
Task 5: Fit Meta-Analytic Models to Effect Sizes
Statisticians and other methodologists have developed countless techniques for comparing and combining results across studies, especially since the mid-1970s. Even superficially covering the plethora of diverse methods proposed for these purposes would entail an extensive review far beyond the present scope. Instead, I focus on a subset of widely used meta-analytic models and accompanying procedures for estimation and inference. This will serve as a foundation for discussing—in later posts—numerous techniques that fit into this framework as well as extensions or variants that involve similar core ideas.
An aside about scope: The models considered here don’t pertain directly to certain specialized meta-analytic techniques such as combining p values, vote counting, and artifact adjustments (e.g., in validity generalization studies), or to graphics used commonly in meta-analyses (e.g., forest plot, funnel plot, radial plot). That said, the models I consider do share certain key ideas with those methods and others not addressed in this post. (end of aside)
More specifically, I focus on models that are appropriate when each study contributes one estimate of an ES, where the ES estimator’s sampling distribution is approximately normal with a variance that’s essentially known. In the three sections below I describe several such models that differ by whether and how ES parameters vary among studies; this rather long post constitutes Part 5a. In two separate posts, Parts 5b and 5c, I’ll describe—and illustrate using real-data examples—procedures for estimating these models’ (hyper)parameters and making inferences about these quantities (e.g., hypothesis tests, confidence intervals [CIs]), with an emphasis on classical/frequentist precision-weighted techniques. Also in Part 5c I’ll mention extensions and variants of these models and procedures.
When presenting each model below I comment on its statistical story about how the observed ES estimates arose. Each model’s story includes one or more deterministic components in the form of parameters or hyperparameters—parameters that characterize a distribution of parameters (i.e., a hyperdistribution)—as well as assumptions about stochastic components in the form of random errors. One general approach to meta-analytic data analysis entails selecting a model whose story is appropriate for our situation, fitting that model to observed ES estimates, assessing the model’s adequacy (e.g., justifiability of assumptions, match with data), and using results from an adequate model to estimate and make inferences about (hyper)parameters of interest.
Each meta-analytic model I present can be expressed usefully in two levels or stages: a within-study model for ES estimates’ variation among samples of subjects (Level 1), and a between-studies model for ES parameters’ variation among studies (Level 2). Readers acquainted with multilevel or hierarchical models may find these two-level models familiar. I present six such models, all of which share essentially the same within-study model; their between-studies models differ with respect to (hyper)parameters typically of interest to meta-analysts.
An aside about notation: In what follows I use notation for ESs described in Part 1 of this overview and that for CVs described in Part 2. I try to distinguish between random variables and their realizations, mainly for ES parameters; this complicates notation a bit but improves precision in meaning. (end of aside)
The within-study model describes the conditional sampling distribution of each study’s ES estimator, given a value for the study’s ES parameter and other info about the study (e.g., sample size[s]). In terms of a linear model this model is just the ES parameter plus random error due to sampling of subjects; that is, different hypothetical samples of subjects (of the same size) would yield different ES estimates. Largely for convenience, we assume this error is normally distributed with a known CV, whose square root is the estimator’s standard error. We can write this as
Yi = θi + Ei ,
where Ei ~ N(0, σi2) and σi2 is known. If θi is a realization of the random ES parameter Θi—more on this distinction below—we can express this model somewhat more precisely as
Yi = Θi + Ei ,
with the additional stipulation that Θi and Ei are independent. Equivalently, we could avoid notation for random errors by writing the ES estimator’s distribution as either
Yi ~ N(θi, σi2)
when θi is fixed or
Yi | θi ~ N(θi, σi2)
when Θi is random.F1 Expressed either way, this model departs from standard versions of related models, such as multilevel models, in that we observe only one realization of Yi and the CV is known and can vary among studies.
I consider this a model for “generic ideal” ESs, whose ES estimators are truly normal with a known CV—that is, their sampling distribution conforms exactly to the model. Most realistic ESs depart at least somewhat from this model, especially with not-large samples. For instance, a sample Pearson correlation or proportion might conform well to this model only with several hundred subjects, especially when the parameter is near a boundary of its range, but a Fisher z-transformed correlation tends to conform better.F2 Also, as noted in Part 2 of this overview, for some ESs the CV depends on the unknown ES parameter (i.e., σi2 is a function of θi) and hence is subject to estimation error. Unhappily, ESs that conform better than others tend to be less familiar and harder to interpret.
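To make the “conforms better” claim concrete, here’s a small simulation sketch (not from the original post; all numbers are invented) comparing raw sample correlations to their Fisher z transforms. It uses the standard result that z = atanh(r) is approximately normal with variance about 1/(n − 3):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical check of the within-study model for one ES type: draw many
# samples from a bivariate normal with correlation rho, then compare raw r
# and Fisher z = atanh(r) to their normal approximations.
rho, n, reps = 0.8, 50, 20_000
cov = np.array([[1.0, rho], [rho, 1.0]])

x = rng.multivariate_normal([0.0, 0.0], cov, size=(reps, n))  # (reps, n, 2)
xm = x - x.mean(axis=1, keepdims=True)                        # center each sample
num = (xm[..., 0] * xm[..., 1]).sum(axis=1)
den = np.sqrt((xm[..., 0] ** 2).sum(axis=1) * (xm[..., 1] ** 2).sum(axis=1))
r = num / den                                                 # sample correlations

z = np.arctanh(r)                    # Fisher z-transformed correlations
print(np.var(r))                     # skewed; variance depends on rho
print(np.var(z), 1.0 / (n - 3))      # close to the known 1/(n - 3)
```

With rho near a boundary the distribution of r is visibly skewed, while z’s simulated variance sits close to the known 1/(n − 3), which is why the transformed scale suits the within-study model better.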
Between-Studies Models: No Study-Level Covariates
Whereas the within-study model expresses an ES estimator’s dependence on the ES parameter, each of the six between-studies models in this section and the next specifies whether and how ES parameters vary among studies. In the simplest model they don’t vary (i.e., between-studies homogeneity), and in more complex models they vary systematically, randomly, or both, possibly as a function of study-level features treated as covariates/moderators.F3 The three models in this section, in particular, ignore study-level covariates, whereas the three in the next section include them. My unconventional names for these models are meant to reflect their key aspects (e.g., “simple” without covariates and “moderated” with covariates).
Simple homogeneous fixed effects (SHoFE). One minimal between-studies model posits a common ES parameter shared by all studies in our meta-analytic collection. That is, the studies’ ES parameters are homogeneous and don’t depend on any study-level covariates. We can write this as
θi = μ ,
where μ is a fixed but unknown parameter.F4 Plugging this expression into the within-study model yields the combined model
Yi = μ + Ei
or, in distribution form,
Yi ~ N(μ, σi2) .
This model’s essential story is that each ES estimate deviates from the common ES parameter due to only that study’s random sample of subjects. In Part 5b I’ll mention techniques for estimating and making inferences about μ, this model’s only unknown parameter, as well as (hyper)parameters in subsequent models. Inferences based on this model generalize to only studies like those in our collection—that is, studies with the same constellations of features (but different samples of subjects). Some authors call this conditional inference. Because in many research domains studies vary at least slightly on features that influence their ES parameters, strict between-studies homogeneity is rare, so this highly constrained model is seldom defensible.
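The SHoFE story can be sketched in a few lines of Python (invented numbers throughout): a single parameter μ, plus study-specific sampling error with known variances, generates every estimate.

```python
import numpy as np

rng = np.random.default_rng(1)

# A minimal sketch of the SHoFE story (hypothetical values): k = 5 studies
# share one ES parameter mu; each estimate Yi differs from mu only through
# within-study sampling error with known variance v[i].
mu = 0.4
v = np.array([0.05, 0.02, 0.10, 0.03, 0.08])   # known sampling variances

# Many hypothetical replications of the five studies, same mu every time:
reps = 50_000
Y = mu + rng.normal(0.0, np.sqrt(v), size=(reps, v.size))  # Yi ~ N(mu, v[i])
print(Y.mean(axis=0))   # each column centers on mu
print(Y.var(axis=0))    # each column's variance matches its known v[i]
```

The replication dimension makes the conditional-inference point visible: across replications only the subject sampling varies, never the parameter.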
Simple heterogeneous fixed effects (SHeFE). Some meta-analysts and methodologists seem to view the above SHoFE model as the (only) fixed-effects model without covariates. A less constrained model, however, posits that each study’s ES parameter deviates from a mean ES parameter by an unknown fixed amount. Denoting this deviation for Study i as ηi, we can write this model as
θi = μ + ηi ,
where ηi can be any real value. In this model the ES parameters again are fixed and don’t depend on study-level covariates, but they’re permitted to vary among studies (i.e., between-studies heterogeneity). We can write the corresponding combined model as either
Yi = μ + ηi + Ei
or, in distribution form,
Yi ~ N(μ + ηi, σi2) .
Its essential story is that each ES estimate deviates from the mean ES parameter due not only to the sampling of subjects but also to a fixed amount, such as the influence of one or more study-level features not modeled explicitly. Crucially, because ηi is not random, the only source of randomness in Yi is the sampling of subjects represented by Ei. As with the SHoFE model, this is appropriate if we view any inference regarding μ—such as a CI or test—as conditional: generalizing to only studies like those in our collection.
Simple random effects (SRE). When meta-analysts suspect heterogeneity of ES parameters, they often posit a model in which deviations from the mean ES parameter are random instead of fixed. Denoting this random deviation for Study i as Ui, we can write this model as
Θi = μ + Ui ,
where the random error has mean 0 and between-studies variance component τ2—that is, E(Ui) = 0 and Var(Ui) = τ2. The distinction between Ui and the above SHeFE model’s fixed ηi might be clarified by considering hypothetical replications of a given meta-analysis: Under the SHeFE model Study i’s ηi (and hence its θi) is the same for every replication, because only random samples of subjects vary over replications; under the SRE model Study i’s realization of Ui additionally varies over replications (and hence so does its θi = μ + ui). Some meta-analytic procedures further assume Ui is normally distributed, so that Ui ~ N(0, τ2). Under this normality assumption, we could write this model equivalently without the random error as
Θi ~ N(μ, τ2) .
As with the above SHoFE and SHeFE models, this model’s ES parameters don’t depend on study-level covariates. We can write the combined SRE model as
Yi = μ + Ui + Ei ,
where Ui and Ei are independent, or, assuming normality for Ui (and using a statistical fact about compound normal-normal variables),
Yi ~ N(μ, τ2 + σi2) .
This model’s essential story is that studies’ observed ESs deviate from a mean ES parameter due to two random sources: a hyperdistribution with mean μ and variance τ2 represents random variation of ES parameters (e.g., due to varying combinations of unmodeled study features), and each ES estimate deviates from its ES parameter due to sampling of subjects. Meta-analytic procedures for this model typically focus on estimation and inference for the hyperparameters μ and τ2.
Some authors refer to the ES estimator’s variance in the combined SRE model, Var(Yi) = τ2 + σi2, as its unconditional or marginal variance to signify its incorporating both sources of random error—in contrast to the within-study model’s CV. This relates to a crucial property of this model: By treating ES parameters as random, it formally supports generalizing inferences (e.g., about μ) more broadly to a universe of studies from which those in our collection were sampled, essentially by incorporating τ2 into standard errors, CIs, and tests. That is, results from these inferential procedures reflect both sources of random error in hypothetical replications of the meta-analysis—sampling of ES parameters and subjects. Some authors call this unconditional inference.
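The compound normal-normal fact behind the combined SRE model is easy to verify numerically. This sketch (hypothetical values) draws ES parameters from the hyperdistribution and then estimates given those parameters, and checks that the marginal variance is τ2 + σi2:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical check of the SRE model's marginal distribution:
# if Theta_i ~ N(mu, tau2) and Yi | theta_i ~ N(theta_i, vi),
# then marginally Yi ~ N(mu, tau2 + vi).
mu, tau2, vi, reps = 0.3, 0.04, 0.02, 200_000
theta = rng.normal(mu, np.sqrt(tau2), size=reps)  # random ES parameters
y = rng.normal(theta, np.sqrt(vi))                # ES estimates given theta
print(y.mean())   # ~ mu
print(y.var())    # ~ tau2 + vi = 0.06, the unconditional/marginal variance
```

The simulated variance exceeds the within-study CV alone, which is exactly the extra uncertainty that unconditional inference folds into standard errors, CIs, and tests.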
Between-Studies Models: One or More Study-Level Covariates
In contrast to the previous three between-studies models, the three models in this section include another source of variation: They permit a study’s ES parameter to depend on one or more study-level features treated as covariates. (Part 3 of this overview addressed study-level features as a type of ES feature.) Some authors call these meta-regression models. Each model below is a generalization of its counterpart among the previous three models.
By way of notation, let’s denote the number of non-intercept covariates by q; collect their coefficients in the (q+1)-element column vector β = [β0 β1 β2 ... βq]T, where β0 typically denotes an intercept; and collect Study i’s covariate values in the (q+1)-element row vector xi = [x0i x1i x2i ... xqi], where typically x0i = 1 for an intercept. We can use the scalar/dot product to express the linear predictor’s weighted sum compactly as
xiβ ≡ β0x0i + β1x1i + β2x2i + … + βqxqi .
As in (multiple) linear regression and the general linear model used widely in primary studies, xi may contain continuous/quantitative covariates, coded values for categorical covariates (e.g., dummy codes, effect codes, contrasts, orthogonal polynomials), powers (e.g., x2i = x1i2), products (e.g., for interactions), and other types of regressors. When working with categorical covariates we might prefer an equivalent parameterization of the model that depicts more explicitly each level’s ES parameter or each factor’s effect (e.g., cell-means or effects models). To avoid complications due to random or missing covariates, I assume here that any covariates in a model are fixed and observed for all studies.
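The linear-predictor notation above amounts to an ordinary dot product, and stacking the studies’ rows gives the familiar design matrix. A brief illustration with made-up coefficient and covariate values:

```python
import numpy as np

# Illustration of the linear-predictor notation (made-up values): Study i's
# covariate row x_i times the coefficient vector beta gives the scalar
# x_i @ beta = b0*x0i + b1*x1i + ... + bq*xqi.
beta = np.array([0.2, 0.5, -0.1])    # [b0, b1, b2], so q = 2 covariates
xi = np.array([1.0, 3.0, 2.0])       # x0i = 1 for the intercept
print(xi @ beta)                     # 0.2 + 0.5*3 - 0.1*2 = 1.5

# Stacking all k rows gives the usual k x (q+1) design matrix X, so X @ beta
# yields every study's predicted ES parameter at once.
X = np.array([[1.0, 3.0, 2.0],
              [1.0, 1.0, 0.0],
              [1.0, 2.0, 4.0]])
print(X @ beta)                      # [1.5, 0.7, 0.8]
```

The second column of X here could just as well hold dummy codes, powers, or products; nothing in the algebra changes.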
Moderated homogeneous fixed effects (MHoFE). To generalize the SHoFE model above by allowing Study i‘s ES parameter to depend on study-level covariates, we can use the between-studies model
θi = xiβ .
As a simple example, a model with q = 1 covariate x and an intercept is just
θi = β0 + β1xi .
If this x represented a dichotomy, we might instead parameterize this as an ANOVA-type model with either an ES parameter for each level or the grand mean of and difference between these two ES parameters. At any rate, plugging the between-studies model into the within-study model yields the combined model
Yi = xiβ + Ei
or, in distribution form,
Yi ~ N(xiβ, σi2) .
This model’s essential story is that ES parameters may vary systematically among studies due to variation in covariates, but an ES estimate deviates from its predicted/expected ES parameter (based on the covariate[s]) due to only sampling of subjects. Meta-analysts using this model typically estimate and make inferences about elements of β, the fixed but unknown coefficients that represent relations between ES parameters and covariates (i.e., fixed effects). As with the SHoFE model, (conditional) inferences based on this model generalize to only studies like those in our collection.
Conceptually, we can envision this model in terms of the regression line, curve, or more general surface over values of xi that’s determined by xiβ for a collection of studies: Their ES parameters fall exactly on this surface, and variation of their ES estimates around this surface is governed by σi2. So, ES parameters for studies sharing a given value of xi are homogeneous; we might call this conditional homogeneity.
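Conditional homogeneity can be sketched directly (all values invented): every ES parameter sits exactly on the regression line, and only sampling error scatters the estimates around it.

```python
import numpy as np

rng = np.random.default_rng(3)

# A sketch of the MHoFE story with one made-up covariate: every study's ES
# parameter lies exactly on the line b0 + b1*x (conditional homogeneity);
# estimates scatter around the line only via known sampling error.
b0, b1 = 0.1, 0.3
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])        # study-level covariate values
v = np.array([0.04, 0.02, 0.05, 0.03, 0.02])   # known sampling variances
theta = b0 + b1 * x                            # fixed ES parameters on the line
y = rng.normal(theta, np.sqrt(v))              # observed ES estimates
print(theta)   # exactly on the regression surface
print(y)       # theta plus sampling error governed by v[i]
```

Studies sharing a covariate value would share a θ exactly; that is the homogeneity this model asserts after conditioning on xi.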
Moderated heterogeneous fixed effects (MHeFE). Just as the above MHoFE model generalizes the SHoFE model, we can generalize the SHeFE model by adding study-level covariates. Namely, replacing the SHeFE model’s μ with the linear predictor xiβ yields the between-studies model
θi = xiβ + ηi .
Compared to the MHoFE model, this model permits Study i‘s ES parameter to deviate from its linear predictor by the fixed, unknown value ηi. We can write the corresponding combined model as
Yi = xiβ + ηi + Ei
or, in distribution form,
Yi ~ N(xiβ + ηi, σi2).
Its essential story is that predicted ES parameters may vary systematically among studies due to variation in covariates, and a study’s ES estimate deviates from its covariate-predicted ES parameter due to some fixed amount in addition to sampling of subjects. Another interpretation, if we view ηi as the combined effect of excluded study-level covariates, is that ES parameters vary among studies due to both modeled and unmodeled covariates. From either perspective, the only random source of variation in Yi is the sampling of subjects, so this model supports generalizations to only studies like those in our collection.
Conceptually, we can view this model as permitting a collection of studies’ ES parameters to deviate by fixed amounts from the regression surface (i.e., conditional heterogeneity). Hence, xiβ represents a sort of mean ES parameter for studies that share a given value of xi, and both ηi and σi2 govern variation of ES estimates around this surface.
Moderated random effects (MRE). This final model generalizes the above SRE model by adding to its random ES-parameter variation a systematic source of variation related to study-level covariates. We can write this as
Θi = xiβ + Ui ,
where E(Ui) = 0 and Var(Ui) = τ2. If we further assume normality for Ui, we can instead write the model as
Θi ~ N(xiβ, τ2) .
In contrast to the SRE model, Ui and τ2 now represent residual between-studies heterogeneity, beyond that due to the linear predictor’s covariates. Also, whereas the MHeFE model’s xi and ηi are fixed quantities that remain constant over hypothetical replications of the meta-analysis, in this MRE model only xi is fixed while Ui varies over replications. Some authors refer to this as a mixed-effects model, acknowledging both fixed and random sources of variation. We can write the combined MRE model as
Yi = xiβ + Ui + Ei ,
where Ui and Ei are independent, or, assuming normality for Ui,
Yi ~ N(xiβ, τ2 + σi2) .
This model’s essential story is that predicted ES parameters may vary systematically among studies due to variation in covariates, and a study’s ES estimate deviates from its covariate-predicted ES parameter due to two random sources: study-level variation (e.g., due to unmodeled covariates) and sampling of subjects. Estimation and inference usually focus on the hyperparameters β and τ2. Because this model incorporates study-level sampling via Ui and τ2, it supports broader generalizations: to a universe of studies from which those in our collection were sampled.
Conceptually, we can view this model as permitting a collection of studies’ ES parameters to deviate randomly from the regression surface (i.e., conditional heterogeneity). Hence, xiβ represents a mean ES parameter for studies that share a given value of xi, and both τ2 and σi2 govern ES estimates’ variation around this surface.
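As a last sketch (invented values again), the MRE story layers both random sources over the regression surface: parameters deviate randomly from xiβ with variance τ2, and estimates add known sampling error on top.

```python
import numpy as np

rng = np.random.default_rng(4)

# A sketch of the MRE (mixed-effects) story: each study's ES parameter
# deviates randomly (variance tau2) from the regression line X @ beta;
# its estimate then adds known within-study sampling error.
beta = np.array([0.1, 0.3])                       # [intercept, slope]
X = np.column_stack([np.ones(6), np.arange(6.0)]) # design matrix, k = 6
v = np.full(6, 0.02)                              # known sampling variances
tau2 = 0.05                                       # residual heterogeneity
theta = X @ beta + rng.normal(0.0, np.sqrt(tau2), size=6)  # random ES params
y = rng.normal(theta, np.sqrt(v))                 # observed ES estimates
print(X @ beta)   # predicted ES parameters on the surface
print(y)          # marginally, y[i] ~ N((X @ beta)[i], tau2 + v[i])
```

Re-running with a fresh seed redraws both the Ui and the sampling errors, which mirrors the hypothetical replications under which this model’s unconditional inferences are framed.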
That ends this superficial intro to conventional meta-analytic models. It’s informative to consider relations among the six models I’ve presented; for instance, the SHoFE model is a special case of the other five models, and each of the MHeFE and MRE models contains other models as special cases. In Part 5b I’ll elaborate on these nesting relations before describing how to estimate and make inferences about these models’ (hyper)parameters—μ, τ2, or elements of β. In Part 5c I’ll mention useful extensions and variants of these models and procedures, such as for multivariate or other dependent ESs and other special types of data.
1. The conditional notation “Yi | θi” stands for “Yi | (Θi = θi),” which represents the random variable Yi’s conditional distribution, given that the random variable Θi takes the specific realization θi.
3. Authors variously refer to regressor variables in regression models as explanatory, independent, or predictor variables, and in meta-analysis these are often called moderator variables. I follow the convention of calling them covariates.