# Overview of Meta-Analysis, Part 5a (of 7): Primary Meta-Analyses

**Posted:** April 12, 2012

**Author:** A. R. Hafdahl

**Filed under:** Overview of Meta-Analysis

**Tags:** between-studies variance component, categorical data, conditional variance, effect size, fixed effect, heterogeneity, meta-analysis, meta-regression, moderator, multilevel model, random effect

The previous four parts of this seven-part overview of meta-analysis focused on obtaining data and preparing them for the central task addressed in this fifth part: meta-analyzing effect-size (ES) estimates, which I’ll cover in three subparts focused on **meta-analytic models** (Part 5a) and **procedures for fitting them to ESs** (Parts 5b and 5c). In the last two parts (6 and 7) I’ll address follow-up techniques to assess potential problems with these primary analyses, as well as useful ways to report these analyses’ results. (Topics for all seven parts of this overview are listed in Part 1.)

## Task 5: Fit Meta-Analytic Models to Effect Sizes

Statisticians and other methodologists have developed countless techniques for comparing and combining results across studies, especially since the mid-1970s. Even superficially covering the plethora of diverse methods proposed for these purposes would entail an extensive review far beyond the present scope. Instead, I focus on a subset of widely used meta-analytic models and accompanying procedures for estimation and inference. This will serve as a foundation for discussing—in later posts—numerous techniques that fit into this framework as well as extensions or variants that involve similar core ideas.

An aside about scope: The models considered here don’t pertain directly to certain specialized meta-analytic techniques such as combining *p* values, vote counting, and artifact adjustments (e.g., in validity generalization studies), or to graphics used commonly in meta-analyses (e.g., forest plot, funnel plot, radial plot). That said, the models I consider do share certain key ideas with those methods and others not addressed in this post. (end of aside)

More specifically, I focus on models that are appropriate when **each study contributes one estimate of an ES**, where the ES estimator’s **sampling distribution is approximately normal** with a **variance that’s essentially known**. In the three sections below I describe several such models that differ by whether and how ES parameters vary among studies; this rather long post constitutes Part 5a. In two separate posts, Parts 5b and 5c, I’ll describe—and illustrate using real-data examples—procedures for estimating these models’ (hyper)parameters and making inferences about these quantities (e.g., hypothesis tests, confidence intervals [CIs]), with an emphasis on classical/frequentist precision-weighted techniques. Also in Part 5c I’ll mention extensions and variants of these models and procedures.

When presenting each model below I comment on its **statistical story** about how the observed ES estimates arose. Each model’s story includes one or more *deterministic* components in the form of parameters or *hyperparameters*—parameters that characterize a distribution of parameters (i.e., a *hyperdistribution*)—as well as assumptions about *stochastic* components in the form of random errors. One general approach to meta-analytic data analysis entails selecting a model whose story is appropriate for our situation, fitting that model to observed ES estimates, assessing the model’s adequacy (e.g., justifiability of assumptions, match with data), and using results from an adequate model to estimate and make inferences about (hyper)parameters of interest.

Each meta-analytic model I present can be expressed usefully in two levels or stages: a **within-study model** for ES estimates’ variation among samples of subjects (Level 1), and a **between-studies model** for ES parameters’ variation among studies (Level 2). Readers acquainted with multilevel or hierarchical models may find these two-level models familiar. I present six such models, all of which share essentially the same within-study model; their between-studies models differ with respect to (hyper)parameters typically of interest to meta-analysts.

An aside about notation: In what follows I use notation for ESs described in Part 1 of this overview and that for CVs described in Part 2. I try to distinguish between random variables and their realizations, mainly for ES parameters; this complicates notation a bit but improves precision in meaning. (end of aside)

## Within-Study Model

The within-study model describes the **conditional sampling distribution of each study’s ES estimator**, given a value for the study’s ES parameter and other info about the study (e.g., sample size[s]). In terms of a linear model this model is just the ES parameter plus random error due to sampling of subjects; that is, different hypothetical samples of subjects (of the same size) would yield different ES estimates. Largely for convenience, we assume this error is normally distributed with a known CV, whose square root is the estimator’s standard error. We can write this as

*Y _{i}* = θ* _{i}* + *E _{i}* ,

where *E _{i}* ~ *N*(0, σ_{i}^{2}) and σ_{i}^{2} is known. If θ* _{i}* is a realization of the random ES parameter Θ* _{i}*—more on this distinction below—we can express this model somewhat more precisely as

*Y _{i}* = Θ* _{i}* + *E _{i}* ,

with the additional stipulation that Θ* _{i}* and *E _{i}* are independent. Equivalently, we could avoid notation for random errors by writing the ES estimator’s distribution as either

*Y _{i}* ~ *N*(θ* _{i}*, σ_{i}^{2})

when θ* _{i}* is fixed or

*Y _{i}* | θ* _{i}* ~ *N*(θ* _{i}*, σ_{i}^{2})

when Θ* _{i}* is random.^{F1} Expressed either way, this model departs from standard versions of related models, such as multilevel models, in that we observe only one realization of *Y _{i}* and the CV is known and can vary among studies.

I consider this a model for “generic ideal” ESs, whose ES estimators are truly normal with a known CV—that is, their sampling distribution conforms exactly to the model. Most realistic ESs depart at least somewhat from this model, especially with not-large samples. For instance, a sample Pearson correlation or proportion might conform well to this model only with several hundred subjects, especially when the parameter is near a boundary of its range, but a Fisher *z*-transformed correlation tends to conform better.^{F2} Also, as noted in Part 2 of this overview, for some ESs the CV depends on the unknown ES parameter (i.e., σ_{i}^{2} is a function of θ* _{i}*) and hence is subject to estimation error. Unhappily, ESs that conform better than others tend to be less familiar and harder to interpret.
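To make this within-study model concrete, here’s a minimal simulation sketch in Python (all values hypothetical and mine, not from any particular study): for a Fisher *z*-transformed correlation the CV is approximately 1/(*n* − 3), which doesn’t depend on the unknown ES parameter—one reason that ES conforms relatively well.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical study: a Fisher z-transformed sample correlation from n pairs.
# On the z scale the conditional variance (CV) is approximately 1/(n - 3),
# which is "essentially known" because it doesn't involve the ES parameter.
rho, n = 0.5, 40                    # assumed population correlation and sample size
theta = np.arctanh(rho)             # ES parameter on the Fisher z scale
sigma2 = 1.0 / (n - 3)              # known CV; its square root is the standard error

# Within-study model: Y_i = theta_i + E_i with E_i ~ N(0, sigma_i^2),
# simulated over many hypothetical samples of subjects.
reps = 100_000
y = theta + rng.normal(0.0, np.sqrt(sigma2), size=reps)

print(y.mean())                     # close to theta ~ 0.549
print(y.var())                      # close to sigma2 ~ 0.027
```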

## Between-Studies Models: No Study-Level Covariates

Whereas the within-study model expresses an ES estimator’s dependence on the ES parameter, each of the six between-studies models in this section and the next specifies **whether and how ES parameters vary among studies**. In the simplest model they don’t vary (i.e., between-studies homogeneity), and in more complex models they vary systematically, randomly, or both, possibly as a function of study-level features treated as covariates/moderators.^{F3} The three models in this section, in particular, ignore study-level covariates, whereas the three in the next section include them. My unconventional names for these models are meant to reflect their key aspects (e.g., “simple” without covariates and “moderated” with covariates).

**Simple homogeneous fixed effects (SHoFE)**. One minimal between-studies model posits a common ES parameter shared by all studies in our meta-analytic collection. That is, the studies’ ES parameters are homogeneous and don’t depend on any study-level covariates. We can write this as

θ* _{i}* = μ ,

where μ is a fixed but unknown parameter.^{F4} Plugging this expression into the within-study model yields the combined model

*Y _{i}* = μ + *E _{i}*

or, in distribution form,

*Y _{i}* ~ *N*(μ, σ_{i}^{2}) .

This model’s essential story is that each ES estimate deviates from the common ES parameter due to only that study’s random sample of subjects. In Part 5b I’ll mention techniques for estimating and making inferences about μ, this model’s only unknown parameter, as well as (hyper)parameters in subsequent models. Inferences based on this model generalize to only studies like those in our collection—that is, studies with the same constellations of features (but different samples of subjects). Some authors call this *conditional inference*. Because in many research domains studies vary at least slightly on features that influence their ES parameters, strict between-studies homogeneity is rare, so this highly constrained model is seldom defensible.
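To illustrate, here’s a minimal sketch of the SHoFE story in Python, with hypothetical values throughout: five studies share the common parameter μ, their known CVs differ with sample size, and over hypothetical replications each study’s estimates center on μ.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical SHoFE scenario: k = 5 studies share the ES parameter mu,
# but their known CVs differ because their sample sizes differ.
mu = 0.3                               # common ES parameter (unknown in practice)
n = np.array([20, 35, 50, 80, 120])    # assumed per-study sample sizes
sigma2 = 1.0 / (n - 3)                 # known CVs (e.g., on the Fisher z scale)

# Combined model: Y_i ~ N(mu, sigma_i^2). The ONLY randomness is each
# study's sampling of subjects, so over hypothetical replications every
# study's estimates center on the same mu.
reps = rng.normal(mu, np.sqrt(sigma2), size=(10_000, 5))

print(reps.mean(axis=0))               # each column's mean is close to mu = 0.3
print(reps.std(axis=0))                # each column's SD is close to sqrt(sigma2_i)
```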

**Simple heterogeneous fixed effects (SHeFE)**. Some meta-analysts and methodologists seem to view the above SHoFE model as *the* (only) fixed-effects model without covariates. A less constrained model, however, posits that each study’s ES parameter deviates from a mean ES parameter by an unknown fixed amount. Denoting this deviation for Study i as η* _{i}*, we can write this model as

θ* _{i}* = μ + η* _{i}* ,

where η* _{i}* can be any real value. In this model the ES parameters again are fixed and don’t depend on study-level covariates, but they’re permitted to vary among studies (i.e., between-studies heterogeneity). We can write the corresponding combined model as either

*Y _{i}* = μ + η* _{i}* + *E _{i}*

or

*Y _{i}* ~ *N*(μ + η* _{i}*, σ_{i}^{2}).

Its essential story is that each ES estimate deviates from the mean ES parameter due to not only the sampling of subjects but also a fixed amount, such as the influence of one or more study-level features not modeled explicitly. Crucially, because η* _{i}* is *not* random, the only source of randomness in *Y _{i}* is the sampling of subjects represented by *E _{i}*. As with the SHoFE model, this is appropriate if we view any inference regarding μ—such as a CI or test—as conditional: generalizing to only studies like those in our collection.

**Simple random effects (SRE)**. When meta-analysts suspect heterogeneity of ES parameters, they often posit a model in which deviations from the mean ES parameter are random instead of fixed. Denoting this random deviation for Study *i* as *U _{i}*, we can write this model as

Θ* _{i}* = μ + *U _{i}* ,

where the random error has mean 0 and between-studies variance component τ^{2}—that is, E(*U _{i}*) = 0 and Var(*U _{i}*) = τ^{2}. The distinction between *U _{i}* and the above SHeFE model’s fixed η* _{i}* might be clarified by considering hypothetical replications of a given meta-analysis: Under the SHeFE model Study *i*’s η* _{i}* (and hence its θ* _{i}*) is the same for every replication, because only random samples of subjects vary over replications; under the SRE model Study *i*’s realization of *U _{i}* additionally varies over replications (and hence so does its θ* _{i}* = μ + *u _{i}*). Some meta-analytic procedures further assume *U _{i}* is normally distributed, so that *U _{i}* ~ *N*(0, τ^{2}). Under this normality assumption, we could write this model equivalently without the random error as

Θ* _{i}* ~ *N*(μ, τ^{2}) .

As with the above SHoFE and SHeFE models, this model’s ES parameters don’t depend on study-level covariates. We can write the combined SRE model as

*Y _{i}* = μ + *U _{i}* + *E _{i}* ,

where *U _{i}* and *E _{i}* are independent, or, assuming normality for *U _{i}* (and using a statistical fact about compound normal-normal variables),

*Y _{i}* ~ *N*(μ, τ^{2} + σ_{i}^{2}) .

This model’s essential story is that studies’ observed ESs deviate from a mean ES parameter due to two random sources: a hyperdistribution with mean μ and variance τ^{2} represents random variation of ES parameters (e.g., due to varying combinations of unmodeled study features), and each ES estimate deviates from its ES parameter due to sampling of subjects. Meta-analytic procedures for this model typically focus on estimation and inference for the hyperparameters μ and τ^{2}.
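The SHeFE-versus-SRE distinction in hypothetical replications can be made concrete with a small simulation sketch (hypothetical values throughout): for a single study, redrawing *U _{i}* each replication inflates the variance of its estimate to τ^{2} + σ_{i}^{2}, whereas a fixed η* _{i}* leaves it at σ_{i}^{2}.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical hyperparameters and one study's known CV.
mu, tau2, sigma2 = 0.3, 0.04, 0.02

reps = 200_000                          # hypothetical meta-analysis replications

# SRE: the study's ES parameter Theta_i ~ N(mu, tau2) is redrawn each
# replication, then Y_i | theta_i ~ N(theta_i, sigma2).
theta = rng.normal(mu, np.sqrt(tau2), size=reps)
y_sre = rng.normal(theta, np.sqrt(sigma2))

# SHeFE: the study's deviation eta_i is FIXED over replications, so only
# the sampling of subjects varies.
eta = 0.1                               # fixed, unknown deviation (assumed here)
y_shefe = rng.normal(mu + eta, np.sqrt(sigma2), size=reps)

print(y_sre.var())     # close to tau2 + sigma2 = 0.06 (marginal variance)
print(y_shefe.var())   # close to sigma2 = 0.02 (subject sampling only)
```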

Some authors refer to the ES estimator’s variance in the combined SRE model, Var(*Y _{i}*) = τ^{2} + σ_{i}^{2}, as its *unconditional* or *marginal* variance to signify its incorporating both sources of random error—in contrast to the within-study model’s CV. This relates to a crucial property of this model: By treating ES parameters as random, it formally supports generalizing inferences (e.g., about μ) more broadly to a universe of studies from which those in our collection were sampled, essentially by incorporating τ^{2} into standard errors, CIs, and tests. That is, results from these inferential procedures reflect both sources of random error in hypothetical replications of the meta-analysis—sampling of ES parameters and subjects. Some authors call this *unconditional inference*.

## Between-Studies Models: One or More Study-Level Covariates

In contrast to the previous three between-studies models, the three models in this section include another source of variation: They permit a study’s ES parameter to depend on one or more study-level features treated as covariates. (Part 3 of this overview addressed study-level features as a type of ES feature.) Some authors call these *meta-regression* models. Each model below is a generalization of its counterpart among the previous three models.

By way of notation, let’s denote the number of non-intercept covariates by *q*; collect their coefficients in the (*q*+1)-element column vector **β** = [β_{0} β_{1} β_{2} ... β* _{q}*]^{T}, where β_{0} typically denotes an intercept; and collect Study *i*’s covariate values in the (*q*+1)-element row vector **x*** _{i}* = [*x*_{0i} *x*_{1i} *x*_{2i} ... *x _{qi}*], where typically *x*_{0i} = 1 for an intercept. We can use the scalar/dot product to express the linear predictor’s weighted sum compactly as

**x**_{i}**β** ≡ β_{0}*x*_{0i} + β_{1}*x*_{1i} + β_{2}*x*_{2i} + … + β* _{q}x_{qi}* .

As in (multiple) linear regression and the general linear model used widely in primary studies, **x*** _{i}* may contain continuous/quantitative covariates, coded values for categorical covariates (e.g., dummy codes, effect codes, contrasts, orthogonal polynomials), powers (e.g., *x*_{2i} = *x*_{1i}^{2}), products (e.g., for interactions), and other types of regressors. When working with categorical covariates we might prefer an equivalent parameterization of the model that depicts more explicitly each level’s ES parameter or each factor’s effect (e.g., cell-means or effects models). To avoid complications due to random or missing covariates, I assume here that any covariates in a model are fixed and observed for all studies.
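As a small illustration of this notation (with made-up covariates and coefficients), the linear predictor is just a dot product, which we can compute for all studies at once:

```python
import numpy as np

# Hypothetical covariate rows x_i for k = 4 studies: an intercept (x_0i = 1),
# a dummy code for a dichotomous feature, and a centered quantitative feature.
X = np.array([[1.0, 0.0, -5.0],
              [1.0, 0.0,  0.0],
              [1.0, 1.0,  2.0],
              [1.0, 1.0,  3.0]])

beta = np.array([0.30, 0.15, 0.01])   # assumed coefficients beta_0, beta_1, beta_2

# Linear predictor x_i * beta for every study at once via matrix-vector product:
print(X @ beta)                       # values 0.25, 0.30, 0.47, 0.48
```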

**Moderated homogeneous fixed effects (MHoFE)**. To generalize the SHoFE model above by allowing Study *i*‘s ES parameter to depend on study-level covariates, we can use the between-studies model

θ* _{i}* = **x**_{i}**β** .

As a simple example, a model with *q* = 1 covariate *x* and an intercept is just

θ* _{i}* = β_{0} + β_{1}*x _{i}* .

If this *x* represented a dichotomy, we might instead parameterize this as an ANOVA-type model with either an ES parameter for each level or the grand mean of and difference between these two ES parameters. At any rate, plugging the between-studies model into the within-study model yields the combined model

*Y _{i}* = **x**_{i}**β** + *E _{i}*

or, in distribution form,

*Y _{i}* ~ *N*(**x**_{i}**β**, σ_{i}^{2}) .

This model’s essential story is that ES parameters may vary systematically among studies due to variation in covariates, but an ES estimate deviates from its predicted/expected ES parameter (based on the covariate[s]) due to only sampling of subjects. Meta-analysts using this model typically estimate and make inferences about elements of **β**, the fixed but unknown coefficients that represent relations between ES parameters and covariates (i.e., fixed effects). As with the SHoFE model, (conditional) inferences based on this model generalize to only studies like those in our collection.

Conceptually, we can envision this model in terms of the regression line, curve, or more general surface over values of **x*** _{i}* that’s determined by **x**_{i}**β** for a collection of studies: Their ES parameters fall exactly on this surface, and variation of their ES estimates around this surface is governed by σ_{i}^{2}. So, ES parameters for studies sharing a given value of **x*** _{i}* are homogeneous; we might call this *conditional homogeneity*.
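A short simulation sketch (hypothetical values again) shows conditional homogeneity at work: ES parameters fall exactly on the regression surface, and estimates scatter around it only through σ_{i}^{2}.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical MHoFE scenario: k = 6 studies, an intercept plus one
# quantitative covariate, and equal known CVs for simplicity.
X = np.column_stack([np.ones(6), np.arange(6.0)])   # x_i = [1, x_1i]
beta = np.array([0.10, 0.05])                       # assumed coefficients
sigma2 = np.full(6, 0.02)                           # assumed known CVs

theta = X @ beta            # ES parameters sit exactly on the regression line
reps = rng.normal(theta, np.sqrt(sigma2), size=(50_000, 6))

print(theta)                # 0.10, 0.15, 0.20, 0.25, 0.30, 0.35
print(reps.mean(axis=0))    # each study's estimates average to its theta_i
```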

**Moderated heterogeneous fixed effects (MHeFE)**. Just as the above MHoFE model generalizes the SHoFE model, we can generalize the SHeFE model by adding study-level covariates. Namely, replacing the SHeFE model’s μ with the linear predictor **x**_{i}**β** yields the between-studies model

θ* _{i}* = **x**_{i}**β** + η* _{i}* .

Compared to the MHoFE model, this model permits Study *i*’s ES parameter to deviate from its linear predictor by the fixed, unknown value η* _{i}*. We can write the corresponding combined model as

*Y _{i}* = **x**_{i}**β** + η* _{i}* + *E _{i}*

or

*Y _{i}* ~ *N*(**x**_{i}**β** + η* _{i}*, σ_{i}^{2}).

Its essential story is that predicted ES parameters may vary systematically among studies due to variation in covariates, and a study’s ES estimate deviates from its covariate-predicted ES parameter due to some fixed amount in addition to sampling of subjects. Another interpretation, if we view η* _{i}* as the combined effect of excluded study-level covariates, is that ES parameters vary among studies due to both modeled and unmodeled covariates. From either perspective, the only random source of variation in *Y _{i}* is the sampling of subjects, so this model supports generalizations to only studies like those in our collection.

Conceptually, we can view this model as permitting a collection of studies’ ES parameters to deviate by fixed amounts from the regression surface (i.e., *conditional heterogeneity*). Hence, **x**_{i}**β** represents a sort of mean ES parameter for studies that share a given value of **x*** _{i}*, and both η* _{i}* and σ_{i}^{2} govern variation of ES estimates around this surface.

**Moderated random effects (MRE)**. This final model generalizes the above SRE model by adding to its random ES-parameter variation a systematic source of variation related to study-level covariates. We can write this as

Θ* _{i}* = **x**_{i}**β** + *U _{i}* ,

where E(*U _{i}*) = 0 and Var(*U _{i}*) = τ^{2}. If we further assume normality for *U _{i}*, we can instead write the model as

Θ* _{i}* ~ *N*(**x**_{i}**β**, τ^{2}) .

In contrast to the SRE model, *U _{i}* and τ^{2} now represent *residual* between-studies heterogeneity, beyond that due to the linear predictor’s covariates. Also, whereas the MHeFE model’s **x*** _{i}* and η* _{i}* are fixed quantities that remain constant over hypothetical replications of the meta-analysis, in this MRE model only **x*** _{i}* is fixed while *U _{i}* varies over replications. Some authors refer to this as a *mixed-effects* model, acknowledging both fixed and random sources of variation. We can write the combined MRE model as

*Y _{i}* = **x**_{i}**β** + *U _{i}* + *E _{i}* ,

where *U _{i}* and *E _{i}* are independent, or, assuming normality for *U _{i}*,

*Y _{i}* ~ *N*(**x**_{i}**β**, τ^{2} + σ_{i}^{2}) .

This model’s essential story is that predicted ES parameters may vary systematically among studies due to variation in covariates, and a study’s ES estimate deviates from its covariate-predicted ES parameter due to two random sources: study-level variation (e.g., due to unmodeled covariates) and sampling of subjects. Estimation and inference usually focus on the hyperparameters **β** and τ^{2}. Because this model incorporates study-level sampling via *U _{i}* and τ^{2}, it supports broader generalizations: to a universe of studies from which those in our collection were sampled.

Conceptually, we can view this model as permitting a collection of studies’ ES parameters to deviate randomly from the regression surface (i.e., conditional heterogeneity). Hence, **x**_{i}**β** represents a mean ES parameter for studies that share a given value of **x*** _{i}*, and both τ^{2} and σ_{i}^{2} govern ES estimates’ variation around this surface.
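Finally, here’s a minimal sketch of the MRE story (hypothetical values), checking the combined model’s marginal mean **x**_{i}**β** and marginal variance τ^{2} + σ_{i}^{2} by simulation.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical MRE scenario: intercept + one covariate, residual
# between-studies variance tau2, and study-specific known CVs.
X = np.column_stack([np.ones(4), np.arange(4.0)])
beta = np.array([0.20, 0.10])                  # assumed coefficients
tau2 = 0.03                                    # residual heterogeneity
sigma2 = np.array([0.05, 0.04, 0.03, 0.02])    # assumed known CVs

reps = 200_000
# Theta_i = x_i * beta + U_i with U_i ~ N(0, tau2), redrawn each replication;
# then Y_i | theta_i ~ N(theta_i, sigma_i^2).
u = rng.normal(0.0, np.sqrt(tau2), size=(reps, 4))
y = rng.normal(X @ beta + u, np.sqrt(sigma2))

print(y.mean(axis=0))   # close to X @ beta: 0.20, 0.30, 0.40, 0.50
print(y.var(axis=0))    # close to tau2 + sigma2: 0.08, 0.07, 0.06, 0.05
```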

That ends this superficial intro to conventional meta-analytic models. It’s informative to consider relations among the six models I’ve presented; for instance, the SHoFE model is a special case of the other five models, and each of the MHeFE and MRE models contains other models as special cases. In Part 5b I’ll elaborate on these nesting relations before describing how to estimate and make inferences about these models’ (hyper)parameters—μ, τ^{2}, or elements of **β**. In Part 5c I’ll mention useful extensions and variants of these models and procedures, such as for multivariate or other dependent ESs and other special types of data.

## Footnotes

**1.** The conditional notation “*Y _{i}* | θ* _{i}*” stands for “*Y _{i}* | (Θ* _{i}* = θ* _{i}*),” which represents the random variable *Y _{i}*’s conditional distribution, given that the random variable Θ* _{i}* takes the specific realization θ* _{i}*.

**2.** To express “conform better” more rigorously we could, for instance, consider properties of the ES estimator’s sampling distribution as sample size increases.

**3.** Authors variously refer to regressor variables in regression models as explanatory, independent, or predictor variables, and in meta-analysis these are often called moderator variables. I follow the convention of calling them covariates.

**4.** An even simpler model would specify this common parameter’s value, such as 0, 1/2, or 1 under a null hypothesis.

