Wednesday, June 1, 2016

Alternatives to mediation (in data analysis)

The following is not vetted. It consists of some thoughts inspired by several papers I have dealt with recently. It is also about statistics, a new topic for this blog, but one I will probably write more about.

In some studies we measure several variables, and we are primarily interested in the correlation between two of them, e.g., cognitive style and political ideology. When this correlation is found, we are also interested in what the other variables can tell us about why it arises. For example, we might be interested in variables like religiosity or education. Cognitive style might, for example, affect religiosity, and religiosity, in turn, could affect social conservatism, which (in some countries) is shaped by religious teaching. Let's call the target variable, political ideology, Y; the main predictor, cognitive style, X; and the other variable M (for "mediator" or "middle"). Assume that X and Y are correlated, and this correlation is what we are trying to explain.

The logic of classical mediation is best understood in terms of simple and partial correlations. For this purpose, partial correlations are equivalent to regressions, since the significance test is the same. If the dependent variable is Y, the predictor is X, and the mediator is M, we need to show that r(X,M) and r(Y,M|X) are both significant. The second is a partial correlation (or regression coefficient, or semi-partial). The first is consistent with the claim that X affects M. The second is consistent with the claim that M affects Y and that this effect is not the result of the correlation of M with X.
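As a sketch, here is how these two quantities can be computed on simulated data. The causal chain X → M → Y and all the numbers below are hypothetical, chosen only to illustrate the calculation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data in which X -> M -> Y actually holds.
n = 5000
X = rng.normal(size=n)
M = 0.6 * X + rng.normal(size=n)
Y = 0.5 * M + rng.normal(size=n)

def partial_corr(a, b, control):
    """Correlation of a and b after regressing `control` out of each."""
    Z = np.column_stack([np.ones_like(control), control])
    res_a = a - Z @ np.linalg.lstsq(Z, a, rcond=None)[0]
    res_b = b - Z @ np.linalg.lstsq(Z, b, rcond=None)[0]
    return np.corrcoef(res_a, res_b)[0, 1]

r_xm = np.corrcoef(X, M)[0, 1]          # simple correlation r(X,M)
r_ym_given_x = partial_corr(Y, M, X)    # partial correlation r(Y,M|X)
print(r_xm, r_ym_given_x)
```

With the chain as simulated, both r(X,M) and r(Y,M|X) come out clearly positive, which is the pattern a classical mediation test looks for.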

Mediation tests are most useful when X is an experimental manipulation. Even then, we may worry about r(Y,M|X) being an artifact. It could be that the causality is an effect of Y on M rather than an effect of M on Y. Or, Y and M could both be affected by some unmeasured fourth variable. We could avoid these problems by experimentally manipulating both X and M. Even then, one might argue that the experimental manipulation of M affects something different from the M that varies spontaneously in the population or the M that is affected by Y.

More generally, in many tests of mediation, almost anything could cause anything else. Moreover, if X, M, and Y are all influenced by roughly the same causal factors, then M will "mediate" the "effect" of X on Y, or the "effect" of Y on X, if M is just the variable that is most highly correlated with these underlying causes. I have never seen a mediation analysis that attempts to correct for the extent to which the different variables correlate with the causal factors that each is supposed to be sensitive to. This sort of validity coefficient surely affects what counts as a significant mediator and what does not.
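To illustrate the point, here is a small simulation (an assumed setup, not real data) in which X, M, and Y have no effects on one another at all: each is just a measure of the same underlying factor Z, with M the most reliable measure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: X, M, and Y do not cause one another.
# All three just measure a common factor Z, with different reliabilities.
n = 5000
Z = rng.normal(size=n)
X = 0.5 * Z + rng.normal(size=n)
M = 0.9 * Z + rng.normal(size=n)   # M is the best measure of Z
Y = 0.5 * Z + rng.normal(size=n)

def coefs(y, *preds):
    """OLS slopes of y on the given predictors (intercept dropped)."""
    P = np.column_stack([np.ones(n), *preds])
    return np.linalg.lstsq(P, y, rcond=None)[0][1:]

b_x_alone, = coefs(Y, X)        # Y regressed on X alone
b_x, b_m = coefs(Y, X, M)       # Y regressed on X and M together
print(b_x_alone, b_x, b_m)
```

The X coefficient shrinks when M is added and M gets a substantial coefficient of its own, the usual signature of "mediation", even though M mediates nothing here.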

Note also that any test of mediation is about variation. It is possible that M does affect Y but that the variation in M is mostly error by the time you remove the common variance between M and X (by partialing).

So what should we do instead? One thing is to look at the simple correlations between X and M and between M and Y. If both are large enough ("significant" is one criterion, but significance depends on sample size, which is irrelevant here), then we would conclude that variation in M is a possible explanation of the correlation between X and Y: it correlates with both of them. If M does not correlate with one of them, and if we have no reason to expect additional variables that affect M and X (or Y) in opposite directions (thus obscuring a real correlation), then M could not explain the X-Y correlation. (For example, X correlates with sex, but sex does not correlate with some other measure Y; hence sex is not a possible explanation.)

Here "explaining the correlation" simply means that some source of variation exists that affects X, Y, and M. The fact that M is part of this list tells us something about what that source of variation might be.

Can we say more than this? Consider a stricter criterion. Suppose we regress M on X and Y, and we require that both regression coefficients are present (high enough by some standard). Such a result would seem to rule out the possibility that r(X,M) or r(Y,M) is high simply because X and Y are correlated. Suppose, for example, that r(X,M|Y), the regression coefficient or partial correlation, is zero even though r(X,M) is positive. This would suggest that X does not really have any source of variation in common with M.

This does not quite follow. For example, it could be that X and M are affected equally by some set of variables Z, but X is affected by some additional variables that also affect Y. Thus, X and M are redundant measures of some of the factors that affect X, M, and Y. Similarly, it could happen that X and M are affected by exactly the same set of variables, but X is a more reliable measure of them than M. This could reduce the coefficient of M in a regression model to zero.

However, if we regress M on X and Y and find that both coefficients are high enough, then it is more plausible (compared to relying only on the simple correlations of M with X and M with Y) that M is indeed capturing some of the common variance affecting X and Y.

In general, I do not think we can learn much from anything other than the simple correlations r(X,M) and r(M,Y). If both of these are positive, then whatever M "measures" is a possible source of variation that accounts for the correlation between X and Y. But regression of M on X and Y could also be useful, if both coefficients are positive.

Mediation tests do have some uses. They can be useful as a manipulation check, a way of testing whether an experimental variable did what it was supposed to do. And, if its effect varies across subjects, we can ask whether that variation helps to explain the variation in outcomes. For example, cognitive therapy for depression (manipulated) changes how people think about the causes of bad events (measured by the Attributional Style Questionnaire), which, in turn, affects their depressive symptoms. The therapy is focused on the thinking, not the symptoms, so this is a manipulation check.
