25 May 2011

Unhealthful News 145 - Statins prevent heart attacks, except maybe in real life in Sweden?

There is a joke about economists that upon observing something working in practice they immediately set out to figure out whether it works in theory.  No one ever seems to make the same joke about health researchers (perhaps because it is less funny), but a similar observation applies: they observe that something works in the real world and wonder whether it works in the highly artificial confines of a randomized trial.  What they far too seldom seem to wonder is whether the opposite is true – whether the semi-theoretical results that come from trials really work out in practice.

A new study (which does not seem to have made the news, which is probably just as well) in the Journal of Negative Results in BioMedicine looked at statin use and the rate of AMI (acute myocardial infarction – i.e., heart attack).  The study was more in the economic style than the epidemiologic one (which is to say that the authors explained their methods and used a purpose-built comparison rather than just forcing everything into a logistic regression and not explaining what they did).  To summarize the basic result, they did not find a correlation between the rate of statin use (across geography, time, and age range) and AMI rates.
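To make the design concrete, here is a minimal sketch in Python of the kind of ecological comparison described, with entirely invented numbers (this is not the authors' actual method or data): compute each area's statin use rate and its AMI rate, then correlate them.  If statins work and nothing muddies the picture, the correlation should come out negative – which is the pattern the study did not find.

```python
# A hypothetical sketch of the ecological design, not the study's method:
# invented area-level rates, then a simple correlation.
import numpy as np

rng = np.random.default_rng(0)
n_areas = 290  # roughly the number of Swedish municipalities

use = rng.uniform(0.05, 0.25, n_areas)   # invented fraction of adults on statins
baseline = 0.004                         # invented annual AMI risk if untreated
# If a fraction `use` takes statins and statins cut risk by 25%, the area
# rate falls by 0.25 * use; add noise so areas differ for other reasons too.
ami = baseline * (1 - 0.25 * use) + rng.normal(0, 0.0001, n_areas)

r = np.corrcoef(use, ami)[0, 1]
print(f"Statin use vs AMI rate across areas: r = {r:.2f}")  # negative here
```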

This is rather troubling since one of the Accepted Truths of preventive medicine right now is that statins provide substantial benefit at very little cost.  But this information should not be dismissed because randomized trials got a different result.  Randomized trials do not represent the real-world circumstances in which people act.  For economic exposures (i.e., consumer choice or pure behavior – e.g., smoking cessation), trials are often almost useless.  For purely biological exposures (say, something in the water) or attempts to evaluate existing behaviors (such as the effects of an exposure that some people just happen to have, studied by forcing others to be exposed in a trial), this is not such a problem.  Most medical exposures fall somewhere in between – statins have a biological effect, but actually using them as directed is economic (a consumer behavior).

There are some obvious possible stories that would make the new result misleading and the trial results exactly right after all.  If statins are used more by subpopulations that need them more (i.e., that have more people at higher risk of disease), then there will be a simple confounding problem (called "confounding by indication") wherein high risk causes the exposure, so people with the exposure do worse than average even if the exposure is beneficial.  For a population where most everyone at high and moderate risk is consistently using statins, this confounding would largely disappear.
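A toy simulation, again with invented numbers, shows how confounding by indication plays out: a drug that genuinely cuts risk by a quarter looks harmful in the crude comparison because the high-risk people are the ones taking it, while stratifying on the indication recovers the true effect.

```python
# Hypothetical illustration of confounding by indication (invented numbers).
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

high_risk = rng.random(n) < 0.3              # invented: 30% of people high risk
p_statin = np.where(high_risk, 0.7, 0.1)     # high-risk people mostly get statins
statin = rng.random(n) < p_statin

base_p = np.where(high_risk, 0.020, 0.004)   # invented annual AMI risk by group
p_ami = base_p * np.where(statin, 0.75, 1.0) # assume statins cut risk by 25%
ami = rng.random(n) < p_ami

crude_rr = ami[statin].mean() / ami[~statin].mean()
print(f"Crude risk ratio (statin vs not): {crude_rr:.2f}")  # ~2: looks harmful

# Stratifying on the indication recovers the protective effect
for grp, label in [(high_risk, "high risk"), (~high_risk, "low risk")]:
    rr = ami[statin & grp].mean() / ami[~statin & grp].mean()
    print(f"Risk ratio among {label}: {rr:.2f}")  # ~0.75 in each stratum
```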

Another possible explanation is that they did not look at the data correctly.  What they did sounds reasonable, but it is impossible to know that for sure.  For one thing, the rate of fatal AMI seemed to do the "right" thing even though non-fatal AMI seemed to go a little bit the wrong way.  You will recall that I often question whether authors who found a positive result hunted around for a statistical model that generated their preferred outcome.  It should be realized that getting a misleading negative result by using the wrong statistical model is a much simpler exercise: it is very easy to fail to find something that really exists by analyzing the data wrong.  It is not clear if the authors hunted around a bit to see whether their negative result was still robust when they changed their analysis (that is, whether it might be that they just missed the effect by looking at the data one particular way).
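To illustrate how easy that is, here is a sketch (invented numbers once more) in which a real protective effect is hidden at the ecological level: sicker areas prescribe more statins, so the simple across-area correlation points the wrong way, and only an analysis that accounts for baseline risk uncovers the benefit.

```python
# Hypothetical sketch of missing a real effect by analyzing data one
# particular way (invented numbers).
import numpy as np

rng = np.random.default_rng(2)
n_areas = 290

baseline = rng.uniform(0.002, 0.008, n_areas)   # invented underlying AMI risk
# Uptake tracks need: areas with sicker populations prescribe more statins
use = np.clip(0.05 + 25 * (baseline - 0.002) + rng.normal(0, 0.02, n_areas), 0, 1)
ami = baseline * (1 - 0.5 * use)                # a real protective effect

print(f"Crude correlation: {np.corrcoef(use, ami)[0, 1]:.2f}")  # positive here

def residuals(y, x):
    """Regress y on x and return the leftover variation."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

# Adjusting for baseline risk uncovers the protective association
r_adj = np.corrcoef(residuals(use, baseline), residuals(ami, baseline))[0, 1]
print(f"Baseline-adjusted correlation: {r_adj:.2f}")  # negative here
```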

And I think there is some reason to worry.  The authors demonstrate some holes in their knowledge of scientific epistemology.  They wrote:
Results from an ecological study are best not being interpreted at the individual level, thus avoiding the ecological fallacy. However, the results can be used as a basis for discussion and for the generation of new alternative hypotheses.
A disturbing number of people seem to think there is something called the "ecological fallacy" that implies you cannot draw conclusions about the effect of an exposure on people based on ecological data.  That is simply wrong.  There is one odd way in which ecological data can steer you to an incorrect causal conclusion that is not present for other types of studies: it is possible that a higher rate of exposure in a population causes a higher rate of the outcome in the population even though the exposure does not have that effect on any given individual.  An example is that having more guns in a population causes people to be more likely to be shot by a stranger, but having a gun yourself does not make you more likely to be shot by a stranger (I am setting aside the fact that it makes you enormously more likely to be shot by a family member or by yourself).
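The guns example is easy to simulate with made-up numbers: give everyone a risk of being shot by a stranger that depends only on how many guns are around them, not on whether they own one, and the population-level correlation is strong while the individual-level association is null.

```python
# Hypothetical sketch of the "ecological paradox" in the guns example
# (invented numbers): risk depends on the population's gun prevalence,
# not on individual ownership.
import numpy as np

rng = np.random.default_rng(3)
n_pops, n_per_pop = 50, 20_000

gun_prevalence = rng.uniform(0.05, 0.6, n_pops)  # invented, varies across pops
pop_rates, owner_rates, nonowner_rates = [], [], []
for p in gun_prevalence:
    owns = rng.random(n_per_pop) < p
    # Individual risk depends only on the population's gun prevalence
    shot = rng.random(n_per_pop) < 0.001 * (1 + 10 * p)
    pop_rates.append(shot.mean())
    owner_rates.append(shot[owns].mean())
    nonowner_rates.append(shot[~owns].mean())

# Ecological view: more guns in a population, more stranger shootings
r = np.corrcoef(gun_prevalence, pop_rates)[0, 1]
print(f"Across populations, prevalence vs shooting rate: r = {r:.2f}")  # strong
# Individual view: owners and non-owners face about the same risk
print(f"Mean rate for owners: {np.mean(owner_rates):.5f}, "
      f"for non-owners: {np.mean(nonowner_rates):.5f}")
```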

But oddities like this are rare and usually fairly predictable.  Beyond that, the challenges with ecological data are just the same as with any other study design: measurement error, confounding, etc.  There is no fallacy, and usually there is no reason to think there is an "ecological paradox" like with the guns (it is not really a paradox either, but that term is a lot closer to correct than "fallacy").  Indeed, population-wide ecological data has some advantages over other data, creating a tradeoff rather than one design having a clear advantage.  There is no more an "ecological fallacy" that makes ecological studies necessarily worse than there is a "sampling fallacy" that makes other study designs necessarily worse.

As for generating new alternative hypotheses, allow me:  Hypothesis 1 = statins do not work so well when used by regular people in real life as compared to the artificial situation in trials.  Hypotheses 2A (B,C…) = statins do not work as well in subpopulation A (or B or C…) as they do in trial populations.  Hypothesis Variant 3 = this is true in Sweden but not elsewhere.  Hypothesis Variant 4 = the observed lack of correlation will change when there is greater use of statins.  There, done.  I generated the hypotheses.  Shall we get on with figuring out what is really true?

Probably not.  The randomized trials have spoken, and any contrary evidence will be dismissed by those who do not understand it (which includes most of the people who make health policy).  There is probably no harm done in ignoring the other evidence in this case, because even if statins are a bit less impressive than currently thought, they still should be used a lot more.  Still, it is not so reassuring that the reaction to this from those who tell millions of people how to live healthier will likely be to ignore it because it must be wrong, rather than to act like scientists and make the effort to assure themselves by figuring out why this result occurred.
