Bring Your Beef to the Source!

“Red Meat Blamed for One in Ten Early Deaths” – newspaper headlines following the publication of “Red Meat Consumption and Mortality” in Archives of Internal Medicine this week.  But this MiniBlog is not about red meat; it is about the use of science (specifically, epidemiology) to inform public health, and about the reactions and counter-reactions that follow when particular interest groups like, or do not like, the findings of a particular study.

But first, the study itself. It found an association between eating red meat (transparently defined in the paper) and an elevated risk of all-cause mortality.  Numerous potential confounders (eg, that red meat eaters tend to smoke more or exercise less) were accounted for, and the association was still there.  Overall, there was about a 12% increase in the risk of all-cause mortality associated with eating one additional portion of red meat per day.  However, and here’s the reason not to worry about this study, the baseline risk outlined in the paper is, in rounded terms, only circa 10 in 1,000 per year, so a 12% increase elevates this to about 11.2 in 1,000 per year.  The vast majority of the population almost certainly do far riskier things than this – driving a car 10,000 miles a year is one example of a behaviour that elevates all-cause mortality risk well beyond eating red meat.
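To see how the relative and absolute figures relate, here is the arithmetic behind those rounded numbers (a back-of-the-envelope sketch using the approximate figures above, not the paper’s exact estimates):

\[ 10 \text{ per } 1{,}000 \text{ per year} \times 1.12 = 11.2 \text{ per } 1{,}000 \text{ per year} \]
\[ 11.2 - 10 = 1.2 \text{ extra deaths per } 1{,}000 \text{ people per year, an absolute increase of about } 0.12\% \]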

So, reactions to the study…  Firstly, the media.  Well, of course, this is just the sort of study that the media loves: “Red meat is not only unhealthy but can be positively lethal, according to a major US study”.  Needless to say, this is scaremongering, and an elevated risk from 10 in 1,000 to 11.2 in 1,000 is hardly “positively lethal”.  Irresponsible reporting of science, yes!  But bad science…?

This is where the second tranche of reactions to the study comes in.  Numerous bloggers, nutritionists, newly expert science methodologists and other assorted malcontents who dislike the study’s findings have been queueing up to tell each other that the science is flawed – largely, it seems, because they have some sort of nutritional axe to grind (or, in some cases, advice to sell).  These ‘critiques’ largely conclude that, because not all potential confounders can be adjusted for and/or because the study is evidence of association rather than causation, epidemiology itself is a flawed approach and only evidence from randomised controlled trials should be allowed to contribute to our understanding. Well, not really:

(a) Epidemiology is an established practice that is responsible for many of the health improvements in society over the last half century.  It was epidemiology that first unearthed the association between smoking and several diseases – remember, smoking was “good for you” in the 1950s and 60s.

(b) Epidemiology provides important evidence when experimentation is not possible, particularly in areas where negative health outcomes are expected.  It is simply not ethically acceptable to randomise people into two groups and ask one of them to undertake behaviours that you know or suspect to be detrimental to health (eg, smoking 40 cigarettes per day) simply so you can measure what happens.  In such cases, epidemiology provides our best source of public health evidence.

(c) Epidemiology is a key part of the scientific armoury used to inform public health.  This discussion piece in The Lancet is clear about that.

(d) Epidemiology has a role to play both in forming hypotheses about causation AND in contributing to the volume of evidence about public health impacts and issues.  This meta-analysis of 22 studies on non-vigorous physical activity is an example of the latter, drawing on a dataset of almost one million people.

(e) In some high-profile cases, neither epidemiology nor randomised controlled trials can claim to have settled public health issues.  In the long-controversial case of HRT, coronary heart disease and breast cancer, a commentary in The Lancet noted that “Recent reanalyses have brought results from observational and randomised studies into line.  The results are surprising.  Neither design held superior truth.  The reasons for the discrepancies were rooted in the timing of HRT and not in differences in study design”.  This discussion is a really nice illustration of the respective contributions that randomised controlled trials and epidemiology can make.

There is nothing in the red meat study that merits trashing the science or value of epidemiological research.  Epidemiological research is an important contributor to our understanding of a vast range of public health issues.  However, if anyone believes they can demonstrate or effectively argue that the science of the red meat study is flawed, Archives of Internal Medicine (the journal that published the study) actively encourages high-quality comments and critiques on the research it publishes in its peer-reviewed ‘Letters to the Editor’ section.  So if you have a “beef” with the conduct of the study, why not take it to the source?  Here’s the submission link for you.

But beware… science is hard!  It is hard because the fundamental requirement of science is to convince those with whom you most strongly disagree of the veracity of your arguments.  Self-published online critiques of science, targeted at and endorsed by those who already share your world view, are nothing more than holding court in the pub… and that’s about as far from science as it gets!
