How to make sense of all those health studies


I was reminded a couple of weeks ago of how confusing it can be for the general public to evaluate results from scientific research. A press release appeared in the media citing a study that found ("demonstrated") that caffeine consumption during pregnancy increased the risk of miscarriage. The study, done at Kaiser Permanente in Oakland, observed a group of 200 women. Two days later, another press release appeared: a study of 2,407 women conducted at the Mount Sinai School of Medicine in New York found "little indication of possible harmful effects of caffeine on miscarriage risk."


If you have read health reports over the years, you know this is a continuing cycle. Margarine is healthier than butter ... no, it's worse for you than butter! Hormone replacement therapy protects the heart ... no, it doesn't protect the heart and is associated with increased risk for breast cancer! Antioxidant supplements can help prevent atherosclerosis ... no, antioxidant supplements have no effect, and may even be harmful! Oat bran can decrease risk for heart disease ... no, maybe not. You get the point.


Let's take a look at how researchers come to the conclusions they do. Several types of research designs can be used, depending on available funding and the goals of the researchers. The main type of study from which diet and health information is obtained is the epidemiological study, which examines the incidence of disease in populations of people. Retrospective epidemiological studies look at people who have been diagnosed with or died from a disease such as cancer or cardiovascular disease and attempt to determine, in retrospect, what factors may have caused the disease.


Information from a prospective study is much more valuable. A large number of people (the larger the better) who are similar in age and occupation are given periodic questionnaires about their lifestyle habits (diet, exercise, tobacco and alcohol use, and various other factors) throughout their lifetimes - yes, they sign on for life. A number of current studies use this research design; several have been ongoing for 40-plus years and started with anywhere from 30,000 to 50,000 subjects. This is a very powerful type of study that yields a vast amount of information.


What do researchers do with this information? They use statistical analysis to, as they like to say, "tease out" what is significant regarding the relationship between various lifestyle habits and disease. TONS of statistical analysis.


And what do statistics tell us? They tell us how strong those relationships are. Not whether a specific behavior or set of behaviors causes a disease, but whether those behaviors put us "at risk" for getting a disease. This is how the tobacco companies managed for such a long time to maintain that cigarette smoking did not cause cancer. Yes, people who were longtime smokers were more likely to get lung cancer than non-smokers, but the studies that gave us this information did not reveal causality. No research study of this design will ever demonstrate cause and effect.
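For readers who want to see what that kind of "risk" figure actually is, here is a rough sketch in Python with entirely made-up numbers (not drawn from any real study): a relative risk computed from a simple exposure-versus-disease table. The arithmetic measures association only; nothing in it can establish cause and effect.

# A minimal sketch, with made-up numbers, of the kind of "risk" figure
# these studies report: a relative risk from a 2x2 exposure/disease table.
# It quantifies association only; the arithmetic cannot prove causation.

def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk of disease among the exposed divided by risk among the unexposed."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

# Hypothetical cohort: 10,000 smokers and 10,000 non-smokers followed for years.
rr = relative_risk(exposed_cases=150, exposed_total=10_000,
                   unexposed_cases=15, unexposed_total=10_000)
print(f"Relative risk: {rr:.1f}")  # 10.0: smokers were 10x as likely to develop the disease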


So why the contradictory results we see over time in health-related research? A single study cannot answer all the questions; in fact, good research adds to the body of knowledge but also raises new questions. In theory, follow-up studies should narrow the focus, increase in depth and use more rigorous methods. Conclusions drawn from subsequent studies may or may not support earlier research. This process is repeated over and over again.


How can a layperson begin to evaluate the studies he or she learns about in the media? First, consider the source. News media, by definition, report news. "New" findings are more interesting than another study supporting what we already think we know. Be skeptical. New information should be considered preliminary - don't change your habits based on one report. Look at the subjects. Who were they? Were they people like you, or rats? How many subjects were there? The more subjects studied, the more reliable the reported associations. Over what period of time did the research take place? The longer, the better. Ask yourself whether the findings are supported by previous research.
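To see why the number of subjects matters so much, here is a back-of-the-envelope illustration in Python. It uses the textbook margin-of-error formula for a proportion, and the sample sizes only loosely echo the studies mentioned above (the figures are illustrative, not taken from those studies): the bigger the sample, the less wiggle room there is around the result.

# A rough illustration of why more subjects make a finding more trustworthy:
# the margin of error around an observed rate shrinks as the sample grows.
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for an observed proportion p in a sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

observed_rate = 0.10  # say, 10% of subjects developed the condition
for n in (200, 2_400, 30_000):
    moe = margin_of_error(observed_rate, n)
    print(f"n={n:>6}: 10% plus or minus {moe*100:.1f} percentage points")
# With 200 subjects the true rate could plausibly be anywhere from about 6% to 14%;
# with 30,000 subjects the range narrows to roughly 9.7% to 10.3%.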


The last few questions fall into the realm of methodology. You won't obtain this from news media, so if you are really interested in finding out about the research, go to PubMed or another Web site that has the original study. There you will find not only the methodology, but other studies addressing the same question, so you can evaluate the information in context.


When reading about the next health-related "discovery," think about this: The discovery is very likely based on correlations. I have found a correlation between washing my car and a subsequent rain. I'd be willing to bet I could find a statistical analysis that would find this correlation significant. Can I claim cause and effect?
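If you are curious, here is a toy simulation in Python (it assumes the numpy and scipy libraries are available) that makes the point: test enough unrelated "habits" against the weather and a few will look statistically significant purely by chance.

# A toy demonstration of how a "statistically significant" correlation can
# appear by chance alone when enough unrelated things are tested --
# no car wash ever caused a rainstorm.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
rain = rng.integers(0, 2, size=365)          # did it rain each day? (purely random)
significant = 0
for _ in range(100):                          # 100 unrelated daily "habits"
    habit = rng.integers(0, 2, size=365)      # e.g., washed the car that day
    r, p = pearsonr(habit, rain)
    if p < 0.05:
        significant += 1

print(f"{significant} of 100 unrelated habits 'significantly' predicted rain")
# Expect roughly 5 false positives at the conventional 0.05 threshold.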




• Fresh Ideas: Starting conversations by sharing personal perspectives on timely and timeless issues. Hazel Bowen lives in Washoe Valley and taught nutrition at UNR from 1996 to 2007. She is presently teaching an online nutrition class through the UNR Extended Studies Program.
