Recent articles have heralded coffee’s health benefits, particularly when it comes to heart health. We delve deeper into the medical studies these articles are based on to determine if, in fact, coffee truly is good for cardiovascular health.
Recent Articles Suggest That Coffee Helps Your Heart
In 2004, the general consensus in the press was that coffee was bad for your heart.
But let’s skip ahead about a decade. In 2015, Healthday.com ran this headline: “Love Coffee? Your Heart May, Too.”
The story begins, “Drinking three to five cups of coffee a day may reduce the risk of developing clogged arteries, which in turn might reduce the risk of heart attacks, a new study suggests.”
This is a transcript from the video series The Skeptic’s Guide to Health, Medicine, and the Media. Watch it now, on The Great Courses Plus.
That sentence, with its claim that coffee reduces the risk of clogged arteries, which in turn might reduce the risk of heart attacks, explains how the surrogate marker (in this case, a measurement of clogged arteries) is connected to the real clinical endpoint of importance: heart attacks.
The 2015 study itself followed over 25,000 men and women in South Korea. So a legitimate criticism might be: would the same study in a more ethnically diverse American population show the same thing?
We don’t know. All of the subjects participated in an interview about their eating habits and coffee consumption—there, again, we’ve got that problem of accurate recall—and then they underwent a CT scan to measure calcium deposits near their hearts.
Those CT findings are, yes, a surrogate marker, but it’s well-correlated with cardiovascular risk. Basically, there should not be calcium in the lining of your blood vessels; if there is, that’s atherosclerotic plaque.
Different surrogate markers occupy different places in the chain that leads to the most important endpoint. We can think of those early risk factors for heart disease as the back of the chain, things like blood cholesterol levels or inflammatory markers, or high blood pressure, or smoking.
Down further is a marker that measures calcium in the blood vessels, meaning that there is more than just an increased risk of atherosclerosis—there’s real atherosclerosis occurring. The next step further down would be to count episodes of cardiac chest pain or heart attacks; and the final endpoint would be death.
The best surrogate markers, the ones you need to take most seriously, are the ones closest to the end of the chain. So this Korean study should carry more weight than earlier studies that relied on blood pressure readings or lab tests. Here, in 2015, we found that more coffee means less atherosclerosis in the chest as measured by CT scans.
Other headlines about this same study included, from Newsweek, “Drinking Three Cups of Coffee a Day Reduces Risk of Heart Attack” or from CBS Philly “Study: Drinking Coffee Can Reduce Risk of Heart Attack.”
That CBS Philly story started with this sentence: “You begin to wonder just how many studies can be done on coffee, but the more we do the more positive results we’re starting to see.” That context is good; it acknowledges the shifting perspective as more studies on better markers have been published.
Limitations of the Korean Coffee Study
But not all journalists were as enthusiastic. From Forbes, we have the headline: “No, Drinking Coffee Won’t Save Your Life or Prevent Heart Attacks.”
The first line of this story reads, “Once again, the media has swallowed the bait hook, line, and sinker.” This author’s main criticism of the media’s handling of the Korean study has nothing to do with surrogate markers.
Instead, it’s about the very nature of this kind of study: It was an observational, not an experimental, study. And observational studies cannot definitively show a cause-and-effect relationship.
A study is observational if exposure to the study variable isn’t under the control of the researchers. In this Korean study, the researchers didn’t tell the participants whether or not to drink coffee. They just observed what happened, without randomization.
If this were a clinical trial design, the researchers would have divided up the 25,000 participants and assigned them, randomly, to either drink or not drink coffee.
In an observational study, you look at a group of people and measure things that are already happening—in this case, their coffee-drinking habits—and you correlate that with some kind of measured outcome, like the calcium found on their heart CT scans.
Along the way, you also ask about or measure all sorts of other things, like maybe smoking habits, or whether there’s a family history of heart disease, or whether people exercise. All of those factors also influence the measured endpoint, and by asking about them, you can mathematically control for those variables in your conclusion.
You’d say, for instance, after controlling for smoking and family history and exercise, coffee consumption was correlated with a decreased measurement of atherosclerosis.
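This idea of “controlling for” a variable can be sketched with a toy simulation. The data below are entirely made up (not the Korean study’s data), and the setup assumes a deliberately extreme scenario: exercise both raises coffee intake and lowers coronary calcium, while coffee itself has no true effect. A crude regression then shows a spurious coffee benefit that disappears once exercise is included in the model.

```python
# Hypothetical simulation of confounding (invented data, not the Korean study):
# exercise drives both coffee intake and calcium scores; coffee's true effect is zero.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

exercise = rng.integers(0, 2, n)                    # 0 = sedentary, 1 = exerciser
coffee = 2 + 2 * exercise + rng.normal(0, 1, n)     # exercisers drink more coffee
calcium = 50 - 10 * exercise + rng.normal(0, 5, n)  # exercise lowers calcium; coffee does not

# Crude (unadjusted) regression: calcium ~ coffee
X_crude = np.column_stack([np.ones(n), coffee])
crude_coef = np.linalg.lstsq(X_crude, calcium, rcond=None)[0][1]

# Adjusted regression: calcium ~ coffee + exercise ("controlling for" exercise)
X_adj = np.column_stack([np.ones(n), coffee, exercise])
adj_coef = np.linalg.lstsq(X_adj, calcium, rcond=None)[0][1]

print(f"crude coffee coefficient:    {crude_coef:.2f}")  # spuriously negative
print(f"adjusted coffee coefficient: {adj_coef:.2f}")    # near zero, the true effect
```

The catch, as the next paragraphs explain, is that adjustment only works for confounders you thought to measure.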
But, in a real-life observational study, you can’t measure everything. Maybe there are other factors.
Perhaps coffee drinkers also put cream in their coffee, or consume more sugar, or maybe they come from neighborhoods that are wealthier. Maybe while they drink their coffee they walk slower, so they get less exercise walking to work.
Who knows? There are likely thousands of variables that contribute to heart-attack risk, and you cannot measure all of them, so you cannot be sure it’s the coffee that changes the risk. Maybe, and this is a huge limitation of observational studies, the change in risk came from something else that wasn’t measured.
There’s another very important shortcoming of observational studies, called “reverse causality.” What if people who have had heart disease, or maybe just people worried about getting heart disease because of a family history, were less likely to drink coffee? In that case, it’s heart disease risk that drives drinking less coffee, not the other way around.
Observational studies cannot ever say that A causes B. They can only say that A is correlated with B.
That’s a distinction that’s almost always missing from newspaper headlines. Think about the difference between Newsweek’s “Drinking Three Cups of Coffee a Day Reduces Risk of Heart Attack” and the much more accurate, “Drinking Three Cups of Coffee a Day Is Associated with a Decreased Risk of Heart Attacks.”
Experimental Versus Observational Trials
The best way to prove causality—that is, to know that thing A causes thing B—is with an experiment, in a clinical trial. A well-done clinical trial provides the strongest evidence for medical decision-making.
Take a group of people who are all pretty much the same and randomize them into two groups. One group takes the medication and the other group doesn’t.
You need a placebo, too, so the people in each group don’t know which group they’re in. Then, measure the outcome.
If you did the study right, the only difference between the people is the exposure to the thing you’re studying, so that must be the cause of the observed difference in outcome.
And you know that the exposure took place before the outcome, so you don’t have to worry about reverse causality. Bingo!
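Why does randomization defeat unmeasured confounders? A minimal sketch, again with invented data: give each of 25,000 simulated participants a hidden trait (call it “exerciser”) that the researchers never measure, then assign everyone to an arm at random. The trait ends up nearly evenly split between arms, so it can’t explain a difference in outcomes.

```python
# Toy sketch of randomization balancing an unmeasured trait (hypothetical data).
import numpy as np

rng = np.random.default_rng(42)
n = 25_000                                # same size as the Korean cohort

exerciser = rng.integers(0, 2, n)         # hidden trait the researchers never measure
# Random assignment: half to a no-coffee arm (0), half to a coffee arm (1)
arm = rng.permutation(np.repeat([0, 1], n // 2))

rate_control = exerciser[arm == 0].mean()
rate_coffee = exerciser[arm == 1].mean()

print(f"exercisers in control arm: {rate_control:.3f}")
print(f"exercisers in coffee arm:  {rate_coffee:.3f}")
# The two rates agree to within sampling error, with no need to measure the trait.
```

The same balancing happens simultaneously for every other hidden trait, which is exactly what mathematical adjustment in an observational study cannot guarantee.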
Now, there are many pitfalls to doing this perfectly. But done well, an experimental study (that is, a randomized clinical trial) is the strongest way to show that one thing is a cause of another.
You’ll have to use your skeptic’s toolkit to know if a news article is about an observational study or an experimental study. Look at the details provided to determine if the study was strong enough to support the conclusions.
Remember that experiments have to include randomization, and there must be something done to a subgroup of the study participants before the endpoint is measured. If what you’re reading about is only a survey, or a passive collection of information, that’s probably an observational study.
That’s not to say that observational studies are worthless. Sometimes, you cannot practically or ethically do an experiment.
You’re not going to randomize half the people on an airplane to jump out without parachutes; but what you can do is observe that people who happened to fall without a parachute are unlikely to survive. Likewise, you couldn’t realistically randomize 25,000 people into groups that are either required to or forbidden from drinking coffee.
Perfect science isn’t always practical. Still, we should do the best we can, and media accounts would serve us best if they accurately reflect a study’s shortcomings.