This is the Weight and Healthcare newsletter! If you like what you are reading, please consider subscribing and/or sharing!
In my work around weight science and healthcare, I see a lot of confusion about, and misuse of, statistics. Today I thought I would point out three of the most common issues that I encounter.
Sure, intentional weight loss fails 95% of the time, you just have to keep trying until you’re in the 5%.
I know not everyone took statistics, but I did, so let me assure you that this isn’t how statistics work at the most basic level (remember that this is the “logic” many people use when playing the lottery.) In fact, weight loss is worse than the lottery in this respect, because repeated attempts can actually have decreasing odds of success. The body responds to weight loss attempts by changing physiologically to become a weight-gaining, weight-maintaining machine, and it continues to do so even after the diet ends. This can make repeated attempts even less likely to result in significant, long-term weight loss. Moreover, many people regain more than they lost, meaning that if they (or their healthcare provider) had a specific weight/BMI in mind, they may end up farther from it than they started. Not to mention that “failure” (being clear that the diet failed the patient, and not the other way around) is not benign. Weight cycling (losing weight and then gaining it back) is linked to significant harm, including health issues that get blamed on being higher weight.
But It’s Statistically Significant
In the most simplified explanation, if a study result is “statistically significant,” it means that the result is unlikely to be due to chance alone. So participants could have lost an average of one pound, but if it’s determined that a one-pound average loss is unlikely to have occurred by chance, then that one-pound loss is statistically significant.
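To see how a tiny effect can still clear the significance bar, here is a minimal sketch in Python. Every number in it (sample size, average loss, standard deviation) is made up for illustration, not taken from any real study:

```python
import math

# Hypothetical trial numbers, assumed for illustration only
n = 10_000        # participants
mean_loss = 1.0   # average weight loss, in pounds
sd = 10.0         # standard deviation of individual results

# With a big enough sample, even a one-pound average effect
# produces a huge test statistic.
standard_error = sd / math.sqrt(n)   # 0.1
z = mean_loss / standard_error       # 10.0

print(f"z = {z:.1f}")  # far beyond 1.96, so p < 0.05
```

The result is “statistically significant,” and yet the effect is one pound. Statistical significance measures confidence that an effect exists, not whether the effect matters.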
There are a couple of ways that this goes wrong.
Sometimes people think that “statistically significant” means “important” (or they hope that other people will think that’s what it means), so they’ll say that a result in a study was “statistically significant” without mentioning that the actual effect (the amount of weight loss, for example) was very small (one might even say…insignificant.)
Something else that happens in weight science is that the conclusion of a study (which is often the only part that is not behind a paywall) will state that participants lost “a significant amount of weight” when what it really means is that they lost a small amount of weight, but that the weight loss was statistically significant. Whether accidentally or on purpose, the colloquial meaning of “significant” misleads people (including healthcare practitioners) into believing that the intervention was far more successful than it actually was. So the conclusion might say that subjects lost a significant amount of weight when, if you get behind the paywall and dig into the study, you’ll find that they lost 2.9% of their body weight (and often, had already started regaining it when the study ended.)
Percent increase in complication risk vs. percent complication risk
Many healthcare procedures have risks of complications. Typically (and, again, this is a simplified explanation) the decision to treat is based on the benefits of the treatment versus the risk of the procedure. The same procedure may have a different risk of complications for people with different circumstances. For example, people with hemophilia can have a higher risk of bleeding during surgery and a higher risk of poor wound healing and infection immediately following surgery than those who do not have hemophilia.
To be clear, I’m not suggesting that higher risk justifies denial of care, and I’m giving the most simplified possible view of this in the service of just explaining the statistical issue. It gets very complicated in everything from the methodology of the research used to determine the risk of complications to the structures of privilege and oppression that lead some people’s lives to be valued more highly than others. Complication risk is often used as the “justification” for BMI-based healthcare denials (wherein healthcare is held hostage for a weight loss ransom, and I wrote about that in more detail here.)
I recently encountered an example of the issues with confusing these when I received an email from a patient who was facing a BMI-based denial of surgery. The surgeon insisted that there was a 100% complication rate for the procedure for people with a BMI over 40. That wasn’t my understanding and it didn’t strike me as likely, so I did some digging. It turns out that there was absolutely no research to back the 100% complication claim, but there was some research that showed that for people with a BMI over 40 the risk of complications increased by 100%.
Herein lies the issue. A 100% increase in the risk of complications is absolutely not the same as a 100% risk of complications.
The base risk of complications for the procedure was 1%, meaning that on average, 1 out of 100 people who have the procedure will experience complications.
A 100% increase of a risk of complications of 1% gives us a risk of complications of 2%, meaning that, on average, for people with a BMI over 40, 2 out of 100 who have the procedure (and not 100 out of 100, as the surgeon thought) will experience complications.
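The arithmetic above can be checked in a few lines of Python, using the 1% baseline from this example:

```python
base_risk = 0.01         # baseline complication risk: 1 in 100
relative_increase = 1.0  # a "100% increase" in risk

# A percent *increase* multiplies the baseline risk;
# it does not replace it.
new_risk = base_risk * (1 + relative_increase)

print(f"{new_risk:.0%}")  # 2% -- not 100%
```

The key distinction: “increased by 100%” is a relative change applied to the baseline, while “a 100% risk” is an absolute probability. Confusing the two turned a 2-in-100 risk into a claim of certain complications.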
(I want to point out that when those of higher weight/BMI experience higher complication rates, there is a tendency to assume that body weight is the problem when, in fact, the problem may well be a system of research, tools, training, best practices, and biases that is created for thin bodies and fails to equally support fatter bodies, but that’s the subject for another post.)
Statistics help us make sense of data in ways that can be incredibly helpful. That said, it is certainly true that statistics can be manipulated, so we always need to ask who designed the analysis and for what purpose, and be on the lookout for these common issues. These certainly aren’t the only ones! If you have examples you’d like to share or questions about stats you’d like me to write about, please feel free to leave them in the comments!
Did you find this post helpful? You can subscribe for free to get future posts delivered direct to your inbox, or choose a paid subscription to support the newsletter (and the work that goes into it!) and get special benefits! Click the Subscribe button below for details:
Liked the piece? Share the piece!
More research and resources:
https://haeshealthsheets.com/resources/
*Note on language: I use “fat” as a neutral descriptor as used by the fat activist community, I use “ob*se” and “overw*ight” to acknowledge that these are terms that were created to medicalize and pathologize fat bodies, with roots in racism and specifically anti-Blackness. Please read Sabrina Strings’ Fearing the Black Body – the Racial Origins of Fat Phobia and Da’Shaun Harrison’s Belly of the Beast: The Politics of Anti-Fatness as Anti-Blackness for more on this.
Thank you for this breakdown! I’ve taken statistics and I didn’t realize how much the diet industry was manipulating them and misinterpreting them.
Also that doctor who misinterpreted “100% increase” makes me cringe so hard. I am horrified and think this is a really basic concept. If they don’t understand this kind of math, how can we be sure they’re safely running the calculations for medication dosing and whatnot?
Thank you so much for this! I've never taken a statistics course so, although I have known for a while that statistics are often misunderstood and misused, the information you provided is eye-opening and empowering.