This is the Weight and Healthcare newsletter! If you like what you are reading, please consider subscribing and/or sharing!
In Part 1 I talked about the WaPo/Examination piece “As ob*sity rises, Big Food and dietitians push 'anti-diet' advice…General Mills warns of “food shaming"; dietitian influencers promote junk foods and discourage weight loss efforts.” I discussed my experience with being interviewed, as well as some basics about the piece. Today I’m going to talk about their methodology, such as it is.
One thing that I’m not going to do here is whataboutism (ie: what about dietitians shilling for the weight loss industry) but I do want to point out that it was an active choice on the part of those behind this article to target anti-diet dietitians.
The piece is based on a survey they conducted. As they explain:
“The Examination and The Washington Post analyzed more than 6,000 social media posts by 68 registered dietitians with at least 10,000 followers. The analysis showed that roughly 40 percent of these influencers, with a combined reach of over 9 million followers, repeatedly used anti-diet language.”
You can tell that this is more editorial than science because they know the exact numbers but use terms like "more than," "roughly," and "over." They know exactly how many posts "more than 6,000" represents, they know the exact follower counts at the time of analysis, and, assuming someone over there got through elementary school math, they can calculate the exact percentage of influencers that their "analysis" identified. It's one thing to state the exact numbers and then use terms like these later in the piece, but they are choosing language that is meant to be persuasive, not precise.
I reached out to Sasha about this in an email:
One quick request – can I get a copy of the WaPo/Examination dietitian/social media post survey and the methodology behind it? As a research nerd I would love the chance to look at the full information.
He responded:
For the recent story on the anti-diet movement, we used the same dataset and manually reviewed influencers' social media feeds to determine whether they regularly used anti-diet language.
The data analysis was led by the Washington Post, and they aren't able to share data that comes from scraping social media sites and accounts. So unfortunately I'm not able to provide the raw data or go further into detail on the analysis.
I hope this is helpful and thanks again for your insights in our conversation.
I'm curious why WaPo isn't "able to share data that comes from scraping social media sites and accounts," but I'll move on.
Let's take a moment to talk about their methodology (which is in a box at the end of the first piece, "The food industry pays 'influencer' dietitians to shape your eating habits"):
"The Examination and The Post analysis included registered dietitians on Instagram and TikTok who used "Registered Dietitian," "RD" or "RDN" in their account name or social media bio as of July 2023 (social media users can adjust these fields at any time). The analysis included only those who had at least 10,000 followers (the lower threshold for what is typically considered a "micro-influencer"), had created at least 10 posts over the last year, and had posted English-language content.
This identified 68 influencers.
This is… not what I expected. I had assumed that the 68 Registered Dietitians were meant to be some kind of representative sample, because I thought that surely the behavior of 68 people on social media would not warrant a two-part investigative series by The Examination and the Washington Post. But here we are.
In the case of this second piece, they are tracking the "roughly 40%" (though we don't know how "roughly" that percentage has been manipulated) who "repeatedly used anti-diet language," which is to say that this entire piece is about the behavior of roughly 27 people across two social media platforms. And, again, they aren't saying that this is a representative sample of RDs; these are literally the only (roughly) 27 who met their non-rigorous criteria.
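Just to make the arithmetic concrete, here is my own back-of-the-envelope Python (purely illustrative, nothing from their analysis; the only published figures are the 68 dietitians and the unspecified "roughly 40 percent"):

```python
# Back-of-the-envelope math on the only two figures they published:
# 68 dietitians and "roughly 40 percent."
influencers = 68
reported_share = 0.40  # "roughly 40 percent" -- the exact figure was not disclosed

print(influencers * reported_share)  # 27.2, i.e. about 27 people

# Working backwards: several different exact counts all land near "40 percent."
for count in range(25, 31):
    print(f"{count} of {influencers} = {count / influencers:.1%}")
```

Any of those exact counts would have been just as easy to report as "roughly 40 percent" was.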
I emailed Sasha to ask the obvious research methods question: what definitions were utilized for "repeatedly used" and "anti-diet language" in the analysis? At this time I have not heard back.
Let’s talk about the way they analyzed the posts:
Reporters archived all posts between July 1, 2022, and Aug. 1, 2023, generating a database of over 6,000 pieces of content.
Two reporters separately reviewed each influencer’s posts to identify instances when the dietitian at least once promoted a product, brand or industry-sponsored message. This included using a “Paid Partnership” tag on the post, mentioning personalized discount codes or writing “#ad” or “#sponsored” in the post description. These criteria were selected based on the Federal Trade Commission’s guidelines for social media influencers.
Reporters manually reviewed feeds and defined posts as having an unclear disclosure if the post (1) was promotional but had no clear disclosure, (2) was labeled as an ad but the brand sponsoring the post was unclear (e.g. the use of a vague “partner” tag instead of a brand mention), or (3) had a sponsorship disclosure but the “ad” mention was placed deep in the description of the post.
Again, why not just tell us precisely how many pieces of content? Is it 6,001 or is it eleventy gabillion? We may never know. Also, just from a research methods point of view, they say two reporters reviewed the posts separately, but they do not say what the criterion for inclusion was. Did both reporters have to (separately) "identify" a piece of content in order for it to be selected, or was it included if either reporter "identified" it?
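Just to illustrate why that distinction matters, here is a tiny example with invented post IDs (nothing from their data): counting only posts that both reporters flagged versus posts that either reporter flagged can produce very different totals.

```python
# Invented example: IDs of posts each of two reviewers flagged as promotional.
reporter_a = {101, 102, 103, 105, 108}
reporter_b = {102, 103, 104, 108, 109}

both_flagged = reporter_a & reporter_b    # intersection: both reporters identified it
either_flagged = reporter_a | reporter_b  # union: at least one reporter identified it

print(len(both_flagged), sorted(both_flagged))      # 3 posts: [102, 103, 108]
print(len(either_flagged), sorted(either_flagged))  # 7 posts: [101, 102, 103, 104, 105, 108, 109]
```

Without knowing which rule they used, we don't know how conservative or generous their counts are.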
Let's dig deeper here. The way they identified posts to support their claim that there was a lack of disclosure was… using the disclosures? (That is to say, based on their own criteria, if a post didn't in some way disclose that it was an ad, it couldn't have been included in their analysis.) They then made up their own three-part definition of "unclear disclosures."
The first part of the definition they made up is "was promotional but had no clear disclosure." I'm a bit confused, because the criteria that produced these posts were "using a 'Paid Partnership' tag on the post, mentioning personalized discount codes or writing '#ad' or '#sponsored' in the post description." These all seem like forms of clear disclosure to me. (I'll sketch out this circularity problem after going through all three parts.)
The second… "was labeled as an ad but the brand sponsoring the post was unclear (e.g. the use of a vague 'partner' tag instead of a brand mention)." Unclear to whom? By what definition? Do they mean the "partner" hashtag? Were the partners tagged in the post (which is what Instagram currently requires; TikTok has a "commercial content disclosure" toggle)? To be meaningful, research like this requires clear definitions, not an "e.g." and an ellipsis.
The third part is "had a sponsorship disclosure but the 'ad' mention was placed deep in the description of the post." Again, what is the definition of "deep"? And, again, it seems like the disclosure was made.
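Back to the circularity problem from above: here is a toy sketch of their stated selection criteria, with hypothetical posts and field names (my reconstruction, not their actual code or data). A filter built out of disclosure markers cannot, by construction, surface a sponsored post that carries no disclosure at all.

```python
# Toy reconstruction of the stated selection criteria -- hypothetical posts and
# field names, not the reporters' actual code or dataset.
posts = [
    {"id": 1, "caption": "Loving this cereal #ad", "paid_partnership_tag": False},
    {"id": 2, "caption": "Use my code JANE10 for 10% off!", "paid_partnership_tag": False},
    {"id": 3, "caption": "This snack is great", "paid_partnership_tag": False},  # sponsored but silent?
]

def has_disclosure_marker(post):
    """Mirrors the criteria they describe: a Paid Partnership tag, a personalized
    discount code, or #ad / #sponsored in the description."""
    caption = post["caption"].lower()
    return (
        post["paid_partnership_tag"]
        or "#ad" in caption
        or "#sponsored" in caption
        or "code" in caption  # crude stand-in for "personalized discount codes"
    )

flagged = [p["id"] for p in posts if has_disclosure_marker(p)]
# Post 3 -- the genuinely undisclosed case -- never enters the analysis at all,
# because the sample itself is built out of disclosures.
print(flagged)  # -> [1, 2]
```

If the criteria really were those markers, then every post in their sample had some form of disclosure, which is exactly the circularity I'm pointing at.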
What I'm suggesting is that they should be clear about whether these content creators failed to meet the social media platforms' requirements for disclosure, or whether they failed to meet a definition that WaPo/Examination reporters made up 1-2 years later. If it's the latter, then the real issue they have is with the platforms, not the content creators.
As I said in Part 1, as far as I'm concerned, if you're being paid to create content then you should be shouting it from the rooftops. But that is not what the social media platforms require, so they need to be clear about exactly what they are claiming these dietitians did wrong.
Perhaps the most embarrassing thing about the piece is that they tie a claim about deaths supposedly caused by "ob*sity" to a study that used this "methodology" (emphasis mine):
“We simulated a nationally-representative virtual population of US adults, estimating annual all-cause mortality rates for each person given their demographic characteristics, BMI, and smoking history. We fitted the model to empirical data on all-cause mortality rates from 1999 to 2016 by subgroup and state. To estimate the impact of excess BMI on mortality we simulated counterfactual scenarios in which we changed the BMI distribution and compared the predicted mortality outcomes to the status quo.”
If you think that sounds like an incredibly questionable way to calculate mortality rates, you are absolutely right. Obviously, I'll be doing a deep dive into this study (and its authors, because, spoiler alert, there's a doozy in there) in the future.
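To give a sense of what that kind of exercise actually produces, here is a deliberately toy sketch in Python. Every number and formula below is invented for illustration; this is not the study's model, its data, or its fitted mortality rates. The point is just that in this design, "deaths from excess BMI" is the gap between two simulated worlds, not a count of observed deaths.

```python
import random

random.seed(0)

# A "virtual population": each simulated person is just a BMI value from a made-up distribution.
population = [random.gauss(28, 6) for _ in range(100_000)]

# Invented annual mortality model that penalizes higher BMI
# (the real study fit its rates to empirical data by subgroup and state).
def status_quo_rate(bmi):
    return 0.008 + max(0.0, (bmi - 25) * 0.0004)

# Counterfactual scenario: cap everyone's BMI at 25 and rerun the same model.
def counterfactual_rate(bmi):
    return status_quo_rate(min(bmi, 25))

def simulate_deaths(rate_fn):
    """Count simulated deaths in one model run."""
    return sum(random.random() < rate_fn(bmi) for bmi in population)

deaths_status_quo = simulate_deaths(status_quo_rate)
deaths_counterfactual = simulate_deaths(counterfactual_rate)

# The headline "excess deaths" number is the difference between two model runs.
print(deaths_status_quo - deaths_counterfactual)
```

Every assumption baked into a model like this (the shape of the BMI penalty, the choice of counterfactual distribution) flows directly into the headline death count.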
For now, I’ll just say that I think the WaPo/Examination article was unbalanced and misleading, relying on some pretty questionable quotes and research. I also want to say that a discussion of the benefits of an anti-diet approach, including ending food moralization, is an important discussion to have and, critically, is not remotely the same as “roughly” 27 dietitians who didn’t disclose funding sources as vigorously as some reporters from WaPo/The Examination wanted them to.
Did you find this post helpful? You can subscribe for free to get future posts delivered direct to your inbox, or choose a paid subscription to support the newsletter (and the work that goes into it!) and get special benefits! Click the Subscribe button below for details:
Liked the piece? Share the piece!
More research and resources:
https://haeshealthsheets.com/resources/
*Note on language: I use “fat” as a neutral descriptor as used by the fat activist community, I use “ob*se” and “overw*ight” to acknowledge that these are terms that were created to medicalize and pathologize fat bodies, with roots in racism and specifically anti-Blackness. Please read Sabrina Strings’ Fearing the Black Body – the Racial Origins of Fat Phobia and Da’Shaun Harrison’s Belly of the Beast: The Politics of Anti-Fatness as Anti-Blackness for more on this.