The JAMA Effect plus the news media echo chamber: More misleading publicity about the problematic claim that lesbian and bisexual women die earlier than heterosexual women
Last month we discussed a JAMA article that included the misleading graph above and made strong claims that were not supported by data.

Since then, these bold claims have received even more publicity. Here is a news article that uncritically reports these claims, including this quote from one of the study authors:

One of the advantages of this study is that we were able to separate bisexual and lesbian participants. We had enough participants and followed them long enough to actually look at those risks. No other American study has been able to do that.

This is a subtle problem, and it would be hard to expect a news reporter to push back against such a claim. But the comparison of the 49 lesbian deaths and 32 bisexual deaths in that study is so noisy that any differences between the two groups could be explained by chance alone. Even the authors' own reported comparisons did not reach conventional levels of statistical significance, and that's before even considering the other statistical issues discussed in the post linked above; see that post for details.
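To see how noisy a comparison based on 49 and 32 deaths is, here's a back-of-the-envelope sketch. It uses the standard approximation that the standard error of a log rate (or hazard) ratio is roughly sqrt(1/d1 + 1/d2), where d1 and d2 are the event counts; the death counts come from the study as quoted above, but the calculation is a generic approximation, not the study's actual analysis.

```python
import math

# Event counts from the two groups being compared (from the post above)
d_lesbian, d_bisexual = 49, 32

# Approximate standard error of the log rate ratio: sqrt(1/d1 + 1/d2)
se_log_ratio = math.sqrt(1 / d_lesbian + 1 / d_bisexual)

# A 95% interval for the ratio spans a multiplicative factor of
# exp(1.96 * se) in each direction around the point estimate.
factor = math.exp(1.96 * se_log_ratio)

print(f"SE of log ratio: {se_log_ratio:.2f}")           # about 0.23
print(f"95% interval factor: {factor:.2f} each way")    # about 1.56
```

With an interval that wide, an estimated ratio anywhere between roughly two-thirds and one-and-a-half times the point estimate is consistent with the data, which is why a difference between these two groups is so hard to pin down.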

Those survival plots are so misleading: they make the evidence look much stronger than it is. It would have been the job of the project's statistician to challenge these kinds of noise-mining claims. I'm not saying the differences between the groups are zero, just that the data are sparse enough that, even ignoring all other problems, they're consistent with underlying differences that could go either way.

It’s frustrating that the news media coverage of this has been so unskeptical. See here, here, and here for more examples. JAMA’s prestige must be part of the problem, and I think another problem is that seductively attractive graph (to those who don’t look at it too carefully). From one of those reports:

“These findings may underestimate the true inequality in the general U.S. population,” the authors wrote, adding that the study population “is a sample of racially homogeneous female nurses with high health literacy and socioeconomic status, making them more likely to live longer, healthier lives than the general public.”

“We think this means that our estimate is unfortunately conservative,” McKetta noted.

This kind of thing is frustrating and we see it all the time: researchers see a pattern in noisy data and then they expect the real effect to be even bigger. Anything is possible, but experience and Bayesian logic go the other way.
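The "Bayesian logic goes the other way" point can be sketched in a couple of lines: in a simple normal-normal model, the posterior mean shrinks a noisy estimate toward a skeptical prior rather than amplifying it. All the numbers below are hypothetical illustrations, not quantities from the study.

```python
# Hypothetical skeptical prior on the effect (say, on the log scale):
prior_mean, prior_sd = 0.0, 0.1
# A hypothetical noisy, "statistically suggestive" estimate:
estimate, se = 0.3, 0.2

# Conjugate normal-normal posterior mean is a precision-weighted average
# of the data estimate and the prior mean.
w = (1 / se**2) / (1 / se**2 + 1 / prior_sd**2)
posterior_mean = w * estimate + (1 - w) * prior_mean

print(f"posterior mean: {posterior_mean:.2f}")  # 0.06, shrunk well below 0.3
```

The noisier the estimate relative to what's plausible a priori, the more it gets shrunk, which is the opposite of expecting the true effect to be even bigger than the noisy estimate.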

I hope that continued reporting on the replication crisis in science will make this topic clearer to people. However, I fear that most researchers see the replication crisis as something that happens to others, not to themselves.

This is another reason why I prefer the neutral term “forking paths” over the accusatory-sounding “p-hacking,” which sounds like a bad thing people do. Honesty and transparency are not enough; making statistical errors does not make you a bad person, and honest and qualified researchers can make statistical errors.

There’s also something called an echo chamber, where if you’re asked to talk about a topic repeatedly and never get any resistance, it’s natural to make increasingly strong claims that go far beyond the data that were used to justify those claims in the first place. We’ve seen this with the sleeper guy, the Stanford medical school professor, the glamorous business school professor, and many others. When there’s no resistance, it’s all too easy to go from experimental data to strong claims to pure speculation.

Again, I am not trying to belittle the study of health disparities. It is precisely because the topic is important that I find it annoying when researchers make avoidable mistakes in studying it.