My wife and I went through this a couple of years ago, with a 10-week NIPT calling a rare trisomy (chr 9), which is always fatal within a few weeks of birth.

It was absolute hell. The key problem here is the waiting and uncertainty. You have the NIPT at 10 weeks, but you can’t have the amniocentesis until several weeks later. When that came back fine, there were questions about whether it was a “mosaic,” meaning only a small proportion of cells are affected. We were only really in the clear after the 20-week ultrasound.

That’s a lot of weeks to be consumed by wondering whether to terminate the pregnancy or wait it out for more information. I have a master’s in bioinformatics (in genomics!) and my knowledge of stats and the science was next to useless in the face of these decisions.

I know of couples who simply couldn’t deal with this uncertainty and chose to terminate on the basis of this test alone.

Fortunately for us our child was fine and is a perfectly healthy 18-month-old now, but I wouldn’t do the rare trisomy test again.
IMHO, some of those criticizing the article for failing to understand statistics are missing the point.

The point is that people who get a "positive" result on these tests are often put through terrifying levels of anxiety when there is no actual problem; this anxiety is often exacerbated because they aren't informed of the false positive rate. This clearly has a harmful emotional effect on people, and explaining the false positives in Bayesian terms, or reframing it in terms of sensitivity and specificity, doesn't undo that damage.

That potential harm needs to be explained to patients, and it needs to be weighed carefully against the potential benefits of the test (as is done for PSA tests for prostate cancer, which also have a high false positive rate). Given that potential for harm, it's not unreasonable to ask that these tests be more tightly regulated.

To quote the OP:

> In interviews, 14 patients who got false positives said the experience was agonizing. They recalled frantically researching conditions they’d never heard of, followed by sleepless nights and days hiding their bulging bellies from friends. Eight said they never received any information about the possibility of a false positive, and five recalled that their doctor treated the test results as definitive.
My 2nd daughter was flagged during our 20-week for something having to do with the way her skull was forming, and they wanted to do a series of genetic tests. They charged us through the wazoo and everything came back negative. She arrived 3.5 weeks early and contracted bacterial meningitis shortly after birth. We found her code blue in the crib. She ended up having a bilateral craniotomy to relieve the empyema that had formed. CP, CVI, global TBI: every day is hell on earth. This was 2019, so the nightmare of the last few years started early for our family. We've had a number of medical professionals drop hints that there might be something wrong from a rare-disorder perspective, but we're in a league of our own, and that is hindsight; the damage and trauma are non-stop. Anyone trying to squeeze a few dollars from the medical system to provide "pre-natal diagnosis" without sound science can come burn in the same hell I live in every day.
How did this article, written by someone who clearly lacks an understanding of basic statistics, make it into the Upshot? They try to make it seem like the test is wrong 85% of the time, but that's not necessarily the case. All we know from the article is that 85 / 100 positive results are false positives, which means the test could actually be quite accurate. If the test correctly identifies 100% of real cases, then that sounds like an excellent test. Just as an example, if 1/4000 people have the disease, and the test identifies 100% of these cases, then around 0.14% of test takers will get a false positive.
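That arithmetic checks out. A minimal sketch, assuming the article's 85-of-100 figure and the commenter's hypothetical 100% sensitivity:

```python
# Sanity-check the 0.14% figure: prevalence 1/4000, sensitivity 100%
# (the commenter's assumption), and 85 of every 100 positives false
# (i.e. PPV = 15%, per the article).
prevalence = 1 / 4000
ppv = 0.15

true_positive_rate = prevalence  # sensitivity = 1.0, so every case tests positive
# PPV = TP / (TP + FP)  =>  FP = TP * (1 - PPV) / PPV
false_positive_rate = true_positive_rate * (1 - ppv) / ppv

print(f"False positives per test taker: {false_positive_rate:.2%}")  # ~0.14%
```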
This article is a confused mess. It's something of a Gish gallop, conflating all the different issues they could come up with while leaving out all the necessary vocabulary (Ctrl-F "Bayes", "posterior", "decision theory": phrase not found), making it almost impossible to consider each issue in adequate detail.

It mixes up poor communication (reporting false-positive/negative rates as if they were posterior probabilities, with exaggerated confidence thereof), arbitrary-seeming decision thresholds (their hyperventilating over "85% wrong" notwithstanding, many are probably too conservative, if anything; given how devastating many of these problems are, there should be *more* false positives to trigger additional testing, not fewer), costs of testing (sure, why not, but little is presented), tests which they claim are just bad and uninformative (developed from far too little n, certainly possible), and implicit calls for the FDA to Do Something and ban the tests (not an iota of cost-benefit considered, nor any self-reflection about whether we want the FDA involved in anything at all these days)... sometimes in the same paragraph.

Plenty of valid stuff could be written about each issue, but it would take at least 4 different articles of equivalent length to shed more light than heat.
> “The chance of breast cancer is so low, so why are you doing it? I think it’s purely a marketing thing.”

This mindset is ingrained in every doctor I speak to, but I think it's just so wrong.

Take DiGeorge syndrome. You have a 1/4000 chance of having it, and 81% of the test's positive results are false positives. The above doctor calls this "marketing"? Foolishness. That's an incredibly useful test. The downside is small, and the upside is asymmetrically large.

We need far, far better screening for all sorts of things: adult cancer and heart screens once a year, prenatal screening, and so on. We do a good job with breast and prostate screens, but for rarer conditions our current approach of waiting for the disease to become symptomatic makes no sense. Part of that will be driving the cost down. There is so much market need for a legitimate version of Theranos, and I'm glad there are some companies working on these things.
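To put rough numbers on that trade-off (a sketch assuming "81%" means 81% of positive results are false, and near-perfect sensitivity; both are my reading, not figures from any test vendor):

```python
# Illustrative numbers for a 1-in-4000 condition where 81% of positive
# results are false positives (so PPV = 19%); assumes ~100% sensitivity.
prevalence = 1 / 4000
ppv = 0.19

cohort = 100_000
true_pos = cohort * prevalence             # 25 cases caught
false_pos = true_pos * (1 - ppv) / ppv     # ~107 false alarms

print(f"Per {cohort:,} screened: {true_pos:.0f} cases found and "
      f"{false_pos:.0f} false alarms (~{false_pos / true_pos:.1f} per case)")
```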
This article seems a bit deceptive. We are going through NIPT soon, and our doctor went over false positive and false negative rates for the common screens. Our doctor has pointed out that some of the screens (especially for rare conditions) are not that accurate. The only procedure with high accuracy, amniocentesis, has a slight risk of miscarriage (our provider quoted 0.3%), so it's still statistically better to take the NIPT and only consider amniocentesis after a positive result, since there is no risk from NIPT.

You are supposed to treat a positive on NIPT as "there's a chance your baby has this; we need a more accurate procedure to confirm."

It sounds like their OB-GYN wasn't able to explain the results to them, or they didn't understand the probabilities. To be fair, our provider didn't even suggest tests for the disorders in the article, probably because of the false positive rates and rarity. Sounds like these extra screens shouldn't be offered.
Interesting.

We have been undergoing IVF with my wife since 2019. (Covid made a huge mess of those plans...) One of our embryos tested as a possible positive (but only slightly) for aneuploidy of one chromosome.

The doctor, a veteran of IVF, looked at the results and said "my experience is that this is either a very small mosaic error, which tends to be utterly invisible in real life, or a computer artifact. I have never seen embryos with those borderline results develop any serious problems later. Things would be different if the aneuploidy signals were clear, but definitely do not discard this embryo".
I am not a parent, but the criticism of the article appears to be around a misunderstanding of statistics, or at least of how to apply them. While I agree that criticism is completely correct, it overlooks the human nature of the people receiving the tests. At an already-stressful point in someone's life, it seems almost like bad bedside manner for the medical community, even if in an automated fashion, to tell people that there might be a complication looming.

This *does*, however, seem like a framing issue more than a utility issue. If the tests are 100% accurate at detecting true positives, they're a great aid. But rather than framing the tests as a be-all, end-all source of information, why not frame them as "a test that suggests whether or not you should get other tests"? That simple wording change would save a great deal of added stress for someone starting or growing a family.
I recall a period in the early 2000s when unindicated whole-body CT scans were being advertised on television.

That got knocked down pretty quickly, but wow, a lot of folks picked up a big chunk of their lifetime radiation allowance because of that.

These tests seem to operate under a similar model: disregard the risks of unnecessary testing in return for information of limited utility that may cause material harm.
Behold, the curse of Reverend Bayes:

https://en.wikipedia.org/wiki/Bayes%27_theorem#Drug_testing
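For reference, a minimal sketch of the arithmetic behind that link (Bayes' rule for a binary test; the numbers below are placeholders, not from any specific screen):

```python
def posterior(prior, sensitivity, specificity):
    """P(condition | positive test), by Bayes' theorem."""
    p_pos_given_cond = sensitivity
    p_pos_given_healthy = 1 - specificity
    p_pos = prior * p_pos_given_cond + (1 - prior) * p_pos_given_healthy
    return prior * p_pos_given_cond / p_pos

# Example: a rare condition (1 in 10,000) and a very good test.
print(f"{posterior(prior=1/10_000, sensitivity=0.99, specificity=0.99):.1%}")
# ~1.0% -- even after a positive result, the condition is still unlikely.
```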
A tweet about this very article caught my eye yesterday, and I'm glad HN's taken notice too.

https://twitter.com/JohnFPfaff/status/1477382805583716353?t=UAFtsfEu43n_J2-fwA2JsA&s=09

> For a disease w a 1-in-20,000 risk, a test w a false positive rate of 1% and a false negative rate of 0%—an insanely accurate test—would identify 1 correct case and 200 false positives every time. Or would be wrong 99.5% of the time.
>
> This isn’t “bad tests.” This is… baserates.
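The tweet's numbers are easy to verify by counting (a sketch using its stated rates):

```python
# Reproduce the tweet's arithmetic by counting over 20,000 pregnancies.
population = 20_000
cases = 1                      # 1-in-20,000 risk
healthy = population - cases

true_pos = cases * 1.0         # false-negative rate 0%
false_pos = healthy * 0.01     # false-positive rate 1% -> ~200

ppv = true_pos / (true_pos + false_pos)
print(f"{false_pos:.0f} false positives per true case; "
      f"a positive is wrong {1 - ppv:.1%} of the time")   # ~99.5%
```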
Medical professionals are often shockingly bad at statistics. My wife and I were talking about birth control with an RN* after our first child was born. The RN mentioned copper IUDs were 95% effective. We asked what timeframe that was measured over, and she couldn’t answer. Not only did she not know, but she couldn’t even understand why we were asking the question.

*) My wife insists that it was a doctor, not an RN, but my brain won’t let me process that possibility.
Before my daughter was born, I sometimes felt like it was the doctor's job to scare us with every worst-case scenario possible. It was quite stressful and upsetting.
I've heard that in the early days of HIV, the tests were (e.g.) 95% accurate, and when patients saw their positive results and the supposed 5% chance it was wrong, they'd sometimes kill themselves.

The tests were revised so that the first test would say "inconclusive" rather than "positive" and ask the patient to repeat it. This saved some lives.

Maybe this is a UX failure? Shouldn't the test designers present the results like this, even to doctors?
This seems to miss the point entirely. Even for their worst example, the probability of the fetus having the condition goes from 0.005% to 7%. That's valuable information, even if it's not perfect or is somewhat hard to understand.
There are a lot of sibling comments going on about whether the value they're looking at is the right one. What the Times is showing as their headline number is positive predictive value, PPV = TP / (TP + FP), which depends on the prevalence in the population. The "methods section" here is a little vague, but given the low prevalence, I'm willing to accept at face value that it's basically accurate (i.e., that the families getting these tests are not orders of magnitude more likely than average to be positive for these diseases). If the test result truly said one patient's 'daughter had a "greater than 99/100" probability of being born with Patau syndrome', then that's concerning, but given the fairly narrow quotes around the number, I'd suspect that what is *actually* on the test result is not inconsistent with the fairly low PPV on these screens.
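To see how strongly PPV depends on prevalence, here is a sketch with hypothetical sensitivity and specificity figures (not the actual assay's):

```python
def ppv(prevalence, sensitivity=0.99, specificity=0.999):
    """Positive predictive value: TP / (TP + FP)."""
    tp = prevalence * sensitivity
    fp = (1 - prevalence) * (1 - specificity)
    return tp / (tp + fp)

# Even with 99% sensitivity and 99.9% specificity, PPV collapses
# once the condition is rare enough:
for prev in (1/100, 1/1_000, 1/10_000, 1/100_000):
    print(f"prevalence 1 in {int(1/prev):>7,}: PPV = {ppv(prev):.1%}")
# 90.9%, 49.8%, 9.0%, 1.0%
```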
We were told our son had a high chance of being born with Down syndrome. It was quite stressful to hear this, as we weren't going to do anything about it regardless (he was born with no issues whatsoever and is now a thriving young adult).
I was in this exact situation. I received a phone call from my midwives saying that my son had tested positive for one of these disorders, and that these tests aren’t usually wrong. Fortunately I had done my research and knew that the false positive rate is high. But the entire system is set up to provide a terrible experience.

Your results are sent directly to your provider, so you can’t read the fine print yourself. And if you do get access to the results, the wording implies that a null result (not enough DNA collected) actually means you’re likely to have some disorder. In fact, the wording here actually got worse in the three years between my two (healthy) births.

Ideally these companies should require genetic counseling before you take the test. Parents should understand that these tests are for screening purposes only, and that a definitive diagnosis can’t be had until 16-20 weeks. Unfortunately these companies have found a niche (parents wanting to know the sex and health of their children as soon as possible) and have no real reason to improve their practices.
Isn’t that often true with screens in general? The threshold often allows a good number of false positives in order to minimize false negatives. The goal is to know when to seek further diagnostics. Communicating that to patients can be a challenge but it doesn’t mean the screens were designed incorrectly.
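A toy illustration of that threshold trade-off, with two made-up marker distributions (all numbers hypothetical):

```python
from statistics import NormalDist

# Hypothetical continuous marker: unaffected ~ N(0, 1), affected ~ N(3, 1).
unaffected = NormalDist(0, 1)
affected = NormalDist(3, 1)

for threshold in (1.0, 2.0):
    sensitivity = 1 - affected.cdf(threshold)        # affected flagged positive
    false_pos_rate = 1 - unaffected.cdf(threshold)   # unaffected flagged positive
    print(f"threshold {threshold}: sensitivity {sensitivity:.1%}, "
          f"false-positive rate {false_pos_rate:.1%}")
# Lowering the threshold catches more true cases but flags more healthy people:
# exactly the design choice a screen makes on purpose.
```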
Edit: they kind of do this farther down in the article.

Considering this as a UX challenge: imagine a grid of 10,000 dots (100x100).

Draw one box around the base rate, i.e. the rate at which you expect to find the problem in the population. If the base rate is 1%, then the box is 10x10 = 100 dots.

Then color in the dots for the test-positive rate (not just false positives, all positive tests). False positives would be the colored dots outside the box.

Next to that, put strikes through the dots corresponding to your expected false negative rate. (See the code sketch below.)
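A rough rendering of that grid in code, scaled down to 50x20 = 1,000 dots so it fits in a terminal; the rates are placeholders:

```python
# ASCII version of the dot-grid idea above, scaled to 50x20 = 1,000 dots.
# Legend: 'O' true positive, 'x' false negative (inside the base-rate box),
#         '#' false positive, '.' true negative (outside it).
COLS, ROWS = 50, 20
total = COLS * ROWS
prevalence, sensitivity, specificity = 0.05, 0.9, 0.95

sick = round(total * prevalence)                    # the "base rate box"
true_pos = round(sick * sensitivity)
false_pos = round((total - sick) * (1 - specificity))

cells = (["O"] * true_pos + ["x"] * (sick - true_pos) +
         ["#"] * false_pos + ["."] * (total - sick - false_pos))

for r in range(ROWS):
    print("".join(cells[r * COLS:(r + 1) * COLS]))
```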
This is an example of a problem that is very hard to explain. The vast majority of folks getting these tests will get a true negative, so for most people this is not an issue. I get that it takes effort to make people care.

That said, I do feel that pulling abortion into the debate is specifically meant to trigger a set of readers. But to what aim? They have not established that the tests could be better, just that when they say yes, they are still not perfect.
The state-mandated tests in California are far worse. At least with NIPT tests, if you get a negative, it's almost certainly a negative. The state tests have all kinds of unnecessary false positives, and if you don't have the NIPT to negate them, you are in for a lot of worry.
Watch this for a great explanation of the statistics of testing for rare diseases:

https://www.youtube.com/watch?v=R13BD8qKeTg