Another odd thing: one of the co-authors of the study, Andrew Bogan, seems to manage a hedge fund, and then published a WSJ op-ed on the study. He didn't disclose in the op-ed that he was involved in the study (as a co-author, no less!). There's a lot of potential for conflict of interest here. These authors seem to have an agenda and an end result in mind, no matter the data.<p>Edit: here's the WSJ op-ed (paywalled): <a href="https://www.wsj.com/articles/new-data-suggest-the-coronavirus-isnt-as-deadly-as-we-thought-11587155298" rel="nofollow">https://www.wsj.com/articles/new-data-suggest-the-coronaviru...</a><p>The more I think about this, the more outrageous I find it. It's a form of astroturfing to advance the beliefs of the study's authors. (Note that the senior author Bhattacharya, and of course Ioannidis, were advancing this theory before data collection began. That means their analysis deserves even more scrutiny.)
In regards to the issue of selection bias, this is what the ad looked like:<p><a href="https://twitter.com/foxjust/status/1251270848075440133" rel="nofollow">https://twitter.com/foxjust/status/1251270848075440133</a><p>Quote from ad: "We are looking for participants to get tested for antibodies to COVID-19."<p>A quote from someone who participated in the study:<p>"I participated in the study because I had been sick the week before and was very curious. In the intake questionnaire they asked if I had recent symptoms. I'm unpleasantly surprised that they seem not to have made an effort to use that data to unbias the study."<p><a href="https://twitter.com/mattmcnaughton/status/1251322235484168192" rel="nofollow">https://twitter.com/mattmcnaughton/status/125132223548416819...</a><p>Another:<p>"I was part of this study and that is totally why I signed up! People I talked to who tried to sign up had similar reasons. Lots of subjects at the testing site wearing masks, more than you see at the grocery, more evidence that a lot of us were more conscious about transmission"<p><a href="https://twitter.com/McSalter/status/1251511091294691328" rel="nofollow">https://twitter.com/McSalter/status/1251511091294691328</a><p>It's very hard to deny that selection bias may have played a part here.
The original study was on the HN front page twice; biggest thread:<p><a href="https://news.ycombinator.com/item?id=22899272" rel="nofollow">https://news.ycombinator.com/item?id=22899272</a><p>This analysis is pretty damning, and is more credible in context (the topline findings of the original study constituted extraordinary claims which, if extrapolated, would imply that a majority of all New Yorkers had C19 antibodies).<p>Some of the underlying ideas here are pretty straightforward. For instance: even with 90+% specificity, if your false positives exceed the true positives in the population (as can happen even with good tests when the underlying condition is rare, as it is with C19), you're going to have problems.
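To make the base-rate point concrete, here's a minimal sketch with hypothetical numbers (not the study's actual test parameters): at low prevalence, even a fairly specific test returns mostly false positives.

```python
# Illustration of the base-rate problem: at low prevalence, false positives
# from an imperfect test can swamp the true positives.
# All numbers below are hypothetical, chosen only to show the effect.
def positive_predictive_value(prevalence, sensitivity, specificity):
    """Fraction of positive test results that are true positives."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# At 1% true prevalence, even a 95%-specific test yields mostly false positives:
ppv = positive_predictive_value(prevalence=0.01, sensitivity=0.80, specificity=0.95)
print(round(ppv, 3))  # ~0.139: only about 14% of positives are real
```

So roughly 86% of the positive results in this hypothetical scenario would be false, which is why the specificity estimate dominates everything in a low-prevalence serosurvey.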
As flawed as this is, it's in line with other studies around the world. You can nitpick and critique each one for something, but we now have a whole body of evidence, using different techniques and methods, all indicating that the number of cases is vastly undercounted and the IFR is under 1%.<p>Scotland:
<a href="https://www.medrxiv.org/content/10.1101/2020.04.13.20060467v1.full.pdf" rel="nofollow">https://www.medrxiv.org/content/10.1101/2020.04.13.20060467v...</a><p>NYC Pregnant Women:
<a href="https://www.nejm.org/doi/full/10.1056/NEJMc2009316?query=C19&cid=DM90482_NEJM_COVID-19_Newsletter&bid=186123144" rel="nofollow">https://www.nejm.org/doi/full/10.1056/NEJMc2009316?query=C19...</a><p>Finland:
<a href="https://thl.fi/en/web/thlfi-en/-/number-of-people-with-coronavirus-infections-may-be-dozens-of-times-higher-than-the-number-of-confirmed-cases" rel="nofollow">https://thl.fi/en/web/thlfi-en/-/number-of-people-with-coron...</a><p>Germany:
<a href="https://www.land.nrw/sites/default/files/asset/document/zwischenergebnis_covid19_case_study_gangelt_0.pdf" rel="nofollow">https://www.land.nrw/sites/default/files/asset/document/zwis...</a><p>Chelsea, Mass.:
<a href="https://www.bostonglobe.com/2020/04/17/business/nearly-third-200-blood-samples-taken-chelsea-show-exposure-coronavirus/" rel="nofollow">https://www.bostonglobe.com/2020/04/17/business/nearly-third...</a>
I think the criticism in this thread and elsewhere is a bit too harsh. It’s by no means a perfect study, nor the last word, but hopefully it will motivate further studies.<p>I volunteered on this study and talked with hundreds of the participants, at least 200 and possibly as many as 400. Two reported previous COVID symptoms, unprompted.<p>The bigger problem was socioeconomic bias. Judging from the number of Teslas, Audis, and Lamborghinis, we also skewed affluent. Against the study instructions, several participants (driving the nicest cars, I might add) registered both adults and tested two children. In general, these zip codes had a lower rate of infection. It’s very hard to know which way this study is biased; a recruiting strategy based on grocery stores might be more effective, but it would be difficult to get zip code balance.<p>There has been additional validation since this preprint was posted, and there are now 118 known-negative samples that have been tested. Specificity remains at 100% for these samples. An updated version will be up soon on medrxiv.
I have wondered about the selection bias issue, and I was hopeful that this writeup would give a good look at this and other potential issues with the study.<p>But when I read it, I was a bit turned off by the author's attitude — which seems to be that he or some of his colleagues should have been consulted by the study authors because they are "statistics experts".<p>He refers to this apparent omission multiple times, and he also seems to think he's dunking on the authors when he references Theranos (and the fact that its advisors came from government/law/military). But this study is completely unrelated to Theranos (though they both involve blood and Stanford). Off-topic comments like these left me wondering if his analysis is a fair critique, or if he has an axe to grind.
Every study on coronavirus antibodies that's released gets panned here, yet every single one of them shows strong <i>enough</i> evidence that infection is more widespread and death rates lower than widely assumed.<p>Even if every one of them is flawed, all the information taken as an aggregate paints a picture. In my province (Alberta, Canada), the health authority has recently expanded testing, and as a result there are more confirmed cases and a lower death rate. Other health authorities in the country have strongly suggested infection rates are much higher than confirmed cases (which lowers the death rate, since every single death is being accounted for in our country).<p>So there are concerns with this study, and maybe another, but there's no evidence to counter the conclusion we're seeing again and again.
Also of note, Premier Biotech, the 'manufacturer' of the antibody tests in the Stanford study, has been accused of distributing non-FDA approved Chinese antibody tests. <a href="https://www.nbcnews.com/health/health-news/unapproved-chinese-coronavirus-antibody-tests-being-used-least-2-states-n1185131" rel="nofollow">https://www.nbcnews.com/health/health-news/unapproved-chines...</a>
So far the best data sets are from Iceland and the Italian town of Vo.<p>In Iceland, over 11% of the population has been tested, with 4% positive. Results indicate that over 50% of those infected are asymptomatic.<p>Vo tested everyone in the village, with similar results: over 50% of those who test positive are completely asymptomatic.
Well, all the antibody studies around the globe have several things in common:<p>* relatively small numbers of participants (< 5000, and therefore only dozens of participants with positive results)<p>* focusing on relatively small geographical areas<p>* working with antibody tests that have high uncertainty regarding the specificity<p>One argument that strongly contradicts the narrative that a huge number of people already are/were SARS-CoV-2-positive is the low rate of positive PCR tests in Germany. Germany performs hundreds of thousands of PCR tests per week but still mainly tests people with symptoms. If SARS-CoV-2 were that prevalent, you would expect a large proportion of those tested to be positive, but it was only 4% by late March [1]. Every expert I've heard admits that there is a significant number of undiagnosed cases. But 30x-60x seems quite unrealistic if only 4% of people with symptoms are positive.<p>[1] <a href="https://www.zeit.de/wissen/gesundheit/2020-03/coronatests-deutschland-coronavirus-covid-19-who-pandemie" rel="nofollow">https://www.zeit.de/wissen/gesundheit/2020-03/coronatests-de...</a>
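A rough sanity check of the undercount arithmetic above. All figures are illustrative assumptions (approximate German population, a hypothetical confirmed-case count for the period), not official statistics:

```python
# What population prevalence would a 30x-60x undercount imply?
# Figures are illustrative assumptions, not official statistics.
population = 83_000_000      # Germany, approximate
confirmed_cases = 140_000    # hypothetical confirmed total for the period

def implied_prevalence(undercount_factor):
    """Population prevalence implied by a given undercount factor."""
    return undercount_factor * confirmed_cases / population

for factor in (30, 60):
    print(f"{factor}x undercount -> implied prevalence {implied_prevalence(factor):.1%}")
```

Under these assumptions a 30x-60x undercount would mean roughly 5-10% of the entire population had been infected, which is hard to square with a symptomatic, tested subgroup coming back only ~4% positive.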
I get that this study may be flawed, but there are several upcoming studies showing similar findings with respect to a likely lower CFR than expected. The question is the magnitude.
Ioannidis warned us in advance that he'd publish bunk. "Why Most Published Research Findings Are False" [0] is his most famous work.<p>[0] <a href="https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124" rel="nofollow">https://journals.plos.org/plosmedicine/article?id=10.1371/jo...</a>
> But we didn’t then write up a damn preprint and set the publicity machine into action.<p>The latter part is crucial here. I don’t think you should have to apologize about mistakes in a preprint. Papers get improved by the review process. And most of us just upload them to get around journal paywalls, or to make it easier to share with our colleagues what we are working on.<p>But when a University hears that you’ve got a result on a hot topic, dollar signs light up in their eyes, and they go to work. Scientist beware.<p>I hope we can find a balance where scientists don’t rush something out just because it’s a hot topic, yet are also not paralyzed from working on something because of the dangers of the spotlight.
All these confidence interval discussions (both from the original study and from the critique) have no value besides entertaining professors who have more knowledge of math than common sense. Nothing good will ever come from buzzwords such as "Agresti-Coull 95% interval".<p>Bayesian techniques are not much better since no one will ever agree on the prior.<p>Just treat the study as some super rough point estimate. Adjust for biases such as selection bias if you can. Look at other studies too. Add your personal opinions (e.g., on whether conflicts of interest are relevant here). Complex statistical arguments won't buy you much more than that.
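For readers unfamiliar with the term the parent dismisses: the Agresti-Coull interval is just the naive proportion estimate with a small-sample adjustment. A minimal sketch, using made-up counts rather than the study's actual numbers:

```python
import math

def agresti_coull(successes, n, z=1.96):
    """Agresti-Coull approximate 95% confidence interval for a proportion:
    add z^2/2 pseudo-successes and z^2 pseudo-trials, then use the normal
    approximation around the adjusted estimate."""
    n_adj = n + z**2
    p_adj = (successes + z**2 / 2) / n_adj
    half_width = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - half_width), min(1.0, p_adj + half_width)

# Made-up counts: 50 positives out of 3300 samples.
lo, hi = agresti_coull(50, 3300)
print(f"{lo:.3%} to {hi:.3%}")
```

The adjustment mostly matters near 0% or 100%, which is exactly the regime a low-prevalence serosurvey sits in, so the interval isn't a pure buzzword even if the broader point about rough estimates stands.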
The comments below the article are also incredibly interesting.<p>I'll now wait on feedback from the authors to the concerns expressed here. But also, the focus will be on many more serology studies in the coming months. Looking forward to their results.
In Sweden, they did antibody tests on blood donors and found that 11% had antibodies. There was no ad or survey, so there couldn't have been any selection bias.
The OP is, for lack of a better word, so academic. He wants an apology? OK, the study has flaws X, Y, and Z. How about proposing and conducting a better study? ASAP? Throw darts at a map if you have to.<p>There are millions of people kicked out of their jobs. People are deferring medical procedures indefinitely. Kids are skipping school for months on end. We will, sooner or later, run out of basic necessities as well. The world doesn't run on money or theories. It runs on us, real people, shuffling our hands and turning sun and soil into food and heat and clothing. Right now we are grounded at home. This can't go on forever. We are racing against the clock. Do something about it!