Question from an outsider (to medicine and to medical research): Why is this study *new*? I can understand why it is *news* (for the mass media)... but isn't this something that would have been tested long ago, at least to the degree that, today, doctors are confident a venous draw is a statistically useful amount of blood to derive health results from? Wouldn't that have to be based on some study that found the minimum volume of blood needed to reliably represent someone's health?

Not only does it seem like a very fundamental question to have already been asked, it doesn't even seem like a very difficult study to *do*. It isn't longitudinal: for every subject, you take several pinpricks and run the tests. Nor is it logistically difficult to manage.

So I get why it's news, in terms of Theranos and whatnot, but this must have been studied many times over many decades. Or is the NYT overstating the significance? That is, did the Rice scientists find a previously undetected kind of difference, one that does, yes, technically show that blood drops differ?
In general this doesn't seem like a huge surprise. Blood is reasonably homogeneous, but noise becomes an issue when you're looking for anything present in low concentration (signal close to the noise floor), or when small differences in concentration matter (the deviation from the expected value is comparable to the magnitude of the noise).

If the noise is too high for a single drop, a venous draw is a much larger volume and is theoretically equivalent to sampling many drops of blood: it's the physical equivalent of averaging samples to increase the SNR (see the sketch below).

The authors note[1] that averaging may not be enough, though, and that there may be an interesting difference inherent to fingerprick blood (possibly caused by their collection method):

"Our data also suggest that collecting and analyzing more fingerprick blood does not necessarily bring the measured value closer to those of the donor’s venous blood (Figures 1D and 2D). For example, donor B’s hemoglobin and WBC concentration were similar for venous blood and fingerprick in drop 1 but became less concordant with additional drops, while donor C’s fingerprick measures came closer to the venous measures with additional drops. These data may represent true differences between fingerprick and venous blood, or they may be the result of errors in collection (such as leaving the tourniquet on for too long during a venous draw). Further research is needed to determine how common these patterns are."

1. http://ajcp.oxfordjournals.org/content/ajcpath/144/6/885.full.pdf
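To make the averaging point concrete, here's a minimal Python sketch (all numbers hypothetical, noise assumed independent and Gaussian) of why pooling drops helps: averaging n independent measurements shrinks the noise standard deviation by a factor of sqrt(n), which is the statistical picture behind a venous draw being "many drops at once".

    import numpy as np

    rng = np.random.default_rng(42)

    TRUE_HGB = 14.0   # hypothetical "true" hemoglobin concentration, g/dL
    NOISE_SD = 1.0    # assumed drop-to-drop measurement noise, g/dL

    def measure_drops(n_drops):
        # Simulate n_drops fingerprick readings with additive Gaussian noise.
        return TRUE_HGB + rng.normal(0.0, NOISE_SD, size=n_drops)

    for n in (1, 4, 16, 64):
        estimates = np.array([measure_drops(n).mean() for _ in range(10_000)])
        # The SD of the pooled estimate falls roughly as NOISE_SD / sqrt(n).
        print(f"{n:3d} drops: sd={estimates.std():.2f} "
              f"(theory {NOISE_SD / np.sqrt(n):.2f})")

Note this only models independent noise; the passage quoted above is exactly the case where that assumption breaks, i.e. successive drops drift rather than scatter around the venous value.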
As someone working in this field, on both the product and academic fronts: a major concern I have with this study is the lack of work done to establish the clinical significance of these variations. The methodology is well controlled enough to show that a statistically significant drop-to-drop difference exists between venous and capillary samples, but what's missing is an analysis of whether those differences would lead to clinically different outcomes. In my experience, the ranges used to identify anemia, leukocyte spikes, etc. are wide enough that the deviations seen in capillary samples are ultimately inconsequential (see the sketch after the references below). Furthermore, dozens of studies in the past [1, 2, 3 are just a few examples] have found essentially the opposite outcome. A discussion is necessary, but suggesting that all drop-based diagnostics will forever be inaccurate is both unfounded and dangerous given the growing importance of this field. If anyone has specific questions, feel free to drop me a line at ttandon[at]stanford[dot]edu

[1] http://www.hindawi.com/journals/isrn/2012/508649/
[2] http://www.ncbi.nlm.nih.gov/pubmed/23294266
[3] http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0043702
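As a rough illustration of the statistical-vs-clinical distinction above, here is a hypothetical Python check (cutoff and values made up for illustration; a WHO-style anemia cutoff for adult men is around 13 g/dL hemoglobin): a drop-to-drop deviation only matters clinically if it can push a result across a decision threshold.

    # Hypothetical illustration: does drop-to-drop noise change the clinical call?
    ANEMIA_CUTOFF_G_DL = 13.0  # illustrative WHO-style cutoff for adult men

    def clinical_call(hgb_g_dl):
        return "anemic" if hgb_g_dl < ANEMIA_CUTOFF_G_DL else "not anemic"

    def calls_agree(venous, drops):
        # True if every single-drop result yields the same call as the venous draw.
        return all(clinical_call(d) == clinical_call(venous) for d in drops)

    # Statistically different drops, same clinical outcome (far from the cutoff):
    print(calls_agree(venous=14.8, drops=[14.2, 15.1, 14.6, 15.4]))  # True
    # The same spread near the cutoff flips the call:
    print(calls_agree(venous=13.2, drops=[12.6, 13.5, 13.0, 13.8]))  # False

The point of the toy example is that the same measurement variance is harmless far from a decision threshold and consequential near one, which is why a clinical-significance analysis matters beyond a purely statistical one.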
Well, that is the end of Theranos. They should just return what money is left to their investors at this point. This explains why they could never get the tech right...
This is basically common sense for anyone with proper medical training, and it's why many clinical scientists and medical practitioners (read: peers) have been questioning Theranos from the beginning.
Something that's always perplexed me: if we're talking about small amounts of blood, why fingertips? It's such a sensitive area. Why not a prick on the elbow or the shoulder?
If it's scientifically proven that a single drop of blood can't be accurate, what is the alternative to going into the vein? Maybe you wipe down the wrist with alcohol, then put on a cufflink-like apparatus that simultaneously takes 20 drops of blood.
Finally! I've wondered how people actually know that the genetic code is the same in every cell. How would you prove that? What if everyone is a chimera to some extent? We are only just beginning to understand epigenetics.
For additional takes on this, there was some discussion of this result yesterday, in a thread linking directly to the paper: https://news.ycombinator.com/item?id=11159526