I implemented a similar model based on the excellent out-of-core linear learner Vowpal Wabbit. It did pretty well in the PrecisionFDA challenges despite being developed in about two person-months, and it has the benefit of using vastly less compute to train than something like DeepVariant. (https://github.com/ekg/hhga)

The approach is the right one for small genetic variants. But it will be hard to handle more complex kinds of variation without adapting how alignments are turned into training examples.

I think the field should cool it on calling the results of something like DeepVariant "genomes". These are genotypes, not fully sequenced and reconstructed genomes. The evaluations are typically done on easy regions, and we have no reason to believe those are the only ones that matter. One important tool for digging into this is syndip, a synthetic diploid in which the full haplotypes are known: it is a mixture of two haploid human genomes that were de novo sequenced with PacBio technology (https://www.biorxiv.org/content/early/2017/11/22/223297). For the curious, such haploid human genomes only arise in molar pregnancies, so even this isn't ideal, but it is maybe the best resource we have at present.
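To make the Vowpal Wabbit framing concrete, here's a minimal sketch of how candidate variant sites might be rendered into VW's text example format. The feature names and namespaces are hypothetical illustrations, not hhga's actual encoding:

```python
# Sketch of framing genotyping as a Vowpal Wabbit learning problem.
# Features here ("depth", "alt_frac", etc.) are made up for illustration.

def to_vw_example(label, site_features, read_features):
    """Render one candidate site as a line in VW's text format:
    `label |namespace feature:value ...`."""
    site = " ".join(f"{k}:{v}" for k, v in site_features.items())
    reads = " ".join(f"{k}:{v}" for k, v in read_features.items())
    return f"{label} |site {site} |reads {reads}"

# A hypothetical heterozygous SNP site; label 1 = variant present.
line = to_vw_example(
    1,
    {"depth": 32, "mapq_mean": 58.2, "gc": 0.41},
    {"alt_frac": 0.47, "strand_bias": 0.02, "mean_baseq": 33.1},
)
print(line)
# => 1 |site depth:32 mapq_mean:58.2 gc:0.41 |reads alt_frac:0.47 ...
```

Lines like this can be streamed straight into the vw CLI (e.g. `vw --loss_function logistic -d examples.vw -f model.vw`), so training never has to hold the dataset in memory, which is the "out of core" property referred to above.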
The figures in this paper use pretty deceptive scales. To be clear, DeepVariant is 0.5% better than a tool built around 2010 (GATK), on DeepVariant's best test.

GATK is still the standard, not because better variant callers don't exist, but because it's more important that everyone use the same tool, so that results are comparable across studies.
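For a sense of scale (using entirely made-up counts, not the paper's data), here is how a roughly half-point F1 gap arises and why the axis choice dominates the visual impression:

```python
# Back-of-the-envelope illustration with hypothetical confusion counts.

def f1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Two hypothetical callers evaluated against ~4M true variants.
caller_a = f1(tp=3_980_000, fp=25_000, fn=20_000)
caller_b = f1(tp=3_995_000, fp=6_000, fn=5_000)
print(f"caller A F1: {caller_a:.4f}")  # ~0.994
print(f"caller B F1: {caller_b:.4f}")  # ~0.999
# Plotted on a y-axis running from 0.99 to 1.00, B towers over A;
# on a 0-to-1 axis the two bars are visually indistinguishable.
```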