Exomes vs Genomes (re-visited)

The paper by Lupski et al in Genome Medicine adds fuel to the perpetual debate of Whole Exome Sequencing (WES) vs Whole Genome Sequencing (WGS). It takes me down memory lane to my own presentation, “Genomes or Exomes: evaluation of cost, time and coverage”, at the Beyond the Genome 2011 conference. (If you would like to check it out, my poster is available at the Faculty of 1000 resource, along with many others from the conference.) My work summarized WES vs WGS results on a single blood sample from an individual with cardiomyopathy. Although WGS gave better coverage of the UCSC exons evaluated, WES identified exclusive variants missed by WGS.

Sequencing coverage has always been key to elucidating variants from NGS data. Lupski et al worked on a Charcot-Marie-Tooth (CMT, also known as HMSN) case; in my generic evaluation of WES read-depth coverage of CMT-related genes, 93% of CCDS exons had good coverage (JNNP paper). I found about 89% of the known mutations in the 33 CMT genes, including SH3TC2, to be covered at 10x (10-fold) sequencing depth. As those results suggest (JNNP paper), WES misses a number of coding regions, including sites of important known mutations; one needs to be careful of this, especially when using WES in clinical medicine.

Back to WGS vs WES: let's start with the key points to consider for the comparison:

Key Point | WES/WGS? | Notes
Cost | WES | Typical WES requires 60-100 million 100bp reads for decent sequencing coverage, whereas WGS requires almost a billion 100bp reads for an average 30x coverage (see the sketch below this table)
Time | WES | For the same reason as above, WES can be generated and analyzed with a much faster turn-around time. For clinically focused WGS analysis, I developed a novel iterative method (PLoS One) that delivers variant results in 5 hours!
Average coverage – depth | WES | WES, being targeted, provides much deeper coverage of the captured coding regions
Average coverage – breadth | WGS | Coverage from WGS is much more uniform, covering more of the annotated exons independent of annotation source. WGS also has the advantage in regions where capture probes are difficult to design, providing sequencing coverage and thus the potential for variant calling
Structural Variants | WGS | Broad, uniform coverage from WGS, coupled with mature algorithms and tools, allows better structural variant, CNV and large INDEL detection from WGS data
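To make the cost row concrete, here is a quick back-of-the-envelope coverage calculation (a minimal sketch; the 3.2 Gb genome size, 50 Mb capture footprint and 70% on-target rate are round-number assumptions, not figures from the paper):

```python
# Lander-Waterman style estimate: mean coverage = (reads x read length) / target size

READ_LEN = 100          # bp, as in the table above
GENOME = 3.2e9          # bp, approximate human genome size (assumption)
EXOME_TARGET = 50e6     # bp, a typical capture footprint (assumption)

def mean_coverage(n_reads, target_bp, read_len=READ_LEN):
    """Expected mean depth if reads were spread uniformly over the target."""
    return n_reads * read_len / target_bp

# ~1 billion 100bp reads for a 30x genome, as stated in the table:
print(round(mean_coverage(1e9, GENOME), 1))               # ~31.2x WGS

# 60-100 million reads for WES; only a fraction lands on target,
# so assume ~70% of 80 million reads hit the capture region (assumption):
print(round(mean_coverage(80e6 * 0.7, EXOME_TARGET), 1))  # ~112x on-target WES
```

The same arithmetic explains the depth/breadth rows: the WES read budget is concentrated on roughly 1.5% of the genome, so it buys depth rather than breadth.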

Lupski et al performed a variety of sequencing experiments on different NGS instruments, including Illumina, ABI SOLiD and Ion Torrent. The best part: all of this data is publicly available on NCBI SRA. The scientific community can make much bigger strides with open data sharing. Such a deep dataset from multiple platforms and applications is extremely beneficial, providing a distinct advantage over simulated datasets for algorithm development, software evaluation and benchmarking.

  • SOLiD sequencer: 1 WES + 1 WGS
  • Illumina GAII: 2 WES
  • Illumina HiSeq: 2 WES + 1 WGS
  • Ion Torrent: 2 WES (PGM and Proton)

Summarizing the paper, all the WES libraries were captured using the NimbleGen VCRome 2.1 capture kit. Its 42Mb capture region includes Vega, CCDS and RefSeq gene models along with miRNA and regulatory regions. Interestingly, the Clark et al review of WES capture technologies (Nature Biotechnology) concluded that the densely packed, overlapping baits of NimbleGen SeqCap EZ generated the highest-efficiency target enrichment. On the other hand, the recent review of WES capture by Chilamakuri et al in BMC Genomics found that Illumina capture data showed higher coverage of annotated exons.

Lupski et al analyzed Illumina data using BWA (alignment) -> GATK (recalibration) -> Atlas2 (SNV/INDEL calling) -> Cassandra (annotation). Ion Torrent data was analyzed using TMAP (alignment) -> Picard/Torrent Suite (duplicate marking) -> VarIONt (SNV calling) -> Cassandra (annotation). The rationale for the choice of tools, and for tools that were not used (such as GATK's VQSR), is not detailed in the paper. One metric readers would have liked to see for the WGS datasets is ‘Targets hit’ and ‘Targeted bases with 10+ coverage’ in Table 1; it should be relatively straightforward to calculate and would give a good perspective on how WGS compares with WES on those metrics.

The most striking observation concerned SNVs that were called in all WES datasets yet absent from WGS! Here are some of the summary points:

  • 3709 coding SNVs were concordantly called in all WES datasets but missed by the original SOLiD (~30x coverage) WGS. This is huge: those 3709 SNVs were identified in all six WES results, and thus should be of good quality.
  • Variant concordance of the same sample using Illumina HiSeq & GAII – Figure 3
      • more than 96% and 98% of SNVs are concordant between HiSeq-HiSeq and GAII-GAII replicates, respectively.
      • only 83% and 82% of INDELs are concordant between HiSeq-HiSeq and GAII-GAII replicates, respectively. Once again, INDEL calling is noisier, though it is not clear whether the authors left-aligned INDELs to remove false discordance caused by start and stop coordinates that do not align perfectly (a minimal left-alignment sketch follows this list). I wonder how the recent Scalpel tool, which promises higher INDEL-calling sensitivity, might perform on these datasets.
      • even higher discordance when comparing HiSeq to GAII data (for the same sample and exome capture!!)
  • Properties of ‘private’ or exclusive SNV from WES results – Figure 4, Figure 5. As expected, a large majority of exclusive SNV are questionable due to basic quality metrics.
      • low variant fraction (% reads supporting alternate or non-reference allele)
      • low coverage depth
      • strand bias or multiply-mapped reads (leading to low variant quality)
  • Both WES and WGS found the 12 pharmacologically relevant variants
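On the left-alignment point above: the paper does not describe any indel normalization, so here is a minimal sketch of the standard left-alignment idea (the function and the toy sequence are mine; real pipelines would use something like GATK LeftAlignIndels or vt normalize):

```python
def left_align(pos, ref, alt, seq):
    """Shift an indel to its leftmost equivalent position.

    pos is the 0-based start of `ref` on chromosome sequence `seq`.
    Standard normalization loop: trim shared trailing bases, pulling in
    the base to the left whenever an allele empties, then trim shared
    leading bases. Assumes the indel is not at the chromosome start.
    """
    while ref[-1] == alt[-1]:
        ref, alt = ref[:-1], alt[:-1]
        if not ref or not alt:                  # an allele emptied: extend left
            pos -= 1
            ref, alt = seq[pos] + ref, seq[pos] + alt
    while len(ref) >= 2 and len(alt) >= 2 and ref[0] == alt[0]:
        ref, alt = ref[1:], alt[1:]             # trim shared leading bases
        pos += 1
    return pos, ref, alt

# The same 2bp deletion in a CA-repeat, reported at two different coordinates:
seq = "GGGCACACAC"
print(left_align(7, "CAC", "C", seq))   # (2, 'GCA', 'G')
print(left_align(5, "CAC", "C", seq))   # (2, 'GCA', 'G') -- identical canonical form
```

Without this step, the two calls above would be counted as discordant purely because their start coordinates differ.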

In all, this round goes to WES, mostly due to the higher coverage achieved compared to WGS. The greater depth allowed strand bias and the proportion of alternate-supporting (variant-calling) reads to be assessed, reducing the particular FP and FN variants discussed in the paper. It would be interesting to generate a much higher average-coverage WGS dataset and assess whether some regions or genes are better suited to evaluation by WES. To conclude, I quote from the paper: “the high yet skewed depth of coverage in targeted regions afforded by the (W)ES methods may offer higher likelihood of recovery of significant variants and resolution of their true genotypes when compared to the lower, but more uniform WGS coverage”.

Mitochondrial Gold Rush

Mitochondrial genomes can be extracted from Whole Exome Sequencing (WES) data, as outlined in the Nature Methods paper by Ernesto Picardi and Graziano Pesole. Tools like MitoSeek are now available that gather mitochondrial reads from NGS data and perform high-throughput sequence analysis. Access to mitochondrial genomes is important, as genomic variation in mitochondria has been implicated in a variety of neuromuscular and metabolic disorders, along with roles in aging and cancer.

Here, however, we consider how effectively the mitochondrial genome can be extracted from the off-target data of different WES capture kits. Picardi et al used the MitoSeek tool to successfully assemble 100%, 95% and 72% of the mtDNA genome from the TruSeq (Illumina), SureSelect (Agilent) and SeqCap EZ-Exome (NimbleGen) platforms, respectively. We set out to assess mitochondrial genome extraction using a different approach and tool-set: using the same sample's data from three different capture kits, with whole-genome sequencing (WGS) data as the gold standard, we evaluated alignment and variant-calling results.

Clark et al sequenced and analyzed a human blood sample (healthy, anonymous volunteer) at Stanford University using three commonly used WES kits:

  1. Agilent SureSelect Human All Exon kit
  2. Nimblegen SeqCap EZ Exome Library v2.0
  3. Illumina TruSeq Exome Enrichment

An Illumina HiSeq instrument was used for the WGS and all three WES captures. Clark et al highlight comparisons between the three capture kits, from library preparation to sequencing time. The paper discusses the effectiveness of each kit based on metrics such as bait design, capture of UTR regions, etc. They compare variant calls across all three WES kits and WGS, and discuss the ability of WES to detect additional small variants missed by WGS. Although the paper doesn't provide an in-depth instrument comparison, readers here assume that Illumina is the leader in sequencing technology (at least until tonight!)

We use this data set to compare and contrast the availability and quality of mitochondrial sequence in off-target data from WES. A standard WGS experiment at 35× mean genomic coverage was compared to exome sequencing experiments yielding average exome target coverage of 30× for Illumina, 60× for Agilent and 68× for Nimblegen.

We also utilized a single custom-capture sequenced sample from Teer et al to study the feasibility of gleaning mitochondrial data from a custom capture experiment.

  1. Clark et al have made this data set downloadable from NCBI in the SRA file format
  2. Using the SRA toolkit we converted SRA to FASTQ. As these are paired-end reads, we used fastq-dump with the --split-3 option, generating two FASTQ files (R1 and R2)
  3. Using the BWA-MEM algorithm we aligned the reads in these FASTQ files to allchr.fa. Additionally, for the TruSeq data we also used the BWA-SAMPE algorithm to compare the two BWA alignment approaches
  4. The BWA alignment produced SAM files for each of the three WES kits (Agilent, Nimblegen, Illumina) and WGS. Using Samtools we converted SAM to BAM for compact storage and processing
  5. We filtered for reads that mapped to chromosome M with PHRED-scaled mapping quality >= 20 (more than 99% probability of correct placement)
  6. For calling variants we ran a custom Perl script on the generated pileup, calling variants at thresholds of >=1%, >=5% and >=10% alternate-supporting reads (see the sketch after this list)
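Our actual implementation used samtools plus a custom Perl script; the following is a minimal Python/pysam re-sketch of steps 5 and 6 (the BAM file name is a placeholder; allchr.fa is the reference from step 3; the base-quality cutoff is an assumption):

```python
from collections import Counter
import pysam

THRESHOLDS = (0.01, 0.05, 0.10)    # fractions of alternate-supporting reads (step 6)

ref = pysam.FastaFile("allchr.fa")                    # reference from step 3
chrm = ref.fetch("chrM").upper()

bam = pysam.AlignmentFile("sample.sorted.bam", "rb")  # placeholder; must be indexed

# Step 5: count chrM reads passing the MAPQ >= 20 filter
q20 = sum(1 for r in bam.fetch("chrM") if r.mapping_quality >= 20)
print("Q20 chrM reads:", q20)

# Step 6: pileup-based calls at the 1%/5%/10% alternate-read thresholds
for col in bam.pileup("chrM", min_mapping_quality=20, min_base_quality=13):
    bases = Counter(
        p.alignment.query_sequence[p.query_position].upper()
        for p in col.pileups
        if not p.is_del and not p.is_refskip and p.query_position is not None
    )
    depth = sum(bases.values())
    if depth == 0:
        continue
    ref_base = chrm[col.reference_pos]
    alt, n_alt = max(
        ((b, n) for b, n in bases.items() if b != ref_base),
        key=lambda bn: bn[1],
        default=("N", 0),
    )
    frac = n_alt / depth
    passed = [t for t in THRESHOLDS if frac >= t]
    if passed:
        print(col.reference_pos + 1, ref_base, alt, depth,
              round(frac, 3), "passes >=%d%%" % int(max(passed) * 100))
```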

Read Metrics:

All 10x/5x metrics use reads mapped with PHRED-scaled mapping quality >= 20. The length of the mitochondrial genome covered at more than 5x (5-fold) and 10x coverage is summarized for the sequencing data from the different capture kits (Table 1).

All results are for BWA-MEM, except that the Illumina TruSeq capture data was also aligned using BWA-SAMPE. Our comparison shows that BWA-MEM aligned more reads and performed better overall.

A custom-capture sample was evaluated simply to gauge the potential of extracting the mitochondrial genome from that data type as well. It performed really well, generating more than 900 RPM (reads per million mapped reads) for the mitochondrial genome, implying much greater relative off-target throughput.

Capture/WGS | All reads (millions) | Mapped reads (millions) | % mapped reads | chrM reads | Q20 chrM | Q20 chrM RPM* | >10x chrM | >5x chrM
SRR309291 (Agilent) | 124.193 | 123.949 | 99.80 | 2836 | 2647 | 21.36 | 12615 | 15691
SRR309292 (Nimblegen) | 185.088 | 184.588 | 99.73 | 3770 | 3466 | 18.78 | 5563 | 11271
SRR309293 (Illumina) | 113.369 | 113.070 | 99.74 | 27326 | 24645 | 217.96 | 16569 | 16569
SRR309293.pe (Illumina SAMPE) | 112.886 | 105.777 | 93.70 | 25149 | 22894 | 216.44 | 16569 | 16569
SRR341919 (WGS) | 1,312.649 | 1,253.840 | 95.52 | 436042 | 417365 | 332.87 | 16569 | 16569
SRR062592.s.bam (Custom Capture) | 5.313 | 5.086 | 95.73 | 5346 | 4897 | 962.75 | 9997 | 14318

*: Q20-mapped chrM reads per million mapped reads for that sample
Table 1: Sequencing throughput and mitochondrial genome coverage from NGS data on whole-genome, exome and custom-captured samples
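The derived columns in Table 1 are straightforward to recompute; here is a sketch (the BAM path is a placeholder) of the RPM normalization and the >=5x / >=10x breadth counts:

```python
import pysam

def chrm_metrics(bam_path, mapq=20):
    """Recompute Table 1's derived columns for one indexed BAM (a sketch)."""
    bam = pysam.AlignmentFile(bam_path, "rb")

    mapped_total = bam.mapped      # mapped reads, from the BAM index statistics
    q20_chrm = sum(1 for r in bam.fetch("chrM") if r.mapping_quality >= mapq)

    # RPM*: Q20 chrM reads per million mapped reads,
    # e.g. Agilent: 2647 / 123.949 million mapped = 21.36
    rpm = q20_chrm / (mapped_total / 1e6)

    # Breadth: chrM bases covered at >=10x and >=5x, using MAPQ-filtered reads
    # (count_coverage also applies its default base-quality threshold of 15)
    a, c, g, t = bam.count_coverage(
        "chrM", read_callback=lambda r: r.mapping_quality >= mapq
    )
    depth = [sum(col) for col in zip(a, c, g, t)]
    return rpm, sum(d >= 10 for d in depth), sum(d >= 5 for d in depth)

print(chrm_metrics("sample.sorted.bam"))   # placeholder path
```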

 

Coverage of Mitochondrial Genome

Figure 1: Contrasting coverage of the mitochondrial genome from WGS and WES sequencing data (the TruSeq-PE data was aligned using BWA-SAMPE, while all others were aligned using BWA-MEM)
  • WGS data generated really good coverage of the mitochondrial genome, almost always > 700-fold
  • Coverage from the Illumina TruSeq data was consistent between the BWA-MEM and BWA-SAMPE alignments, though the latter gave slightly lower coverage owing to fewer mapped reads
  • Agilent off-target data generated a fair number of mitochondria-mapped reads, with ~95% of the mitochondrial genome covered at 5x. Higher overall throughput for this sample would have provided more off-target reads and hence higher mitochondrial genome coverage.
  • Nimblegen off-target data was the least abundant, and its coverage profile across the mitochondrial genome also differed from the other datasets. This may be due to NimbleGen's high-density, overlapping bait design, which gives focused on-target coverage and leaves fewer off-target reads.

Variant Calling on the Mitochondrial Genome

Venn diagram (generated using Venny) comparing the mitochondrial variants identified in the same sample from WGS and off-target data from the different capture kits (10% or more alternate-supporting reads implied a variant call); 33 variants were shared by all 4 datasets (WGS and the Illumina/Nimblegen/Agilent captures)

The calls were highly variable when only 1% alternate-supporting reads were required to annotate a mitochondrial genomic position as a variant, so we used a threshold of at least 10% of reads at a given nucleotide position supporting the alternate allele. The Venn diagram above highlights that the vast majority (33/41) of variants called on the mitochondrial genome from WGS and WES data overlap. Another 6 variants identified in WGS were also observed in the Agilent and Illumina WES data but missed by Nimblegen WES due to low coverage. We do not provide a comprehensive list of the exclusive variants, but most suffer from low read depth, low quality or strand bias.

Conclusions

With the decreasing cost and increasing availability of exome sequencing data, there is a vast resource of mitochondrial genomes that can be mined for mitochondria-focused research. Data from large consortia like the 1000 Genomes Project and the NHLBI exome datasets can be utilized for comparative evaluation of mitochondrial variation. As reported by Picardi et al, the Illumina TruSeq and Agilent exome kits generate better mitochondrial genome coverage than NimbleGen. Interestingly, even the custom-capture kit we evaluated generated a decent amount of mitochondrial genome coverage. This opens up a plethora of small NGS panel and custom-capture datasets for mitochondrial genome evaluation.

Journal Club: False-positive signals in exome sequencing

Detecting false-positive signals in exome sequencing

Human Mutation

I cannot believe this paper is already a year old. A printed copy sat on my desk but never got transmitted from the eyes to the brain!! Finally there was enough time to review the paper and collate the valuable information to share here.

Whole Exome Sequencing (WES) is fast becoming the most common NGS application. It allows querying almost all of the coding genome (the 3% of 3 billion nucleotides that we understand best) at relatively low cost and time investment. In any list of notable sequencing papers, the most common title is “Exome sequencing identifies the causal variant for XYZ“. However, we know about the small but omnipresent spurious results that are part of WES data. This article does a great job of elucidating the common false positives and sources of noise in WES data.

  • 118 WES samples from 29 families seen by the NIH Undiagnosed Diseases Program
  • 401 additional exomes from the ClinSeq study for cross-checking
  • Agilent 38Mb and 50Mb whole-exome capture kits; GA-IIx 76 and 100bp paired-end
  • Method: ELAND -> Cross_Match -> bam2mpg (genotype) -> CDPred (prediction) -> VarSifter -> Galaxy
  • Used hg18; no duplicate removal
  • False-positive candidate variants are usually
    • located in highly polymorphic genomic regions
    • caused by assembly misalignment
    • errors in the reference genome
  • 23,389 positions with excess heterozygosity (alignment error; see the sketch below this list)
  • 1009 positions where the reference genome contains the minor allele (excess homozygosity)
  • Errors arise from library construction bias, polymerase error, higher error rates towards the ends of short reads, loss of synchrony within a cluster (Illumina sequencing), and platform-specific mechanistic issues
  • Highly variable genes frequently contain numerous apparently pathogenic variants and are thus unlikely to be disease-causing (genes with >10 high-quality variants; should be normalized by gene length and where in the CDS the variants fall)
  • Pseudogenes: 392 high-quality variants were heterozygous in all 118 exomes
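On the excess-heterozygosity point above: the paper's counts come from its bam2mpg genotypes, but a simple Hardy-Weinberg check captures the idea. A minimal sketch (the function and the 118-sample example are illustrative, not the authors' code):

```python
def excess_het_chi2(n_hom_ref, n_het, n_hom_alt):
    """Chi-square statistic (1 df) against Hardy-Weinberg proportions.

    A site heterozygous in far more samples than HW predicts is a classic
    signature of paralogous reads (e.g. pseudogenes) piling onto one locus.
    """
    n = n_hom_ref + n_het + n_hom_alt
    p = (2 * n_hom_ref + n_het) / (2 * n)       # reference allele frequency
    q = 1.0 - p
    expected = (n * p * p, 2 * n * p * q, n * q * q)
    observed = (n_hom_ref, n_het, n_hom_alt)
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected) if e > 0)
    return chi2, n_het > 2 * n * p * q          # statistic, heterozygote excess?

# A position heterozygous in all 118 exomes, like the pseudogene cases above:
print(excess_het_chi2(0, 118, 0))   # (118.0, True) -> flag as a suspect site
```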

Similar reading:
PLOS ONE: Limitations of the Human Reference Genome for Personalized Genomics

Journal Club: Indels in 179 genomes (1000genome data)

The origin, evolution and functional impact of short insertion-deletion variants identified in 179 human genomes

Genome Research

Finally there is a comprehensive analysis of indels, and of course it is Next Generation Sequencing data that is driving it. I have my concerns about the biases of NGS technology and analysis, along with the ensuing false positives in indel detection. Nonetheless, the authors have done a good job of summarizing the information, touching upon the important points and making some valuable observations. It would be great to see this comprehensive analysis repeated on the public Complete Genomics genomes or the growing body of Ion Torrent data, to corroborate these findings as generic rather than platform-specific.

  • Dataset used = 179 (~4x coverage) genomes from 1000 genomes pilot data of 3 populations
  • 1.6 million indels – 50% of them in 4% of the genome (indel hotspots)
  • Polymerase slippage is the main cause of 75% of indels (almost all indels in hotspots and 50% indels in non-repeat regions are due to slippage)
  • indels subject to stronger purifying selection than SNVs (they call it SNPs)
  • recombination hotspots that are known to be enriched with SNVs are not enriched with indels
  • longer and frameshift indels have stronger effect on fitness
  • indels on average have a stronger functional effect than SNVs
  • Method
    • STAMPY: aligner with high sensitivity and low reference bias
    • DINDEL genotyper: Use alt-supporting reads to select high quality indels
    • build implied haplotypes (LD betw SNV/indel and impute) and error model for homopolymers
    • ignore indels in long (>10bp) homopolymers
    • validate with Sanger sequencing
  • the 1.6 million indels are 8-fold fewer than the SNVs from these genomes
  • selected novel indels (not seen in the 1000 Genomes report nor in dbSNP129)
  • chose 2 CEU as validation targets and sampled calls predicted to segregate in them
  • randomly selected a subset; able to design primers for 111; 60 sanger sequenced
  • 36 matches; 12 low-quality Sanger; 12 discordant => 0.25% FDR for this novel set (4.6% total FDR)
  • INDEL classes
    • Homopolymer Run (6nt+) – HR – 10-fold indel enrichment compared to the genomic average (even higher if longer homopolymers are included); see the sketch after this list
    • Tandem Repeat – TR – 20-fold indel enrichment
    • Predicted hotspot – PR – predicted indel rate > predicted SNV rate
    • Non-repetitive sites – NR
    • change in copy-number count – CCC – NR-CCC & NR non-CCC
  • HR + TR + PR = 4% of the genome (hotspots) containing 50% of indels – deletions dominate short tracts, insertions longer tracts, and then deletions again for much longer tracts
  • 100-fold increase in polymorphism rate going from 4-bp homopolymer to 8-bp
  • the 25% of indels not due to polymerase slippage are mostly NR non-CCC and mostly deletions (about 90%), perhaps formed via a double-stranded break intermediate and imperfect repair
  • the remaining 2.5%, insertions, most often involve a palindromic repeat
  • 43 genes with high individual predicted mutation rates in coding regions – 10 of those do not show SNV enrichment and thus have exclusive indel enrichment causing a high mutational load – includes HTT (Huntington disease), AR (prostate cancer), ARID1B (neurodevelopmental), and the MED and MAML genes
  • GWAS: common indels are well tagged by SNVs – possible to phase indels into SNV haplotype reference panels
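As a footnote to the indel classes above, a toy sketch of how a site might be assigned to the HR class (the 6nt threshold comes from the bullet above; the function names and example sequence are illustrative, not the paper's implementation):

```python
def homopolymer_run(seq, pos):
    """Length of the homopolymer run containing 0-based position pos."""
    left = pos
    while left > 0 and seq[left - 1] == seq[pos]:
        left -= 1
    right = pos
    while right + 1 < len(seq) and seq[right + 1] == seq[pos]:
        right += 1
    return right - left + 1

def is_hr_site(seq, pos, min_run=6):
    """HR class: the indel position sits in a homopolymer run of >= 6 nt."""
    return homopolymer_run(seq, pos) >= min_run

seq = "ACGTAAAAAAGCT"        # contains a 6 nt poly-A tract
print(is_hr_site(seq, 5))    # True  -> HR (hotspot class)
print(is_hr_site(seq, 1))    # False -> non-repetitive context (NR)
```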

AGBT 2013 Saturday sessions

Plenary Session: Genomic Technologies
Len Pennacchio, Lawrence Berkeley National Laboratory, Chair

— could not take notes on some of the talks and afternoon session

9:00 a.m. – 9:30 a.m.
Rebecca Leary, Johns Hopkins Kimmel Cancer Center
“Personalized Approaches to Non-invasive Cancer Detection”

– personalized analysis of rearranged ends (PARE) identifies structural alterations in solid tumors
– generate personalized biomarkers for the detection of circulating tumor DNA
– Tumor-derived mate-pair library -> somatic rearrangements -> confirmed by PCR in tumor & matched normal
– Application = monitor disease progression, identify residual disease (predict relapse), surgical margins
– Plasma Aneuploidy Score – clearly differentiates normals from colorectal cancer samples (just 10x physical coverage – detect rearrangements)
– 0.75% circulating tumor DNA – 90%+ sensitivity, 99%+ specificity using 1 HiSeq lane

9:30 a.m. – 9:55 a.m.
* Eric Antoniou, Cold Spring Harbor Laboratory
“Increased Read Length and Sequence Quality with Pacific Biosciences Magbead Loading System and a New DNA Polymerase”

– duckweed as a biofuel (40 tonnes/acre/year); 0.1 ton yields 0.025 tons of ethanol by weight (~7.5 gallons a day)
– rice genome (470 Mbp) sequenced using the Pacific Biosciences RS sequencer (MagBead loading system) – hybrid de novo assembly with Illumina data
– 10kbp insert library; 9X coverage of the rice genome (mean read length – 3kb, max 21kb)
– modal accuracy of single-pass long reads – 90% (85-87% for the current C2 chemistry)

9:55 a.m. – 10:20 a.m.
* Tim Harkins, Life Technologies
“Ovarian Cancer Evolution: a Tale of Two Paths”

– ovarian cancer is the 9th leading cancer among women and the 5th leading cause of cancer-related death, with a high relapse rate

10:45 a.m. – 11:10 a.m.
* X. Sunney Xie, Harvard University
“Detecting Single Nucleotide and Copy Number Variations of a Single Human Cell by Whole Genome Sequencing”

– Individual cells of identical descent can have different genomes (dynamic changes in DNA) – important to many biological investigations and medical diagnoses
– Single-cell whole-genome amplification methods – exponential amplification bias => low genome coverage
– Multiple Annealing and Looping Based Amplification Cycles (MALBAC) – 93% genome coverage ≥ 1x for a single human cell at 30x mean sequencing depth
– detection of digitized CNV & SNVs – ~76% efficiency for a single cancer cell
– 2.5 single-base substitutions per mitosis in human tumor cell line identified using single cell amplification/sequencing
– circulating tumor cells (CTCs) of the same patient show similar CNV; CTCs of lung cancer patients show similar CNV patterns
– clinical trial for pre-implantation genomic screening for IVF using single polar bodies of oocytes
– a male's genome can be phased by sequencing sperm; a female's genome by sequencing polar bodies
– 0.1X genome coverage is enough to determine aneuploidy (at 8-cell stage) for MALBAC’s single-cell sequencing in IVF
– anomalous transition/transversion ratio for newly acquired SNVs

11:10 a.m. – 11:35 a.m.
* Jeremy Schmutz, HudsonAlpha Institute
“Evaluating Moleculo Long Read Technology for de novo Whole Genome Sequencing”

– Moleculo Long Read technology – sequencing two complex plant genomes: the inbred diploid switchgrass comparator Panicum hallii (600 Mb) and the outbred tetraploid Miscanthus sinensis (~2.3 Gb)
– both genomes include long, retrotransposon-derived repeats and diverse GC content, and present significant challenges for short-read NGS whole-genome shotgun sequencing
– Moleculo reads – 10kb reads (5kb avg), high accuracy (1.26bp error/10k), tunable to genome size/complexity, reduces computational complexity
– limitations = distribution of reads depends on local repetitive content & global repeat freq; illumina based => localized chemistry issues; some amplification bias

11:35 a.m. – 12:00 p.m.
* Jonas Korlach, Pacific Biosciences
“Automated, Non-Hybrid De Novo Genome Assemblies and Epigenomes of Bacterial Pathogens”

AGBT 2013 Friday sessions

Plenary Session:  Genomic Studies II
John McPherson, Ontario Institute for Cancer Research, Chair

9:00 a.m. – 9:30 a.m.
Steve Scherer, The Hospital for Sick Children
“Whole Genome Sequencing Analysis in Autism”

– Autism Spectrum Disorder (ASD) – high heritability, familial clustering and a ~4:1 male-to-female bias (hence many candidate genes on the X chromosome)
– 100+ risk genes, ~10 not present on the capture
– WGS (at BGI, >30x) on ASD families; need for better indel callers (indel validation rate ~20%, SNV validation rate >90%)
– better and more uniform X chr and splice site coverage in WGS compared to WES
– also mentions PGP-Canada

9:30 a.m. – 10:00 a.m.
Jay Shendure, University of Washington
“Tackling Genetic Heterogeneity with Massive Multiplexing and Molecular Counting”

Missed out on the talk, but here is an older slide-deck from Shendure which covers most of the stuff presented

10:00 a.m. – 10:30 a.m.
* Gabe Rudy, Golden Helix (@gabeinformatics)
“Home-Brewed Personalized Genomics: The Quest for Meaningful Analysis Results of a 23andMe Exome Pilot Trio of Myself, Wife, and Son”

– $999 80x exome for the trio, mother with clinically-diagnosed idiopathic rheumatoid arthritis
– 75bp PE, SureSelect capture, BWA/GATK; deliverables: BAM, VCF and a PDF summary report
– goals = variant-call accuracy from NGS, usefulness of 23andMe risk variants, usefulness of a healthy person's exome, potential to find driver variants and genes for diagnosis
– 3 Mendelian errors, usually due to technical biases (e.g., mom and dad carry a non-reference nucleotide, messing up the child's genotype)
– 8000 phantom variants (a GATK bug in that version)
– Ingenuity Variant Analysis performed on the exome trio data – look for rare variants within 1-hop of JIA gene

— Illumina User Meeting Dispatch newsletter

11:00 a.m. – 11:30 a.m.
Mark Yandell, University of Utah
“VAAST: A Probabilistic Disease-gene Finder for Personal Genomes”

– VAAST substantially improves upon existing approaches in terms of statistical power, flexibility and scope of use
– identifies rare-disease-causing loci using single family trios, and in small cohorts (n=3) where no two individuals share the same deleterious variants
– also identifies genes involved in common, complex diseases using many fewer cases than traditional GWAS
– working to integrate indels, CNVs and SVs into VAAST, along with pedigrees and non-human projects (pigeonomics)

11:30 a.m. – 12:00 p.m.
* Agnes Viale, Memorial Sloan Kettering Cancer Center
“RNA-sequencing Analysis Identifies Novel Leukemic Pathways in a Genetically Accurate Model of Acute Myeloid Leukemia”

Bronze Sponsor Workshops
Chad Nusbaum, Broad Institute of MIT and Harvard, Chair

Line-up of all the vendor talks – @PerkinElmer @iontorrent @NuGENInc @illumina @BCILifeSciences @QIAGEN @PacBio @dnanexus

1:40 p.m. – 2:00 p.m.
NuGen Technologies, Inc., Christine Malboeuf, Broad Institute of MIT and Harvard
“Viral RNA Genome Sequencing of Ultra-Low Copy Samples using NuGen’s Ovation RNA-Seq”

– a human cell contains ~5 pg of RNA; ultra-low RNA means 5 fg (~1000 copies) down to 5 ag, the amounts typical of viral RNA, which do not work well with qPCR, etc.
– Challenges – low quantity, host contamination, diversity (high mutation rate), technological and extraction-process issues
– Ovation RNA-Seq v2 protocol from NuGen (500 pg to 100 ng input RNA) – low contamination
– West Nile virus – 50 fg input, 5M reads: 31% map to virus, 48% to host, covering 100% of the viral CDS
– dilutions starting with less material generated reproducible coverage profiles
– HIV – 50 fg input RNA, 5M reads: 69% viral-aligned reads, 5% host-aligned, covering 100% of the CDS
– fewer copies of input RNA meant 1-2% of reads mapping to virus and 30-40% to host, but still covered ~97% of the CDS with a reproducible coverage profile
– the process worked on samples that failed an RT-PCR/454 workflow; the method is applicable to many other viral sample types (300-75k viral copies)
– applications: surveillance of endemic/emerging viral pathogens; co-infection by multiple viruses; pathogen discovery (viral, parasitic, bacterial, fungal)

Concurrent Session: Computational Biology
Mike Zody, Broad Institute of MIT and Harvard, Chair

7:30 p.m. – 7:50 p.m.
* Mark DePristo, Broad Institute of MIT and Harvard
“Overcoming Today’s Limitations in Sequencing Technology for Human Medical Genetics”

– have sequenced 40k+ samples to date from the common (Diabetes, Autism, and Heart Disease) to the uncommon/rare (Crohn’s and Mendelian disorders)
– variation among individuals in a population is ~90% SNPs / 10% indels; among disease-causing variation, particularly in rare disease, the SNP/indel split approaches 50%/50%
– indels remain an outstanding challenge; technical and analytic reasons
– PCR-free libraries improve variant calling sensitivity & specificity
– nice visual example of data that looks clean, with almost everything matching the reference plus one SNP and some noise calls; it was actually a het indel!
– better error models and longer reads improve sensitivity to true indels
– sample size is a huge limitation to better calling; but the ensuing massive data aggregation becomes a challenge as well

7:50 p.m. – 8:10 p.m.
* Andrew Farrell, Boston College
“Reference-free Approach for Mutation Detection”

– de novo assembly is prohibitively expensive for most labs, requiring deep read coverage and massive computing power
– the practical approach is reference-guided alignment, which depends on three factors: reference accuracy, the mapper's ability to correctly (and uniquely) place reads, and the degree to which a variant allele differs from the reference (indels)
– developed a novel, completely reference-independent method – no mapping or de novo assembly of the genome; it directly compares raw sequence data from two or more samples and identifies groups of reads unique to a sample
– tested on small genomes but will tackle human (incl. tumor) genomes, metagenomes, transcriptomes

8:10 p.m. – 8:30 p.m.
* James Knight, 454 Life Sciences
“Assembling Human Sequence into Genomes”

8:30 p.m. – 8:50 p.m.
* Aaron Quinlan, University of Virginia
“LUMPY: A Probabilistic Framework for Structural Variant Discovery and Genomic Data Mining”

– structural variation (SV) needs integration of multiple alignment signals – read-pair, split-read and read-depth
– most existing SV discovery approaches utilize only one signal; poor at low sequence coverage and for smaller SVs (Hydra, DELLY, GASVPro)
– LUMPY = extremely flexible probabilistic SV discovery framework – integrates SV detection signals from read alignments or prior evidence
– 4k simulated SV – 1k each deletion, duplication, insertion, inversion – 2x, 5x, 10x, 20x coverage
– potential for a unified variant calling framework and probabilistic analyses of diverse genomic interval datasets (ENCODE)

8:50 p.m. – 9:10 p.m.
* Jeffrey Reid, Baylor College of Medicine
“Discovery of Mobile Element Variation in Ultra-deep Whole Genome Data”

9:10 p.m. – 9:30 p.m.
* Michael Schatz, Cold Spring Harbor Laboratory
“Assembling Crop Genomes with Single Molecule Sequencing”