Benchmarking Variant Identification Tools for Plant Diversity Discovery

2019 (unpublished)
The ability to accurately and comprehensively identify genomic variants is critical for plant studies utilizing high-throughput sequencing. Most bioinformatics tools for processing next-generation sequencing data were originally developed and tested in human studies, raising questions about their efficacy for plant research. A detailed evaluation of the entire variant calling pipeline, including alignment, variant calling, variant filtering, and imputation, was performed on different programs using both simulated and real plant genomic datasets.

Results

A comparison of SOAP2, Bowtie2, and BWA-MEM found that BWA-MEM consistently aligned the most reads with high accuracy, whereas Bowtie2 had the highest overall accuracy. A comparison of GATK HaplotypeCaller versus SAMtools mpileup indicated that the choice of variant caller affected precision and recall differentially depending on the level of diversity, sequence coverage, and genome complexity. A cross-reference experiment using the S. lycopersicum and S. pennellii reference genomes revealed the inadequacy of a single reference genome for variant discovery that includes distantly related plant individuals. A machine-learning-based variant filtering strategy outperformed the traditional hard-cutoff strategy, yielding more true positive variants and fewer false positive variants. A two-step imputation method, which utilized a set of high-confidence SNPs as the reference panel, showed up to 60% higher accuracy than direct LD-based imputation.

Conclusions

Programs in the variant discovery pipeline perform differently on plant genomic datasets, and the choice of programs should be guided by the goal of the study and the available resources. This study provides important guidance for plant biologists utilizing next-generation sequencing data for diversity characterization and crop improvement.

Background

Genomic technologies provide unprecedented opportunities to reveal the history of crop domestication, to discover novel genetic diversity, and to understand the genetic basis of economically important traits, collectively contributing to crop improvement and food security [1]. One of the most important steps in genomic analyses is the accurate and comprehensive identification of genetic variants.
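Precision and recall against a known truth set are the core metrics in caller comparisons like the HaplotypeCaller-versus-mpileup benchmark above. A minimal sketch of how such metrics can be computed, assuming variants are matched exactly on (chromosome, position, ref, alt) — the toy variants and helper name are illustrative, not from the study:

```python
# Minimal sketch: benchmark a call set against a truth set by
# matching variants on (chromosome, position, ref, alt).
# The example variants below are illustrative, not from the study.

def precision_recall(called, truth):
    """Return (precision, recall) for a set of called variants
    evaluated against a truth set."""
    called, truth = set(called), set(truth)
    tp = len(called & truth)   # true positives
    fp = len(called - truth)   # false positives
    fn = len(truth - called)   # false negatives ("hidden diversity")
    precision = tp / (tp + fp) if called else 0.0
    recall = tp / (tp + fn) if truth else 0.0
    return precision, recall

# Toy truth set and call set, each variant as (chrom, pos, ref, alt)
truth = [("chr1", 100, "A", "G"), ("chr1", 250, "T", "C"),
         ("chr2", 40, "G", "A"), ("chr2", 90, "C", "T")]
called = [("chr1", 100, "A", "G"), ("chr2", 40, "G", "A"),
          ("chr2", 300, "A", "T")]  # last call is a false positive

p, r = precision_recall(called, truth)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.67 recall=0.50
```

Exact-match comparison is the simplest possible scheme; real benchmarks typically also normalize variant representation (e.g. left-align indels) before matching.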
As sequencing costs continue to decrease, whole genome sequencing (WGS) strategies are increasingly employed for plant diversity and domestication studies [2-5]. Accompanying improvements in sequencing technology is the need to not only improve but also better understand the algorithms that enable variant calling from sequencing data. Many of the algorithms used in the processing of sequencing data were originally developed and evaluated in human WGS studies, yet are frequently used by plant genomic researchers [6-9]. The underlying assumption is that the performance of a given algorithm on human data will be similar on plant data, in spite of significant differences between human and plant genomes.

The variant discovery pipeline for WGS datasets can be roughly divided into four steps: read mapping, variant calling, variant filtering, and imputation. Sequence aligners for the read mapping step can be grouped according to their indexing methodologies [9]. Programs such as Novoalign and GSNAP [10] use hash-table indexing methods, whereas BWA [11], SOAP2 [12], and Bowtie2 [13] use Burrows-Wheeler Transform indexing algorithms. Variant calling programs can be categorized into alignment-based programs, such as SAMtools [14] and FreeBayes [15], and assembly-based programs, such as GATK HaplotypeCaller [16] and FermiKit [17]. Variant filtering steps remove low-quality variants based on various quality metrics such as base quality, read depth, and mapping quality. The purpose of this step is to remove false positive variants while minimizing false negative variants, a source of "hidden diversity". The basic filtering strategy, termed "hard-filtering" [18], sets empirical cutoffs on quality metrics to remove likely false positive variants.
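Hard-filtering as described above amounts to rejecting any variant whose quality annotations fail fixed empirical cutoffs. A minimal sketch of that logic follows; the metric names roughly follow common GATK-style annotations (QD, FS, MQ), and the threshold values are illustrative assumptions, not cutoffs reported by the study:

```python
# Sketch of hard-filtering: reject a variant if any quality metric
# fails a fixed empirical cutoff. Metric names follow common
# GATK-style annotations; the thresholds are illustrative only.

HARD_CUTOFFS = {
    "QD": ("min", 2.0),    # quality by depth
    "FS": ("max", 60.0),   # Fisher strand bias (phred-scaled)
    "MQ": ("min", 40.0),   # RMS mapping quality
}

def passes_hard_filter(annotations):
    """Return True if a variant's annotations satisfy every cutoff."""
    for metric, (direction, cutoff) in HARD_CUTOFFS.items():
        value = annotations.get(metric)
        if value is None:
            continue  # missing annotation: do not filter on it
        if direction == "min" and value < cutoff:
            return False
        if direction == "max" and value > cutoff:
            return False
    return True

# Toy variants with hypothetical annotation dictionaries
variants = [
    {"QD": 25.1, "FS": 1.2, "MQ": 59.8},   # clean variant -> pass
    {"QD": 1.3, "FS": 4.0, "MQ": 58.0},    # low QD -> fail
    {"QD": 18.0, "FS": 75.5, "MQ": 60.0},  # strand bias -> fail
]
kept = [v for v in variants if passes_hard_filter(v)]
print(len(kept))  # 1 variant survives
```

Because each cutoff is applied independently, a variant that is marginal on several metrics at once still passes — one weakness of hard-filtering that motivates the machine-learning-based alternative evaluated in this study.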
doi:10.21203/rs.2.9666/v2