CRS4

Scalable genomics: from Raw Data to Aligned Reads on Apache Yarn

Luca Pireddu, Francesco Versaci, Gianluigi Zanetti
Misc - September 2016
Download the publication: poster_vertical.pdf [3.7 MB]
The adoption of Big Data technologies can potentially boost the scalability of data-driven biology and health workflows by orders of magnitude. Consider, for instance, that technologies in the Hadoop ecosystem have been successfully used in data-driven industry to scale processes to levels much larger than any biology- or health-driven work attempted thus far. In this work we demonstrate the scalability of a sequence alignment pipeline based on technologies from the Hadoop ecosystem – namely, Apache Flink and Hadoop MapReduce, both running on the distributed Apache Yarn platform. Unlike previous work, our pipeline starts processing directly from the raw BCL data produced by Illumina sequencers. A Flink-based distributed algorithm reconstructs reads from the Illumina BCL data and then demultiplexes them – analogously to the bcl2fastq2 program provided by Illumina. Subsequently, the BWA-MEM-based distributed aligner from the Seal project is used to perform read mapping on the Yarn platform. While the standard Illumina and BWA-MEM programs are limited to shared-memory parallelism (multi-threading), our solution is completely distributed and can scale across a large number of computing nodes. Results show good pipeline scalability, almost linear in the number of nodes. In addition, this approach automatically benefits from the robustness to hardware failure and transient cluster problems provided by the Yarn platform, as well as from the scalability of the Hadoop Distributed File System. Moreover, this Yarn-based approach complements the upcoming version 4 of the GATK toolkit, which is based on Spark and can therefore run on Yarn. Together, they can be used to form a complete, scalable Yarn-based variant calling pipeline for Illumina data, which will be further improved with the arrival of distributed in-memory filesystem technology such as Apache Arrow, thus removing the need to write intermediate data to disk.
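To make the demultiplexing step concrete, the sketch below shows the core barcode-matching logic in plain Python: each read's index barcode is compared against a sample sheet, tolerating one mismatch (the default behaviour of bcl2fastq2). The sample sheet and barcodes are hypothetical, and the real pipeline performs this as a distributed Flink job over reads reconstructed from BCL tiles; this is only a minimal illustration of the per-read decision.

```python
# Hypothetical sample sheet mapping index barcodes to sample names.
SAMPLE_SHEET = {
    "ACGTAC": "sample_A",
    "TTGCAA": "sample_B",
}

def hamming(a, b):
    """Number of mismatching positions between two equal-length barcodes."""
    return sum(x != y for x, y in zip(a, b))

def demultiplex(barcode, max_mismatches=1):
    """Return the matching sample name, or None for 'undetermined'."""
    matches = [sample for bc, sample in SAMPLE_SHEET.items()
               if hamming(barcode, bc) <= max_mismatches]
    # An ambiguous barcode (matching several samples) is left undetermined.
    return matches[0] if len(matches) == 1 else None

print(demultiplex("ACGTAC"))  # exact match -> sample_A
print(demultiplex("ACGTAA"))  # one mismatch -> sample_A
print(demultiplex("GGGGGG"))  # no match -> None (undetermined)
```

In the distributed setting, this per-read function is embarrassingly parallel, which is what lets the Flink job scale it across many nodes.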

BibTeX Reference

@Misc{PVZ16,
  author   = {Pireddu, L. and Versaci, F. and Zanetti, G.},
  title    = {Scalable genomics: from Raw Data to Aligned Reads on Apache Yarn},
  month    = sep,
  year     = {2016},
  keywords = {genomics, big data, yarn, hadoop},
  url      = {https://publications.crs4.it/pubdocs/2016/PVZ16},
}

Other publications in the database

» Luca Pireddu
» Francesco Versaci
» Gianluigi Zanetti