Flink in Genomics: Efficient and scalable processing of raw Illumina BCL data
A single sequencing run can easily produce several terabytes of data, which subsequently feed a complex pipeline of analysis tools. Typically, the first step in this workflow is a rearrangement of the data, roughly equivalent to a matrix transposition, to reconstruct the original DNA fragments from the raw BCL data, in which the fragments are sliced and scattered over multiple per-cycle files. This step is followed by sorting the fragments by an identifying tag sequence (barcode) attached during sample preparation. In this talk we will present a parallel program that performs these essential operations. Our BCL converter is shown to have performance comparable to that of Illumina's shared-memory bcl2fastq tool, while also enabling easy and scalable distributed-memory parallelization. We will describe the techniques we used to achieve high performance and discuss the Flink features we have particularly appreciated, as well as those we think are still missing.
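The transposition at the heart of the conversion is easiest to see at the byte level: each per-cycle BCL payload stores one byte per cluster, with the two low bits encoding the base and the six high bits the quality score (a zero byte denotes a no-call). The following plain-Java sketch, with an illustrative in-memory layout (real BCL files also carry headers, tile structure, and compression), gathers the byte for each cluster across all cycles to rebuild the reads; it is not the talk's implementation, only a minimal illustration of the operation.

import java.util.ArrayList;
import java.util.List;

public class BclTranspose {

    private static final char[] BASES = {'A', 'C', 'G', 'T'};

    // Each element of `cycles` holds one byte per cluster for a single
    // sequencing cycle; gathering position i across all cycles yields the
    // read of cluster i (qualities, in the six high bits, omitted here).
    static List<String> transpose(List<byte[]> cycles) {
        int nClusters = cycles.get(0).length;
        List<String> reads = new ArrayList<>(nClusters);
        for (int i = 0; i < nClusters; i++) {
            StringBuilder sb = new StringBuilder(cycles.size());
            for (byte[] cycle : cycles) {
                byte b = cycle[i];
                sb.append(b == 0 ? 'N' : BASES[b & 0x3]); // low two bits = base
            }
            reads.add(sb.toString());
        }
        return reads;
    }

    public static void main(String[] args) {
        // Three cycles, two clusters: cluster 0 reads "AAN", cluster 1 "CGT".
        List<byte[]> cycles = List.of(
                new byte[]{0x40, 0x41},         // cycle 1: A (q16), C
                new byte[]{(byte) 0x80, 0x42},  // cycle 2: A (q32), G
                new byte[]{0x00, 0x43});        // cycle 3: no-call, T
        transpose(cycles).forEach(System.out::println);
    }
}

For the sorting step, the Flink DataSet API of the time already offered the needed building blocks. One possible way to obtain a globally sorted output, assuming hypothetical (barcode, bases) records standing in for the output of the conversion step, is to range-partition on the barcode and then sort each partition locally:

import org.apache.flink.api.common.operators.Order;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;

public class BarcodeSort {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Hypothetical (barcode, bases) records produced by the converter.
        DataSet<Tuple2<String, String>> reads = env.fromElements(
                Tuple2.of("TTGACT", "CGT..."),
                Tuple2.of("ACAGTG", "AAN..."),
                Tuple2.of("ACAGTG", "GGC..."));

        // Range-partition on the barcode so each parallel task receives a
        // contiguous key range, then sort each partition locally; the
        // concatenated output is globally ordered by barcode.
        reads.partitionByRange(0)
             .sortPartition(0, Order.ASCENDING)
             .print();
    }
}

Because range partitioning assigns each parallel task a contiguous key range, the sorted partitions concatenate into a globally ordered result, which is what makes this pattern scale on distributed memory.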
BibTeX reference
@Misc{VPZ16a,
  author   = {Versaci, F. and Pireddu, L. and Zanetti, G.},
  title    = {Flink in Genomics: Efficient and scalable processing of raw Illumina BCL data},
  month    = sep,
  year     = {2016},
  type     = {Presentation at the Flink Forward 2016 workshop},
  keywords = {Big data, Apache Flink, Genomics, NGS},
  url      = {https://publications.crs4.it/pubdocs/2016/VPZ16a},
}