Statistical and Computational Challenges in High-Throughput Genomics with Application to Precision Medicine (18w5202)

Arriving in Oaxaca, Mexico Sunday, November 4 and departing Friday November 9, 2018

Organizers

Gabriela Cohen-Freue (University of British Columbia)

(Harvard University)

(University of California, San Francisco)

Ronglai Shen (Memorial Sloan-Kettering Cancer Center)

Objectives

High-throughput genomic experiments are a key component of precision medicine. Such experiments are expensive and time-consuming, and effort and resources are wasted by poor experimental design, inadequate pre-processing, and sub-optimal analytical strategies. Addressing these problems is fundamental to realizing the promise of precision medicine.



The primary objective of this workshop is to bring together Biomathematicians, Biostatisticians and Computational Biologists to discuss the analytic challenges posed by high-throughput genomic data in the context of precision medicine. The participants will be quantitative scientists with expertise in methods development and in the implementation of scalable open-source software packages in this area. The specific objectives of this workshop are to:



  • Share methods for adjustment of technological artifacts, such as those related to batch effects and GC content, in high-throughput genomic data (a simple illustration follows this list);

  • Establish guidelines and standards to increase research reproducibility;

  • Examine new methods for the analysis of sequence data;

  • Discuss approaches to integrating multiple forms of genomic data.
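To make the first item concrete, the following Python sketch illustrates the simplest form of batch-effect adjustment: a location-only correction that shifts each batch to the overall per-gene mean. It is a minimal illustration under simplifying assumptions, not a method endorsed by the workshop, and the names (expr, batch, center_batches) are hypothetical.

    # Minimal illustrative sketch (not a workshop method): location-only batch
    # adjustment of a genes-by-samples expression matrix. Names are hypothetical.
    import numpy as np

    def center_batches(expr, batch):
        """Shift each batch to the overall per-gene mean.

        expr  : genes x samples array of (log-scale) expression values
        batch : length-samples array of batch labels
        """
        adjusted = expr.astype(float)                            # work on a copy
        grand_mean = expr.mean(axis=1, keepdims=True)            # per-gene overall mean
        for b in np.unique(batch):
            cols = batch == b
            batch_mean = expr[:, cols].mean(axis=1, keepdims=True)
            adjusted[:, cols] += grand_mean - batch_mean         # remove additive batch shift
        return adjusted

    # Simulated example: two batches of ten samples, the second with an offset.
    rng = np.random.default_rng(0)
    expr = rng.normal(size=(100, 20))
    batch = np.array([0] * 10 + [1] * 10)
    expr[:, batch == 1] += 2.0                                   # artificial batch effect
    corrected = center_batches(expr, batch)

In practice such corrections are carried out with established tools, for example ComBat in the sva Bioconductor package or removeBatchEffect in limma, which also model scale differences and protect biological covariates of interest.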

Relevance, Importance, and Timeliness

Researchers and clinicians now routinely gather and analyze genomic data to learn about basic biology, identify drug targets, and find biomarkers, for example for disease risk prediction, early detection of disease onset, and prediction of response to treatment. In recent years it has become obvious (for example, through several retractions from high-profile journals such as Science and Nature) that proper assessment of data structures and statistical modeling of bias and uncertainty are essential in any genomic analysis, as technological artifacts often dwarf biological signals. We therefore believe this workshop is timely and important: it revisits the basics of statistical genomics and develops novel methods to implement these principles, particularly for the data emerging from new high-throughput technologies. Addressing these issues will, in turn, move the field of precision medicine forward.



There are numerous conferences specializing in the analytical challenges that arise in computational biology. Most of these are organized and attended by computer scientists, with an emphasis on algorithms for processing the genomic data generated to address relevant biological questions. While these algorithms are undoubtedly important in their own right, they represent only one aspect of the many quantitative challenges involved in tackling these scientific problems. By gathering together Biomathematicians, Biostatisticians and Computational Biologists, this workshop will make it possible to address the full range of these analytic challenges.