# Sparse and Low Rank Approximation (11w5036)

Arriving in Banff, Alberta on Sunday, March 6, and departing Friday, March 11, 2011

## Organizers

Gitta Kutyniok (Technische Universität Berlin)

Holger Rauhut (RWTH Aachen University)

Joel Tropp (California Institute of Technology)

Ozgur Yilmaz (University of British Columbia)

## Objectives

The workshop will focus on the following three subtopics.

### Sparse Approximation

The field of sparse approximation is concerned with the problem of representing a target vector using a short linear combination of vectors drawn from a large, fixed collection called a dictionary. The earliest applications arose in approximation theory for functions and operators. A closely related classical problem is variable selection in regression. Modern applications include machine learning, signal and image processing, and numerical analysis.

Although sparse approximation is computationally hard in the general case, contemporary research has identified situations where tractable algorithms can provably solve sparse approximation problems. Some of the most interesting recent advances concern dictionary learning and sparse approximation of random signals.
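As a minimal illustration of such a tractable algorithm, one of the standard greedy methods, Orthogonal Matching Pursuit, can be sketched in a few lines of Python. This is a toy sketch, not part of the proposal; the dictionary size and sparsity level are arbitrary choices.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select k dictionary atoms."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit of y on the atoms selected so far
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - D @ x
    return x

# toy example: approximate a 2-sparse signal over a random dictionary
rng = np.random.default_rng(0)
D = rng.standard_normal((50, 100))
D /= np.linalg.norm(D, axis=0)        # unit-norm atoms
x_true = np.zeros(100)
x_true[[3, 17]] = [1.5, -2.0]
y = D @ x_true
x_hat = omp(D, y, 2)
```

Each iteration adds the atom most correlated with the residual and re-solves the least-squares problem on the enlarged support, so the residual norm never increases.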

### Compressed Sensing

Compressed sensing is a young field which has seen enormous interest and growth. Quite surprisingly, it predicts that sparse high-dimensional signals can be recovered from what were previously considered highly incomplete measurements by using efficient algorithms: convex relaxation methods, such as $\ell_1$-minimization, and greedy algorithms. This discovery has led to a fundamentally new approach to certain signal and image recovery problems. Applications include signal acquisition, A/D conversion, radar, astronomy, and more.

Current research focuses on efficient algorithms as well as on recovery guarantees for certain measurement matrices. So far, mainly random constructions of good measurement matrices are known in this context, and for this reason mathematical research in compressed sensing exploits many tools from probability theory and the geometry of Banach spaces.
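To make the recovery problem concrete, $\ell_1$-minimization (basis pursuit) can be cast as a linear program via the standard splitting $x = u - v$ with $u, v \ge 0$. The sketch below, not part of the proposal, uses a random Gaussian measurement matrix and SciPy's LP solver; the dimensions are arbitrary illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||x||_1 subject to Ax = y, as a linear program with x = u - v."""
    m, n = A.shape
    c = np.ones(2 * n)                 # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])          # equality constraint: A(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

# toy example: 30 random measurements of a 3-sparse signal in dimension 80
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 80)) / np.sqrt(30)
x_true = np.zeros(80)
x_true[[5, 40, 66]] = [2.0, -1.0, 1.5]
y = A @ x_true
x_hat = basis_pursuit(A, y)
```

Since $x_{\mathrm{true}}$ is feasible, the LP optimum satisfies $\lVert \hat{x} \rVert_1 \le \lVert x_{\mathrm{true}} \rVert_1$; for random Gaussian matrices with sufficiently many measurements, the theory mentioned above predicts exact recovery with high probability.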

The workshop shall initiate a fertile discussion among mathematicians, statisticians, computer scientists, and experts from application areas: on the one hand to advance research in this highly topical area by combining different techniques, and on the other hand to bring problems from application areas to the attention of more theoretically oriented researchers.

### Matrix identification: rank minimization, algorithms, and recovery results

Rank minimization has its roots in image compression algorithms, learning theory, and collaborative filtering. In a nutshell, collaborative filtering is the task of making automatic predictions about the interests of a user by collecting taste information from many users. A simple mathematical model for this setup is the matrix completion problem, where one observes only a few entries of a low-rank matrix and tries to fill in the missing entries. More generally, one may consider the problem of minimizing the rank of a matrix subject to a general linear constraint. A quite surprising recent result in this direction, due to Candès and Tao, shows that many $n \times n$ matrices of rank $r$ can be recovered from only $\mathcal{O}(nr \log(n))$ observations of their entries via nuclear norm minimization (minimizing the $\ell_1$-norm of the singular values). It turns out that many of the ideas and techniques from compressive sensing can be transferred to the rank-minimization problem and other matrix recovery problems.
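One well-known iterative scheme for nuclear-norm-based matrix completion is singular value thresholding, which alternates soft-thresholding of the singular values with a gradient step on the observed entries. The sketch below is illustrative only; the matrix size, sampling rate, threshold, and step size are arbitrary choices, not values from the literature.

```python
import numpy as np

def svt_complete(M_obs, mask, tau=2.0, step=1.2, iters=1000):
    """Singular value thresholding sketch for matrix completion.

    M_obs holds the observed entries (zeros elsewhere); mask is a boolean
    array marking which entries were observed.
    """
    Y = np.zeros_like(M_obs)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt    # soft-threshold singular values
        Y = Y + step * mask * (M_obs - X)          # correct on observed entries
    return X

# toy example: complete a rank-2 matrix from roughly 60% of its entries
rng = np.random.default_rng(2)
n, r = 20, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
mask = rng.random((n, n)) < 0.6
X_hat = svt_complete(M * mask, mask)
```

The soft-thresholding step is the proximal operator of the nuclear norm, which is how the $\ell_1$-shrinkage intuition from compressed sensing carries over to the spectrum of a matrix.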

This direction is still at an early stage, and far more developments can be expected in the near future. The workshop is therefore a unique opportunity to discuss the most recent results, stimulate new research on the interplay between sparse approximation, compressed sensing, and low-rank matrix recovery, and, in particular, formulate open questions, thereby significantly influencing future research in this field.

### Application Areas

We would like to focus on the following applications of sparse approximation:

- Astronomical image and signal processing

- Radar imaging

- Wireless communication

- Seismology

We will invite experts from these application fields in order to stimulate discussions between practitioners and mathematicians, and thereby initiate new research directions and new research collaborations.
