Sparse Interpolation, Rational Approximation and Exponential Analysis (16w5038)
George Labahn (University of Waterloo)
Annie Cuyt (Universiteit Antwerpen)
Avram Sidi (Israel Institute of Technology (Technion))
Tantalizing preliminary results show connections both in the techniques used and in the difficulties encountered. These connections include overlap between methods used for exact versus numeric computation, and overlap in the types of sparse generating functions. For example, in sparse interpolation, determining the number of nonzero terms in a sparse representation using a minimal number of evaluations, or probes, has many similarities to determining the number of vertices when reconstructing a polygonal shape from its moment data. At the same time, some techniques are more firmly tied to their application: the use of randomized probes to improve numerical stability is possible in sparse interpolation but not in the (seemingly similar) shape-from-moments problem.
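The link between the number of terms and a small number of probes can be illustrated with a standard Hankel-matrix observation: for a sum of t exponential terms, the Hankel matrix built from equidistant samples has rank exactly t. The following is a minimal numerical sketch with hypothetical parameters (the model f, the bound n, and the tolerance are all illustrative assumptions, not taken from any specific method discussed at the workshop):

```python
import numpy as np

# Hypothetical sparse exponential sum with t = 2 nonzero terms
def f(s):
    return 3.0 * 0.8 ** s + 1.5 * 0.4 ** s

# Probe at 2*n equidistant points for some guessed bound n >= t
n = 5
samples = np.array([f(s) for s in range(2 * n)])

# Build the n x n Hankel matrix H[i, j] = samples[i + j];
# its numerical rank equals the number of nonzero terms t
H = np.array([[samples[i + j] for j in range(n)] for i in range(n)])
rank = np.linalg.matrix_rank(H, tol=1e-8)
print(rank)  # → 2
```

In exact arithmetic the rank determination is clean; with noisy samples the choice of tolerance becomes the delicate numerical issue alluded to above.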
Unfortunately, techniques using early termination work well in exact computation but remain a challenge in numeric environments. The workshop promises to bring together a number of researchers from application areas (for example, biomedical signal processing) where the nature of the generating functions forces a wide variety of methods for computing the sparse terms. For example, a 2D shape-from-moments problem and a sparse representation of a signal in terms of exponentially decaying functions are both solved via a generalized Hankel eigenvalue problem. However, the same signal processing problem with a signal generated by cosine functions of different frequencies requires a generalized eigenvalue problem involving Hankel-plus-Toeplitz matrices. These methods seem similar but, particularly in noisy numeric environments, have significantly different conditioning properties. The use of randomness, both as a way to improve the numerical conditioning of sparse problems and as a fundamental part of how the algorithms determine the sparse support more efficiently, will play an important role in the workshop.
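The generalized Hankel eigenvalue problem mentioned above can be sketched in a few lines. For a model f(s) = Σ_j c_j b_j^s, the bases b_j are the generalized eigenvalues of the pencil formed by two shifted Hankel matrices of samples (this is Prony's classical observation). The bases and coefficients below are hypothetical illustration values:

```python
import numpy as np

# Hypothetical sparse model: f(s) = sum_j c_j * b_j**s with t = 3 terms
b = np.array([0.9, 0.6, 0.3])   # assumed bases (e.g. exponentials of decay rates)
c = np.array([2.0, -1.0, 0.5])  # assumed coefficients
t = len(b)

# 2t equidistant samples f(0), ..., f(2t-1) suffice in the exact setting
f = np.array([(c * b ** s).sum() for s in range(2 * t)])

# Two shifted t x t Hankel matrices: H0[i,j] = f[i+j], H1[i,j] = f[i+j+1]
H0 = np.array([[f[i + j] for j in range(t)] for i in range(t)])
H1 = np.array([[f[i + j + 1] for j in range(t)] for i in range(t)])

# The bases b_j are the generalized eigenvalues of the pencil (H1, H0),
# here computed as the ordinary eigenvalues of H0^{-1} H1
vals = np.sort(np.linalg.eigvals(np.linalg.solve(H0, H1)).real)
print(vals)  # recovers [0.3, 0.6, 0.9] up to rounding
```

With noise-free, well-separated bases this works as written; the conditioning issues discussed above arise precisely when noise is added or when the bases cluster, which is where the Hankel versus Hankel-plus-Toeplitz distinction matters.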
This workshop provides a forum for focused discussion among experts in both algorithm development and applications. The goal is to understand the techniques used by algorithm designers and by builders of sparse mathematical models and their applications, particularly in the presence of noisy data. The expectation is that understanding the applications and their mathematical models, collecting the various techniques for efficiently building sparse generating functions, and understanding the subtle similarities and differences among the computational methods will open up a number of new and interesting research directions.
The problem of finding sparse representations for mathematical models using a minimum number of probes has only recently appeared in the literature. Investigating the techniques and understanding the difficulties in important applications will go a long way toward determining the present feasibility of the methods, and will help point to directions for overcoming their limitations.