Beyond Convexity: Emerging Challenges in Data Science (17w5159)

Arriving in Oaxaca, Mexico on Sunday, October 22 and departing Friday, October 27, 2017

Organizers

Robert Nowak (University of Wisconsin-Madison)

(University of Wisconsin-Madison)

(University of Wisconsin-Madison)

(Sandia National Laboratories)

Objectives

The primary objectives of this workshop are to explore nonconvex problems that arise in data science and to develop algorithms for solving these problems that are practical, that draw on the full range of technology from optimization, statistics, and computational hardware and systems, and that come with performance guarantees. We will bring together researchers from the machine learning, optimization, statistics, and applied mathematics communities to explore these key challenges and opportunities at the intersection of nonconvex optimization and machine learning.

The year 2017 will be an optimal time for this workshop. Convex optimization formulations have underpinned solver technology for much of the “big data” revolution (particularly over the past 15 years), and impressive advances have been made both in the effectiveness of these methods and in our understanding of them. However, there is a feeling across the field that we will need to look beyond convex optimization in designing solvers for future generations of data analysis and machine learning problems. Early efforts in this area involved adaptations of convex optimization solvers; these are sometimes effective, though performance guarantees are usually lacking. More recent work has made use of techniques better suited to handling nonconvexity. Lately, there have been tantalizing new developments in explaining the performance of nonconvex solvers and in proving strong results about their global convergence properties in specific problem domains. We need to explore how far these encouraging developments can be extended to other problems, what other optimization tools can be brought to bear, and what novel optimization formulations can be proposed to address key problems in machine learning.

Developments are occurring rapidly in this area. It is guaranteed that important new results and interesting new applications will emerge between now and the time of the workshop. We have therefore refrained from proposing a full slate of participants at this point, to ensure that space remains to invite researchers who make key contributions in the interim. We have contacted approximately 35 potential participants to gauge their interest in the proposed workshop, and have received strongly favorable responses, often accompanied by a comment on the importance of our focus on nonconvexity in optimization and machine learning.

Specific objectives include the following:

- Assess the state of the art in nonconvex formulations and algorithms for machine learning problems, and identify areas in which improved solver technology and improved understanding are needed. We will address such specific questions as:

  * What are the most important instances of nonconvex learning problems?

  * Where are the practical bottlenecks in solving nonconvex formulations? What algorithms and heuristics have been tried to date?

  * What are the different formulation options for writing the key problems as nonconvex optimization problems?

  * Are other related formulation types appropriate, for example, game-theoretic formulations or variational inequalities?

  * What are the key properties of these formulations? For example, is there separability or some structure that should be exploited by algorithms? Are there a multitude of saddle points? Is the nonconvexity “benign” in some sense? (A minimal sketch illustrating a benign nonconvex landscape appears after this list.)

- What are the gaps in our understanding of algorithm performance? If there exist algorithms or heuristics that are (at least partly) effective, do we understand why? Can such techniques be understood using perspectives from optimization, statistics, learning, or a synthesis of these? (One example of an issue of this type is the use of dropout in neural network training. Another is the unreasonable effectiveness of block coordinate descent approaches, such as alternating least squares in tensor factorization; a minimal ALS sketch appears after this list.)

- Identify novel machine-learning settings in which nonconvex optimization expertise can be leveraged to build faster and more accurate data analysis methods with performance guarantees.

- Introduce non-optimization participants to methods and analysis techniques used in nonconvex optimization settings that are possibly relevant to their nonconvex formulations. These could include both extensions of known techniques from convex optimization and topics targeted more specifically at nonconvex situations, such as integer programming methods, nonlinear programming methods, trust-region methods, and semi-algebraic properties that facilitate convergence results.

- Provide young researchers and underrepresented minorities with opportunities to network with each other and with senior researchers in their fields.
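
To make the “benign nonconvexity” question above concrete, here is a minimal Python sketch (our illustration; the dimension, step size, and iteration count are arbitrary choices). It runs plain gradient descent on the nonconvex objective f(x) = (1/4)‖xx^T − M‖_F^2, where M is a rank-one positive semidefinite matrix. The origin is a saddle point and both x = z and x = −z are global minimizers, yet from a generic random start gradient descent reliably drives the residual to zero; landscapes of this kind are exactly what the recent global convergence results mentioned above seek to characterize.

```python
import numpy as np

# Illustrative sketch (assumed setup, not from the proposal):
# gradient descent on f(x) = 0.25 * ||x x^T - M||_F^2 with M = z z^T.
# The problem is nonconvex (x = 0 is a saddle; x = z and x = -z are
# both global minimizers), but the landscape is "benign": a generic
# random start leads plain gradient descent to a global minimizer.

rng = np.random.default_rng(0)
d = 20
z = rng.standard_normal(d)
M = np.outer(z, z)                   # ground-truth rank-one PSD matrix

def grad(x):
    # gradient of 0.25 * ||x x^T - M||_F^2 is (x x^T - M) x
    return (np.outer(x, x) - M) @ x

x = 0.1 * rng.standard_normal(d)     # generic random initialization
step = 0.1 / np.linalg.norm(M, 2)    # conservative step size
for _ in range(2000):
    x = x - step * grad(x)

print(np.linalg.norm(np.outer(x, x) - M))  # residual: near zero
```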
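
The unreasonable effectiveness of block coordinate descent can be illustrated in the same spirit. The sketch below (again our illustration, with arbitrary dimensions and rank) applies alternating least squares (ALS) to matrix factorization: each block subproblem, for U with V fixed and for V with U fixed, is an ordinary linear least-squares problem, even though the joint problem in (U, V) is nonconvex. Tensor-factorization ALS alternates in the same way over three or more factor matrices.

```python
import numpy as np

# Illustrative sketch (assumed setup, not from the proposal):
# alternating least squares for min over U, V of ||M - U V^T||_F^2.
# The joint problem is nonconvex, but each block update is a convex
# (indeed linear least-squares) problem with a closed-form solution.

rng = np.random.default_rng(1)
n, m, r = 50, 40, 3
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))  # exact rank-r target

V = rng.standard_normal((m, r))      # random initialization
for _ in range(100):
    # U-step: with V fixed, min_U ||M - U V^T||_F^2 is least squares in U
    U = np.linalg.lstsq(V, M.T, rcond=None)[0].T
    # V-step: with U fixed, min_V ||M - U V^T||_F^2 is least squares in V
    V = np.linalg.lstsq(U, M, rcond=None)[0].T

print(np.linalg.norm(M - U @ V.T))   # residual: typically near zero
```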