Data-Driven Methods for Reduced-Order Modeling and Stochastic Partial Differential Equations (17w5140)

Arriving in Banff, Alberta Sunday, January 29 and departing Friday February 3, 2017

Organizers

(University of British Columbia)

(University of Washington)

(Massachusetts Institute of Technology)

Objectives

Data methods are leading to transformative changes across the engineering, physical and biological sciences. Our objective is to bring together leading experts in various fields of the mathematical sciences with the goal of integrating state-of-the-art methods across mathematical fields and scientific disciplines. Now is the time for such an effort, as many transformative innovations are emerging within individual disciplines but have yet to migrate more broadly across the mathematical sciences and domain sciences.



Mathematical strategies that rely on dimensionality reduction are of growing importance given the continued demand for methods capable of handling the big-data sets that have become ubiquitous in every field of science. The development of experimental tools capable of generating enormous amounts of data, coupled with the plummeting cost of storing, analyzing and dispensing these data, enables scientists to grapple with problems that were out of reach only 10 years ago. Thus the ability to use such rich data sets in conjunction with more traditional analysis, modeling, computation, and dynamical systems theory is of critical importance. Indeed, such methods have even been proposed as the basis of multi-scale (e.g. equation-free) modeling strategies that characterize the interplay of micro- and macro-scale physics in a purely data-driven way, without forcing inaccurate, over-fit and/or ad hoc models upon the system. The goal of this workshop is to seize this extremely compelling -- and extraordinarily challenging -- opportunity by integrating cutting-edge data-analysis techniques being developed in statistics and computer science with more traditional applied mathematics methodologies. The workshop focuses on model reduction, data assimilation, sparse sampling, equation-free methodologies, and machine learning, with the goal of providing innovative tools for adaptively modeling and controlling complex, nonlinear processes.



Even when the complex system dynamics, physical observations and measurements, and low-dimensional embeddings are taken together, significant challenges remain in accurately and efficiently prescribing the best (most statistically likely) evolution of the system. Several issues are especially pernicious, including (i) capturing the correct micro-scale physics at the macroscopic level, i.e. multi-scale physics and its associated model errors, (ii) dealing with a highly limited (sparse) number of observations for assimilation, and (iii) quantifying the uncertainty and its dynamic evolution in low-rank approximations of the dynamics. The goal of this workshop is to address mathematical strategies whereby these obstacles can be overcome in a computationally efficient manner by leveraging the expertise of the strongest people in the field. To do so, ideas from equation-free modeling (EF), compressive sensing (CS) and machine learning (ML) are integrated to enhance modeling and predictive capabilities. The latter two techniques (CS, ML) are quite novel in the context of dynamical systems and data assimilation, whereas the former (EF) is now at the cutting edge of data-driven methods for understanding complex systems. Thus the overall research objective synthesizes model reduction, sparse sampling, and machine learning with the goal of adaptively modeling, controlling and/or characterizing complex, nonlinear processes. This work will catalyze advances in many interdisciplinary fields, including, for example, short-range weather forecasting, neuroscience, epidemiology and financial market modeling.



To make the mathematical framework more precise, and to identify techniques and innovations capable of significant and broad impact across the engineering, physical and biological sciences, we will bring together experts around the following themes:



1. Reduced Models: The dynamics of complex nonlinear systems are often sparse in the sense that a relatively small subset of the full space is needed to describe the evolution of the system. As a result, most of the nonlinear dynamics may be compressed, or encoded, in a much smaller space than the full space, i.e. there is an underlying low-dimensional attractor in the nonlinear system. Such dimensionality reduction lends itself naturally to the building of libraries of relevant dynamic modes and invokes the power of machine learning methodologies. All the workshop participants have an extensive history of engagement in dimensionality-reduction methods applied broadly to numerical and experimental data in, for instance, climate science, ocean modeling, turbulent flow control and fluids, laser dynamics, material science and neuro-sensory systems.
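
The following minimal sketch, in Python with NumPy, illustrates the first step of such a reduction: extracting a low-dimensional basis from snapshot data via the proper orthogonal decomposition (POD), computed with the singular value decomposition. The snapshot matrix and all sizes below are synthetic, illustrative assumptions rather than data from any of the applications above.

    # Minimal POD sketch: find the dominant modes of a snapshot matrix X
    # (states as columns) and encode the data in the reduced coordinates.
    import numpy as np

    rng = np.random.default_rng(0)
    n, m, r = 500, 100, 5                 # state dim, snapshots, target rank
    X = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))  # low-rank data

    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Ur = U[:, :r]                         # dominant POD modes (low-dim subspace)
    a = Ur.T @ X                          # reduced (encoded) coordinates, r x m
    X_r = Ur @ a                          # rank-r reconstruction of the data

    energy = np.cumsum(s**2) / np.sum(s**2)
    print("energy in first r modes: %.6f" % energy[r - 1])
    print("relative reconstruction error: %.2e"
          % (np.linalg.norm(X - X_r) / np.linalg.norm(X)))

In practice the truncation rank r would be chosen from the decay of the singular values rather than fixed in advance.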



2. Equation-free Methods and Bayesian Model Learning: The dynamics of the encoded subspace may be deduced from observations of the system (data measurements) and/or constraints imposed by physical laws (e.g. conservation of mass and energy). Examples include inverse modeling and dynamic mode decomposition. However, for large complex systems spanning vast spatial and temporal scales, the micro-scale dynamics are often parametrized through over-fitting of available data. Recent mathematical innovations prescribe the macro-scale evolution using a hybrid approach in which no micro-scale physics are prescribed and/or parametrized unless necessary. This equation-free strategy is highly efficient, as it uses only short bursts of measurements in time and space to construct approximations of future states without being constrained to an assumed macro-scale physics model. Moreover, data assimilation and Bayesian model inference can help calibrate the models against observations and discover new models.
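
As a concrete illustration of learning dynamics directly from measurements, the sketch below implements standard exact dynamic mode decomposition on snapshot pairs: a best-fit linear operator and its modes are extracted from data alone, with no governing equations supplied. The latent three-dimensional linear dynamics are a synthetic stand-in for real measurements.

    # Exact DMD sketch: learn eigenvalues/modes from snapshot pairs and
    # forecast one step ahead, purely from data.
    import numpy as np

    rng = np.random.default_rng(1)
    n, m, r = 64, 60, 3
    th = 0.3                                          # latent rotation angle
    A_lat = np.array([[np.cos(th), -np.sin(th), 0.0],
                      [np.sin(th),  np.cos(th), 0.0],
                      [0.0,         0.0,        0.97]])
    Q = np.linalg.qr(rng.standard_normal((n, r)))[0]  # lift latent states to R^n
    z = np.empty((r, m))
    z[:, 0] = [1.0, 0.0, 1.0]
    for k in range(1, m):
        z[:, k] = A_lat @ z[:, k - 1]
    X = Q @ z                                         # observed snapshot matrix

    X1, X2 = X[:, :-1], X[:, 1:]                      # pairs x_k -> x_{k+1}
    U, s, Vt = np.linalg.svd(X1, full_matrices=False)
    Ur, Vr = U[:, :r], Vt[:r].T
    Atilde = Ur.T @ X2 @ Vr / s[:r]                   # reduced linear operator
    evals, W = np.linalg.eig(Atilde)
    Phi = (X2 @ Vr / s[:r]) @ W                       # exact DMD modes

    # Forecast the last snapshot from its predecessor as a consistency check.
    b = np.linalg.lstsq(Phi, X[:, -2], rcond=None)[0]
    x_pred = ((Phi * evals) @ b).real
    print("DMD eigenvalues:", np.round(evals, 3))     # ~ e^{+-0.3i} and 0.97
    print("one-step forecast error: %.2e" % np.linalg.norm(x_pred - X[:, -1]))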



3. Data Assimilation and Learning: Data assimilation is critical and has been transformative in weather and ocean forecasting. More broadly, it is applicable across the wide range of complex systems used in modeling the physical, engineering and biological sciences. The assimilation procedure ensures that the chosen dynamical models are constrained to produce realistic trajectories, or slow-manifold dynamics, that remain close to the experimental data. Although the assimilation of sparse data is at the core of modern weather and ocean prediction models, modern compressive sensing techniques have not yet been tested for reconstruction of the macro-scale. Such an approach may be critical in applications such as paleoclimate reconstruction or brain (neural) recordings, where the "measurements" are sparse and noisy. Compressive sensing has been demonstrated to be a highly effective method for performing global reconstructions using only limited measurements. In such mathematical architectures, machine learning can be used to build libraries of observable dynamics that naturally encode the history and learning of the dynamical system; i.e. the sparse mode structure used in compressive sensing can be learned through statistical training algorithms that keep such information in memory for future use in classification and/or future-state estimation.
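
The sketch below illustrates the core compressive-sensing computation referred to above: recovering a sparse vector from a small number of random linear measurements by solving an l1-regularized least-squares problem, here with a basic iterative soft-thresholding (ISTA) loop. The signal length, measurement count, sparsity level, and regularization weight are illustrative choices only, not values from the workshop description.

    # Compressive sensing sketch: sparse recovery from few random measurements
    # via ISTA for min_x 0.5*||Phi x - y||^2 + lam*||x||_1.
    import numpy as np

    rng = np.random.default_rng(2)
    n, p, k = 60, 256, 5                  # measurements, signal length, sparsity

    x_true = np.zeros(p)
    x_true[rng.choice(p, k, replace=False)] = rng.standard_normal(k)

    Phi = rng.standard_normal((n, p)) / np.sqrt(n)    # random sensing matrix
    y = Phi @ x_true                                  # sparse, noiseless samples

    lam = 1e-3
    t = 1.0 / np.linalg.norm(Phi, 2) ** 2             # step size 1/L
    x = np.zeros(p)
    for _ in range(5000):
        g = x - t * Phi.T @ (Phi @ x - y)             # gradient step on misfit
        x = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0.0)  # soft threshold

    print("relative recovery error: %.2e"
          % (np.linalg.norm(x - x_true) / np.linalg.norm(x_true)))

Here roughly n ~ k log(p/k) measurements suffice for recovery, far fewer than the p samples classical reconstruction would require.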



4. Stochastic PDE (SPDE) Constrained Optimization: New methods integrating SPDEs and optimization are necessary for parameter identification in large-scale inverse problems. Recent innovations demonstrate the value of integrating a variety of techniques from optimization with valuable stochastic viewpoints -- Bayesian, equation-free or approximate simulations, information theory (Majda) -- for large-scale problems where traditional methods such as standard MCMC, Metropolis-Hastings, or Lagrange multipliers are prohibitively expensive. These problems can require optimization over multiple variables, e.g. over model parameters that minimize norms measuring the distance of functions of the solutions from the data. They combine approaches commonly used in different areas of optimization -- including penalty methods, where the SPDE is appended to the objective function; variable projection, which alternates over smaller optimization problems in the key variables; and the use of norms appropriate for the (projected) representation of the solution -- with smart, efficient sampling that improves convergence via a Bayesian formulation, an approximate stochastic simulation (Ghattas), or an information-theoretic approach (Majda). While these methods have been developed in certain applications, the combined approach is virtually unexplored. For example, what characteristics of the problem suggest the right combination to use? Under what conditions can we expect superior speed and accuracy for large-scale stochastic optimization problems, compared with standard optimization or sampling?
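
For concreteness, the following toy sketch shows the state-elimination (reduced-space) limit of the variable-projection idea mentioned above: for each candidate parameter, the constraint -- here a deterministic, steady, 1-D diffusion equation standing in for an SPDE -- is solved exactly, and the scalar parameter is then optimized against sparse, noisy observations. The stochastic, Bayesian, and penalty machinery discussed in the text is deliberately omitted, and all problem data are synthetic.

    # PDE-constrained parameter identification sketch: recover the diffusion
    # coefficient theta in -theta*u'' = f from sparse noisy point observations,
    # eliminating the state by solving the constraint exactly.
    import numpy as np
    from scipy.optimize import minimize_scalar

    N = 100
    h = 1.0 / N
    L = (np.diag(-2.0 * np.ones(N - 1)) + np.diag(np.ones(N - 2), 1)
         + np.diag(np.ones(N - 2), -1)) / h**2    # 1-D Laplacian, Dirichlet BCs
    f = np.ones(N - 1)

    theta_true = 0.7
    u_true = np.linalg.solve(-theta_true * L, f)  # synthetic "truth" state

    rng = np.random.default_rng(3)
    obs = np.arange(4, N - 1, 10)                 # sparse observation indices
    d = u_true[obs] + 1e-4 * rng.standard_normal(obs.size)

    def misfit(theta):
        # Data misfit with the PDE constraint eliminated (state solved exactly).
        u = np.linalg.solve(-theta * L, f)
        return np.sum((u[obs] - d) ** 2)

    res = minimize_scalar(misfit, bounds=(0.1, 5.0), method="bounded")
    print("recovered theta: %.4f   true theta: %.4f" % (res.x, theta_true))

Each misfit evaluation requires a full constraint solve; it is precisely this cost, at large scale and with stochastic forcing, that motivates the penalty, sampling, and information-theoretic strategies above.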



5. Uncertainty and Fault Tolerance: Running several realizations of the encoded subspace model provides a measure of error identification and correction. Such uncertainty quantification (UQ) is critical for understanding model error and the bounds of reasonable predictions. UQ is quickly becoming one of the most meaningful model-validation techniques available, and the dimensionally reduced models proposed here give us the ability to quantify the evolution and growth of error bounds through propagation of the underlying dynamical system or equation-free architecture.
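
A minimal sketch of this ensemble viewpoint: several perturbed realizations of a reduced state are propagated through a toy two-mode model (an illustrative stand-in for a Galerkin-reduced system, not a model from the workshop), and the growth or decay of the ensemble spread serves as a proxy for the evolving error bounds.

    # Ensemble UQ sketch: propagate M perturbed realizations of a reduced
    # state and track the ensemble spread over time.
    import numpy as np

    rng = np.random.default_rng(4)
    M, T, dt = 500, 1000, 0.01

    def rhs(a):
        # Toy 2-mode reduced model (damped Duffing-type oscillator).
        return np.stack([a[1], -a[0] - 0.1 * a[1] - a[0] ** 3])

    # Ensemble of initial conditions around a nominal reduced state.
    A = np.tile([[1.0], [0.0]], (1, M)) + 0.05 * rng.standard_normal((2, M))
    print("initial spread: %.4f" % A.std(axis=1).mean())

    spread = np.empty(T)
    for k in range(T):
        A = A + dt * rhs(A)                    # forward Euler step, all members
        spread[k] = A.std(axis=1).mean()       # spread as an error-bound proxy

    print("final spread:   %.4f" % spread[-1])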



Each of the above ideas is a powerful data method in its own right. But in combination, they provide a transformative mathematical framework for modeling the behavior of complex systems.