Computability, Analysis, and Geometry (15w5005)

Arriving in Banff, Alberta on Sunday, March 8 and departing Friday, March 13, 2015

Organizers

Mark Braverman (Princeton University)

Michael Yampolsky (University of Toronto)

Objectives

The theory of computation of analytic objects is an emerging field of mathematics which has seen much exciting progress recently. Numerical simulation has become second nature in dynamics and in many branches of analysis. It is key to understanding phenomena ranging from planetary motion, to climate patterns, to market dynamics. Various numerical tools have been developed to help answer specific questions, such as predicting the weather or planning the trajectory of a satellite. However, while we have vast knowledge about the computability and complexity of discrete problems, computability theory of the continuous world is only beginning to come into its own.

Computable analysis has a venerable history, beginning with the 1937 work of Banach and Mazur (only a year after Turing’s seminal paper introduced both the Turing machine as a model of computation and the concept of a computable real number). While details may vary, generally speaking, a real function f(x) is understood to be computable if there is an algorithm which, given a good enough rational approximation of the argument x, computes an arbitrarily good rational approximation of the value f(x) (think of how a pocket calculator produces the value of an exponential, as an example). The framework of computable analysis was extensively developed over the following decades, mainly by logicians. Its application to hard problems of modern numerical analysis and dynamics is much more recent.
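To make the definition concrete, here is a minimal Python sketch (an illustration only; the function name and the restriction |x| ≤ 1 are choices made for this example) that computes the exponential of a rational argument to any requested precision 2⁻ⁿ, using exact rational arithmetic and an explicit bound on the Taylor tail:

from fractions import Fraction
from math import factorial

def exp_approx(x: Fraction, n: int) -> Fraction:
    """Return a rational q with |q - exp(x)| < 2**(-n), assuming |x| <= 1."""
    assert abs(x) <= 1
    # Partial sums of exp(x) = sum of x**k / k!.  For |x| <= 1 the tail
    # after the k-th term is bounded by 3 / (k + 1)!.
    total = term = Fraction(1)
    k = 0
    while Fraction(3, factorial(k + 1)) >= Fraction(1, 2**n):
        k += 1
        term = term * x / k
        total += term
    return total

For example, exp_approx(Fraction(1, 2), 100) returns a rational within 2⁻¹⁰⁰ of √e. A full treatment along the lines above would also specify how precise the rational approximation of the argument itself must be.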

One instructive example is the computational complexity of the Riemann mapping of a simply connected domain in the complex plane. The first complete proof of the existence of the mapping, given by Paul Koebe, was already constructive. Intuitively, it is clear that the computational hardness of constructing the mapping should correspond to the geometric complexity of the boundary of the domain. Computation of the Riemann map is important in applications ranging from classical problems of dynamics to exotic ones, such as human brain mapping. I. Binder and the two authors of the proposal have shown that the problem of computing the Riemann mapping at a point inside the domain lies in the complexity class #P, given an oracle for the boundary. The best practical algorithm is the elegant “Zipper” constructed by D. Marshall and S. Rohde. Its complexity is exponential, so its efficiency may potentially be improved. In some settings one may be interested in a speedy computation of a rough but uniform approximation of the Riemann mapping. In a beautiful recent paper of C. Bishop this question has been formalized for an n-gon, and the complexity has been shown to be O(n). All of these works present a synthesis of classical complex analysis with theoretical computer science and numerical methods.
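For orientation, in the polygonal case an explicit classical formula is available: the Schwarz–Christoffel formula represents the Riemann mapping of the unit disc onto an n-gon with interior angles α_k π as

f(z) = A + C \int_0^z \prod_{k=1}^{n} (1 - w/w_k)^{\alpha_k - 1} \, dw,

where the prevertices w_k lie on the unit circle. The computational difficulty is concentrated in determining the prevertices (the Schwarz–Christoffel “parameter problem”), which is one way to see how the cost of computing the mapping is governed by the geometry of the boundary.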

Another example of the above synthesis is a series of recent works of the authors of the proposal, I. Binder, C. Rojas, and others, on computability of Julia sets of rational maps. It is a common feature of chaotic dynamics that the orbit of a single point cannot be reliably approximated numerically, due to sensitivity to initial conditions. In the modern paradigm of dynamics, there typically exists a finite collection of attractors which absorb almost every orbit, and these attractors are the focus of numerical methods. Julia sets are perhaps the most frequently drawn mathematical objects, owing to their beauty and to the deep and difficult theory behind them. They are repellers (attractors for the inverse branches) of rational maps. Despite a plethora of numerical algorithms to draw them, the questions of their computational hardness had not been studied before these works. Surprisingly, it has turned out that even in the family of quadratic polynomials there exist computable maps with non-computable Julia sets: even though one iteration of such a dynamical system is easy to simulate numerically, its repellers are impossible to visualize. These results rely on a number of cutting-edge techniques of complex dynamics, drawing from works of A. Chéritat, X. Buff, M. Shishikura, and others. The questions of computational complexity of computable Julia sets are wide open for the most intriguing classes of maps. From the point of view of applications, it is natural to ask whether a typical map has a polynomial-time computable Julia set. While there are some encouraging results in this direction, notably a recent work of A. Dudko, the answer is not known. The generalizations to higher-dimensional families of dynamical systems, such as Hénon maps, are also not known, and pose a number of exciting and challenging problems.
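For contrast with the negative results, the following Python sketch implements the standard escape-time algorithm for quadratic Julia sets (resolution and iteration bounds are illustrative choices). For hyperbolic parameters c such pictures are provably reliable, while the non-computability results above show that no uniform guarantee of this kind can exist for all computable c:

def julia_escape_time(c: complex, grid: int = 400, max_iter: int = 200):
    """Escape-time picture of the Julia set of z -> z*z + c."""
    # Once an orbit leaves this disc, it provably escapes to infinity.
    escape_radius = max(2.0, abs(c))
    image = [[0] * grid for _ in range(grid)]
    for i in range(grid):
        for j in range(grid):
            # Pixel (i, j) corresponds to a point of the square [-2, 2] x [-2, 2].
            z = complex(-2 + 4 * j / grid, -2 + 4 * i / grid)
            count = 0
            while abs(z) <= escape_radius and count < max_iter:
                z = z * z + c
                count += 1
            # Small counts mark fast escape; count == max_iter approximates
            # membership in the filled Julia set.
            image[i][j] = count
    return image

Running this with the “Douady rabbit” parameter c ≈ -0.123 + 0.745i produces the familiar picture. The subtlety in the theorems quoted above is that for some computable parameters no stopping rule can certify such an output, no matter how max_iter is chosen.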

Polynomial root-finding algorithms are another branch of analysis in which both numerical methods and theoretical complexity bounds have generated exciting mathematics. The theoretical approach was pioneered by M. Shub and S. Smale in a series of works in the 1980s and 1990s. In 1981, S. Smale introduced the concept of an approximate zero of a polynomial: an initial value from which the iteration of Newton’s method converges to a true zero at quadratic speed. Smale’s 17th problem asks for an algorithm to find approximate roots of a typical polynomial equation in polynomial time. A probabilistic algorithm has recently been constructed by C. Beltrán and L. M. Pardo. A deterministic algorithm with running time close to polynomial was given by P. Bürgisser and F. Cucker in 2009. The dynamics of a root-finding method can in general be very sensitive to the initial value, as evidenced by beautiful fractal pictures of the dynamics of Newton’s method in the complex plane. In 2001, J. Hubbard, D. Schleicher, and S. Sutherland constructed a small set of initial values for Newton’s method (of size at most 1.11 d log² d, where d is the degree) which suffices for Newton’s method to find all of the roots. Recently M.-H. Kim, M. Martens, and S. Sutherland studied a similar set of questions for a path-finding method, proving, for instance, that on average over all polynomials and for a typical initial value, an approximate root can be found in time linear in the degree.
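The iteration at the heart of these results is easy to state; the subtlety lies entirely in the choice of starting points. A minimal Python sketch (with naive polynomial evaluation and illustrative names):

def newton_polynomial(coeffs, z0, steps=10):
    """Run Newton's method z -> z - p(z)/p'(z) for the polynomial p
    whose coefficients are listed from the constant term up."""
    def p(z):
        return sum(a * z**k for k, a in enumerate(coeffs))
    def dp(z):
        return sum(k * a * z**(k - 1) for k, a in enumerate(coeffs) if k > 0)
    z = z0
    for _ in range(steps):
        # Started from an approximate zero in Smale's sense, each step
        # roughly doubles the number of correct digits.
        z = z - p(z) / dp(z)
    return z

For instance, newton_polynomial([-2, 0, 1], 1.5) converges to √2 in a handful of steps; the works above concern how to choose the starting point, for all or for typical polynomials, so that such quadratic convergence is guaranteed.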

There is a beautiful connection between the theory of computation and differential geometry. It probably starts with the work of M. Gromov from the early 1980s, who showed that a closed manifold whose fundamental group has an algorithmically unsolvable word problem possesses infinitely many contractible closed geodesics. The interplay between geometric and computational ideas has come to fruition in a series of works of A. Nabutovsky and S. Weinberger. Using algorithmic complexity considerations, they have shown that the geometry of the “parameter space” of equivalence classes of Riemannian metrics Riem(M)/Diff(M) for every closed manifold of dimension d > 4 is spectacularly complex. To describe it, consider a scale-invariant functional on Riem(M)/Diff(M) which measures the geometric complexity of the metric, such as (sup |K|)·diam². Nabutovsky and Weinberger showed that such a functional has infinitely many local minima on the parameter space. Furthermore, these minima are not isolated: in every neighborhood of one of them there are infinitely many others. The minima are also “deep”, that is, to go from one to another along a continuous path one has to pass through a point where the value of the functional is large.

The results of Nabutovsky and Weinberger can be interpreted as a failure of geometrization: in dimensions 5 and higher there does not exist a locally optimal (from the point of view of a specific functional) Riemannian metric. It is striking that the origin of geometric complexity here is the algorithmic undecidability of recognition problems for closed manifolds. The situation is reversed in dimension 3: the success of the geometrization program, which culminated in the work of G. Perelman, implies that natural recognition questions are algorithmically decidable. This shifts the interest to questions of computational complexity. A few years ago, I. Agol, J. Hass, and W. Thurston showed that the problem of deciding whether a polygonal knot in a closed three-dimensional manifold bounds a surface of genus at most g is NP-complete. Further, S. Schleimer demonstrated that the three-sphere recognition problem lies in NP.

Similar techniques have been developed in dynamics, in the theory of postcritically finite branched coverings (or Thurston maps) in dimension 2. The works of V. Nekrashevych, L. Bartholdi, and others have developed a group-theoretic language for describing these objects. Recognition problems (such as the Douady-Hubbard “twisted rabbit problem”) were effectively resolved using this approach. A general geometrization program for Thurston maps was formulated by K. Pilgrim and recently completed by N. Selinger. This opens the way to resolving the general algorithmic recognition problem for Thurston maps. In the particular case when the geometrization of the Thurston map is a rational endomorphism, this problem was recently solved by S. Bonnot and the authors of the proposal.

Randomness is one of the most tantalizing mathematical concepts. Consider the following two sequences of “heads” and “tails”: HTHTHTHTHT... and HHTHTTTHHT... (the second one was actually produced by tossing a coin). Intuitively, it is clear that the first one is less “random” than the second, yet from the point of view of probability theory they are equally likely. The first sequence is produced by a simple rule: repeat “HT” n times. Thus, however long it is, it can be compressed to an instruction of finite length. The degree of compression which a finite string allows is codified by its Kolmogorov complexity. Its connection with randomness stems from the algorithmically effective definition of randomness due to Martin-Löf (a student of Kolmogorov). A Martin-Löf random sequence is “unpredictable”: one cannot infer its future behaviour from an initial segment. Equivalently, such a sequence is incompressible in the sense of Kolmogorov complexity. Different models of effective computation lead to a hierarchy of definitions of algorithmic randomness, a field actively developed by R. Downey, D. Hirschfeldt, and others. Algorithmic randomness also gives rise to effective ergodic theory, which studies algorithmic unpredictability of dynamics in the above sense. This field has led to some elegant interplay between computability and dynamics in the works of P. Gács, S. Galatolo, M. Hoyrup, C. Rojas, and others.
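Kolmogorov complexity itself is not computable, but any compressor gives a computable upper bound for it, which makes the contrast between the two kinds of sequences concrete. A small Python illustration (zlib standing in, crudely, for an optimal description method):

import random
import zlib

periodic = ("HT" * 50000).encode()  # 100,000 tosses following a simple rule

random.seed(0)  # pseudo-random stand-in for genuine coin tosses
coin = "".join(random.choice("HT") for _ in range(100000)).encode()

# Compressed length bounds Kolmogorov complexity from above: the periodic
# string shrinks to a few hundred bytes, while the "random" one cannot be
# pushed much below one bit per toss (about 12,500 bytes).  For a
# Martin-Löf random sequence, no algorithm can do essentially better.
print(len(zlib.compress(periodic, 9)), len(zlib.compress(coin, 9)))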

The aim of the workshop is to bring together experts working on different aspects of the rapidly developing field of the Theory of Computation in Dynamics, Analysis, and Geometry. There is a growing understanding that computational complexity considerations play a fundamental role in "analytic" disciplines. Intuitively, objects which are hard to compute should be analytically or geometrically complex. Exploring this connection leads to a beautiful interplay between theoretical computer science and analysis, and has produced powerful results in both directions. The proposed workshop will create a unique opportunity to build a new synthesis within this emerging area by bringing together established experts and top young talent in the key areas of overlap between Computational Complexity and Analysis/Geometry.