Schedule for: 22w5167 - Applied Functional Analysis
Beginning on Sunday, August 28 and ending on Friday, September 2, 2022
All times in Oaxaca, Mexico time, CDT (UTC-5).
Sunday, August 28  

14:00  23:59  Check-in begins (Front desk at your assigned hotel) 
19:30  22:00  Dinner (Restaurant Hotel Hacienda Los Laureles) 
20:30  21:30  Informal gathering (Hotel Hacienda Los Laureles) 
Monday, August 29  

07:30  08:45  Breakfast (Restaurant Hotel Hacienda Los Laureles) 
08:45  12:30  Chair (Morning, 8:45-12:30): V. Temlyakov (Online) 
08:45  09:00  Introduction and Welcome (Conference Room San Felipe) 
09:00  09:45 
Yuan Xu: Approximation and analysis in localized homogeneous space ↓ We consider approximation and localized frames on conic domains. Our approach
is based on the orthogonal structure that admits a closed-form formula for its
reproducing kernels, akin to the addition formula of spherical harmonics. Such
a formula leads to highly localized kernels that serve as the foundation for
analysis in such domains. The results will be presented in a general framework
encompassing well-studied domains such as the unit sphere and the unit ball. (Zoom) 
10:30  11:00  Coffee Break (Conference Room San Felipe) 
11:00  11:45 
Andras Kroo: Weierstrass type approximation problem for multivariate homogeneous polynomials ↓ By the celebrated Weierstrass approximation theorem, continuous functions on compact sets admit uniform polynomial approximation. The similar question of the density of homogeneous multivariate polynomials has been actively investigated in the past 10-15 years. In this talk we will give a survey of the main developments related to this problem. (Zoom) 
11:45  12:30 
Akram Aldroubi: Dynamical sampling: Source term recovery ↓ Consider the abstract IVP in a separable Hilbert space $\mathcal H$:
$$
\begin{cases}
\dot{u}(t)=Au(t)+f(t)+\eta(t)\\
u(0)=u_0,
\end{cases}
\quad t\in\mathbb R_+,\ u_0\in\mathcal H,
$$
where $t\in[0,\infty)$, ${u}: \mathbb R_+\to\mathcal H$,
$\dot{u}: \mathbb R_+\to\mathcal H$ is the time derivative of $u$, and $u_0$ is an initial condition. The goal is to recover $f$ from the measurements $\mathfrak m(t,g) = \left\langle u(t),g \right\rangle +\nu(t,g),\ t\ge 0,\ g\in G$, where $G$ is a countable subset of $\mathcal H$, $\eta$ is an unknown, but slowly varying background source, and $\nu$ is an additive noise. (Zoom) 
13:20  13:30  Group Photo (Hotel Hacienda Los Laureles) 
13:30  15:00  Lunch (Restaurant Hotel Hacienda Los Laureles) 
15:00  18:00  Chair (Afternoon): Dany Leviatan (Online) 
15:00  15:45 
Gideon Schechtman: The problem of dimension reduction of finite sets in normed spaces ↓ Given a normed space \(X\), \(C>1\) and \(n\in \mathbb{N}\), we denote by \(k_n^C(X)\) the smallest \(k\) such that every \(S\subset X\) with \(|S|=n\) admits an embedding into a \(k\)-dimensional subspace of \(X\) which distorts the mutual distances in \(S\) by at most a factor of \(C\). We shall survey the little that is known about estimating \(k_n^C(X)\) for different spaces \(X\). The latest is a result of Assaf Naor, Gilles Pisier and myself: Let \(S_1\) denote the Schatten-von Neumann trace class, i.e., the Banach space of all compact operators \(T:\ell_2\to \ell_2\) whose trace class norm \(\|T\|_{S_1}=\sum_{j=1}^\infty\sigma_j(T)\) is finite, where \(\{\sigma_j(T)\}_{j=1}^\infty\) are the singular values of \(T\). We prove that for each \(C>1\), \(k_n^C(S_1)\) has a lower bound which is a positive power of \(n\). This extends a result of Brinkmann and Charikar (2003), who proved a similar result with \(\ell_1\) replacing \(S_1\). It stands in sharp contrast with the Johnson-Lindenstrauss lemma (1984), which says that the situation in \(\ell_2\) is very different. (Zoom) 
16:00  16:30  Coffee Break (Conference Room San Felipe) 
16:30  17:15 
Alexander Litvak: The minimal dispersion in the unit cube. ↓ We improve known upper bounds for the minimal dispersion of a point set in the unit cube. Our bounds are sharp up to logarithmic factors. The talk is partially based on a joint work with G. Livshyts. (Zoom) 
17:15  18:00 
Javad Mashreghi: A nonrecoverable signal ↓ Given an analytic function \(f\) on the open unit disc \(\mathbb{D}\), or an integrable function on its boundary \(\mathbb{T} = \partial \mathbb{D}\), our first attempt to approximate \(f\) is via its partial Taylor sums \(s_n(f)=\sum_{k=0}^{n}\hat{f}(k)z^k\) or partial Fourier sums \(s_n(f)=\sum_{k=-n}^{n}\hat{f}(k)e^{ikt}\). If this first direct approach fails, we exploit several well-developed summation methods, e.g., Abel, Borel, Ces\`{a}ro, Hausdorff, H\"{o}lder, Lindel\"{o}f, N\"{o}rlund, etc., to come up with an appropriate combination of the partial sums which converges to the original function. More explicitly, we consider the weighted sums \(\sigma_n(f)=\sum_{k=0}^{n}w_{nk}\hat{f}(k)z^k\) or \(\sigma_n(f)=\sum_{k=-n}^{n}w_{nk}\hat{f}(k)e^{ikt}\).
While, in many cases, this procedure is a success story, we may naturally wonder whether for each space an appropriate summability method via partial Taylor or Fourier sums can always be designed. We show that, unfortunately, this is not always feasible. We construct a Hilbert space \(\mathcal{H}\) of analytic functions on \(\mathbb{D}\) with the following properties: 1) analytic polynomials are dense in \(\mathcal{H}\); 2) odd polynomials are not dense in the subspace of odd functions in \(\mathcal{H}\). Hence, in particular, there is an \(f \in \mathcal{H}\) such that no lower-triangular summability method can recover \(f\) from its partial Taylor sums \(s_n(f)\). By the same token, there is a Hilbert space \(\mathcal{H}\) of integrable functions on \(\mathbb{T}\) such that 1) trigonometric polynomials are dense in \(\mathcal{H}\); 2) odd trigonometric polynomials are not dense in the subspace of odd functions in \(\mathcal{H}\). Hence, as an outcome, there is a signal \(f \in \mathcal{H}\) such that no lower-triangular summability method can recover \(f\) from its partial Fourier sums \(s_n(f)\). (Zoom) 
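To make the weighted sums \(\sigma_n(f)\) concrete, here is a small sketch (not from the talk; the square-wave target and the Cesàro/Fejér weights \(w_{nk}=1-|k|/(n+1)\) are illustrative assumptions). It shows the classical effect that motivates summability: the plain partial sums overshoot at a jump (Gibbs phenomenon), while the Cesàro means stay bounded by the function's sup:

```python
import numpy as np

# target: the square wave f(t) = sign(sin t), whose Fourier coefficients are
# fhat(k) = 2/(i*pi*k) for odd k and 0 otherwise
def fhat(k):
    if k % 2 == 0:
        return 0.0
    return 2.0 / (1j * np.pi * k)

def partial_sum(t, n):
    """s_n(f)(t) = sum_{k=-n}^{n} fhat(k) e^{ikt}"""
    return sum(fhat(k) * np.exp(1j * k * t) for k in range(-n, n + 1)).real

def cesaro_mean(t, n):
    """sigma_n(f)(t) with Fejer weights w_{nk} = 1 - |k|/(n+1)"""
    return sum((1 - abs(k) / (n + 1)) * fhat(k) * np.exp(1j * k * t)
               for k in range(-n, n + 1)).real

t = np.linspace(-np.pi, np.pi, 401)
n = 50
print(partial_sum(t, n).max())   # exceeds 1: Gibbs overshoot at the jump
print(cesaro_mean(t, n).max())   # stays <= 1: Fejer kernel is nonnegative
```

The talk's point is that for some Hilbert spaces no choice of lower-triangular weights \(w_{nk}\), Fejér or otherwise, can restore convergence.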
19:00  21:00  Dinner (Restaurant Hotel Hacienda Los Laureles) 
Tuesday, August 30  

07:30  09:00  Breakfast (Restaurant Hotel Hacienda Los Laureles) 
09:00  12:30  Chair (morning): Alexander Litvak (Online) 
09:00  09:45 
Mario Ullrich: On optimal \(L_2\)approximation with function values. ↓ Let \(F\subset L_2\) be a class of complexvalued functions on a set \(D\),
such that, for all \(x\in D\),
point evaluation \(f\mapsto f(x)\) is a continuous linear functional.
We study the \(L_2\)approximation of functions from \(F\)
and
want to compare the power of function values with the power of arbitrary
linear information.
To be precise,
the sampling number \(g_n(F)\) is the minimal worst-case error (in \(F\))
that can be achieved with \(n\) function values,
whereas the \emph{approximation number} (or Kolmogorov width)
\(d_n(F)\) is the minimal worst-case error
that can be achieved with \(n\) pieces of arbitrary linear information
(like derivative values or Fourier coefficients).
Here, we report on recent developments in this problem and,
in particular, explain how the individual contributions from~[1,2,3,4]
lead to the following statement:\\
There is a universal constant \(c\in\mathbb{N}\) such that
the sampling numbers
of the unit ball \(F\) of every separable reproducing kernel Hilbert space
are bounded by
\[ g_{cn}(F) \,\le\, \sqrt{\frac{1}{n}\sum_{k\geq n} d_k(F)^2}.\]
We also obtain similar upper bounds for more general classes \(F\), and
provide examples where our bounds are attained up to a constant.
For example, if we assume that \(d_n(F) \asymp n^{-\alpha} (\log n)^\beta\)
for some \(\alpha>1/2\) and \(\beta \in \mathbb{R}\),
then we obtain
\[
g_n(F) \,\asymp\, d_n(F),
\]
showing that function values are (up to constants) as powerful as
arbitrary linear information.
The results rely on the solution to the Kadison-Singer problem, which we extend to the
subsampling of a sum of infinite rankone matrices.
(Zoom) 
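As a quick sanity check (not taken from the abstract): assuming the polynomial decay \(d_k(F)\asymp k^{-\alpha}\) with \(\alpha>1/2\), the displayed tail-sum bound indeed reproduces the stated equivalence \(g_n(F)\asymp d_n(F)\):
\[
\frac{1}{n}\sum_{k\ge n} d_k(F)^2 \;\lesssim\; \frac{1}{n}\sum_{k\ge n} k^{-2\alpha} \;\lesssim\; \frac{1}{n}\cdot\frac{n^{1-2\alpha}}{2\alpha-1} \;=\; \frac{n^{-2\alpha}}{2\alpha-1},
\]
so \(g_{cn}(F)\lesssim n^{-\alpha}\asymp d_n(F)\); the logarithmic factors in the abstract are absorbed in the same way.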

09:45  10:30  Boris Kashin: On some problems joint for function theory and theoretical computer science (Zoom) 
10:30  11:00  Coffee Break (Conference Room San Felipe) 
11:00  11:45 
Yuri Malykhin: Widths and rigidity ↓ We will consider Kolmogorov widths of finite systems of functions. It is known
that any orthonormal system \(\{f_1,\ldots,f_N\}\) is \textit{rigid} in \(L_2\),
i.e. it can't be approximated by linear spaces of dimension essentially smaller
than \(N\). This is not always true in weaker metrics; e.g., the first
\(N\) Walsh functions can be \(o(1)\)-approximated by linear spaces of dimension
\(o(N)\) in \(L_p\), \(p<2\). We will give some sufficient conditions for rigidity in
these norms.
We will also discuss the connections between widths and the notion of
matrix rigidity, and give some related positive results on the approximation of the
Fourier and Walsh systems. (Zoom) 
11:45  12:30 
Felipe Gonçalves: Bandlimited extremal functions in higher dimensions ↓ We will talk about some of the challenging obstructions in constructing higher-dimensional bandlimited functions with sign constraints in physical space and support constraints in frequency space. We will give several applications of such ``magic'' functions in a variety of contexts. (Zoom) 
13:30  15:00  Lunch (Restaurant Hotel Hacienda Los Laureles) 
15:00  18:00  Chair (afternoon): Gustavo Garrigos (Online) 
15:00  15:45 
Andriy Prymak: Optimal polynomial meshes exist on any multivariate convex domain ↓ We show that every convex body \(\Omega\) in \(\mathbb{R}^d\) possesses optimal polynomial meshes, which confirms a conjecture by A. Kroo. Namely, there exists a sequence \(\{Y_n\}_{n\ge1}\) of finite subsets of \(\Omega\) such that the cardinality of \(Y_n\) is at most \(C_1 n^d\), while for any polynomial \(P\) of total degree \(\le n\) in \(d\) variables \(\|P\|_{\Omega}\le C_2 \|P\|_{Y_n}\), where \(\|P\|_X:=\sup\{|P(x)|:x\in X\}\) and \(C_1\), \(C_2\) are positive constants depending only on \(\Omega\). This is a joint work with Feng Dai. (Zoom) 
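A one-dimensional illustration of a polynomial mesh (an assumption-laden sketch: univariate \(\Omega=[-1,1]\) with the classical Ehlich-Zeller Chebyshev grid, not the multivariate construction of the talk). With roughly \(2n\) Chebyshev points, \(\|P\|_\Omega\le\sec(\pi/4)\,\|P\|_{Y_n}\approx 1.42\,\|P\|_{Y_n}\) for every polynomial of degree \(\le n\):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20                                       # polynomial degree
N = 2 * n                                    # mesh parameter: ~2n Chebyshev points
Y = np.cos(np.pi * np.arange(N + 1) / N)     # Ehlich-Zeller grid on [-1, 1]
fine = np.cos(np.linspace(0, np.pi, 20001))  # dense grid as a proxy for the sup norm

worst = 0.0
for _ in range(200):
    c = rng.standard_normal(n + 1)           # random degree-n poly, Chebyshev basis
    p = np.polynomial.chebyshev.Chebyshev(c)
    ratio = np.max(np.abs(p(fine))) / np.max(np.abs(p(Y)))
    worst = max(worst, ratio)

print(worst)  # stays below the Ehlich-Zeller bound sec(pi/4) ~ 1.415
```

The point of the talk is that meshes of comparable quality, with cardinality \(O(n^d)\), exist on every convex body in \(\mathbb{R}^d\).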
15:45  16:05 
Aleh (Oleg) Asipchuk: Construction of exponential Riesz bases on split intervals ↓ Let $I$ be a union of intervals of total length $1.$ It is well known that exponential bases exist on $L^2(I),$ but explicit expressions for such bases are only known in special cases. In this work, we construct exponential Riesz bases on $L^2(I)$ with some mild assumptions on the gaps between the intervals. We also generalize Kadec's stability theorem in some special and significant cases. (Zoom) 
16:05  16:35  Coffee Break (Conference Room San Felipe) 
16:30  17:15 
Bin Han: Generalized Hermite subdivision schemes and spline wavelets on intervals ↓ Hermite and Birkhoff interpolation are classical topics in approximation theory and are useful in CAGD and numerical PDEs, due to their connections to spline theory and wavelets. In this talk, we shall introduce the notion of generalized Hermite subdivision schemes, characterize their convergence and smoothness, and then discuss their connections with spline multiwavelets having the interpolation properties. We provide some examples of generalized Hermite subdivision schemes having the Hermite/Birkhoff interpolation and spline properties. As an application, we first construct spline multiwavelets on the real line from some generalized Hermite subdivision schemes, adapt them to the unit interval through a recent general method for adapting any wavelets to bounded intervals, and then illustrate their application to the cavity problem of the Helmholtz equation. Some of this is joint work with M. Michelle. (Zoom) 
17:15  18:00 
Tino Ullrich: Constructive sparsification of finite frames with application in optimal function recovery ↓ We present a new constructive subsampling technique for finite frames to extract (almost) minimal plain (non-weighted) subsystems which preserve a good lower frame bound. The technique is based on a greedy-type selection of frame elements to positively influence the spectrum of rank-one updates of a matrix. It is a modification of the 2009 algorithm by Batson, Spielman and Srivastava and produces an optimal-size subsystem (up to a prescribed oversampling factor) without additional weights. It moreover achieves this in polynomial time and avoids the Weaver subsampling (based on the Kadison-Singer theorem) which has been applied in earlier work, yielding rather bad oversampling constants. In the second part of the talk we give applications to multivariate function recovery. Here we consider the particular problem of \(L_2\) and \(L_\infty\) recovery from sample values. In this context, the presented subsampling technique allows one to determine optimal (in cardinality) node sets even suitable for plain least squares recovery. It can be applied, for instance, to reconstruct functions in dominating mixed-smoothness Sobolev spaces, where we are able to discretize trigonometric polynomials with frequencies from a hyperbolic cross with nodes coming from an implementable subsampling procedure. In addition, we may apply this to hyperbolic cross wavelet subspaces. Numerical experiments illustrate the theoretical findings.
Joint work with: Felix Bartel (Chemnitz), Martin Schaefer (Chemnitz) (Zoom) 
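A naive sketch of greedy frame subsampling (illustrative only: it maximizes the smallest eigenvalue of the rank-one-updated matrix directly, a crude stand-in for the Batson-Spielman-Srivastava-type potential used in the actual algorithm; the frame and oversampling factor are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
d, N = 8, 200
F = rng.standard_normal((N, d))
F /= np.linalg.norm(F, axis=1, keepdims=True)   # a frame of N unit vectors in R^d

def greedy_subsample(F, m):
    """Greedily pick m rows, each step maximizing the smallest eigenvalue
    of the running sum of rank-one matrices f_i f_i^T."""
    S = np.zeros((F.shape[1], F.shape[1]))
    chosen = []
    for _ in range(m):
        best, best_val = None, -np.inf
        for i in range(len(F)):
            if i in chosen:
                continue
            val = np.linalg.eigvalsh(S + np.outer(F[i], F[i]))[0]
            if val > best_val:
                best, best_val = i, val
        chosen.append(best)
        S += np.outer(F[best], F[best])
    return chosen, S

chosen, S = greedy_subsample(F, m=3 * d)        # oversampling factor 3 (assumed)
lower = np.linalg.eigvalsh(S)[0]
print(lower)   # positive lower frame bound from only 3d of the N vectors
```

The talk's algorithm achieves such a lower frame bound with provable constants and in polynomial time; this brute-force variant only illustrates the rank-one-update viewpoint.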
19:00  21:00  Dinner (Restaurant Hotel Hacienda Los Laureles) 
Wednesday, August 31  

07:30  09:00  Breakfast (Restaurant Hotel Hacienda Los Laureles) 
09:00  13:30  Free Morning (Oaxaca) 
13:30  15:00  Lunch (Restaurant Hotel Hacienda Los Laureles) 
15:00  17:45  Chair (afternoon): Gideon Schechtman (Online) 
15:00  15:45 
Vladimir Temlyakov: Sampling discretization of the uniform norm ↓ Discretization of the uniform norm of functions from a given finite-dimensional subspace of continuous functions will be discussed. We will pay special attention to the case of trigonometric polynomials with frequencies from an arbitrary finite set with fixed cardinality. We will discuss the fact that for any \(N\)-dimensional subspace of the space of continuous functions it is sufficient to use \(e^{CN}\) sample points for an accurate upper bound for the uniform norm. Previously known results show that one cannot improve on the exponential growth of the number of sampling points for a good discretization theorem in the uniform norm.
Also, we will present a general result which connects the upper bound on the number of sampling points in the discretization theorem for the uniform norm with the best $m$-term bilinear approximation of the Dirichlet kernel associated with the given subspace.
We illustrate the application of our technique on the example of trigonometric polynomials.
The talk is based on a joint work with B. Kashin and S. Konyagin. (Zoom) 
15:45  16:15  Coffee break (Conference Room San Felipe) 
16:15  17:00 
Dany Leviatan: Coconvex approximation of periodic functions ↓ Let $\widetilde C$ be the space of continuous $2\pi$-periodic functions $f$, endowed with the uniform norm
$\|f\|:=\max_{x\in\mathbb R}|f(x)|$, and denote by $\omega_k(f,t)$ the $k$-th modulus of smoothness of $f$. Denote by $\widetilde C^r$ the subspace of $r$ times continuously differentiable functions $f\in\widetilde C$, and let $\mathbb T_n$ be the set of trigonometric polynomials $T_n$ of degree $\le n$ (that is, of order $\le 2n+1$).
Given a set $Y_s:=\{y_i\}_{i=1}^{2s}$ of $2s$ points, $s\ge1$, such that
$$-\pi \leq y_1 < y_2 <\cdots < y_{2s}<\pi,$$
let $f\in\widetilde C^r$, $r\ge3$, be a function that changes convexity exactly at the points of $Y_s$, namely, the points of $Y_s$ are all the inflection points of $f$. We wish to approximate $f$ by trigonometric polynomials which are coconvex with it, that is, satisfy
\[
f''(x)T_n''(x)\ge0,\quad x\in\mathbb R.
\]
We prove, in particular, that if $r\ge 3$, then for every $k,s\ge1$, there exists a sequence $\{T_n\}_{n=N}^\infty$, $N=N(r,k,Y_s)$, of trigonometric polynomials $T_n\in\mathbb T_n$, coconvex with $f$, such that
$$
\|f-T_n\|\le \frac{c(r,k,s)}{n^r}\omega_k(f^{(r)},1/n).
$$
It is known that one may not take $N$ independent of $Y_s$. (Zoom) 
17:00  17:25 
Laura De Carli: Weaving Riesz bases, and piecewise weighted frames ↓ This talk consists of two parts loosely connected to one another. In the first part we discuss the properties of a family of Riesz bases on a separable Hilbert space \(H\) obtained in the following way: For every \(N>1 \) we let \[ B_N= \{w_j \}_{j=1}^N \bigcup \{v_j \}_{j=N+1}^\infty, \]
where \( \{v_j \}_{j=1}^\infty\) is a Riesz basis of \(H\) and \(B= \{w_j \}_{j=1}^\infty \) is a set of unit vectors. We find necessary and sufficient conditions that ensure that the \(B_N\) and \(B\) are Riesz bases, and we apply our results to the construction of exponential bases on domains of \(L^2\).
In the second part of the talk we present results on weighted Riesz bases and frames in finite- or infinite-dimensional Hilbert spaces, with piecewise constant weights. We use our results to construct tight frames in finite-dimensional Hilbert spaces. (Zoom) 
17:25  17:50 
Kristina Oganesyan: Hardy-Littlewood theorem in two dimensions ↓ We prove the Hardy-Littlewood theorem in two dimensions for functions whose Fourier coefficients obey general monotonicity conditions and, importantly, are not necessarily positive. The sharpness of the result is given by a counterexample, which shows that if one slightly extends the considered class of coefficients, the Hardy-Littlewood relation fails. (Zoom) 
19:00  21:00  Dinner (Restaurant Hotel Hacienda Los Laureles) 
Thursday, September 1  

07:30  09:00  Breakfast (Restaurant Hotel Hacienda Los Laureles) 
09:00  12:30  Chair (Morning): Tino Ullrich (Online) 
09:00  09:45 
Qi Ye: Machine Learning in Banach Spaces: A Black-box or White-box Method? ↓ In this talk, we study the theory of regularized learning for generalized data in Banach spaces, including representer theorems, approximation theorems, and convergence theorems. Specifically, we combine data-driven and model-driven methods to study new algorithms and theorems of regularized learning. Usually the data-driven and model-driven methods are used to analyze black-box and white-box models, respectively. In the spirit of the Tai Chi diagram, we use the discrete local information of the black-box and white-box models to construct global approximate solutions by regularized learning. Our original ideas are inspired by eastern philosophy such as the golden mean. The work on regularized learning for generalized data provides another road to study the algorithms of machine learning.
(Zoom)

09:45  10:30 
Dmitriy Bilyk: Discrete minimizers of energy integrals ↓ It is quite natural to expect that minimization of pairwise interaction energies leads to uniform distributions, at least for "nice" kernels. However, the opposite effect occurs in many interesting examples, especially for attractive-repulsive energies or when the repulsion is very weak: minimizing measures are discrete (or at least are very non-uniform, e.g. supported on "thin" or lower-dimensional sets). We shall discuss some results related to this curious phenomenon and its relation to analysis, signal processing, discrete geometry, etc. (Zoom) 
10:30  11:00  Coffee Break (Conference Room San Felipe) 
11:00  11:45 
Egor Kosov: New bounds in the problem of sampling discretization of \(L^p\) norms ↓ In the talk we consider the problem of discretizing \(L^p\) norms of functions from a given finite-dimensional subspace by evaluating the function at a certain finite set of points. We mostly consider the cases \(p=1\) and \(p=2\) and present some new upper bounds for the number of points sufficient for such a discretization. In addition, we discuss some general ideas used to obtain these bounds. (Zoom) 
11:45  12:30 
Yeli Niu: Jackson inequality on the unit sphere $\mathbb{S}^d$ with dimension-free constant ↓ This is joint work with Feng Dai. Let $E_n(f)_p$ denote the rate of approximation by spherical polynomials of degree at most $n$ in the $L^p$-metric on the $d$-dimensional unit sphere $\mathbb{S}^d$. Let $\omega^r(f, t)_{p}$ denote the $r$-th order modulus of smoothness on the sphere $\mathbb{S}^d$ introduced by Zeev Ditzian using the group of rotations. We prove the following Jackson inequality on the sphere $\mathbb{S}^d$: for each positive integer $r$ and every $1\leq p\leq \infty$,
$$ E_n(f)_p\leq C_r\, \omega^r\Big(f, \frac{d^3}{n}\Big)_{p},$$
with the constant $C_r$ depending only on $r$. The key point here is that the constant $C_r$ is independent of the dimension $d$. The Jackson inequality on the sphere $\mathbb{S}^d$ was previously established by Zeev Ditzian with a constant depending on $d$ and $r$, but going to $\infty$ exponentially fast as $d\to \infty$. For $p=\infty$ and the first-order modulus of smoothness (i.e., $r=1$), the Jackson inequality with dimension-free constant on the sphere was established by D. J. Newman and H. S. Shapiro in 1964, who also pointed out that their proof didn't work for higher-order moduli of smoothness. (Zoom) 
13:30  15:00  Lunch (Restaurant Hotel Hacienda Los Laureles) 
15:00  18:00  Chair (Afternoon): Dmitriy Bilyk (Online) 
15:00  15:45 
Ben Adcock: Is Monte Carlo a bad sampling strategy for learning smooth functions in high dimensions? ↓ This talk concerns the approximation of smooth, high-dimensional functions on bounded hypercubes from limited samples using polynomials. This task lies at the heart of many applications in computational science and engineering, notably those arising from parametric modelling and computational uncertainty quantification. It is common to use Monte Carlo sampling in such applications, so as not to succumb to the curse of dimensionality. However, it is well known that such a strategy is theoretically suboptimal. Specifically, there are many polynomial spaces of dimension \(n\) for which the sample complexity scales log-quadratically, i.e., like \(c \cdot n^2 \cdot \log(n)\) as \(n \rightarrow \infty\). This well-documented phenomenon has led to a concerted effort over the last decade to design improved, in fact, near-optimal strategies, whose sample complexities scale log-linearly, or even linearly, in \(n\).
Paradoxically, in this talk we demonstrate that Monte Carlo is actually a perfectly good strategy in high dimensions, despite this apparent suboptimality. We first document this phenomenon empirically via several numerical examples. Next, we present a theoretical analysis that resolves this seeming contradiction for the class of \textit{\((\bm{b},\varepsilon)\)-holomorphic} functions of infinitely many variables. We show that there is a least-squares approximation based on \(m\) Monte Carlo samples whose error decays algebraically fast in \(m/\log(m)\), with a rate that is the same as that of the best \(n\)-term polynomial approximation. This result is non-constructive, since it assumes knowledge of a suitable polynomial subspace (depending on \(\bm{b}\)) in which to compute the approximation. Hence, we then present a constructive scheme based on compressed sensing that achieves the same rate, subject to a slightly stronger assumption on \(\bm{b}\) and a larger polylogarithmic factor. This scheme is practical, and numerically performs as well as or better than well-known adaptive least-squares schemes.
Finally, while most of our results concern polynomials, we also demonstrate that the same results can be achieved via deep neural networks with standard training procedures.
Overall, our findings demonstrate that Monte Carlo sampling is eminently suitable for smooth function approximation tasks on bounded domains when the dimension is sufficiently high. Hence, the benefits of state-of-the-art improved sampling strategies seem to be generically limited to lower-dimensional settings.
This is joint work with Simone Brugiapaglia, Juan M. Cardenas, Nick Dexter and Sebastian Moraga.
(Zoom)
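A toy illustration of Monte Carlo least-squares polynomial approximation in moderately high dimension (the target function, the small additive-quadratic basis, and the sample budget below are assumptions for illustration, not the \((\bm{b},\varepsilon)\)-holomorphic setting of the talk):

```python
import numpy as np

rng = np.random.default_rng(3)
dim = 10                                   # dimension of the cube [-1, 1]^dim

def f(X):
    # a smooth target of 10 variables (assumed for illustration)
    return np.exp(-np.sum(X**2, axis=1) / dim)

def features(X):
    # tiny polynomial space: constant, linear, and per-coordinate squares
    return np.hstack([np.ones((len(X), 1)), X, X**2])

n = features(np.zeros((1, dim))).shape[1]  # n = 2*dim + 1 basis functions
m = 4 * n                                  # Monte Carlo sample budget

X = rng.uniform(-1, 1, size=(m, dim))      # Monte Carlo (uniform) sample points
coef, *_ = np.linalg.lstsq(features(X), f(X), rcond=None)

# test error on fresh Monte Carlo points
Xt = rng.uniform(-1, 1, size=(2000, dim))
err = np.sqrt(np.mean((features(Xt) @ coef - f(Xt))**2))
print(err)   # small RMS error from only ~4x as many samples as basis functions
```

Even this crude sketch shows the talk's theme in miniature: in ten dimensions, plain uniform random sampling with a modest oversampling factor already yields a stable least-squares fit.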

16:00  16:30  Coffee Break (Conference Room San Felipe) 
16:30  17:15 
Gustavo Garrigos: Recent results in Weak Chebyshev Greedy Algorithms ↓ The Weak Chebyshev Greedy Algorithm (WCGA) is a generalization to Banach spaces of the popular Orthogonal Matching Pursuit.
For the latter, a fundamental theorem by T. Zhang establishes the optimal recovery of \(N\)-sparse signals after \(O(N)\) iterations, under suitable RIP conditions on the dictionary. For the WCGA, however, several questions remain to be investigated. In 2014, Temlyakov proved a deep theorem establishing Lebesgue-type inequalities for the WCGA, which guarantee stable recovery after \(\phi(N)=O(N^a)\) iterations, with the exponent \(a\geq 1\) depending on the geometry of the Banach space, via the power type of its modulus of smoothness, as well as on properties of the dictionary (which generalize RIP).
In this talk we present recent work on Lebesgue-type inequalities for the WCGA, which extends the above theorem of Temlyakov. We obtain a new bound for the number of iterations \(\phi(N)\) in terms of the modulus of convexity of the dual space and similar properties of the dictionary, where this time the parameters are no longer necessarily of power type. In particular, when applied to the spaces \(L^p(\log L)^a\) with \(p>1\) and \(a>0\), we show that \(\phi(N)= O(N \log\log N)\) iterations suffice. (Zoom) 
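Since the WCGA generalizes Orthogonal Matching Pursuit, a minimal OMP sketch may help fix ideas (the Gaussian dictionary and sparsity level are illustrative assumptions; exact recovery is typical in this regime but not guaranteed):

```python
import numpy as np

rng = np.random.default_rng(4)
d, n, s = 64, 256, 5
D = rng.standard_normal((d, n))
D /= np.linalg.norm(D, axis=0)            # dictionary of unit-norm atoms

support = rng.choice(n, size=s, replace=False)
x_true = np.zeros(n)
x_true[support] = rng.standard_normal(s)
y = D @ x_true                            # an s-sparse signal

def omp(D, y, iters):
    """Orthogonal Matching Pursuit: greedily pick the atom most correlated
    with the residual, then re-project y onto the span of all chosen atoms."""
    idx, r = [], y.copy()
    for _ in range(iters):
        idx.append(int(np.argmax(np.abs(D.T @ r))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        r = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

x_hat = omp(D, y, iters=s)
print(np.linalg.norm(y - D @ x_hat))      # residual after s greedy steps
```

In Zhang's regime this recovers \(N\)-sparse signals in \(O(N)\) steps; the WCGA replaces the inner-product correlation and the least-squares projection with their Banach-space analogues, which is where the iteration counts \(\phi(N)\) of the talk come from.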
17:15  18:00 
Ding-Xuan Zhou: Approximation Theory of Structured Deep Neural Networks ↓ Deep learning has been widely applied and has brought breakthroughs in speech recognition, computer vision, natural language processing, and many other domains. The deep neural network architectures involved and the associated computational issues have been well studied in machine learning. But a theoretical foundation for understanding the modelling, approximation, or generalization ability of deep learning models with these network architectures is lacking. One family of structured neural networks is deep convolutional neural networks (CNNs) with convolutional structures. The convolutional architecture gives essential differences between deep CNNs and fully connected neural networks, and the classical approximation theory for fully connected networks developed around 30 years ago does not apply. This talk describes an approximation theory of deep CNNs and related structured deep neural networks. (Zoom) 
19:00  21:00  Dinner (Restaurant Hotel Hacienda Los Laureles) 
Friday, September 2  

07:30  09:00  Breakfast (Restaurant Hotel Hacienda Los Laureles) 
09:00  11:45  Chair (morning): Yuan Xu (Online) 
09:00  09:45 
Han Feng: Generalization Analysis of deep neural networks for Classification ↓ Deep learning based on neural networks is extremely efficient in solving classification problems in speech recognition, computer vision, and many other fields. But there is not enough theoretical understanding of this topic, especially of the generalization ability of the induced optimization algorithms. In this talk, we shall present a mathematical framework for binary classification problems. For target functions associated with a convex loss function, we provide rates of \(L^p\)-approximation and then present generalization bounds and learning rates for the excess misclassification error of the deep neural network classification algorithm. Our analysis is based on efficient integral discretization and other tools from approximation theory. (Zoom) 
09:45  10:30 
Martin Buhmann: Strict Positive Definiteness of Convolutional and Axially Symmetric Kernels ↓ We study new sufficient conditions of strict positive definiteness for generalisations of radial basis and other kernels on multidimensional spheres which are no longer radially symmetric but possess specific coefficient structures. The results use the series expansion of the kernel in spherical harmonics. The kernels either have a convolutional form or are axially symmetric with respect to one axis.
(Zoom)

10:30  11:00  Coffee Break (Conference Room San Felipe) 
11:00  11:45 
Janin Jäger: Strict positive definiteness: From compact Riemannian manifolds to the sphere ↓ Isotropic positive definite functions are used in approximation theory and are, for example, applied in geostatistics and physiology. They are also of importance in statistics, where they occur as correlation functions of homogeneous random fields on spheres. We study a class of functions applicable for interpolation of arbitrary scattered data on \(\mathbb{M}^{d}\) by linear combinations of shifts of an isotropic basis function \(\phi\), where \(\mathbb{M}^{d}\) is a compact Riemannian manifold.
A class of functions for which the resulting interpolation problem is uniquely solvable for any distinct point set \(\Xi\subset \mathbb{M}^{d}\) and arbitrary \(d\) is the class of strictly positive definite kernels \(SPD(\mathbb{M}^{d})\).
For kernels possessing a certain series expansion in eigenfunctions of the Laplace-Beltrami operator on \(\mathbb{M}^{d}\), we derive a characterisation of this class: first for general compact Riemannian manifolds, then for homogeneous manifolds, and finally for two-point homogeneous manifolds and the sphere (see \cite{Buhmann2022}).
For the special case of \(\mathbb{S}^{d-1}\), the results extend the characterisation for radial kernels from \cite{Chen2003}. For this case, we derive conditions showing that non-radial kernels can be strictly positive definite while possessing significantly fewer positive coefficients, in the given expansion, compared to radially symmetric kernels (see \cite{Guella2022}).
\begin{thebibliography}{99}
\bibitem{Chen2003} Chen, D., Menegatto, V. A., \& Sun, X. (2003). A Necessary and Sufficient Condition for Strictly Positive Definite Functions on Spheres. \textit{Proceedings of the AMS}, 131(9), 2733–2740.
\bibitem{Guella2022} Guella, J. C., \& J\"ager, J. (2022). Strictly positive definite non-isotropic kernels on two-point homogeneous manifolds: The asymptotic approach. \textit{ArXiv e-prints}, arXiv:2205.07396.
\bibitem{Buhmann2022} Buhmann, M., \& Jäger, J. (2022). Strict positive definiteness of convolutional and axially symmetric kernels on d-dimensional spheres. \textit{Journal of Fourier Analysis and Applications}, 28(3), 125.
\end{thebibliography} (Zoom) 
12:00  14:00  Lunch (Restaurant Hotel Hacienda Los Laureles) 