Approximation classes of tree tensor networks
Anthony Nouy 1
1 : Laboratoire de Mathématiques Jean Leray
Nantes Université, École Centrale de Nantes, CNRS : UMR6629

Tree tensor networks (TTNs) are prominent model classes for the approximation of high-dimensional functions in computational and data science. After an introduction to approximation tools based on the tensorization of functions and TTNs, we introduce their approximation classes and present some recent results on their properties.
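As a rough illustration of the tensorization idea (a sketch under simplifying assumptions, not the precise construction of the talk): a univariate function sampled on a dyadic grid of 2^d points can be reshaped into a d-way tensor indexed by the binary digits of the grid point, and low-rank structure of this tensor is what TTN approximation exploits. The function `np.exp` and the tolerance below are illustrative choices.

```python
import numpy as np

# Tensorization sketch: sample f on a dyadic grid of 2**d points in [0, 1)
# and reshape the sample vector into a d-way tensor of shape (2,)*d,
# indexing each grid point by its d binary digits.
d = 10
x = np.arange(2**d) / 2**d          # dyadic grid points
f = np.exp(x)                        # a smooth test function (illustrative choice)
T = f.reshape((2,) * d)              # tensorized representation of f

# One matricization (first 5 digits vs. last 5 digits); its rank bounds the
# tensor-network rank at the corresponding edge of the tree.
M = T.reshape(2**5, 2**5)
s = np.linalg.svd(M, compute_uv=False)
numerical_rank = int(np.sum(s > 1e-12 * s[0]))
print(numerical_rank)
```

Here exp factorizes over the binary digits, exp(x) = prod_k exp(b_k 2^{-k}), so every matricization has rank 1; less structured functions yield larger (but often slowly growing) ranks, which is what makes TTN approximation effective.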
In particular, we show that classical smoothness (Besov) spaces are continuously embedded in TTN approximation classes. For such spaces, TTNs achieve the (near-)optimal rates usually attained by classical approximation tools, but without requiring the tool to be adapted to the regularity of the function. The use of deep networks is shown to be essential for obtaining this property. Moreover, exploiting sparsity of the tensors allows one to obtain the optimal rates achieved by classical nonlinear approximation tools, or to better exploit structured smoothness (anisotropic or mixed) for high-dimensional approximation.
We also show that approximation classes of tensor networks are not contained in any Besov space, unless one restricts the depth of the tensor network. This again reveals the importance of depth and the potential of tensor networks for approximation or learning tasks involving functions beyond standard regularity classes. In particular, it is shown that some discontinuous or even nowhere differentiable functions can be approximated with exponential convergence rates, and that some classes of compositional functions can be approximated without the curse of dimensionality.

Joint work with Mazen Ali, Markus Bachmayr and Reinhold Schneider.
