Approximation speed of quantized vs. unquantized ReLU neural networks and beyond.
Antoine Gonon 1, Rémi Gribonval 1, Nicolas Brisebarre 2, Elisa Riccietti 1
1 : Univ Lyon, ENS de Lyon, UCBL, CNRS, Inria, LIP, F-69342 Lyon, France
2 : CNRS, LIP, Inria AriC, Université de Lyon, Lyon, France.

We introduce a new property of approximation methods that provides a framework which we use (i) to guarantee that a simple quantization scheme turns ReLU neural networks into networks that can be exactly represented on a computer and that approximate each set C of functions at the same speed as unquantized ReLU networks, and (ii) to prove that ReLU neural networks share a common upper bound on approximation rates with many other approximation methods: the approximation speed of a set C is automatically bounded from above by an encoding complexity of C, which is well known for many sets C such as balls of Sobolev spaces. Result (ii) allows us to identify sets of functions C for which ReLU neural networks cannot outperform classical approximation methods based on dictionaries or Lipschitz-parameterized functions.
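As an illustration only, the sketch below shows one plausible instance of a "simple quantization scheme": rounding every weight and bias of a ReLU network to the nearest point of a uniform grid of step 2^-k, so that each quantized parameter has an exact finite binary representation. The abstract does not specify the paper's actual scheme, so the function and parameter names here (quantize_weights, relu_network, k) are assumptions made for this example, not the authors' method.

# Minimal sketch, assuming nearest-point rounding to the grid (2**-k) * Z.
import numpy as np

def quantize_weights(arrays, k):
    """Round every entry of each weight/bias array to the grid (2**-k) * Z."""
    step = 2.0 ** (-k)
    return [step * np.round(a / step) for a in arrays]

def relu_network(weights, biases, x):
    """Evaluate a fully connected ReLU network on input x."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(W @ x + b, 0.0)       # hidden layers with ReLU
    return weights[-1] @ x + biases[-1]      # affine output layer

# Compare an unquantized network with its quantized counterpart on one input.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((8, 4)), rng.standard_normal((1, 8))]
biases = [rng.standard_normal(8), rng.standard_normal(1)]
qweights = quantize_weights(weights, k=4)
qbiases = quantize_weights(biases, k=4)
x = rng.standard_normal(4)
print(relu_network(weights, biases, x), relu_network(qweights, qbiases, x))

The question addressed by result (i) is how fast k must grow, as a function of the target accuracy, for such quantized networks to keep the approximation speed of the unquantized ones.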

