We introduce a new property of approximation methods, which provides a framework that we use (i) to guarantee that a simple quantization scheme turns ReLU neural networks into networks that can be represented exactly on a computer and that approximate each set C of functions at the same rate as unquantized ReLU networks, and (ii) to prove that ReLU neural networks share a common upper bound on approximation rates with many other approximation methods: the approximation speed of a set C is automatically bounded from above by the encoding complexity of C, which is explicitly known for many sets C such as balls of Sobolev spaces. Property (ii) allows us to identify function sets C on which ReLU neural networks cannot outperform classical approximation methods based on dictionaries or Lipschitz-parameterized families of functions.
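To make the quantization idea in (i) concrete, the following is a minimal sketch, not the scheme analyzed here: rounding every weight and bias of a ReLU network to the nearest multiple of 2^-k yields a network whose parameters are exactly representable in finite precision, while perturbing the realized function only slightly.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward(params, x):
    """Evaluate a ReLU network given as a list of (weight, bias) layers."""
    for W, b in params[:-1]:
        x = relu(W @ x + b[:, None])
    W, b = params[-1]
    return W @ x + b[:, None]

def quantize(params, k):
    """Round every weight and bias to the nearest multiple of 2**-k,
    making all parameters exactly representable on a computer."""
    step = 2.0 ** -k
    return [(np.round(W / step) * step, np.round(b / step) * step)
            for W, b in params]

# Hypothetical example: a random one-hidden-layer ReLU network on [-1, 1].
rng = np.random.default_rng(0)
params = [(rng.standard_normal((8, 1)), rng.standard_normal(8)),
          (rng.standard_normal((1, 8)), rng.standard_normal(1))]
x = np.linspace(-1.0, 1.0, 100)[None, :]   # batch of 100 scalar inputs

# Uniform gap between the original and the quantized network.
err = np.max(np.abs(forward(params, x) - forward(quantize(params, 6), x)))
```

The point of (i) is that such a rounding, if the grid resolution k is coupled appropriately to the network size, preserves approximation *rates*, not just the error of a single fixed network.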