Momentum Residual Neural Networks
Michael E. Sander 1, Pierre Ablin 2, Mathieu Blondel 3, Gabriel Peyré 1
1 : Département de Mathématiques et Applications, École normale supérieure - Paris, CNRS UMR 8553
2 : Laboratoire d'analyse et modélisation de systèmes pour l'aide à la décision, Université Paris Dauphine-PSL, CNRS UMR 7024
3 : Google Brain, Paris (Google Research)

The training of deep residual neural networks (ResNets) with backpropagation has a memory cost that increases linearly with respect to the depth of the network. A way to circumvent this issue is to use reversible architectures. We propose to change the forward rule of a ResNet by adding a momentum term. The resulting networks, momentum residual neural networks (Momentum ResNets), are invertible. Unlike previous invertible architectures, they can be used as a drop-in replacement for any existing ResNet block. We show that Momentum ResNets can be interpreted in the infinitesimal step size regime as second-order ordinary differential equations (ODEs) and exactly characterize how adding momentum progressively increases the representation capabilities of Momentum ResNets. Our analysis reveals that Momentum ResNets can learn any linear mapping up to a multiplicative factor, while ResNets cannot. In a learning to optimize setting, where convergence to a fixed point is required, we show theoretically and empirically that our method succeeds while existing invertible architectures fail. We show on CIFAR and ImageNet that Momentum ResNets have the same accuracy as ResNets, while having a much smaller memory footprint, and show that pre-trained Momentum ResNets are promising for fine-tuning models.
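As a concrete illustration of the momentum forward rule mentioned in the abstract, the sketch below uses the update v ← γ v + (1 − γ) f(x), x ← x + v and its algebraic inverse to recover activations without storing them, which is what reduces the memory footprint. This is a minimal toy example: the residual function f, the value of γ, and the function names are illustrative placeholders rather than the paper's implementation.

```python
import numpy as np

# Toy residual function f(x); in a real Momentum ResNet this would be a
# learned block (e.g. conv / norm / ReLU layers). Purely illustrative here.
def f(x):
    return np.tanh(x)

def momentum_forward(x, v, gamma=0.9, n_layers=4):
    """Momentum forward rule: v <- gamma*v + (1-gamma)*f(x); x <- x + v."""
    for _ in range(n_layers):
        v = gamma * v + (1.0 - gamma) * f(x)
        x = x + v
    return x, v

def momentum_inverse(x, v, gamma=0.9, n_layers=4):
    """Invert the forward rule layer by layer (in reverse layer order for a
    real network), recovering activations without having stored them."""
    for _ in range(n_layers):
        x = x - v
        v = (v - (1.0 - gamma) * f(x)) / gamma
    return x, v

x0 = np.random.randn(8)
v0 = np.zeros(8)
xT, vT = momentum_forward(x0, v0)
x_rec, v_rec = momentum_inverse(xT, vT)
print(np.allclose(x_rec, x0), np.allclose(v_rec, v0))  # True True
```

In exact arithmetic the inversion is exact; in floating point, recovered activations are only approximate, so a practical implementation has to handle rounding carefully.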

