Exponential expressivity in deep neural networks through transient chaos 

Ben Poole, Stanford University poole@cs.stanford.edu

Subhaneil Lahiri, Stanford University sulahiri@stanford.edu

Maithra Raghu, Google Brain and Cornell University maithrar@gmail.com

Surya Ganguli, Stanford University sganguli@stanford.edu

Jascha Sohl-Dickstein, Google Brain jaschasd@google.com

Abstract
We combine Riemannian geometry with the mean field theory of high dimensional chaos to study the nature of signal propagation in generic, deep neural networks with random weights. Our results reveal an order-to-chaos expressivity phase transition, with networks in the chaotic phase computing nonlinear functions whose global curvature grows exponentially with depth but not width. We prove this generic class of deep random functions cannot be efficiently computed by any shallow network, going beyond prior work restricted to the analysis of single functions. Moreover, we formalize and quantitatively demonstrate the long conjectured idea that deep networks can disentangle highly curved manifolds in input space into flat manifolds in hidden space. Our theoretical analysis of the expressive power of deep networks broadly applies to arbitrary nonlinearities, and provides a quantitative underpinning for previously abstract notions about the geometry of deep functions.

LINK: https://arxiv.org/pdf/1606.05340v2.pdf
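The abstract's central phenomenon, that in the chaotic phase nearby inputs decorrelate with depth while in the ordered phase they converge, can be observed directly by simulation. Below is a minimal NumPy sketch, not the authors' code: it propagates two nearby inputs through the same random tanh network and tracks their cosine similarity layer by layer. The parameter values (`sigma_w`, `sigma_b`, `width`, `depth`) are illustrative assumptions, chosen to land on either side of the order-to-chaos transition for the tanh nonlinearity.

```python
# Minimal sketch (illustrative, not the paper's code): signal propagation
# through a deep random tanh network. Weights W ~ N(0, sigma_w^2 / width)
# and biases b ~ N(0, sigma_b^2), as in the random-network ensemble the
# paper studies; all parameter values here are assumptions for illustration.
import numpy as np

def correlations(x1, x2, depth, width, sigma_w, sigma_b, seed=0):
    """Propagate two inputs through the SAME random tanh network and
    return the cosine similarity of their activations at each layer."""
    rng = np.random.default_rng(seed)
    h1, h2 = x1.copy(), x2.copy()
    cos = []
    for _ in range(depth):
        W = rng.normal(0.0, sigma_w / np.sqrt(width), size=(width, width))
        b = rng.normal(0.0, sigma_b, size=width)
        h1 = np.tanh(W @ h1 + b)
        h2 = np.tanh(W @ h2 + b)
        cos.append(h1 @ h2 / (np.linalg.norm(h1) * np.linalg.norm(h2)))
    return cos

width, depth = 1000, 20
rng = np.random.default_rng(42)
x1 = rng.normal(size=width)
x2 = x1 + 0.01 * rng.normal(size=width)  # a nearby input

for sigma_w, regime in [(0.5, "ordered"), (2.5, "chaotic")]:
    cos = correlations(x1, x2, depth, width, sigma_w, sigma_b=0.3)
    print(f"{regime:8s} (sigma_w={sigma_w}): cosine similarity at depth "
          f"{depth} = {cos[-1]:.3f}")
```

With small `sigma_w` (ordered phase) the two trajectories converge and their cosine similarity approaches 1; with large `sigma_w` (chaotic phase) initially close inputs decorrelate with depth toward a fixed-point similarity below 1, the transient-chaos behavior the paper analyzes via mean field theory.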
