Tangent Space Separability in Feedforward Neural Networks

    Hierarchical neural networks are exponentially more efficient than their corresponding “shallow” counterparts with the same expressive power, but they involve a huge number of parameters and require extensive training. By approximating the tangent subspace, we suggest a sparse representation that enables switching to a shallow network, GradNet, after a very early training stage. Our experiments show that the proposed approximation of the metric improves on, and sometimes even significantly surpasses, the achievable performance of the original network, even after only a few epochs of training the original feedforward network.
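
    The abstract does not spell out how the tangent-space representation or GradNet is constructed, so the following is only a minimal sketch of the general idea, assuming PyTorch: a feedforward network is trained for a few epochs, per-example gradients of its output with respect to a subset of parameters (here restricted to the last layer as a crude sparse stand-in for the tangent subspace) are collected as features, and a separate shallow linear model (a hypothetical stand-in for GradNet) is trained on those features. All variable names, the synthetic data, and the last-layer restriction are illustrative assumptions, not the paper's exact method.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    torch.manual_seed(0)

    # Synthetic two-class toy data (stand-in for a real dataset).
    X = torch.randn(512, 20)
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).long()

    # Feedforward network trained only for a few epochs ("very early training stage").
    net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
    opt = torch.optim.SGD(net.parameters(), lr=0.1)
    for _ in range(3):
        opt.zero_grad()
        F.cross_entropy(net(X), y).backward()
        opt.step()

    # Tangent-space features: per-example gradients of the summed logits with
    # respect to the last-layer weights only (a sparse approximation chosen
    # here for illustration).
    last = net[2]

    def grad_features(x):
        feats = []
        for xi in x:
            out = net(xi.unsqueeze(0))
            g = torch.autograd.grad(out.sum(), last.weight)[0]
            feats.append(g.flatten())
        return torch.stack(feats)

    Phi = grad_features(X).detach()

    # Shallow model on top of the frozen tangent-space representation
    # (hypothetical stand-in for GradNet).
    gradnet = nn.Linear(Phi.shape[1], 2)
    opt2 = torch.optim.Adam(gradnet.parameters(), lr=1e-2)
    for _ in range(50):
        opt2.zero_grad()
        F.cross_entropy(gradnet(Phi), y).backward()
        opt2.step()

    # Training accuracy of the shallow model on the gradient features.
    acc = (gradnet(Phi).argmax(1) == y).float().mean()
    print(f"shallow model accuracy on tangent features: {acc.item():.2f}")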

    Year: 
    2019
    Authors: 
    Bálint Daróczy, Rita Aleksziev, András Benczúr
    Publication: 
    Beyond First Order ML workshop at NeurIPS 2019, Vancouver, Canada