09/07/2020

Learning a Single Neuron with Gradient Methods

Gilad Yehudai, Ohad Shamir

Keywords: Neural networks/deep learning, Non-convex optimization

Abstract: We consider the fundamental problem of learning a single neuron $\mathbf{x}\mapsto \sigma(\mathbf{w}^\top\mathbf{x})$ in a realizable setting, using standard gradient methods with random initialization, and under general families of input distributions and activations. On the one hand, we show that some assumptions on both the distribution and the activation function are necessary. On the other hand, we prove positive guarantees under mild assumptions, which go significantly beyond those studied in the literature so far. We also point out and study the challenges in further strengthening and generalizing our results.
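To make the setting concrete, here is a minimal sketch of the learning problem the abstract describes: gradient descent on a single neuron in the realizable (noiseless) case, starting from a random initialization. The specific choices below, ReLU activation, standard Gaussian inputs, squared loss, and plain full-batch gradient descent, are illustrative assumptions and not the paper's exact assumptions or algorithm.

```python
import numpy as np

# Illustrative sketch (assumptions, not the paper's setup): learn a single
# neuron x -> sigma(w^T x) with sigma = ReLU, realizable labels, squared loss,
# plain gradient descent from a random initialization.

rng = np.random.default_rng(0)
d, n = 10, 5000                # input dimension, number of samples
w_star = rng.normal(size=d)    # ground-truth weights (realizable setting)
X = rng.normal(size=(n, d))    # inputs ~ standard Gaussian (an assumption)
y = np.maximum(X @ w_star, 0)  # labels y = sigma(w_star^T x), no noise

w = 0.1 * rng.normal(size=d)   # random initialization
lr = 0.1
for t in range(500):
    pred = np.maximum(X @ w, 0)
    # (Sub)gradient of the empirical loss 0.5/n * sum_i (pred_i - y_i)^2
    grad = X.T @ ((pred - y) * (X @ w > 0)) / n
    w -= lr * grad

print("final squared loss:", 0.5 * np.mean((np.maximum(X @ w, 0) - y) ** 2))
```

Under these assumptions the loss typically converges toward zero; the paper's contribution is to characterize when such convergence is (and is not) guaranteed for general input distributions and activations.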

Talk and paper published at COLT 2020 (virtual conference).
