Learning Curves for Stochastic Gradient Descent in Linear Feedforward Networks

MIT Press, 2003.

Abstract

Gradient-following learning methods can encounter problems of implementation in many applications, and stochastic variants are frequently used to overcome these difficulties. We derive quantitative learning curves for three online training methods used with a linear perceptron: direct gradient descent, node perturbation, and weight perturbation. The maximum learning rate for the stochastic methods scales inversely with the first power of the dimensionality of the noise injected into the system; with sufficiently small learning rate, all three methods give identical learning curves. These results suggest guidelines for when these stochastic methods will be limited in their utility, and considerations for architectures in which they will be effective.
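The contrast the abstract draws between the three online methods can be made concrete with a small numerical sketch. The following is an illustrative toy setup (names, dimensions, and learning rates are assumptions, not taken from the paper): a linear perceptron `y = W x` trained on examples from a random "teacher" map, using exact stochastic gradient descent, weight perturbation, and node perturbation. Note how node perturbation injects noise of dimension `n_out` while weight perturbation injects noise of dimension `n_out * n_in`, which is why the stable learning rate for weight perturbation is smaller, consistent with the abstract's inverse-scaling claim.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a "teacher" linear map that the student
# perceptron W must learn from random online examples.
n_in, n_out = 8, 4
W_star = rng.normal(size=(n_out, n_in))

def loss(out, y):
    """Squared error for a single example."""
    e = out - y
    return 0.5 * float(e @ e)

def train(method, eta, sigma=1e-3, steps=2000):
    """Online training of a linear perceptron y = W x.

    method: "gradient" (exact SGD), "weight" (weight perturbation),
            or "node" (node perturbation).
    """
    W = np.zeros((n_out, n_in))
    for _ in range(steps):
        x = rng.normal(size=n_in)
        y = W_star @ x
        e = W @ x - y                        # output error on this example
        if method == "gradient":
            W -= eta * np.outer(e, x)        # exact gradient of 0.5*|e|^2
        elif method == "weight":
            # Perturb every weight; the resulting loss change yields a
            # stochastic estimate of the full gradient. Noise dimension
            # here is n_out * n_in.
            psi = sigma * rng.normal(size=W.shape)
            dL = loss((W + psi) @ x, y) - loss(W @ x, y)
            W -= eta * (dL / sigma**2) * psi
        elif method == "node":
            # Perturb only the output units; noise dimension is n_out,
            # so a larger learning rate remains stable.
            xi = sigma * rng.normal(size=n_out)
            dL = loss(W @ x + xi, y) - loss(W @ x, y)
            W -= eta * (dL / sigma**2) * np.outer(xi, x)
    return W

if __name__ == "__main__":
    err0 = np.linalg.norm(W_star)
    for m, eta, steps in [("gradient", 0.05, 2000),
                          ("node", 0.01, 5000),
                          ("weight", 0.002, 20000)]:
        err = np.linalg.norm(train(m, eta, steps=steps) - W_star)
        print(f"{m}: relative error {err / err0:.2e}")
```

With sufficiently small rates all three runs drive the error toward zero, mirroring the abstract's observation that the methods give identical learning curves in that regime; the perturbation methods simply need smaller rates and more steps.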

Description

Discussion of learning curves for stochastic gradient descent. Besides gradient-based approaches, the paper briefly describes (with additional references) weight-perturbation and node-perturbation approaches.

Links and Resources

Tags

Community

  • @mgrani
  • @dblp