Abstract
In previous work (1, 2, 3) we explained how to use
standard optimization methods such as simulated
annealing, gradient descent and genetic algorithms to
optimize a parametric function which could be used as a
learning rule for neural networks. To use these
methods, we had to choose a fixed number of parameters
and a rigid form for the learning rule. In this
article, we propose to use genetic programming to find
not only the values of rule parameters but also the
optimal number of parameters and the form of the rule.
Experiments on classification tasks suggest that genetic
programming finds better learning rules than the other
optimization methods. Furthermore, the best rule found
with genetic programming outperformed the well-known
backpropagation algorithm on a given set of tasks.
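The core idea, evolving both the form and the parameters of a learning rule rather than tuning a fixed-form rule, can be illustrated with a minimal genetic-programming sketch. Everything here is an illustrative assumption, not the paper's actual setup: the terminal set (`x`, `y`, `w`, `e`), the operator set, the mutation-only evolution loop, and the toy fitness task (training a thresholded linear unit on AND).

```python
import random

random.seed(0)

# Candidate learning rules are expression trees over four terminals:
#   x = presynaptic input, y = unit output, w = current weight, e = target - output
# (These terminals and the toy task below are illustrative assumptions.)
TERMINALS = ['x', 'y', 'w', 'e']
OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}

def random_tree(depth=2):
    """Grow a random expression tree of bounded depth."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, env):
    """Evaluate a rule tree given values for the terminals."""
    if isinstance(tree, str):
        return env[tree]
    op, left, right = tree
    return OPS[op](evaluate(left, env), evaluate(right, env))

def mutate(tree, depth=2):
    """Replace random subtrees, so the rule's *form* can change,
    not just its constants -- the key difference from fixed-form search."""
    if random.random() < 0.2:
        return random_tree(depth)
    if isinstance(tree, str):
        return tree
    op, left, right = tree
    return (op, mutate(left, depth - 1), mutate(right, depth - 1))

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def fitness(tree):
    """Train a thresholded linear unit on AND using the candidate rule;
    return the number of misclassified patterns afterwards (lower is fitter)."""
    ws = [0.0, 0.0]
    for _ in range(30):                      # training epochs
        for xs, t in AND:
            y = 1.0 if sum(w * x for w, x in zip(ws, xs)) > 0.5 else 0.0
            e = t - y
            ws = [w + 0.1 * evaluate(tree, {'x': x, 'y': y, 'w': w, 'e': e})
                  for w, x in zip(ws, xs)]   # apply the evolved update per weight
    return sum(abs(t - (1.0 if sum(w * x for w, x in zip(ws, xs)) > 0.5 else 0.0))
               for xs, t in AND)

# Evolution: keep the 10 fittest rules, refill the population with mutants.
pop = [random_tree() for _ in range(40)]
for _ in range(20):
    pop.sort(key=fitness)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(30)]
best = min(pop, key=fitness)
print('best rule:', best, 'errors:', fitness(best))
```

A search like this can rediscover the perceptron-style rule `e * x` (which solves the toy task), but it can equally well return trees with a different number of terms and a different shape, which is exactly the extra freedom the abstract claims over fixed-parameter optimization.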