Abstract
An important strength of learning classifier systems (LCSs) lies in their combination of genetic optimization with gradient-based approximation techniques. The chosen approximation technique develops locally optimal approximations, such as accurate classification estimates, Q-value predictions, or linear function approximations. The genetic optimization technique is designed to distribute these local approximations efficiently over the problem space. Together, the two components develop a distributed, locally optimized problem solution in the form of a population of expert rules, often called classifiers. In function approximation problems, the XCSF classifier system develops a problem solution in the form of overlapping, piecewise linear approximations.
This paper shows that XCSF performance on function approximation problems additively benefits from (1) improved representations, (2) improved genetic operators, and (3) improved approximation techniques.
Additionally, this paper introduces a novel closest classifier matching mechanism for the efficient compaction of XCSF's final problem solution.
The resulting compaction mechanism can reduce the population size by 90\% on average while decreasing prediction accuracy only marginally.
Performance evaluations show that the additional mechanisms enable XCSF to reliably, accurately, and compactly approximate even seven-dimensional functions. Performance comparisons with other heuristic function approximation techniques show that XCSF yields competitive or even superior noise-robust performance.