Abstract

Many successful applications have demonstrated the potential of Learning Classifier Systems, and of the XCS classifier system in particular, in data mining, reinforcement learning, and function approximation tasks. Recent research has shown that XCS is a highly flexible system, which can be adapted to the task at hand by adjusting its condition structures, learning operators, and prediction mechanisms. However, theory grounding the scalability of XCS with respect to these enhancements and to problem difficulty is still rather sparse and mainly restricted to Boolean function problems. In this article we develop a learning scalability theory for XCSF, the XCS system applied to real-valued function approximation problems. We determine crucial dependencies on functional properties and on the developed solution representation, and we derive a theoretical scalability model from these constraints. The theoretical model is verified with empirical evidence. That is, we show that, given a particular problem difficulty and particular representational constraints, XCSF scales optimally. In consequence, we discuss the importance of prediction and condition structures appropriate to a given problem and show that scalability can be improved by polynomial orders, given a suitable, problem-aligned representation.
