
Learning Relational Options for Inductive Transfer in Relational Reinforcement Learning

In Proceedings of ILP (2007)

Abstract

In reinforcement learning problems, a learning agent has the task of learning a good or optimal strategy from interaction with its environment. At the start of the learning task, the agent usually has very little information. Therefore, when faced with complex problems that have a large state space, learning a good strategy might be infeasible or too slow to work in practice. One way to overcome this problem is to use guidance, supplying the agent with traces of “reasonable policies”. However, in many cases it will be hard for the user to supply such a policy. In this paper, we investigate the use of transfer learning for Relational Reinforcement Learning problems. The goal of transfer learning is to accelerate learning on a target task after training on a different, but related, source task. More specifically, we introduce an extension of the options framework to the relational setting and show how one can learn skills that can be transferred across similar, but different, domains. We present some preliminary experiments showing the possible advantages of using relational options for transfer learning.
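As background (not from the paper itself): in the standard options framework, an option is the triple ⟨I, π, β⟩ of initiation set, internal policy, and termination condition. Below is a minimal sketch of how that triple might be lifted to a relational representation, assuming states are encoded as sets of ground facts; all names and the toy blocks-world example are illustrative, not the paper's implementation.

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet

# A relational state as a set of ground facts, e.g. {"on(a, b)", "clear(a)"}.
RelationalState = FrozenSet[str]

@dataclass
class RelationalOption:
    """The classic option triple <I, pi, beta>, lifted to relational states."""
    initiation: Callable[[RelationalState], bool]    # I: may the option start here?
    policy: Callable[[RelationalState], str]         # pi: action term, e.g. "move(a, floor)"
    termination: Callable[[RelationalState], float]  # beta: probability of stopping

def _unstack_policy(s: RelationalState) -> str:
    # Pick any fact on(X, Y) and move X to the floor.
    fact = next(f for f in s if f.startswith("on("))
    block = fact[len("on("):].split(",")[0]
    return f"move({block}, floor)"

# A toy blocks-world option: keep unstacking until no block is on another.
unstack = RelationalOption(
    initiation=lambda s: any(f.startswith("on(") for f in s),
    policy=_unstack_policy,
    termination=lambda s: 0.0 if any(f.startswith("on(") for f in s) else 1.0,
)

state: RelationalState = frozenset({"on(a, b)", "clear(a)", "on_floor(b)"})
assert unstack.initiation(state)
print(unstack.policy(state))  # -> move(a, floor)
```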

Description

Relational options learning framework:

  • Contains a good overview of options-transfer papers.
  • Introduces a relational option concept via decision lists.
  • Rules in the lists have both state and state-action preconditions, and both must fire.
  • State-action preconditions can parameterize the final action choice.
  • Options are learned in four steps (sketched below): (1) first learn a policy, (2) categorize state-action pairs as either on- or off-policy, (3) learn a relational decision tree from those examples, (4) flatten the tree and extract all on-policy rules.
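A rough sketch of that four-step procedure, with assumptions stated up front: `learn_policy` and `tree_learner` are hypothetical stand-ins (the paper would use a relational decision-tree learner in the TILDE family), and `tree.to_rules()` is an invented helper for flattening a tree into an ordered rule list.

```python
from typing import Callable, List, Tuple

# A state-action pair: (set of ground facts, action term such as "move(a, b)").
StateAction = Tuple[frozenset, str]

def learn_relational_option(episodes: List[List[StateAction]],
                            learn_policy: Callable,
                            tree_learner: Callable):
    """Hypothetical pipeline following the four steps listed above."""
    # Step 1: learn a policy for the source task (the learner is a stand-in).
    policy = learn_policy(episodes)

    # Step 2: label each observed state-action pair as on- or off-policy:
    # a pair is on-policy iff the learned policy would pick that action.
    examples = [((state, action), policy(state) == action)
                for episode in episodes
                for state, action in episode]

    # Step 3: induce a relational decision tree separating on-policy
    # from off-policy pairs (e.g. a TILDE-style first-order tree learner).
    tree = tree_learner(examples)

    # Step 4: flatten the tree into an ordered decision list and keep only
    # the rules whose leaf predicts "on-policy"; each rule carries a state
    # precondition plus a state-action precondition that can parameterize
    # the final action choice.
    return [rule for rule in tree.to_rules() if rule.label is True]
```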
