@incollection{mor-learn,
abstract = {Agents that interact in a distributed environment might increase their utility by behaving optimally given the strategies of the other agents. To do so, agents need to learn about those with whom they share the same world.
This paper examines interactions among agents from a game-theoretic perspective. In this context, learning has been assumed to be a means of reaching equilibrium. We analyze the complexity of this learning process. We start with a restricted two-agent model, in which agents are represented by finite automata, and one of the agents plays a fixed strategy. We show that even with these restrictions, the learning process may be exponential in time.
We then suggest a criterion of simplicity that induces a class of automata that are learnable in polynomial time.},
added-at = {2011-02-04T12:13:28.000+0100},
author = {Mor, Yishay and Goldman, Claudia V. and Rosenschein, Jeffrey S.},
biburl = {https://www.bibsonomy.org/bibtex/2b8b59a4657847c00d5772801d9988111/yish},
booktitle = {Lecture Notes in Artificial Intelligence},
description = {Learn Your Opponent's Strategy (in Polynomial Time)! (Extended Abstract)},
editor = {Weiss, Gerhard and Sen, Sandip},
interhash = {fca234ecb08c2bbb2261cc7ef2b9a914},
intrahash = {b8b59a4657847c00d5772801d9988111},
keywords = {AI agents complexity computational gametheory haifa-edtech learning multiagent my myown},
pages = {164--176},
publisher = {Springer-Verlag},
timestamp = {2011-02-04T12:13:29.000+0100},
title = {Learn Your Opponent's Strategy (in Polynomial Time)!},
url = {http://www.springerlink.com/content/y34767040874q517/},
volume = 1042,
year = 1996
}