Abstract
We compare the scaling properties of several value-function estimation algorithms. In particular, we prove that Q-learning can scale exponentially slowly with the number of states. We identify the reasons for this slow convergence and show that both TD($\lambda$) and Q-learning with a fixed learning rate enjoy rather fast convergence, just like the model-based method.
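The contrast between learning-rate schedules can be illustrated with a minimal stochastic-approximation sketch (this is an illustration of the update rule, not the paper's construction or its exponential lower bound): a single value estimate is updated as $V \leftarrow V + \alpha_t (r - V)$, once with a decaying step size $\alpha_t = 1/t$ and once with a fixed step size.

```python
import random


def estimate_value(alphas, rewards):
    """Stochastic-approximation update V <- V + alpha * (r - V)."""
    v = 0.0
    for alpha, r in zip(alphas, rewards):
        v += alpha * (r - v)
    return v


random.seed(0)
n = 10_000
# Noisy rewards with true mean 1.0 (hypothetical data for illustration).
rewards = [random.gauss(1.0, 0.5) for _ in range(n)]

# Decaying step size 1/t: the estimate equals the running sample mean.
v_decay = estimate_value((1.0 / t for t in range(1, n + 1)), rewards)

# Fixed step size: an exponential moving average of recent rewards.
v_fixed = estimate_value((0.05 for _ in range(n)), rewards)

print(v_decay, v_fixed)  # both settle near the true mean 1.0
```

Both schedules converge here; the paper's point concerns how the convergence *rate* scales with the number of states, where decaying-rate Q-learning can become exponentially slow.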