Abstract
We show that on-policy policy gradient (PG) and its variance reduction
variants can be derived by taking finite differences of function evaluations
supplied by estimators from the importance sampling (IS) family for off-policy
evaluation (OPE). Starting from the doubly robust (DR) estimator (Jiang & Li,
2016), we provide a simple derivation of a very general and flexible form of
PG, which subsumes the state-of-the-art variance reduction technique (Cheng et
al., 2019) as a special case and immediately hints at further variance
reduction opportunities overlooked by the existing literature. We analyze the
variance of the new DR-PG estimator, compare it to existing methods as well as
the Cramér-Rao lower bound of policy gradient estimation, and empirically show
its effectiveness.
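To make the finite-difference view concrete, here is a minimal sketch in our own notation (the symbols $\theta$, $\theta'$, $\gamma$, $r_t$, $T$, and $\hat{J}_{\mathrm{PDIS}}$ are illustrative assumptions, not the paper's notation). Given a trajectory sampled from $\pi_\theta$, the per-decision IS estimate of the value of a nearby policy $\pi_{\theta'}$ is

\[
\hat{J}_{\mathrm{PDIS}}(\theta') = \sum_{t=0}^{T-1} \gamma^t \Bigg( \prod_{t'=0}^{t} \frac{\pi_{\theta'}(a_{t'} \mid s_{t'})}{\pi_{\theta}(a_{t'} \mid s_{t'})} \Bigg) r_t .
\]

Since every importance ratio equals 1 at $\theta' = \theta$, differentiating and evaluating there gives

\[
\nabla_{\theta'} \hat{J}_{\mathrm{PDIS}}(\theta') \Big|_{\theta'=\theta} = \sum_{t=0}^{T-1} \gamma^t r_t \sum_{t'=0}^{t} \nabla_\theta \log \pi_\theta(a_{t'} \mid s_{t'}),
\]

which is the familiar REINFORCE/GPOMDP policy gradient estimator. Starting instead from the DR estimator, which augments IS with a value-function control variate, the same differentiation yields the baseline-corrected gradient forms referred to above.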