Abstract
We give an algorithmically efficient version of the learner-to-compression
scheme conversion in Moran and Yehudayoff (2016). In extending this technique
to real-valued hypotheses, we also obtain an efficient regression-to-bounded
sample compression converter. To our knowledge, this is the first general
compressed regression result (regardless of efficiency or boundedness)
guaranteeing uniform approximate reconstruction. Along the way, we develop a
generic procedure for constructing weak real-valued learners out of abstract
regressors; this may be of independent interest. In particular, this result
sheds new light on an open question of H. Simon (1997). We show applications to
two regression problems: learning Lipschitz and bounded-variation functions.