Abstract

We study the problem of properly learning large margin halfspaces in the agnostic PAC model. In more detail, we study the complexity of properly learning $d$-dimensional halfspaces on the unit ball within misclassification error $\alpha \cdot OPT_\gamma + \epsilon$, where $OPT_\gamma$ is the optimal $\gamma$-margin error rate and $\alpha \geq 1$ is the approximation ratio. We give learning algorithms and computational hardness results for this problem, for all values of the approximation ratio $\alpha \geq 1$, that are nearly-matching for a range of parameters. Specifically, for the natural setting that $\alpha$ is any constant bigger than one, we provide an essentially tight complexity characterization. On the positive side, we give an $\alpha = 1.01$-approximate proper learner that uses $O(1/(\epsilon^2\gamma^2))$ samples (which is optimal) and runs in time $poly(d/\epsilon) \cdot 2^{O(1/\gamma^2)}$. On the negative side, we show that any constant factor approximate proper learner has runtime $poly(d/\epsilon) \cdot 2^{(1/\gamma)^{2-o(1)}}$, assuming the Exponential Time Hypothesis.
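For concreteness, the $\gamma$-margin error rate that the abstract's guarantees are stated against can be sketched as follows: it is the fraction of labeled examples on which a unit-norm halfspace $w$ fails to achieve margin at least $\gamma$. This is a minimal illustration only; the function name and the toy data are ours, not from the paper.

```python
import numpy as np

def margin_error(w, X, y, gamma):
    """Empirical gamma-margin error: the fraction of labeled points
    (x, y) on which the halfspace w fails to achieve margin gamma,
    i.e. y * <w, x> < gamma.  Here w is assumed unit-norm and each
    row of X lies in the unit ball, matching the abstract's setting."""
    margins = y * (X @ w)
    return float(np.mean(margins < gamma))

# Toy data: labels roughly determined by the sign of the first coordinate.
X = np.array([[0.9, 0.0], [0.5, 0.5], [0.05, 0.1], [-0.8, 0.2]])
y = np.array([1, 1, 1, -1])
w = np.array([1.0, 0.0])

# With gamma = 0.1, only the point (0.05, 0.1) has margin below gamma.
print(margin_error(w, X, y, 0.1))  # 0.25
```

$OPT_\gamma$ is then the minimum of this quantity over all unit-norm halfspaces, and the learner's error is compared against $\alpha \cdot OPT_\gamma + \epsilon$.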

Description

[1908.11335] Nearly Tight Bounds for Robust Proper Learning of Halfspaces with a Margin
