Abstract
The present paper provides a new generic strategy leading to non-asymptotic
theoretical guarantees for the Leave-one-Out procedure applied to a broad class
of learning algorithms. This strategy relies on two main ingredients: the new
notion of $L^q$ stability, and the systematic use of moment inequalities. $L^q$
stability extends the existing notion of hypothesis stability while remaining
weaker than uniform stability. It leads to new PAC exponential
generalisation bounds for Leave-one-Out under mild assumptions. In the
literature, such bounds are available only for uniformly stable algorithms,
under a boundedness assumption for instance. As a first step, our generic
strategy is applied to the Ridge regression algorithm.
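
As a point of reference, one natural formalisation of $L^q$ stability, sketched here under assumed notation rather than taken from the paper itself, requires that an algorithm $A$ trained on a sample $S = (Z_1, \dots, Z_n)$ with loss $\ell$ satisfy
\[
\Big( \mathbb{E}\,\big| \ell(A_S, Z) - \ell(A_{S^{\setminus i}}, Z) \big|^q \Big)^{1/q} \le \beta_q
\quad \text{for all } 1 \le i \le n,
\]
where $S^{\setminus i}$ denotes the sample $S$ with its $i$-th observation removed and $Z$ is an independent test point. Under this reading, hypothesis stability corresponds to $q = 1$ and uniform stability to the limiting case $q = \infty$, which is consistent with the abstract's claim that $L^q$ stability sits between the two.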