
An Accelerated Communication-Efficient Primal-Dual Optimization Framework for Structured Machine Learning

(2017). arXiv:1711.05305.

Abstract

Distributed optimization algorithms are essential for training machine learning models on very large-scale datasets. However, they often suffer from communication bottlenecks. Confronting this issue, a communication-efficient primal-dual coordinate ascent framework (CoCoA) and its improved variant CoCoA+ have been proposed, achieving a convergence rate of $O(1/t)$ for solving empirical risk minimization problems with Lipschitz continuous losses. In this paper, an accelerated variant of CoCoA+ is proposed and shown to possess a convergence rate of $O(1/t^2)$ in terms of reducing suboptimality. The analysis of this rate is also notable in that the convergence rate bounds involve constants that, except in extreme cases, are significantly reduced compared to those previously provided for CoCoA+. The results of numerical experiments are provided to show that acceleration can lead to significant performance gains.
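The abstract's headline claim is that adding acceleration improves the suboptimality decay from O(1/t) to O(1/t^2). As a rough illustration only (not the paper's method, which accelerates the CoCoA+ outer loop on the dual of a distributed ERM problem), the following minimal Python sketch shows the same rate gap for a generic Nesterov-style momentum scheme versus plain gradient descent on a smooth least-squares objective:

import numpy as np

# Illustration of the O(1/t) vs O(1/t^2) suboptimality gap via
# Nesterov-style acceleration on f(w) = 0.5 * ||A w - b||^2.
# This is generic acceleration, not the accelerated CoCoA+ scheme.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
b = rng.standard_normal(200)

def grad(w):
    # Gradient of f(w) = 0.5 * ||A w - b||^2
    return A.T @ (A @ w - b)

L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
w_star, *_ = np.linalg.lstsq(A, b, rcond=None)
f = lambda w: 0.5 * np.linalg.norm(A @ w - b) ** 2

w = np.zeros(50)                        # plain gradient descent iterate
x = y = np.zeros(50)                    # accelerated iterates
t_mom = 1.0
for _ in range(200):
    w = w - grad(w) / L                 # unaccelerated step: O(1/t) gap
    x_next = y - grad(y) / L            # gradient step at extrapolated point
    t_next = (1 + np.sqrt(1 + 4 * t_mom**2)) / 2
    y = x_next + ((t_mom - 1) / t_next) * (x_next - x)  # momentum extrapolation
    x, t_mom = x_next, t_next

print("GD suboptimality:      ", f(w) - f(w_star))
print("Nesterov suboptimality:", f(x) - f(w_star))

Running this, the accelerated iterate reaches a markedly smaller gap after the same number of steps, which is the kind of performance gain the paper's experiments report in the distributed primal-dual setting.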

