Stochastic Difference of Convex Algorithm and its Application to Training Deep Boltzmann Machines.

AISTATS, vol. 54 of Proceedings of Machine Learning Research, pp. 470-478. PMLR, (2017)

Other publications by persons with the same name

A Scaling Law for Syn2real Transfer: How Much Is Your Pre-training Effective? ECML/PKDD (3), vol. 13715 of Lecture Notes in Computer Science, pp. 477-492. Springer, (2022)

Least-Squares Conditional Density Estimation. IEICE Trans. Inf. Syst., 93-D (3): 583-594 (2010)

A Consistent Method for Graph Based Anomaly Localization. AISTATS, vol. 38 of JMLR Workshop and Conference Proceedings, JMLR.org, (2015)

Spectral Pruning: Compressing Deep Neural Networks via Spectral Analysis and its Generalization Error. IJCAI, pp. 2839-2846. ijcai.org, (2020). (Scheduled for July 2020, Yokohama, Japan; postponed due to the Corona pandemic.)

Data-Parallel Momentum Diagonal Empirical Fisher (DP-MDEF): Adaptive Gradient Method is Affected by Hessian Approximation and Multi-Class Data. ICMLA, pp. 1397-1404. IEEE, (2022)

Adaptivity of deep ReLU network for learning in Besov and mixed smooth Besov spaces: optimal rate and curse of dimensionality. CoRR, (2018)

Fast Learning Rate of Multiple Kernel Learning: Trade-Off between Sparsity and Smoothness. AISTATS, vol. 22 of JMLR Proceedings, pp. 1152-1183. JMLR.org, (2012)

Accelerated Sparsified SGD with Error Feedback. CoRR, (2019)

Learnability of convolutional neural networks for infinite dimensional input via mixed and anisotropic smoothness. ICLR, OpenReview.net, (2022)

Uniform-in-time propagation of chaos for the mean-field gradient Langevin dynamics. ICLR, OpenReview.net, (2023)