
A Review of the Gumbel-max Trick and its Extensions for Discrete Stochasticity in Machine Learning

arXiv:2110.01515 (2021). Accepted as a survey article in IEEE TPAMI.


Other publications by authors with the same name

Learning to Drop Out: An Adversarial Approach to Training Sequence VAEs. NeurIPS, (2022)
Learning to Predict Security Constraints for Large-Scale Unit Commitment Problems. ISGT EUROPE, pp. 1-5. IEEE, (2023)
Learning to Cut by Looking Ahead: Cutting Plane Selection via Imitation Learning. ICML, vol. 162 of Proceedings of Machine Learning Research, pp. 17584-17600. PMLR, (2022)
Augment with Care: Contrastive Learning for Combinatorial Problems. ICML, vol. 162 of Proceedings of Machine Learning Research, pp. 5627-5642. PMLR, (2022)
Rao-Blackwellizing the Straight-Through Gumbel-Softmax Gradient Estimator. CoRR, (2020)
Learning with and for discrete optimization. ETH Zurich, Zürich, Switzerland, (2023). base-search.net (ftethz:oai:www.research-collection.ethz.ch:20.500.11850/629004)
Gradient Estimation with Stochastic Softmax Tricks. NeurIPS, (2020)
A Review of the Gumbel-max Trick and its Extensions for Discrete Stochasticity in Machine Learning. CoRR, (2021)
Learning to Drop Out: An Adversarial Approach to Training Sequence VAEs. CoRR, (2022)
Augment with Care: Contrastive Learning for the Boolean Satisfiability Problem. CoRR, (2022)