
Other publications by persons with the same name

Learning to Shape Rewards Using a Game of Two Partners., , , , , , , , , and 4 other author(s). AAAI, pp. 11604-11612. AAAI Press, (2023)
STAS: Spatial-Temporal Return Decomposition for Solving Sparse Rewards Problems in Multi-agent Reinforcement Learning., , , and . AAAI, pp. 17337-17345. AAAI Press, (2024)
IOFollow: Improving the performance of VM live storage migration with IO following in the cloud., , , , and . Future Gener. Comput. Syst., (2019)
JARVIS-1: Open-World Multi-task Agents with Memory-Augmented Multimodal Language Models., , , , , , , , , and 2 other author(s). CoRR, (2023)
Multi-Agent Reinforcement Learning is a Sequence Modeling Problem., , , , , , and . NeurIPS, (2022)
Replica-Exchange Nosé-Hoover Dynamics for Bayesian Learning on Large Datasets., , , and . NeurIPS, (2020)
Modelling Behavioural Diversity for Learning in Open-Ended Games., , , , , and . ICML, volume 139 of Proceedings of Machine Learning Research, pp. 8514-8524. PMLR, (2021)
Is Nash Equilibrium Approximator Learnable?, , , , , , and . AAMAS, pp. 233-241. ACM, (2023)
Towards Unifying Behavioral and Response Diversity for Open-ended Learning in Zero-sum Games., , , , , , , and . NeurIPS, pp. 941-952. (2021)
Meta-Reward-Net: Implicitly Differentiable Reward Learning for Preference-based Reinforcement Learning., , , and . NeurIPS, (2022)