Part I: Intuition (you are reading it now)
Part II: How Capsules Work
Part III: Dynamic Routing Between Capsules
Part IV: CapsNet Architecture (coming soon)
Humans excel at solving a wide variety of challenging problems, from low-level motor control through to high-level cognitive tasks. Our goal at DeepMind is to create artificial agents that can achieve a similar level of performance and generality. Like a human, our agents learn for themselves how to develop successful strategies that lead to the greatest long-term rewards.
Geoffrey Hinton has finally expressed what many have been uneasy about. At a recent AI conference, Hinton remarked that he was “deeply suspicious” of back-propagation, and said “My view is throw it…
Proceedings of the 1st Annual Conference on Robot Learning, 13–15 November 2017. Published as Volume 78 of the Proceedings of Machine Learning Research on 18 October 2017. Volume edited by: Sergey Levine, Vincent Vanhoucke, Ken Goldberg. Series editors: Neil D. Lawrence, Mark Reid.
Quite a few people have asked me recently about choosing a GPU for Machine Learning. As it stands, success with Deep Learning depends heavily on having the right hardware to work with. When I was…
Throughout my PhD on Deep Learning-based robotics, I have read a lot of papers on Machine Learning, Reinforcement Learning, and AI in general. But papers can be a bit...
Unlike task-specific algorithms, Deep Learning is part of the Machine Learning family of methods based on learning data representations. With massive amounts of computational power, machines can now recognize…
The codebase contains a replica of the AlphaZero methodology, built in Python and Keras. Gain a deeper understanding of how AlphaZero works and adapt the code to plug in new games.
Interacting systems are prevalent in nature, from dynamical systems in physics to complex societal dynamics. The interplay of components can give rise to complex behavior, which can often be explained using a simple model of the system's constituent parts. In this work, we introduce the Neural Relational Inference (NRI) model: an unsupervised model that learns to infer interactions while simultaneously learning the dynamics purely from observational data. Our model takes the form of a variational auto-encoder, in which the latent code represents the underlying interaction graph and the reconstruction is based on graph neural networks. In experiments on simulated physical systems, we show that our NRI model can accurately recover ground-truth interactions in an unsupervised manner. We further demonstrate that we can find an interpretable structure and predict complex dynamics in real motion capture and sports tracking data.
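To make the "latent code as interaction graph" idea concrete, here is a toy NumPy sketch of one message-passing step of an NRI-style decoder. This is not the authors' implementation: the weights are random placeholders, the edge-type assignments are sampled arbitrarily rather than inferred by an encoder, and all sizes are made up for illustration. It only shows the core mechanism: each directed edge carries a one-hot latent edge type, messages are computed by a per-edge-type network, and aggregated messages drive a residual update of each node's state.

```python
import numpy as np

rng = np.random.default_rng(0)

N, D, K, H = 5, 4, 2, 8  # nodes, state dim, edge types, hidden size

# Toy node states and a sampled interaction graph:
# z[i, j] is a one-hot edge type for the directed edge i -> j.
x = rng.normal(size=(N, D))
z = np.eye(K)[rng.integers(0, K, size=(N, N))]

# Tiny per-edge-type message networks (hypothetical, untrained weights).
W_msg = rng.normal(size=(K, 2 * D, H)) * 0.1
W_out = rng.normal(size=(H, D)) * 0.1

def nri_decoder_step(x, z):
    """One GNN message-passing step conditioned on latent edge types z."""
    # Build the [sender state, receiver state] pair for every edge (i, j).
    pairs = np.concatenate(
        [np.repeat(x, N, axis=0), np.tile(x, (N, 1))], axis=1
    ).reshape(N, N, 2 * D)
    # Compute a candidate message under every edge type ...
    per_type = np.tanh(np.einsum("ijd,kdh->ijkh", pairs, W_msg))
    # ... then select/mix by the one-hot latent edge type.
    msgs = np.einsum("ijk,ijkh->ijh", z, per_type)
    incoming = msgs.sum(axis=0)  # aggregate messages at each receiver
    return x + incoming @ W_out  # residual update predicts the next state

x_next = nri_decoder_step(x, z)
print(x_next.shape)  # (5, 4)
```

In the full model the edge types `z` would come from a Gumbel-softmax sample of the encoder's posterior over the interaction graph, and the message and output networks would be learned MLPs rather than single random matrices.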