This is a PyTorch implementation/tutorial of Deep Q Networks (DQN) from the paper Playing Atari with Deep Reinforcement Learning. It includes the dueling network architecture, a prioritized replay buffer, and double Q-network training.
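To make the "double Q" idea concrete, here is a minimal tabular sketch (not the repository's code, which uses neural networks, a replay buffer, and dueling heads; all names here, such as `q_a` and `q_b`, are hypothetical): the action is selected by one value estimate but evaluated by the other, which reduces the overestimation bias of standard Q-learning.

```python
def double_q_update(q_a, q_b, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One double Q-learning update on tabular value estimates.

    q_a, q_b: dicts mapping state -> {action: value}.
    The greedy next action is chosen with q_a but its value is
    read from q_b -- the core double-Q decoupling.
    """
    best_next = max(q_a[s_next], key=q_a[s_next].get)  # argmax under q_a
    target = r + gamma * q_b[s_next][best_next]        # evaluated under q_b
    q_a[s][a] += alpha * (target - q_a[s][a])          # move q_a toward target
    return q_a[s][a]
```

In the deep-RL setting described above, `q_a` corresponds to the online network (which picks the argmax action) and `q_b` to the target network (which scores it).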
Graph neural networks are intimately related to the partial differential equations governing information diffusion on graphs. Thinking of GNNs as PDEs leads to a broad new class of graph ML methods.
Any fundamental discovery involves a significant degree of risk. If an idea is guaranteed to work then it moves from the realm of research to engineering. Unfortunately, this also means that most…
Have you ever wondered what the machine learning frameworks of the '20s will look like? In this essay, I examine the directions AI research might take and the requirements they impose on the tools at our disposal, concluding with an overview of what I believe to be the two strong candidates: `JAX` and `S4TF`.
Ever since graduation, people have been asking me: “What now?” My answer has been an unequivocal “I don’t know.” I used to think that by the time I finish...
Download Open Datasets on 1000s of Projects + Share Projects on One Platform. Explore Popular Topics Like Government, Sports, Medicine, Fintech, Food, More. Flexible Data Ingestion.
An attempt to create a convenient workspace that makes it possible to work with multiple custom Python libraries while keeping all the benefits of Google Colaboratory.
This is a short collection of lessons learned using Colab as my main coding learning environment for the past few months. Some tricks are Colab specific, others are general Jupyter tips, and still others are filesystem related, but all have proven useful for me.
I’ve been engrossed in a few recent academic pre-prints which are skeptical of AI. These are not just from a business / hype-train perspective, but digging deeper into how machine learning research…
My name is Daniel Holden. I'm a researcher at Ubisoft Montreal using Machine Learning for character animation and other applications. I'm also a Digital Artist and Writer. My interests are Computer Graphics, Game Development, Theory of Computation, and Programming Languages.
One of the hardest concepts to grasp when learning about Convolutional Neural Networks for object detection is the idea of anchor boxes. It is also one of the most important parameters you can tune…
Unlike task-specific algorithms, Deep Learning is part of the Machine Learning family of methods based on learning data representations. With massive amounts of computational power, machines can now recognize…
Deep learning has changed the way we work and compute, and it has made our lives a lot easier. As Andrej Karpathy put it, it is indeed Software 2.0, as we have taught machines to figure things out…
Part I: Intuition (you are reading it now) Part II: How Capsules Work Part III: Dynamic Routing Between Capsules Part IV: CapsNet Architecture Quick announcement about our new publication AI³. We are…
Many call artificial intelligence (AI) a “black box”, and it kinda is. One of the biggest problems of AI is that it’s incredibly difficult to understand how the data is being interpreted. If you…
Do you want a cheap, high-performance GPU for deep learning? In this blog post I will guide you through the choices, so you can find the GPU that is best for you.
In this article, we’re going to introduce self-organizing maps. We assume the reader has prior experience with neural networks. Self-organizing maps are a class of unsupervised learning neural…
In this tutorial I’ll explain how to build a simple working Recurrent Neural Network in TensorFlow. This is the first in a series of seven parts where various aspects and techniques of building…
The purpose of deep learning is to learn a representation of high-dimensional and noisy data using a sequence of differentiable functions, i.e., geometric transformations, that can perhaps be used…
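The "sequence of differentiable functions" can be sketched in a few lines of plain Python: two affine maps (geometric transformations) with a smooth nonlinearity, composed into one representation function. The weights below are made-up constants purely for illustration, not a trained model.

```python
import math

def layer(x, w, b):
    # One differentiable building block: an affine transformation of x
    # (matrix-vector product plus bias) squashed by tanh.
    return [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + bi)
            for row, bi in zip(w, b)]

def represent(x):
    # Compose two such functions: f(x) = tanh(W2 · tanh(W1 · x + b1) + b2).
    h = layer(x, w=[[1.0, -1.0], [0.5, 0.5]], b=[0.0, 0.1])  # layer 1: R^2 -> R^2
    return layer(h, w=[[2.0, 1.0]], b=[-0.2])                # layer 2: R^2 -> R^1
```

Because each stage is differentiable, the composition is too, which is what allows the whole representation to be tuned by gradient descent.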
by the Computer Vision Department of NTRLab. Suppose we are given a set of distinct points P = {(xᵢ, yᵢ) ∈ ℝᵐ × ℝ}, i = 1, …, n, which we regard as a set of test samples xᵢ ∈ ℝᵐ with known answers yᵢ ∈ ℝ.
Next time you’re at King’s Cross station, take a moment to think about this. Just yards from where you’re standing, the world’s most advanced artificial intelligence (AI) technology is being developed — by a London company called DeepMind.
I teach deep learning both for a living (as the main deepsense.ai instructor, in a Kaggle-winning team) and as a part of my volunteering with the Polish Chi...
Now you can develop deep learning applications with Google Colaboratory, on a free Tesla K80 GPU, using Keras, TensorFlow and PyTorch. Hello! I will show you how to use Google Colab, Google’s free…
The codebase contains a replica of the AlphaZero methodology, built in Python and Keras. Gain a deeper understanding of how AlphaZero works and adapt the code to plug in new games.
It is of course an outdated model of how neurons actually work. Current neural network research and development is driven more by mathematical techniques that ensure continuity and…
Geoffrey Hinton has finally expressed what many have been uneasy about. In a recent AI conference, Hinton remarked that he was “deeply suspicious” of back-propagation, and said “My view is throw it…
Quite a few people have asked me recently about choosing a GPU for Machine Learning. As it stands, success with Deep Learning depends heavily on having the right hardware to work with. When I was…