Hi, I’m Greg, and for the last two years I’ve been developing a 3D fractal exploration game, which started as just a “what if” experiment. I would describe myself as a technical artist, meaning I am…
A full-stack GraphQL tutorial to go from zero to production, covering the basics and advanced concepts. Includes tutorials for Apollo, Relay, React, and Node.js.
This is a short collection of lessons learned from using Colab as my main coding and learning environment for the past few months. Some tricks are Colab-specific, others are general Jupyter tips, and still more are filesystem-related, but all have proven useful for me.
An attempt to create a convenient workspace that makes it possible to work with multiple custom Python libraries while keeping all the benefits of Google Colaboratory.
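The article walks through its own setup; as a rough illustration of the core pattern, here is a minimal sketch that mounts Google Drive and puts a library directory on `sys.path` so custom packages import normally (the paths and the `mylib` package are hypothetical):

```python
import sys
from google.colab import drive

# Mount Google Drive so custom libraries persist across Colab sessions.
drive.mount('/content/drive')  # prompts for authorization on first run

# Hypothetical directory on Drive holding the custom packages.
LIB_DIR = '/content/drive/MyDrive/libs'
if LIB_DIR not in sys.path:
    sys.path.append(LIB_DIR)

import mylib  # hypothetical custom package living under LIB_DIR
```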
A list of 51 TensorFlow deep learning tutorial videos. TensorFlow™ is an open-source software library for numerical computation using data flow graphs.
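As a quick illustration of the "data flow graphs" idea the blurb mentions, here is a minimal sketch using the modern `tf.function` API (older videos in such lists typically use 1.x sessions instead):

```python
import tensorflow as tf

# @tf.function traces the Python function into a graph of operations
# that TensorFlow can then optimize and execute as a unit.
@tf.function
def affine(x, w, b):
    return tf.matmul(x, w) + b  # two ops, MatMul -> Add, wired as a graph

x = tf.ones((1, 3))
w = tf.ones((3, 2))
b = tf.zeros((2,))
print(affine(x, w, b))  # tf.Tensor of shape (1, 2), value [[3. 3.]]
```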
This book explains collision detection algorithms using basic shapes like circles, rectangles, and lines so you can implement them in your own projects.
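As a taste of the book's subject (this sketch is mine, not the book's code), circle-circle collision reduces to comparing the distance between centers with the sum of the radii:

```python
import math

# Two circles overlap exactly when the distance between their
# centers is at most the sum of their radii.
def circles_collide(x1, y1, r1, x2, y2, r2):
    dx, dy = x2 - x1, y2 - y1
    return math.hypot(dx, dy) <= r1 + r2

print(circles_collide(0, 0, 5, 8, 0, 4))   # True:  distance 8 <= 5 + 4
print(circles_collide(0, 0, 5, 10, 0, 4))  # False: distance 10 > 5 + 4
```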
This book is an interactive introduction to the theory and applications of complex functions from a visual point of view. However, it does not cover all the topics of a standard course. In fact, it is a collection of selected topics and interactive applets that can be used as a supplementary learning resource by anyone interested in learning this fascinating branch of mathematics.
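One standard technique for visualizing complex functions is domain coloring; here is a minimal static sketch of the idea (the example function is arbitrary, and this is my illustration, not an applet from the book), mapping arg(f(z)) to hue and |f(z)| to brightness:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import hsv_to_rgb

# Sample the complex plane on a grid and evaluate an example function.
x = np.linspace(-2, 2, 400)
X, Y = np.meshgrid(x, x)
Z = X + 1j * Y
F = (Z**2 - 1) / (Z**2 + 1)  # arbitrary example function

H = (np.angle(F) / (2 * np.pi)) % 1.0   # hue encodes the argument
S = np.ones_like(H)                     # full saturation
V = 1.0 - 1.0 / (1.0 + np.abs(F)**0.5)  # brightness encodes the modulus

plt.imshow(hsv_to_rgb(np.dstack((H, S, V))),
           extent=[-2, 2, -2, 2], origin='lower')
plt.title('Domain coloring of f(z) = (z^2 - 1)/(z^2 + 1)')
plt.show()
```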
This post describes my outline/structure for a technical paper. I have found that a fairly specific structure works best for abstracts and introductions.
The program focused on the following four themes:
- Optimization: How and why can deep models be fit to observed (training) data?
- Generalization: Why do these trained models work well on similar but unobserved (test) data?
- Robustness: How can we analyze and improve the performance of these models when applied outside their intended conditions?
- Generative methods: How can deep learning be used to model probability distributions?
This program aims to bring together researchers across the disciplines that have played a role in developing the theory of reinforcement learning. It will review past developments and identify promising directions of research, with an emphasis on addressing existing open problems, ranging from the design of efficient, scalable algorithms for exploration to the control of learning and planning. It also aims to deepen the understanding of model-free vs. model-based learning and control, and the design of efficient methods to exploit structure and adapt to easier environments.
- Aug. 31 – Sep. 4, 2020
- Csaba Szepesvari (University of Alberta, Google DeepMind; chair), Emma Brunskill (Stanford University), Sébastien Bubeck (MSR), Alan Malek (DeepMind), Sean Meyn (University of Florida), Ambuj Tewari (University of Michigan), Mengdi Wang (Princeton)
- Sep. 28 – Oct. 2, 2020
- Lihong Li (Google Brain; chair), Marc G. Bellemare (Google Brain)
- The success of deep neural networks at modeling complicated functions has recently been leveraged by the reinforcement learning community, resulting in algorithms that are able to learn in environments previously thought to be much too large. Successful applications span domains from robotics to health care. However, this success is not well understood from a theoretical perspective. What modeling choices are necessary for good performance, and how does the flexibility of deep neural nets help learning? This workshop will connect practitioners with theoreticians, with the goal of understanding the most impactful modeling decisions and the properties of deep neural networks that make them so successful. Specifically, we will study the approximation ability of deep neural nets in the context of reinforcement learning.
This is the Graph Neural Networks: Hands-on Session from the Stanford 2019 Fall CS224W course.
In this tutorial, we will explore the implementation of graph neural networks and investigate what representations these networks learn. Along the way, we'll see how PyTorch Geometric and TensorBoardX can help us with constructing and training graph models.
The PyTorch Geometric tutorial portion starts at 0:33:30.
Details on the following (a minimal GCN sketch appears after the list):
* Graph Convolutional Neural Networks (GCN)
* Custom Convolutional Model
* Message passing
* Aggregation functions
* Update
* Graph Pooling
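As referenced above, here is a minimal sketch of the kind of model the session builds (not the session's actual notebook; the tiny graph and feature dimensions below are made up for illustration):

```python
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# A tiny graph: 3 nodes with 8 features each, 2 undirected edges
# stored as directed pairs in edge_index.
edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 0, 2, 1]], dtype=torch.long)
x = torch.randn(3, 8)
data = Data(x=x, edge_index=edge_index)

class GCN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        # Each GCNConv layer performs message passing and aggregation.
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)  # per-node class logits

model = GCN(in_dim=8, hidden_dim=16, num_classes=2)
out = model(data.x, data.edge_index)
print(out.shape)  # torch.Size([3, 2])
```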
An introduction to what Meshes, Shaders, and Materials are in Unity, how to set Shader Properties from C#, a brief look at Forward vs. Deferred rendering, and some information about Material instances and Batching. HLSL | Unity Shader Tutorials, @Cyanilux
This is a PyTorch implementation/tutorial of Deep Q Networks (DQN) from the paper Playing Atari with Deep Reinforcement Learning. It includes a dueling network architecture, a prioritized replay buffer, and double Q-network training.
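The heart of the double-Q trick is how the bootstrap target is computed; here is a minimal sketch of that step (my own illustration, not code from the linked implementation):

```python
import torch
import torch.nn as nn

# Double-DQN target: the online network selects the greedy next action,
# the target network evaluates it. This decoupling reduces the
# overestimation bias of vanilla Q-learning.
def double_dqn_target(online_net, target_net, reward, next_state, done, gamma=0.99):
    with torch.no_grad():
        next_action = online_net(next_state).argmax(dim=1, keepdim=True)   # select
        next_q = target_net(next_state).gather(1, next_action).squeeze(1)  # evaluate
        return reward + gamma * next_q * (1.0 - done)

# Tiny hypothetical setup: 4-dim states, 2 actions, batch of 3 transitions.
online_net = nn.Linear(4, 2)
target_net = nn.Linear(4, 2)
target = double_dqn_target(online_net, target_net,
                           reward=torch.zeros(3),
                           next_state=torch.randn(3, 4),
                           done=torch.zeros(3))
print(target.shape)  # torch.Size([3])
```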
A tutorial that teaches you everything it takes to render 3D graphics with the Vulkan API. It covers everything from Windows/Linux setup to rendering and debugging.
R. Sharipov (2004). arXiv:math/0405323. Textbook, AmSTeX, 143 pages, amsppt style, prepared for double-sided printing on letter-size paper.
R. Sharipov (2004). arXiv:math/0412421. Textbook, AmSTeX, 132 pages, amsppt style, prepared for double-sided printing on letter-size paper.
A. Slivkins (2019). arXiv:1904.07272. The manuscript is complete, but comments are very welcome! To be published with Foundations and Trends in Machine Learning.