Have you ever wondered what the machine learning frameworks of the '20s will look like? In this essay, I examine the directions AI research might take and the requirements they impose on the tools at our disposal, concluding with an overview of what I believe to be the two strong candidates: `JAX` and `S4TF`.
Every minute, South Korea's household debt rises by US$90,000. Every 12 minutes, a Korean is declared bankrupt. Ordinary households now owe some...
This program aims to bring together researchers across the disciplines that have played a role in developing the theory of reinforcement learning. It will review past developments and identify promising directions of research, with an emphasis on existing open problems, ranging from the design of efficient, scalable exploration algorithms to the control of learning and planning. It also aims to deepen our understanding of model-free vs. model-based learning and control, and of efficient methods for exploiting structure and adapting to easier environments.
Military hierarchies are, by necessity, rigid structures. DARPA’s ‘Mosaic Warfare’ project aims for something much more fluid and adaptable, with AI doing the logistical grunt work so human commanders can get creative.
If you build or maintain software, you’re familiar with GitHub. Millions of developers rely on the massive code repository for everything from source code
Any fundamental discovery involves a significant degree of risk. If an idea is guaranteed to work then it moves from the realm of research to engineering. Unfortunately, this also means that most…
- Sep. 28 – Oct. 2, 2020
- Lihong Li (Google Brain; chair), Marc G. Bellemare (Google Brain)
- The success of deep neural networks in modeling complicated functions has recently been leveraged by the reinforcement learning community, resulting in algorithms that can learn in environments previously thought to be far too large. Successful applications span domains from robotics to health care. However, this success is not well understood from a theoretical perspective. What modeling choices are necessary for good performance, and how does the flexibility of deep neural nets help learning? This workshop will connect practitioners with theoreticians, with the goal of understanding the most impactful modeling decisions and the properties of deep neural networks that make them so successful. Specifically, we will study the approximation capabilities of deep neural nets in the context of reinforcement learning.
- Aug. 19 – Aug. 28, 2020
- Nike Sun (Massachusetts Institute of Technology; chair), Jian Ding (University of Pennsylvania), Ronen Eldan (Weizmann Institute), Elchanan Mossel (Massachusetts Institute of Technology), Joe Neeman (University of Texas at Austin), Jelani Nelson (UC Berkeley), Tselil Schramm (Stanford University; Microsoft Research Fellow)
These are articles about the techniques I developed and lessons I learned while toying or working with computer graphics. Most of it is self-taught and there's lots of reinventing the wheel (which I recommend), but also some innovative and new discoveries that often aren't documented anywhere else (and if any of this content becomes part of your paper or the center of your PhD thesis, I feel it'd be fair to mention this website).
TL;DR: Have you ever wondered what is so special about convolution? In this post, I derive the convolution from first principles and show that it naturally emerges from translational symmetry. During…
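The key property behind that derivation — that convolution commutes with translation — can be checked numerically. Below is a minimal sketch (my own illustration, not code from the post) using a hand-rolled circular convolution and a cyclic shift:

```python
import numpy as np

def circ_conv(x, k):
    """Circular (cyclic) convolution of signal x with kernel k."""
    n = len(x)
    return np.array([sum(x[(i - j) % n] * k[j] for j in range(len(k)))
                     for i in range(n)])

rng = np.random.default_rng(0)
x = rng.standard_normal(8)   # arbitrary signal
k = rng.standard_normal(3)   # arbitrary kernel

# Translation equivariance: convolving a shifted signal gives the
# same result as shifting the convolved signal.
lhs = circ_conv(np.roll(x, 1), k)
rhs = np.roll(circ_conv(x, k), 1)
assert np.allclose(lhs, rhs)
```

The same check passes for any shift amount, which is exactly the equivariance that the post derives convolution from.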
- Aug. 31 – Sep. 4, 2020
- Csaba Szepesvari (University of Alberta, Google DeepMind; chair), Emma Brunskill (Stanford University), Sébastien Bubeck (MSR), Alan Malek (DeepMind), Sean Meyn (University of Florida), Ambuj Tewari (University of Michigan), Mengdi Wang (Princeton)
H. Tajima and F. Fujisawa (2020). arXiv:2007.00926. Comment: 6 pages, 5 figures; accepted by Scientific and Educational Reports of the Faculty of Science and Technology, Kochi University.
Q. Qu, Z. Zhu, X. Li, M. Tsakiris, J. Wright, and R. Vidal (2020). arXiv:2001.06970. Comment: QQ and ZZ contributed equally to the work. Invited review paper for the IEEE Signal Processing Magazine Special Issue on non-convex optimization for signal processing and machine learning. This article contains 26 pages with 11 figures.
M. Lindvall and J. Molin (2020). arXiv:2001.07455. Comment: Accepted for presentation in poster format at the ACM CHI'19 Workshop "Emerging Perspectives in Human-Centered Machine Learning".
R. Hanocka, G. Metzer, R. Giryes, and D. Cohen-Or (2020). arXiv:2005.11084. Comment: SIGGRAPH 2020; project page: https://ranahanocka.github.io/point2mesh/.
M. Cook, A. Zare, and P. Gader (2020). arXiv:2007.01263. Comment: 6 pages, 4 figures; presented at the ICML 2020 Workshop on Uncertainty and Robustness in Deep Learning.
H. Chawla, M. Jukola, T. Brouns, E. Arani, and B. Zonooz (2020). arXiv:2007.12918. Comment: Accepted at the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).