While deep learning has achieved remarkable success in supervised and reinforcement learning problems, such as image classification, speech recognition, and game playing, these models are, to a large degree, specialized for the single task they are trained for. This course will cover the setting where there are multiple tasks to be solved, and study how the structure arising from multiple tasks can be leveraged to learn more efficiently or effectively. This includes:
- goal-conditioned reinforcement learning techniques that leverage the structure of the provided goal space to learn many tasks significantly faster
- meta-learning methods that aim to learn efficient learning algorithms that can learn new tasks quickly
- curriculum and lifelong learning, where the problem requires learning a sequence of tasks, leveraging their shared structure to enable knowledge transfer
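The core idea behind the goal-conditioned techniques above can be sketched in a few lines: instead of training one policy per task, a single policy takes the goal as an extra input, so one set of parameters covers the whole goal space. The dimensions and the linear policy below are purely illustrative assumptions, not from any specific course material.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the course).
STATE_DIM, GOAL_DIM, ACTION_DIM = 4, 2, 3

# One shared weight matrix acts over the concatenated state-goal input,
# so the same parameters serve every goal in the goal space.
W = rng.normal(size=(ACTION_DIM, STATE_DIM + GOAL_DIM))

def goal_conditioned_action(state, goal):
    """Compute an action from a state-goal pair: a = W @ [s; g]."""
    x = np.concatenate([state, goal])
    return W @ x

state = rng.normal(size=STATE_DIM)
# The same policy produces different behavior for different goals.
a1 = goal_conditioned_action(state, np.array([1.0, 0.0]))
a2 = goal_conditioned_action(state, np.array([0.0, 1.0]))
```

In practice the linear map would be a neural network, but the structural point is the same: the goal enters as an input rather than defining a separate model.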
This is a graduate-level course. By the end of the course, students will be able to understand and implement state-of-the-art multi-task learning and meta-learning algorithms and be ready to conduct research on these topics.
Programmers think dynamic languages like Python are easier to use than statically typed ones, but why? I look at uniquely dynamic programming idioms and their static alternatives, identifying a few broad trends that impact language usability.
Gibson’s underlying database of spaces includes 572 full buildings composed of 1447 floors covering a total area of 211,000 m². The database is collected from real indoor spaces using 3D scanning and reconstruction. For each space, we provide: the 3D reconstruction, RGB images, depth, surface normals, and, for a fraction of the spaces, semantic object annotations. On this page you can see various visualizations for each space, including 3D dissections, exploration using a randomly controlled Husky agent, and standard point-to-point navigation episodes.
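A point-to-point navigation episode like the ones visualized here can be sketched abstractly: an agent starts at one location and must reach a goal within a step budget. The sketch below reduces a floor plan to a 2-D grid and uses a randomly controlled agent, mirroring the random-exploration demo; it does not use the actual Gibson API, and all names are illustrative.

```python
import random

def random_navigation_episode(start, goal, size=8, max_steps=200, seed=0):
    """Run one episode: a random agent tries to reach `goal` from `start`
    on a size x size grid. Returns (success, steps_taken)."""
    rng = random.Random(seed)
    pos = list(start)
    for step in range(max_steps):
        if tuple(pos) == goal:
            return True, step              # goal reached
        axis = rng.randrange(2)            # pick the x or y axis
        # Take a random unit step, clipped to the grid boundary.
        pos[axis] = min(size - 1, max(0, pos[axis] + rng.choice([-1, 1])))
    return False, max_steps                # episode timed out

success, steps = random_navigation_episode((0, 0), (3, 3))
```

A real episode would replace the grid with the scanned mesh and the random controller with a navigation policy, but the episode structure (start, goal, step budget, success check) is the same.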