While deep learning has achieved remarkable success in supervised and reinforcement learning problems, such as image classification, speech recognition, and game playing, these models are, to a large degree, specialized for the single task they are trained for. This course will cover the setting where there are multiple tasks to be solved, and study how the structure arising from multiple tasks can be leveraged to learn more efficiently or effectively. This includes:
- goal-conditioned reinforcement learning techniques that leverage the structure of the provided goal space to learn many tasks significantly faster
- meta-learning methods that aim to acquire efficient learning algorithms which can master new tasks quickly (a minimal code sketch follows this list)
- curriculum and lifelong learning, where the problem requires learning a sequence of tasks, leveraging their shared structure to enable knowledge transfer
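To make the meta-learning item above concrete, here is a minimal sketch of a MAML-style "learning to learn" loop (Finn et al., 2017) on a toy family of sine-regression tasks. This is an illustrative example rather than a course implementation: the network size, learning rates, and the `sample_sine_task` helper are all assumptions made for the sketch.

```python
import math
import torch

def make_params():
    # Tiny MLP (1 -> 40 -> 1) kept as explicit parameter tensors so the
    # inner-loop update below can remain differentiable.
    p = [0.1 * torch.randn(1, 40), torch.zeros(40),
         0.1 * torch.randn(40, 1), torch.zeros(1)]
    return [t.requires_grad_() for t in p]

def mlp(params, x):
    w1, b1, w2, b2 = params
    return torch.relu(x @ w1 + b1) @ w2 + b2

def sample_sine_task(n_support=10, n_query=10):
    # Hypothetical task distribution: each "task" is regressing a sine
    # wave with a random amplitude and phase.
    amp = torch.empty(1).uniform_(0.1, 5.0)
    phase = torch.empty(1).uniform_(0.0, math.pi)
    def draw(n):
        x = torch.empty(n, 1).uniform_(-5.0, 5.0)
        return x, amp * torch.sin(x + phase)
    return draw(n_support), draw(n_query)

params = make_params()
meta_opt = torch.optim.Adam(params, lr=1e-3)
inner_lr = 0.01

for step in range(1000):
    meta_opt.zero_grad()
    meta_loss = 0.0
    for _ in range(4):  # tasks per meta-batch
        (xs, ys), (xq, yq) = sample_sine_task()
        # Inner loop: one gradient step on the task's support set;
        # create_graph=True lets the meta-gradient flow through the update.
        support_loss = ((mlp(params, xs) - ys) ** 2).mean()
        grads = torch.autograd.grad(support_loss, params, create_graph=True)
        adapted = [p - inner_lr * g for p, g in zip(params, grads)]
        # Outer objective: how well the adapted parameters generalize to
        # held-out query data from the same task.
        meta_loss = meta_loss + ((mlp(adapted, xq) - yq) ** 2).mean()
    meta_loss.backward()
    meta_opt.step()
```

The essential structure is the nesting: an inner loop adapts shared parameters to each task with a gradient step, and the outer loop trains those shared parameters so that this adaptation works well on held-out data, i.e., it learns an initialization from which new tasks are learned quickly.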
This is a graduate-level course. By the end of the course, students will be able to understand and implement state-of-the-art multi-task learning and meta-learning algorithms, and will be ready to conduct research on these topics.