Consolidating the Meta-Learning Zoo: A Unifying Perspective as Posterior Predictive Inference

Abstract

A plethora of methods combining meta-learning with deep neural networks has recently been proposed, achieving great success in applications such as few-shot learning. Most existing work can be characterized as gradient-based, metric-based, or amortized-MAP-based meta-learning. Given the volume of recent work, a unifying view is useful for understanding and improving these methods, yet existing frameworks are limited to specific families of approaches, typically gradient-based methods. In this paper we develop a framework for meta-learning approximate probabilistic inference for prediction, or ML-PIP for short. ML-PIP provides this unifying perspective in terms of amortizing posterior predictive distributions. We show that ML-PIP re-frames and extends existing probabilistic interpretations of meta-learning to cover both point estimates and variational posteriors, as well as a broader class of methods, including gradient-based meta-learning, metric-based meta-learning, amortized MAP inference, and conditional probability modelling.
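As a rough illustration of what "amortizing a posterior predictive distribution" can look like in practice, the sketch below maps a few-shot support set to the parameters of a Gaussian posterior over task-specific linear weights and forms the predictive distribution for query points by Monte Carlo averaging. This is a minimal sketch under simplifying assumptions, not the paper's implementation; all names (SetEncoder, AmortizedPredictor, etc.) are illustrative.

```python
# Minimal sketch of amortized posterior predictive inference for few-shot
# regression, using PyTorch. Illustrative only; names are assumptions.
import torch
import torch.nn as nn


class SetEncoder(nn.Module):
    """Permutation-invariant encoder: maps a support set to a task embedding."""

    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, hid_dim))

    def forward(self, xs, ys):
        # Embed each (input, label) pair, then mean-pool over the set.
        h = self.phi(torch.cat([xs, ys], dim=-1))
        return h.mean(dim=0)


class AmortizedPredictor(nn.Module):
    """Amortizes a Gaussian posterior over task-specific linear weights and
    approximates the posterior predictive by averaging over samples."""

    def __init__(self, x_dim, y_dim, hid_dim=64):
        super().__init__()
        self.encoder = SetEncoder(x_dim + y_dim, hid_dim)
        # Map the task embedding to mean and log-variance of the task weights.
        self.to_mu = nn.Linear(hid_dim, x_dim * y_dim)
        self.to_logvar = nn.Linear(hid_dim, x_dim * y_dim)
        self.x_dim, self.y_dim = x_dim, y_dim

    def forward(self, support_x, support_y, query_x, num_samples=10):
        r = self.encoder(support_x, support_y)
        mu, logvar = self.to_mu(r), self.to_logvar(r)
        std = torch.exp(0.5 * logvar)
        preds = []
        for _ in range(num_samples):
            # Sample task weights from the amortized posterior (reparameterized).
            w = (mu + std * torch.randn_like(std)).view(self.x_dim, self.y_dim)
            preds.append(query_x @ w)
        # Average over posterior samples to approximate the posterior predictive.
        return torch.stack(preds).mean(dim=0)


# Toy usage on a 5-shot regression task (shapes only, no training loop shown).
model = AmortizedPredictor(x_dim=2, y_dim=1)
sx, sy = torch.randn(5, 2), torch.randn(5, 1)   # support set
qx = torch.randn(20, 2)                          # query inputs
print(model(sx, sy, qx).shape)                   # torch.Size([20, 1])
```

Because the posterior over task weights is produced by a single forward pass through the encoder rather than per-task optimization, adaptation to a new task is a feed-forward computation, which is the sense in which the inference is amortized.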

Publication
In 2nd NeurIPS Workshop on Meta-Learning
Jonathan Gordon
Machine Learning PhD Student

My research interests include probabilistic machine learning, deep learning, and approximate Bayesian inference.
