To complement the Google DeepMind News and Discussions thread, I figured "why not open this one as well?" It took three years, but here it is.
The inaugural post dates back to March:
Reptile: A Scalable Meta-Learning Algorithm
We’ve developed a simple meta-learning algorithm called Reptile which works by repeatedly sampling a task, performing stochastic gradient descent on it, and updating the initial parameters towards the final parameters learned on that task. Reptile is the application of the Shortest Descent algorithm to the meta-learning setting, and is mathematically similar to first-order MAML (a version of the well-known MAML algorithm that only needs black-box access to an optimizer such as SGD or Adam), with similar computational efficiency and performance.
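To make the loop concrete, here's a minimal sketch of one Reptile meta-iteration in PyTorch. This isn't OpenAI's released implementation; `sample_task` and the task's `sample_batch` method are hypothetical placeholders standing in for whatever task distribution you meta-train on.

```python
import copy
import torch
import torch.nn.functional as F

def reptile_step(model, sample_task, inner_steps=5, inner_lr=0.01, meta_lr=0.1):
    """One Reptile meta-iteration: sample a task, run SGD on it, then move
    the initial parameters toward the task-adapted parameters."""
    task = sample_task()                # a learning problem (hypothetical interface)
    adapted = copy.deepcopy(model)      # inner loop starts from the current init
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for _ in range(inner_steps):        # ordinary SGD on this task
        x, y = task.sample_batch()      # hypothetical: a batch of (inputs, labels)
        opt.zero_grad()
        loss = F.cross_entropy(adapted(x), y)
        loss.backward()
        opt.step()
    # Outer update: phi <- phi + meta_lr * (phi_adapted - phi),
    # i.e. step the init toward the final parameters learned on the task.
    with torch.no_grad():
        for p, q in zip(model.parameters(), adapted.parameters()):
            p.add_(meta_lr * (q - p))
```

Note that the outer update uses only the parameter difference, no second-order gradients, which is why plain SGD or Adam suffices as the inner optimizer.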
Meta-learning is the process of learning how to learn. A meta-learning algorithm takes in a distribution of tasks, where each task is a learning problem, and it produces a quick learner — a learner that can generalize from a small number of examples. One well-studied meta-learning problem is few-shot classification, in which each task is a classification problem where the learner sees only 1–5 input-output examples per class and must then classify new inputs. Below, you can try out our interactive demo of 1-shot classification, which uses Reptile.
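For concreteness, here's a rough sketch of how one might sample an N-way, K-shot classification task from a labeled dataset; the `examples_by_class` mapping is an assumption for illustration, not something defined in the post.

```python
import random

def sample_few_shot_task(examples_by_class, n_way=5, k_shot=1, query_size=5):
    """Sample an N-way K-shot task: k_shot labeled examples per class to
    learn from (support set), plus held-out examples to classify (query set)."""
    classes = random.sample(list(examples_by_class), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        shots = random.sample(examples_by_class[cls], k_shot + query_size)
        support += [(x, label) for x in shots[:k_shot]]
        query += [(x, label) for x in shots[k_shot:]]
    return support, query
```

In the 1-shot demo setting described above, `k_shot=1`: the learner gets a single example per class before being asked to label new inputs.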