I've been reading about them a bit more recently, and I firmly believe that this is where we'll see many interesting developments in the coming months.
Here's an arXiv paper on PNNs:
Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
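The "lateral connections" bit is the core trick, so here's a minimal sketch of the idea in PyTorch. This is my own illustration, not the paper's code (their agents use convolutional columns and extra adapter nonlinearities); the class, layer sizes, and variable names below are made up. The idea: each new task gets a fresh column, the old columns are frozen, and the new column reads the old columns' hidden activations through small adapter layers.

```python
import torch
import torch.nn as nn

class ProgressiveColumn(nn.Module):
    """One column of a progressive-network-style model (illustrative only).

    Earlier columns stay frozen; a new column receives lateral inputs from
    their hidden activations, so old knowledge is reused but never overwritten.
    """
    def __init__(self, in_dim, hidden_dim, out_dim, n_prev_columns=0):
        super().__init__()
        self.layer1 = nn.Linear(in_dim, hidden_dim)
        self.layer2 = nn.Linear(hidden_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, out_dim)
        # One lateral adapter per previous column: it maps that column's
        # layer-1 activation into this column's layer-2 input.
        self.laterals = nn.ModuleList(
            [nn.Linear(hidden_dim, hidden_dim) for _ in range(n_prev_columns)]
        )

    def forward(self, x, prev_hiddens=()):
        h1 = torch.relu(self.layer1(x))
        h2 = self.layer2(h1)
        for adapter, prev_h1 in zip(self.laterals, prev_hiddens):
            h2 = h2 + adapter(prev_h1)  # lateral connection from a frozen column
        h2 = torch.relu(h2)
        return self.head(h2), h1  # expose h1 so later columns can tap into it

# Task A: train column_a as usual. Task B: freeze column_a and train column_b,
# which sees column_a's hidden features through its lateral adapter.
column_a = ProgressiveColumn(in_dim=8, hidden_dim=32, out_dim=4)
column_b = ProgressiveColumn(in_dim=8, hidden_dim=32, out_dim=4, n_prev_columns=1)
for p in column_a.parameters():
    p.requires_grad = False  # column A can no longer forget task A

x = torch.randn(16, 8)                        # dummy batch of observations
with torch.no_grad():
    _, h1_a = column_a(x)                     # frozen features from task A
out_b, _ = column_b(x, prev_hiddens=(h1_a,))  # task B output, reusing task A features
```

Because the old column's weights never change, the network literally can't forget the earlier task; it can only add capacity and borrow features, which is exactly the "immune to forgetting while leveraging prior knowledge" claim in the abstract.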
And here's a blog post on deep RL.
It seems a bit shocking that we've had all this progress, and yet we still haven't gotten around to combining these two approaches. Deep reinforcement learning is a very, very important step towards the development of artificial general intelligence, but none of it matters if the AI can master Pong yet completely fail at Pac-Man, despite both games being on the Atari 2600 and relying on some similar skills.
Imagine you learned how to bake a cake. After 50 tries, you finally bake a perfect cake, better than anyone else can, in fact. Wonderful, right? But when you go to bake brownies, you have to learn everything from scratch. You need 50 more shots just to relearn how to bake these things, despite the fact that there isn't much difference between a cake and brownies in terms of preparation.
That's where AI is now. We can teach AI how to bake cakes, and we can teach AI how to bake brownies, but we can't teach AI how to bake cakes and then brownies. Which is a pretty massive handicap for intelligence, to say the least. One of the most fundamental aspects of what defines something as 'intelligent' is the ability to use past experiences to predict the future. When I say 'predict', I'm not necessarily talking about anything ridiculously abstract like playing the stock market or deriving higher-level physics from simple experiments. No, I mean "figure this out based on what you've learned from something else."
Like my previous analogy said, if you can bake a cake, you can bake brownies. If you can't, then there's something fundamentally wrong with your brain. You shouldn't have to relearn how to measure ingredients when you're making a new, but similar dish.
So in that sense, we've figured out one aspect of machine learning, but it's still useless without the ability to reason. PNNs are a step in that direction.