It's one of the final steps before we reach AGI. They've done it. From OpenAI's announcement:
We’ve developed an unsupervised system which learns an excellent representation of sentiment, despite being trained only to predict the next character in the text of Amazon reviews.
A linear model using this representation achieves state-of-the-art sentiment analysis accuracy on a small but extensively-studied dataset, the Stanford Sentiment Treebank (we get 91.8% accuracy versus the previous best of 90.2%), and can match the performance of previous supervised systems using 30-100x fewer labeled examples. Our representation also contains a distinct “sentiment neuron” which contains almost all of the sentiment signal.
We were very surprised that our model learned an interpretable feature, and that simply predicting the next character in Amazon reviews resulted in discovering the concept of sentiment. We believe the phenomenon is not specific to our model, but is instead a general property of certain large neural networks that are trained to predict the next step or dimension in their inputs.
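To make the setup above concrete, here's a minimal sketch of the "linear probe finds a sentiment neuron" idea. This is not OpenAI's actual pipeline (they trained a multiplicative LSTM on Amazon reviews); the hidden states here are synthetic, with one planted sentiment-carrying unit, and all names (`states`, `sentiment_unit`, `probe`) are hypothetical. It only illustrates how an L1-regularized linear model over a learned representation can concentrate nearly all of its weight on a single unit:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for the hidden state a pretrained char-level language model
# produces per review: 64 "neurons", where unit 7 carries the sentiment
# signal (this is planted synthetic data, not real model activations).
n_reviews, n_units, sentiment_unit = 200, 64, 7
labels = rng.integers(0, 2, n_reviews)            # 0 = negative, 1 = positive
states = rng.normal(0.0, 1.0, (n_reviews, n_units))
states[:, sentiment_unit] = labels * 2 - 1 + rng.normal(0.0, 0.1, n_reviews)

# L1-regularized linear probe: sparsity pushes weight onto the one
# informative unit, mimicking how the "sentiment neuron" was isolated.
probe = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
probe.fit(states, labels)

top_unit = int(np.argmax(np.abs(probe.coef_[0])))
print("most predictive unit:", top_unit)
```

Because almost all of the label information lives in one coordinate, the probe's largest coefficient lands on that unit, which is the same diagnostic OpenAI used to single out their sentiment neuron.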
Three things to point out here:
1: This was an unsupervised model coupled with a supervised linear classifier, making the full system semi-supervised. I, along with dozens of people far smarter than me in computer science, neuroscience, psychology, and what have you, have repeatedly stressed that unsupervised learning is the "truest" form of AI. This is what we're seeing here.
2: This behavior, discovering the concept of sentiment, was not preprogrammed or planned. The model arrived at it entirely on its own.
3: OpenAI specializes in deep reinforcement learning, much like DeepMind. However, their approaches likely differ: OpenAI seems to be aiming more directly at true general intelligence, whereas DeepMind's goal, while stated to be "solving intelligence," more than likely serves Google's interests. In fact, this ties right back to #1: OpenAI is using unsupervised learning models, whereas DeepMind's models rely on supervised and semi-supervised learning. In that regard, OpenAI may be even further along toward AGI than DeepMind at this point.
OpenAI is the company in which Elon Musk invested.
That's not to say that DeepMind isn't also aiming for general AI. Oh, trust me, they are. And it's proving what I said earlier to be true: it doesn't matter, Recyvuvym, whether or not Google falls apart. There are other groups, other agents, who are just as far along as Google, if not more so. And that's something I said to PhoenixRu earlier in a private message and that I'm willing to say in public: I believe Russia and, to an even greater extent, China are ahead of DeepMind, OpenAI, and DARPA by an unknown amount in terms of AI progress.
Unsupervised deep reinforcement learning + spiking recurrent progressive neural networks = weak AGI. And we're on the cusp of seeing it all come together for the very first time in a real way.