
#1
Yuli Ban

    Nadsat Brat

  • Moderators
  • 17,070 posts
  • Location: Anur Margidda

It's one of the final steps before we reach AGI. They've done it.
 

OpenAI announces an unsupervised AI system which learns an excellent representation of sentiment, despite being trained only to predict the next character in the text of Amazon reviews

We’ve developed an unsupervised system which learns an excellent representation of sentiment, despite being trained only to predict the next character in the text of Amazon reviews.
A linear model using this representation achieves state-of-the-art sentiment analysis accuracy on a small but extensively-studied dataset, the Stanford Sentiment Treebank (we get 91.8% accuracy versus the previous best of 90.2%), and can match the performance of previous supervised systems using 30-100x fewer labeled examples. Our representation also contains a distinct “sentiment neuron” which contains almost all of the sentiment signal.
We were very surprised that our model learned an interpretable feature, and that simply predicting the next character in Amazon reviews resulted in discovering the concept of sentiment. We believe the phenomenon is not specific to our model, but is instead a general property of certain large neural networks that are trained to predict the next step or dimension in their inputs.
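To make that concrete, here is a rough sketch of the pipeline being described: pretrain a character-level language model on review text, run each review through it, and fit a plain linear classifier on the resulting hidden-state features. This is only a minimal illustration in PyTorch and scikit-learn; OpenAI's actual feature extractor is a large multiplicative LSTM pretrained on tens of millions of Amazon reviews, and everything below (the tiny untrained network, the made-up reviews, the names) is assumed purely for the sake of the example.

```python
# Rough sketch of the setup described above (assumed names and toy data, not
# OpenAI's code). The real feature extractor is a large multiplicative LSTM
# pretrained on tens of millions of Amazon reviews; here a tiny *untrained*
# LSTM stands in, just to show the shape of the pipeline.
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

class CharLM(nn.Module):
    """Character-level (byte-level) next-character predictor."""
    def __init__(self, vocab_size=256, embed_dim=64, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)  # next-char logits, used only during LM pretraining

    def features(self, text: str) -> torch.Tensor:
        """Read the text one byte at a time and return the final hidden state."""
        ids = torch.tensor([list(text.encode("utf-8"))])
        states, _ = self.lstm(self.embed(ids))
        return states[0, -1]  # one fixed-length vector per review

lm = CharLM().eval()  # in the real setup, pretrain this on Amazon reviews first

reviews = ["Absolutely loved it, would buy again.",
           "Terrible quality, broke after one day.",
           "Best purchase I have made this year.",
           "Waste of money, very disappointed."]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

with torch.no_grad():
    X = torch.stack([lm.features(r) for r in reviews]).numpy()

# The "linear model using this representation": a logistic-regression probe
# over the language model's hidden state, trained on a few labeled examples.
probe = LogisticRegression(max_iter=1000).fit(X, labels)
print(probe.predict(X))
```

The point is just the shape of it: the heavy lifting happens in the unsupervised pretraining, and the supervised part is a single linear layer over whatever the language model has already learned.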


Three things to point out here:

1: This was an unsupervised model coupled with a semi-supervised model. I, and dozens of people far smarter than me in computer science, neurology, psychology, and what have you, have repeatedly stressed that unsupervised learning is the "truest" form of AI. This is what we're seeing here.
2: This behavior— that is, discovering the concept of sentiment— was not preprogrammed or preplanned. The AI did this entirely on its own.

3: OpenAI specializes in deep reinforcement learning, much like DeepMind. However, it's likely that their approach is quite different, aiming more squarely at true general intelligence, whereas DeepMind's goal— while stated to be solving intelligence— is more than likely tied to staying in Google's good graces. In fact, this goes right back to #1 in that OpenAI is using unsupervised learning models, whereas all of DeepMind's models utilize supervised and semi-supervised learning. In that regard, OpenAI is further along towards AGI than even DeepMind at this point.

 

OpenAI is the company in which Elon Musk invested.

That's not to say that DeepMind isn't also aiming for general AI. Oh, trust me, they are. And it's proving what I said earlier to be true: it doesn't matter, Recyvuvym, whether or not Google falls apart. There are other groups, other agents who are just as far along as Google, if not more so. And that's something I said to PhoenixRu earlier in a private message and that I'm willing to say out in public: I believe Russia and, to an even greater extent, China are ahead of DeepMind, OpenAI, and DARPA by an unknown amount in terms of AI progress.

 

 

Unsupervised deep reinforcement learning + spiking recurrent progressive neural networks = weak AGI. And we're on the cusp of seeing it all come together for the very first time in a real way.

 

Stay tuned.


  • wjfox, Sciencerocks, Casey and 1 other like this
Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!

#2
caltrek

    Member

  • Members
  • 5,174 posts

Stop it, stop it, stop it.  I get the point.  Future shock, etc.

 

OK, now I am ready to start reading the article in question.


The principles of justice define an appropriate path between dogmatism and intolerance on the one side, and a reductionism which regards religion and morality as mere preferences on the other.   - John Rawls


#3
caltrek

    Member

  • Members
  • 5,174 posts

Hmmmm... one wonders if such an AI would have predicted my emotional response to reading the opening post (and not the link).

 

Would it have then changed the opening post?

 

If it did, how would that change have affected my emotional response?

 

If it did, how would the prediction of that change have affected the change made to the opening post?

 

If the prediction of the change that would have been made to the opening post would have changed the change that was made to the opening post, how would that have affected my emotional response?

 

Again, we have the problem of an infinite regression.  

 

Is the answer to be more unpredictable?

 

I suppose that too could be subject to an endlessly regressive analysis.


The principles of justice define an appropriate path between dogmatism and intolerance on the one side, and a reductionism which regards religion and morality as mere preferences on the other.   - John Rawls


#4
Yuli Ban

    Nadsat Brat

  • Moderators
  • 17,070 posts
  • Location: Anur Margidda

Normally, I wouldn't be quite as shocked about emergent behavior in AI— we've actually seen such a thing before. What gets to me is that OpenAI was using an unsupervised deep reinforcement learning model. On top of that, the behavior wasn't even planned in the first place. Some examples of emergent behavior, like when DeepMind's algorithm became aggressive, could be predicted. 
 
This wasn't. None of the programmers even entertained the idea that the AI could eventually come to "understand" the sentiment of a review.
 

We were very surprised that our model learned an interpretable feature, and that simply predicting the next character in Amazon reviews resulted in discovering the concept of sentiment. We believe the phenomenon is not specific to our model, but is instead a general property of certain large neural networks that are trained to predict the next step or dimension in their inputs.


Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!

#5
Raklian

    An Immortal In The Making

  • Moderators
  • 6,507 posts
  • Location: Raleigh, NC

 

Normally, I wouldn't be quite as shocked about emergent behavior in AI— we've actually seen such a thing before. What gets to me is that OpenAI was using an unsupervised deep reinforcement learning model. On top of that, the behavior wasn't even planned in the first place. Some examples of emergent behavior, like when DeepMind's algorithm became aggressive, could be predicted. 
 
This wasn't. None of the programmers even entertained the idea that the AI could eventually come to "understand" the sentiment of a review.
 

We were very surprised that our model learned an interpretable feature, and that simply predicting the next character in Amazon reviews resulted in discovering the concept of sentiment. We believe the phenomenon is not specific to our model, but is instead a general property of certain large neural networks that are trained to predict the next step or dimension in their inputs.

 

 

I mentioned a few years ago that the true Turing test isn't passed when people are tricked into believing the AI is a person. It's passed when the AI behaves in a manner so unexpected that its creators are left flabbergasted as to how it got there, and will never find out how.


  • Casey and Yuli Ban like this
What are you without the sum of your parts?

#6
Alislaws

    Democratic Socialist Materialist

  • Members
  • 691 posts
  • Location: London

Ultimately, factoring sentiment into its analysis of reviews just makes sense.

 

I mean, the AI was trying to predict the next character in a review, right? So as the AI builds its model, the probabilities for the next character are going to be affected by whether the review is positive or negative. So it's fairly predictable that a sufficiently good AI will end up coming up with some sort of sentiment value for each review, right?

 

It's still awesome that we're at the point where AIs can do that, but ultimately any analysis and prediction on a bunch of reviews should either end up factoring in sentiment or end up being bad at its purpose.
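That intuition is easy to see even with a trivial count-based example (the reviews below are made up for illustration and have nothing to do with OpenAI's model or data): the distribution over the next character after a common prefix looks visibly different in positive and negative reviews, so anything that predicts the next character well has to be carrying that information around somewhere.

```python
# Toy, count-based illustration of the point above: the next-character
# distribution after the same prefix differs between positive and negative
# reviews, so a good next-character predictor must track sentiment somehow.
# The reviews are made up for the example.
from collections import Counter

positive = ["it was great, it was amazing, loved it",
            "it was great and it was perfect"]
negative = ["it was terrible, it was awful, hated it",
            "it was terrible and it was bad"]

def next_char_counts(reviews, prefix):
    """Count which character follows each occurrence of `prefix`."""
    counts = Counter()
    for text in reviews:
        start = text.find(prefix)
        while start != -1:
            nxt = start + len(prefix)
            if nxt < len(text):
                counts[text[nxt]] += 1
            start = text.find(prefix, start + 1)
    return counts

print("positive:", next_char_counts(positive, "it was "))  # mostly 'g' (great), plus 'a', 'p'
print("negative:", next_char_counts(negative, "it was "))  # mostly 't' (terrible), plus 'a', 'b'
```

None of this captures what the language model actually learns, of course; it just shows why sentiment leaks into next-character statistics in the first place.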






