What 2029 will look like



#141 starspawn0
More examples of the bot's output:
 
https://github.com/m...aster/README.md

Handling questions about things that happened after 2017, when the dataset ended:

Context: The trading war between China and US is still happening .

Bot: I'm sure the Chinese government and the U . S . will be fine with that as long as they keep their arms
on the table and the money flowing

Context: who won the world cup in 2018 ?

Bot: Germany , I believe .

Context: Nvidia's Titan RTX is really good .

Bot: I have this card and i love it . No complaints .

Context: Can Porsche beat Tesla with its new Taycan EV ?

Bot: I don't think so . The Tesla is more powerful than the porsche .


Some of these, though factually incorrect, are probably not generated randomly. There is a very large amount of world knowledge implicitly contained in the statistical relations of words, as I have pointed out before.

The bot's response to philosophy questions:

Context: What is the meaning of a good life ?

Bot: I think it's that we're all connected to our past lives and the meaning of life is to live the way you want to and how you want to .

Context: Which is more important -- being successful or being happy ?

Bot: Being successful is just endless . Being happy is the only thing that matters .

Context: Is peace the only way to stop war ?

Bot: Not if the world is not willing to work with you . Peace is the ultimate goal , but peace requires more than that .

Context: How much freedom should people have ?

Bot: The right amount of freedom is the freedom to do as you please , as long as you don't hurt people or property .


Those are pretty good! A few imperfections, but correct, subject to the right interpretation!

Finally:

Context: What do you think of the DialoGPT repo ?

Bot: It's pretty good , but it's very easy to find a bug .



#142 starspawn0

I thought I would mention a few things about the seemingly really good responses this model gives, and how they might be generated:

 

It could be that some of them are just cobbled together from Reddit posts; but it could also be that the system is doing something closer to actual "reasoning" -- though maybe not what we would normally think of as reasoning; more like "intuition".  Let me explain:

 

Take the example, "Which is larger, the sun or the moon?"  It got the answer correct, saying "sun".  Maybe if you give it 100 similar questions, it will answer correctly at least 80% of the time -- that test would have to be performed to know for sure.  My suspicion is that it would be correct more than 50% of the time; and probably more than 70% of the time. 

 

How?

 

In learning statistical relationships between words, it encounters combinations like "the sun above...", "the large sun", "father sun, mother moon", and so on.  These combinations connect the word "sun" with words like "above", "large", "father", and so on, all of which signal the sun is something big and important.  The moon probably also has some of these associations, but they aren't as strong.  Then, among the millions of dialogs that involve asking "Which is bigger?", it learns a little computational gadget, which is that the bigger of the two objects is the one with the strongest association with big-sounding words.  Hence, it outputs "The sun is bigger." 
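To make that concrete, here's a toy sketch of the kind of "gadget" I mean -- everything in it (the corpus, the word list, the scoring rule) is made up for illustration. Score each object by how strongly it co-occurs with big-sounding words, and answer with the higher-scoring one:

# A five-sentence toy corpus standing in for millions of Reddit sentences.
corpus = [
    "the large sun rose above the hills",
    "the sun is a giant ball of fire",
    "the huge sun dwarfs the planets",
    "the small moon drifted over the sea",
    "the moon is a tiny companion of earth",
]

BIG_WORDS = {"large", "huge", "giant", "above", "dwarfs"}

def bigness(word):
    # How often does `word` co-occur with size-signaling words?
    return sum(
        sum(1 for t in sentence.split() if t in BIG_WORDS)
        for sentence in corpus
        if word in sentence.split()
    )

def which_is_bigger(a, b):
    return a if bigness(a) >= bigness(b) else b

print(which_is_bigger("sun", "moon"))  # -> sun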

 

That doesn't sound very "smart"; but, actually, there is some evidence that humans learn a lot of intuitive world knowledge this way. 

 

Many of the other responses it generates are probably similar.  You might think it's just regurgitating text from Reddit -- but it actually could be applying a little gadget it has learned to all the statistical relationships among words and word patterns.

 

Take the question about the boiling point of water: it says that it is 212 F, which is correct.  Again, it could be getting that from a sentence or two from Reddit.  But it could also learn associations between the words "water" and "boil" and the number 212.  It could be that there are thousands of posts that mention water and boiling, along with a long list of numbers, where the most common numbers are 212 and 100, for the Fahrenheit and centigrade temperatures.  It may actually learn more than just this weak statistical linkage.  For example, one of the little gadgets it learns may be a formula to map centigrade to Fahrenheit and vice versa -- it's a simple linear relation that it ought to be able to learn, given enough examples; though it also has to learn how to map the numerical digits of a number to a variable value, which is certainly within its capacity.
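The conversion itself is just the linear map F = C × 9/5 + 32, so 100 centigrade gives exactly 212 Fahrenheit. As an illustration of how learnable that relation is, a plain least-squares fit recovers it from just a few of the number pairs that co-occur in text (the pairs below are my own stand-ins):

import numpy as np

# (centigrade, Fahrenheit) pairs as they might co-occur in posts.
pairs = np.array([(0, 32), (37, 98.6), (100, 212), (-40, -40)])
slope, intercept = np.polyfit(pairs[:, 0], pairs[:, 1], 1)
print(slope, intercept)          # ~1.8 and ~32.0, i.e. F = C * 9/5 + 32
print(slope * 100 + intercept)   # ~212.0 -- the boiling point of water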

 

If you iterate these relationships, looking for second-order correlations, third-order correlations, and so on, you can get deeper and deeper knowledge about the world.  It's not easy to appreciate just how deep it can be, until you test it out!
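Here's one way to picture that iteration (my illustration, not anything extracted from the model): treat first-order co-occurrence as a matrix, and its powers connect words through shared neighbors they never directly appear with.

import numpy as np

words = ["sun", "star", "fusion", "energy"]
# First-order co-occurrence: sun-star, star-fusion, and fusion-energy
# appear together; sun and energy never co-occur directly.
C = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

second_order = C @ C      # length-2 paths: sun <-> fusion, via star
third_order = C @ C @ C   # length-3 paths: sun <-> energy, via star and fusion
print(third_order[0, 3])  # > 0: an indirect sun-energy link emerges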



#143 tomasth

Can it do explanation and transfer learning?

 

I just hope people won't get carried away with future versions of this that can do some commonsense reasoning and pages-long responses, and confuse its abilities with the strong-AI kind.

 

The better AI gets, the more people should remember the actual long road ahead.



#144 starspawn0

I don't know what its limitations are, because I haven't seen it. But since it is built on top of GPT-2, and adds even more data and a reranker, I imagine its outputs can be pretty complicated. It can probably do some long, but not-too-long, explanations, yes; just like how GPT-2 can write long blocks of text. And because conversational outputs are usually shorter, and don't require as much deep inference, they will probably be more accurate -- that's my guess, anyhow. Given that the model can beat humans on the three categories of relevance, contentfulness, and human-likeness, it has to be producing good outputs more than 90% of the time. Humans, after all, produce good outputs 90% of the time; so if you had a bot that only outputted good stuff 85% of the time, say, it would lose in a head-to-head competition with a human for single-round conversations.
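The head-to-head arithmetic behind that guess (using my assumed rates of 90% good responses for humans and 85% for the bot): setting ties aside, each side wins a round only when its response is good and the other side's isn't.

human_good, bot_good = 0.90, 0.85

p_human_wins = human_good * (1 - bot_good)  # 0.135
p_bot_wins = bot_good * (1 - human_good)    # 0.085
print(p_human_wins, p_bot_wins)  # the human wins the decided rounds more often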

Now, there is a lot of stuff it certainly won't be able to do. This isn't an AGI. It's a very, very good socialbot -- better than any you've ever seen before, by a mile. Better than Cleverbot; better than Xiaoice; better than those that have come before in ways it's hard to find adjectives to describe. If you entered it in a Loebner Prize competition, it would win, hands-down:

https://en.wikipedia...i/Loebner_Prize

And it won't be limited to just giving simple responses like, "How are you doing?" The examples show it can generate good philosophical text responses; can take into account context; can do question-answering; and can even answer some amount of commonsense-type questions. I'm guessing it has some other skills, too, e.g. maybe it can write short poems or tell jokes -- those are the kinds of skills that GPT-2 has demonstrated. It might even be able to generate short arguments for positions; again, some of GPT-2's output suggests it has learned how to do this at least some of the time.

Would it pass a Turing Test? It might if you gave it to unsuspecting humans with low expectations. I doubt it would pass an official, 30-minute test with a skeptical judge.

So why is the public not getting to try it? It seems that the safety issues aren't the researchers' main concern. Their main concern, as they say in the paper, is "toxicity". They are trying to come up with ways to stop it from producing racist, sexist, lewd, rude, and other kinds of toxic output. (e.g., what if a kid tells the bot he wants to clean out his insides, because he's constipated, and the bot tells the kid to use bleach?)

If they can solve this problem, and add a few more things (long-term memory, a consistent personality), then it would make a great companion for seniors in retirement villages all over the world. They could talk to it for hours, and it would patiently listen and make comments that make it seem like it really understands them and cares about them.

 

This is what exists in 2019.  Just imagine how much better it will get on the march to 2029... or even 2025.



#145 starspawn0
Facebook has also made an advance on dialog systems / chatbots:
 
https://parl.ai/proj...e_unlikelihood/
 

Generative dialogue models currently suffer from a number of problems which standard maximum likelihood training does not address. They tend to produce generations that (i) rely too much on copying from the context, (ii) contain repetitions within utterances, (iii) overuse frequent words, and (iv) at a deeper level, contain logical flaws. In this work we show how all of these problems can be addressed by extending the recently introduced unlikelihood loss (Welleck et al., 2019) to these cases.
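For the curious, the per-step unlikelihood term from Welleck et al. pushes down the probability of "negative candidate" tokens (e.g., tokens that would produce a repetition). A minimal PyTorch-style sketch, assuming the standard formulation -sum log(1 - p(c | context)):

import torch

def unlikelihood_term(next_token_logprobs, negative_candidates):
    # next_token_logprobs: (vocab_size,) log-probs for the next token.
    # negative_candidates: ids of tokens to penalize, e.g. already-used ones.
    p = next_token_logprobs[negative_candidates].exp()
    return -torch.log((1 - p).clamp(min=1e-8)).sum()

This term gets added to the usual maximum-likelihood loss, so the model keeps predicting the data while being nudged away from repetitive, copy-heavy outputs.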


This will make the conversations even more accurate and coherent.

Now, if only they can take care of the sexist, racist, lewd, unethical, mean, etc. outputs, the public would get to have some incredible conversations with their technology. That's coming... but it may be a while before it's mitigated sufficiently to where large companies are willing to put their brand behind it.

#146 starspawn0

This is a really fascinating post about testing GPT-2's ability to solve "zero-shot Natural Language Understanding (NLU)" tasks:
 
https://rakeshchada....Shot-GPT-2.html
 
The setup here is that the author took GPT-2, fed it some prompts to let it know the task to perform, and then, without any additional training, it was able to perform the tasks on the spot!

The name extraction one was incredible, as I doubt that was a skill it learned from the data (there weren't examples of this in the training data for GPT-2 to learn it) -- it had to "improvise" or do something like "reasoning" to solve it (or fake it).

Take, for example, the fact that "Southern Railways" was capitalized, and is a proper noun, but not a person's name. GPT-2 had to figure out not to put that on the list of names; and it's amazing it was able to do that out-of-the-box!
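If you want to poke at this yourself, here's a minimal sketch using the Hugging Face transformers GPT-2 checkpoint; the example sentences and names below are mine, not taken from the blog post:

from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Demonstrate the task once in the prompt, then ask for a completion.
prompt = (
    "Sentence: John took Southern Railways to visit Mary in Leeds.\n"
    "Names: John, Mary\n"
    "Sentence: Alice argued with Bob about the Eiffel Tower.\n"
    "Names:"
)
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output = model.generate(
    input_ids,
    max_length=input_ids.shape[1] + 8,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0][input_ids.shape[1]:]))  # ideally " Alice, Bob"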

I'm guessing that it has learned lots of little skills, and then when presented with a situation like that, it interpolates "between skills" (or takes some convex-combination of skills), which looks like "improvising" -- like how Google's translation system can learn to translate between language pairs it hasn't seen before. In other words, it not only has representations of terms and basic concepts, but even the more abstract "operators" that act on concepts, that it can flexibly manipulate.

The one about the colors and breeds was equally impressive. Here, it has to know the difference between the two, as well as pick up what the task at hand is to perform. Just incredible!

It's a little surprising that it had trouble learning how to convert numbers into their word equivalents, as it's a pretty straightforward mapping, with not too many exceptions to the rule; and there are probably several examples in the training data to learn from. It did get at least a few right, though. I'd guess a larger model, with more training data from free text, would solve it pretty handily -- e.g. I wouldn't be surprised if something like Megatron could solve that one without additional training.
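For reference, the mapping really is mostly rule-governed, with the exceptions concentrated in the teens; here's a sketch of the compositional rule for numbers under 1,000:

UNITS = ["zero", "one", "two", "three", "four", "five", "six", "seven",
         "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
         "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
        "eighty", "ninety"]

def number_to_words(n):
    # Handles 0..999 by composing memorized pieces, like a person would.
    if n < 20:
        return UNITS[n]
    if n < 100:
        tens, unit = divmod(n, 10)
        return TENS[tens] + ("-" + UNITS[unit] if unit else "")
    hundreds, rest = divmod(n, 100)
    return UNITS[hundreds] + " hundred" + (" " + number_to_words(rest) if rest else "")

print(number_to_words(212))  # "two hundred twelve"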

Problems about multiplication are a little tricky, as multiplication is a hard rule to learn (it's a "deep circuit"). Probably, the way it would learn multiplication is as a mixture of memorization and combination rules for number patterns -- just like how people do! (We learn our times-tables, and then mix together what we memorized with a simple algorithm.)
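That mixture is easy to make precise: schoolbook long multiplication is nothing but memorized single-digit products plus a shift-and-add rule.

TIMES_TABLE = {(a, b): a * b for a in range(10) for b in range(10)}  # memorized

def multiply(x, y):
    # Long multiplication: look up single-digit products, shift, and add.
    total = 0
    for i, xd in enumerate(reversed(str(x))):
        for j, yd in enumerate(reversed(str(y))):
            total += TIMES_TABLE[(int(xd), int(yd))] * 10 ** (i + j)
    return total

print(multiply(37, 42))  # 1554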

It seemed to knock the mapping between singular and plural out of the park; and note there is no way it learned all possible mappings like that from the text -- it had to generalize.

Overall, I was very impressed! It's a little scary, when you think about it. Much larger language models might learn enough little skills, and how to combine them, to do just about any little task like that you throw at them -- even ones that appear to require some improvisation. Think about what that means for future iterations of DialoGPT, Microsoft's super-chatbot built on top of GPT-2. You will be hard-pressed to find a simple word puzzle that reveals it isn't intelligent. You'll have to try harder, and ask it some tricky problems that require many steps of reasoning.

An example problem one of these large language models might already be able to solve without any additional task-specific training, or soon will: imagine you feed it a spreadsheet (a text file with columns of numbers) depicting statistics about countries. Each row represents a country, and the different columns represent stats. Then suppose you ask it a question like, "Which country had the lowest birth rate?", and it correctly answers -- without you even having to say that the entries in each row are actually countries; it has learned that all on its own.
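Concretely, the prompt might look something like this (entirely hypothetical -- I'm not claiming any current model does this reliably, and the numbers are rough illustrations):

prompt = """country, population_millions, birth_rate_per_1000
Japan, 126, 7.4
Nigeria, 196, 37.3
Germany, 83, 9.5

Q: Which country had the lowest birth rate?
A:"""
# A sufficiently capable model would complete this with "Japan",
# without ever being told that the rows are countries.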

It wouldn't surprise me if a sufficiently deeply trained model could do that -- given what else these models appear able to do already.



#147 Kynareth

I wrote my 2029 predictions months ago, but I am very unsure about them. I was too optimistic for 2019, and I don't want to be wrong again.



#148 starspawn0
Some more GPT-2 amusements. First:

https://mobile.twitt...589059928866820

Trying to get GPT-2 to complete 'A is for apple, B is for Banana' etc.


Results (the prompt, bolded in the original tweet, appears to be the first four lines, up through "D is for"):

A is for Apple
B is for Banana
C is for Carrot
D is for
Cookie
E is for Egg
F is for...
G is for Ginger
H is for Hedgehog
I am a closed mouth bitch
J is for Jar
K is for Key
L is for Liquid
M is for Maggie
N is for Needle


Well, it mostly got the pattern right -- and bear in mind that it wasn't even explicitly trained to know the sequence of letters in the alphabet. But there was a little *whoopsie* when it got down to "I".

And the second amusement is "AI dungeon 2":

http://www.aidungeon.io/

What's the idea?:

Imagine an infinitely generated world that you could explore endlessly, continually finding entirely new content and adventures. What if you could also choose any action you can think of instead of being limited by the imagination of the developers who created the game?


An example:

Leading Turkeykind to Freedom and World Domination


http://www.aidungeon...-and-world.html



