If it weren't for the ridiculous topic, I wouldn't have suspected a machine wrote that article; I would have thought it was a somewhat awkward translation of an article written in Spanish to English.
What 2029 will look like
Posted 14 February 2019 - 09:28 PM
- Yuli Ban, Erowind and starspawn0 like this
Posted 14 February 2019 - 09:32 PM
As I said, human-level media-synthesis is almost here.
This system is actually even more impressive. Without any additional training, it can translate, read passages and answer comprehension questions, and scores 7 points higher than the previous state of the art on the Winograd Schema Challenge (a commonsense-reasoning test), reaching over 70% accuracy.
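The standard way a language model tackles a Winograd schema zero-shot is to resolve the ambiguous word both ways, score each resulting sentence under the model, and pick the likelier reading. A minimal sketch of that idea, using a toy add-one-smoothed bigram model as a stand-in for the real LM (the tiny corpus and all function names here are illustrative, not OpenAI's actual setup):

```python
import math
from collections import Counter

def bigram_log_score(sentence, bigram_counts, unigram_counts, vocab_size):
    """Add-one-smoothed bigram log-probability of a sentence."""
    words = sentence.lower().split()
    score = 0.0
    for prev, cur in zip(words, words[1:]):
        num = bigram_counts[(prev, cur)] + 1
        den = unigram_counts[prev] + vocab_size
        score += math.log(num / den)
    return score

def resolve_winograd(template, candidates, model_score):
    """Substitute each candidate into the ambiguous slot; keep the likelier one."""
    scored = [(model_score(template.format(c)), c) for c in candidates]
    return max(scored)[1]

# Build the toy model from a miniature corpus (purely illustrative).
corpus = ("the trophy was too large the trophy was too large "
          "the suitcase was too small").split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
vocab = len(unigrams)

score = lambda s: bigram_log_score(s, bigrams, unigrams, vocab)
# "The trophy doesn't fit in the suitcase because it is too large." -- what is "it"?
template = "the {} was too large"
print(resolve_winograd(template, ["trophy", "suitcase"], score))  # → trophy
```

A real system would swap the bigram scorer for the trained language model's log-likelihood; the selection logic stays the same.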
And they want to train it with even more data!...
And, as @gwern pointed out on my forum, they didn't even use state-of-the-art Transformer neural net technology. What if they did, and magnified the data 100x?
- Casey, Yuli Ban and funkervogt like this
Posted 16 February 2019 - 02:36 PM
Imagine Sundar Pichai unveiling this year's version of Duplex: the video rolls, and it shows a guy talking to a Google Home that responds in a super-realistic, Duplex-like voice. They have a little chat about politics, film, and sports. And it's all so human-like and natural; the thing seems to have a decent grasp of context, remembering what the guy said a few minutes earlier.
The whole thing is a good 5 minutes long -- many rounds of conversation.
The audience gasps all through the video, and then when it ends, they nervously clap. The room fills with noise as people talk to their neighbors about what they have just seen. Pichai says, "We are still working the bugs out. It's not yet ready for the public, but we plan to make it available on our Pixel 3 and soon-to-be-released Pixel 4 phones."
Who at Google would build it?
I could see someone like Quoc Le attempting to scale up OpenAI's work, using a lot more data taken from online conversations (which Google has access to) and better-performing neural nets. He was one of the people behind "A Neural Conversational Model" four years ago, which used orders of magnitude less data than OpenAI's project yet still got eyebrow-raising results. And he recently wrote a paper on what kind of commonsense and world knowledge large neural nets absorb when trained on massive text data. He likes to scale models up until they break.
I could definitely see a Neural Conversational Model 2.0 absorbing lots of world knowledge; showing commonsense at the level of 75% to 80% accuracy on the Winograd Schema Challenge; writing little stories and poems in response to a comment (where appropriate); detecting and responding to sarcasm and emotion; translating between languages (given enough examples); doing basic question-answering; answering reading comprehension questions; solving simple logic puzzles; answering trivia questions; and doing basic analogical reasoning -- all with high levels of coherence and a consistent personality.
It would not necessarily be super-accurate on all of these, but it would be much better than random guessing, and better than many baseline models built for the individual tasks. On some, like the Winograd Schema Challenge, it would be state-of-the-art.
One thing I have pointed out before (on other forums) is that there are a lot of educational apps and chatbots whose logs could be used to train language models in additional skills. A large company (like Google), with access to the chat logs people generate as they interact with chatbots that teach them history, biology, math, physics, etc., could add those logs to the corpus used to train the language model. There's a good chance it would learn some of the logic behind many of these skills, not just superficially imitate the language. There are probably gigabytes of potential educational-app chat logs out there.
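The data-prep step implied above is just flattening dialogs into plain text the language model can train on, keeping speaker labels so the question/answer structure is learnable. A minimal sketch, where the log format, speaker labels, and separator conventions are all assumptions for illustration rather than any company's actual pipeline:

```python
# Hypothetical sketch: render educational chat logs as one LM training string.
def chat_log_to_corpus(logs, turn_sep="\n", dialog_sep="\n\n"):
    """logs: list of dialogs, each a list of (speaker, text) turns.
    Returns a single string; speaker labels are kept so the model can
    learn the tutoring question/answer structure."""
    dialogs = []
    for dialog in logs:
        turns = [f"{speaker}: {text}" for speaker, text in dialog]
        dialogs.append(turn_sep.join(turns))
    return dialog_sep.join(dialogs)

# Toy example logs (invented for illustration).
logs = [
    [("Student", "What is Newton's second law?"),
     ("Tutor", "Force equals mass times acceleration, F = ma.")],
    [("Student", "Who wrote The Origin of Species?"),
     ("Tutor", "Charles Darwin, in 1859.")],
]
corpus = chat_log_to_corpus(logs)
print(corpus)
```

The separator choices matter in practice (a real pipeline would likely use dedicated end-of-dialog tokens), but the shape of the job is this simple: serialize, concatenate, and feed it to the same training loop as the rest of the text corpus.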
Furthermore, we will see replications of OpenAI's work from teams in China. They like to wait and see what U.S. companies come up with, and then replicate and improve on it, using more data and compute, together with more fine-tuning.
- Yuli Ban and Jakob like this
Posted 16 February 2019 - 02:40 PM
^ If this is true, it means the Turing Test can conceivably be passed well before 2029, possibly even before 2025. Which also means we will need a modified test if we're still trying to test for general intelligence.
- starspawn0 likes this
And remember my friend, future events such as these will affect you in the future.
Posted 16 February 2019 - 03:43 PM
^ If this is true, it means the Turing Test can conceivably be passed well before 2029, possibly even before 2025.
Don't even whisper it, friend.