
StarSpawn0 says: Machine Reading is the next big thing in A.I.

Tags: artificial intelligence, deep learning, deep RL, reinforcement learning, neural networks, machine learning, machine reading, AI, convolutional neural networks, progressive neural networks


#1
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 20,625 posts
  • Location: New Orleans, LA

Machine Reading is the next big thing in A.I.

Introduction

I've been following what's going on in Machine Reading over the past few years, and it's pretty incredible, especially what has happened recently. Over the next year or two, I expect we will see greater and greater advances, resulting in a step-jump in capability. It's like where object recognition stood in 2012, just as Deep Learning delivered its big breakthroughs in accuracy. What I want to do in this post is tell you about some of the work that has been going on.

First, though, I will need to say what I mean by "Machine Reading": I want to use the term in a fairly broad sense to include things like Information Extraction, as well as reading comprehension; other uses of the term don't include Information Extraction, since IE usually assumes a fixed set of relations to be extracted. A few prototypical tasks include:

  • Given a news article or text passage, as well as some questions about the passage (could be multiple choice, fill-in-the-blank, or even questions where the answer is of indeterminate length), the program should be able to make one or more passes through the passage, and answer the questions.

  • Given a question whose answer is in Wikipedia, track down the relevant page, and then provide the answer, which may or may not be directly listed on the page (some reasoning may be required).

  • Given a passage, without questions, extract as much information as possible from it -- put it into a form that a machine can more easily process, like, say, a Knowledge Graph (a toy sketch of this last task follows the list).
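
To make that last task concrete, here is a toy sketch of what "extracting into a Knowledge Graph" might look like. Everything in it -- the passage, the pattern, the relation name -- is my own invention for illustration; real systems learn many relations rather than hand-writing one regular expression:

    import re

    # A toy passage and a single hand-written pattern. Real systems learn
    # many relations; this only shows the input/output shape of the task.
    passage = ("Canada is a country in North America. "
               "Ottawa is the capital of Canada.")

    # Pattern for one relation: "<X> is the capital of <Y>."
    pattern = re.compile(r"(\w[\w ]*) is the capital of (\w[\w ]*)\.")

    # The "Knowledge Graph" here is just a set of (subject, relation, object)
    # triples; a real one would be a proper graph store.
    triples = set()
    for subj, obj in pattern.findall(passage):
        triples.add((subj.strip(), "capital_of", obj.strip()))

    print(triples)  # {('Ottawa', 'capital_of', 'Canada')}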

It's worth pointing out that we make no assumptions here that the machine "reads" just like a human. The metric of quality is based strictly on performance, not how the performance is attained.

On the low end for these tasks you have "what is?"-type questions that can be addressed by finding the relevant answers among the "surface forms" of a document. And on the high end you have "why?" questions that require deep understanding and reasoning. Obviously, the latter kind are much more difficult, and will require more advances in AI to solve; but "what is?"-type questions are starting to be answerable -- and even questions intermediate between the low and high end, that require a little bit of reasoning.

Where we were a year ago.

A year or two ago, you may remember the news that some researchers at DeepMind had built a dataset and some Machine Reading neural nets to answer fill-in-the-blank questions (the Cloze deletion task):

https://arxiv.org/abs/1506.03340
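
If you haven't seen the format: they turn a news article into a fill-in-the-blank question by deleting an entity from one of its summary points, and all entities are anonymized so the model can't just rely on world knowledge. Here is a made-up item in that style (not an actual example from their dataset):

    # An invented CNN/Daily-Mail-style Cloze item, just to show the format.
    # Entities are anonymized as @entityN; the model fills in @placeholder.
    example = {
        "passage": "@entity0 beat @entity1 2-0 in the final on Sunday, "
                   "with both goals scored by @entity2 .",
        "query": "@placeholder won the final",
        "answer": "@entity0",
        "candidates": ["@entity0", "@entity1", "@entity2"],
    }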

And you may also remember Facebook's use of Memory Networks that can read and reason about text:

https://arxiv.org/abs/1502.05698
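
That dataset (the "bAbI" tasks) is made of short synthetic stories followed by questions; as I recall the format, each question line carries the answer and the IDs of the supporting facts, tab-separated. Something like this for task 1 ("single supporting fact"):

    # A typical bAbI task-1 item, written in the dataset's style: the answer
    # and the supporting-fact ID follow the question, separated by tabs.
    story = (
        "1 Mary moved to the bathroom.\n"
        "2 John went to the hallway.\n"
        "3 Where is Mary?\tbathroom\t1\n"
    )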

There are also lots of results that make use of a lot of hand-crafting, like Christopher Ré's work; there is work out of AllenAI (on science exams, which seems to be more about question-answering than Machine Reading per se) and similar work out of Stanford using "natural logic" (also more question-answering); there is TextRunner out of the University of Washington; there are methods based on looking at many documents at the same time (which is how Watson won at Jeopardy); and there are at least a dozen more things I should list here to make this a comprehensive survey. But here I'm mostly interested in fully Machine Learning-based approaches -- requiring very little hand-crafting or prior world knowledge (not even WordNet) or access to a parser or other NLP modules -- applied to queries on a single document. So most of the older stuff doesn't apply.

Where we are in 2016.

Since that time there has been steady improvement in the models applied to these sorts of questions, as well as the introduction of several new datasets to train new models on.

The Dynamic Memory Networks of people from MetaMind (part of Salesforce), for example, have achieved state-of-the-art performance on the Facebook dataset mentioned above (there may be some hand-crafted methods that do better, but they might not generalize to other tasks):

https://arxiv.org/abs/1603.01417
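
The gist of the model, as I understand it, is an "episodic memory" that attends over the encoded input sentences in several passes, updating a memory vector after each pass. Below is a heavily simplified numpy sketch: I use plain soft attention and random vectors in place of their learned encoders and attention-based GRU, so it only shows the multi-pass structure, not their actual gating:

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    d, n_facts, n_passes = 8, 4, 3
    rng = np.random.default_rng(0)

    facts = rng.normal(size=(n_facts, d))  # stand-ins for encoded sentences
    q = rng.normal(size=d)                 # stand-in for the encoded question
    m = q.copy()                           # memory is initialized to the question
    W = rng.normal(size=(d, 3 * d))        # stand-in for learned update weights

    for _ in range(n_passes):
        # Attend over the facts, conditioned on the question AND the current
        # memory -- this is what lets later passes chain over earlier ones.
        g = softmax(facts @ q + facts @ m)
        episode = g @ facts
        # Memory update (the paper uses a ReLU layer over [m; episode; q]).
        m = np.maximum(0, W @ np.concatenate([m, episode, q]))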

There is also work by groups at IBM and CMU on the Facebook dataset, just to name two more of a large number.

And a group of researchers from Stanford has reached basically human-level performance on the tasks from that DeepMind paper mentioned above (more accurately, they reached the "performance ceiling"):

https://arxiv.org/abs/1606.02858
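
The core of their model is pleasingly simple: a single bilinear attention between the question vector and the contextual vectors of the passage tokens, followed by scoring the candidate entities against the attention-weighted passage summary. A minimal numpy sketch of that step, with random vectors standing in for the learned bidirectional-RNN encodings (the entity names and dimensions here are made up):

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    d, T = 8, 5                   # hidden size; passage length in tokens
    rng = np.random.default_rng(0)

    p = rng.normal(size=(T, d))   # contextual vector for each passage token
    q = rng.normal(size=d)        # question vector
    W = rng.normal(size=(d, d))   # learned bilinear attention weights

    # Bilinear attention: alpha_i is proportional to exp(q^T W p_i).
    alpha = softmax(p @ W.T @ q)

    # Attention-weighted passage summary, used to score candidate entities.
    o = alpha @ p
    candidates = {"@entity0": rng.normal(size=d),
                  "@entity1": rng.normal(size=d)}
    answer = max(candidates, key=lambda a: candidates[a] @ o)
    print(answer)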

Though the authors caution in the paper:

Overall, we think the CNN/Daily Mail datasets are valuable datasets, which provide a promising avenue for training effective statistical models for reading comprehension tasks. Nevertheless, we argue that: (i) this dataset is still quite noisy due to its method of data creation and coreference errors; (ii) current neural networks have almost reached a performance ceiling on this dataset; and (iii) the required reasoning and inference level of this dataset is still quite simple.

There are, fortunately, newer datasets that are more of a challenge, and will have much greater applicability to real world problems. One of these is the recent WikiReading dataset produced by some people from Google:

https://arxiv.org/abs/1608.03542

An example from this dataset might be something like the following: you are given a passage about Canada from Wikipedia, and asked to predict the bodies of water it is located next to. The answer (assuming "body of water" = "ocean") would be "Atlantic Ocean, Arctic Ocean, and Pacific Ocean". I've noticed that Google seems to have gotten better over the past few months at picking out answers from Wikipedia, and I wonder whether they have already incorporated this into Google Search for certain types of questions.
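
Roughly, each instance pairs a Wikipedia article with a Wikidata property and that property's values; the model has to produce the values from the text. Paraphrasing the Canada example (this is my own rendering, not a verbatim item from the dataset):

    # A WikiReading-style instance, paraphrased for illustration.
    instance = {
        "document": "Canada is a country in the northern part of North America...",
        "property": "located in or next to body of water",
        "answers": ["Atlantic Ocean", "Arctic Ocean", "Pacific Ocean"],
    }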

In addition to these more traditional reading comprehension-style tasks, there has also been progress on Information Extraction. For example, one of the Best Paper Prizes for this year's prestigious EMNLP conference (which is next week!) will go to Regina Barzilay's group at MIT for their work on applying Deep Reinforcement Learning to the problem:

https://arxiv.org/abs/1603.07954

From the paper:

We show that our model, trained as a deep Q-network, outperforms traditional extractors by 7.2% and 5% on average on two different domains, respectively. We also demonstrate the importance of sequential decision-making by comparing our model to a meta-classifier operating on the same space, obtaining up to a 7% gain.

Note that this method does make use of multiple documents; but is conservative in its use of training data.
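
The idea, as I read it, is to treat extraction as sequential decision-making: the agent can accept its current extracted values, or issue a new query to fetch more articles and reconcile them. Their model is a deep Q-network; the toy sketch below uses tabular Q-learning on a made-up two-action version just to show the accept/query loop (everything here -- states, rewards, the environment -- is invented for illustration):

    import random

    # Toy stand-in environment: state is a discretized extractor confidence
    # (0 = low, 1 = medium, 2 = high); actions are "accept" the current
    # extraction or "query" for another document, which may raise confidence.
    ACTIONS = ["accept", "query"]

    def step(state, action):
        if action == "accept":
            # Reward +1 for accepting a high-confidence extraction, else -1.
            return None, (1.0 if state == 2 else -1.0)
        # "query": fetching another article sometimes improves confidence,
        # at a small retrieval cost.
        return min(state + random.choice([0, 1]), 2), -0.1

    Q = {(s, a): 0.0 for s in range(3) for a in ACTIONS}
    alpha, gamma, eps = 0.1, 0.9, 0.1

    for _ in range(5000):
        s = 0
        while s is not None:
            a = random.choice(ACTIONS) if random.random() < eps else \
                max(ACTIONS, key=lambda b: Q[(s, b)])
            s2, r = step(s, a)
            target = r if s2 is None else r + gamma * max(Q[(s2, b)] for b in ACTIONS)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2

    # The learned policy should query at low confidence and accept at high.
    print({s: max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(3)})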

The gains quoted above are absolute, not relative -- a huge improvement over traditional methods. At a recent talk, Stanford's Christopher Manning reportedly said the following:

https://twitter.com/...956922864996352

Next year deep learning will dominate SIGIR completely, but beware of the hype.

SIGIR is an Association for Computing Machinery (ACM) conference dedicated to Information Retrieval. It's not quite the same as Information Extraction; but the two are related.

Where we will be in 2017 and beyond

It looks to me like all this work will keep accelerating in the near term. At the very least I expect to see lots more datasets like WikiReading; and I expect to see more elaborate uses of Deep Reinforcement Learning. When all this will trickle down to the average user is anyone's guess; although, as I said above, it looks like Google is already using some interesting Information Extraction technology in Search.

It's worth reiterating that Machine Reading isn't quite the same thing as "reasoning", nor is it a form of reasoning. Obviously, some amount of reasoning is necessary to answer reading comprehension questions; but you can probably go pretty far using just a little, similar to how you can do pretty well at speech recognition with just a little bit of world knowledge, though to do it perfectly you need a lot. Longer-term, I expect we will see a lot of deep reasoning and logic applied to Machine Reading. So, while Google won't be crafting legal arguments for you any time soon based on what it reads (though it might, eventually!), there will be programs out there to answer factual questions about that mortgage you may have signed.


And remember my friend, future events such as these will affect you in the future.


#2
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 20,625 posts
  • Location: New Orleans, LA

Add:

Comparing with other AI advancements:

  • Driverless cars: will be on the road in large numbers in late 2020, maybe. Maybe 1% of cars in the U.S. by then will be driverless (level 4)... probably will take a little longer, like 2022.

  • Rat-level AI: despite what Hassabis said, we're a long way away from that. It's either hubris on his part, or he was simply misunderstood.

  • AI scientists: good theorem-provers might exist in 10 years, but that might be too early.

  • Virtual Assistants: will start being good by late 2017, early 2018. Still a lot they won't be able to do. But a lot of the frustration with them will go away.

  • Starcraft-playing AI: maybe 2017. Won't directly affect you.

  • Really good house-cleaning robots: 10 to 15 years away... maybe.

  • Good video-generation: about 4 years away, maybe.

  • Video question-answering: good on basic questions for short clips maybe in 4 years, if large datasets available by then.

  • Machine translation: will continue to improve. Might be good enough in 2020 to replace humans in low-grade capacities (to translate rush transcripts) for large numbers of language pairs. Not good enough for legal work by then. It will be a while before machines are as good as a medium-skill human translator.

Machine Reading will probably affect you directly before those others. You'll notice it first in improved search, and maybe as part of virtual assistants.

Improved machine translation might also affect you directly.

Addendum: we might get hit with a curve-ball, like good brain-scanning headbands, which could accelerate timelines. I don't think they will be high enough resolution until at least 2020, though... so don't hold your breath. Obviously, invasive methods could already get us there; but I don't think many people will opt for brain chips or optogenetics unless they are in pretty bad shape.

 

_______________________________________________________

 

 

He keeps saying "maybe" because he's been wrong before. In his case, he's been too conservative. For example, he claimed (in 2011) that we should see AI achieve human-level sentence and paragraph understanding by 2025. We did that this year. He claimed (also in 2011) that we'd see superhuman success rates at image recognition in 2025 as well. Again, we've achieved that this year. But he's being smart, because there are surely some things that could happen much later than we expect. No one in 2011 expected deep learning to be as amazing as it has been, and no one could possibly have fathomed just how much funding AI has received in the past 5 years. No wonder we were so pessimistic!



And remember my friend, future events such as these will affect you in the future.





