The coming era when everything is neurally annotated (BCIs)


#1 starspawn0

I have recently been skimming papers on using BCIs and other physiological signals to annotate text and other media for retrieval; for example, this work, which appeared in Scientific Reports back in December 2016:

https://www.nature.c...icles/srep38580
 

In experiments, participants were asked to read Wikipedia documents about a selection of topics while their EEG was recorded. Based on the prediction of word relevance, the individual’s search intent was modeled and successfully used for retrieving new relevant documents from the whole English Wikipedia corpus.

(Here, they are annotating the text a user is reading for a single use, not annotating large amounts of text for later retrieval; but it's a similar idea.)

 

Unfortunately, the signals used were very noisy, and low in information content. What would it be like, I wondered, if we had high temporal and spatial resolution BCIs to help annotate everything? This little note is a meditation on that eventuality:

For one thing, we won't even need advanced AI or machine learning to extract a lot of benefit from those BCI signals. Most of the benefit could come from very simple indexing algorithms, along with some very simple "linear decoding" methods. Maybe some AI would be needed for speech recognition and query understanding -- but no complex reasoning engines, video understanding, object recognition, or machine reading algorithms are necessary. If such BCIs had been available circa the year 2000, say, they would have led to much better-performing search engines than we have today.
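To make "linear decoding" concrete, here is a toy sketch in Python. Everything in it is a made-up stand-in -- the number of EEG features, the relevance labels, the data itself -- but the technique (ridge regression from brain features to a relevance score, then scoring by dot product) really is this simple:

    # Toy sketch of "linear decoding": ridge regression mapping a brain-signal
    # feature vector (recorded while a word is read) to a relevance score.
    # All shapes, names, and data here are hypothetical stand-ins.
    import numpy as np

    rng = np.random.default_rng(0)
    n_words, n_features = 500, 64                 # hypothetical: 500 word events, 64 EEG features
    X = rng.normal(size=(n_words, n_features))    # brain features per word event
    y = rng.integers(0, 2, size=n_words).astype(float)  # 1 = relevant, 0 = not

    # Closed-form ridge regression: w = (X^T X + lam*I)^{-1} X^T y
    lam = 1.0
    w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

    # Scoring unseen word events is just a dot product -- no deep model needed.
    scores = X @ w
    print("top-5 most 'relevant' word events:", np.argsort(-scores)[:5])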

For example, here is how the BCIs could be used to annotate the web for much smarter retrieval: take Wikipedia as a starting point, and get a few people to read articles while wearing the BCIs (as in the above research paper); multiple people would read the same articles as their eyes are tracked (so we see what they are reading), and then their brain activities would be averaged later, to increase robustness and accuracy. Every single word would be annotated by a multidimensional "brain word vector" that indicates not only what the person thinks the word means generically, but also what it means in context, the mood and sentiment associated with the word, time and place, and many other things. So, for example, if they were to read the Wizard of Oz (the story, not the Wiki article) and came across the name "Dorothy", they might think,
 

girl, young, naive, Midwest, Kansas, good, dog,


and many other things. There would be so many things going through a person's mind that it would take forever even just to write it all down for a few pages of text; and, therefore, it would cost companies like Google or Facebook a fortune to pay people to annotate the text, if they had to do it "by hand" (without BCIs).
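Here is a toy sketch of the pooling step. Random vectors stand in for real recordings, and the 128-dimensional size is an arbitrary assumption; eye-tracking is what tells us which word each vector belongs to:

    # Hypothetical sketch: pooling per-word "brain word vectors" across readers.
    # Averaging across readers is assumed to suppress single-reader noise.
    import numpy as np
    from collections import defaultdict

    rng = np.random.default_rng(1)
    DIM = 128  # hypothetical brain-vector dimensionality

    # (word_position, reader_id) -> brain vector; random stand-ins for real recordings
    recordings = {(pos, reader): rng.normal(size=DIM)
                  for pos in range(10) for reader in range(5)}

    pooled = defaultdict(list)
    for (pos, _reader), vec in recordings.items():
        pooled[pos].append(vec)

    # One averaged annotation vector per word position in the document
    annotations = {pos: np.mean(vecs, axis=0) for pos, vecs in pooled.items()}
    print(len(annotations), "word positions annotated, dim =", annotations[0].shape[0])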

Not only would BCIs make the annotation effortless, but they would capture subtle shades of meaning that we aren't even consciously aware of -- some of it "subsymbolic" -- which no hand-built word cloud could capture.

Other media could be annotated in the same way. Every second of film would have an associated 10,000-dimensional brain vector, representing many different facets of the story. Using eye-tracking, there would even be vectors associated with individual actors and objects in the story -- what do people think of the actors' tears? The dancing? The background? -- every single detail would be annotated, with no effort on the part of the viewers providing the annotation.

It would be so easy to annotate (with BCI) images, films, Wikipedia, novels, news articles, and so forth, that people might even do it for free, as a public service, just as volunteers do in maintaining Wikipedia.

Some lawyers may also BCI-annotate legal opinions; and scientists may BCI-annotate scientific papers.

Once all this annotation data is collected, some very, very simple algorithms could be used to look things up with incredible specificity. For example, say you want an article that can be described as:
 

A piece that superficially appears to be skeptical of global warming, but that seems to have the ulterior motive of convincing the reader that global warming is real -- the kind of piece that would convince a denialist.



It would be very difficult to find this with a traditional search engine; but it could plausibly be found using all those BCI annotations. First, you would tell Google Assistant, say, what kind of article you were looking for; it would scan your brain activity as you speak or write the request, and extract from your brain vector the key features of the article you had in mind; then it would simply look through its vast trove of articles, matching the subject area based on your words, and the subtler shades of meaning based on your brain vector. A fraction of a second later you would have exactly the article you were looking for.
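The lookup itself could be a two-step toy: a coarse keyword filter from your words, then ranking by cosine similarity against the brain vector decoded as you speak. In this sketch the index, the keywords, and the decoded query vector are all invented:

    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    rng = np.random.default_rng(2)
    DIM = 128
    # Hypothetical index: article id -> (keyword set, averaged brain vector)
    index = {f"article_{i}": ({"global warming"}, rng.normal(size=DIM))
             for i in range(1000)}

    query_words = {"global warming"}    # extracted from the spoken/written request
    query_brain = rng.normal(size=DIM)  # assumed: read off the user's BCI

    # Step 1: coarse filter on words; step 2: rank by brain-vector similarity
    candidates = [(doc, cosine(query_brain, vec))
                  for doc, (kw, vec) in index.items() if kw & query_words]
    print("best match:", max(candidates, key=lambda t: t[1]))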

You could do the same with videos and images -- not only could you find a video that is merely in the ballpark of what you were looking for, but an exact match, along with the specific 3-second clip within the video that is relevant, and even the specific objects, in context, within that clip.
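Locating that 3-second clip could be a plain sliding-window match over the per-second brain vectors -- again, a toy sketch with random vectors standing in for real annotations:

    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    rng = np.random.default_rng(3)
    DIM = 128
    video = rng.normal(size=(3600, DIM))  # one vector per second of a 1-hour film (hypothetical)
    query = video[100:103].mean(axis=0)   # pretend the query matches ~100s into the film

    # Slide a 3-second window across the film; keep the best-matching clip
    win = 3
    best_t = max(range(len(video) - win),
                 key=lambda t: cosine(query, video[t:t + win].mean(axis=0)))
    print(f"most relevant clip: {best_t}s to {best_t + win}s")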

Having the web be neurally annotated would also take question-answering to a whole other level. For example, if you wanted to know who the villain in the Wizard of Oz is, then with every word neurally annotated it would be easy to pick out "The Wicked Witch of the West"; that wouldn't even require anything fancy -- systems today can do that already. But you could ask much more complicated questions, like, "What was Dorothy's reaction upon first meeting the Scarecrow?" -- and the system might be able to pick out "sad" or "frightened" as the answer, from brain scan data. That would require interpreting the brain-vector annotations in the right way, but it should be far less complicated than the daunting task of building a machine reading system that can make deep inferences about text.
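A minimal sketch of that kind of lookup, assuming a "reaction" direction in brain-vector space could be fit beforehand from labeled recordings (every vector and name below is a made-up stand-in):

    import numpy as np

    rng = np.random.default_rng(4)
    DIM = 128
    # Hypothetical: each candidate answer word in the chapter carries an averaged brain vector
    chapter = {w: rng.normal(size=DIM)
               for w in ["sad", "frightened", "scarecrow", "road", "corn"]}

    # Assumed: a direction in brain-vector space encoding "emotional reaction";
    # in reality this axis would have to be fit from labeled recordings.
    reaction_axis = rng.normal(size=DIM)
    reaction_axis /= np.linalg.norm(reaction_axis)

    # Answer = the word whose annotation projects most strongly onto the axis
    answer = max(chapter, key=lambda w: chapter[w] @ reaction_axis)
    print("decoded reaction:", answer)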

And why stop at text, images, videos, and audio recordings? We could neurally annotate the whole world! -- stores, houses, city streets, forests -- everything! Two obvious applications would be to home robots and self-driving cars:

Some problems faced in building home robots are that they don't necessarily know how to grasp objects, whether an object is trash, whether it is delicate and shouldn't be handled, and so on. If people wore BCIs (and eye-trackers) for even just a few minutes as they wandered around their homes, the homes could be neurally annotated; and then the robots would know what they should be doing -- that this needs to be put in the trash, and that needs to be left where it is.
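On the robot's side, consuming those annotations could be as dumb as thresholded projections onto assumed "trash" and "delicate" directions -- no deep reasoning required. All the names and numbers in this sketch are hypothetical:

    import numpy as np

    rng = np.random.default_rng(5)
    DIM = 128
    # Hypothetical decoders: directions in brain-vector space for "trash" and
    # "delicate", assumed to be fit beforehand from a few labeled examples.
    trash_axis = rng.normal(size=DIM)
    delicate_axis = rng.normal(size=DIM)

    def handling_rule(brain_vec, thresh=1.0):
        """Turn a decoded annotation into a simple handling policy."""
        if brain_vec @ trash_axis > thresh:
            return "bin it"
        if brain_vec @ delicate_axis > thresh:
            return "do not touch"
        return "leave in place"

    objects = {"crumpled receipt": rng.normal(size=DIM),
               "glass vase": rng.normal(size=DIM)}
    for name, vec in objects.items():
        print(name, "->", handling_rule(vec))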

As to self-driving cars: as people drive while wearing advanced BCIs, the entire landscape could be annotated. Every sign could be passively identified and recorded from brain data -- at least the signs you look at, but since the data from multiple drivers would be pooled, a few annotators are sure to notice each sign. The road lanes and free space could be identified; confusing landmarks (e.g. ones with reflective surfaces) could be flagged; the visceral sense that an area is "dangerous" and deserves extra attention could be captured; bad stretches of road (e.g. potholes) could be marked; and so on.
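The pooling across drivers could be as crude as a majority vote per location; here's a sketch with invented observations:

    from collections import Counter, defaultdict

    # Hypothetical pooled decodes: (lat, lon, label) from many BCI-wearing drivers
    observations = [
        (47.60, -122.33, "stop sign"),
        (47.60, -122.33, "stop sign"),
        (47.60, -122.33, "yield sign"),  # one noisy decode
        (47.61, -122.35, "pothole"),
    ]

    by_location = defaultdict(list)
    for lat, lon, label in observations:
        by_location[(lat, lon)].append(label)

    # Majority vote per location: pooling washes out single-driver noise
    map_annotations = {loc: Counter(labels).most_common(1)[0][0]
                       for loc, labels in by_location.items()}
    print(map_annotations)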

Many of these can probably already be picked out using existing algorithms, with high accuracy. The brain data could be used to "check their work" -- if a difference is found between the BCI annotations and what the self-driving companies have in their files, their labels could be checked by a human and updated. Hopefully, the number of errors is sufficiently small that not many checks would be needed.

This "check their work" also applies to moving objects in the environment: for example, as the car is moving, it may make lots of identification errors that never would result in having to hand over control to the human driver -- so wouldn't get caught. Maybe the car sees a plastic bag in a different lane, mistakes it for a rock that would need to be avoided; but since it is in another lane, doesn't bother with it -- and the driver never notices that it made that error. Or, maybe a child wearing a costume holding a mirror crosses the street ahead, the car identifies it as an obstacle, correctly slows down; but doesn't register it as a "pedestrian". Using BCIs, all these extra errors would get caught -- and that should help improve the safety of the cars pretty efficiently.

I'm sure there are hundreds more scenarios like the ones I've listed. As I've written before, I'm excited by what BCIs will mean for the advance of AI; but BCIs will have a huge impact even without much contact with advanced AI. Even very simple algorithms applied to mass neural annotations of text, video, images, audio, the home, and the rest of the world, will unlock whole new realms of possibility.



#2 starspawn0

This is the kind of application that would be a lot easier to solve using (averaged) brain word vectors from high-res BCIs:
 
https://mobile.twitt...534644578701312
 

Is there work / data / anything on embedding algorithms for movies? If we could have word2vec-style arithmetic recommendations for movies/TV series I think that would be pretty cool. ("I feel like Altered Carbon - Firefly + Westworld today")


I would guess some ML methods might be able to pick out superficial aspects of style, pacing, cinematography, kinetics, complexity, and so on; but there would be huge gaps in what it "notices" -- it wouldn't be able to pick out the intricacies of the plot or what the characters know at each moment of the story. Maybe the music would serve as a clue (dark music for dark scenes); but it would only be the faintest hint. (Eventually, ML will allow us to identify those deeper aspects of movies, but not today.)

Advanced BCIs, however, could be used to pick out much finer details of the story. In fact, simple averaged brain vectors may serve as a very good indicator of what a movie is about, and could easily be combined with vector arithmetic to form analogies.
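If such averaged movie vectors existed, the arithmetic side would be easy; a toy sketch, with random vectors standing in for real pooled recordings:

    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    rng = np.random.default_rng(6)
    DIM = 128
    # Hypothetical averaged brain vectors per title (in reality: pooled viewer recordings)
    movies = {t: rng.normal(size=DIM)
              for t in ["Altered Carbon", "Firefly", "Westworld",
                        "Blade Runner", "Star Trek"]}

    query = movies["Altered Carbon"] - movies["Firefly"] + movies["Westworld"]

    # Nearest neighbor, excluding the titles used to build the query
    candidates = {t: v for t, v in movies.items()
                  if t not in ("Altered Carbon", "Firefly", "Westworld")}
    print("recommendation:", max(candidates, key=lambda t: cosine(query, candidates[t])))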

#3 starspawn0

New work that is relevant:

https://www.biorxiv....18/07/25/252718

Understanding movies and stories requires maintaining a high-level situation model that abstracts away from perceptual details to describe the location, characters, actions, and causal relationships of the currently unfolding event. These models are built not only from information present in the current narrative, but also from prior knowledge about schematic event scripts, which describe typical event sequences encountered throughout a lifetime. We analyzed fMRI data from 44 human subjects presented with sixteen three-minute stories, consisting of four schematic events drawn from two different scripts (eating at a restaurant or going through the airport). Aside from this shared script structure, the stories varied widely in terms of their characters and storylines, and were presented in two highly dissimilar formats (audiovisual clips or spoken narration). One group was presented with the stories in an intact temporal sequence, while a separate control group was presented with the same events in scrambled order. Regions including the posterior medial cortex, medial prefrontal cortex (mPFC), and superior frontal gyrus exhibited schematic event patterns that generalized across stories, subjects, and modalities. Patterns in mPFC were also sensitive to overall script structure, with temporally scrambled events evoking weaker schematic representations. Using a Hidden Markov Model, patterns in these regions can predict the script (restaurant vs. airport) of unlabeled data with high accuracy, and can be used to temporally align multiple stories with a shared script. These results extend work on the perception of controlled, artificial schemas in human and animal experiments to naturalistic perception of complex narrative stimuli.


A modality-independent, subject-independent representation of story schemas -- the characters, actions, locations, and causal relations in currently unfolding events -- would make it possible to look up story details, summarize stories, and much more. Search engines are nowhere near being able to do this; but if everything were neurally annotated, I could see it becoming possible within a small number of years. The basic algorithms already exist -- it would just take some experiments to figure out how and where the information is encoded in the brain. Averaging results from multiple readers or listeners would raise accuracy, if that became necessary.
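As a cartoon of the decoding step: the paper fits Hidden Markov Models, but even nearest-template correlation conveys the idea. The "templates" and patterns below are random stand-ins, not real fMRI data:

    import numpy as np

    rng = np.random.default_rng(7)
    DIM = 100  # hypothetical dimensionality of an mPFC activity pattern

    # Stand-ins for the schematic templates the paper decodes (restaurant vs.
    # airport), here just one averaged pattern per script.
    restaurant_template = rng.normal(size=DIM)
    airport_template = rng.normal(size=DIM)

    def classify(pattern):
        """Nearest-template classification by Pearson correlation."""
        r = np.corrcoef(pattern, restaurant_template)[0, 1]
        a = np.corrcoef(pattern, airport_template)[0, 1]
        return "restaurant" if r > a else "airport"

    unlabeled = restaurant_template + 0.5 * rng.normal(size=DIM)  # noisy held-out story
    print("predicted script:", classify(unlabeled))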

Furthermore, these encodings would be one source of annotation data to build smarter AI, and even what I called "Zombie AGI chatbots".




