
The human enhancement potential of Brain Computer Interfaces (BCIs) that can only read (and not write)

BCIs neuroscience AI transhumanism


#1 starspawn0
Introduction

The purpose of this posting, and the others in this thread, is to lay out what I think is the human enhancement potential of next-generation Brain Computer Interfaces. Specifically, I’m thinking of BCIs that have very high spatial and temporal resolution, and that can scan the brain at depth. I am not interested here in:

* EEG BCIs. These are great for some things, though they are limited.

* BCIs that write to the brain. These will have many uses; I’m just not interested in them for this posting.

I foresee four broad categories of how BCIs will enhance humans:

1.  They will use our brain patterns to show us things we never knew could be deduced from what we know. This will allow us to extend human intelligence in various ways, and will also accelerate our ability to learn new things.

2.  They will greatly improve a computer’s ability to read our intentions and disambiguate. This will make virtual assistants more accurate and engaging, for example.
 
3.  They will allow us to transfer our thoughts to the computer -- e.g. our inner voices will be transcribed; our intentions will be interpreted (“robot, clean the floor”); the images in our heads can be transferred to the screen.
 
4.  They will allow us to indirectly enhance ourselves, by training machine learning algorithms to automate tasks.

I have written extensively about #4, so will focus on the other three in this note. The next several posts will address these in order. All will involve the use of statistical algorithms and machine learning, and I foresee progress similar to what we have seen with speech recognition, image recognition and machine translation -- it will start slow, and then will rapidly improve. The remainder of this posting will address #1:

I.  Computers will use our brain patterns to show us things we never knew could be deduced from what we know.

Here is something rather amazing:  just by looking at how words co-occur in the English language, it is possible to determine the distance between cities of Europe, and also determine their longitude and latitude coordinates, with high accuracy.

I suppose I need to explain this a little bit.  When people talk about the cities of Europe, they mention nearby cities together more often than ones far apart.  This is obviously not the case in all contexts -- for example, if we’re talking about “major world powers”, you might find that “London” and “Berlin” are mentioned together more often than London and some other English city are.  But across a very broad set of contexts, word co-occurrence statistics of city names can be used to localize them on a map.

It’s not just cities.  This has been tried with parts of the body!  I kid you not.  Here is a paper about it:

https://escholarship...c/item/21h9t3sw
 

Recent literature has shown that perceptual information, such as geographical locations, modalities, and iconicity, is encoded in language. The current paper extended these findings by addressing the question whether language encodes (literally) embodied information: whether statistical linguistic frequencies can explain the relative location of different parts of the body.

 
The takeaway message here is that statistical methods are very powerful, and can be used to make deductions about things that are “implicit”, “hidden”, or “encrypted”. In point of fact, it was statistics that helped Britain crack the German Enigma cipher during World War II, and that may well have been a deciding factor in winning the war.

We can apply this same kind of mindset to the data generated by high-resolution BCIs recording our brains as we interact with the world. When we hear a word mentioned, our brains will respond with a certain regularity; and that regularity can be determined through the use of statistical methods (e.g. using MVPA, or “Multi-Voxel Pattern Analysis”).

You might think that everybody’s brain is different, and that in different contexts our brain responds differently, making this kind of analysis impossible.  However:  

* When we hear or see a word, at least about a third of our brain pattern is the same across contexts. It sounds like a lot is being lost, but that’s still good enough to read off some generic, context-independent information about that word; and if we can decode how those other two-thirds of the brain pattern represent context, then we can extract even finer details of meaning.
 
* Although different people have different brains, there are still some similarities in brain patterns across individuals. There are also algorithms for mapping responses from brain to brain, which make even more of the features shared by different brains line up. It’s remarkable how much of this information is shared by different individuals -- individuals who grew up in different towns or cities, who had different cultural upbringings, and even spoke different languages! (A toy sketch of this kind of alignment follows below.)
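
To make that brain-to-brain mapping idea a little more concrete, here is a toy sketch in Python. It uses a simple orthogonal Procrustes alignment on synthetic data -- real work uses fancier methods like hyperalignment, and everything here (the data, the dimensions) is made up for illustration:

```python
# Toy sketch of cross-subject functional alignment: two subjects see the SAME
# stimuli, and we learn an orthogonal map sending one response space onto the
# other. All data is synthetic; real pipelines use hyperalignment and friends.
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(0)
n_stimuli, n_features = 200, 50

shared = rng.normal(size=(n_stimuli, n_features))          # latent shared code
rot = np.linalg.qr(rng.normal(size=(n_features, n_features)))[0]  # random rotation

subject_a = shared + 0.1 * rng.normal(size=shared.shape)
subject_b = shared @ rot + 0.1 * rng.normal(size=shared.shape)  # "rotated" brain

# Learn the orthogonal map R that best sends subject B's space onto A's.
R, _ = orthogonal_procrustes(subject_b, subject_a)
aligned_b = subject_b @ R

err_before = np.linalg.norm(subject_b - subject_a)
err_after = np.linalg.norm(aligned_b - subject_a)
print(f"misalignment before: {err_before:.1f}, after: {err_after:.1f}")
```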

Ok, so we have powerful statistical methods, and we have brains that exhibit similar patterns when people hear the same words -- even in different contexts. What can we do with that?

Well, maybe we can apply the statistics to brain data to make surprising deductions about the contents of our knowledge, just like with the longitude and latitude coordinates example above.  For example, maybe by examining our brain responses when we hear the name “New York”, we can feed those responses into an algorithm, and out will pop an estimate as to its location, population, size, and other details.  Something like this has been shown for statistical analysis of free text, so why not brain patterns?:
 
http://www.aclweb.or...hology/D15-1002

Now, one can easily make such an algorithm that just memorizes the answers, by programming in a “lookup table” with cities and their various properties. This is not the kind of algorithm I’m talking about. I’m talking about an algorithm that can be spelled out in just a few lines of code.  This algorithm will have some learnable parameters that will require feeding in some “training examples” of (city, brain response, attributes) triples to set their values; but this number of examples will be very tiny compared to the number of cities it will work on. The total amount of information contained in the lines of code and the training data will be very small compared to the amount of information we can extract from the brain when a very large number of city names are presented to an individual.
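
For illustration, here is a hedged sketch of what such a “few lines of code with learnable parameters” might look like: a ridge regression fit on a tiny training set of (response vector, coordinates) pairs, then applied to held-out cities. The “response vectors” below are simulated stand-ins, not real brain data:

```python
# Sketch: fit a linear map from per-city response vectors to coordinates,
# trained on a tiny subset, then predict held-out cities. In reality the
# inputs would be brain-response (or co-occurrence) vectors; here, synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_cities, dim = 300, 64

coords = rng.uniform([-10, 35], [30, 60], size=(n_cities, 2))  # lon, lat
mix = rng.normal(size=(2, dim))
responses = coords @ mix + 0.5 * rng.normal(size=(n_cities, dim))

# Tiny training set relative to the number of cities the model works on.
X_train, X_test, y_train, y_test = train_test_split(
    responses, coords, train_size=30, random_state=0)

model = Ridge(alpha=1.0).fit(X_train, y_train)
pred = model.predict(X_test)
print("mean coordinate error:", np.abs(pred - y_test).mean())
```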

The success of such an algorithm, just like the success of the ones that do the same for stats applied to free text, should not be taken to indicate the information about cities is “contained in the human brain”. Rather, the human brain contains associational information that can be translated into good estimates for location, population, size, and so on.

It would be a neat parlor trick to do this for cities, but that wouldn’t be very practical, as we can look up that information very easily. So what would be some good applications?

For a start, consider this one:  when you hear the name of someone you have known for years, certain parts of your brain light up that indicate your knowledge of how they speak and walk, whether their skin is pale and sickly, their psychological state, and so on. A computer given access to that information could attempt to predict whether they have a major illness like Alzheimer’s or cancer; and would probably be right far above chance-level -- accurate enough to make a good suggestion that they should see a specialist.

Another example: let’s say the BCI is light and compact enough that you can wear it basically all the time. You lie down on the couch to watch TV, and as you listen to the news, the BCI scans your brain responses. When you hear certain words, a program analyzes those brain responses. The program knows the typical response of someone who correctly understands the meaning of the word; and, as I said earlier, some portion of that response is shared across individuals, and across most contexts. So, if you have an incorrect understanding of the word that crosses a certain threshold of error, the program could alert you by stating the correct definition or meaning. It wouldn’t pipe up for casual misunderstandings, only for the really egregious ones.
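
A minimal sketch of how that alert might work, assuming we already have a canonical (population-average) response vector per word and a decoded response from the listener -- both faked below, along with the threshold value:

```python
# Sketch of the "egregious misunderstanding" alert: cosine distance between
# the listener's decoded response and the canonical response, with a threshold
# so it only fires on large mismatches. Vectors and threshold are illustrative.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

ALERT_THRESHOLD = 0.4   # only fire on big mismatches, not casual noise

def maybe_alert(word, listener_vec, canonical_vec, definition):
    if cosine(listener_vec, canonical_vec) < ALERT_THRESHOLD:
        return f"Note: '{word}' actually means: {definition}"
    return None         # stay quiet for near-misses

rng = np.random.default_rng(2)
canonical = rng.normal(size=128)
listener = -canonical + rng.normal(size=128)    # a badly mistaken listener
print(maybe_alert("dimension", listener, canonical,
                  "the number of coordinates needed to specify a point"))
```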

You can take that further, using ideas similar to the ones I described above regarding cities: it’s known how to use word co-occurrence statistics to determine the temporal order of events:

https://mindmodeling...0/paper0250.pdf
 

In several computational studies we demonstrated that the chronological order of days, months, years, and the chronological sequence of historical figures can be predicted using language statistics. In fact, both the leaders of the Soviet Union and the presidents of the United States can be ordered chronologically based on the cooccurrences of their names in language. An experiment also showed that the bigram frequency of US president names predicted the response time of participants in their evaluation of the chronology of these presidents.

 
And I’m guessing something similar can be done for brain responses. So, for example, after a BCI scans your brain when you hear “Teddy Roosevelt”, an algorithm might look at your neural representations, and determine that you think he was president in the early 1800s, instead of the early 1900s. After making that determination with high confidence, the computer might chime in with, “Teddy Roosevelt was the U.S. president from 1901 to 1909”. You might then say, “Wow! How did it know I didn’t know that!?”
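
For the curious, here is a toy sketch of how temporal order can fall out of pairwise co-occurrence alone, using spectral seriation (sorting by the Fiedler vector of the graph Laplacian). The co-occurrence counts are synthetic, made to decay with true temporal distance, as the paper reports for president names; the same trick would apply to similarity matrices computed from brain responses:

```python
# Spectral seriation: recover a linear (chronological) order from pairwise
# co-occurrence by sorting items along the 2nd-smallest eigenvector of the
# graph Laplacian. Counts here are synthetic, decaying with temporal distance.
import numpy as np

rng = np.random.default_rng(3)
names = ["Lincoln", "Grant", "Cleveland", "T. Roosevelt", "Wilson", "Hoover"]
n = len(names)

true_pos = np.arange(n)
dist = np.abs(true_pos[:, None] - true_pos[None, :])
cooc = np.exp(-dist / 1.5) + 0.02 * rng.random((n, n))
cooc = (cooc + cooc.T) / 2                      # symmetric counts
np.fill_diagonal(cooc, 0)

laplacian = np.diag(cooc.sum(axis=1)) - cooc
eigvals, eigvecs = np.linalg.eigh(laplacian)
fiedler = eigvecs[:, 1]                         # Fiedler vector

order = np.argsort(fiedler)
print([names[i] for i in order])                # chronological (or reversed)
```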

I’m a believer in a certain “stoplight” theory of problems with learning new skills or areas of study: I think most of the roadblocks that people face when progressing to the next level in their education, are based on some small set of misconceptions or flaws. It’s like how in getting from point A to point B in a busy city, most of the time is spent stuck at stoplights (depending on the city and time of day) or in snarled traffic.

For example, when learning to play the piano or the game of chess, there are often subtle flaws in one's playing or strategy that one doesn't notice at first, and that take time to overcome.  An observant teacher can help, but not everyone has access to the best teachers.  Maybe, for example, a novice chess player spends too much time focused on the periphery of the board, and not enough time focusing on the center -- a BCI and the right software might detect that flaw, by comparing the player's neural representations with those of a master.

Another example:   while learning about modern physics, some people may read, in a popular press article, about “extra dimensions”. If they are like most, they think “dimension” refers to “parallel universe”, instead of “number of coordinates” (or the size of a basis). That one misconception could prevent them from learning any more about the subject. Pile up several more misconceptions like that, and there is almost no chance they will progress much further in their understanding.

Could we find those misconceptions right as they arise? I think in some cases we can. I’ve already mentioned how this would work for factual knowledge about geography and history, and about the correct use of the word "dimension"; but it probably also works for pinning down the conceptualizations for doing science. Here is a paper that points in that direction:

http://journals.sage...956797616641941

In the paper, researchers looked at the brain response patterns of various different kinds of students (undergrad and grad) as they were presented with physics terms like “momentum” and “electric field”. They found certain similarities -- and differences -- in the responses. With much higher resolution BCIs, I think algorithms could be built to probe the finer aspects of neural representations of scientific knowledge. If someone’s early understanding is at variance with that of experts in the field, then they could be told this by a computer (that can even generate natural language descriptions about their misconceptions).

The very same methods could enable computers to help scientists solve problems. For example, if a researcher is thinking about a potential approach to nuclear fusion, their brain activity may be similar to that of other researchers in different fields when they think about specific problems. The computer could alert the fusion specialist to that work, and perhaps the solution transfers.

Maybe as the researcher thinks about fusion, he or she forms a dynamic, mental sculpture, representing the wavering magnetic fields and plasma currents inside a reactor.  The sculpture may not be purely visual, but could involve body motions (dance; hands cupped around currents), sounds, or mere fleeting glimpses of fields in the void.  The thought patterns behind that mind-sculpture might be similar to ones that pop into the heads of engineers working on the design of high-temperature engines, for example. Perhaps if the fusion researcher knew precisely which research paper in the engine design literature to look at, they could transfer some of the ideas to their field.

This is the kind of analogizing that is common in the sciences, though people rarely notice that they are making analogies. 

A more down-to-earth example of how to use BCIs to analogize would be the following:  suppose you want to find the equivalent, in some other city you will be visiting, of a particular experience. Maybe you are looking for the equivalent in city B of a particular jazz club you have experienced in city A. Or maybe you are looking for a piece of jazz music that gives you the same sense of wonder and chills-down-the-arms-and-legs as when you listen to a particular modern classical piece.  

How could it be done? Through the use of “brain vectors”. You can do something similar to what is described here:

https://blog.acolyer...f-word-vectors/

The basic method that applies to “word vectors” should also apply to vectors generated from brain data using a BCI.
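
A toy sketch of that brain-vector analogy, in the style of word-vector arithmetic (query = club-in-A - city-A + city-B, then nearest neighbor among candidates). All vectors below are synthetic placeholders for decoded brain states:

```python
# "Brain vector" analogy in the word-vector-arithmetic style: subtract the
# source-city component, add the target-city component, then find the nearest
# candidate. Every vector here is a synthetic placeholder.
import numpy as np

rng = np.random.default_rng(4)
dim = 32

city_a, city_b = rng.normal(size=dim), rng.normal(size=dim)
club_style = rng.normal(size=dim)                  # the shared "vibe" component
club_in_a = club_style + city_a

candidates = {f"venue_{i}": rng.normal(size=dim) + city_b for i in range(20)}
candidates["the_match"] = club_style + city_b      # the true analogue in city B

query = club_in_a - city_a + city_b

def best_match(query, candidates):
    score = lambda v: query @ v / (np.linalg.norm(query) * np.linalg.norm(v))
    return max(candidates, key=lambda k: score(candidates[k]))

print(best_match(query, candidates))               # -> "the_match"
```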
 
These examples are only a snapshot of some of the possibilities for this type of human enhancement by BCIs.  Stay tuned for the next posting, which will be about how BCIs will allow computers to understand us better (e.g. disambiguate).

#2 Sciencerocks

I am mostly interested in the political aspects of this. I'll just say if you think Transgenderism is opposed and hated on...Well, you have seen nothing yet.

 

Religious extremism is going to have to be defeated before such technology becomes popular.



#3 TranscendingGod
Great! I've always wanted to be a good chess player! Oh yeah potentially solving fusion and having a better understanding of physics is a plus too.

The growth of computation is doubly exponential. 


#4 funkervogt

Very fascinating. 



#5 starspawn0
II. Next-gen BCIs will allow computers -- and even other people -- to understand us better, and to disambiguate

I foresee a day, not long after the arrival of those next-gen BCIs I talked about in the last posting, when a new kind of search engine emerges. The way this one will work is that each time a user views a webpage the search engine sends them to, while wearing that BCI, their brain state will be recorded, stored on a central server, and averaged with those of all the others who have visited that page. Strong crypto and various hashing techniques will be needed to guarantee privacy -- so that malicious parties can't use the brain data for identity theft or blackmail. Later, when someone has a thought similar to the ones recorded for that webpage, the search engine will send that person to the page, fulfilling the query almost exactly, down to very fine shades of meaning. New websites won't have many associated brain recordings yet, so the engine may make mistakes on them; but over time, more and more of the internet will get annotated with brain recordings. The engine will need to balance sending people to new sites to collect data against the need to give accurate search results.

How, specifically, would this work? A crude first attempt might be to find the “nearest neighbor” in the database to the “brain state vector” of the user. Because brains all differ a little from one another, a slightly less crude method might be needed, such as first mapping all brain states into some common representation space before doing the matching. In fact, this can be made into a very good method, if one allows the mappings to be highly non-linear and complex -- e.g. using neural networks.
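
Here is a minimal sketch of that “map into a common space, then nearest neighbor” search, on synthetic data. The per-user transform is just a fixed matrix here; in reality it would be learned, possibly by a deep network:

```python
# Crude sketch of brain-state search: map the user's state into a shared
# space with a per-user transform, then cosine nearest-neighbor against the
# stored per-page annotation vectors. Everything below is synthetic.
import numpy as np

rng = np.random.default_rng(5)
dim_user, dim_shared, n_pages = 100, 32, 500

page_vectors = rng.normal(size=(n_pages, dim_shared))    # averaged annotations
user_to_shared = rng.normal(size=(dim_user, dim_shared)) # would be learned

def search(brain_state, k=3):
    """Return ids of the k pages whose annotations best match the query."""
    q = brain_state @ user_to_shared
    sims = page_vectors @ q / (
        np.linalg.norm(page_vectors, axis=1) * np.linalg.norm(q))
    return np.argsort(-sims)[:k]

thought = rng.normal(size=dim_user)    # the user's current brain state
print("pages to suggest:", search(thought))
```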

So, for example, you might be thinking to yourself, “I’d like an article that superficially appears to be skeptical of Global Warming, but which is, in fact, the opposite, on closer inspection. A subversive piece, of sorts.” And it’s entirely possible that that exact thought occurred to someone who read such-and-such an article in The New York Times, while having their brain scanned.

You could even search for things by feel, taste, smell, sight, or sound; and combinations thereof. For example, suppose you run your hand through plush carpeting in a house you are visiting, and say to the search engine, “What kind of carpeting feels like that?” Or maybe one day you run through the woods (with your BCI cap on), and the combination of clear blue sky, crackling clear mountain stream, smell of honeysuckle and pine, makes you wonder if other people have discovered this small piece of paradise -- and whether there are other places like it nearby. Simply search the database by brain state.

Just think about how much easier that would make finding all the subtle things on the internet. Also think about how useful it would be for debate if you could find the exact material to back you up in argument!

Another way that BCIs can help machines understand us better is that they can be used to disambiguate language. When humans speak, we don’t always notice how there are many different ways to interpret our language. For example, if we say, “I went to paradise last weekend,” maybe the word “paradise” here refers to Paradise, California, instead of the more metaphorical use of the term. Computers have been getting better at that kind of disambiguation lately; but it gets a lot harder when you want the computer to carry out not-so-well-defined tasks involving many steps.

For example, suppose you are communicating with a virtual assistant, and ask it to do something like, “See about having a junk-removal service come by this weekend to haul some things away.” How would it figure out what you mean?

First of all, the assistant could use your brain scans to help it better disambiguate what you mean by “junk-removal service” and “some things”. Maybe you have in mind furniture like a bookshelf, or a couch. Second, you are obviously thinking about a local service of some kind; and that may be something that can be determined from your brain scan. Lastly, when you think “this weekend”, you really mean Saturday or Sunday afternoon. Some subset of those things would probably be encoded in your brain patterns in a decodable way, provided the BCI had high enough resolution; and the assistant could carry out the task without having to ask many or any follow-up questions to clarify.

Perhaps you don’t think something like this would be tried? The first steps already have been, but only using EEGs. For example, see this work:

https://dl.acm.org/c....cfm?id=2388695


Understanding user intent is a difficult problem in Dialog Systems, as they often need to make decisions under uncertainty. Using an inexpensive, consumer grade EEG sensor and a Wizard-of-Oz dialog system, we show that it is possible to detect system misunderstanding even before the user reacts vocally. We also present the design and implementation details of NeuroDialog, a proof-of-concept dialog system that uses an EEG based predictive model to detect system misrecognitions during live interaction.


Several other groups have built similar systems, some even to control robots. But all of these, unfortunately, use plain old EEG. EEG is just not a very good BCI modality. I expect vastly improved results using next-gen BCIs.

It may even be possible to use brain data to predict the kind of response you are expecting from the computer: right after you speak to your virtual assistant, your brain may light up in such a way as to indicate what you are expecting it to say; and this signal could be used to narrow down the list of possible responses the computer produces. Many chatbots and virtual assistants produce a list of responses, and then filter out the ones that fail to satisfy certain constraints. Your expectations, decoded from your brain state, could serve as a set of constraints.
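
A hedged sketch of that reranking step: generate candidates as usual, embed them, and prefer the one closest to the decoded “expectation” vector. The embedding function and the decoded vector below are toy stand-ins:

```python
# Sketch of using a decoded "expectation" vector as a reranking constraint on
# a virtual assistant's candidate responses. The embedder is a placeholder
# (deterministic pseudo-random vector per text, stable within one run).
import numpy as np

rng = np.random.default_rng(6)
dim = 16

def embed(text):
    seed = abs(hash(text)) % (2**32)
    return np.random.default_rng(seed).normal(size=dim)

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

candidates = [
    "Here is tomorrow's weather forecast.",
    "Playing your workout playlist.",
    "Calling the junk-removal service now.",
]

# Pretend the BCI decoded an expected-response vector from the user's brain;
# here we fake one near the embedding of the response they actually want.
expected = embed(candidates[2]) + 0.1 * rng.normal(size=dim)

best = max(candidates, key=lambda t: cos(expected, embed(t)))
print("chosen response:", best)
```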

In fact, it wouldn’t surprise me if you could build a chatbot that comes a lot closer to passing a Turing Test, if the computer could scan your brain as it is interacting with you. This would give it a much more accurate grasp of the current dialog state, and also the exact kind of response to keep you engaged. Furthermore, this could serve as excellent training data to build chatbots without access to the BCI; but that is a topic for another thread.

Closely related to helping disambiguate for chatbots and virtual assistants is the use of BCIs to extend what you can do with “Natural Language Programming”, which is where you specify in English, say, what you want a program to do -- like you would to an expert programmer you want to hire -- and then the system writes the code for you.

You can, of course, specify a program in natural language in such detail that the computer doesn’t have to “think” very hard to figure out what you mean. For example, if you say, “Input two floating point numbers x and y from the user. Then, print their sum, x+y.” -- you’ve essentially written a program, and doing it in English doesn’t seem like it improves productivity very much; you might as well just write it in C.

And if you want to be much more vague in your descriptions, then you lose creative control, as the computer has to guess what you mean, to fill in the details. For example, if you are designing a game, and use natural language programming, you might tell the computer, “I want this level to be in a big, wide city." The computer maybe has some stock game levels that it could fit to your description; but it doesn't know if you are talking about a city at night... near a body of water... ancient, modern, or futuristic... etc. Your description is under-specified. So, it will have to guess, and may guess wrongly.

BCIs could fix these problems. First of all, they will allow you to speak in vague terms about what you want -- “I want this level to be in a big, wide city.” Second, while you're (vaguely) describing what you want to see, you're also thinking about it; and those thoughts can be read, and turned into specifications that resolve the ambiguity due to the vagueness.

I'm not asserting here that you have a crystal-clear picture in your mind of what you want the scene to look like -- many people have trouble holding a stable image in their minds. Rather, as you think about that scene, there is sure to be "semantic information" that can be read off from your brain state, that could specify the time of day and other attributes of the scene.
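
As a toy illustration, the decoded semantics could be treated as a set of attribute slots that fill in whatever the spoken description left open. The decoder output below is hard-coded; a real system would produce it from the brain state:

```python
# Toy sketch of resolving an under-specified request: speech fixes some slots
# of a scene spec, and attributes decoded from the brain state fill the rest.
from dataclasses import dataclass

@dataclass
class SceneSpec:
    setting: str = "city"
    time_of_day: str = "unspecified"
    era: str = "unspecified"
    near_water: bool = False

def spec_from_speech(utterance: str) -> SceneSpec:
    spec = SceneSpec()
    if "city" in utterance:
        spec.setting = "big, wide city"
    return spec

def fill_from_brain(spec: SceneSpec, decoded: dict) -> SceneSpec:
    # Decoded attributes override only the slots speech left open.
    if spec.time_of_day == "unspecified":
        spec.time_of_day = decoded.get("time_of_day", "unspecified")
    if spec.era == "unspecified":
        spec.era = decoded.get("era", "unspecified")
    spec.near_water = decoded.get("near_water", spec.near_water)
    return spec

decoded_attributes = {"time_of_day": "night", "era": "futuristic",
                      "near_water": True}   # pretend BCI decoder output
spec = fill_from_brain(
    spec_from_speech("I want this level to be in a big, wide city"),
    decoded_attributes)
print(spec)
```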

Extend this principle to any type of software, and I think BCIs could truly pose a danger to professional programmers, as they -- and the right decoding and program synthesis algorithms -- would allow just about anybody to produce complex software using vague descriptions.

In this posting I’ve talked exclusively about how BCIs could allow machines to understand us better; but they could also enable humans to better understand other humans. One example that comes to mind is that BCIs could improve our use of language:

Suppose you have a certain thought, and think there might be a word for it, but either don’t know it, or can’t think of the best way to describe it. A computer could interpret your brain state, and turn that thought into words. This has actually already been attempted, using neural networks and FMRI scans:

https://www.aclweb.o...16/P16-3004.pdf
 

Our study aims to generate natural language descriptions for human brain activation phenomena caused by visual stimulus by employing deep learning methods, which have gained interest as an effective approach to automatically describe natural language expressions for various type of multi-modal information, such as images.


This could also be used to describe people you know, places you’ve been, scenes you are looking at. A list of adjectives could be produced based on your brain scan; or perhaps, eventually, a paragraph that would make the best novelists in the world jealous.

This is the end of the second posting. Stay tuned for the third and final post.

#6 Alislaws

Well now I am excited about BCIs!

 

 

Brain Computer Interfaces. Specifically, I’m thinking of BCIs that have very high spatial and temporal resolution, and that can scan the brain at depth.

Do you have any ideas on the ETA of these sorts of BCI capabilities? I think one article you (or someone) posted said we'd get amazing new BCIs within 10 years, potentially.

 

Would these be the sort you are talking about, or are they further away?



#7 starspawn0

Mary Lou Jepsen's startup OpenWater is building them.  She says she will live demo one at a conference this year, 2018.

 

Mark Chevillet's group at the secretive Facebook Building 8 is also building them.  Conference papers suggest they are far along.

 

DARPA recently announced a major funded project to build them, and said there is a feeling in the research community they are close. Harvard's David Boas said he thinks they are possible.

 

And there are several more groups working on them, including using different technologies like Electrical Impedance Tomography (EIT).

 

The feeling seems to be that consumer-ready devices could be here between 2019 and maybe 2024.



#8 funkervogt

 

 

Also think about how useful it would be for debate if you could find the exact material to back you up in argument!

Or maybe, by the time this scenario is a reality, AI will be so advanced that as soon as you start thinking about that, you'll hear a Stephen Hawking voice in your head saying: "The other guy is right and you're wrong. Here's why..." And then you'd be shown whatever objective evidence the AI had found.

 

Argument averted, time saved, nth extra degree of conformity achieved. 



#9 Alislaws

The feeling seems to be that consumer-ready devices could be here between 2019 and maybe 2024.

Amazing. Such a short timeframe!

 

Thanks for the quick and in-depth response.



#10 Vivian

A BCI that allows animals to understand us better, and us to understand animals, would be cool. We could have talking dogs and cats, and train animals that wouldn't be easily trainable otherwise, like insects and fish. We could know which animals are able to feel pain, and how it feels to be an animal, so people could start to treat animals better. 

 

I, personally, think it's very unlikely that all animals feel pain, but some animals definitely do. We perceive pain with the neocortex, so animals with a neocortex (mammals) do feel pain, because they have everything we have that is involved in feeling pain. It's unlikely that animals without a neocortex feel pain, even if they show the same behavior we do when feeling pain. Lower brain structures are involved in that behavior, but it doesn't mean these animals actually have the subjective feeling of pain.



#11 funkervogt

A BCI that allows animals to understand us better, and us to understand animals, would be cool. We could have talking dogs and cats, and train animals that wouldn't be easily trainable otherwise, like insects and fish. We could know which animals are able to feel pain, and how it feels to be an animal, so people could start to treat animals better. 

I've got an idea for a sci fi short story: AIs take over by reprogramming all human intelligence-enhancing brain implants to share emotions and sensations with all other humans in their vicinity. Interpersonal violence and even cruelty becomes impossible thanks to unavoidable feedback with the other person. Just being forced to feel the full extent of human suffering paralyzes us to the point that we can't organize a counterattack against the machines. They go a step farther by putting the implants in all animals, so we can't even eat meat. 



#12 Vivian

We would still eat meat from animals without a neocortex. And only mammals have one. 



#13 funkervogt

More alligator, anyone? 



#14 starspawn0
III. BCIs will allow us to transfer our thoughts to machines, and to control technology like an extension of our bodies.

Let me begin with a glib statement: a sufficiently good BCI is the only sensor you need.

What I mean by this is that, once BCIs get to be of high enough resolution, and once the decoding algorithms get strong enough (mostly: once enough data exists to train them), a BCI could serve as your video camera, your touch-taste-smell recorder, your thought-logger, your blood and heart monitor, your thermometer, your hormone assayer (e.g. using hypothalamus signals), and much more. All the things you experience in a day will be decoded from your brain patterns picked up by the BCI; and so, you won’t need separate sensors.

This is, of course, not quite right -- for, while BCIs will certainly allow you to preserve all these things, they will only do so from your limited perspective. Part of what makes old photographs such wondrous things is that we keep discovering new details in them that we never noticed before. A BCI recording of your experiences might only log things that you happened to notice through your limited perception. That would not replace photographs, but it would still be incredibly useful.

For the purposes of training machine learning algorithms, at least, a BCI might ultimately be the only sensor needed. For example, if you want to predict what a video will mean to a given person, you could use the BCI recording from the low-level parts of the brain to determine the movie being seen and heard; and then the predictive model would try to guess what the whole brain ultimately makes of it -- no need to affix a video recorder to the head of the subject.

This may even be better than if you had the video recording for free, since the BCI would not only tell you what video is being seen -- but also how it is perceived (e.g. maybe you’re blind in one eye?); where you are looking; where your head is turned; how dilated your pupils are; and where you are focusing your attention.

And before you say it is impossible to decode what is being seen from brain data, it’s worth mentioning that that is one of the more successful areas of predictive models in neuroscience. There is a lot of work on decoding what people see and hear -- as they are seeing and hearing them -- from BCI and FMRI data. Here, for example, is a recent paper on end-to-end Deep Learning applied to FMRI to decode what is seen:

https://www.biorxiv....18/02/27/272518

It's pretty noisy, but all indications are that more data will vastly improve the quality. There are similar works on decoding what is heard, from ECoG recordings.

Decoding what is imagined, however, is a different beast. Complicating matters is the fact that not everyone can form mental images, and that when we do, we do so imperfectly -- our “inner movies” are incomplete, noisy, and only make sense “locally”.

Nonetheless, research has shown that some of the same brain areas active when we say something or see something are also active when we only imagine saying it or seeing it. At the moment, it is possible to use this to produce very noisy speech and images -- much noisier than for speech and images that we directly observe -- that can be identified slightly better than random guessing.

What is really missing here is a good dataset to train the models. The problem is that it’s not easy to align what someone is imagining with the real-world thing of which it is but a facsimile -- because we may not imagine it perfectly, or with perfect rhythm -- so, it’s difficult to generate the datasets in the first place.

The recent success of subvocal speech recognition might be the key to getting the data for imagined speech decoding:

http://news.mit.edu/...k-silently-0404

Subvocal speech relies on using faint, fleeting neuromuscular activations as people imagine speaking. I suppose to get a good signal they have to focus their attention on their face and mouth while doing so. Very likely, the kind of brain signal you would see is very similar to the one where they don’t try to focus -- much more similar than the brain response patterns for when people actually speak are to when they imagine speaking.

It might also be possible to use unsupervised learning methods, such as those used to translate between languages without parallel corpora:

https://arxiv.org/abs/1710.11041

Or re-purpose “cryptography-based” methods for movement decoding:

https://www.nature.c...1551-017-0169-7

I will not bother to go into detail about how these could be used to decode imagined speech.

Decoding imagined images and videos is much harder. As I said above, not all of us have the ability to form mental images, and what we imagine is imperfect in many ways. Nonetheless, there are certain visual features that a BCI can pick up on, and that a Machine Learned model can use to generate images. In addition to these visual features, there are also semantic features that can be extracted, and fed into an image-generation neural network. You could perhaps think of this as being like models that produce images based on a text description, with "text" here playing the role of "semantics":

https://arxiv.org/abs/1804.01622

But in addition to the text/semantics, the system would have some genuine image features to further constrain what is produced.

Someone without the ability to imagine would probably look at the output -- which would be based only on the semantic component, and not imagined image features -- and say something like, “Yeah, that seems to have all the details I was thinking about; but I didn’t imagine that.” And somebody who does have the ability to imagine would say, “Wow! That’s exactly how I pictured it!”

If the machine makes a mistake, no problem! It will detect the error signal from your brain, such as the kind that EEG researchers have worked with (“Error-Related Potentials”); only, with a much more advanced BCI, the error signals will be much more specific, allowing the system to zero-in on what it got wrong. A half-second later, it could generate a new, corrected image. And if there are problems, again, it could generate a third -- and so on, until the image it produces is exactly right.
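
In pseudocode-like Python, that control loop might look like the following. Both inner functions are hypothetical stubs -- one standing in for the image generator, one for an error-potential classifier; the point is the loop, not the models:

```python
# Sketch of the generate / detect-error / regenerate loop. generate_image and
# error_potential_detected are HYPOTHETICAL stubs: a real system would run an
# image generator and classify the user's post-stimulus brain response.
import random

def generate_image(spec, attempt):
    return f"image(spec={spec!r}, attempt={attempt})"   # stub generator

def error_potential_detected(image):
    # Stub: stands in for an error-related-potential classifier.
    return random.random() < 0.5

def refine_until_accepted(spec, max_attempts=5):
    image = None
    for attempt in range(1, max_attempts + 1):
        image = generate_image(spec, attempt)
        if not error_potential_detected(image):
            return image                     # the user's brain says "ok"
        # In a richer system the error signal would also localize WHAT was
        # wrong, and the spec would be updated here before regenerating.
    return image                             # give up, keep the last attempt

print(refine_until_accepted("castle at dusk"))
```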

The exact same methods used to generate images should also work for generating video; and it’s a foregone conclusion that this could be applied to dreams as well as the consciously-imagined video. A BCI would extract a combination of image, motion, motor, audio and semantic features, and send these to a neural net that can generate video. It’s, again, like an advanced form of the neural nets that generate video based off of a text description. And, again, if there is an error, the system can always correct. There’s no rule that says it has to get it perfect the first time!

Looking further to the future, it’s even possible that we will be able to generate interactive videos with our minds. We will imagine the creatures or people in our videos having personalities, or at least some degree of flexibility, so that if we change a few details in the video, they will behave the way we would expect them to.

Such creations would not be far from video games. In the limit, interactive videos are basically video games. So, this could be yet another path towards eliminating the need for programmers!

In addition to transferring our inner voice and inner dreams to computers, we will also be able to transfer our actions. For example, we will be able to control prosthetic robot limbs using advanced, non-invasive BCIs. But that’s only just the beginning! -- we will be able to control robots remotely, just using our minds. Want to straighten up the house remotely? Just put your BCI cap on, log in over the internet through a VR system, and reach out with the Force (i.e. imagined actions) to control that robot. Make it pick things up; make it vacuum; maybe even have it make dinner.

Some amount of training yourself to manipulate the robot's "body" will be needed; and the robot will need to have “semi-autonomous” abilities, so that if you don’t control it perfectly it can fill in the details.

This is kind of the idea of the film Surrogates; only, I don’t foresee us having robots indistinguishable from humans for many decades, and am unsure of just how fine the motor control can be using BCIs that use neural population responses, rather than individual neurons.

Like with the case of transferring interactive video from the mind, I foresee the potential for a new kind of “programming”: as you use your brain to control robots, that information can also be used to train them to complete actions. For example, you could give them much finer motor skills than they currently have, teach them how to make the right grasps, how to recover if they drop an object, and so on.

At the moment, Reinforcement Learning and Imitation Learning are seen as ways to maybe train robots to complete complex tasks. However, imitating a human is problematic, as humans have different physical properties from robots, so there will always be error; and Reinforcement Learning is inefficient and has lots of problems that still need to be ironed out. Directly training robots using your brain data and supervised learning might be a good alternative, that could give robots the ability to make human-like decisions and complete complex tasks gracefully.
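
A minimal behavioral-cloning sketch of that “supervised learning from your brain-controlled demonstrations” idea: log (robot state, human-commanded action) pairs during teleoperation, then fit a policy to them. Everything below is synthetic:

```python
# Behavioral cloning from BCI teleoperation logs: fit a supervised model on
# (state -> action) pairs recorded while a human drove the robot. Data is
# synthetic; a real policy would be a deep network over rich sensor state.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
n_steps, state_dim, action_dim = 2000, 12, 4

states = rng.normal(size=(n_steps, state_dim))
expert = rng.normal(size=(state_dim, action_dim))     # the human "policy"
actions = (np.tanh(states @ expert)
           + 0.05 * rng.normal(size=(n_steps, action_dim)))

policy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
policy.fit(states, actions)                           # supervised imitation

new_state = rng.normal(size=(1, state_dim))
print("commanded action:", policy.predict(new_state))
```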

This is the end of my three part thread on what I see as the future impact of BCIs that can read -- but not write to -- the brain.

Some Final Thoughts on the path to the future

The combination of advanced new BCIs, the massive new datasets they will generate, modern Machine Learning and statistical methods, and massive amounts of computing power, will produce a “Cambrian Explosion” of economic activity the likes of which we have never seen before. It won’t all happen at once, to be sure; but it will happen very, very quickly. You see, over the past couple years, Machine Learning practitioners have gotten very good at understanding what they can do if you give them enough data and enough computing power. There won’t be this massive learning curve, where they have to invent a whole slew of new tweaks to Deep Learning algorithms, to squeeze every ounce of performance out of them -- those tweaks have already been invented, as have the hardware and the software platforms to run everything.

The way it will probably play out is like this: first, there will be an announcement, a public demo by some team that has built the world’s first truly wearable, inexpensive, high spatiotemporal resolution BCI that can scan the whole brain, at depth. This will send shock-waves through the tech world, and also through the neuroscience community -- it’s what they’ve been waiting for, for decades.

If the team that developed this demo is not with Facebook-Google-Apple-Microsoft-Amazon, then the big players will try to buy them out. At the same time, the mere knowledge that such BCIs can be constructed will massively reduce the perceived risk of engineering them. Many other teams will try to produce their own, using related technologies (that get around patent violations). In very short order, I expect Facebook, Google, and Microsoft to have built their own, or acquired one.

This will not be like the VR revolution, which is taking a long time getting started; there are lots of immediate uses of BCIs in medicine, AI development, education, and so on, that will drive demand. As the data pile starts to swell, as the equivalent of ImageNet for various brain decoding tasks -- multiple ImageNets -- come online, you will see rapid improvement in the quality of the decodings; and adoption will grow along with it. It’s anybody’s guess what the world will look like 10 years after that first BCI is revealed to the public!




