
True Artificial Intelligence Could Be Closer Than We Think, Via Brain-Computer Interfaces + Deep Learning

Tags: AI, BCIs, Brain-Computer Interface, Artificial Intelligence

18 replies to this topic

#1
starspawn0 (Member, 39 posts)

My old account (star0) is inaccessible to me, so I created a new one.  Yuli-Ban asked me to come here and write something, so I have.  Yuli had written some of what I am posting here once before, I see -- but my posting now provides a little more context:

 

So, I've talked about various paths to "true AI" in the past, for example using video-prediction as a route to common sense reasoning -- and that is progressing very rapidly.  But I think that this will not go the whole way towards true AI.  About a year ago it occurred to me that there is a way to get a lot closer, and that it seems inevitable that this is how the march towards AI will play out.  I am not here to tell you that I will build it; my role is simply one of prediction.  You may find the prediction strange, as when I predicted a few years ago on this very forum that many big box stores and malls would close before 2025, to be replaced by online shopping.  That prediction was met with comments like, "I promise you, there will still be big box stores!" -- misunderstanding what I said ("many" not "all"), but also seemingly disbelieving that online shopping could have such a large impact.  

 

Now hear me out about my latest prediction:  I predict that with the arrival of good, inexpensive, mass-market, non-invasive Brain-Computer Interfaces, A.I. will advance incredibly rapidly.  Furthermore, there are multiple groups working to produce such BCIs in the next 2 years or so, including Mary Lou Jepsen's group at Openwater, and Facebook's Building 8 group led by Mark Chevillet.  Here is one from OBELAB, a KAIST spin-off, that does fMRI-resolution scans of the prefrontal cortex surface using fNIRS:

 

 

(A scanner like that, but covering more of the brain's surface, would probably be all you need for the idea below to work!)

 

How, you might wonder?  

 

Well, I've written a technical discussion of how it could happen, which you can find here:

 

https://www.reddit.c...h_to_strong_ai/

 

complete with potential criticisms and rebuttals.  

 

I've written other drafts of this over the past year.  Then, earlier this year, I heard that some neuroscientists had come up with an even more basic proposal, one that already seems to be producing good results.  A New Scientist article about it can be found here:

 

https://www.newscien...d-more-like-us/

 

Where I think this is headed is the following:  soon after the arrival of good, non-invasive BCIs with high spatial and temporal resolution -- perhaps even just for the surface of the brain -- research groups will start collecting large amounts of passively-generated data as human subjects perform everyday tasks.  Using Machine Learning, it will be possible to fuse this brain-state information with the recorded stimuli, and produce A.I. systems that generate their own synthetic brain states as they are presented with stimuli.  

 

These brain states are not firings of individual neurons, but rather the averaged behavior of many neurons within a population -- they are "population responses".  That may sound like it leaves out a lot of important details about the brain, but people have been able to at least partially read off short-term working memory details, geolocation information, emotional state, language use, and many other things, from just these population values.  Furthermore, the right Machine Learning algorithms can fill in some of the gaps left by such crude measurements of the brain, given the stimuli and behavioral responses (all of this is detailed in the Reddit link above).
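To make "reading information off population responses" concrete, here is a minimal sketch in Python (purely illustrative -- the data is synthetic and the sizes are invented, not taken from any of the linked work): a plain linear decoder applied to one averaged activity value per region.

```python
# Purely illustrative sketch: decoding a stimulus category from "population
# responses", i.e. one averaged activity value per region rather than
# individual spike trains.  All data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_regions, n_classes = 600, 200, 4

# Stand-in for recorded data: each class shifts the mean response of the
# regions a little; the rest is noise.
labels = rng.integers(0, n_classes, size=n_trials)
class_patterns = rng.normal(0.0, 1.0, size=(n_classes, n_regions))
responses = class_patterns[labels] + rng.normal(0.0, 2.0, size=(n_trials, n_regions))

# A plain linear decoder is often enough to read coarse information
# (category, rough location, emotional valence) out of such features.
decoder = LogisticRegression(max_iter=2000)
scores = cross_val_score(decoder, responses, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = {1 / n_classes:.2f})")
```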

 

Not long after the arrival of those good BCIs, I foresee the following basic application:  a few dozen subjects passively watch a large number of short videos as their brain states are recorded.  Using the raw pixels and brain data from all those viewings, a "joint model" will be built with Machine Learning that, when given video input, produces its own time-varying synthetic brain states closely matching what you would see if a human viewed those same videos.  These wouldn't merely be PREDICTED brain states; they would also be USED to guide future brain-state predictions as the media runs.  Once the video the machine is "watching" finishes, the final synthetic brain state will give a representation of what the machine "thought of it".
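Here is a rough PyTorch sketch of the kind of "joint model" I have in mind (my own toy illustration, with invented shapes and names -- not anyone's published architecture): it rolls out synthetic brain states over time, and each prediction is carried forward to condition the next one.

```python
# Toy PyTorch sketch of the "joint model" idea (invented shapes and names):
# given video frames, the model rolls out synthetic brain states over time,
# carrying its own hidden state forward so earlier predictions condition
# later ones.
import torch
import torch.nn as nn

class SyntheticBrainStateModel(nn.Module):
    def __init__(self, state_dim=256, frame_channels=3):
        super().__init__()
        # Small conv encoder for a single video frame.
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(frame_channels, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # The GRU's hidden state plays the role of the evolving internal state.
        self.rnn = nn.GRUCell(64, state_dim)
        # Readout mapping the hidden state to the recorded-brain-state target
        # (e.g. one value per region per time step).
        self.readout = nn.Linear(state_dim, state_dim)

    def forward(self, frames):
        # frames: (batch, time, channels, height, width)
        batch, time = frames.shape[:2]
        state = frames.new_zeros(batch, self.rnn.hidden_size)
        predicted = []
        for t in range(time):
            features = self.frame_encoder(frames[:, t])
            state = self.rnn(features, state)          # carried forward in time
            predicted.append(self.readout(state))      # synthetic brain state at t
        return torch.stack(predicted, dim=1)           # (batch, time, state_dim)

# Training would regress the output onto recorded population responses, e.g.
#   loss = nn.functional.mse_loss(model(frames), recorded_brain_states)
model = SyntheticBrainStateModel()
dummy_frames = torch.randn(2, 8, 3, 64, 64)
print(model(dummy_frames).shape)  # torch.Size([2, 8, 256])
```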

 

Sound far-fetched?  Well, here is a very crude version of what I just described:

 

http://ieeexplore.ie...cument/7056522/

 

The article describes a method that uses a Deep Boltzmann Machine neural net to fuse human fMRI features, recorded as people watch a video, with features derived from the video itself (I don't believe it does this over multiple time steps, though).  During training time, the video features and fMRI features are both available; but during test time, only the video features are available.  The neural net used has the nice property that if one of the modalities is missing at test time, you can apply something called "Gibbs Sampling" to fill in the missing modality.  Deep Boltzmann Machines would not be the best method to use if you scale up this work, use pixel inputs for multiple frames, and try to predict and USE a synthetic brain response for multiple time steps; but it at least shows there is one path forward on building this type of brain-based A.I.
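To illustrate the "fill in the missing modality" trick, here is a toy single-layer RBM over concatenated [video, fMRI] feature bits (a real Deep Boltzmann Machine has several stochastic layers, and the paper's details differ -- the sizes and training loop here are my own invention):

```python
# Toy illustration of cross-modal "fill-in" with Gibbs sampling: a single-layer
# Bernoulli RBM over concatenated [video, fMRI] feature bits.  A real Deep
# Boltzmann Machine has multiple stochastic layers; sizes and training details
# here are invented.
import torch

class JointRBM:
    def __init__(self, n_video, n_fmri, n_hidden, lr=0.05):
        self.n_video, self.n_fmri = n_video, n_fmri
        self.W = torch.randn(n_video + n_fmri, n_hidden) * 0.01
        self.vb = torch.zeros(n_video + n_fmri)   # visible biases
        self.hb = torch.zeros(n_hidden)           # hidden biases
        self.lr = lr

    def _h_given_v(self, v):
        return torch.sigmoid(v @ self.W + self.hb)

    def _v_given_h(self, h):
        return torch.sigmoid(h @ self.W.T + self.vb)

    def train_step(self, v0):
        # One step of contrastive divergence (CD-1) on paired [video, fMRI] bits.
        ph0 = self._h_given_v(v0)
        v1 = torch.bernoulli(self._v_given_h(torch.bernoulli(ph0)))
        ph1 = self._h_given_v(v1)
        self.W += self.lr * (v0.T @ ph0 - v1.T @ ph1) / v0.shape[0]
        self.vb += self.lr * (v0 - v1).mean(0)
        self.hb += self.lr * (ph0 - ph1).mean(0)

    def infer_fmri(self, video_bits, steps=50):
        # Clamp the observed video half of the visible layer and Gibbs-sample
        # the missing fMRI half.
        v = torch.cat([video_bits, torch.rand(video_bits.shape[0], self.n_fmri)], dim=1)
        for _ in range(steps):
            h = torch.bernoulli(self._h_given_v(v))
            v = self._v_given_h(h)
            v[:, :self.n_video] = video_bits       # re-clamp the observed modality
        return v[:, self.n_video:]                 # synthetic fMRI features

rbm = JointRBM(n_video=128, n_fmri=64, n_hidden=256)
paired_bits = torch.bernoulli(torch.rand(32, 192))  # stand-in for binarized paired features
for _ in range(100):
    rbm.train_step(paired_bits)
print(rbm.infer_fmri(paired_bits[:, :128]).shape)   # torch.Size([32, 64])
```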

 

More intriguing would be to build a chatbot of some kind, using brain data.  I have a feeling that if you collected enough data (from maybe 100 test subjects), and used some of the best "generative models" out there today, you could build a chatbot that far and away outperforms the best of the best.  I refer to such a hypothetical chatbot as a "Zombie AGI", and conjecture that talking to it would be like chatting with a human with moderate dementia:  it would be able to hold a conversation for several exchanges (using short-term and working memory), but would get confused when things got too complicated.  It would have common sense reasoning ability and good language understanding.  And, most interesting of all, if you were to investigate the synthetic brain states it generates, they would closely resemble what you see in a human test subject!
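A sketch of what "conditioning a chatbot on brain data" might look like at the code level (again, my own toy illustration, not any existing system): a tiny reply decoder whose hidden state is initialized from a synthetic brain-state vector, so its replies depend on brain data and not just on the text of the conversation.

```python
# Toy sketch (not any existing system): a tiny reply decoder whose hidden state
# is initialized from a "synthetic brain state" vector, so what it says depends
# on brain data and not only on the conversation text.  Dimensions are invented.
import torch
import torch.nn as nn

class BrainConditionedResponder(nn.Module):
    def __init__(self, vocab_size=10000, brain_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.init_from_brain = nn.Linear(brain_dim, hidden_dim)
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.to_vocab = nn.Linear(hidden_dim, vocab_size)

    def forward(self, reply_tokens, brain_state):
        # brain_state: (batch, brain_dim), e.g. the final synthetic state produced
        # while the model "listened" to the user's last utterance.
        h0 = torch.tanh(self.init_from_brain(brain_state)).unsqueeze(0)  # (1, batch, hidden)
        out, _ = self.gru(self.embed(reply_tokens), h0)
        return self.to_vocab(out)   # next-token logits, (batch, seq_len, vocab)

model = BrainConditionedResponder()
logits = model(torch.randint(0, 10000, (4, 12)), torch.randn(4, 256))
print(logits.shape)  # torch.Size([4, 12, 10000])
```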

 

So when can we expect to see something like this?  Let's see:  assuming good BCIs are available in about 2 to 3 years, and assuming they are cheap enough that thousands or millions of people can buy them, I expect it might take another 2 to 3 years to collect the data and experiment with the best brain-prediction models.  And so, the first versions of such a chatbot might appear in the academic research literature possibly 5 to 6 years from now!

 

If you want to see A.I. take off like a rocket, faster than what Deep Learning has produced to date, then focus on the arrival of those BCIs!


  • Casey, Yuli Ban, Maximus and 7 others like this

#2
_SputnicK_ (Member, 61 posts, Location: USA)

Quote (starspawn0):
"Now hear me out about my latest prediction: I predict that with the arrival of good, inexpensive, mass-market, non-invasive Brain-Computer Interfaces, A.I. will advance incredibly rapidly. Furthermore, there are multiple groups working to produce such BCIs in the next 2 years or so, including Mary Lou Jepsen's group at Openwater, and Facebook's Building 8 group led by Mark Chevillet."

 

I think two years is overly optimistic. The field is still in its infancy; there is an incredible amount of progress that needs to be made before a company can produce the kind of mass-market BCIs you have described. The startup Neuralink, managed by Elon Musk, is focused on implantable BCIs. In an interview, Musk said it will probably take eight to ten years before the technology reaches people without disabilities of some kind. I am willing to admit that estimate could be wrong, however.


  • starspawn0 likes this

Artificial intelligence will reach human levels by around 2029.

Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.
-Ray Kurzweil


#3
starspawn0 (Member, 39 posts)

You're probably right that 2 to 3 years to mass-market is too optimistic.  It might be more like 4 years.  But "early partners", like academic labs, will get the tech first, and will generate datasets.

 

Just so we're clear, and talking about the same thing:  I'm interested here in NON-INVASIVE BCIs that can only READ, not write.  EEGs  are one example, but their spatial resolution is too low.  That OBELAB NIRSIT headset is in the ballpark of what I'm talking about, but it doesn't scan enough of the brain (only a patch from the prefrontal cortex), and last I checked on pricing, it was $30,000 per headset -- needs to come way down in price.  It also only records the BOLD signal, which has a ~1 second latency.  Latency and indirect measures of neural activity are fine, though, as I outlined in that Reddit posting. 

 

Elon Musk's Neuralink project is a fantasy, and DARPA's INVASIVE BCIs are decades away.



#4
Alpha Centari (Member, 71 posts)

Brain-computer interfaces could also be used to augment our intelligence in the near future, and if wired or connected together, they could be used for telepathic communication, which could revolutionize how we communicate altogether.

#5
Yuli Ban (Nadsat Brat, Moderator, 17,143 posts, Location: Anur Margidda)

Quote (starspawn0):
"Elon Musk's Neuralink project is a fantasy, and DARPA's INVASIVE BCIs are decades away."

Would you say that using AI to accelerate this development could bring "decades" down to "a decade"? And what about utilizing AI to more accurately refine neural feedback (i.e. "sharpening a fuzzy image")? Or is this impossible?


  • starspawn0 likes this
Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!

#6
Raklian (An Immortal In The Making, Moderator, 6,512 posts, Location: Raleigh, NC)

Quote (Alpha Centari):
"Brain-computer interfaces could also be used to augment our intelligence in the near future, and if wired or connected together, they could be used for telepathic communication, which could revolutionize how we communicate altogether."

 

That will definitely help people with hearing and speech disabilities.

 

But how will BCIs help in the case of, let's say, someone who grew up not knowing a language and is then given a BCI to try communicating with others? Will it even be possible to communicate without language, or with mentalese?

 

Language psychologists are going to have a field day with this.


  • _SputnicK_ likes this
What are you without the sum of your parts?

#7
Alpha Centari (Member, 71 posts)

Quote (Raklian):
"That will definitely help people with hearing and speech disabilities.

But how will BCIs help in the case of, let's say, someone who grew up not knowing a language and is then given a BCI to try communicating with others? Will it even be possible to communicate without language, or with mentalese?

Language psychologists are going to have a field day with this."

Well, by the time BCIs and other related technologies start becoming widespread, which will likely be within the next few decades (around the 2030s and '40s), there will likely be some sort of translating device hooked up to them, similar to the translation apps we have on our smartphones but more sophisticated.

#8
_SputnicK_ (Member, 61 posts, Location: USA)

 

Quote (Raklian):
"But how will BCIs help in the case of, let's say, someone who grew up not knowing a language and is then given a BCI to try communicating with others? Will it even be possible to communicate without language, or with mentalese?

Language psychologists are going to have a field day with this."

 

 

This might be one of the most promising applications of a BCI. We already have Google Translate; the technology needed to translate languages has existed for a while now. The BCI could send the translated text directly to your brain, removing the gap in conversation. Of course, this will probably exist in a simpler form in AR long before it is applicable in a BCI.


Artificial intelligence will reach human levels by around 2029.

Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.
-Ray Kurzweil


#9
starspawn0 (Member, 39 posts)

@yuli

 

Christopher Mims (of the Wall Street Journal, I think; he used to write for Technology Review, and has a background in neuroscience) wrote about Neuralink recently, and interviewed scientists.  Mims's conclusion was that Tim Urban's piece was silly, and much too long; and the scientists thought that it's going to be a long time before anything LIKE what Musk is aiming to produce will be a reality.  Decades.  Advanced A.I. probably won't change that -- a lot of experimentation will be required, and then there's the fact that people don't want to put chips in their brains.

 

The scientists are also skeptical of Mark Chevillet's work at Facebook Building 8, but not so much about producing the hardware -- more about being able to read silent speech.  I actually think that won't be as difficult as they expect, using Machine Learning and enough data -- however, getting it to work robustly, with a low error rate, may be a challenge.  Separating out what you are idly thinking about versus what you want to type probably WILL be difficult.  That seems to be one of the main criticisms on the applications side.

 

I've viewed a few summaries of the work that Chevillet has been doing with Johns Hopkins' optical physics group, and it looks encouraging.  They are already in the process of testing devices to detect the so-called FOS, or Fast Optical Scattering/Signal in "optical phantoms" and real biological tissue.  One of the challenges they seem to be having is getting the number of "optode pairs" high enough, and then they also seem to think that there will be considerable challenges in shrinking it all down to a consumer product at a low price.  

 

If they were aiming for a medical scanning device, then it would probably take several more years to get something workable, since the scan has to be precise.  However, for BCI applications it might not matter how accurately it can scan, so long as it isn't TOO inaccurate -- Machine Learning models are noise-tolerant.

 

Perhaps the most promising approach would be to take ordinary fNIRS, raise the scanning resolution, and give up trying to scan brain features at depth.  There would be the usual BOLD latency, but you could probably still have a good device with a lot of uses, including running neuroscience experiments and building better AI.

 

Another option is to combine fNIRS with EEG.
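One simple way such a fusion could work (an assumption on my part, not a published protocol): window the fast EEG down to the slow fNIRS sampling rate, then concatenate the two feature sets per time step.

```python
# Illustrative fusion only (not a published protocol): average the fast EEG
# into windows matching the slow fNIRS sampling rate, then concatenate the
# two feature sets per time step.
import numpy as np

eeg_rate, fnirs_rate, seconds = 256, 8, 60          # Hz; illustrative values
eeg = np.random.randn(eeg_rate * seconds, 64)       # 64 EEG channels
fnirs = np.random.randn(fnirs_rate * seconds, 128)  # 128 fNIRS channels

window = eeg_rate // fnirs_rate
# Summarize each EEG window so both streams share a common time base;
# richer summaries (band powers, etc.) would slot in here.
eeg_windows = eeg[: fnirs.shape[0] * window].reshape(fnirs.shape[0], window, -1)
eeg_features = eeg_windows.mean(axis=1)

fused = np.concatenate([eeg_features, fnirs], axis=1)
print(fused.shape)  # (480, 192): one fused feature vector per fNIRS sample
```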



#10
starspawn0 (Member, 39 posts)

Oh, one more thing:  if you want to see systems that can WRITE to the brain, I would focus my hopes on this technology:

 

http://blogs.discove...ive-deep-brain/

 

Neuroskeptic seems excited.  He writes, "There’s no doubt that this is one of the most exciting neuroscience papers to come along in a long time. It’s a cliché, but TI really could revolutionize neuroscience, as well as having clinical applications in the treatment of disorders such as Parkinson’s disease and more."  

 

It still might be a long way from the WRITE component of a BCI, though.



#11
LWFlouisa (Member, 95 posts, Location: NashChat)

What would be the distinction between AI and devices that simply enhance human memory? For context: at one point I had built an offline chat room. While it's intended as a way to prevent snooping between chatters using a thumb drive, it can temporarily retain data beyond the length of any particular conversation.

 

Suppose one had trouble keeping track of previous conversations: one could use the temporary chat log to remember what had been said, and be able to continue from where one left off, like a digital save point in a meatspace game.

 

But even with this, it's not really AI. It's something far more subtle.

 

But it could be argued it could remember things when you want to have your mind on something else.

 

Erasing chat logs for privacy purposes is simply a matter of using, say, an 82AB hash function, or any given hash function more secure than SHA512, with a Super_Bin functionality that permanently encrypts any particular conversation that is no longer relevant to the topic at hand.

 

Then to restart, just write over the file.


Cerebrum Cerebellum -- Speculative Non-Fiction -- Writing

Luna Network -- Nodal Sneaker Network -- Programming

Published Works: https://www.wattpad.com/432077022-tevun-krus-44-sword-planet-the-intergalactic-heads


#12
Kynareth (Member, 138 posts)

Honestly, starspawn0's Reddit post seems very logical, but I still doubt it's going to happen soon. Every time I read about technologies that are "just around the corner" I get excited, and then disappointed with reality a few years later. It's like that every time. It always looks promising, you think you understand how it works and how it will become widespread, but then it doesn't. It would be very nice if it did, though.


  • Jakob and sasuke2490 like this

#13
starspawn0 (Member, 39 posts)

New results exactly like what I wrote about in the OP:
 
http://perceive.diee..._human_mind.php
 
It's about a project to transfer "human visual features" from 128-channel EEG to a neural net image recognition system. 
 

What if we could effectively read the mind and transfer human visual capabilities to computer vision methods? In this work, we aim at addressing this question by developing the first visual object classifier driven by human brain signals. In particular, we employ EEG data evoked by visual object stimuli combined with Recurrent Neural Networks (RNN) to learn a discriminative brain activity manifold of visual categories in a reading the mind effort. Afterward, we transfer the learned capabilities to machines by training a Convolutional Neural Network (CNN)–based regressor to project images onto the learned manifold, thus allowing machines to employ human brain–based features for automated visual classification. We use a 128-channel EEG with active electrodes to record brain activity of several subjects while looking at images of 40 ImageNet object classes. The proposed RNN-based approach for discriminating object classes using brain signals reaches an average accuracy of about 83%, which greatly outperforms existing methods attempting to learn EEG visual object representations. As for automated object categorization, our human brain–driven approach obtains competitive performance, comparable to those achieved by powerful CNN models and it is also able to generalize over different visual datasets. This gives us a real hope that, indeed, human mind can be read and transferred to machines.

 
The work was presented at the prestigious CVPR 2017 conference.  Here is a talk by the lead author:
 
https://m.youtube.co...h?v=9eKtMjW7T7w
 
The work has two parts:  the first part is about recognizing what image a person is looking at, just based on their EEG signals, and achieves an accuracy of 83% for 40 image classes (classes drawn from a portion of ImageNet).  A student recently bumped that up to over 90% accuracy in this writeup, using the same EEG dataset:
 
https://imatge.upc.e.../pub/xBozal.pdf
 
In addition, they used learned features from this EEG --> image class model to improve an image classification neural network.  They bumped the accuracy up to the point where it is competitive with GoogleNet image classification for these 40 categories. 
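Roughly, the transfer step looks like this in PyTorch (shapes and names are mine, not the authors' code): a CNN regressor is trained to map images into the previously learned EEG embedding space, and classification is then done in that space.

```python
# Rough sketch of the transfer step (shapes and names are mine, not the
# authors' code): a CNN regressor maps images into a previously learned EEG
# embedding space; a classifier then operates in that space.
import torch
import torch.nn as nn

embedding_dim, n_classes = 128, 40

image_to_eeg = nn.Sequential(                      # CNN regressor
    nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, embedding_dim),
)
classifier = nn.Linear(embedding_dim, n_classes)   # fit on real EEG embeddings

images = torch.randn(16, 3, 224, 224)              # a batch of stimulus images
eeg_embeddings = torch.randn(16, embedding_dim)    # from the RNN EEG encoder
labels = torch.randint(0, n_classes, (16,))

# Stage 1 (transfer): make the CNN's output match the EEG embedding per image.
regression_loss = nn.functional.mse_loss(image_to_eeg(images), eeg_embeddings)
# Stage 2 (classification): predict the class from the regressed embedding.
class_loss = nn.functional.cross_entropy(classifier(image_to_eeg(images)), labels)
print(regression_loss.item(), class_loss.item())
```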
 
The author says at the end in the above video that they want to improve the model to work with a lot more classes.  This will require collecting a lot more EEG data, as well as improving experimental protocols to reduce sources of error. 

A more interesting "next step" -- to me, anyhow -- would be to make very long recordings (multiple 1-hour sessions) of single individuals as they watch video and listen to audio. There should be lots of high-level brain features that could be extracted and transferred into video and audio "understanding" systems. If audio and video understanding systems can be made to work well enough, they would have all kinds of applications -- e.g. to improving robots, chatbots, self-driving cars, you name it!

 
Could EEG alone be sufficient to put the plan I listed in the OP above into action?  I have my doubts, but recent work has shown that it's possible to predict fMRI responses just from EEG.  One such work is this paper:
 
http://etkinlab.stan...yModulation.pdf
 
The authors say:
 

Source estimation of EEG is considered an ill-posed problem (20). This problem becomes even more detrimental when aiming to locate sources in deep subcortical regions. The EFP model introduced a novel data-driven approach to enable the prediction of fMRI-BOLD activity using only EEG (13). However, when forsaking a priori hypotheses, this data-driven method also suffers from a higher risk of false discovery. By conducting simultaneous EEG/fMRI on a new sample, the current study validated that the amyg-EFP can indeed predict amygdala-BOLD activity (Figure 1D). The fact that this prediction is included in the ROI used to develop the model is both reassuring and remarkable.
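The general recipe -- regress a region's BOLD signal onto concurrent EEG features -- can be sketched in a few lines (illustrative only, with synthetic data; this is not the published EFP model):

```python
# Illustrative only (synthetic data, not the published EFP model): regress a
# region's BOLD signal onto concurrent EEG-derived features and check how well
# held-out samples are predicted.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_samples, n_eeg_features = 2000, 300   # e.g. channel x frequency-band powers

eeg_features = rng.normal(size=(n_samples, n_eeg_features))
true_weights = rng.normal(size=n_eeg_features) * (rng.random(n_eeg_features) < 0.05)
bold_signal = eeg_features @ true_weights + rng.normal(scale=0.5, size=n_samples)

X_tr, X_te, y_tr, y_te = train_test_split(eeg_features, bold_signal, test_size=0.25, random_state=0)
model = Ridge(alpha=10.0).fit(X_tr, y_tr)
print(f"held-out R^2 of the EEG -> BOLD prediction: {model.score(X_te, y_te):.2f}")
```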

 
Regardless, much better BCIs are coming.  And Mary Lou Jepsen's Openwater BCI system recently got a convert: David Boas, an optical physics and neuroimaging expert at Harvard, says that talking to Jepsen turned him from a skeptic into a believer:
 
https://medium.com/n...ne-abc53a3b27ca
 

However, David Boas, who researches optical imaging of the brain at Harvard, told me that talking with Jepsen has converted him from a skeptic.
 
“When I first started hearing from colleagues about what Mary Lou Jepsen was proposing, I, too, viewed the claims as grandiose,” Boas says. “I have now come to the realization that aspects of her vision are possible.” With significant investment, Boas says, LCD technology could be modified to dramatically improve our ability to image through human tissue.
 
“Theoretically, this is possible, although I am still trying to get my head around how deeply we could in fact get the light to focus inside the body.”


  • Yuli Ban likes this

#14
Yuli Ban (Nadsat Brat, Moderator, 17,143 posts, Location: Anur Margidda)

Question: what's your opinion on the idea presented here?


  • starspawn0 likes this
Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!

#15
starspawn0 (Member, 39 posts)

I saw that the other day. I don't think it has a chance of going very far -- just like Alex Wissner-Gross's idea about intelligence and entropy.

I also don't think DeepMind is going to produce anything revolutionary anytime soon. I don't think their Atari stuff or the more recent stuff will "move the needle" very far, no matter how many PNAS and Nature papers they write.

I see two approaches to improving AI rapidly in the coming years: (1) Lots of datamining from lots of big data using a combination of algorithms; and, (2) Transferring features and capability directly from the human brain, using BCIs.
  • Casey and Yuli Ban like this

#16
Raklian (An Immortal In The Making, Moderator, 6,512 posts, Location: Raleigh, NC)

All of this reinforces the idea that the brain is incredibly complex and can't be easily modeled with our current understanding of it. We need further advances in nanotechnology to achieve the necessary imaging resolution, across all cortical regions of the brain, for models to respond meaningfully to changing situations in a way that approximates human intelligence.


  • Yuli Ban, sasuke2490 and starspawn0 like this
What are you without the sum of your parts?

#17
starspawn0 (Member, 39 posts)

If you want to emulate the brain at a high level of fidelity, then, yes, you're going to need some advances in nanotech.  But if you just want to build systems that are much smarter than what we have right now, but not all the way to "sentient AI" (only "Zombie AI"),  then I think it can be accomplished by extracting algorithmic information from the brain at the level of neural populations.
 
We are very far from understanding the brain, and don't even know all the types of neurons in the brain.  But work like the above shows that machine learning can still extract useful features, even though we don't understand how they work.  We also don't fully understand what features Deep Learning extracts; but we can still build photo classification apps.
  • Yuli Ban and sasuke2490 like this

#18
starspawn0 (Member, 39 posts)

More research related to the OP to share. First up is using BCIs + Deep Learning to train driverless cars, to make them more "human":

https://arxiv.org/abs/1709.04574

The research was done at Columbia University, which I was not aware did this kind of work.

They say in the paper:
 

This is the first demonstration of how an hBCI can be used to provide implicit reinforcement to an AI agent in a way that incorporates user preferences into the control system.

Just imagine what could be done with a more advanced BCI that has 1000x the spatial resolution (and comparable temporal resolution); and imagine what could be done if thousands or millions of recordings are made, once those BCIs go into large-scale consumer production.
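The "implicit reinforcement" idea can be caricatured in a few lines of Python (my own simplification, not the paper's setup): a decoded satisfaction/error signal from the brain becomes the scalar reward for a simple learning agent choosing among driving styles.

```python
# Caricature of "implicit reinforcement" from a BCI (my simplification, not the
# paper's setup): a decoded satisfaction/error signal becomes the reward for a
# simple agent choosing among driving styles.
import numpy as np

rng = np.random.default_rng(0)
actions = ["cautious", "moderate", "aggressive"]
user_preference = 1                     # hidden ground truth: the user likes "moderate"

q_values = np.zeros(len(actions))
epsilon, lr = 0.1, 0.2

def decoded_brain_reward(action_idx):
    # Stand-in for a classifier over brain data: dissatisfaction shows up as a
    # negative signal, satisfaction as a positive one, plus decoding noise.
    return (1.0 if action_idx == user_preference else -1.0) + rng.normal(scale=0.5)

for step in range(500):
    a = rng.integers(len(actions)) if rng.random() < epsilon else int(np.argmax(q_values))
    q_values[a] += lr * (decoded_brain_reward(a) - q_values[a])

print({name: round(float(q), 2) for name, q in zip(actions, q_values)})
```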

The second link is to a paper by Marcel van Gerven. It is an outline of progress towards "natural intelligence" and "strong AI":

https://www.biorxiv....5.full.pdf html

He seems to think that artificial life (e.g. genetic algorithms), biophysical modeling (e.g. the Human Brain Project), and rule-based methods are not going to cut it. They all have drawbacks. His preferred method is "connectionism", or artificial neural networks; when trained with Backpropagation at massive scale this is often called "Deep Learning".

But how should the neural networks be trained, and what should they be trained on? It seems to me that one of his preferred methods is basically what I outlined in the OP: train them on behavioral + brain data. Deep in the paper he writes:
 

Rather than using neural networks to explain certain observed neural or behavioral phenomena, one can also directly fit neural networks to neurobehavioral data. This can be achieved via an indirect approach or via a direct approach. In the indirect approach, neural networks are first trained to solve a task of interest. Subsequently, the trained network's responses are fitted to neurobehavioral data obtained as participants engage in the same task. Using this approach, deep convolutional neural networks trained on object recognition, action recognition and music tagging have been used to explain the functional organization of visual as well as auditory cortex (Güçlü and van Gerven, 2015, 2017; Güçlü et al., 2016). The indirect approach has also been used to train RNNs via reinforcement learning on a probabilistic categorization task. These networks have been used to fit the learning trajectories and behavioral responses of humans engaged in the same task (Bosch et al., 2016). Mante et al. (2013) used RNNs to model the population dynamics of single neurons in prefrontal cortex during a context-dependent choice task. In the direct approach, neural networks are trained to directly predict neural responses. For example, Mcintosh et al. (2016) trained convolutional neural networks to predict retinal responses to natural scenes, Joukes et al. (2014) trained RNNs to predict neural responses to motion stimuli, and Güçlü and Gerven (2017) used RNNs to predict cortical responses to naturalistic video clips.

This ability of neural networks to explain neural recordings is expected to become increasingly important (Sompolinsky, 2014; Marder, 2015), given the emergence of new imaging technology where the activity of thousands of neurons can be measured in parallel (Ahrens et al., 2013; Lopez et al., 2016; Pachitariu et al., 2016; Churchland and Sejnowski, 2016; Yang and Yuste, 2017). Better understanding will also be facilitated by the development of new data analysis techniques to elucidate human brain function (Kass et al., 2014), the use of ANNs to decode neural representations (Schoenmakers et al., 2013; Güçlütürk et al., 2017), as well as the development of approaches that elucidate the functioning of ANNs (e.g., Nguyen et al., 2016; Kindermans et al., 2017; Miller, 2017).

So, he and his students have already tried to predict cortical responses when people watch video clips. This is very similar to one of the applications that I wrote about in the OP. However, the scale at which they have done this is quite small at the moment. New neuroimaging technology should enable them to scale this up 1000x. In the meantime, they are amassing vast quantities of fMRI recordings to train models. In an IEEE Pulse article I posted the other day, it said that they will soon have 40 hours of recordings from a single individual -- which is unheard of for fMRI. That amount of data from a single subject should lead to much better trained models. Perhaps we will read about some breakthrough video understanding technology from that group in the near future!
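For reference, the "direct approach" he describes is essentially an encoding model. A minimal sketch (illustrative, with synthetic data standing in for precomputed CNN features and recorded responses):

```python
# Minimal "direct approach" encoding model (illustrative; synthetic data stands
# in for precomputed CNN features of video frames and for recorded responses):
# fit a linear map from stimulus features to each region's response.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_timepoints, n_cnn_features, n_voxels = 1500, 512, 100

cnn_features = rng.normal(size=(n_timepoints, n_cnn_features))
encoding_weights = rng.normal(size=(n_cnn_features, n_voxels)) * 0.05
responses = cnn_features @ encoding_weights + rng.normal(scale=1.0, size=(n_timepoints, n_voxels))

X_tr, X_te, Y_tr, Y_te = train_test_split(cnn_features, responses, test_size=0.2, random_state=0)
model = Ridge(alpha=10.0).fit(X_tr, Y_tr)
print(f"mean held-out R^2 across simulated regions: {model.score(X_te, Y_te):.2f}")
```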
  • Casey and superexistence like this

#19
starspawn0 (Member, 39 posts)

Yet more research to report, related to the OP. This time from a guy named Thomas Dean at Google Research:

https://arxiv.org/abs/1710.05183

Basically, he wants to model the brain of a simple organism like a fly or zebrafish at the "mesoscale", using brain connectivity information as well as time-varying functional brain image data obtained by, say, optogenetics. The functional brain data is used to train neural networks with Deep Learning, which then predict brain states.
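A toy version of that setup might look like this (my own sketch, not Dean's proposal): a learned update rule predicts the next mesoscale brain state, with its weights masked by a known connectivity matrix so only anatomically connected regions influence each other.

```python
# Toy version of the setup (my sketch, not Dean's proposal): learn an update
# rule that predicts the next mesoscale brain state, with weights masked by a
# known connectivity matrix so only connected regions influence each other.
import torch
import torch.nn as nn

n_regions = 50
connectivity = (torch.rand(n_regions, n_regions) < 0.1).float()  # stand-in connectome

class MaskedDynamics(nn.Module):
    def __init__(self, mask):
        super().__init__()
        self.mask = mask
        self.weight = nn.Parameter(torch.randn(n_regions, n_regions) * 0.1)

    def forward(self, state):
        # Zero out weights between anatomically unconnected regions.
        return torch.tanh(state @ (self.weight * self.mask))

model = MaskedDynamics(connectivity)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

states = torch.randn(200, n_regions)   # (time, regions) functional recording
for _ in range(100):                   # train to predict state t+1 from state t
    loss = nn.functional.mse_loss(model(states[:-1]), states[1:])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(f"one-step prediction loss: {loss.item():.3f}")
```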

Sounds very ambitious, and might result in realistic virtual flies or fish, some day.

I would prefer to see more abstract models trained with human brain data at the neural population level (meso-macroscale). I think some important applications will result, as I said.
  • superexistence and SkyHize like this




