
True Artificial Intelligence Could Be Closer Than We Think, Via Brain-Computer Interfaces + Deep Learning

AI BCIs Brain-Computer Interface Artificial Intelligence

92 replies to this topic

#81
starspawn0

    Member

  • Members
  • 1,119 posts
Neuromod had an update:

https://twitter.com/...534555477872640

An important milestone of the project was reached today! First fMRI run with videogame play using a fully MRI-compatible game controller, designed and built by @cyrand1.


Perhaps they will soon have enough data to build a brain-like game-playing system.
  • Yuli Ban likes this

#82
starspawn0

    Member

  • Members
  • 1,119 posts
The people behind the Neuromod project have written an explainer / mini-announcement piece:

https://psyarxiv.com/3epws

They lay out their vision in a fair bit of detail. The highlights:

* They aim to record single individuals for hundreds of hours each while those individuals engage in various cognitively demanding tasks, like playing videogames.

* With all that data, recorded alongside behavioral output and sensory input, they plan to train neural net models that imitate brain activity + behavior. They argue that trying to imitate the brain data in isolation might not succeed, but that combining it with behavior might (I think they are right).

* They argue that you might not need very high-resolution brain-scan data to pull this off, and discuss so-called "neural mass models" as one example architecture. They expect to run a lot of experiments, trying various architectures, in order to find one that works well. As I have stated before, I strongly suspect that a large number of model types will work well -- there isn't much "secret sauce" in the model if you have really good data.

* They also ask whether they will need to use "priors", such as neural connectivity, to reduce the number of parameters. I have discussed this very thing in this posting of mine:

https://www.reddit.c...ual_assistants/
 

An important set of restrictions to reduce the number of parameters comes from brain connectivity: each voxel of neurons can only exert an influence on ones it is directly connected to, given a short enough time-step. Using “functional connectivity” restrictions one can maybe reduce things further. This will make the predictor model “sparse” and easier to learn (fewer training examples needed). Furthermore, as has been pointed out before (e.g. in previous discussions about a talk by David Sussillo), different regions of the brain act like independent little “modules” (that only weakly influence others over short time-steps); this should have the effect of requiring fewer training examples to train a model, for reasons I’ve discussed before.
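To make the connectivity idea concrete, here is a minimal sketch (my own illustration in PyTorch, not anything from the Neuromod paper; the voxel count and the random mask are made up, and real priors would come from structural or functional connectivity estimates) of a next-time-step predictor whose weights are zeroed wherever two voxels are not connected:

import torch
import torch.nn as nn

class ConnectivityMaskedPredictor(nn.Module):
    """Predict the brain state at t+1 from the state at t, with weights
    restricted by a (hypothetical) voxel-to-voxel connectivity mask."""
    def __init__(self, n_voxels, connectivity_mask):
        super().__init__()
        # connectivity_mask[i, j] = 1 means voxel j may influence voxel i
        # over one short time-step; every other weight is forced to zero.
        self.register_buffer("mask", connectivity_mask.float())
        self.weight = nn.Parameter(torch.randn(n_voxels, n_voxels) * 0.01)
        self.bias = nn.Parameter(torch.zeros(n_voxels))

    def forward(self, x_t):
        w = self.weight * self.mask          # zero out unconnected pairs
        return x_t @ w.T + self.bias

# Toy usage with random data standing in for an fMRI time-series.
n_voxels = 500
mask = torch.rand(n_voxels, n_voxels) < 0.05     # ~5% of pairs "connected"
model = ConnectivityMaskedPredictor(n_voxels, mask)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

scan = torch.randn(1000, n_voxels)               # fake scan, 1000 time-steps
for step in range(100):
    pred = model(scan[:-1])                      # predict each next frame
    loss = nn.functional.mse_loss(pred, scan[1:])
    opt.zero_grad()
    loss.backward()
    opt.step()

The mask is exactly the sparsity argument above in code: the effective number of parameters scales with the number of allowed connections rather than with n_voxels squared, so fewer training examples should be needed.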


Won't be long now... It's all coming together...
  • Casey, Yuli Ban and Alislaws like this

#83
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 20,619 posts
  • LocationNew Orleans, LA

So what do you see happening in the next 3 to 5 years? Or maybe even that's too long of a timescale. Maybe 1 to 3 years is a better timeframe.


And remember my friend, future events such as these will affect you in the future.


#84
starspawn0

    Member

  • Members
  • 1,119 posts
Difficult to predict with certainty. It's my understanding that they will release tranches of data each year, finishing in 5 years, and that they will probably experiment with the data each year. So, perhaps 1 to 2 years from now we will see the first AI models they have built with this data. It will probably work something like this: you send either a raw (down-sampled) bitmap or an encoding of frames of video + audio from a video game to the network, and it imitates a human player, complete with motor signals for how to move the joystick. The system then literally plays the game. The same brain-based system will be able to play a wide variety of games out-of-the-box -- say, pretty much any game, the way a human who isn't familiar with it would. So it won't be an expert player right away, but it will still be impressive.
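A rough sketch of what I have in mind (purely illustrative; the frame size, voxel count, controller dimensions and two-headed loss are my own guesses, not Neuromod's design): a shared encoder over down-sampled game frames feeds one head that regresses the recorded fMRI response and another that regresses the recorded controller output.

import torch
import torch.nn as nn

class BrainBehaviorImitator(nn.Module):
    """Illustrative model: down-sampled game frames ->
    (predicted fMRI voxels, predicted controller output)."""
    def __init__(self, n_voxels=2000, n_controls=8):
        super().__init__()
        self.encoder = nn.Sequential(          # expects 64x64 RGB frames
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 13 * 13, 512), nn.ReLU(),
        )
        self.brain_head = nn.Linear(512, n_voxels)    # imitate brain activity
        self.motor_head = nn.Linear(512, n_controls)  # imitate joystick/buttons

    def forward(self, frames):
        h = self.encoder(frames)
        return self.brain_head(h), self.motor_head(h)

model = BrainBehaviorImitator()
frames = torch.randn(16, 3, 64, 64)        # fake batch of down-sampled frames
fmri = torch.randn(16, 2000)               # fake recorded voxel responses
controls = torch.randn(16, 8)              # fake recorded controller signals

pred_brain, pred_motor = model(frames)
# Joint objective: imitate the brain AND the behavior.
loss = nn.functional.mse_loss(pred_brain, fmri) + \
       nn.functional.mse_loss(pred_motor, controls)
loss.backward()

The idea would be that at play time you keep only the motor head and feed the game's frames in a loop; the brain head is there to pull training towards human-like internal processing.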

Now, such a system can be used in lots of ways. One way is that it can serve as a "critic" in some larger system that learns to play at a superhuman level -- crucially, it will learn to do this even for very complex games that previously required elaborate architectures and add-ons, and it will do so while playing far, far fewer games than those systems have needed in the past.

Oh, and I don't think complex architectures will be required. Basic ones will do pretty well, given enough data.
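To make the "critic" idea a bit more concrete, here is one way it could slot into a standard policy-gradient loop -- my own sketch with fake tensors, not anything DeepMind or the Neuromod team has proposed: a frozen brain-imitation model acts as a human prior, and the agent is penalized for drifting too far from human-plausible actions while it optimizes for reward.

import torch
import torch.nn.functional as F

def policy_loss_with_brain_prior(policy_logits, prior_logits, actions,
                                 advantages, beta=0.1):
    """Policy-gradient loss with a frozen brain-imitation model as a
    human prior: good play is rewarded via the advantages, but the agent
    is penalized for drifting far from human-plausible action choices."""
    logp = F.log_softmax(policy_logits, dim=-1)
    chosen_logp = logp.gather(1, actions.unsqueeze(1)).squeeze(1)
    pg_loss = -(advantages * chosen_logp).mean()

    # KL(policy || brain prior) keeps exploration near human-like behavior.
    prior_logp = F.log_softmax(prior_logits, dim=-1)
    kl = (logp.exp() * (logp - prior_logp)).sum(dim=-1).mean()
    return pg_loss + beta * kl

# Toy call with fake tensors (batch of 32 states, 6 possible actions).
B, A = 32, 6
loss = policy_loss_with_brain_prior(
    torch.randn(B, A, requires_grad=True),   # agent's policy logits
    torch.randn(B, A),                       # frozen brain-imitation logits
    torch.randint(0, A, (B,)),               # actions that were sampled
    torch.randn(B))                          # advantage estimates
loss.backward()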

But while this is going on... DeepMind, OpenAI, Microsoft, Facebook, and others may continue to improve their own game-playing systems that don't use brain data. It's uncertain which approach will win the race towards a general-purpose, efficient game-playing system.

....

In parallel to this work, Neuromod and also Alex Huth's group are pursuing brain imitation of language understanding (have a person listen to a recording, record their brain, and then try to model it). It will probably take at least 2 years to acquire enough data to do something really, really impressive; but, again, there will probably be releases after the first year (actually, I'm not sure whether Neuromod will do this this year, or will focus 100% on video games in the first year). Once the data are acquired and the first models are trained, I expect they will work very well and will generalize easily to whole other categories of language input.

Probably in about 2 years Huth will write a paper with his student on this. They'll try various networks and report a few very surprising behaviors. For example, when a story turns sad, you will notice certain parts of the artificial brain light up, and this will persist until the mood of the story lifts -- showing that the system really understood what was going on. Or maybe, to understand a story, you have to be able to apply logic or to count, and that will be reflected in the brain-imitation system.

The first systems may not work quite at full human levels, but will show shocking, scary hints of human-like understanding. Shivers down the spine... "It's alive!"

Now, Huth might get the idea to integrate this system into a chatbot framework like I described here:

https://www.reddit.c...ual_assistants/

He may not have the right data to do it; on the other hand, he just might.

He could use GPT-2 as the language model / generator. The brain-imitation network would be the critic. The two of them together would work far and away better than either one alone. The combined system might produce shockingly human-like conversation -- so good as to have national security implications.
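Concretely, the simplest version is generate-then-rerank. Here is a toy sketch using the actual GPT-2 from the transformers library as the generator, with the brain critic stubbed out as a placeholder function (the real critic, and how it would score candidates against predicted brain responses, is entirely hypothetical here):

from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Generator: stock GPT-2.  Critic: a stand-in for a hypothetical brain-imitation
# model that would score how human-brain-like the response to each reply is.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
generator = GPT2LMHeadModel.from_pretrained("gpt2")

def brain_critic_score(text: str) -> float:
    """Placeholder for the brain-imitation critic.  A real version would
    encode the text, predict the listener's brain response, and score its
    plausibility; here it just returns a dummy number."""
    return float(len(text) % 7)            # dummy score, purely illustrative

prompt = "I just got back from the doctor and"
inputs = tokenizer(prompt, return_tensors="pt")
candidates = generator.generate(
    **inputs,
    do_sample=True,
    top_k=50,
    max_length=40,
    num_return_sequences=8,
    pad_token_id=tokenizer.eos_token_id,
)

# Generate-then-rerank: keep whichever candidate the brain critic likes best.
texts = [tokenizer.decode(c, skip_special_tokens=True) for c in candidates]
print(max(texts, key=brain_critic_score))

A more ambitious version would fold the critic's score into decoding itself rather than reranking whole candidate replies.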
  • Zaphod, Casey and Yuli Ban like this

#85
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 20,619 posts
  • LocationNew Orleans, LA

Elon Musk's Neuralink project is a fantasy, and DARPA's invasive BCIs are decades away.

How quickly things change!


And remember my friend, future events such as these will affect you in the future.


#86
starspawn0

    Member

  • Members
  • 1,119 posts

No.  Musk was talking about FIVR and union with superintelligence when I wrote that.  That's the part that's decades away, and a fantasy near-term.  By 2050+ it may not be fantasy, though.

 

Also, we've had invasive BCIs for decades already.  The devil is in the details -- bandwidth, biocompatibility, durability, degree of invasiveness, depth, temporal resolution, dynamic range, noise level, etc.


  • Yuli Ban likes this

#87
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 20,619 posts
  • LocationNew Orleans, LA

No.  Musk was talking about FIVR and union with superintelligence when I wrote that.  That's the part that's decades away, and a fantasy near-term.  By 2050+ it may not be fantasy, though.

 

Also, we've had invasive BCIs for decades already.  The devil is in the details -- bandwidth, biocompatibility, durability, degree of invasiveness, depth, temporal resolution, dynamic range, noise level, etc.

I didn't get that, at least from the early comments. I thought you meant that Neuralink in general was a fantasy (considering Musk's reputation for overpromising). I barely even remember the early Neuralink hype from '17 because I distinctly recall thinking, "this isn't happening anytime soon anyway."


And remember my friend, future events such as these will affect you in the future.


#88
starspawn0

    Member

  • Members
  • 1,119 posts
Two bits of news related to the basic science of this thread:

1. New work showing how MRI methods can be used to measure brain activity with a temporal resolution of 100 milliseconds, which is unprecedented and may open the door to much deeper brain analysis:

https://www.nibib.ni...w-mri-technique

2. And, new work showing great similarities in brain semantic features when you read and when you listen to the same material:

https://news.berkele...eadingbrainmap/

This is evidence of core, modality-independent semantic processing regions of the brain that can be analyzed and imitated.

It also appears that you see similar activation patterns across individuals. So, for example, a large part of how your brain responds while reading a novel will be the same as how my brain responds while listening to an audiobook version. You can imagine what the implications of such a thing would be...
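For a sense of how these reading-vs-listening comparisons are typically done, here is a toy version in the spirit of the voxelwise encoding-model approach (random data, not the Berkeley dataset; the feature space, voxel count and noise level are all made up): fit a ridge regression from semantic features of the stimulus to the voxel responses in each condition, then correlate the two learned weight maps voxel by voxel.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Fake data standing in for time-aligned semantic features of a story and the
# voxel responses recorded while reading it vs. while listening to it.
n_timepoints, n_features, n_voxels = 600, 50, 300
features = rng.standard_normal((n_timepoints, n_features))
shared_map = rng.standard_normal((n_features, n_voxels))
noise = 0.5
reading_voxels = features @ shared_map + noise * rng.standard_normal((n_timepoints, n_voxels))
listening_voxels = features @ shared_map + noise * rng.standard_normal((n_timepoints, n_voxels))

# Fit one voxelwise encoding model per condition.
read_model = Ridge(alpha=10.0).fit(features, reading_voxels)
listen_model = Ridge(alpha=10.0).fit(features, listening_voxels)

# Correlate each voxel's semantic weight vector across the two conditions.
w_read, w_listen = read_model.coef_, listen_model.coef_   # (n_voxels, n_features)
sims = [np.corrcoef(w_read[v], w_listen[v])[0, 1] for v in range(n_voxels)]
print("median voxelwise weight correlation:", np.median(sims))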
  • Zaphod, Casey, Yuli Ban and 1 other like this

#89
starspawn0

    Member

  • Members
  • 1,119 posts
https://twitter.com/...464126803644416


#tweeprint time!
How do we generate the right muscle commands to grasp objects? We present a neural network model that replicates the vision to action pipeline for grasping objects and shows internal activity very similar to the monkey brain.


https://www.biorxiv....0.1101/742189v1

The tweet thread presents a kind of extended abstract:


Monkeys grasped and lifted many objects while we recorded neural activity in the grasping circuit (AIP, F5, & M1 - see original paper https://elifescience.../articles/15278). All of these areas have been shown to be necessary for properly pre-shaping the hand during grasping.

We show that the advanced layers of a convolutional neural network trained to identify objects (AlexNet) have features very similar to those in AIP, and may therefore be reasonable inputs to the grasping circuit, while muscle velocity was most congruent with activity in M1.

Based on these results, we constructed a modular neural network model (top of thread) to transform visual images of objects into the muscle kinematics necessary to grasp them. Activity in the model was very similar to neural activity while monkeys completed the same task.

We tested how different neural network architectures and regularizations affected these results, finding that modular networks with visual input best matched neural data and showed similar inter-area relationships as in the brain.

Importantly, networks used simple computational strategies for maintaining, reorganizing, and executing movements, relying on a single fixed point during memory and a single fixed point during movement.

This simple strategy allowed networks to generalize well to novel objects, even predicting the real neural activity for these objects, providing a powerful predictive model of how flexible grasp control may be implemented in the primate brain!


I suspect that with good brain-scanning + high-degree-of-freedom pose estimation, we will be able to replicate human responses to visual information, at least over a few seconds. So, for example, if you throw a virtual ball at a virtual model of a human, it will instinctively move its arms and fingers up to catch it. And if you roll a large object towards it, it will move out of the way just like a human would.

It's possible that this will transfer to humanoid robots.
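For anyone who wants the flavor of the modular setup described above, here is a rough sketch in PyTorch (my own toy version, not the authors' code; the sizes, the GRU, and the use of an untrained AlexNet are arbitrary choices): AlexNet-style visual features stand in for the AIP-like input, and a recurrent module reads out muscle velocities over time.

import torch
import torch.nn as nn
from torchvision import models

class VisionToGrasp(nn.Module):
    """Toy modular network: AlexNet-style visual features (standing in for
    AIP-like input) drive a recurrent module that outputs muscle velocities."""
    def __init__(self, n_muscles=30, hidden=128):
        super().__init__()
        alexnet = models.alexnet(weights=None)   # untrained here; pretrained in practice
        self.features = nn.Sequential(alexnet.features,
                                      nn.AdaptiveAvgPool2d(1),
                                      nn.Flatten())           # -> 256-dim vector
        self.rnn = nn.GRU(input_size=256, hidden_size=hidden, batch_first=True)
        self.readout = nn.Linear(hidden, n_muscles)

    def forward(self, image, n_steps=50):
        feat = self.features(image)                       # (batch, 256)
        seq = feat.unsqueeze(1).repeat(1, n_steps, 1)     # hold the object in view
        h, _ = self.rnn(seq)
        return self.readout(h)                            # (batch, n_steps, n_muscles)

model = VisionToGrasp()
images = torch.randn(4, 3, 224, 224)                      # fake object images
print(model(images).shape)                                # torch.Size([4, 50, 30])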
  • Yuli Ban likes this

#90
tomasth

    Member

  • Members
  • 212 posts

Does it have robotics implications? (such as the Amazon challenges)



#91
starspawn0

    Member

  • Members
  • 1,119 posts
Harvard study: Artificial neural networks could be used to provide insight into biological systems:

https://news.harvard...ogical-systems/

In fact, the research associate in the labs of Florian Engert, professor of molecular and cellular biology, and Alexander Schier, the Leo Erikson Life Sciences Professor of Molecular and Cellular Biology, was hoping to build a system that worked differently than zebrafish with an eye toward comparing how both process temperature information.

What he got instead was a system that almost perfectly mimicked the zebrafish — and that could be a powerful tool for understanding biology. The work is described in a July 31 paper published in Neuron.

....

Haesemeyer then compared the artificial network to whole-brain imaging data he’d previously collected that showed how every cell in the zebrafish brain reacted to temperature stimulus. He found that the artificial “neurons” showed the same cell types as those found in the biological data.

“That was the first surprise — that there is actually a very, very good match between how the network encodes temperature and how the fish encode temperature,” he said. “And as a way to confirm that point a bit more … one thing we can easily do with the artificial network is remove certain cell types. When we removed all the cells that look like those in the fish, the network cannot navigate the gradient anymore, so that really indicates that what makes the network do what it does is the cells that look like those found in the fish.”


This is "task-based training". Direct imitation of neural responses + behavior would lead to an even better fit.
  • Yuli Ban likes this

#92
starspawn0

    Member

  • Members
  • 1,119 posts
This is a neat webpage that discusses a short story by Greg Egan with an idea very similar to this thread. (The main difference from Egan is that in this thread we have explored mimicking the brain using neural population + behavioral data, not single-neuron data, and that near-term BCIs and/or fMRI recordings would provide the necessary data. The argument proposed here is that neural population + behavioral data are enough to go quite far, crude though they might seem -- and, really, I'm mostly just reporting on the literature. Our intuitions about the limitations of what you can do with crude data are wrong.):
 
https://danielrapp.github.io/cnn-gol/
 

Learning Game of Life with a Convolutional Neural Network
In Greg Egan's wonderful short story "Learning to Be Me", a neural implant, called a "jewel", is inserted into the brain at birth. The jewel monitors activity in order to learn how to mimic the behavior of the brain. From the introduction

I was six years old when my parents told me that there was a small, dark jewel inside my skull, learning to be me.

Microscopic spiders had woven a fine golden web through my brain, so that the jewel's teacher could listen to the whisper of my thoughts. The jewel itself eavesdropped on my senses, and read the chemical messages carried in my bloodstream; it saw, heard, smelt, tasted and felt the world exactly as I did, while the teacher monitored its thoughts and compared them with my own. Whenever the jewel's thoughts were wrong, the teacher - faster than thought - rebuilt the jewel slightly, altering it this way and that, seeking out the changes that would make its thoughts correct.

Why? So that when I could no longer be me, the jewel could do it for me.


In this article I'd like to discuss a way of building this kind of jewel, as a convolutional neural network, which, after having seen a bunch of iterations of game of life, can learn its underlying behaviour.
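The linked page builds a small convolutional network for this; here is a self-contained sketch along the same lines (my own version, not Rapp's code) that generates Game of Life transitions on random boards and fits a small CNN to predict the next board:

import numpy as np
import torch
import torch.nn as nn

def life_step(grid):
    """One Game of Life update on a 2-D 0/1 array, with toroidal wrap-around."""
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(np.float32)

# Dataset of (board, next board) pairs from random starting boards.
boards = (np.random.rand(2048, 32, 32) < 0.4).astype(np.float32)
targets = np.stack([life_step(b) for b in boards])
x = torch.from_numpy(boards).unsqueeze(1)      # (N, 1, 32, 32)
y = torch.from_numpy(targets).unsqueeze(1)

# Tiny CNN: a 3x3 layer can count neighbours, 1x1 layers can apply the rule.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1, padding_mode="circular"), nn.ReLU(),
    nn.Conv2d(8, 8, kernel_size=1), nn.ReLU(),
    nn.Conv2d(8, 1, kernel_size=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for epoch in range(200):
    loss = nn.functional.binary_cross_entropy_with_logits(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

accuracy = ((torch.sigmoid(model(x)) > 0.5).float() == y).float().mean()
print(f"training accuracy: {accuracy:.3f}")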


  • Yuli Ban and Alislaws like this

#93
starspawn0

    Member

  • Members
  • 1,119 posts

Brain Inspired podcast with Talia Konkle (Harvard):

 

https://www.stitcher...ired/e/63290592

 

In this podcast, Konkle says that skeptics who claim that what goes on in deep nets is not really like what goes on in the brain are just wrong. The level of correlation one can find between the two is shocking. The correspondence is so good, in fact, that there is a wealth of insights you can uncover about the brain by studying neural network models.


  • Casey likes this




