True Artificial Intelligence Could Be Closer Than We Think, Via Brain-Computer Interfaces + Deep Learning


#81
starspawn0
    Member
  • 1,289 posts
Neuromod had an update:

https://twitter.com/...534555477872640

An important milestone of the project was reached today! First fMRI run with videogame play using a fully MRI-compatible game controller, designed and built by @cyrand1.


Perhaps they will soon have enough data to build a brain-like game-playing system.
  • Yuli Ban likes this

#82
starspawn0
    Member
  • 1,289 posts
The people behind the Neuromod project have written an explainer / mini-announcement piece:

https://psyarxiv.com/3epws

They lay out their vision in a fair bit of detail. The highlights:

* They aim to record single individuals for hundreds of hours each while those individuals engage in various cognitively demanding tasks, like playing videogames.

* With all that data, recorded alongside behavioral output and sensory input, they plan to train neural net models that imitate brain activity + behavior. They claim that attempting to imitate the brain data in isolation might not succeed, but that combining it with behavior might (I think they are right).

* They argue that you might not need very high-resolution brain scan data to pull this off, and discuss so-called "neural mass models" as one example architecture. They seem to think they will need to run a lot of experiments, trying various architectures, to find one that works well. As I have stated before, I strongly suspect that a large number of model types will work well -- there isn't much "secret sauce" in the model if you have really good data.

* They also ask whether they will need to use "priors" like neural connectivity to reduce the number of parameters. I have talked about this very thing in this posting of mine (a code sketch follows the excerpt below):

https://www.reddit.c...ual_assistants/
 

An important set of restrictions to reduce the number of parameters comes from brain connectivity: each voxel of neurons can only exert an influence on ones it is directly connected to, given a short enough time-step. Using “functional connectivity” restrictions one can maybe reduce things further. This will make the predictor model “sparse” and easier to learn (fewer training examples needed). Furthermore, as has been pointed out before (e.g. in previous discussions about a talk by David Sussillo), different regions of the brain act like independent little “modules” (that only weakly influence others over short time-steps); this should have the effect of requiring fewer training examples to train a model, for reasons I’ve discussed before.
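To make the joint brain + behavior training and the connectivity prior concrete, here is a minimal sketch (PyTorch; the shapes, the random stand-in "connectome", and all names are my own illustrative placeholders, nothing from the paper):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n_voxels, n_actions = 1000, 16
# Stand-in for a real connectivity prior: mask[i, j] = 1 iff region j can
# influence region i within one short time-step.
mask = (torch.rand(n_voxels, n_voxels) < 0.05).float()

class MaskedPredictor(nn.Module):
    """One-step brain-state predictor, sparsified by the connectivity mask,
    with a small behavior head so brain imitation and behavior imitation
    are trained jointly."""
    def __init__(self):
        super().__init__()
        self.W = nn.Parameter(torch.randn(n_voxels, n_voxels) * 0.01)
        self.b = nn.Parameter(torch.zeros(n_voxels))
        self.behavior_head = nn.Linear(n_voxels, n_actions)

    def forward(self, x_t):
        W_eff = self.W * mask                  # zero out non-connected weights
        x_next = torch.tanh(x_t @ W_eff.T + self.b)
        return x_next, self.behavior_head(x_next)

model = MaskedPredictor()
x_t = torch.randn(8, n_voxels)                 # current brain states (batch)
x_true = torch.randn(8, n_voxels)              # recorded next brain states
a_true = torch.randint(0, n_actions, (8,))     # recorded behavior (e.g. button)

x_pred, a_logits = model(x_t)
# Joint loss: imitate the brain AND the behavior, per the paper's argument.
loss = F.mse_loss(x_pred, x_true) + F.cross_entropy(a_logits, a_true)
```

The masked weight matrix is what makes the predictor sparse and cheaper to learn; the behavior head is what keeps pure brain imitation from drifting.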


Won't be long now... It's all coming together...
  • Casey, Yuli Ban and Alislaws like this

#83
Yuli Ban
    Born Again Singularitarian
  • Moderators
  • 20,708 posts
  • Location: New Orleans, LA

So what do you see happening in the next 3 to 5 years? Or maybe even that's too long a timescale. Maybe 1 to 3 years is a better timeframe.


And remember my friend, future events such as these will affect you in the future.


#84
starspawn0
    Member
  • 1,289 posts
Difficult to predict with certainty. It's my understanding that they will release tranches of data each year, finishing in 5 years, and that they will probably experiment with the data each year. So perhaps 1 to 2 years from now we will see the first AI models they have built with this data.

It will probably work something like this: you send either a raw (down-sampled) bitmap or an encoding of frames of video + audio from a video game to the network, and it imitates a human player, complete with motor signals for how to move the joystick. The system then literally plays the game. The same brain-based system will be able to play a wide variety of games out-of-the-box -- say, pretty much any game, the way a human unfamiliar with it would. So it won't be an expert player right away, but it will still be impressive.
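To make that concrete, a minimal sketch of such a model (PyTorch; the architecture and every size here are my own placeholders, and audio input is omitted for brevity):

```python
import torch
import torch.nn as nn

class BrainlikePlayer(nn.Module):
    """Down-sampled game frames in, human-like controller signals out."""
    def __init__(self, n_controls=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.motor = nn.Linear(64 * 9 * 9, n_controls)  # joystick / buttons

    def forward(self, frames):
        return self.motor(self.encoder(frames))

player = BrainlikePlayer()
frame = torch.randn(1, 3, 84, 84)   # one down-sampled frame
motor_out = player(frame)           # imitated motor signals
```

In the real thing the motor head would be trained against the recorded controller inputs (and the hidden layers against the fMRI data), rather than from scratch.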

Now, such a system can be used in lots of ways. One way is to serve as a "critic" in some larger system that learns to play at a superhuman level. Crucially, it will learn to do this even for very complex games that have previously required complex architectures and add-ons, and it will do so while playing far, far fewer games than those systems required in the past.

Oh, and I don't think complex architectures will be required. Basic ones will do pretty well, given enough data.

But while this is going on... Deepmind, OpenAI, Microsoft, Facebook, and others, may continue to improve their own game-playing systems that don't use brain data. It's uncertain which will win the race towards a general-purpose, efficient game-playing system.

....

In parallel to this work, Neuromod and also Alex Huth's group are pursuing brain imitation of language understanding (have a person listen to a recording, record their brain activity, and then try to model it). It will probably take at least 2 years to acquire enough data to do something really, really impressive; but, again, there will probably be releases after the first year (actually, I'm not sure whether Neuromod will be doing this this year, or will focus 100% on video games in the first year). Once the data are acquired and the first models are trained, I expect they will work very well, and will generalize easily to whole other categories of language input.

Probably in about 2 years Huth will write a paper with his student on this. They'll try various networks, and report a few very surprising behaviors. For example, when a story turns sad, you will notice certain parts of the artificial brain light up, and this will persist until the mood of the story lifts -- showing that the system really understood what was going on. Or maybe understanding a story requires applying logic or counting, and that will be reflected by the brain-imitation system.
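Huth's group's published work already uses voxelwise encoding models of this general flavor; here is a minimal sketch (scikit-learn ridge regression; the random arrays and all shapes are placeholders for real stimulus features and fMRI data):

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical data: word-embedding features of a narrated story (one row
# per fMRI volume, delayed for hemodynamic lag) and recorded voxel responses.
n_trs, n_features, n_voxels = 3000, 300, 2000
X = np.random.randn(n_trs, n_features)      # stimulus features while listening
Y = np.random.randn(n_trs, n_voxels)        # recorded brain responses

model = Ridge(alpha=100.0)
model.fit(X[:2500], Y[:2500])               # fit on most of the session
Y_pred = model.predict(X[2500:])            # predict held-out brain activity

# Per-voxel accuracy: correlate predicted and actual responses.
r = [np.corrcoef(Y_pred[:, v], Y[2500:, v])[0, 1] for v in range(5)]
```

With real data, voxels in semantic regions show reliably positive held-out correlations; that per-voxel score is the usual measure of how well the model "imitates" the brain.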

The first systems may not work quite at full human levels, but will show shocking, scary hints of human-like understanding. Shivers down the spine... "It's alive!"

Now, Huth might get the idea to integrate this system into a chatbot framework like I described here:

https://www.reddit.c...ual_assistants/

He may not have the right data to do it; on the other hand, he just might.

He could use GPT-2 as the language model / generator. The brain-imitation network would be the critic. The two of them together would work far and away better than either one alone. The combined system might produce shockingly human-like conversation -- so good as to have national security implications.
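A rough sketch of that generator + critic pairing (Hugging Face transformers for GPT-2; `brain_critic` is a purely hypothetical stand-in for the brain-imitation network, and reranking is just one simple way to combine them):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
generator = GPT2LMHeadModel.from_pretrained("gpt2")

def brain_critic(text):
    """Placeholder: in the scheme above this would score how
    human-brain-like the response to `text` is."""
    return torch.rand(1).item()  # stand-in score

prompt = "How was your day?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = generator.generate(
    **inputs, do_sample=True, num_return_sequences=5,
    max_new_tokens=30, pad_token_id=tokenizer.eos_token_id,
)
candidates = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
best = max(candidates, key=brain_critic)   # critic reranks the generator
```

The generator proposes, the brain-critic disposes: even a weak critic can steer a strong language model toward the most human-like candidate.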
  • Zaphod, Casey and Yuli Ban like this

#85
Yuli Ban
    Born Again Singularitarian
  • Moderators
  • 20,708 posts
  • Location: New Orleans, LA

Elon Musk's Neuralink project is a fantasy, and DARPA's invasive BCIs are decades away.

How quickly things change!


And remember my friend, future events such as these will affect you in the future.


#86
starspawn0
    Member
  • 1,289 posts

No.  Musk was talking about FIVR and union with superintelligence when I wrote that.  That's the part that's decades away, and a fantasy near-term.  By 2050+ it may not be fantasy, though.

 

Also, we've had invasive BCIs for decades already.  The devil is in the details -- bandwidth, biocompatibility, durability, degree of invasiveness, depth, temporal resolution, dynamic range, noise level, etc.


  • Yuli Ban likes this

#87
Yuli Ban
    Born Again Singularitarian
  • Moderators
  • 20,708 posts
  • Location: New Orleans, LA

No.  Musk was talking about FIVR and union with superintelligence when I wrote that.  That's the part that's decades away, and a fantasy near-term.  By 2050+ it may not be fantasy, though.

 

Also, we've had invasive BCIs for decades already.  The devil is in the details -- bandwidth, biocompatibility, durability, degree of invasiveness, depth, temporal resolution, dynamic range, noise level, etc.

I didn't get that, at least from the early comments. I thought you meant that Neuralink in general was a fantasy (considering Musk's reputation for overpromising). I barely even remember the early Neuralink hype from '17, because I distinctly recall thinking "this isn't happening anytime soon anyway."


And remember my friend, future events such as these will affect you in the future.


#88
starspawn0
    Member
  • 1,289 posts
Two bits of news related to the basic science of this thread:

1. New work showing how MRI methods can be used to measure brain activity with a temporal resolution of 100 milliseconds, which is unprecedented, and may open the gates to much deeper brain analysis:

https://www.nibib.ni...w-mri-technique

2. And, new work showing great similarities in brain semantic features when you read and when you listen to the same material:

https://news.berkele...eadingbrainmap/

This is evidence of core, modality-independent semantic processing regions of the brain that can be analyzed and imitated.

It also appears that you see similar activation patterns across individuals. So, for example, a large part of how your brain responds while reading a novel will be the same as how my brain responds while listening to an audiobook version. You can imagine what the implications of such a thing would be...
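A sketch of the kind of analysis behind that claim: correlate each voxel's response to a story that is read with its response to the same story heard (NumPy; the random arrays stand in for real recordings, which would show far more matched voxels than chance):

```python
import numpy as np

n_trs, n_voxels = 1000, 20000
resp_reading = np.random.randn(n_trs, n_voxels)    # responses while reading
resp_listening = np.random.randn(n_trs, n_voxels)  # same story, heard

def voxelwise_corr(a, b):
    """Pearson r per voxel between two response time-courses."""
    a = (a - a.mean(0)) / a.std(0)
    b = (b - b.mean(0)) / b.std(0)
    return (a * b).mean(0)

r = voxelwise_corr(resp_reading, resp_listening)
shared = (r > 0.3).sum()    # voxels with matched, modality-independent tuning
```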
  • Zaphod, Casey, Yuli Ban and 1 other like this

#89
starspawn0
    Member
  • 1,289 posts
https://twitter.com/...464126803644416


#tweeprint time!
How do we generate the right muscle commands to grasp objects? We present a neural network model that replicates the vision to action pipeline for grasping objects and shows internal activity very similar to the monkey brain.


https://www.biorxiv....0.1101/742189v1

The tweet thread presents a kind of extended abstract:


Monkeys grasped and lifted many objects while we recorded neural activity in the grasping circuit (AIP, F5, & M1 - see original paper https://elifescience.../articles/15278). All of these areas have been shown to be necessary for properly pre-shaping the hand during grasping.

We show that the advanced layers of a convolutional neural network trained to identify objects (AlexNet) have features very similar to those in AIP, and may therefore be reasonable inputs to the grasping circuit, while muscle velocity was most congruent with activity in M1.

Based on these results, we constructed a modular neural network model (top of thread) to transform visual images of objects into the muscle kinematics necessary to grasp them. Activity in the model was very similar to neural activity while monkeys completed the same task.

We tested how different neural network architectures and regularizations affected these results, finding that modular networks with visual input best matched neural data and showed similar inter-area relationships as in the brain.

Importantly, networks used simple computational strategies for maintaining, reorganizing, and executing movements, relying on a single fixed point during memory and a single fixed point during movement.

This simple strategy allowed networks to generalize well to novel objects, even predicting the real neural activity for these objects, providing a powerful predictive model of how flexible grasp control may be implemented in the primate brain!


I suspect that with good brain-scanning + pose estimation with many degrees of freedom, we will be able to replicate human responses to visual information, at least over a few seconds. So, for example, if you throw a virtual ball at a virtual model of a human, it will instinctively move its arms and fingers up to catch it. And if you roll a large object towards it, it will move out of the way just like a human would.

It's possible that this will transfer to humanoid robots.
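Pulling those "advanced layer" AlexNet features is easy to sketch with torchvision (the grasping controller itself is left as a comment; the paper's actual model is modular and recurrent, which this doesn't reproduce):

```python
import torch
import torchvision.models as models

# The paper uses late AlexNet layers as a stand-in for area AIP; this just
# extracts those features for an image of a graspable object.
alexnet = models.alexnet(weights="DEFAULT")   # torchvision >= 0.13
alexnet.eval()

image = torch.randn(1, 3, 224, 224)           # placeholder object image
with torch.no_grad():
    feats = alexnet.features(image)           # conv stack ("advanced layers")
    feats = torch.flatten(alexnet.avgpool(feats), 1)

# `feats` would then feed a recurrent controller whose output layer is
# trained to match recorded muscle velocities (the M1-like signal).
```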
  • Yuli Ban likes this

#90
tomasth
    Member
  • 239 posts

Does it have robotics implications? (such as the Amazon challenges)



#91
starspawn0
    Member
  • 1,289 posts
Harvard study: Artificial neural networks could be used to provide insight into biological systems:

https://news.harvard...ogical-systems/

In fact, Martin Haesemeyer, a research associate in the labs of Florian Engert, professor of molecular and cellular biology, and Alexander Schier, the Leo Erikson Life Sciences Professor of Molecular and Cellular Biology, was hoping to build a system that worked differently than zebrafish, with an eye toward comparing how both process temperature information.

What he got instead was a system that almost perfectly mimicked the zebrafish — and that could be a powerful tool for understanding biology. The work is described in a July 31 paper published in Neuron.

....

Haesemeyer then compared the artificial network to whole-brain imaging data he’d previously collected that showed how every cell in the zebrafish brain reacted to temperature stimulus. He found that the artificial “neurons” showed the same cell types as those found in the biological data.

“That was the first surprise — that there is actually a very, very good match between how the network encodes temperature and how the fish encode temperature,” he said. “And as a way to confirm that point a bit more … one thing we can easily do with the artificial network is remove certain cell types. When we removed all the cells that look like those in the fish, the network cannot navigate the gradient anymore, so that really indicates that what makes the network do what it does is the cells that look like those found in the fish.”


This is "task-based training". Direct imitation of neural responses + behavior would lead to an even better fit.
  • Yuli Ban likes this

#92
starspawn0
    Member
  • 1,289 posts
This is a neat webpage that discusses a short story by Greg Egan with an idea very similar to this thread. (The main difference is that in this thread we have explored mimicking the brain using neural population + behavioral data, not single-neuron data, and near-term BCIs and/or fMRI recordings would provide the necessary data. The argument proposed here is that neural population + behavioral data is enough to go quite far, crude though it might seem -- and, really, I'm mostly just reporting on the literature. Our intuitions about the limitations of what you can do with crude data are wrong.):
 
https://danielrapp.github.io/cnn-gol/
 

Learning Game of Life with a Convolutional Neural Network
In Greg Egan's wonderful short story "Learning to Be Me", a neural implant, called a "jewel", is inserted into the brain at birth. The jewel monitors activity in order to learn how to mimic the behavior of the brain. From the introduction:

I was six years old when my parents told me that there was a small, dark jewel inside my skull, learning to be me.

Microscopic spiders had woven a fine golden web through my brain, so that the jewel's teacher could listen to the whisper of my thoughts. The jewel itself eavesdropped on my senses, and read the chemical messages carried in my bloodstream; it saw, heard, smelt, tasted and felt the world exactly as I did, while the teacher monitored its thoughts and compared them with my own. Whenever the jewel's thoughts were wrong, the teacher - faster than thought - rebuilt the jewel slightly, altering it this way and that, seeking out the changes that would make its thoughts correct.

Why? So that when I could no longer be me, the jewel could do it for me.


In this article I'd like to discuss a way of building this kind of jewel, as a convolutional neural network, which, after having seen a bunch of iterations of game of life, can learn its underlying behaviour.
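Here is a self-contained version of that experiment, as I understand it (PyTorch rather than the article's own code; the network size and training budget are illustrative and may need tuning to learn the rule perfectly):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def life_step(board):
    """Ground-truth Game of Life update via a neighbor-counting convolution."""
    kernel = torch.ones(1, 1, 3, 3)
    kernel[0, 0, 1, 1] = 0                     # don't count the cell itself
    n = F.conv2d(board, kernel, padding=1)
    return ((n == 3) | ((board == 1) & (n == 2))).float()

# The "jewel": a small CNN that only ever sees (board, next board) pairs.
net = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=1), nn.Sigmoid(),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for step in range(2000):
    boards = (torch.rand(32, 1, 16, 16) < 0.4).float()   # random worlds
    loss = F.binary_cross_entropy(net(boards), life_step(boards))
    opt.zero_grad(); loss.backward(); opt.step()
```

The jewel analogy is exact in miniature: the network never sees the rule, only the behavior, and learns to be the system by imitation.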


  • Yuli Ban and Alislaws like this

#93
starspawn0
    Member
  • 1,289 posts

Brain Inspired podcast with Talia Konkle (Harvard):

 

https://www.stitcher...ired/e/63290592

 

In this podcast, Konkle says that skeptics who say that what goes on in deep nets is not really like what goes on in the brain are just wrong.  It's shocking the level of correlation one can find between the two.  The correspondence is so good, in fact, that there is a wealth of insights you can uncover about the brain by studying neural network models.


  • Casey likes this

#94
starspawn0
    Member
  • 1,289 posts

Some interesting paper abstracts submitted to NeurIPS 2019, some of them accepted:

https://openreview.n...m?id=S1gRRESxUH

https://openreview.n...m?id=H1g42rHgLB

https://openreview.n...m?id=ByxMASrlUB

The first one is on using brain data to regularize image recognition neural nets. They claim that when this is done, the nets become more resistant to "adversarial examples". Maybe brain data is the key to making nets resistant to these types of attack.

The second one is about injecting brain data into large NLP models like BERT.

The third is on linking behavior and brain data, and using this to generate behavior videos given the associated brain recordings.
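My reading of the first paper's idea, sketched (the similarity term and all names here are my own guesses at the mechanism, not the paper's actual regularizer):

```python
import torch.nn.functional as F

def brain_regularized_loss(logits, labels, net_features, brain_features,
                           lam=0.1):
    """Usual classification loss, plus a term pulling the network's internal
    representation toward recorded neural responses to the same images."""
    task_loss = F.cross_entropy(logits, labels)
    sim_loss = F.mse_loss(
        F.normalize(net_features, dim=1),     # network's representation
        F.normalize(brain_features, dim=1),   # brain's representation
    )
    return task_loss + lam * sim_loss
```

Intuitively, adversarial examples exploit representations the brain doesn't share, so nudging the net toward brain-like features could plausibly close off those attacks.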


  • Yuli Ban likes this

#95
starspawn0
    Member
  • 1,289 posts
I've mentioned this before, but thought I'd go into a little more depth: just recently, Deep Learning has led to tremendous advances in NLP, media synthesis, image recognition, speech recognition, and a few other areas. But it hasn't yet made huge progress in robotics. Why is that?

Well, the recent leap in NLP, say, is due to basically 3 things:

1. New, massive datasets with which to train the models.

2. New types of neural net architectures that are easier to parallelize, and can therefore be trained up to billions of parameters in reasonable time (e.g. Transformers).

3. Cheaper hardware per computation (e.g. TPUs).

What is missing for robotics is #1 -- there are no large datasets. So people have attempted to use Deep Reinforcement Learning in virtual environments and real environments. The problem, though, is that it's a slow process, and the transfer from virtual to real isn't perfect (even using randomization).

If there were datasets as large as the ones for NLP for specific robot bodies, I have no doubt that we would be in the midst of a robotics revolution the likes of which we never imagined.

It doesn't seem like we will ever have these datasets using traditional means. However, brain data could provide at least some of it. You can think of it as a new type of "programming".

The way it would theoretically work is as follows: a person puts on a next-gen, high-resolution BCI. Then they use their mind to attempt to control a robot in an environment. At first, the robot will flop around and make mistakes; but over several training sessions, the "programmer" will learn to control it to a very high degree of accuracy -- just like how quadriplegics learn to control robot hands using their mind. After recording a person controlling the robot -- making it walk, grasp objects, etc. -- for about 100 to 200 hours, say, you will have enough "motor control" data to build a controller, or at least to supplement an existing one. Furthermore, that brain data will contain "video understanding", "planning" and other signals that could be used to guide the robot.

Now, don't just use one brain: have, say, 10 or 100 or 1000 people contributing their data. Fuse all that data together to build a giant neural net controller (a toy sketch follows below); and, presto, you have a robot "brain" that lets it interact with the world at a reasonably high level.
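A toy sketch of that fusion step (PyTorch; the tensors are random placeholders for real pooled BCI recordings, and the tiny MLP stands in for the "giant neural net controller"):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

# Placeholders for pooled (brain features -> executed robot command) pairs
# collected from many operators; real data would be far larger.
brain = torch.randn(10_000, 512)      # per-moment brain state features
commands = torch.randn(10_000, 12)    # joint velocities the robot executed

controller = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 12))
opt = torch.optim.Adam(controller.parameters(), lr=1e-3)

for brain_b, cmd_b in DataLoader(TensorDataset(brain, commands),
                                 batch_size=256, shuffle=True):
    loss = F.mse_loss(controller(brain_b), cmd_b)   # imitate the operators
    opt.zero_grad(); loss.backward(); opt.step()
```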

Paying 100 people $20/hour to control a robot like that with their minds for 100 hours each comes to $200,000 in total. That's a very, very small amount of money for the potential payoff.

It's conceivable that with all that data you could even build a neural net model that can listen to and carry out verbal instructions. It would be good enough to complete general tasks; but if you wanted to put it to work on an assembly line, say, you'd have to "fine-tune" it, just as you train humans to complete those tasks.

When good BCIs arrive, if we haven't already found a good way to train robots to solve complex tasks in unstructured, real-world environments, then I think we will see people resort to using brain data to build the requisite datasets. And nothing you've seen to date with advances in AI will compare to the explosion of economic activity it will unleash. It will basically lead -- eventually -- to the automation of virtually all physical labor.

When?

I can't really answer that. I can say with reasonable certainty that the advances will be hair-raising, though, just a few short years after these BCIs are in the hands of a large number of consumers.
  • Casey, Yuli Ban and johnnd like this

#96
starspawn0
    Member
  • 1,289 posts

Coincidentally, the Columbia University robotics group released a paper just a few days ago on training robots using BCIs:

 

http://crlab.cs.colu...rain_guided_rl/

 

(Includes video describing their work.)

 

They show that even very noisy, low-quality brain signals can be used to accelerate robot learning.  

 

Alas, they use EEGs.  Next-gen BCIs will offer at least 100x richer data, and at a higher signal-to-noise ratio.  That should accelerate robot learning tremendously.

 

But EEGs can be powerful if you wire enough people together.  For example, suppose you could make an addictive game where one of the tasks involves watching a robot (not sure how to work that into a game), and 10,000 people play it at once.  If they all wear EEGs and you set things up right, you could extract 100k bits of information each second.  In 24 hours you'd have around 1 gigabyte of information.  If all of that could be devoted to improving the planning, dexterity and fluidity of a robot, you'd have something that looks like it came straight out of a scifi movie.  

 

And if you did the same with a next-gen BCI instead, you'd have between 100 gigabytes and 1 terabyte of data to play with.  That's up near the size of those giant NLP training sets -- and you'd get it in just 24 hours!
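For anyone who wants to check the arithmetic (the 10 bits per person per second is implied by the 100k bits/second figure above):

```python
people = 10_000
bits_per_person_per_sec = 10            # 100k bits/s total, as above
seconds_per_day = 24 * 60 * 60

eeg_bits_per_day = people * bits_per_person_per_sec * seconds_per_day
print(eeg_bits_per_day / 8 / 1e9)       # ~1.08 GB/day: "around 1 gigabyte"

nextgen = 100                           # "at least 100x richer data"
print(eeg_bits_per_day * nextgen / 8 / 1e9)   # ~108 GB/day, i.e. 100 GB to 1 TB
```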


  • Casey and Yuli Ban like this

#97
starspawn0
    Member
  • 1,289 posts
Tweet thread about a new Nature paper that argues it would be fruitful for neuroscience to adopt a deep learning perspective:

https://mobile.twitt...868863850500096

Here is a one-sentence summary of a key argument:

https://mobile.twitt...865334771699713
 

Main argument (?) is that we should be viewing neural networks from an optimization perspective, and not worry so much about what individual neurons are doing


The old, classical, microscale, study-and-simulate-at-super-high-resolution approach is another methodology. The new approach to studying the brain with a deep learning perspective has been building for years, and is now reaching a new phase of acceptance and crystallization of method.

This article doesn't really speak to BCI advances, but does mention the wealth of data from new scanning methods that can read many thousands of neurons. The focus is mostly on task-based training and analysis, where a neural net is trained on a task and then compared to the brain; it doesn't discuss imitating the brain from neural recordings directly -- but that is something in the purview of their perspective.

https://mobile.twitt...158149476896769
 

free link:



#98
funkervogt
    Member
  • 793 posts

Tweet thread about a new Nature paper that argues it would be fruitful for neuroscience to adopt a deep learning perspective:

https://mobile.twitt...868863850500096

Here is a one-sentence summary of a key argument:

https://mobile.twitt...865334771699713
 

Main argument (?) is that we should be viewing neural networks from an optimization perspective, and not worry so much about what individual neurons are doing


The old, classical, microscale, study-and-simulate-at-super-high-resolution approach is another methodology. The new approach to studying the brain with a deep learning perspective has been building for years, and is now reaching a new phase of acceptance and crystallization of method.

This article doesn't really speak to BCI advances, but does mention the wealth of data from new scanning methods that can read many thousands of neurons. The focus is mostly on task-based training and analysis, where a neural net is trained on a task and then compared to the brain; it doesn't discuss imitating the brain from neural recordings directly -- but that is something in the purview of their perspective.

https://mobile.twitt...158149476896769
 

free link:

 

This is what Kurzweil said many years ago.



#99
starspawn0
    Member
  • 1,289 posts

No, he did not.  His focus was on finding the micro-column circuit repeated throughout the cortex, and he suggested using a "hierarchical hidden Markov model" (HHMM) to implement the details.  His harping on "hierarchical" was an old trope, seen e.g. in Hawkins's book On Intelligence, and implemented much earlier in models like HMAX:

 

https://maxlab.neuro...n.edu/hmax.html

 

Kurzweil did talk about merging with AIs, but not about "middle-way" methods as described by Gwern here:

 

https://www.reddit.c...ation_learning/

 

It's hard to say who the first person was to have this idea.  Tom Mitchell, Konrad Kording, and Jack Gallant are some people who had similar ideas many years ago.  The science fiction writer Greg Egan had similar ideas, too, but about microscale imitation, not mesoscale like the other names I listed; microscale imitation is probably unnecessary and far in the future, while mesoscale is much closer due to sudden advances in scanning technology.

 

Even so, just tossing out a vague idea is not going to cause people in the sciences to give you credit for it.  You must be a lot more specific than that, and argue it forcefully in a scientific paper.  In the humanities, however, it's common to say, for example, "the Greeks invented everything", and give credit whenever an idea is even anywhere within 1 million light-years of the target.


  • Yuli Ban likes this

#100
tomasth
    Member
  • 239 posts
I recall Kurzweil mentioning how the info to make a brain has to be encoded in the few megabytes of the DNA, so that is an encoding of the architecture, with the rest of the details handled by circumstances.

You are correct about specificity, but the opposite -- taking an idea, fleshing out more details, and putting it in print -- can get someone credited.




