
The future of simulated animals



#1
starspawn0

    Member

  • Members
  • 1,290 posts
An interesting Tweet:

https://mobile.twitt...585706485297153
 

Louis Scheffer of @HHMIJanelia promises us a whole simulated Drosophila within 5-10 years. If they fail I get beer, if they succeed I buy beer. I am pretty confident. Why do people believe the path from connectomics to simulation is short?


Fruit flies are pretty complicated! They have about 135,000 neurons, lots of connections, and a complex body. This group plans to simulate the entire body -- including the ability to fly, have sex, and so forth -- in a complex virtual environment, within the next 5 to 10 years! Given how much computing power people are willing to devote to large projects like this, I don't think the compute will be the limiting factor.
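For a rough sense of why compute shouldn't be the bottleneck, here is a back-of-envelope estimate in Python. The synapse count, timestep, and cost-per-update figures are my own assumptions for illustration, not numbers from the Janelia group:

# Rough estimate of the compute needed to run a fruit-fly-scale neural
# simulation in real time. All constants are assumptions, not measured values.
NEURONS = 135_000                # approximate Drosophila neuron count
SYNAPSES_PER_NEURON = 1_000      # assumed average connectivity
UPDATES_PER_SECOND = 1_000       # assumed 1 ms simulation timestep
FLOPS_PER_SYNAPSE_UPDATE = 10    # assumed cost of one synaptic update

flops_per_second = (NEURONS * SYNAPSES_PER_NEURON
                    * UPDATES_PER_SECOND * FLOPS_PER_SYNAPSE_UPDATE)
print(f"~{flops_per_second:.2e} FLOP/s")   # ~1.35e+12, i.e. about 1.4 TFLOP/s

Even with a 10-100x overhead for more detailed neuron models, body physics, and the virtual environment, that lands within reach of a handful of modern GPUs.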

If they succeed in doing this, then super-accurate mouse models probably wouldn't be more than 5 or 10 years further off. I say this not only because fruit flies are not too far from mice in complexity (they are pretty far, but not that far), but also because the success of such a project would spur on orders of magnitude more time, money and energy (people) devoted to animal simulations.

Other groups have attempted to simulate a C. elegans brain and body:

https://www.biorxiv....17/11/26/209155

https://mobile.twitt...594152894840834
 

We can’t simulate all behaviors but can reproduce various locomotion neural dynamics in response to stimuli like fwd, backward, turns and chemo sensation. With current comp tools in few years we’ll be able to simulate #celegans fully. So 10 years for the fly could be doable.


I recall that groups have attempted to simulate a bee brain, which has about 1 million neurons -- but they have not attempted to simulate the whole body and environment.

Google has some researchers working on meso-scale simulations that leverage Deep Learning:

https://arxiv.org/abs/1710.05183

And various other Deep Learning-based projects aim to simulate at least parts of complex animal brains. Here are two related talks that appeared at the COSYNE 2018 conference:
 

A novel deep recurrent network for predicting large scale population responses to natural video

To understand the representations in visual cortex, we need to be able to faithfully predict neural activity in response to its natural input: a continuous video stream. Since cortical activity is highly variable and context dependent, this prediction is already difficult for integrated neural activity to static natural images, and even more difficult for dynamic responses to movies. In awake animals under free-viewing conditions, eye movements and brain states add to this response variability, making the prediction problem even harder. While deep convolutional networks have recently been shown to improve prediction performance over linear-nonlinear type models and are currently considered state-of-the-art, they make suboptimal use of the data, because they cannot account for stimulus-independent variability.

Here, we developed a new deep recurrent network architecture that predicts the deconvolved Ca++ activity of thousands of simultaneously recorded neurons in mouse V1 to natural videos, recorded at 7Hz and 30Hz, respectively, while simultaneously estimating dynamic gaze position and brain state changes related to running state and pupil dilation. In addition to the natural movie input, the network uses pupil position and dilation extracted from a video of the animal’s eye, as well as treadmill velocity. The unknown relation between pupil position and gaze position on the monitor is learned by the network during training based solely on predicting neural activity. We find that incorporating all these elements (nonlinear recurrent network, running speed, pupil position, and pupil dilation) significantly increases the prediction performance of the network. Our network achieves between 40% and 60% of a leave-one-out estimate of single-trial correlation with the mean response over repeated presentations. To the best of our knowledge, this makes our model the state-of-the-art on single trial prediction of dynamic responses to natural movies on large neuronal populations.
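The abstract doesn't spell out the architecture, but the general idea -- a convolutional encoder on the video frames, behavioral covariates (pupil position, pupil dilation, running speed) concatenated in, and a recurrent core with a per-neuron readout -- can be sketched roughly as follows. This is a toy illustration with made-up layer sizes, not the authors' actual model:

import torch
import torch.nn as nn

class NeuralResponsePredictor(nn.Module):
    # Toy sketch: predict per-neuron activity from video plus behavior.
    # NOT the authors' architecture; it only illustrates combining a
    # convolutional stimulus encoder, behavioral covariates, and a recurrent
    # core with one output per recorded neuron.
    def __init__(self, n_neurons, n_behavior=4, hidden=256):
        super().__init__()
        # Convolutional encoder applied to each video frame independently.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),   # -> 32 * 4 * 4 = 512 features
        )
        # Recurrent core over time, fed frame features plus behavior signals
        # (pupil x/y, pupil dilation, treadmill speed).
        self.rnn = nn.GRU(512 + n_behavior, hidden, batch_first=True)
        # Per-neuron readout; softplus keeps predicted activity non-negative.
        self.readout = nn.Sequential(nn.Linear(hidden, n_neurons), nn.Softplus())

    def forward(self, video, behavior):
        # video: (batch, time, 1, H, W); behavior: (batch, time, n_behavior)
        b, t = video.shape[:2]
        feats = self.encoder(video.reshape(b * t, *video.shape[2:]))
        feats = feats.reshape(b, t, -1)
        hidden, _ = self.rnn(torch.cat([feats, behavior], dim=-1))
        return self.readout(hidden)      # (batch, time, n_neurons)

model = NeuralResponsePredictor(n_neurons=1000)
video = torch.randn(2, 20, 1, 64, 64)    # 20 frames of 64x64 grayscale video
behavior = torch.randn(2, 20, 4)         # pupil x, pupil y, dilation, run speed
predicted = model(video, behavior)       # shape: (2, 20, 1000)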


and
 

A modular neural network model of the primate grasping circuit

Grasping objects is an essential part of primate behavior. In macaque monkeys, the core of the grasping circuit is formed by the interconnected anterior intraparietal area (AIP), the hand area (F5) of the ventral premotor cortex, and the hand area of the motor cortex (M1). Generating appropriate delayed grasping movements involves many inter-related steps, from identification of visual target identity and spatial location, to the determination and maintenance of the appropriate movement plan, and finally the control of muscles. We hypothesized that the grasping circuit could be effectively modeled by training a modular recurrent neural network on visual object features to output muscle dynamics. To train and test our model, we recorded from neural populations simultaneously from AIP, F5, and M1 using floating microelectrode arrays while two macaque monkeys performed a delayed grasping task in which ~50 objects of distinct shape, size, and orientation had to be grasped and lifted. During every trial, arm and hand kinematics were recorded and transformed into a 50-dimension muscle length space using a musculoskeletal model. The network model was successfully trained to produce single-trial muscle velocities during grasping (normalized error: <5%). Interestingly, the internal dynamics of the model matched the recorded neural data (canonical correlation, mean r=0.7 over 12 dimensions). Furthermore, biological regularizations were implemented to encourage simplistic solutions, which resulted in a strong alignment between the contributions of modules of the model and the recorded brain areas to the canonical variables (r=0.80) that was not present in untrained networks (r=-0.06). Our model therefore provides a simplistic and accurate representation of the primate grasping circuit and suggests that the combined processing of these areas can be well understood as a network optimized to transform object information into the muscle dynamics required to grasp each object.
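The comparison between the model's internal dynamics and the recorded neural data in that abstract uses canonical correlation analysis. Just to illustrate that analysis step (with random arrays standing in for the real recordings and RNN hidden states), the computation might look like this:

import numpy as np
from sklearn.cross_decomposition import CCA

# Stand-ins for real data: rows are time points (trials x time flattened),
# columns are recorded units / model hidden units.
rng = np.random.default_rng(0)
neural_data = rng.standard_normal((2000, 100))   # e.g. recorded AIP/F5/M1 units
model_hidden = rng.standard_normal((2000, 128))  # RNN hidden-state activations

# Project both datasets onto 12 shared canonical dimensions, as in the poster.
cca = CCA(n_components=12, max_iter=1000)
neural_proj, model_proj = cca.fit_transform(neural_data, model_hidden)

# Mean correlation across the 12 canonical dimensions (the abstract reports a
# mean r of about 0.7 for the trained network on real data; random stand-in
# data like this will of course give a much lower value).
corrs = [np.corrcoef(neural_proj[:, i], model_proj[:, i])[0, 1] for i in range(12)]
print(f"mean canonical correlation: {np.mean(corrs):.2f}")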


That poster can be downloaded here:

http://www.jmichaels...COSYNE_2018.pdf

An even weaker sort of simulation leverages Machine Learning, but only uses behavioral data to train it -- in other words, no brain scans; just videos of how the animal behaves. The example most people on this forum have probably heard of is this work from AllenAI on simulating dog behavior:

https://www.technolo...nks-like-a-dog/

About a year earlier there was this work at ICLR on hierarchical behavioral modelling:

https://arxiv.org/abs/1611.00094

And there was this earlier work on generating videos of mice running around in a habitat (in a lab).

In the not-too-distant future, I think the approach that will win out will be meso-scale "simulations" (not really simulations), based on Deep Learning applied to brain population responses + behavioral data (e.g. videos of how the animal behaves), along with neuromuscular models of the organisms interacting with a simulated environment. This seems to be a good compromise between the super-detailed simulations at the individual neuron or synapse level, and the very crude, purely behavior-based models (no brain data).
  • Yuli Ban, Outlook and Alislaws like this

#2
Ewolf20

    Member

  • Members
  • 209 posts
  • Location: Columbia, SC

I'm not exactly trying to be rude... but could you simplify that in a more approachable manner? Like, I get what it's saying, just not how it's saying it.



#3
Alislaws

    Democratic Socialist Materialist

  • Members
  • 2,029 posts
  • Location: London

Starspawn0 is just running through current and past research efforts to produce accurate simulations of animals and discussing where he thinks it's going in the future.

 

There's also the implication that this may tie in with the improved BCI tech that seems to be just around the corner, since that could make figuring out how animal brains work much easier (animals don't like being shoved into MRI scanners, but would probably get used to wearing a little hat).

 

I don't understand a lot of the detail either, but I think the extra detail is more useful for some other members of the forum.

 

I have in the past sat down and done a whole load of googling to understand some of the detail of other posts from Starspawn0, but a "too technical: didn't read" or "TT:DR" summary would be super useful.



#4
starspawn0

    Member

  • Members
  • 1,290 posts

I'm not exactly trying to be rude... but could you simplify that in a more approachable manner? Like, I get what it's saying, just not how it's saying it.


Ah, come on! It's not that complicated -- and I didn't intend for it to be a challenging read.

TL;DR:

* Some groups of researchers think they can simulate a fruit fly, including its body, in a virtual environment within 5 to 10 years.

* There are other efforts. They come in three varieties: (1) Detailed simulation; (2) Simulation using brain + behavior data + Machine Learning; (3) Behavior + Machine Learning only.

* The most promising of these is (2), in my opinion. Progress is coming.
  • Yuli Ban and Alislaws like this

#5
Ewolf20

    Member

  • Members
  • 209 posts
  • Location: Columbia, SC

 


So they're finding ways to stimulate the senses in a virtual environment? Am I right on this one?



#6
NoahJones

    New Member

  • Members
  • 4 posts

 

 


So they're finding ways to stimulate the senses in a virtual environment? Am I right on this one?

 

 

It looks like it. But I really didn't like it. I don't know. It's kinda weird.



#7
starspawn0

    Member

  • Members
  • 1,290 posts
TL;DR: There are some cool videos at the end of this post. Don't read just the first two sentences and miss out on the fun!


An interesting paper submitted to ICLR 2020 on a virtual rodent model:

Deep neuroethology of a virtual rodent

https://openreview.n...m?id=SyxrxR4KPS
 

In this work we develop a virtual rodent that learns to flexibly apply a broad motor repertoire, including righting, running, leaping and rearing, to solve multiple tasks in a simulated world. We analyze the artificial neural mechanisms underlying the virtual rodent's motor capabilities using a neuroethological approach, where we characterize neural activity patterns relative to the rodent's behavior and goals. We show that the rodent solves tasks by using a shared set of force patterns that are orchestrated into task-specific behaviors over longer timescales. Through methods familiar to neuroscientists, including representational similarity analysis, dimensionality reduction techniques, and targeted perturbations, we show that the networks produce these behaviors using at least two classes of behavioral representations, one that explicitly encodes behavioral kinematics in a task-invariant manner, and a second that encodes task-specific behavioral strategies. Overall, the virtual rat promises to facilitate grounded collaborations between deep reinforcement learning and motor neuroscience.
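One of the analysis tools mentioned there, representational similarity analysis, is straightforward to sketch. Here is a minimal illustration with made-up data, comparing the dissimilarity structure of a network layer's activations to that of a set of behavioral features; none of this is the authors' actual analysis code:

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Stand-ins for real data: one row per behavioral episode.
rng = np.random.default_rng(1)
layer_activations = rng.standard_normal((200, 128))  # e.g. policy-layer units
behavior_features = rng.standard_normal((200, 30))   # e.g. kinematic features

# Representational dissimilarity "matrices": pairwise distances between
# episodes in each space, compared with a rank correlation.
rdm_network = pdist(layer_activations, metric="correlation")
rdm_behavior = pdist(behavior_features, metric="correlation")
rho, _ = spearmanr(rdm_network, rdm_behavior)
print(f"RSA (Spearman) between network and behavior RDMs: {rho:.2f}")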


It's a limited model, but it shows promise. And it's built in a different way than I had in mind / predicted for this thread -- I had expected something like this would be built using rodent brain data:
 

In the not-too-distant future, I think the approach that will win out will be meso-scale "simulations" (not really simulations), based on Deep Learning applied to brain population responses + behavioral data (e.g. videos of how the animal behaves), along with neuromuscular models of the organisms interacting with a simulated environment. This seems to be a good compromise between the super-detailed simulations at the individual neuron or synapse level, and the very crude, purely behavior-based models (no brain data).


But this virtual rodent is built via "task-based training". Still, what they've built is impressive! Here are some video links from the paper:

1. "Gaps" task: YouTube video


2. "Forage" task: YouTube video


3. "Escape" task: YouTube video


4. "Two-tap" task: YouTube video


5. "Behavioral brain map": YouTube video


6. "Neural activity stream during gaps task": YouTube video


7. "Neural dynamics during two-tap task": YouTube video


8. "Neural dynamics during forage": YouTube video


9. "Policy layer-2 inactivation": YouTube video


10. "Core layer inactivation": YouTube video


11. "Policy layer-2 spin inactivation": YouTube video


And there are several more on "variants" of the model -- see the paper.

The question is: who built it? I guess we'll have to wait to find out! (I have some guesses.)
  • Yuli Ban likes this



