True Artificial Intelligence Could Be Closer Than We Think, Via Brain-Computer Interfaces + Deep Learning

AI BCIs Brain-Computer Interface Artificial Intelligence

77 replies to this topic

#21
Alislaws

    Democratic Socialist Materialist

  • Members
  • 1,852 posts
  • Location: London

Why so skeptical about Neuralink? The "commercial device in 10 years" claim was based on some sort of implant for medical purposes, similar in principle to existing implants for Parkinson's, not the full million-neuron product -- or that's what I took away from Tim Urban's article.

 

oh no wait:

 

 

https://www.techrada.../news/neuralink

“We are aiming to bring something to market that helps with certain severe brain injuries (stroke, cancer lesion, congenital) in about four years.”

 

Beyond that, the jury is out. There is a law that corresponds to Moore’s law, which states that the number of simultaneously recorded neurons has doubled every seven years. That could mean that we’re anywhere between 25 and 50 years from having a mainstream device.

 

Musk has different ideas: “I think we are about eight to 10 years away from this being usable by people with no disability … It is important to note that this depends heavily on regulatory approval timing and how well our devices work on people with disabilities.”

So we'll see what's going on when they release their first medical devices, and revise estimates from there.

 

I read somewhere (can't remember where) that if you tell people something will take longer than 10 years, they lose interest. Ten years is about as far away as people can properly conceptualise; anything beyond that just gets filed under "basically never going to happen" in people's heads -- hence smoking, global warming, etc. So it could be that they picked the 10-year date because they hadn't started yet, had no real idea how long it would take, and wanted to generate interest.



#22
starspawn0

    Member

  • Members
  • 864 posts
A new bit of research by some neuroscientists to add to the pile:

https://www.biorxiv....18/05/21/327601
 

Language encoding models help explain language processing in the human brain by learning functions that predict brain responses from the language stimuli that elicited them. Current word embedding-based approaches treat each stimulus word independently and thus ignore the influence of context on language understanding. In this work, we instead build encoding models using rich contextual representations derived from an LSTM language model. Our models show a significant improvement in encoding performance relative to state-of-the-art embeddings in nearly every brain area. By varying the amount of context used in the models and providing the models with distorted context, we show that this improvement is due to a combination of better word embeddings learned by the LSTM language model and contextual information. We are also able to use our models to map context sensitivity across the cortex. These results suggest that LSTM language models learn high-level representations that are related to representations in the human brain.


So, they now have models that do an even better job predicting brain responses to language, by using LSTM neural nets. I suppose it shouldn't be a surprise that they learn brain-like features: a model that can predict the flow of human language sufficiently well should contain at least some information about the thing that generated it -- in some form or other.
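To make the recipe concrete, here's a minimal sketch of that kind of encoding model (my own toy version, assuming a generic PyTorch LSTM and ridge regression -- not the authors' actual pipeline): contextual LSTM states for each stimulus word are used as features to linearly predict each voxel's response.

import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import Ridge

torch.manual_seed(0)
vocab_size, embed_dim, hidden_dim, n_voxels = 5000, 128, 256, 1000

# Stand-ins for a pretrained LSTM language model (random weights here).
embed = nn.Embedding(vocab_size, embed_dim)
lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

word_ids = torch.randint(0, vocab_size, (1, 200))   # one 200-word stimulus
with torch.no_grad():
    states, _ = lstm(embed(word_ids))               # (1, 200, hidden_dim)
X = states.squeeze(0).numpy()                       # one contextual feature vector per word

# Dummy brain responses aligned to each word (replace with real fMRI/ECoG data).
Y = np.random.randn(200, n_voxels)

# Ridge regression maps LSTM states to all voxels; score held-out correlation.
enc = Ridge(alpha=10.0).fit(X[:150], Y[:150])
pred = enc.predict(X[150:])
r = [np.corrcoef(pred[:, v], Y[150:, v])[0, 1] for v in range(5)]
print("held-out correlations (first 5 voxels):", np.round(r, 3))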

I'd like to see them go the other way around -- use brain data to produce a language model or neural net that outputs a "meaning representation" for a block of text. Unfortunately, it would take massively more data than can be extracted to date. As usual, we'll have to wait for the BCIs to arrive to make it possible!
  • Yuli Ban, SkyHize and Alislaws like this

#23
starspawn0

    Member

  • Members
  • 864 posts
Another recent bit of research:

https://arxiv.org/abs/1805.09975
 

The human autonomic nervous system has evolved over millions of years and is essential for survival and responding to threats. As people learn to navigate the world, "fight or flight" responses provide intrinsic feedback about the potential consequence of action choices (e.g., becoming nervous when close to a cliff edge or driving fast around a bend.) We present a novel approach to reinforcement learning that leverages a task-independent intrinsic reward function that mimics human autonomic nervous system responses based on peripheral pulse measurements. Our hypothesis is that such reward functions can circumvent the challenges associated with sparse and skewed rewards in reinforcement learning settings and can help improve sample efficiency. We test this in a simulated driving environment and show that it can increase the speed of learning and reduce the number of collisions during the learning stage.


They used photoplethysmographic measurements of changes in blood volume to train a Reinforcement Learning agent (first, they used Deep Learning to train a neural net to predict blood volume changes given frames of video; and then they used this net to improve training of a Reinforcement Learning agent). These measurements are known to be correlated with autonomic nervous system responses, just as fMRI measurements of blood oxygenation in the brain are correlated with neural activity.
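The overall shape of that setup looks something like this (a hedged sketch; the class and the exact way the penalty is folded into the reward are my own stand-ins, not the paper's code): a frozen net predicts the pulse-derived "arousal" signal from a frame, and that prediction becomes a dense intrinsic term added to the sparse task reward.

import torch
import torch.nn as nn

class ArousalNet(nn.Module):
    """Stand-in for the net trained to predict pulse-derived responses from frames."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, frame):                  # frame: (B, 3, H, W)
        return self.features(frame)            # predicted arousal in [0, 1]

arousal_net = ArousalNet().eval()              # frozen after supervised pretraining

def shaped_reward(extrinsic, frame, lam=0.5):
    """Sparse task reward plus a dense physiological penalty (hypothetical rule)."""
    with torch.no_grad():
        arousal = arousal_net(frame.unsqueeze(0)).item()
    return extrinsic - lam * arousal           # "nervousness" acts as negative reward

frame = torch.rand(3, 64, 64)                  # dummy observation
print(shaped_reward(extrinsic=0.0, frame=frame))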

A high-resolution BCI would produce a much better signal to train Reinforcement Learning agents. But, again, we will have to wait for them to be available.

....

One more thing: people have been talking recently about the need for "causal knowledge" and "causal models", and claiming that so-called Big Data won't deliver that. Actually, I see no compelling reason why existing sources of data shouldn't -- if you record everything a person does in their life, all the words they say, the moves they make, what they see (and how they see it), what they hear, touch, taste, and so on, then you basically have all the inputs they had to build their causal model. And this includes understanding the difference between "probability of Y given that I DO X" and "probability of Y given that X OCCURS". With that information + the right training algorithms + maybe some brain priors, one should be able to train a model to make causal predictions as accurately as a human, even though the computer only got to passively observe the world.
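A toy simulation (my own example) of the DO vs. OCCURS distinction: a hidden common cause makes the observational probability overstate what an intervention would actually deliver.

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
Z = rng.random(n) < 0.5                        # hidden common cause
X = rng.random(n) < np.where(Z, 0.9, 0.1)      # Z strongly drives X
Y = rng.random(n) < np.where(Z, 0.8, 0.2)      # Z drives Y; X itself does nothing

p_obs = Y[X].mean()                            # P(Y=1 | X=1): confounded, ~0.74
p_do = Y.mean()                                # P(Y=1 | do(X=1)): forcing X leaves Y at ~0.50

print(f"P(Y | X=1)     = {p_obs:.2f}  (observation: X 'looks' helpful)")
print(f"P(Y | do(X=1)) = {p_do:.2f}  (intervention: it isn't)")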

Google doesn't have complete recordings of people's lives from birth; but they do have billions of videos, which should, in principle, contain that much and more information from which to learn causal relationships.

But, in the event that that doesn't pan out (e.g. if the algorithms don't work as well as expected), there is still brain data. From brain recordings, it should be possible to extract complex features about objects that reveal causal relationships. And, the causal inference mechanism used by the brain will likely be exposed and learnable using existing methods.
  • Yuli Ban likes this

#24
starspawn0

    Member

  • Members
  • 864 posts
^^^Reflecting on this most recent work, I can see that it could enable researchers to build much smarter AI agents, by working out the mapping from frames of video to autonomic nervous system responses; but this mapping is going to be very, very complicated for anything having to do with the real world (not video games or virtual worlds), except maybe in very limited domains. So, lots and lots of training data will be needed to work out a reasonable approximation.

However, BCI data contains much more information than the kind of body-response data they collect, and should expose the micro-steps that the brain takes in going from video --> nervous system response. It's like the difference between trying to learn a function / mapping given just input-output examples, and learning the function given not only those values but also some encoding of a step-by-step recipe for how to compute it.
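A toy version of that difference (entirely my own construction): the same two-stage network trained with an auxiliary loss on intermediate "micro-step" targets -- standing in for recorded brain states -- in addition to the final input-output loss.

import torch
import torch.nn as nn

net1 = nn.Linear(64, 32)                       # video frame -> "micro-step" state
net2 = nn.Linear(32, 1)                        # micro-step state -> body response

frames = torch.randn(256, 64)                  # dummy inputs
brain_states = torch.randn(256, 32)            # intermediate targets (from a BCI, say)
responses = torch.randn(256, 1)                # final autonomic-response targets

opt = torch.optim.Adam(list(net1.parameters()) + list(net2.parameters()), lr=1e-3)
for _ in range(100):
    mid = net1(frames)
    out = net2(mid)
    # The first term alone = learning from input-output examples only;
    # the second term supervises the intermediate step -- the "recipe".
    loss = nn.functional.mse_loss(out, responses) \
         + 0.5 * nn.functional.mse_loss(mid, brain_states)
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))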
  • Yuli Ban likes this

#25
starspawn0

    Member

  • Members
  • 864 posts
Another piece of research, posted to the arXiv just today by a group at Harvard and the IBM Watson AI group (I think they are all part of David Cox's lab):

https://arxiv.org/abs/1805.10734


Title: A neural network trained to predict future video frames mimics critical properties of biological neuronal responses and perception

Abstract: While deep neural networks take loose inspiration from neuroscience, it is an open question how seriously to take the analogies between artificial deep networks and biological neuronal systems. Interestingly, recent work has shown that deep convolutional neural networks (CNNs) trained on large-scale image recognition tasks can serve as strikingly good models for predicting the responses of neurons in visual cortex to visual stimuli, suggesting that analogies between artificial and biological neural networks may be more than superficial. However, while CNNs capture key properties of the average responses of cortical neurons, they fail to explain other properties of these neurons. For one, CNNs typically require large quantities of labeled input data for training. Our own brains, in contrast, rarely have access to this kind of supervision, so to the extent that representations are similar between CNNs and brains, this similarity must arise via different training paths. In addition, neurons in visual cortex produce complex time-varying responses even to static inputs, and they dynamically tune themselves to temporal regularities in the visual environment. We argue that these differences are clues to fundamental differences between the computations performed in the brain and in deep networks. To begin to close the gap, here we study the emergent properties of a previously-described recurrent generative network that is trained to predict future video frames in a self-supervised manner. Remarkably, the model is able to capture a wide variety of seemingly disparate phenomena observed in visual cortex, ranging from single unit response dynamics to complex perceptual motion illusions. These results suggest potentially deep connections between recurrent predictive neural network models and the brain, providing new leads that can enrich both fields.


Results like that were previously known for still images; but, now, they've managed to show that the features neural nets (PredNet) learn in predicting frames of video closely match features observed in the brain.
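For flavor, here's a bare-bones self-supervised next-frame predictor (vastly simpler than PredNet, but trained with the same kind of signal: predict frame t+1 from the frames before it, no labels required).

import torch
import torch.nn as nn

class FramePredictor(nn.Module):
    def __init__(self, h=64, w=64, hidden=512):
        super().__init__()
        self.encode = nn.Sequential(nn.Flatten(), nn.Linear(h * w, hidden))
        self.rnn = nn.GRUCell(hidden, hidden)
        self.decode = nn.Linear(hidden, h * w)

    def forward(self, clip):                     # clip: (T, 1, H, W) grayscale
        state = torch.zeros(1, self.rnn.hidden_size)
        preds = []
        for t in range(clip.shape[0] - 1):
            state = self.rnn(self.encode(clip[t:t+1]), state)
            preds.append(self.decode(state))     # prediction of frame t+1
        target = clip[1:].flatten(1)
        return nn.functional.l1_loss(torch.cat(preds), target)

model = FramePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clip = torch.rand(10, 1, 64, 64)                 # dummy video; no labels needed
loss = model(clip)
loss.backward(); opt.step()
print(float(loss))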

You may recall that I mentioned in the OP that a really good application of BCIs would be to build a model that predicts brain responses to video. The present work kind of goes the other way around -- it leverages big data that doesn't involve the brain to predict brain responses. As good as this new result is, the neural net is likely going to miss some high-level semantic features in the video. The way to add these to the model, and get it to match the brain even more accurately, would be to flip it around and use brain data + video to train the model.

One thing that we can get from this model, though, that extends beyond the work itself: by showing that the model accurately matches at least some of the brain's features, it should give one confidence that the model is "on the right track" -- and that training even larger models, with lots more data, should lead to even more accurate video prediction.
  • Yuli Ban and Alislaws like this

#26
starspawn0

    Member

  • Members
  • 864 posts
https://mobile.twitt...939813512462336
 

New preprint on state of the art #Deeplearning models of the retinal response to natural scenes; biorxiv.org/content/early/… The model's interior functionally matches that of the retina and it generalizes to capture decades of #neuroscience experiments on artificial stimuli


The paper:

https://www.biorxiv....18/06/08/340943
 

We addressed both these issues by applying convolutional neural network models (CNNs) to capture retinal responses to natural scenes. We find that CNN models predict natural scene responses with high accuracy, achieving performance close to the fundamental limits of predictability set by intrinsic cellular variability. Furthermore, individual internal units of the model are highly correlated with actual retinal interneuron responses that were recorded separately and never presented to the model during training. Finally, we find that models fit only to natural scenes, but not white noise, reproduce a range of phenomena previously described using distinct artificial stimuli, including frequency doubling, latency encoding, motion anticipation, fast contrast adaptation, synchronized responses to motion reversal and object motion sensitivity. Further examination of the model revealed extremely rapid context dependence of retinal feature sensitivity under natural scenes using an analysis not feasible from direct examination of retinal responses. Overall, these results show that nonlinear retinal processes engaged by artificial stimuli are also engaged in and relevant to natural visual processing, and that CNN models form a powerful and unifying tool to study how sensory circuitry produces computations in a natural context.


And more elaborate neural net models will likely produce good models of the whole brain, at least at the level of neural population responses, given enough training data from high-resolution brain scanners.
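The basic shape of such a model is quite simple -- something like this toy version (my own sketch, not the paper's architecture): a small CNN maps movie frames to per-cell firing rates, fit with a Poisson loss, the standard choice for spike-count data.

import torch
import torch.nn as nn

n_cells = 20
model = nn.Sequential(
    nn.Conv2d(1, 8, 9), nn.Softplus(),        # center-surround-like filters
    nn.Conv2d(8, 8, 9), nn.Softplus(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
    nn.Linear(8 * 16, n_cells), nn.Softplus() # firing rates must be positive
)

frames = torch.rand(32, 1, 50, 50)            # dummy natural-scene frames
spikes = torch.poisson(torch.full((32, n_cells), 2.0))  # dummy spike counts

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
rates = model(frames)
loss = nn.PoissonNLLLoss(log_input=False)(rates, spikes)
loss.backward(); opt.step()
print(float(loss))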
  • Yuli Ban likes this

#27
starspawn0

    Member

  • Members
  • 864 posts
Here is some work on decoding speech from human auditory cortex signals:

https://www.biorxiv....18/06/19/350124

To advance the state-of-the-art in speech neuroprosthesis, we combined the recent advances in deep learning with the latest innovations in speech synthesis technologies to reconstruct closed-set intelligible speech from the human auditory cortex.

....

Our results show that a deep neural network model that directly estimates the parameters of a speech synthesizer from all neural frequencies achieves the highest subjective and objective scores on a digit recognition task, improving the intelligibility by 65% over the baseline.


It likely works in reverse -- that is, it can map text to the brain sequences that drive the generation of the spoken words.
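The decoder itself is conceptually simple; a hedged stand-in (names and sizes are my own, not theirs) would be a regression from windowed neural features to frame-by-frame vocoder parameters, which a speech synthesizer then turns into a waveform.

import torch
import torch.nn as nn

n_electrodes, window, n_vocoder_params = 128, 9, 32

decoder = nn.Sequential(
    nn.Flatten(),                              # (B, electrodes * window)
    nn.Linear(n_electrodes * window, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, n_vocoder_params)           # e.g. spectral envelope + pitch
)

neural = torch.randn(64, n_electrodes, window)   # dummy high-gamma features
targets = torch.randn(64, n_vocoder_params)      # params extracted from clean speech
loss = nn.functional.mse_loss(decoder(neural), targets)
loss.backward()
print(float(loss))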
  • Casey and Yuli Ban like this

#28
starspawn0

    Member

  • Members
  • 864 posts
This is an interesting Twitter thread by a Google Brain neuroscientist (and professor) named David Sussillo:

https://mobile.twitt...637406765199361

He seems to have realized, like me, that thunderstorm clouds of change are on the horizon. A data explosion from cheap brain scanners + advances in deep learning will produce a golden age of systems neuroscience, as well as unique challenges:
 

In summary, systems neuroscience is about to get bonkers. The advent of inexpensive, high-quality recording equipment with the maturation of deep learning is an incredible opportunity!

Couldn't be a better time to be in the field, but we should be preparing now.


It will also lead to explosive advances in AI using the first idea he mentions here (as I have said):
 

With ML two modes of inquiry have started to take off:

1. Model the data - optimize a system that generates your simultaneously recorded data.

2. Model the task - optimize a system that performs a task analogous to yours and then compare model internals to recorded neurons.


But he isn't yet thinking about data coming from BCIs -- rather, he's thinking about Neuropixels probes (applied to animals):
 

This is already happening in a few labs with Neuropixels probes or Ca2+ imaging. IMO, mass deployment of Neuropixels probes represents the vanguard of the game changing recording technology that systems neuroscience has been waiting for.


Still, animal AI models inferred from Deep Learning applied to massive amounts of Neuropixels probe data would be amazing, and encouraging. Results like this have already started trickling out (see the Cosyne posters mentioned in my postings above); but it's still very new. I expect we will soon hear about results that some will label "shocking" and "frightening"; e.g. see what I've written in other threads about "artificial animals".
  • Casey and Alislaws like this

#29
starspawn0

    Member

  • Members
  • 864 posts
David Sussillo, the guy behind the previous post on the data revolution ahead in systems neuroscience, also gave a talk at the Simons Institute a few months back on using Deep Learning to model neural population dynamics:

https://youtu.be/08xa7bE5iTQ

It's an excellent talk!

He only discusses results for neural populations without input (only initial conditions that encode previous state and input), but says the approach works more generally. He says this no-input case turns out to be a shockingly good model for many parts of the brain over about a 1- or 2-second interval, but obviously not for all -- for example, the V1 visual area is affected by visual input over short timescales; but if you add that input to the deep-learned model, you can fit V1, too.
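In the spirit of what he describes (though not his actual code), the setup looks roughly like this: map the recorded activity at t=0 to an RNN initial state, let the RNN run autonomously with no sensory input, and read out the predicted population activity over the whole interval.

import torch
import torch.nn as nn

n_neurons, hidden, T = 100, 128, 50

class AutonomousRNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.cell = nn.GRUCell(1, hidden)             # dummy constant input
        self.init_map = nn.Linear(n_neurons, hidden)  # condition -> initial state
        self.readout = nn.Linear(hidden, n_neurons)

    def forward(self, initial_activity):              # (B, n_neurons) at t=0
        h = torch.tanh(self.init_map(initial_activity))
        zeros = torch.zeros(initial_activity.shape[0], 1)
        out = []
        for _ in range(T):                            # no input during the rollout
            h = self.cell(zeros, h)
            out.append(self.readout(h))
        return torch.stack(out, dim=1)                # (B, T, n_neurons)

model = AutonomousRNN()
recorded = torch.randn(8, T, n_neurons)               # dummy recorded trajectories
pred = model(recorded[:, 0, :])                       # roll out from t=0 alone
loss = nn.functional.mse_loss(pred, recorded)
loss.backward()
print(float(loss))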

He talks about how shocked he was at how well all this worked.

I seem to recall someone asking him near the end whether the deep neural net model of grasping behavior could be used to guide actual complex arm motions (I was listening on the road, and may have misheard). And he said they would need to train on a diverse dataset, but that it might work -- he mentioned how the models are better at "interpolation", rather than "extrapolation", hence the need for diverse data. I am actually skeptical that it wouldn't extrapolate, as interpolation at the model-learning level is not interpolation at the behavior level.

Still, it would be amazing to see that! -- imagine a neural net model fluidly guiding a primate's arm and hands, given some high-level command of the action to be taken!
 
Listen to the part beginning around 1 hour 16 minutes in where he answers an audience question about whether this model really is meaningful.  He rags on connectomics a little, and then says if you stitch together enough datasets the Deep Learning approach should give you a real, abstract model of a large part of the primate brain.   

My contention is that if you scale this up, and use human brain data acquired from BCIs, you will get similarly shocking results. No, you won't reproduce a perfect simulation of a human brain; but you will recover enough of the functioning to build good video understanding systems and chatbot rerankers, which will result in incredibly human-like conversations. I think this will also result in much more agile robots -- humans will "program" them by using a BCI to make them walk and grasp, and then, when enough data are acquired, the BCI will no longer be needed.

I'm glad to hear Google Brain is doing this line of work. It will be immensely important when those BCIs arrive!
  • Alislaws likes this

#30
starspawn0

    Member

  • Members
  • 864 posts
Here is another interesting paper:

https://www.biorxiv....18/07/08/364489

The paper is ostensibly about:

Here, RNNs are trained to reproduce animal behavior while also recapitulating key statistics of empirically recorded neural activity. In this manner, the RNN can be viewed as an in silico circuit whose computational elements share similar motifs with the cortical area it is modeling. Further, as the RNN's governing equations and parameters are fully known, they can be analyzed to propose hypotheses for how neural populations compute. In this context, we present important considerations when using RNNs to model motor behavior in a delayed reach task. First, by varying the network's nonlinear activation and rate regularization, we show that RNNs reproducing single neuron firing rate motifs may not adequately capture important population motifs. Second, by visualizing the RNN's dynamics in low-dimensional projections, we demonstrate that even when RNNs recapitulate key neurophysiological features on both the single neuron and population levels, it can do so through distinctly different dynamical mechanisms.


And these issues can be dealt with.

But the part that I found most interesting was the fact that they were able to train models to generalize nicely:

Finally, we show that these dynamics are sufficient for the RNN to generalize to a target switch task it was not trained on.


That's a good sign that they really are capturing the underlying dynamics, and not simply "memorizing" behaviors.

Ordinarily, deep neural nets trained with Deep Learning don't generalize as well as people would like -- about as well as one might expect, but not as well as one hopes. However, the tasks where deep neural nets generalize poorly tend to be "unnatural", coming from language or other human-created sources. When trained to predict natural physical phenomena like fluid dynamics, smoke behavior, simple kinematics, and other "physical" and even "biological" processes, they tend to do much better.

Here's the thing about that: the brain behaves like these "natural" systems that are easier to predict, even though it acts on and generates things that are "unnatural"; the unnatural stuff arises from the long-term dynamics of a natural system.

Thus, we should expect that Deep Learning will generalize nicely on brain behaviors -- just as it does with predicting fluid dynamics -- and as a result of the long-term dynamics, we might see the emergence of artificial things like language use and symbol manipulation.
  • Yuli Ban likes this

#31
tomasth

    Member

  • Members
  • 169 posts

What if the reason that "even when RNNs recapitulate key neurophysiological features on both the single neuron and population levels, it can do so through distinctly different dynamical mechanisms" is that they are incapable of the "emergence of artificial things" "as a result of the long-term dynamics"?

 

Some underlying dynamics may be easier than others for a variety of dynamical mechanisms to model.



#32
moderate_ai

    New Member

  • Members
  • 6 posts

The creepy aspect of this is how the model is learning what inputs should nudge the brain towards a target state -- which advertisers are going to throw money at in a big way.

 

I don't know what the limits of charisma and persuasion are (how well a perfect model could manipulate someone), but I guess we're going to find out in the coming decades. Hopefully it doesn't uncover too many cheap psychological tricks, the debating equivalent of optical illusions.


  • Yuli Ban and starspawn0 like this

#33
starspawn0

    Member

  • Members
  • 864 posts
Welcome, moderate_ai. I see you've found a new home; though, this forum may not last. Yes, neuromarketing is going to be a big driver for this line of work, once the BCIs arrive, and once it develops.

#34
starspawn0

    Member

  • Members
  • 864 posts

This is an interesting thread:

https://mobile.twitt...026379515801601

I just want to focus on one small comment, unrelated to the main thrust of the work:
 

4. What if some neurons “fall asleep” on the job and don’t respond to the image? This actually happens very often, and yet the brain is remarkably robust to these failures.

5. Even if 90% of the neurons don’t do their job, we can still recognize the fox. Even if we randomly change 90% of the pixels, we can still recognize the fox. The brain is robust to a lot of manipulations like that.


Sounds like the brain's need for robustness will mean that a lot of its functioning can be seen at the population level -- or at least as aggregates of many neurons (that may be spatially isolated). Probably the fine details of memory don't work this way; but parts of the visual cortex, and the motor and pre-motor cortex, do.

I wouldn't be surprised if what we call "thinking", "reasoning", "planning", "symbol processing", and "language understanding" can be read off from population-level dynamics; and that if a model emulates them approximately, it will express some of these human competencies.

If this is true, it would be further evidence that the "rough emulation" via Deep Learning approach to building powerful AI is sound.


  • funkervogt likes this

#35
funkervogt

    Member

  • Members
  • 634 posts

The creepy aspect of this is how the model is learning what inputs should nudge the brain towards a target state -- which advertisers are going to throw money at in a big way.

 

I don't know what the limits of charisma and persuasion are (how well a perfect model could manipulate someone), but I guess we're going to find out in the coming decades. Hopefully it doesn't uncover too many cheap psychological tricks, the debating equivalent of optical illusions.

Hello moderate_ai. It's great to see another former member of Kurzweilai here. 



#36
starspawn0

    Member

  • Members
  • 864 posts
It just occurred to me that another thing one can read off from David Sussillo's talk above is that it may require far fewer training sessions (and/or hours) to acquire enough data to build "rough emulation" models than one might think. I have given reasons for this before, but Sussillo's talk suggests a new one:

One thing he says is that the behavior of parts of the motor cortex is roughly independent of the rest of the brain, in the sense that he is able to train models to predict the dynamics of these brain regions very accurately over a 1-to-2-second interval, without using constant input and feedback from neighboring parts of the brain. Obviously there is going to be some influence from the rest of the brain; but to a first, second, or third-order approximation, these inputs don't matter over short time intervals.

What this suggests, therefore, is that you can "pre-train" a model to predict the brain by training independent models of different brain regions first, then weakly hooking them together and training end-to-end afterwards.

And what's great about that is that you can train, say, dozens or hundreds of independent little modules in parallel, using the same hours of training data from brain recordings. It not only speeds up the training, but also magnifies the gain from the same data source -- compared to training for similar tasks using similar methods and similar amounts of (non-brain) data.

Perhaps, therefore, 100 hours of data would be the equivalent of 5,000 hours of data from traditional (non-brain) sources. If so, you wouldn't need very much data at all to build AI models -- you could even use a single source, from a single individual.
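A toy sketch of that two-stage scheme (entirely my own construction): per-region modules are first fit independently -- an embarrassingly parallel step -- and then hooked together with weak, near-zero couplings for end-to-end fine-tuning.

import torch
import torch.nn as nn

n_regions, dim = 4, 32

class RegionModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.step = nn.Sequential(nn.Linear(dim, dim), nn.Tanh())
    def forward(self, x):
        return self.step(x)

regions = nn.ModuleList(RegionModule() for _ in range(n_regions))

# Stage 1: each module fits its own region's recordings (could run in parallel,
# all on the same hours of brain data).
for r, module in enumerate(regions):
    x_t = torch.randn(64, dim)                 # region r activity at time t
    x_next = torch.randn(64, dim)              # region r activity at time t+1
    loss = nn.functional.mse_loss(module(x_t), x_next)
    loss.backward()                            # one illustrative gradient step

# Stage 2: weak couplings between regions, initialized near zero; the whole
# assembly would then be trained end-to-end on the same recordings.
coupling = nn.Parameter(0.01 * torch.randn(n_regions, n_regions))
states = [torch.randn(1, dim) for _ in range(n_regions)]
new_states = [
    regions[i](states[i]) +
    sum(coupling[i, j] * states[j] for j in range(n_regions) if j != i)
    for i in range(n_regions)
]
print(new_states[0].shape)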

....

I've already said that I think large parts of cognition can be explained by "population dynamics", but what about this "independence"?

Well, Sussillo says it himself. He says that there are regions like V1 that obviously make use of continuous input from sensors (the eyes); but that "dynamical" models (no continuous input, only input for the initial conditions) work with good accuracy for many parts of the brain, not just the motor cortex he spoke on in depth. Perhaps a lot of cognition is like some sort of "mixture model", where different modules (either innate or learned) do their jobs, and when they come up with something they want the rest of the brain to pay attention to, they rise above the cacophony of other modules and take control. The brain is a chorus of nearly-independent voices that occasionally stand out from the crowd, and occasionally become united in harmony.
  • Casey and Yuli Ban like this

#37
moderate_ai

    New Member

  • Members
  • 6 posts

Well, Sussillo says it himself. He says that there are regions like V1 that obviously make use of continuous input from sensors (the eyes); but that "dynamical" models (no continuous input, only input for the initial conditions) work with good accuracy for many parts of the brain, not just the motor cortex he spoke on in depth. Perhaps a lot of cognition is like some sort of "mixture model", where different modules (either innate or learned) do their jobs, and when they come up with something they want the rest of the brain to pay attention to, they rise above the cacophony of other modules and take control. The brain is a chorus of nearly-independent voices that occasionally stand out from the crowd, and occasionally become united in harmony.

 

Regarding which concepts bubble to the top, I found this paper on how inhibitory neurons regulate sparsity quite interesting: http://brain.mpg.de/..._et_al_NN07.pdf. I wonder if the basic mechanism that maintains sparsity for simple inputs (in this case, odor intensity) might also apply to the mixing of higher-level concepts.
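To give a feel for the kind of mechanism I mean, here's a toy version of inhibition-enforced sparsity (my own illustration, not from the paper): a global inhibitory signal scales with overall drive, so roughly the same number of units survive at any input intensity.

import torch

def sparsify(activity: torch.Tensor, k: int = 5) -> torch.Tensor:
    # The (n-k)-th largest activation sets the inhibitory threshold,
    # so only ~k winners stay above zero, whatever the drive level.
    inhibition = activity.kthvalue(activity.numel() - k).values
    return torch.relu(activity - inhibition)

weak = torch.rand(100)            # faint odor: low overall drive
strong = 10 * weak                # intense odor: same pattern, 10x the drive
print((sparsify(weak) > 0).sum().item(), (sparsify(strong) > 0).sum().item())
# Both print 5: the code stays equally sparse across input intensities.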


  • starspawn0 likes this

#38
starspawn0

    Member

  • Members
  • 864 posts
This is interesting; I will study it more carefully. Sparse coding isn't quite what I had in mind in what I wrote above -- it occurs at a lower level of organization than the little "neural modules" I wrote about; but perhaps, as you may have been suggesting, a phenomenon similar to sparse coding occurs at the module level, and results in modular specialization.

Actually, I guess you were talking about representations of abstractions. I could see that both "distributed" and "sparse" representations have their advantages. Maybe I'll write a separate post about it.

....

In addition to the kind of independence I wrote about, and the population dynamics, I should probably also have mentioned that brains are "resilient" and "robust" to damage: cut out a part of the brain and, depending on where the cut was made, the person will not be fatally impaired. You'll still be able to talk to them, and they may not seem to have an impairment. Cut out the hippocampus, for instance, and they may have difficulty integrating knowledge and forming new memories; but they will still be able to have a conversation with you that seems normal over short stretches. And people with amnesia (intact language understanding, intact procedural and declarative memory, but damaged episodic memory) can still function -- they will seem mostly normal.

The great thing I see about that is that if you don't record enough brain data from a single individual to know how they would react in virtually every situation, it may not matter, as far as predicting brain responses with good accuracy goes -- one's rough emulation will act like a high-res emulation of a person with mild brain damage, where a few bits of procedural memory or functionality are deleted. For example, it's unlikely that a few hundred hours of recording a single individual's responses to language will give you data covering every word or phrase they have ever been exposed to; but it may be enough to encode how they would respond if they had a small stroke and forgot a few thousand things they knew -- you would still be able to predict their responses most of the time.

Another great thing that "resilience" and "independence" suggest is that the brain is "stable", in the sense that "noise" and "errors" in the modules don't rapidly spiral out of control, resulting in a catastrophic breakdown of the whole system. This is good for our Deep Learning approach, as it could mean that the dynamics are "stable or quasi-stable" and try to avoid highly chaotic orbits.

#39
tomasth

    Member

  • Members
  • 169 posts

Resilience to failure is important for any safety-critical system to emulate. Can those capabilities be copied to planes, AVs, power plants, etc.?



#40
starspawn0

    Member

  • Members
  • 864 posts
This is an interesting paper:

https://www.ncbi.nlm...les/PMC4708087/
 

Complex animal behaviors are likely built from simpler modules, but their systematic identification in mammals remains a significant challenge. Here we use depth imaging to show that three-dimensional (3D) mouse pose dynamics are structured at the sub-second timescale. Computational modeling of these fast dynamics effectively describes mouse behavior as a series of reused and stereotyped modules with defined transition probabilities. We demonstrate this combined 3D imaging and machine learning method can be used to unmask potential strategies employed by the brain to adapt to the environment, to capture both predicted and previously-hidden phenotypes caused by genetic or neural manipulations, and to systematically expose the global structure of behavior within an experiment. This work reveals that mouse body language is built from identifiable components and is organized in a predictable fashion; deciphering this language establishes an objective framework for characterizing the influence of environmental cues, genes and neural activity on behavior.


Another study, by some of the same group, wherein they used video of mouse behavior to train a Deep Learning + probabilistic graphical model to predict future mouse video:

https://arxiv.org/abs/1603.06277
 

We propose a general modeling and inference framework that composes probabilistic graphical models with deep learning methods and combines their respective strengths. Our model family augments graphical structure in latent variables with neural network observation models. For inference, we extend variational autoencoders to use graphical model approximating distributions with recognition networks that output conjugate potentials. All components of these models are learned simultaneously with a single objective, giving a scalable algorithm that leverages stochastic variational inference, natural gradients, graphical model message passing, and the reparameterization trick. We illustrate this framework with several example models and an application to mouse behavioral phenotyping.



And there was also a study on predicting dog behaviors that people might have seen:

https://arxiv.org/abs/1803.10827

Again, no brain data was used. If Deep Learning can be used to train a neural net to model zombie-like mouse video given only behavioral data, imagine what it could do if you just give it a little brain data!
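To illustrate the "reused modules with transition probabilities" idea, here is a minimal sketch (my own, assuming the third-party hmmlearn package; the papers use much richer AR-HMM / structured-VAE models): fit a plain Gaussian HMM over per-frame pose features, so each hidden state is a candidate behavioral syllable.

import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
pose = rng.standard_normal((5000, 10))        # dummy per-frame pose features

model = hmm.GaussianHMM(n_components=8, covariance_type="diag", n_iter=20)
model.fit(pose)
syllables = model.predict(pose)               # one discrete label per frame
print(np.bincount(syllables))                 # how often each module is reused
print(model.transmat_.round(2))               # syllable transition probabilities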

There have also been a few studies that use brain data, as I mentioned earlier in this thread. For example, there was a study that used Deep Learning + brain data + behavioral data to jointly model mouse whisker movements, running on a track-pad, pupil dilation, visual fixations, and visual cortex processing. It didn't model smell, complex planning, location tracking, and many other things.

....

Another thing worth remarking on, that is tangential to this thread: not only can mouse motions be broken up into "motion syllables and phrases", but the same is true of human motion. It's also true of conversation at not only the basic level, but also at a high level. Some of my relatives, for example, have such modular, stereotyped conversations that I have been writing down pat answers to some of their questions -- and when they ask me again, I just show them the answer I wrote down. I also know people who have more free-form conversations; but I bet even they can be broken down into basic components -- though, different people will have different associated components.

This gives me hope that, even if it takes a while until machines can pass a Turing Test, high-performing conversational agents built using a "dialog state graph" with a gargantuan number of states and a high level of granularity will make it possible for machines to have human-like conversations near-term -- so long as one doesn't expect deep, philosophical conversations where people generate long monologues and expect their listeners to follow every word.
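A toy "dialog state graph", vastly smaller than the gargantuan one imagined above (the states, keywords, and replies are all made up for illustration):

GRAPH = {
    "greet":   {"reply": "Hi! How was your week?",
                "edges": {"good": "ask_plans", "bad": "sympathize"}},
    "ask_plans":  {"reply": "Nice. Any plans for the weekend?", "edges": {}},
    "sympathize": {"reply": "Sorry to hear that. What happened?", "edges": {}},
}

def respond(state: str, user_turn: str) -> tuple[str, str]:
    node = GRAPH[state]
    for keyword, nxt in node["edges"].items():
        if keyword in user_turn.lower():       # crude keyword matching on the turn
            return nxt, GRAPH[nxt]["reply"]
    return state, node["reply"]                # no match: stay put and re-prompt

state = "greet"
print(GRAPH[state]["reply"])
state, reply = respond(state, "Pretty good actually")
print(reply)                                   # -> "Nice. Any plans for the weekend?"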
  • Yuli Ban likes this




