True Artificial Intelligence Could Be Closer Than We Think, Via Brain-Computer Interfaces + Deep Learning

AI BCIs Brain-Computer Interface Artificial Intelligence

86 replies to this topic

#61
funkervogt

    Member

  • Members
  • 691 posts

Do BCIs do a better job of seeing your brain activity if you have a shaved head? Does hair block your brain's signals? 


  • starspawn0 likes this

#62
starspawn0

    Member

  • Members
  • 976 posts
It shouldn't, and the kind Mary Lou Jepsen is building won't. In fact, you can even put a thick wig on, and it should still work -- unless the wig blocks near-infrared light.

Basically, as light from the LED array passes through hair, it will scatter (and some will be absorbed) -- but it scatters even more as it passes through the skull and through the first layers of the brain. All that scattering is taken into consideration. If you are given the exit waveform that strikes the camera array, you can invert the pattern.

Actually, Mary Lou Jepsen's method doesn't need to invert the pattern of light -- it just tries to find the right waveform so that it focuses on each voxel. And ultrasound is used to do this. The main problem lies with getting the ultrasound to focus, to form a "guidestar". Fortunately, it doesn't scatter nearly as much as light, and passes straight into the brain; but there is another problem, namely that it can reflect off the skull from the inside, and cause distortions in the wave pattern. This problem can be fixed using three different transducers.
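To make the "find the right waveform so it focuses" idea a bit more concrete, here's a toy numpy sketch (my own illustration, nothing to do with Openwater's actual implementation): if you model the scattering tissue as a random complex transmission matrix, then driving the input pixels with the phase-conjugate of the row that maps to your target voxel makes all the scattered paths arrive in phase there, so the light refocuses:

```python
import numpy as np

# Toy model: scattering tissue as a random complex "transmission matrix" T.
# Output field = T @ input field.  This is NOT Openwater's algorithm, just a
# sketch of the general wavefront-shaping idea described above.
rng = np.random.default_rng(0)
n_in, n_out = 256, 256
T = (rng.normal(size=(n_out, n_in)) + 1j * rng.normal(size=(n_out, n_in))) / np.sqrt(2 * n_in)

target = 100  # the "voxel" (output mode) we want the light to focus on

# Phase conjugation / time reversal: drive the inputs with the conjugate of the
# row of T that maps to the target, so all scattered paths arrive in phase there.
x_shaped = np.conj(T[target]) / np.linalg.norm(T[target])
x_random = rng.normal(size=n_in) + 1j * rng.normal(size=n_in)
x_random /= np.linalg.norm(x_random)

I_shaped = np.abs(T @ x_shaped) ** 2
I_random = np.abs(T @ x_random) ** 2

print("focus / background (shaped):", I_shaped[target] / I_shaped.mean())
print("focus / background (random):", I_random[target] / I_random.mean())
# The shaped wavefront concentrates energy at the target far above background;
# the random wavefront does not.
```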
  • Yuli Ban, Enda Kurina and Alislaws like this

#63
Alislaws

    Democratic Socialist Materialist

  • Members
  • 1,938 posts
  • Location: London

Have there been any official updates from Open Water since the 2018 TED talk, where she mentioned dev kits sometime this year as what they're shooting for?

 

Either to announce successes or delays or setbacks, or any additions to the long term timeline? 

 

Also I was rewatching that TED talk, and she mentions the ability to "Read/Write" to the brain as well as suppressing neurons.

 

How would you make a neuron fire noninvasively? Can ultrasound and red light do that? Is it going to be practical to do that fast enough, across enough nerves/neurons coming into the brain, to interface with a computer?

 

Do we have indicators of how high-res/high-speed the v1.0 of their imager will be? She mentions that the tech can go to the single-neuron level and read at high speed, but was this "there is no physical law preventing us from eventually using this system to scan individual neurons" or "assuming the system is produced successfully, we should definitely be able to scan individual neurons in v1.0"?

 

This tech has me 100% sold on just the medical imaging and medical/AI research benefits, but obviously I'm also hoping this will enable us someday to hijack the senses and place ourselves (subjectively) into a totally realistic virtual environment, indistinguishable from real life to our brains.

 

This would obviously be world-changing tech for a lot of reasons, but it also has the advantage that while playing games/hanging out in FIVR you would also be getting real-time medical scans. So if you played daily and gave an AI doctor access to your image data, it could catch things incredibly early.


  • Yuli Ban, Enda Kurina and starspawn0 like this

#64
starspawn0

    Member

  • Members
  • 976 posts
I didn't notice you had commented on this.

There haven't been any major updates that I've heard about. However, a day or two ago, Jepsen Tweeted about new hires, and scaling-up:

https://mobile.twitt...408133415460864
 

I've been spending what seems like half of my time on @fetcherai and we have accelerated our hiring with this great service. More than even email it's my go-to most hours of the day. Most ways to scale your company involve great hiring processes. Try it. It's great!


@Fetcherai had Tweeted:

https://mobile.twitt...305830972878848
 

Congrats @mljmljmlj & Open Water on another Fetcher hire! This was a sophisticated technical hire, but they found their match in week 2. Don’t already know this team? Watch Mary’s INCREDIBLE #TED talk to get a sense of their tech. #MindBlown #MindRead


Fetcher.com seems to be co-founded by Steve Jurvetson's current wife, Genevieve:

https://mobile.twitter.com/gjurvetson
  • Alislaws likes this

#65
starspawn0

    Member

  • Members
  • 976 posts
A small addendum to my last post: Openwater hired two additional people late last year (maybe they hired more?), whose names don't yet appear on their "about" page, and who I don't think are those Fetcher hires -- I say that because the Fetcher email makes it sound like the hire was more recent than that. One has a background in Mechanical Engineering, and the other is a biomedical imaging expert (Ph.D. and short postdoc from U.C. Davis).

The two additional names can be found on this Linkedin page; I'll let you figure out who they are:

https://www.linkedin...ny/openwater-cc
  • Alislaws likes this

#66
starspawn0

    Member

  • Members
  • 976 posts
A few more tidbits:

In January 2019 Openwater hired another physics engineer / researcher, named Tegan Johnson:

https://www.linkedin...n/tegan-johnson

My guess is to help with the lasers.

They've also been looking for someone with a background in ultrasound, so I expect to see another technical hire soon.

Looks like the startup is building up its technical staff.

....

Another piece of news is this work:

https://arxiv.org/abs/1903.06754

We introduce a large-scale dataset of human actions and eye movements while playing Atari video games. The dataset currently has 44 hours of gameplay data from 16 games and a total of 2.97 million demonstrated actions. Human subjects played games in a frame-by-frame manner to allow enough decision time in order to obtain near-optimal decisions. This dataset could be potentially used for research in imitation learning, reinforcement learning, and visual saliency.


Eye-tracking data is similar to brain data. Both are bio-generated, and eye-tracking data contains information about the cognitive state of the individual; and, as I wrote in one of my earlier posts, it has been used to improve performance on certain tasks, such as text summarization. Pure brain data would work a lot better -- but eye-tracking data is still good, and useful.

....

Mark O. Riedl Tweeted something interesting the other day:

https://mobile.twitt...923144545415168

It is beginning to dawn on me how weird I am with regard to the current majority of AI researchers. I came to AI out of HCI. There has always been an element of interacting with users or modeling humans in my work. I publish in AAAI, IJCAI, NAACL, and CHI.

So anyway... Hi welcome to my world fellow AI travelers. Glad to have you along for the ride. Humans exist. Humans are fun. Humans are hard.


I agree with that. Large amounts of text and other media may be sufficient to imitate humans, but data from HCI will do better -- and imitation should be a key goal. To properly understand language and many human decisions, it will be necessary to actually model humans somehow. You can either do that using bio-generated data (e.g. BCI-based and eye-tracking), or by using massive amounts of text and other media -- which give a much weaker signal for how to model humans.

#67
tomasth

    Member

  • Members
  • 189 posts

It's because we don't know how human intelligence works.

So imitation, to get artificial human intelligence or something close to it, is the first step toward improving on it, or toward learning how it works so we can do AI in general.



#68
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 20,495 posts
  • Location: New Orleans, LA

I realized the other day that zombie AGI sounds quite a bit like some sort of algorithm that has high neuroplasticity and flexibility in such a way that it can create new parameters for itself upon coming across new experiences. Since I have little background in mathematics, it's hard for me to express this, but I'm referring to a kind of neural network that responds to untrained inputs by creating a new subnetwork dedicated to that input (and reinforced by other inputs) rather than completely breaking. Sort of like a tree sprouting a new branch. I can see how, early on in its emergence, it might create thousands of subnetworks for tasks that are actually identical (e.g. playing chess and Go), but at some point, they would cluster into one mega-network through interrelated conceptual understanding.

This is all so far beyond us and it doesn't necessarily allow for true AGI, of course, but it sounds quite a bit like zombie AGI.


  • starspawn0 likes this

And remember my friend, future events such as these will affect you in the future.


#69
starspawn0

    Member

  • Members
  • 976 posts
I don't know about neuroplasticity and the other stuff you mentioned. I used the term "zombie" because the system won't have long-term autobiographical memory, and the part of the system built from human brain data will only react, and act as a "critic". The other, "language model" and "media synthesis" component won't be based on brains, but on massive amounts of text and other media; that part should make it act in non-zombie-like ways. Can a zombie/critic + media synthesis hybrid produce a convincing human-like agent? I conjecture it would, and that this can give the system a way to fake a personality and back-story. But it won't be all that fake, since it really will have many components of human cognition baked in.

I also conjecture that the part based on brain data will react in very human-like ways. I have many reasons for believing this -- and also believe it can be achieved with a lot of different models. This is in line with what a lot of neuroscientists have said about brain models: lots of different methods work, some just work better than others.
  • Yuli Ban likes this

#70
starspawn0

    Member

  • Members
  • 976 posts
An article in Forbes with an important lesson:

https://www.forbes.c...g-in-the-wings/

There’s a lesson to be learned from HBO’s hit comedy series Silicon Valley. The show tells the story of a brilliant but socially awkward programmer who launches a new company based on his groundbreaking compression algorithm. He created the algorithm to download music faster, but the tech industry picks up on its incredible potential to do much more, and over time, the small company morphs into a provider of video streaming, mass data storage and, eventually, a “whole new internet.”

The lesson? An innovation designed to do one thing might be destined for something completely different and much bigger. It’s a scenario that has played out repeatedly in the business world throughout history -- and not just in high tech.


That is what will happen when high-performing BCIs are unleashed onto the world. They aren't intended to improve AI, but that will be one of their main uses. They will also, yes, lead to a whole new internet, and many other things I have described.
  • Yuli Ban and Alislaws like this

#71
starspawn0

    Member

  • Members
  • 976 posts
This is a great article on Alex Huth's work. What he's working on right now is exactly the path I outlined for how to produce AI in this thread. It's right on the money.

https://research.ute...-and-vive-versa

This is leading Huth and Jain to consider a more streamlined version of the system, where instead of developing a language prediction model and then applying it to the brain, they develop a model that directly predicts brain response. They call this an end-to-end system and it's where Huth and Jain hope to go in their future research. Such a model would improve its performance directly on brain responses. A wrong prediction of brain activity would feedback into the model and spur improvements.

"If this works, then it's possible that this network could learn to read text or intake language similarly to how our brains do," Huth said. "Imagine Google Translate, but it understands what you're saying, instead of just learning a set of rules."


He's using massive amounts of fMRI data; but I think this won't be necessary in the future -- you'll be able to use top-of-the-line wearable BCIs, instead, and much more easily produce the training data.
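To illustrate what "directly predicts brain response" could look like in code, here is a minimal PyTorch sketch (my own toy, not Huth and Jain's actual architecture): a small network maps a window of word embeddings straight to predicted voxel responses, and the error against the recorded FMRI is the only training signal.

```python
import torch
import torch.nn as nn

# Minimal sketch (not Huth & Jain's actual model): map a window of word
# embeddings directly to predicted fMRI voxel responses, and train on the
# error against recorded brain activity.
EMB_DIM, WINDOW, N_VOXELS = 300, 8, 10_000

class BrainResponseModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(EMB_DIM, 256, batch_first=True)
        self.readout = nn.Linear(256, N_VOXELS)   # one output per voxel

    def forward(self, word_embeddings):           # (batch, WINDOW, EMB_DIM)
        _, hidden = self.encoder(word_embeddings)
        return self.readout(hidden.squeeze(0))    # (batch, N_VOXELS)

model = BrainResponseModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Random stand-ins for (text window, simultaneously recorded fMRI volume) pairs.
words = torch.randn(32, WINDOW, EMB_DIM)
fmri  = torch.randn(32, N_VOXELS)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(words), fmri)  # wrong brain predictions feed back here
    loss.backward()
    optimizer.step()
```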

It's only a matter of time...
  • Casey and Yuli Ban like this

#72
tomasth

    Member

  • Members
  • 189 posts

And if consumer wearable BCIs fail to arrive soon enough, scaling up what he does, by him and others in the West and in the East, could achieve the same.


  • Yuli Ban and starspawn0 like this

#73
starspawn0

    Member

  • Members
  • 976 posts
Yes, it's possible that FMRI alone might do it. I'm a little worried about the temporal resolution. Certainly, high spatial and temporal resolution BCIs should do the trick -- and I don't anticipate a severe slowdown in their arrival.
  • Yuli Ban likes this

#74
starspawn0

    Member

  • Members
  • 976 posts
All the info our brain needs for language nearly fits on a floppy disk -- around 1.56 megabytes

https://www.newscien...-a-floppy-disk/

That's very little information. What it suggests is that it may not take all that many brain recordings to acquire the bits of information we use to do language processing. The brain is probably going to have a lot of redundancy in how it represents language, and its method for implementing "language processing" is probably not going to be super-efficient. But, still, even if it takes 100x as many bits of information to read off how the brain does it, that's only about 156 megabytes, well below the amount fed into neural nets to build language models, typically.
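Spelling out the arithmetic (the 40 GB corpus figure is just an illustrative comparison, roughly the size of the WebText corpus GPT-2 was trained on):

```python
# Back-of-the-envelope version of the argument above.
language_info_mb = 1.56          # estimate from the New Scientist piece
redundancy_factor = 100          # assume the brain encodes it 100x less efficiently
brain_readout_mb = language_info_mb * redundancy_factor   # 156 MB

# For comparison, an illustrative (assumed) web-text training corpus:
corpus_gb = 40                   # roughly the size of GPT-2's WebText
corpus_mb = corpus_gb * 1024

print(f"bits to read off the brain: ~{brain_readout_mb:.0f} MB")
print(f"typical LM training text:   ~{corpus_mb:,.0f} MB "
      f"({corpus_mb / brain_readout_mb:.0f}x larger)")
```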
  • Zaphod, Casey and Yuli Ban like this

#75
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 20,495 posts
  • Location: New Orleans, LA

^ Well that makes sense. All that we hear about GPT-2 using 1.5 billion parameters to achieve something remarkable is only that way because GPT-2 is a disembodied neural network. It has to use a much larger model to achieve some level of contextual understanding because it, itself, has never "experienced" any of the things it's writing about.

 

Humans only need so large a language model because we have a cognitive web in our brains. If you were to relate our brain functions to neural models, there really is no singular "image-recognition" model or "spatial understanding" model. They're all interconnected. Some are more connected than others, but all are connected nonetheless. Improving one improves them all recursively.

 

The body experiences what various things are. I can taste, smell, and see apples in order to form words and stories about apples. I don't need to read 50 million words just to gain a rough understanding of apples. I can just eat an apple, remember that experience for life, and learn the word for apple.

 

Using brain data essentially cuts out the middleman between parsing so much data and having a machine experience these things itself. So it's no wonder.


  • Casey likes this

And remember my friend, future events such as these will affect you in the future.


#76
starspawn0

    Member

  • Members
  • 976 posts
Three items:

* First is this one: https://arxiv.org/abs/1904.08377

Basically, they used gaze data to improve the generalization performance of self-driving cars:

Prediction error in steering commands is reduced by 23.5% compared to uniform dropout. Running closed loop in the simulator, the gaze-modulated dropout net increased the average distance travelled between infractions by 58.5%.


That's a big improvement! (A rough sketch of what "gaze-modulated dropout" means is at the end of this post.)

Gaze and/or eye-tracking data should properly be considered "cognitive data". It's like a low-grade kind of BCI data.

If such improvements are possible just from gaze, then you should expect much larger improvements using next-gen BCIs. Not EEG, necessarily -- but, rather, BCIs that can scan 10,000+ voxels, with depth, at a temporal resolution of 100 milliseconds. Such BCIs don't yet exist, but soon will.

* Next is Facebook's new virtual assistant for its "Portal and Oculus Products": https://www.theverge...-vr-ar-products

What does that have to do with BCIs? Well, Facebook's "Reality Labs" is working on a BCI, and I'd say that's close enough to Oculus that they have at least entertained the idea of connecting the BCI to this new virtual assistant. It would make sense. BCIs should vastly reduce ambiguity and give a lot of useful data to train the assistant, to make it "smarter". So, I think that's going to be on the drawing board, if it isn't already.

* New method to measure brain activity using MRI with temporal resolution down to 100 milliseconds: https://www.eurekale...h-fnm040919.php

That's the kind of temporal resolution you will need to really start to extract useful signals from brain data to train AI models; though, ordinary FMRI, with worse temporal resolution, can probably still tell you a lot about what the brain is doing -- and would also be useful for training AI models.
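Here's that rough sketch of gaze-modulated dropout I mentioned above (my own reading of the idea, not the paper's implementation): dropout whose per-location drop probability is lowered wherever the human driver's gaze heat map is high, so the network is pushed to rely on the regions a human actually attends to.

```python
import torch

def gaze_modulated_dropout(features, gaze_map, base_drop=0.5, min_drop=0.1):
    """Drop CNN activations with lower probability where the (assumed) gaze
    saliency map says the human driver was looking.

    features: (batch, channels, H, W) activations from a vision backbone
    gaze_map: (batch, 1, H, W) saliency in [0, 1], higher = more gaze
    """
    # Per-pixel drop probability: base_drop away from gaze, min_drop at gaze peaks.
    drop_prob = base_drop - (base_drop - min_drop) * gaze_map
    keep_mask = torch.bernoulli(1.0 - drop_prob)            # (batch, 1, H, W)
    return features * keep_mask / (1.0 - drop_prob)         # rescale like dropout

# Toy usage with random stand-ins for real activations and eye-tracking heatmaps.
feats = torch.randn(4, 64, 20, 40)
gaze = torch.rand(4, 1, 20, 40)
out = gaze_modulated_dropout(feats, gaze)
```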
  • Zaphod and Yuli Ban like this

#77
starspawn0

    Member

  • Members
  • 976 posts
Wired article from late last month on using eye-tracking data to improve AI models:
 
https://www.wired.co...omputers-learn/

Nora Hollenstein, a graduate student at ETH Zurich, thinks our reader’s gaze could be useful for another task: helping computers learn to read. Researchers are constantly looking for ways to make artificial neural networks more brainlike, but brain waves are noisy and poorly understood. So Hollenstein looked to gaze as a proxy. Last year she developed a dataset that combines eye tracking and brain signals gathered from EEG scans, hoping to discover patterns that can improve how neural networks understand language. “We wondered if giving it a bit more humanness would give us better results,” Hollenstein says.

Last fall, Hollenstein and collaborators at the University of Copenhagen used her dataset to guide a neural network to the most important parts of a sentence it was trying to understand. In deep learning, researchers typically rely on so-called attention mechanisms to do this, but they require large amounts of data to work well. By adding data around how long our eyes linger on a word, the researchers helped the neural networks focus on critical parts of a sentence as a human would. Gaze, the researchers found, was useful for a range of tasks, including identifying hate speech, analyzing sentiment, and detecting grammatical errors. In subsequent work Hollenstein found that adding more information about gaze, such as when eyes flit between words to confirm a relationship, helped a neural network better identify entities, like places and people.
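A minimal sketch of how fixation durations could steer a model's attention (my own illustration of the idea in the excerpt, not Hollenstein's code): normalize per-token fixation times into a distribution and penalize the model's attention for diverging from it.

```python
import torch
import torch.nn.functional as F

# Sketch of the idea in the excerpt above: nudge a model's attention over words
# toward the distribution of human fixation durations, as an auxiliary loss.
def gaze_attention_loss(attn_weights, fixation_ms):
    """attn_weights: (batch, seq_len) model attention over tokens (rows sum to 1)
    fixation_ms:  (batch, seq_len) total fixation duration per token"""
    gaze_dist = fixation_ms / fixation_ms.sum(dim=1, keepdim=True).clamp(min=1e-8)
    return F.kl_div(attn_weights.clamp(min=1e-8).log(), gaze_dist,
                    reduction="batchmean")

# total_loss = task_loss + lambda_gaze * gaze_attention_loss(attn, fixations)
```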


  • Yuli Ban likes this

#78
starspawn0

    Member

  • Members
  • 976 posts

I decided to show up again to post this (Yes, Yuli, you are right about why I will have to cut back my social media presence for the next couple weeks; not cut it back to zero, but to a more manageable level, to take care of responsibilities.):

 

https://cns.utexas.e...e-a-human-audio

 

It's a show about the work of Alex Huth and his grad student, Shaillee Jain, on building an AI language-understanding system from FMRI data. It is more in-depth than the article I linked to above.

 

As I've said before (over the past 3 years), a sufficiently rich amount of brain data, at a high enough spatial and temporal resolution, could enable this kind of AI to be built -- by basically anyone with a little programming and ML talent (not much; a few courses and some programming experience should suffice):

 

https://www.reddit.c...ual_assistants/


  • Zaphod, Yuli Ban and Alislaws like this

#79
starspawn0

    Member

  • Members
  • 976 posts
A few more new items:

* Here's a Tweet thread on using Deep Learning to learn the dynamics of a single neuron... and all the complex processes that go on across its dendritic tree:

https://mobile.twitt...890349578829825

1) Neurons in the brain are bombarded with massive synaptic input distributed across a large tree like structure - its dendritic tree.
During this bombardment, the tree goes wild

2) These beautiful electricity waves are the result of many different ion channels opening and closing, and electric current flowing IN and OUT and ALONG the neuron.

This is complex, a lot of things are going on, and the question arises - how can we understand this complexity?

3) The approach we take in the paper is the attempt to compress all of this complexity inside as-small-as-possible deep artificial neural network.

We simulate a cell with all of its complexity, and attempt to fit a DNN to the neuron's input-output transformation

4) We successfully manage to compress the full complexity of a neuron, which is usually described by more than 10,000 coupled and nonlinear differential equations, into a smaller, but still very large, deep network.

What biological mechanism is responsible for this complexity?

5) The first candidate that comes to mind is the NMDA ion channel that is present in all synapses. So we remove NMDA ion channels and repeat the experiment keeping only AMPA synapses

Turns out, now we only need a very small artificial net to mimic the input-output transformation

6) So it turns out that most of the processing complexity of a single neuron is the result of two specific biological mechanisms - the distributed nature of the dendritic tree coupled with the NMDA ion channel.
Take away one of those things - and a neuron turns to a simple device

7) One additional advantage deep neural networks have compared to thousands of complicated differential equations is the ability to visualize their inner workings.

The simplest method is to look at the first layer weights of the neural network:

etc.


Yet more evidence that Deep Learning is going to be good at vacuuming up the dynamics -- the rules -- behind how the brain functions, and crystallizing them into a compact, abstract model.


* Brain-mediated Transfer Learning of Convolutional Neural Networks

https://arxiv.org/abs/1905.10037

Basically, they used FMRI data to improve the generalization performance of convolutional neural nets.
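One simple way this kind of "brain-mediated" training can be set up (a sketch of the general idea, not necessarily the paper's exact method): share a CNN backbone between the image task and an auxiliary head that predicts FMRI voxel responses to the same images, and train both losses jointly.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Sketch: use fMRI as an auxiliary signal by sharing a CNN backbone between the
# image task and a head that predicts voxel responses to the same images.
N_VOXELS, N_CLASSES = 5_000, 10

backbone = models.resnet18(weights=None)
feat_dim = backbone.fc.in_features
backbone.fc = nn.Identity()                    # expose 512-d features

class_head = nn.Linear(feat_dim, N_CLASSES)    # the actual task
voxel_head = nn.Linear(feat_dim, N_VOXELS)     # auxiliary fMRI prediction

params = (list(backbone.parameters()) + list(class_head.parameters())
          + list(voxel_head.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-4)

images = torch.randn(8, 3, 224, 224)           # stand-ins for real data
labels = torch.randint(0, N_CLASSES, (8,))
fmri   = torch.randn(8, N_VOXELS)              # recorded responses to `images`

features = backbone(images)
loss = (nn.functional.cross_entropy(class_head(features), labels)
        + 0.5 * nn.functional.mse_loss(voxel_head(features), fmri))
loss.backward()
optimizer.step()
```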
  • Yuli Ban and Hyndal_Halcyon like this

#80
starspawn0

    Member

  • Members
  • 976 posts
More work on using FMRI data to improve Natural Language Processing:

https://arxiv.org/abs/1905.11833

We use brain imaging recordings of subjects reading complex natural text to interpret word and sequence embeddings from 4 recent NLP models - ELMo, USE, BERT and Transformer-XL. We study how their representations differ across layer depth, context length, and attention type. Our results reveal differences in the context-related representations across these models. Further, in the transformer models, we find an interaction between layer depth and context length, and between layer depth and attention type. We finally use the insights from the attention experiments to alter BERT: we remove the learned attention at shallow layers, and show that this manipulation improves performance on a wide range of syntactic tasks. Cognitive neuroscientists have already begun using NLP networks to study the brain, and this work closes the loop to allow the interaction between NLP and cognitive neuroscience to be a true cross-pollination.


The second-named author, Leila Wehbe, is a new assistant professor at CMU, and I believe she also got her Ph.D. at CMU. If memory serves, she was Jack Gallant's postdoc at U.C. Berkeley.

This paper is interesting, but I think a lot more can be gained by using massively more data, and just using it to directly train models.
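For reference, here is what "removing the learned attention at shallow layers" amounts to in a toy single-head setting (my own illustration, not the authors' code): replace the learned softmax attention pattern with a fixed uniform average over tokens.

```python
import torch
import torch.nn.functional as F

# Toy contrast between learned attention and the "uniform" replacement the
# paper applies at shallow layers (single head, no projections, for clarity).
def learned_attention(q, k, v):                     # (batch, seq, dim) each
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    return F.softmax(scores, dim=-1) @ v

def uniform_attention(v):                           # ignores q and k entirely
    seq_len = v.shape[1]
    weights = torch.full((seq_len, seq_len), 1.0 / seq_len)
    return weights @ v                              # plain average over tokens

x = torch.randn(2, 16, 64)
out_learned = learned_attention(x, x, x)
out_uniform = uniform_attention(x)                  # use this in shallow layers
```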
  • Yuli Ban likes this




