Gray Newell (Valve's Gabe Newell's son) talks about the future of BCIs



#1
starspawn0
  • Members
  • 865 posts
Typo: should be "Gray", not "Gary".

[YouTube video]

He says that the technology is moving along a lot, lot faster than the public is aware of. He mentions his guess as to the timeline. I'd have to listen to it again, but I think he said that in 5 years the tech will be there for high-bandwidth connection and decoding, but that it might take 10 years before we see products. I could be wrong, but I think that timeline had to do with invasive methods. Non-invasive hardware will be here a lot sooner. And I'm not talking about EEG, obviously, which has a very poor signal-to-noise ratio, and from which you can only get a few bits of information each second. Ordinary fNIRS is also piss-poor. What I'm talking about is what comes after these technologies: hardware that will deliver high spatial and temporal resolution, at depth (not just from the surface of the brain), and not just using the BOLD signal. People in neuroscience, HCI, gaming, etc. are generally unaware of what is under development; their viewpoint is shaped by EEG and ordinary fNIRS.

He also discusses writing to or affecting the brain, not just reading. He mentions some technologies he learned about from his father.

He seems to think the public should be talking more about this right now, and that it should be "on the political stage"; and he says people should be talking about the socioeconomic impact. He mentions how the "top 1% of the top 1%" will have access first, and talks like he thinks it will be like the movie Limitless -- those people will have their IQs pushed into the stratosphere. As I recall, he says that at first the impact will be like an advanced form of Augmented Reality; but that it will rapidly take off, and be more like AI / AGI in force and power.

The way he talks, he thinks this will have a greater economic impact than just about anything else out there.

It seems there are some things he is privy to that he wasn't able to talk about. He knew about Neuralink's work, for example, and couldn't discuss it.

Addendum: And, as I have said before, AI will progress rapidly once we have BCIs that can extract enough data. Those two fields -- BCIs and AI -- will develop in tandem.

Addendum 2: Having thought about this a little, I think he is too optimistic about the intelligence-amplification stuff. It's going to take a lot longer. Where I see the biggest near-term impacts is in things like: training AI systems to read text, semantically annotating webpages, reducing ambiguity in HCI, better videogame interaction, robot control, and so on. Again, not using EEG or ordinary fNIRS -- using next-gen non-invasive BCIs.
  • Yuli Ban, Erowind, Alislaws and 1 other like this

#2
starspawn0
  • Members
  • 865 posts

Incidentally, there was a recent paper on brain-cloud interfaces that included KurzweilAI's Amara Angelica as a coauthor:

 

https://www.frontier...2019.00112/full

 

And, of course, it had a nanotech spin, given the authors.


  • waitingforthe2020s likes this

#3
starspawn0
  • Members
  • 865 posts
This is incredible:

https://figshare.com..._00_mp4/8014577

It's a video supplement to this research:

https://www.biorxiv....0.1101/614248v1

The work is about using neural population responses from the macaque secondary auditory cortex to decode what the macaque is hearing. Just based on the neural signals in that part of the brain, they are able to pretty faithfully reconstruct the sound using machine learning (a neural network).

I guess the way this differs from similar experiments you have seen with humans is that, because these are lab animals, the researchers can use riskier neural recording methods than they could ever use on a human patient. These give a much higher-bandwidth recording, and also allow for the collection of more data to train ML models.
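
To give a rough idea of what that kind of decoding pipeline looks like, here is a minimal sketch in Python/PyTorch: a small regression network that maps windows of binned population firing rates to mel-spectrogram frames, which you would then invert back into audio. Everything in it (channel count, layer sizes, the Griffin-Lim inversion step) is my own illustrative assumption, not a detail taken from the paper.

# A toy decoder: binned firing rates -> spectrogram frames (illustrative only).
import torch
import torch.nn as nn

n_neurons  = 96    # assumed number of recorded units/channels
n_mel_bins = 64    # assumed spectrogram resolution
window     = 9     # neural time bins of context per audio frame

class SpectrogramDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_neurons * window, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, n_mel_bins),
        )

    def forward(self, x):  # x: (batch, n_neurons * window)
        return self.net(x)

# Random stand-ins for real (firing-rate window, spectrogram frame) pairs.
X = torch.randn(10000, n_neurons * window)
Y = torch.randn(10000, n_mel_bins)

model   = SpectrogramDecoder()
opt     = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(X), Y)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: MSE {loss.item():.4f}")

# The predicted spectrogram frames would then be inverted to a waveform
# (e.g. with Griffin-Lim) to get the reconstructed audio heard in the video.

In practice you would train on time-aligned (neural activity, sound) pairs and evaluate on held-out sounds; the point is that high-bandwidth invasive recordings turn this into a fairly ordinary regression problem.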

What this shows is that, as BCIs get better and better, we will soon have much better recordings of the brain to work with, and we should get decoding results similar to what you see in that video.

And if that is possible, then perhaps imagined speech can also be decoded with high accuracy -- the work in the above paper and video should be seen as a proof-of-concept step towards that goal.
  • waitingforthe2020s likes this



