BCIs & Neurotechnology News and Discussions

cyberkinesis, BCI, psychotronics, transhumanism, bionics, human enhancement, brain computer interface, transhuman, cyborgs, neuroscience

55 replies to this topic

#41
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 20,771 posts
  • Location: New Orleans, LA

Feds fund creation of headset for high-speed brain link

A Rice University-led team of neuroengineers is embarking on an ambitious four-year project to develop headset technology that can directly link the human brain and machines without the need for surgery. As a proof of concept, the team plans to transmit visual images perceived by one individual into the minds of blind patients.
"In four years we hope to demonstrate direct, brain-to-brain communication at the speed of thought and without brain surgery," said Rice's Jacob Robinson, the lead investigator on the $18 million project, which was announced today as part of the Defense Advanced Research Projects Agency's (DARPA) Next-Generation Nonsurgical Neurotechnology (N3) program.
Sharing visual images between two brains may sound like science fiction, but Robinson said a number of recent technological breakthroughs make the idea feasible. Just how feasible is the question DARPA hopes to address with a series of N3 awards to the Rice-led team and five others that have proposed different technological solutions for the broader challenge of connecting brains and machines.


  • Casey, Alislaws and starspawn0 like this

And remember my friend, future events such as these will affect you in the future.


#42
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 20,771 posts
  • Location: New Orleans, LA

[Image: noise-tag-diagram]




#43
starspawn0

    Member

  • Members
  • 1,319 posts

The BCI revolution is almost here. Although it is an "eternalist" idea, I can see various flavors of "virtual immortality" being a big deal in the not-too-distant future. Neuroscientists and others would scoff at the idea ("we don't even know what all the neurons are in the brain!"), but really nobody knows what the limits of brain-imitation and linking are, or if many new ideas will be needed to achieve them. Anybody who says otherwise is just guessing.

There are several ways this could work:

* One idea would be to wear a next-gen, high-resolution BCI basically all the time, and have it scan more than 10,000 voxels every 10 to 100 milliseconds, for tens of thousands of hours. As I have indicated before (and as some neuroscientists have conjectured), after about 200 hours of recordings, you can capture a large chunk of the brain's language understanding capacity, including many parts of world knowledge. After about 1,000 hours, you will probably start getting enough information to recover traces of episodic and autobiographical memory -- basically, everything you see in a given day will trigger brain processes corresponding to recognition / familiarity, and even some amount of "simulation". After about 10,000 hours, those "atoms" of memory start to tell a story -- if not full episodic memories, then at least the intuition and feeling of familiarity.

How much data are we talking about? Let's see:

(10,000 hours) * (3,600 seconds per hour) * (10 recordings per second) * (10,000 voxels, one floating-point number per voxel) = 3.6 trillion floating-point numbers.

Even if only 1% of that data is useful (the rest being "noise" or "uninformative"), that would still leave you with 36 billion floating-point numbers of highly informative data about you.
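
If you want to check that arithmetic, here it is as a few lines of Python (assuming, say, one 32-bit float per voxel per scan; the 1% figure is of course just my guess):

    hours = 10_000
    seconds_per_hour = 3_600
    scans_per_second = 10        # one scan per 100 ms (the slow end of 10-100 ms)
    voxels = 10_000

    total_floats = hours * seconds_per_hour * scans_per_second * voxels
    print(f"{total_floats:.1e} floats")                # 3.6e+12 -> 3.6 trillion
    print(f"{total_floats * 4 / 1e12:.1f} TB raw")     # ~14.4 TB at 4 bytes/float
    print(f"{total_floats * 0.01:.1e} useful floats")  # 3.6e+10 -> 36 billion

So the raw recording is around 14 terabytes -- big, but hard-drive cheap, not supercomputer territory.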

And that doesn't count the sensory data that you might also want to record in tandem (though some of it can be derived from the brain data directly -- e.g. you can recover the images being seen by "decoding" retinal ganglion responses).

A data scientist and/or neuroscientist could then use that data to build an abstract neural-net model of you: one that attempts to predict your future brain states, given the current brain state and sensory stream.
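
To make "predict future brain states" concrete, here is a toy PyTorch sketch. Everything in it is a made-up placeholder (the layer sizes, the sensory embedding, the whole architecture); nobody knows yet what the right model looks like:

    import torch
    import torch.nn as nn

    VOXELS = 10_000    # voxels per scan, as in the scenario above
    SENSORY = 512      # hypothetical embedding of the paired sensory stream

    class BrainStatePredictor(nn.Module):
        def __init__(self, hidden=2048):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(VOXELS + SENSORY, hidden),
                nn.ReLU(),
                nn.Linear(hidden, VOXELS),  # predicted next scan
            )

        def forward(self, brain_state, sensory):
            return self.net(torch.cat([brain_state, sensory], dim=-1))

    model = BrainStatePredictor()
    loss_fn = nn.MSELoss()
    # Training pairs are consecutive scans from the 10,000-hour recording:
    #   loss = loss_fn(model(scan_t, sensory_t), scan_t_plus_1)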

The biological you will continue to age, while the virtual version will remain ever young.

* You can attempt a biological version of this idea, where you use the recording -- somehow -- to alter the connections in a real brain (your clone), until it is basically you. It will take a while before we have the read-write tech for that.


* Another option is to connect your brain to a supercomputer, whereby you share some of your cognitive load with it. Over time, the supercomputer does more and more of the work, and the biological you fades and withers. You will, in a sense, be "uploaded to the cloud".

* Yet another option would be to directly link your brain with another brain somewhere, wirelessly -- maybe a clone of yourself, sitting in a chamber somewhere. Only its brain would be active. Like with the upload example, you could arrange things so that, over time, the other you's brain does more and more of the work, until it becomes you.


And the great thing is: for some of these, even if the data science / algorithms / compute part of the equation takes a lot of new innovation, you can still do the scanning part now, while you are still young. The "upload" can wait several years or even decades.


  • Raklian, Casey, Yuli Ban and 2 others like this

#44
Casey

    Member

  • Members
  • 679 posts


Scientists create mind-controlled hearing aid

A mind-controlled hearing aid that allows the wearer to focus on particular voices has been created by scientists, who say it could transform the ability of those with hearing impairments to cope with noisy environments.
The device mimics the brain’s natural ability to single out and amplify one voice against background conversation. Until now, even the most advanced hearing aids have worked by boosting all voices at once, which can be experienced as a cacophony of sound by the wearer, especially in crowded environments.
Nima Mesgarani, who led the latest advance at Columbia University in New York, said: “The brain area that processes sound is extraordinarily sensitive and powerful. It can amplify one voice over others, seemingly effortlessly, while today’s hearing aids still pale in comparison.”
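
In outline, the trick is: separate the mixture into per-speaker streams, decode from the listener's brain activity an envelope of the speech they are attending to, then boost whichever stream matches best. A toy numpy sketch of the selection step only (the speech-separation and neural-decoding models, the hard parts, are assumed to exist and are not shown):

    import numpy as np

    def envelope(audio, frame=160):
        # Crude amplitude envelope: RMS over short frames.
        n = len(audio) // frame * frame
        return np.sqrt((audio[:n].reshape(-1, frame) ** 2).mean(axis=1))

    def amplify_attended(streams, neural_env, gain=4.0):
        # Boost the separated stream whose envelope correlates best with
        # the envelope decoded from the listener's brain activity.
        # (Envelopes are assumed time-aligned and of equal length.)
        scores = [np.corrcoef(envelope(s), neural_env)[0, 1] for s in streams]
        attended = int(np.argmax(scores))
        out = sum(s * (gain if i == attended else 1.0)
                  for i, s in enumerate(streams))
        return out / (gain + len(streams) - 1)  # keep overall level roughly flat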


This would be amazingly beneficial for me, as someone whose cocktail party effect disappeared in early 2000, when I was 12 (dyspraxia is often associated with genes that reduce the brain's ability to filter out toxins from the environment, so maybe that's what happened, I dunno). Any thoughts on when this might actually be a product one can buy? The mid-2020s? Early 2030s?


Either way, I'm glad that those who were born x years later than I was will have to put up with the issue for x years fewer. A dyspraxic kid born in 2017 would only have to wait until age 13 to get this problem resolved if it gets released in, say, 2030, rather than my own 43.


  • Sephiroth6633 likes this

#45
Hyndal_Halcyon

    Member

  • Members
  • 92 posts

How will we protect ourselves from random thoughts caused by the natural, persistent misfiring of neurons?


Maybe that's innately part of the brain's design. Getting rid of random thoughts would hurt our creativity, don't you think? How about giving brain noise a sliding scale: negative infinity means self-inflicted deadly seizures, zero makes the brain a purely programmable substrate without sentience, and positive infinity causes hallucinations.


As you can see, I'm a huge nerd who'd rather write about how we can become a Type V civilization than study for my final exams (gotta fix that).

But to put an end to this topic, might I say that the one and only greatest future achievement of humankind is when it finally becomes posthumankind.


#46
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 20,771 posts
  • Location: New Orleans, LA

Elon Musk unveils Neuralink’s plans for brain-reading ‘threads’ and a robot to insert them

Elon Musk’s Neuralink, the secretive company developing brain-machine interfaces, showed off some of the technology it has been developing to the public for the first time. The goal is to eventually begin implanting devices in paralyzed humans, allowing them to control phones or computers.
The first big advance is flexible “threads,” which are less likely to damage the brain than the materials currently used in brain-machine interfaces. These threads also create the possibility of transferring a higher volume of data, according to a white paper credited to “Elon Musk & Neuralink.” The abstract notes that the system could include “as many as 3,072 electrodes per array distributed across 96 threads.”
The threads are 4 to 6 μm in width, which makes them considerably thinner than a human hair. In addition to developing the threads, Neuralink’s other big advance is a machine that automatically embeds them.


  • Alislaws likes this



#47
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 20,771 posts
  • Location: New Orleans, LA

Man with brain implant on Musk’s Neuralink: “I would play video games”

  • On Musk's plans: "When I heard he was working with a neural interface, I said I would be there in a heartbeat."
  • On the dangers: "If you are comparison shopping for brain implants, I think the Utah array is less risky."
  • On voluntary implants: "Honestly, I would have wanted one before my injury."



#48
starspawn0

    Member

  • Members
  • 1,319 posts
Thought I would drop a few links:

Facebook post about their BCI work:

https://tech.fb.com/...-saying-a-word/

Tech Review's take:

https://www.technolo...eads-your-mind/

Brain-computer interfaces are developing faster than the policy debate around them

https://www.theverge...olicy-neuralink

And if you think it's fast now, just wait until these are in consumer hands and the data rolls in to improve models!
  • Raklian and Yuli Ban like this

#49
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 20,771 posts
  • Location: New Orleans, LA

This device can hear the voice inside of your head

He’s worked on a lunar rover, invented a 3D printable drone, and developed an audio technology to narrate the world for the visually impaired.
 
But 24-year-old Arnav Kapur’s newest invention can do something even more sci-fi: it can hear the voice inside your head.
Yes, it’s true. AlterEgo, Kapur’s new wearable device system, can detect what you’re saying when you’re talking to yourself, even if you’re completely silent and not moving your mouth.
The technology involves a system of sensors that detect the minuscule neuromuscular signals sent by the brain to the vocal cords and the muscles of the throat and tongue. These signals are sent whenever we speak to ourselves silently, even if we make no sound. The device feeds the signals through an A.I., which “reads” them and turns them into words. The user hears the A.I.’s responses through a device that conducts sound through the bones of the skull and ear, making them inaudible to others. Users can also respond out loud using artificial voice technology.

According to Kapur’s research, the system is about 92 percent accurate.
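
For intuition about the "reads them and turns them into words" step: a generic version could be a small classifier over windows of the multi-channel electrode signal. To be clear, this is a sketch, not Kapur's actual model, and the channel count, window length, and vocabulary size are all assumptions:

    import torch
    import torch.nn as nn

    CHANNELS = 7    # electrodes along the jaw/throat (assumed)
    WINDOW = 500    # signal samples per classification window (assumed)
    VOCAB = 20      # small silent-speech vocabulary (assumed)

    classifier = nn.Sequential(
        nn.Conv1d(CHANNELS, 32, kernel_size=7, stride=2), nn.ReLU(),
        nn.Conv1d(32, 64, kernel_size=7, stride=2), nn.ReLU(),
        nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        nn.Linear(64, VOCAB),
    )

    window = torch.randn(1, CHANNELS, WINDOW)  # one window of raw signal
    word_logits = classifier(window)           # scores over the vocabulary
    print(word_logits.argmax(dim=-1))          # predicted word index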


The first thing I wonder is whether this also works for non-speech signals. If so, just imagine the applications for neuro-generated music!


  • Casey and starspawn0 like this



#50
starspawn0

    Member

  • Members
  • 1,319 posts
I've been seeing articles recently about Neuralink with comments like, "There are still many years of work to be done before we can translate thoughts into words, says ETH Zurich researcher such-and-such..." or "Despite advances in reading the brain, decoding what is read is still decades away."

Really?

I remember hearing similar talk before:

* "Good machine translation is still several decades away." -- maybe not those exact words, but something along those lines. Deep Learning has swept away other approaches in the span of a few years. It's not perfect yet, but for generic newspaper articles between common language pairs, it's pretty good; 5 to 10 years ago, it was unreadable.

* "Good speech recognition is still a long ways away." -- It's actually pretty good now (not perfect; but good enough for my uses), even using a crappy smartphone mic and small memory footprint model.

* "Deep Learning is not going to revolutionize Natural Language Processing the way it did image recognition. You can't put a $#%*! sentence into a vector!" -- and now, just about 3 or 4 years after that comment was made, DL has been used to crack the "average case" for some of the hardest Natural Language Understanding problems.

In each case, the thing that led to the progress was having massive amounts of data. If you have enough data, you don't even really need that much expertise in the area you seek to revolutionize. Some of the people who helped revolutionize language translation, for example, were not experts in that field -- some didn't even speak all the languages they built models to translate.

Similarly, here is what I think will happen with BCIs: at first, when there isn't much data to train models, the skeptics will be proved right -- which will embolden them. But then, as thousands of people start using BCIs and their data is harvested to train models, there will be a rapid increase in the quality of the translation from thought to text (e.g. imagined speech recognition). The progress will mirror what we saw with language translation and speech recognition. The skeptics will initially double down, just as they did when NNs were revolutionizing NLP; eventually they will go silent, once they realize that Deep Learning models, thrown enough data, can eat their lunch on brain decoding.

It will be fun to watch this replay of history...

Addendum: And, no, imagined speech decoding really isn't that different in complexity from all the others -- NLU/NLP/NLG, for example, were supposed to be f*****g hard to make progress on.

"But different people imagine speech differently. How can your model handle that!" -- Yes, but people also speak differently, yet speech recognition neural nets can be trained to handle a large number of different accents and speaking styles, all using the same model.

"If you close or move your eyes, it changes your brain activity; tap your feet, brain activity changes; sit or stand, brain activity changes. So many ways things to keep account of." -- Yet, for example, speech recognition can now handle noisy environments, and even recognize what you muddle your words a little. When NNs are applied to fickle and noisy brain data, the progress will be just like what we saw with speech recognition. And the skeptics are not prepared for it. They are still thinking about how hard it is, and how much it sucks when you only have a small number of hours of recordings from a small number of individuals; they aren't thinking about what happens when you have 10,000+ hours of data -- nor can they imagine what a difference it will make.
  • Casey and Yuli Ban like this

#51
starspawn0

    Member

  • Members
  • 1,319 posts
Translating Between Brain and World: Decoding Biological Neural Nets with Artificial Neural Nets (Two Six Labs)

https://www.twosixla...al-neural-nets/

We asked the question, “Can we use an artificial neural network to link the signals of these biological neurons to a map of the mouse’s physical location?” That is, if we reverse engineer the biological neural network, can we read a mouse’s mind to know where it is?
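
The setup is simple to caricature: regress the mouse's position from binned spike counts with a small neural net. This sketch uses synthetic place-cell-like data, not their actual dataset:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    n_samples, n_neurons = 5000, 100
    positions = rng.uniform(0, 1, size=(n_samples, 2))  # true (x, y)
    centers = rng.uniform(0, 1, size=(n_neurons, 2))    # place-field centers
    # Each synthetic neuron fires more when the mouse is near its center.
    rates = np.exp(-np.sum((positions[:, None] - centers) ** 2, axis=2) / 0.02)
    spikes = rng.poisson(10 * rates)                    # binned spike counts

    decoder = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
    decoder.fit(spikes[:4000], positions[:4000])
    print("held-out R^2:", decoder.score(spikes[4000:], positions[4000:]))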


  • Yuli Ban likes this

#52
starspawn0

    Member

  • Members
  • 1,319 posts
Brain2Char: A Deep Architecture for Decoding Text from Brain Recordings

https://arxiv.org/abs/1909.01401

In 3 participants tested here, Brain2Char achieves 10.6%, 8.5% and 7.0% Word Error Rates (WER) respectively on vocabulary sizes ranging from 1200 to 1900 words. Brain2Char also performs well when 2 participants silently mimed sentences. These results set a new state-of-the-art on decoding text from brain and demonstrate the potential of Brain2Char as a high-performance communication BCI.


This is from the Chang lab at UCSF, the same group you may have read about recently for their breakthrough speech decoding from a BCI. The work here, on the other hand, maps BCI output to text, not sound. That's probably the harder challenge: when you hear sound, it can be noisy and garbled and you will still understand it, but you don't get to "cheat" like that with text output.
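
For reference, Word Error Rate is just word-level edit distance divided by the length of the reference, so 10.6% means roughly one error per ten words:

    def wer(reference: str, hypothesis: str) -> float:
        ref, hyp = reference.split(), hypothesis.split()
        # Dynamic-programming edit distance over words.
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
                d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
        return d[len(ref)][len(hyp)] / len(ref)

    print(wer("the brain decodes speech", "the brain decodes peach"))  # 0.25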
  • Yuli Ban and Alislaws like this

#53
starspawn0

    Member

  • Members
  • 1,319 posts
Facebook bought CTRL-Labs for between $500 million and $1 billion:

https://www.bloomber...-with-your-mind

I suspect they were not just after the tech but also the talent, which they can put to work on their BCI project headed by Mark Chevillet.

Don't be surprised if Google, Apple, or Microsoft tries to acquire Mary Lou Jepsen's Openwater at some point in the future.

#54
caltrek

    Member

  • Members
  • 10,053 posts

Enhancing functional abilities and cognitive integration of the lower limb prosthesis

https://stm.sciencem...11/512/eaav8939

Introduction:

(Science Magazine) Good vibrations

The lack of sensory feedback from the leg prosthesis in lower limb amputees is associated with risk of falls, low mobility, and perception of the prosthesis as an external object. Here, Petrini et al. tested a leg neuroprosthesis, which provided real-time, on-demand tactile sensory feedback through nerve stimulation in three transfemoral amputees. The stimulation improved mobility, decreased falling episodes, and increased the perception of the prosthesis as part of the body. Active complex tasks were accomplished with reduced effort when the nerve stimulation was turned on. The results suggest that real-time nerve stimulation could help restore natural sensation in lower leg amputees.

Abstract

Lower limb amputation (LLA) destroys the sensory communication between the brain and the external world during standing and walking. Current prostheses do not restore sensory feedback to amputees, who, relying on very limited haptic information from the stump-socket interaction, are forced to deal with serious issues: the risk of falls, decreased mobility, prosthesis being perceived as an external object (low embodiment), and increased cognitive burden. Poor mobility is one of the causes of eventual device abandonment. Restoring sensory feedback from the missing leg of above-knee (transfemoral) amputees and integrating the sensory feedback into the sensorimotor loop would markedly improve the life of patients. In this study, we developed a leg neuroprosthesis, which provided real-time tactile and emulated proprioceptive feedback to three transfemoral amputees through nerve stimulation. The feedback was exploited in active tasks, which proved that our approach promoted improved mobility, fall prevention, and agility. We also showed increased embodiment of the lower limb prosthesis (LLP), through phantom leg displacement perception and questionnaires, and ease of the cognitive effort during a dual-task paradigm, through electroencephalographic recordings. Our results demonstrate that induced sensory feedback can be integrated at supraspinal levels to restore functional abilities of the missing leg. This work paves the way for further investigations about how the brain interprets different artificial feedback strategies and for the development of fully implantable sensory-enhanced leg neuroprostheses, which could drastically ameliorate life quality in people with disability.


The principles of justice define an appropriate path between dogmatism and intolerance on the one side, and a reductionism which regards religion and morality as mere preferences on the other.   - John Rawls


#55
starspawn0

    Member

  • Members
  • 1,319 posts
The US military wants super-soldiers to control drones with their minds [And allow feedback for sending information into their brain]

https://www.technolo...ter-interfaces/

This is my favorite part:

Current understanding is that neural tissue swells and contracts when neurons fire electrical signals. Those signals are what scientists record with EEG, a Utah array, or other techniques. APL's Dave Blodgett argues that the swelling and contraction of the tissue is just as good a signal of neural activity, and he wants to build an optical system that can measure those changes.

The techniques of the past couldn’t capture such tiny physical movements. But Blodgett and his team have already shown that they can see the neural activity of a mouse when it flicks a whisker. Ten milliseconds after a whisker flicks, Blodgett records the corresponding neurons firing using his optical measurement technique. (There are 1,000 milliseconds in a second, and 1,000 microseconds in a millisecond.) In exposed neural tissue, his team has recorded neural activity within 10 microseconds—just as quickly as a Utah array or other electrical methods.

The next challenge is to do all that through the skull. This might sound impossible: after all, skulls are not transparent to visible light. But near-infrared light can travel through bone. Blodgett’s team fires low-powered infrared lasers through the skull and then measures how the light from those lasers is scattered. He hopes this will let them infer what neural activity is taking place. The approach is less well proven than using electrical signals, but these are exactly the types of risks that DARPA programs are designed to take.


Blodgett is a colleague of Facebook's Mark Chevillet (the head of their BCI division) at JHU, and I think their groups work together. Furthermore, their approaches to BCIs are compatible. So, if Blodgett is successful, his work will inspire and guide Facebook in its eventual BCI efforts.

This DARPA project is, therefore, like a donation to Facebook: it reduces the amount of money Facebook has to invest in proving out these technologies.
  • eacao likes this

#56
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 20,771 posts
  • Location: New Orleans, LA

Brain-computer interface can cause changes in the brain after just one hour’s training

The interdisciplinary study examined the influence of two different types of BCI on the brains of test subjects with no prior experience of this technology. The first subgroup was given the task of imagining that they were moving their arms or feet -- in other words, a task requiring the use of the brain's motor system. The task given to the second group addressed the brain's visual center by requiring them to recognize and select letters on a screen. Experience shows that test subjects achieve good results in visual tasks right from the outset and that further training does not improve these results, whereas addressing the brain's motor system is much more complex and requires practice. In order to document potential changes, test subjects' brains were examined before and after each BCI experiment using magnetic resonance imaging (MRI, referred to in the study as magnetic resonance tomography).
"We know that intensive physical training affects the plasticity of the brain," says Dr. Till Nierhaus of the Max Planck Institute for Human Cognitive and Brain Sciences. Plasticity refers to the brain's ability to alter depending on how it is used. Scientists distinguish here between functional plasticity, where changes only occur in the intensity of the signals between the individual synapses, and structural plasticity. Structural plasticity refers to a change in nerve cells or even the forming of new nerve cells.







