
Mary Lou Jepsen physics demo of Openwater BCI tech at DLD 2018 New York



#1 starspawn0

https://m.facebook.c...id=132302867112

 

She doesn't demo their product, but she does demo some of the physics, shows what the chips and strap will look like, and shows video of their lab equipment.

 

And, most importantly, they had a brain imaging expert from the Max Planck Institute there to challenge her.  He didn't seem to doubt the physics; but pushed back a little on whether one could read minds, as all brains are a little different.  He said he thought for a single individual, yes; but for different individuals, it's trickier.  I've discussed this before -- there are solutions.



#2 starspawn0
The most interesting part begins about 5:45 in, where she shows a "brain laser". The way that works, I think, is that they projected a laser into a brain (or something optically similar to a brain); measured the light pattern that came out the other end; transferred it to a holographic plate; then shone a laser back through that plate, into the brain (this acts like a phase conjugate mirror); and reconstructed the coherent beam of light out the other end.

If you can make a "brain laser" like that, you can almost surely focus inside the brain itself: if that is a real brain, or something optically very similar to a real brain, then brain tissue doesn't distort light in such a way that you can't invert the holographic light pattern and focus at a point. In a live brain, the blood and tissue would move around and change the scatter profile; but over a 10 millisecond window, it should be more-or-less the same.
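
Here's a minimal numerical sketch of that phase-conjugation trick, in Python/NumPy, under my own simplifying assumptions (the scattering medium is modeled as a random complex transmission matrix, and the return pass uses its transpose, per optical reciprocity -- none of this is Openwater's actual setup): conjugate the recorded speckle, send it back through the same medium, and the light re-converges on the original spot.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1024  # number of optical modes on each side of the medium (illustrative)

# A multiply-scattering medium, modeled as a random complex transmission matrix.
T = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2 * N)

# Forward pass: light focused into a single input mode emerges as speckle.
x_in = np.zeros(N, dtype=complex)
x_in[N // 2] = 1.0
speckle = T @ x_in

# Phase conjugation: record the speckle, conjugate it, and send it back
# through the same medium (the transpose of T, by optical reciprocity).
recovered = T.T @ np.conj(speckle)

intensity = np.abs(recovered) ** 2
peak = intensity[N // 2]
background = np.delete(intensity, N // 2).mean()
print(f"focus enhancement over background: ~{peak / background:.0f}x")  # roughly N
```

The recovered peak beats the speckle background by a factor of roughly N, the number of optical modes you control -- one way to see why megapixel-scale light modulators matter for focusing through tissue.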

#3 Yuli Ban

Curious: have you ever managed to talk to Dr. Jepsen? IIRC, she was active on Quora last year and I'm unsure if she's still there.




#4 starspawn0

I have not.  She's in startup mode right now, and you won't get a good answer from her until she's ready to do an AMA.  If she does that, then I might ask her some questions.



#5 starspawn0
This is interesting:
 
https://en.m.wikiped...ical_ultrasound
 
I was a little worried about Mary Lou Jepsen's use of ultrasound in her BCI, thinking that if the frequency were something like 20 kilohertz, she wouldn't be able to scan enough voxels fast enough to make the thing work. The time between scans would have to be at least 1/20,000 of a second, so there wouldn't be enough time to scan that many voxels.

But if the frequencies used are on the order of 10 megahertz, then there's plenty of time to scan at least tens of thousands of voxels every 10 milliseconds.

For all I know, the scanning frequency can be pushed even higher, up to 100 megahertz or more.
In fact, it's known how to go up to a whopping 300 megahertz!:

https://www.nature.c...icles/srep28360
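
A quick back-of-envelope, assuming (purely my own illustrative number) that each voxel needs on the order of ten acoustic cycles of dwell time at the carrier frequency, shows why the carrier frequency matters so much here:

```python
# Voxels that fit into one 10 ms scan frame, assuming (hypothetically)
# ~10 acoustic cycles of dwell time per voxel at the carrier frequency.
FRAME_S = 10e-3        # one scan frame: 10 milliseconds
CYCLES_PER_VOXEL = 10  # assumed dwell time per voxel, in carrier cycles

for freq_hz in (20e3, 10e6, 100e6, 300e6):
    dwell_s = CYCLES_PER_VOXEL / freq_hz
    voxels = round(FRAME_S / dwell_s)
    print(f"{freq_hz / 1e6:>7.2f} MHz -> ~{voxels:,} voxels per frame")
```

At 20 kilohertz you'd get a couple dozen voxels per frame; at 10 megahertz and up, tens to hundreds of thousands -- consistent with the numbers above.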

There are still some tissue elasticity issues, and issues about time delay, that I'm still puzzling over...

....

The more I think about this, the more I'm convinced that her brain scanner would have been absolutely impossible 10 years ago. You need ultra-high-resolution LCDs, ultra-high-frequency compact ultrasound transducers, and the fastest GPUs and CPUs to compute new digital holographic light patterns every 10 milliseconds -- and it all has to be very compact and cheap enough for consumers.
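
On the compute side, here's a rough FLOP budget, under my assumption (not anything Openwater has published) that each 10 millisecond frame requires one FFT-based hologram computation over a 4096x4096 phase pattern:

```python
import math

# Rough FLOP budget: one 4096x4096 FFT-based hologram update per 10 ms frame.
# The pattern size and the FFT-based approach are illustrative assumptions.
N = 4096 * 4096                          # pixels in the hologram
flops_per_frame = 5 * N * math.log2(N)   # standard ~5*N*log2(N) FFT cost
frames_per_s = 100                       # one update every 10 ms

print(f"{flops_per_frame / 1e9:.1f} GFLOP per frame")
print(f"{flops_per_frame * frames_per_s / 1e12:.2f} TFLOP/s sustained")
```

Even this stripped-down estimate lands around 0.2 TFLOP/s sustained; add per-voxel recalculation, calibration, and the ultrasound bookkeeping, and it's easy to see why this only became feasible with recent GPUs.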

#6 bgates276

Yuli Ban said:

Curious: have you ever managed to talk to Dr. Jepsen? IIRC, she was active on Quora last year and I'm unsure if she's still there.

 

She's got a YouTube page (assuming that is actually her). Separately, there is a TED Talk video with her from last year:

 

She replied to me in the comments.



#7 starspawn0
Another interesting fact: there is a type of LED, called the micro-LED, that switches even faster than OLEDs. Next-gen displays are already using micro-LEDs (or will be soon), and they can switch states every couple of nanoseconds!:

https://www.androida...plained-805148/
 

The smaller microLED sizes also make the prospect of higher-resolution panels in a compact form factor, such as 4K or 8K smartphones or VR displays, more achievable. Speaking of VR, OLED panels already boast very fast response times in the µs (microsecond) range. This makes them ideal for virtual reality applications. However, microLED can reduce this down into ns (nanoseconds), or a thousand times faster.


If they can switch state 100 million times per second -- a 10 nanosecond switching time -- then that gives even more leeway in rapidly zeroing in on the light pattern associated with a particular voxel.
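
To put numbers on that leeway, here's a quick comparison of how many pattern updates fit into one 10 millisecond frame; the representative switching times are my own round numbers, in line with the quoted article:

```python
# Pattern updates per 10 ms frame at representative switching times
# (round-number assumptions: ms-class LCD, us-class OLED, 10 ns micro-LED).
FRAME_S = 10e-3

for name, switch_s in (("LCD", 1e-3), ("OLED", 1e-6), ("micro-LED", 10e-9)):
    updates = round(FRAME_S / switch_s)
    print(f"{name:>9}: {updates:>9,} updates per 10 ms frame")
```

That's 10 updates per frame for a millisecond-class LCD, 10,000 for OLED, and 1,000,000 for micro-LED.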

Again, this is something that wasn't possible 10 years ago; in fact, it wasn't even possible 5 years ago!

....

I also looked into "raster scanning" using ultrasound. I was a little worried that, although the sound frequency would be high enough, the tissue in the brain wouldn't rebound back to its original state after the sound beam (or focus region) passed over it -- which would screw up the calculations. But it turns out that raster scanning using ultrasound is a common technique. An alternative would be a global method that focuses on many voxels at once, and somehow determines the structures by combining together all that information. But a global method would make it impossible to determine the holographic light pattern (or, at least, it's not the sort of technique I can imagine combining with holograms in a natural way) -- fortunately, raster scanning via ultrasound exists.

....

So, again, there seems to be nothing standing in the way of making Mary Lou Jepsen's BCI system work. The high-speed LEDs exist; high-frequency raster-scan ultra-compact ultrasound exists; the computing resources exist at an affordable price.

Am I missing anything? I've tried my best to come up with potential problems. I still think the data processing / algorithms issues might be the weakest point; but having thought about it, I can imagine lots of ways to make it work.

#8 starspawn0

Here's another thing worth pointing out: you may think that, because they will use ordinary LCDs, there should be some extra plate or something over the screen to produce the holograms. After all, why don't ordinary LCD screens produce holograms?

 

Here's the answer:  when the pixel size or aperture that light passes through is small enough relative to the wavelength, the light naturally diffracts.  And if the light is coherent (laser) and monochromatic (which it usually isn't in casual use), the spreading waves through all the pixels will interfere in a predictable way, according to which pixels are turned on. 

 

See this for a visual explanation:

 

https://en.m.wikiboo...lit_Diffraction

 

 

There is an equation -- the diffraction (grating) equation -- that relates pixel size, wavelength, and angular spread.  Suffice it to say that the current highest-density LCD displays have pixel sizes right in the sweet spot to make Jepsen's idea work with near-infrared light.
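
For reference, the grating form of that equation is sin(theta) = wavelength / pitch. A small sketch with illustrative pitch values (my numbers, not any manufacturer's specs) shows where near-infrared light, say 850 nm, starts to diffract appreciably:

```python
import math

# First-order diffraction angle from sin(theta) = wavelength / pitch.
# The 850 nm wavelength and the pitch values are illustrative assumptions.
WAVELENGTH_M = 850e-9  # near-infrared

for pitch_um in (10.0, 5.0, 3.0, 1.0, 0.5):
    s = WAVELENGTH_M / (pitch_um * 1e-6)
    if s >= 1.0:
        print(f"{pitch_um:>5.1f} um pitch: no propagating first order")
    else:
        angle = math.degrees(math.asin(s))
        print(f"{pitch_um:>5.1f} um pitch: first order at {angle:5.1f} deg")
```

Only once the pitch approaches a few microns does the angular spread become substantial for near-infrared light, which is presumably the sweet spot referred to above; shrink the pitch below the wavelength and the first order stops propagating altogether.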

 

....

 

This also means that if the pixel size could be shrunk even smaller, you should start to see diffraction patterns with visible light, and closer to the blue side of the spectrum.  You could make holograms with an ordinary display, and no extra diffraction plate.

 

Actually, I think this poses a problem for display-makers -- but not hologram-makers! -- as it means if the pixel size drops below a certain "diffraction limit", the light starts to interact in undesirable ways.



#9 starspawn0
Another potential problem with the plan that turns out not to be (a problem):

With such short switching times for the high-speed LCDs, you might think fluorescence could be a problem -- i.e. the brain may re-emit some of the absorbed energy, causing various errors. However, typically when an object fluoresces, it does so at a different frequency from the light that was absorbed:

https://en.wikipedia...ki/Fluorescence

In most cases, the emitted light has a longer wavelength, and therefore lower energy, than the absorbed radiation. The most striking example of fluorescence occurs when the absorbed radiation is in the ultraviolet region of the spectrum, and thus invisible to the human eye, while the emitted light is in the visible region, which gives the fluorescent substance a distinct color that can be seen only when exposed to UV light.


The setup that Openwater plans to use limits things to a very narrow band of frequencies, so this type of fluorescence would get filtered out.
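
A toy version of that filtering argument, with made-up but representative numbers (an 850 nm laser line, a narrow bandpass around it, and a Stokes shift of a few tens of nanometers -- all my assumptions, not Openwater's specs):

```python
# Toy narrowband filter: light at the laser line passes; Stokes-shifted
# fluorescence falls outside the band and gets rejected. Numbers assumed.
LASER_NM = 850.0    # assumed near-infrared laser line
HALF_BAND_NM = 2.0  # assumed filter half-width
STOKES_NM = 40.0    # assumed fluorescence red-shift (longer wavelength)

def passes_filter(wavelength_nm: float) -> bool:
    """True if the wavelength falls inside the detection band."""
    return abs(wavelength_nm - LASER_NM) <= HALF_BAND_NM

print(passes_filter(LASER_NM))              # True: signal light gets through
print(passes_filter(LASER_NM + STOKES_NM))  # False: fluorescence filtered out
```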


Again, any problem I come up with I can quickly dispose of.

It's going to work.

#10 starspawn0

I have collected together some of the things I wrote above, made a few corrections and additions (e.g. even now, monitors have pixel sizes small enough to make holograms with visible light; it's just that the light from computer screens is usually not coherent, and the angular spread won't be that great), and posted it here:

 

https://www.reddit.c...and_ultrasound/

 

I think this topic is very important -- maybe the most impactful hardware of all the tech of the next 20 years, the tech that will unlock many others -- hence my obsessive zeal.



#11 starspawn0
This is an interesting Tweet:

https://mobile.twitt...203452207161345
 

Fascinated w/ the work @mljmljmlj is doing at Openwater. After seeing her talks (posted by @samim) spent many hours learning about photoacoustics, ramen [Raman] spectroscopy and IR holography. Fascinating stuff w/ the potential to *completely disrupt* noninvasive medical imaging + BCI

If you follow my Twitter feed and wondered why I was talking about Ultrasound the other day -- this research is why. Openwater is using ultrasound phased arrays (ultrasound-on-chip MEMs devices!) to target individual voxels inside the body for frequency shift from an IR laser


It's by a guy named Elliot Turner. Remember him? He was co-founder of AlchemyAPI. Ring a bell? It was an up-and-coming Machine-Learning-as-a-service company a few years back, with a community of something like 40,000 developers contributing (if the news is to be believed), that eventually got bought out by IBM:

https://www-03.ibm.c...lease/46205.wss

I remember following their progress, and was excited by what they were producing; and then seeing IBM acquire them made me suspect we'd never hear about their work again. Anyways, that acquisition probably made Turner a rich man.

Turner now has a new company (he's CEO) called, simply, Hologram -- right up Mary Lou Jepsen's alley, wouldn't you say? Here's what they say about it on their website:
 

We see a digital future that is 3D, where holograms are core to the experience. Our vision is to create experiences that are so real and visceral that you need to recalibrate what was real. At the center of each experience is a Hologram, therefore Hologram is the name that accurately describes our company. It’s the name that matches our vision.


Who knows? -- maybe in a couple years, if Openwater becomes a huge success, it might acquire Hologram. Or... maybe Facebook or Google or Apple or Amazon etc. will acquire them both!

#12 starspawn0

A new talk by Mary Lou Jepsen given at Wired Health 2019 UK:

YouTube video

She says they have already scanned rodents and already have an alpha kit prototype; alpha kits are still available in 2019. She says there was a 2-month delay, spent upping image quality based on the mouse experiments.

She mentions improving signal-to-noise 8x and laser consistency 23x.

Openwater has made a number of hires over the past year. Here are some of the people I have been able to track down (via web search), who now work there, or who have worked there in the past:

1. Andrew Haefner -- physicist/engineer, Ph.D. from U.C. Berkeley.

2. Tegan Johnson -- Indiana University

3. Brad Hartl -- physicist, Ph.D. from U.C. Davis

4. Wilson Toy -- Mechanical Engineer

5. Caitlin Reagan -- Data Scientist, Ph.D. from the Beckman Laser Institute, U.C. Irvine (B.S. Caltech)

6. Emilio Morales -- Ph.D. in Photonics from EPFL

7. Hosain Haghany -- senior optical scientist, Ph.D. from the Beckman Laser Institute, U.C. Irvine.

8. Sarmishtha Satpathy -- Ph.D. in EE from the University of Texas at Arlington.

9. Ian O'Donnell -- engineer, and expert in low-power electronics

10. Craig Newswanger -- expert in holography.

11. Mary Lou Jepsen (obviously)

Not all of these are on their website.

And here are a few names of people who work on Facebook's BCI program (on one or two I am not sure, and am only guessing):

1. Mark Chevillet -- team leader, professor at JHU.

2. Emily Mugler -- Ph.D. from the University of Illinois at Chicago. Neuroscientist / BCI engineer. Has done work on translating cortical signals into speech. I believe she once wrote (on Twitter?) that she was going to work at Facebook on the BCI program, but I could be mistaken.

3. Patrick Mineault -- Ph.D. from McGill. Expert in neuroimaging; also has a background in machine learning. He once posted to Twitter that he was working on the BCI program, then removed it. I haven't seen anything to suggest he has left Facebook.

4. Tanya Jonker -- Ph.D. in Cognitive Psychology from the University of Waterloo. Works on neuroimaging, neural encoding, memory, and cognitive load.

5. Ealgoo Kim -- electrical engineer; says on his LinkedIn page that he is working on the BCI team at Facebook. Stanford.

6. Sahar Akram -- machine learning and software. Was a grad student and postdoc at the University of Maryland.



#13 Alislaws

Well, I'm convinced! How do we invest in them?  :biggrin:

 

 

Does anyone have millions of $s and a reputation as a venture capitalist? That's probably step 1.



#14 starspawn0

Openwater is a private company and is not accepting investors.  And Facebook won't let you invest selectively in individual divisions of the company.



#15 starspawn0

New Mary Lou Jepsen video:

 

https://youtu.be/kuJhPh68svw

 

Seems things are delayed a little.  Alpha kits this year, presumably for animal testing.  Human tests next year.  She sees a mass consumer product maybe in 2022.

 

New images they have captured begin around 13:50. Around 14:00 she shows scans of mouse organs. The resolution and detail look good. I think this is a scan of living tissue, not dead tissue -- living scans are much harder, as cells and blood move around.

 

Looks like the tech is going to work exactly as I had foreseen.



#16 Alislaws

Thanks for the summary; can't view videos on this machine, will check it out later!

 

2022 is great! My big worry is that it turns out they are wrong in some fundamental assumption and it won't work, or that it will take 50 years or similar.

 

If they're behind their original (pretty optimistic!) schedule, that honestly makes sense; the original TED talk felt like it was based on a pitch made to investors, so I have been expecting their original statements to be a bit overhyped.

 

Part of me is just waiting for them to go "Oh? Better resolution than an MRI? Well, sure, the technology is capable of that, but it'll take 50 years to get there; not sure how you got the idea that we could beat MRIs right out of the gate" or other similar disappointments.

 

So anything that confirms that no one has fled the country with the money and that work is continuing is great news to me.



#17 starspawn0
They're not wrong. Trust me on that.

....

The real reason for me to write in this thread again is this:

https://www.openwater.cc/

They've updated their website! If you click on "team", you will see that they now list a much fuller team of researchers than before -- they had just 3 or 4 names listed previously, and now have 20! The team has grown a lot in the past year!