My attempt to understand how holographic optical BCIs will work


#1 starspawn0
I am not an optical physicist, though I think I understand the basic idea of how Mary Lou Jepsen's Openwater brain/body scanner will work, and thought I would write a post about it (my ulterior motive being to convince you that it's not as outlandish as it seems). It will be my last post for a while -- I will return once more when the stars are right again.

Ok, here goes: light passing into an object does one of two things as it moves from atom to atom -- either it scatters, or it is absorbed. If it is absorbed, then you don't see that photon on the other side of the object; but if it is only ever scattered, you do. What you want to know is all the places in the object where the light is absorbed (and emitted), as that tells you the oxy/deoxy-hemoglobin concentration, which is the BOLD (Blood Oxygenation Level Dependent) signal that fMRI measures. But, unfortunately, because light is scattered, you can't see where it is being absorbed; scattering adds noise to your calculations. If you could undo the scattering, then you've got it.
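To make the absorption point concrete, here's a tiny numpy sketch of the (modified) Beer-Lambert idea that underlies this kind of optical measurement: measure attenuation at two wavelengths, and solve a small linear system for the oxy- and deoxy-hemoglobin concentrations. The extinction coefficients and readings below are made-up illustrative numbers, not real tissue values.

```python
import numpy as np

# Modified Beer-Lambert idea: absorbance at wavelength w is roughly
#   A(w) = eps_HbO2(w) * [HbO2] * L  +  eps_Hb(w) * [Hb] * L.
# Measure A at two wavelengths and invert the 2x2 system for the two
# concentrations. All numbers below are ILLUSTRATIVE, not real values.
L_path = 1.0                        # effective optical path length
eps = np.array([[0.8, 1.5],         # [eps_HbO2, eps_Hb] at wavelength 1
                [1.2, 0.7]])        # [eps_HbO2, eps_Hb] at wavelength 2
A_measured = np.array([0.9, 1.1])   # hypothetical absorbance readings

conc = np.linalg.solve(eps * L_path, A_measured)
print("estimated [HbO2], [Hb]:", conc)
```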

And that's just what phase-conjugation does. Have a look at this interesting webpage that explains the concept of a "phase conjugation mirror":

http://cns-alumni.bu...eConjugate.html

Of particular interest is the example with the blue "distorting glass". Just above it is written:
 

With a phase conjugate mirror, on the other hand, each ray is reflected back in the direction it came from. This reflected conjugate wave therefore propagates backwards through the distorting medium, and essentially "un-does" the distortion, and returns to a coherent beam of parallel rays travelling in the opposite direction.


Neat, huh?

That diagram assumes there is no absorption -- only scattering. And you can see that, in that case, the phase conjugation mirror effectively makes the distorting medium transparent.
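In fact you can check that "un-doing" claim numerically. Here's a toy numpy sketch under the idealized assumptions of the diagram: the medium is lossless (so its transmission matrix is unitary) and reciprocal, and the phase conjugate mirror simply reflects the complex-conjugated field.

```python
import numpy as np

rng = np.random.default_rng(0)

# Model a lossless, reciprocal scattering medium as a random unitary
# matrix T: pure scattering, no absorption. Forward pass: y = T x.
n = 64
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
T, _ = np.linalg.qr(M)                 # random unitary "medium"

x = np.exp(1j * 2 * np.pi * rng.random(n))   # incoming field
y = T @ x                                    # scrambled exit field

# A phase conjugate mirror sends back conj(y); for a reciprocal medium
# the return trip is multiplication by T^T (the transpose).
x_back = T.T @ np.conj(y)

# The returned field equals conj(x): the original beam, time-reversed,
# i.e. the distortion has been "un-done".
print(np.allclose(x_back, np.conj(x)))       # True
```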

So how would it look if the medium behaved as though there were only absorption, and no scattering, because you've eliminated the scattering with that phase conjugation mirror? Well, if the medium were uniform, and absorbed uniformly, then it would probably look like a region of space where light passing through it was dimmer. It would be like a failed invisibility cloak of some kind, because the objects behind where you are standing would look dimmer. And if it had non-uniform absorption -- which is what happens in the brain, since the oxy/deoxy concentrations aren't uniform -- then it would probably look like a collection of little splotches, where light dims as it passes through some splotches, but doesn't dim as it passes through others. You could tell where the dim spots were located in space if you could look at the object from different perspectives.

Ok, that's the BOLD signal. There are other, much faster absorption signals that should also be detectable.

I suppose one remaining problem is: how do you turn all those perspectives of the absorption profiles into a 3D representation of the regions of high and low absorption? That should be a standard "inverse problem" with known solution methods.
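For a feel of what "standard inverse problem" means here, a toy least-squares reconstruction, assuming (unrealistically) straight-line light paths; the ray geometry below is random stand-in data, not a real scanner model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy tomography: recover a 2D absorption map from line-integral
# measurements taken from many directions (straight-ray assumption).
n = 8                                  # image is n x n absorption voxels
true_mu = rng.random(n * n)            # unknown absorption coefficients

# Each row of A is one "view": which voxels a given ray passes through.
# Random binary masks stand in for real ray geometry here; a real
# system would compute these from the scan geometry.
m = 4 * n * n                          # more measurements than unknowns
A = (rng.random((m, n * n)) < 0.2).astype(float)
b = A @ true_mu                        # measured attenuation per ray

# Standard linear inverse problem: least-squares reconstruction.
mu_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print("max reconstruction error:", np.abs(mu_hat - true_mu).max())
```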

Maybe one more issue is the fact that the phase conjugate mirror sends the light back to where it came from, so you'd have to have a laser and a detector in the same place. I'm sure there are lots of variations on the idea to get around this, and what Openwater has done is one of those variations. One could put the detector behind the laser, perhaps.

After-thought: I am unsure how complicated the computations would be, but maybe one could build the equivalent of a "phase conjugate mirror" in software. This would involve precisely measuring some of the light hitting where that mirror would be -- the full light field -- and you'd have to know how the material scatters light (which may defeat the plan from the start). Maybe it's too complicated; but if it could work, you could get rid of the need for the complicated optics.

Although... maybe it's not that complicated. Remember that Deep Learning paper I posted on this subject?

#2 starspawn0
Ok, so the above is a rough draft of the idea (which I had gathered from seeing some early talks by MLJ); but several things bothered me about it (that I don't want to get into).  So, I decided to read the patent: 
 
https://patents.goog.../US9730649B1/en
 
It seems a little different from how I imagined it from her talks; and, actually, seems more plausible than the idea presented in her talks -- or at least what I interpreted the idea to be.  Still, they're both in the same circle of ideas.  In this post I will discuss this patent (my reading of it), as well as potential problems and solutions to the approach.
 
To begin, let's recall that the human brain and skull are translucent to near-infrared light.  Oxygenated blood absorbs near-infrared light differently from deoxygenated blood; and the amount of oxygenated blood in a region of the brain indicates the presence of neural activity.  This is the so-called BOLD signal.  But, in addition to absorption, light can also scatter as it passes through the brain.  If there were no scattering, it would be relatively easy to use near-infrared light to image the brain at resolutions much higher than FMRI.
 
So how does MLJ / Openwater do it, according to this patent?  
 
Ok, so here's my understanding of how it works:  they will use LCDs to project a hologram into the brain. They want to sculpt the 3D pattern of this light so that it focuses at a particular small region of space inside the brain (why they want to do this is explained below) -- a "voxel", the volumetric analogue of a "pixel".  The problem is that, since they can't see inside the brain, they can't figure out what pattern of light they should have the LCDs project to do that.
 
The solution is to use something called a “beacon”:  basically, they send a very, very low energy ultrasound wave pattern into the brain (far below an amount that would disrupt tissue or activate neurons), such that the pattern focuses at the region they want to image.  Presumably it’s easier to focus sound than it is light.
  
Why do they want to focus that ultrasound?  Because the ultrasound compresses the tissue in that voxel (where the sound focuses), which in turn will shift the phase of the light passing through it.  Here is an image I found on the internet that explains the idea:
 
http://aups.org.au/P...27/Figure_1.png
 
By comparing the difference in the pattern of light exiting that voxel and brain (and entering the sensor outside the brain), between when the ultrasound is turned on and when it is turned off, they can determine how much light has focused into that voxel.  By iteratively altering the pattern of light projected by the LCDs, they can drive this difference (presumably the L^2 difference in the exit waves) to a maximum -- or at least a good local max -- which should correspond to having the light strongly focus on that voxel.
 
In a presentation that Mary Lou Jepsen gave a while back, she mentioned “Doppler Shifting” of the light from the ultrasound.  See 33 minutes into this talk, where she addresses a question by Nathan Intrator:
 
https://www.youtube....Zyf9ccfUc#t=33m
 
Phase shifting is not the same as Doppler shifting / wavelength changing; so, one wonders if there is yet another way to use ultrasound to achieve the same results. Maybe she was simply referring to the contraction of the light waves inside the region where the ultrasound compresses material.
 
At any rate, finding that "maximal difference" is a math problem that can be solved many ways.  Since this is a “black box” optimization problem (where you know nothing about the mechanisms), gradient descent (e.g. as in Deep Learning) can’t be used at this stage.  So, simpler methods like simulated annealing must be used instead -- and note that you can do many, many iterations, since the number is limited only by the flicker/refresh rate of the LCD, which updates quickly.  Furthermore, Mary Lou Jepsen has said that she can get the refresh rate to be even higher by removing the little capacitors that are ordinarily in LCD arrays -- companies put them there because they make the screens easier on the eyes.  See this video, 12 minutes 33 seconds in:
 
https://www.youtube....9ccfUc#t=12m33s
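To give a feel for the black-box loop described above, here's a little simulated-annealing-style sketch in numpy. The tagged_signal function below is a stand-in for the physical measurement (project the pattern, toggle the ultrasound beacon on/off, read off the exit-wave difference at the sensor) -- it is not Openwater's actual objective, just a toy with the same shape:

```python
import numpy as np

rng = np.random.default_rng(0)

n_pix = 256  # number of LCD phase pixels (toy size)

# Stand-in for the physical measurement: a hidden "ideal" pattern plays
# the role of the brain, and alignment with it is a proxy for how much
# light actually focuses on the beacon voxel.
ideal = rng.uniform(0, 2 * np.pi, n_pix)

def tagged_signal(phases):
    return np.abs(np.sum(np.exp(1j * (phases - ideal)))) / n_pix

# Simulated-annealing-style search: re-randomize one pixel at a time,
# keep improvements, and occasionally accept a worse pattern while the
# "temperature" is still high. The schedule here is an arbitrary toy.
pattern = rng.uniform(0, 2 * np.pi, n_pix)
current = tagged_signal(pattern)
temp = 0.1
for step in range(20000):
    trial = pattern.copy()
    trial[rng.integers(n_pix)] = rng.uniform(0, 2 * np.pi)
    s = tagged_signal(trial)
    if s > current or rng.random() < np.exp((s - current) / temp):
        pattern, current = trial, s
    temp *= 0.9997

print("tagged signal after optimization:", current)  # climbs toward 1.0
```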

A technical aside on my part:
 

Actually, I think this optimization can be done using Singular Value Decomposition (or even just some calculus -- set some partial derivatives to 0), but maybe I'm missing something: project a random pattern into the brain with the ultrasound focused at the voxel, and do the same with it turned off. Look at the difference in the two exit wave patterns. Turn that difference into a vector v1 = (x1, x2, ..., xn), where xi is the amplitude of the difference at time i. So, v1 is a discretized version of the exit wave difference over a very short time window. Now, repeat this with n-1 more random patterns to get vectors v2, v3, ..., vn. Finally, find coefficients c1, c2, ..., cn on the unit sphere, that is

c1^2 + ... + cn^2 = 1

Such that the L2 norm

||c1 v1 + ... + cn vn||

is maximal. This is a standard problem with a known solution. Basically, form the matrix A whose columns are v1, ..., vn; and let c be the column vector with ith coordinate ci. Then left-multiplication by A will map vectors on the unit sphere to a certain ellipsoid. The maximal norm over this image is attained at the antipodal points on the major axis of the ellipsoid; the maximizing c is the right singular vector of A belonging to the largest singular value.

Then, once you have those coefficients c1, ..., cn, the optimal LCD setting will be the linear combination of those random patterns, with the coefficient of the ith pattern equal to ci.

If the vectors v1, ..., vn are chosen to be orthonormal, then the solution vector c1 v1 + ... + cn vn will have norm 1.

I suppose there might be an issue with how to interpret a negative value for one of the LCD settings -- e.g. maybe the energy is encoded as a positive real number? I would have thought that phase-shifting the light from the LCDs by half a period could be used to encode negative numbers. But if this doesn't work, then at least you know the structure of the ellipsoid, so it may not be so difficult; it should be a standard problem in convex geometry.

But maybe I'm missing something?
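For concreteness, here's the aside above transcribed into numpy (treating everything as real-valued, and with random stand-in data for the measured exit-wave differences):

```python
import numpy as np

rng = np.random.default_rng(0)

# Columns of A are the exit-wave differences v1, ..., vn (ultrasound on
# minus ultrasound off), one per random LCD pattern, each discretized
# over a short time window of length m. Random stand-in data here.
m, n = 512, 64
A = rng.normal(size=(m, n))

# Find unit-norm coefficients c maximizing ||A c||: the right singular
# vector of A belonging to the largest singular value.
U, S, Vt = np.linalg.svd(A, full_matrices=False)
c = Vt[0]                           # top right singular vector, ||c|| = 1

print("achieved norm:", np.linalg.norm(A @ c))   # equals S[0]
print("largest singular value:", S[0])

# The corresponding LCD setting is the same linear combination of the
# n random patterns, with weight c[i] on pattern i.
```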

 
Ok, let’s assume we know how to focus the light at a voxel.  And, now, by measuring the change in the light energy exiting the brain using that particular LCD setting (that focuses the light at the specific voxel), we can measure how the absorption level at that voxel changes with time, which is just what we were interested in!
 
Well, not quite, and this is the part where I would hesitate a little: the outgoing energy will also vary according to the extra amount absorbed in the rest of the brain as the light exits the voxel and heads towards the sensor. What I think works in their favor -- if I'm understanding this correctly -- is the fact that the further the light gets from the voxel (on its way to the sensor), the more "averaging" is going on. The light is spreading out to more and more parts of the brain, so large changes in the total energy due to extra absorption far away from the voxel have less and less effect on the energy that the sensor sees. Basically, the number of voxels that the light interacts with grows like the square of the distance from the focus voxel.
 
I don't think it will be a problem.  It’s also worth mentioning that the optimization process described above should not only choose the LCD pattern that focuses the light on that voxel, but also minimize the amount of absorption outside it.  So, problems with extra absorption should be even smaller.
 
Another potential problem is whether they can optimize the LCD light patterns fast enough to scan a lot of voxels. I certainly think it's possible to do this for a few hundred or a few thousand voxels of the brain at one time, but am not sure about tens of thousands or millions. She mentions saving the LCD patterns for each voxel into a "lookup table":
 

This technique is used to create a dictionary (i.e. lookup table) of holographic patterns (corresponding to input signal 123) to map to focus the light sequentially to each and every stimulus 197 and scanning of various stimuli.

 
 
As long as you are inside a short enough time window -- so that the brain hasn't moved things around too much (e.g. blood) and invalidated your LCD pattern for a given voxel -- then you can certainly use it again. I think this is known as the "speckle decorrelation time", but would need to refresh my memory. So, periodically, you'd have to correct the lookup table. Maybe things could be staggered so that not too many entries would need to be corrected in a given window of time.
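Here's the kind of naive staleness-aware lookup table I have in mind (all names are mine, not the patent's; the 10 ms window is just an assumed figure):

```python
import time

# Toy lookup table of per-voxel LCD patterns, where each entry expires
# after the speckle decorrelation time and must be re-optimized.
DECORRELATION_S = 0.010           # assumed ~10 ms window (illustrative)

class PatternTable:
    def __init__(self):
        self._entries = {}        # voxel id -> (pattern, timestamp)

    def put(self, voxel, pattern):
        self._entries[voxel] = (pattern, time.monotonic())

    def get(self, voxel):
        """Return a still-valid pattern, or None if it needs refresh."""
        entry = self._entries.get(voxel)
        if entry is None:
            return None
        pattern, t = entry
        if time.monotonic() - t > DECORRELATION_S:
            return None           # stale: the speckle has decorrelated
        return pattern

    def stale_voxels(self):
        """Entries due for (staggered) re-optimization."""
        now = time.monotonic()
        return [v for v, (_, t) in self._entries.items()
                if now - t > DECORRELATION_S]
```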
 
Even taking all this into consideration, I can certainly imagine that a system like this might scan a few thousand voxels each second, putting it on par with FMRI in spatial resolution, and beyond it in temporal resolution.
 
Here’s the problem with scanning many more voxels:  say you want to scan 1 million voxels.  Then the number of parameters you have to set in that LCD array has to be at least 1 million -- you need at least as many parameters as voxels (the patterns that selectively focus on a given voxel are approximately “linearly independent”, so they determine a space of dimension 1 million).  The total number of parameters that need to be determined, then, will exceed
 
(1 million voxels) * (1 million parameters per voxel) = 1 trillion parameters!
 
 
There’s no chance of determining all of these fast enough, before the brain changes its scattering profile over a few tens of milliseconds.
But here’s the thing:  the LCD settings to focus on even just a single voxel should contain most of the information about the brain’s scattering profile -- in the sense that, given the voxel and given the LCD settings, you can recover the scattering profile.  So, a lot of those 1 trillion parameters above contain highly redundant information.  
 
This sounds like a job for Deep Learning:  given the LCD settings to focus light on a few different voxels, determine the LCD settings for any other voxel.  This probably doesn’t have an easy analytic solution; but the function might be learnable with Deep Learning.  If it is, then it should be possible to image 1 million voxels, after all!
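A sketch of what that learning setup might look like, using scikit-learn's off-the-shelf MLP so as not to commit to any particular architecture; the "optimized patterns" here come from a made-up smooth map standing in for the beacon procedure's outputs:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Training data: voxel coordinates for which LCD patterns were already
# found by the beacon procedure, paired with those patterns. Both are
# synthetic here: a smooth hidden map plays the role of the brain's
# (highly redundant) scattering structure.
n_pix = 32                              # toy LCD pattern size
W = rng.normal(size=(3, n_pix))         # hidden coords -> pattern map

def beacon_optimized_pattern(xyz):      # stand-in, NOT a real model
    return np.sin(xyz @ W)

coords_train = rng.uniform(-1, 1, size=(2000, 3))
patterns_train = beacon_optimized_pattern(coords_train)

# Learn coords -> pattern, then predict patterns for voxels that were
# never beacon-calibrated, instead of running the slow optimization.
net = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500,
                   random_state=0)
net.fit(coords_train, patterns_train)

coords_new = rng.uniform(-1, 1, size=(5, 3))
err = np.abs(net.predict(coords_new)
             - beacon_optimized_pattern(coords_new)).mean()
print("mean prediction error on unseen voxels:", err)
```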
 
Another thing one could try is to parallelize the process.  For example, maybe it's possible to use ultrasound to stimulate multiple regions of the brain at once, where the amount of compression is different in the different voxels, resulting in different phase-shifts associated with the different voxels. If so, it should be possible to scan multiple regions of the brain at the same time, without having to "raster scan" nearly as much.
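A toy illustration of that parallelization idea: if each beacon voxel tags "its" light at a different ultrasound frequency, the per-voxel signals can be pulled apart in the frequency domain. All frequencies and amplitudes below are made-up toy values, not a claim about the actual apparatus:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three beacon voxels driven at different ultrasound frequencies, each
# modulating its light at its own rate. The detector sees the sum; an
# FFT separates the per-voxel contributions.
fs = 100_000                          # detector sampling rate (Hz), toy
N = 4000                              # samples (frequencies land on bins)
t = np.arange(N) / fs
freqs = [5_000, 7_000, 11_000]        # per-voxel tag frequencies (toy)
amps = [0.8, 0.3, 0.5]                # per-voxel tagged-light strengths

signal = sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(amps, freqs))
signal += 0.05 * rng.normal(size=N)   # detector noise

spectrum = np.abs(np.fft.rfft(signal)) / (N / 2)
bins = np.fft.rfftfreq(N, d=1 / fs)
for f in freqs:
    k = np.argmin(np.abs(bins - f))
    print(f"voxel tagged at {f} Hz -> recovered amplitude {spectrum[k]:.2f}")
```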

#3 starspawn0

I wanted to say that I made some corrections and additions to the above post a week or two ago, with the final draft being what I wrote here:

https://www.reddit.c...ter_bci_patent/

(This draft is virtually identical to the above, except that I expanded on the algorithm part -- mainly my attempt to get a feel for how they will handle the massive computational load efficiently.)

I think the main difficulty they will face is not in the scanning part of the apparatus, but in the data-processing part.  It's going to take pushing processors to the metal, and using the absolute best algorithms.

My analysis still hasn't found any show-stoppers.