Getting BCIs out of the unimpressive valley


7 replies to this topic

#1
starspawn0
Member
1,961 posts

The title of this piece is derived from an article written by Pete Warden back in 2014 about computer vision:

 

https://petewarden.c...ressive-valley/

 

I remembered that piece, since it strongly matched my own thinking at the time.  Warden co-founded -- and was the CTO of -- a company called Jetpac, which was eventually acquired by Google.  Jetpac applied computer vision to, for example, determine whether a coffee shop in some neighborhood was more on the hipster side or more on the businessman side.

 

What Warden realized early on was that computer vision at the time wasn't good enough to live up to the lofty dreams of science fiction, but was still good enough to add considerable value to services.  One of the keys was to exploit the power of Big Data and statistics:  while you might only be able to tell whether the person in a photograph is a "hipster" with 70% accuracy (which isn't very good), if you have hundreds of photos taken at a given coffee shop at different points in time, you gain considerable statistical strength, and can say that the business itself is a hipster hangout with 95% accuracy.
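Here is a quick toy simulation to make the statistics concrete.  The 70% per-photo accuracy, the hipster fractions, and the photo counts are just illustrative numbers I'm assuming, not figures from Jetpac:

```python
# Toy simulation of the "weak per-photo classifier, strong per-venue call" idea.
# All numbers (70% per-photo accuracy, hipster fractions, photo counts) are
# illustrative assumptions, not figures from Jetpac.
import numpy as np

rng = np.random.default_rng(0)

def classify_photos(true_hipster_fraction, n_photos, accuracy=0.7):
    """Noisy per-photo labels: each patron is a hipster with probability
    `true_hipster_fraction`; the classifier flips each label with
    probability 1 - accuracy."""
    truth = rng.random(n_photos) < true_hipster_fraction
    correct = rng.random(n_photos) < accuracy
    return np.where(correct, truth, ~truth)

def venue_is_hipster(labels, threshold=0.48):
    # Aggregate the weak per-photo calls into a single per-venue decision.
    return labels.mean() > threshold

trials = 10_000
hits_hipster = sum(venue_is_hipster(classify_photos(0.6, 200)) for _ in range(trials))
false_alarms = sum(venue_is_hipster(classify_photos(0.3, 200)) for _ in range(trials))
print(f"hipster venues correctly flagged: {hits_hipster / trials:.1%}")
print(f"ordinary venues wrongly flagged:  {false_alarms / trials:.1%}")
```

With 200 photos per venue, the per-venue call comes out right roughly 95% of the time even though each individual photo is only classified at 70%.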

 

He looked for opportunities where he could apply Big Data and statistics, and where the price of failure wasn't especially high.  For example, he would have avoided applying computer vision to life-and-death medical decisions, but might have applied it to help make general policy recommendations when given thousands and thousands of medical images.

 

In most of the things I've written about using BCIs, I've applied similar thinking.  For example, in this "human enhancement" series I mostly thought in terms of using Big Data and statistics, and of taking limited accuracy into account:

 

https://www.futureti...-and-not-write/

 

and also in this "zombie AGIs" piece:

 

https://old.reddit.c...ual_assistants/

 

Also see this piece (which is a somewhat more primitive type of A.I. application than "zombie AGIs"):

 

https://old.reddit.c..._braincomputer/

 

In fact, the Zombie AGI piece has a general method for applying BCIs in a Warden-like way:  the method is all about using a "critic" to screen out bad responses.  It's fine if the critic is only 75% accurate at this job; most of the fault would lie with the response generator, which isn't built with brain data. 
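A minimal sketch of why even a weak critic still helps, with made-up numbers for the generator's error rate and the critic's accuracy:

```python
# Toy model of the "critic" setup: a generator whose raw outputs are bad some
# fraction of the time, screened by a critic that is only 75% accurate.  The
# 40% bad-response rate is an illustrative assumption.
import random

random.seed(0)

def response_is_bad():
    return random.random() < 0.40          # assumed quality of the raw generator

def critic_flags_bad(is_actually_bad, accuracy=0.75):
    """Noisy critic: gives the correct verdict with probability `accuracy`."""
    return is_actually_bad if random.random() < accuracy else not is_actually_bad

kept = kept_bad = 0
for _ in range(100_000):
    is_bad = response_is_bad()
    if not critic_flags_bad(is_bad):       # only keep responses the critic passes
        kept += 1
        kept_bad += is_bad

print("bad responses before screening: 40.0%")
print(f"bad responses after screening:  {kept_bad / kept:.1%}")
```

Even a mediocre 75%-accurate critic cuts the bad-response rate roughly in half here; most of the remaining blame sits with the generator, as described above.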

 

Where might "critics" of this sort be useful?  Anywhere the machine generates several alternatives and you need to screen out the bad ones.  That would apply, for example, to robot controllers, videogame agents, image synthesis (throw away the ugly ones), video synthesis, text synthesis, music recommendation, product advertising, and many more.

 

Now, you might think:  hasn't this been done with EEG headsets before?  Yes, it has; but EEGs are such noisy devices, and have such poor spatial resolution, that it takes far more hours of data to get good results than it does with other brain-scanning devices.  Kernel's recent BCI devices, in contrast to EEG, sound like they will finally make the more visionary Warden-style applications of BCIs possible!

 

https://www.kernel.co/



#2
Alric
Member
1,087 posts

Quoting starspawn0:

"while you might only be able to tell whether the person in a photograph was a 'hipster' with 70% accuracy (which isn't very good)"

 

I don't know, is it really that bad?  How accurate is a person at telling whether someone is a hipster just by looking at them once?  I don't think 70% is as bad as it sounds, since we tend to think humans are better at this kind of stuff than we really are.  Anyway, you make a good point that the computer also has the advantage that it can just brute-force its way through a huge amount of data quickly.

 

I would take it a step further, though.  You say it shouldn't be used in medicine for life-or-death decisions but would be useful for policy-making decisions.  Well, it might be useful in life-or-death decisions as well -- not making the decision on its own, but having the computer give a second opinion.  I could definitely see a future where you go to the doctor, they check you out, and then a computer checks you a second time, just to confirm what the doctor thinks or perhaps catch something they missed.  If it gives a bad response you can just ignore it, but if it notices something the doctor missed, the doctor can recheck and maybe catch it.

 

So you can definitely use this type of thing even in more critical situations, if you apply it in reverse:  instead of the AI being a gatekeeper, where everything goes through the AI first and then to a person, everything goes through a person first and the AI double-checks it.  Since the computer is so powerful and extra data isn't a big deal, it can double-check everything.  If a human rejects something, the AI can double-check and throw it back to the person if it thinks the human was wrong; and if the human accepts something, the AI can double-check and flag it if it thinks the human was wrong.
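A rough sketch of that flow, just to make it concrete.  The case records, labels, and confidence scores below are hypothetical placeholders:

```python
# Sketch of the "AI double-checks the human" flow, not the other way around.
# The cases, decision labels, and confidence scores are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Case:
    case_id: str
    human_decision: str      # e.g. "benign" / "malignant"
    model_decision: str      # the model's second opinion
    model_confidence: float  # 0..1

def cases_to_recheck(cases, confidence_floor=0.8):
    """Flag cases where a reasonably confident model disagrees with the human.
    Nothing is overruled automatically; flagged cases go back to the doctor."""
    return [c for c in cases
            if c.model_decision != c.human_decision
            and c.model_confidence >= confidence_floor]

queue = [
    Case("A-001", "benign", "benign", 0.95),
    Case("A-002", "benign", "malignant", 0.88),   # sent back for a second look
    Case("A-003", "malignant", "benign", 0.55),   # disagreement, but low confidence
]

for c in cases_to_recheck(queue):
    print(f"re-check {c.case_id}: model says {c.model_decision} "
          f"({c.model_confidence:.0%} confidence)")
```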



#3
starspawn0
Member
1,961 posts

At the time Warden wrote his piece, the accuracy was much worse than it is now; but you're right, it probably wasn't as bad as 70% for 2-category (hipster or not) classification.  Still, per-photo classification probably wasn't anywhere near as accurate as classifying the whole business.

 

And, yes, if you pass the decision through a human first, and then through a machine to check it, that might work well.  It would be like a "critic" module for doctors.  If a mistake occurs (resulting in an unnecessary death), the fault would lie with the doctor, not the critic.

 

Addendum:  Thinking about it again, I'm actually not sure what the accuracy of a hipster-classifier would be.  The problem is that "hipster" isn't a well-defined image category (to the degree that any image category is well-defined).  So there would be considerable disagreement about whether someone looked like a hipster or not.  The "inter-annotator agreement" level might be low enough that the system would appear to be pretty inaccurate, perhaps as low as the 70% I stated above -- as inaccurate as any human's assessment of another human's classification work.
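A toy simulation of that point, with made-up numbers:  if two annotators only agree roughly 70% of the time on an ill-defined category, then a classifier that exactly copies one of them still only looks about 70% accurate against the other's labels:

```python
# Toy illustration of the inter-annotator ceiling, with made-up numbers: two
# annotators apply a noisy threshold to an ill-defined "hipster-ness" score and
# agree only ~70% of the time, so a classifier that exactly copies annotator A
# can't score much better than ~70% against annotator B's labels.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
hipster_ness = rng.random(n)                               # how "hipster" each photo looks
annotator_a = hipster_ness + rng.normal(0, 0.25, n) > 0.5  # each annotator judges noisily
annotator_b = hipster_ness + rng.normal(0, 0.25, n) > 0.5

agreement = (annotator_a == annotator_b).mean()
print(f"inter-annotator agreement: {agreement:.0%}")
# A model that reproduces annotator A exactly would be measured at this same
# rate against annotator B's labels -- that's the apparent-accuracy ceiling.
```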



#4
starspawn0
Member
1,961 posts

I consider Kernel's recent Sound ID system a Warden-type application.  I've seen a few people on the web point out that it's not accurate to call it "Shazam for the Mind" (Kernel didn't use this descriptor; journalists did), since it only identifies which of 10 songs you are listening to.  However, I don't see any barriers to taking it a lot further -- all the way to identifying which of 1 million songs you are listening to, given a long enough audio clip.  Let me explain:

 

Let's say you have 1 million natural images clipped from the internet, or maybe 1 million photos.  If you pick 20 random pixel positions in an image, the values at those positions might uniquely determine the image out of your list of 1 million -- even if you change the contrast, reduce the color palette, and make various other alterations.

 

What if you flip 25% of the pixels to random noise?  That's quite a lot!  But if you increase those 20 random pixels to 40, you can probably still locate the image out of a list of 1 million.
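A quick sanity check of that intuition, scaled way down from 1 million images -- the "images" here are just random pixel vectors, and the library size is chosen so it runs quickly:

```python
# Sanity check of the "40 noisy pixels pins down the image" intuition, scaled
# down from the 1-million-image thought experiment: the "images" are random
# pixel vectors, and the library/image sizes are chosen to run fast.
import numpy as np

rng = np.random.default_rng(0)
n_images, n_pixels, n_probe = 50_000, 1024, 40

library = rng.integers(0, 256, size=(n_images, n_pixels), dtype=np.uint8)
probe_positions = rng.choice(n_pixels, size=n_probe, replace=False)

def corrupt(image, flip_fraction=0.25):
    """Replace a quarter of the pixels with random noise."""
    noisy = image.copy()
    flips = rng.random(image.size) < flip_fraction
    noisy[flips] = rng.integers(0, 256, size=int(flips.sum()), dtype=np.uint8)
    return noisy

true_index = 12345
query = corrupt(library[true_index])

# Compare only the 40 probe pixels: count exact agreements per library image.
agreements = (library[:, probe_positions] == query[probe_positions]).sum(axis=1)
print("best match:", agreements.argmax(), "with", agreements.max(), "of", n_probe, "agreeing")
print("correct match?", agreements.argmax() == true_index)
```

The true image wins by a wide margin: about 30 of the 40 probe pixels survive the corruption, versus only a handful of chance agreements for every other image.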

 

What if you add rotations, translations, dilations, and shears?  Then 60 pixels is probably enough -- you might need a little more, but I doubt it would exceed 80 (for natural images).  If you get to choose the pixels (i.e. you don't just pick them at random), or get access to "50 pixels' worth" of information about the image (e.g. the outcomes of 50 different filters applied to the whole image), you can probably get away with fewer.

 

And what if you also add warps (stretches and compressions)?  Again, you don't need access to very much information about the image to match it against the library of 1 million.

 

Now, when a person listens to a piece of music, the pattern gets crumpled up, noised (even with some degree of non-i.i.d. noise, where there are correlations across different times), and passed through various non-linear filters on its way through the brain; and then the BCI gets access to an averaged and further-noised copy of that information.  But just as we can pick out the image from a small number of bits of information, we should be able to do the same with music.  That really is how Shazam works -- as long as you can get enough bits of information about the song, you can match it, even in the face of considerable noise.

 

Here is a way to think about it:  let's suppose you break the song down into a sequence of notes, and let's say you have an algorithm that identifies each note heard (or maybe some kind of average of the notes heard over a short time interval) by looking at brain activity.  A lot of the notes will be misidentified by the algorithm, but there will be some degree of signal standing above the noise.  Given a string of decoded notes -- which can include insertions and deletions, due to non-linear temporal alignment -- one very basic way to identify the song would be to use a Longest Common Subsequence algorithm to find the best match between the decoded notes and the notes in each song:

 

https://en.wikipedia...equence_problem
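A minimal sketch of that matching step, using the classic dynamic-programming LCS and a made-up three-song library with toy note strings:

```python
# Minimal sketch of the note-matching idea: score a noisy decoded note string
# against each song in a library by longest-common-subsequence length.  The
# "songs" and the decoded string are made-up toy sequences.
def lcs_length(a, b):
    """Classic O(len(a) * len(b)) dynamic-programming LCS length."""
    prev = [0] * (len(b) + 1)
    for x in a:
        curr = [0]
        for j, y in enumerate(b, start=1):
            curr.append(prev[j - 1] + 1 if x == y else max(prev[j], curr[j - 1]))
        prev = curr
    return prev[-1]

song_library = {
    "song_a": "CCGGAAGFFEEDDC",
    "song_b": "EDCDEEEDDDEGG",
    "song_c": "GEEFDDCEGGEEFDDC",
}

decoded = "CXGGAAFFEXDDC"   # noisy decode: misread notes plus a deletion

scores = {name: lcs_length(decoded, notes) / len(notes)
          for name, notes in song_library.items()}
print(scores)
print("best match:", max(scores, key=scores.get))
```

Despite the misread notes and the deletion, the right song still scores well above the others.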

 

It would be a bit slow to do this for all 1 million songs.  There are various ways to speed it up -- e.g. application of the "triangle inequality" or some other method.  

 

....

 

It's a little surprising to me that Sound ID did so well with voices, which can vary quite a lot.  And, unlike with songs, what is spoken and how it is spoken aren't frozen for all time (a song is the same every time you hear it, but if you ask people to read even the same text, they will read it slightly differently).

 

It's even more surprising that systems can pick out an imagined phrase from a list of 5:

 

https://www.futureti...rning/?p=280916

 

I suspect with more data this can be pushed a lot further -- at least to 100 phrases, and perhaps even full imagined speech recognition.



#5
Kynareth
Member
189 posts

I don't think non-invasive methods like EEG will get very impressive.  With denoising and AI-based interpretation they will work better, but I think the future lies in brain augmentation akin to Elon's Neuralink.  The faster this area improves, the better.  We should get in front of AI, not behind it.  During the 2030s, AI may really surpass non-augmented humans in intelligence, so there have to be people who won't be outsmarted, like Elon for example.  Things may quickly get out of hand otherwise.



#6
starspawn0
Member
1,961 posts

Lolz!  You made exactly the same mistake I've mentioned dozens of times.  As I told some people on this and other forums:  if you mention a new BCI, they will instantly criticize EEG.  Explain that this isn't EEG, and they'll say, "I don't agree that EEG will work."  They can only think in terms of EEG, in terms of a consumer product, and nothing else!

 

In fact, here is the mistake again:

 

https://www.reddit.c...ty_and/fpydeqx/

 

It's a MEG scanner.  The M in MEG stands for "magnetic", not "electric".  Night and day different.

 

Ordinarily, you need a magnetically shielded room to block the Earth's magnetic field, and other fields, to do MEG; and in the past, SQUIDs (superconducting quantum interference devices) were used.  However, using optically pumped magnetometers with many channels, and magnetic shielding built into the helmet, they have created a high-channel-count MEG scanner that you can wear outside.  It's radically more technologically advanced than flimsy old EEG headsets -- there's no comparison!

 

And the channel count is really very high.  Probably some of that is about blocking out magnetic fields.  Even so, it's going to be a lot less noisy than EEG, and the information transfer rate a lot higher, due to less attenuation (I'll probably write a piece about the Nyquist-Shannon sampling theorem and the 129-channel limit of EEG).



#7
Yuli Ban
Born Again Singularitarian (Moderator)
22,107 posts
Location: New Orleans, LA

Funnily enough, that fits exactly into this thread's title: if you want to get BCIs out of the Unimpressive Valley and the trough of disillusionment, detach the technology from EEGs.

 

I've explained before that EEGs have reached the peak of their refinement, and that the technology is a century old.  As long as EEG gets advertised as synonymous with contemporary BCI tech (and vice versa), we'll be stuck here.


And remember my friend, future events such as these will affect you in the future.


#8
Kynareth
Member
189 posts

EEG is like relays.  We need transistors or, even better, memristors.





