The title of this piece is derived from an article written by Pete Warden back in 2014 about computer vision:
I remembered that piece, since it strongly matched my own thinking at the time. Warden co-founded, and was CTO of, a company called Jetpac, which was eventually acquired by Google. Jetpac applied computer vision to questions like whether a coffee shop in a given neighborhood drew more of a hipster crowd or more of a business crowd.
What Warden realized early on was that computer vision at the time wasn't good enough to live up to the lofty dreams of science fiction, but it was still good enough to add considerable value to services. One of the keys was to exploit the power of Big Data and statistics: you might only be able to tell whether the person in a photograph is a "hipster" with 70% accuracy, which isn't very good on its own; but if you have hundreds of photos taken at a given coffee shop at different points in time, you gain considerable statistical strength, and can say that the business itself is a hipster hangout with 95% accuracy.
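The accuracy boost from aggregation can be sketched with a quick binomial calculation (my own illustration; the numbers here are the hypothetical 70% figure above, not Jetpac's actual model). If each photo is classified correctly with probability 0.7, independently, then a majority vote over a few dozen photos is right far more often:

```python
from math import comb

def majority_vote_accuracy(p: float, n: int) -> float:
    """Probability that a majority of n independent per-photo
    classifications (each correct with probability p) is correct.
    Ties (possible only for even n) are counted as wrong."""
    return sum(comb(n, k) * (p ** k) * ((1 - p) ** (n - k))
               for k in range(n // 2 + 1, n + 1))

# 70% per-photo accuracy is mediocre, but a few dozen photos of the
# same venue push the venue-level call well past 95%:
for n in (1, 11, 25, 51):
    print(n, round(majority_vote_accuracy(0.7, n), 3))
```

This assumes the per-photo errors are independent, which real photos of one venue won't perfectly satisfy, but it shows why venue-level calls can be so much stronger than photo-level ones.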
He looked for opportunities where he could apply Big Data and statistics, and where the price of failure wasn't especially high. For example, he would have avoided applying computer vision to life-and-death medical decisions, but might have applied it to help make general policy recommendations when given thousands and thousands of medical images.
In most of what I've written about using BCIs, I've applied similar thinking. For example, in this "human enhancement" series I mostly thought in terms of using Big Data and statistics, and of taking limited accuracy into account:
and also in this "zombie AGIs" piece:
Also see this piece (which describes a somewhat more primitive kind of AI application than "zombie AGIs"):
In fact, the zombie-AGI piece describes a general method for applying BCIs in a Warden-like way: the method is all about using a "critic" to screen out bad responses. It's fine if the critic is only 75% accurate at this job; most of the fault would lie with the response generator, which isn't built with brain data.
Where might "critics" of this sort be useful? Anywhere a machine generates several alternatives and you need to screen out the bad ones. That would apply to robot controllers, videogame agents, image synthesis (throw away the ugly ones), video synthesis, text synthesis, music recommendation, product advertising, and many more.
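The generate-then-screen pattern can be sketched in a few lines (all names here are hypothetical stand-ins: the generator would really be image synthesis, a robot controller, etc., and the critic would really be built from brain data). The point is that even a 75%-accurate critic noticeably raises the quality of the surviving candidates:

```python
import random

def generate_candidates(n: int) -> list[str]:
    """Stand-in generator: here, half its output is 'good' and half
    is 'bad', standing in for a real generator's hit rate."""
    return [random.choice(["good", "bad"]) for _ in range(n)]

def noisy_critic(candidate: str, accuracy: float = 0.75) -> bool:
    """A critic that judges a candidate correctly only `accuracy`
    of the time -- imperfect, but enough to shift the odds."""
    is_good = candidate == "good"
    return is_good if random.random() < accuracy else not is_good

def screen(candidates: list[str]) -> list[str]:
    """Keep only the candidates the critic approves of."""
    return [c for c in candidates if noisy_critic(c)]
```

With a 50/50 generator and a 75%-accurate critic, roughly 75% of the survivors are good: a sizeable jump over the 50% baseline, even though the critic itself is far from reliable on any single judgment.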
Now, you might think: hasn't this been done with EEG headsets before? Yes, it has; but EEGs are such noisy devices, with such poor spatial resolution, that it takes far more hours of data to get good results than it does with other brain-scanning devices. Kernel's recent BCI devices, in contrast to EEG, sound like they will finally make the more visionary Warden-style applications of BCIs possible!