Here is something I wrote on my forum almost 1 year ago, on potential applications of CTRL-Labs's tech:
I've been thinking a little about potential consumer applications of CTRL-Labs's new neural interface.
It seems to have many of the same capabilities as Leap Motion's gadget.
It's my understanding that Leap Motion has a sizable developer community behind it, so it would seem to be doing well -- but I never hear about it being used, except in the tech press. I wouldn't be surprised if the company folds or gets bought out in a few years.
Assuming CTRL-Labs offers a huge improvement in usability over Leap Motion, there is a chance that it will do at least as well as that company.
Where might this improvement come from? Here are some possibilities:
* You don't actually have to move your hand or fingers to control the CTRL-Labs device. You just have to act as though you are *about* to move them, and it can translate that into action. This would be especially useful if you wanted to use it to interact with a smartphone or AR glasses while in a public setting (e.g. on a train), where you don't want to draw attention to yourself.
* You also don't need line-of-sight to an IR tracker. You can even put your hands in your pockets and control the device.
* You can signal the intensity or pressure of your grip or motions (see the sketch after this list). If you are controlling a robot arm remotely, this would be especially useful -- e.g. you might want to cup your hand tightly around a beer mug, but grasp a delicate wine glass very gently.
* The device is relatively unobtrusive and one can imagine wearing it all day, even while in bed sleeping.
* Let us also assume that there is a way for the device to keep track of where you are located in space -- not just locally, but also e.g. where in a house or building. This might already be possible using various contextual cues; if not, it should be easy to add a sensor or two.
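On that pressure point: here is a minimal sketch (Python) of how grip intensity might be decoded from an EMG envelope. CTRL-Labs hasn't published specs, so the sampling rate, filter settings, and calibration scheme below are all my assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def grip_intensity(emg, fs=1000.0, cutoff_hz=5.0):
    """Estimate a 0..1 grip-intensity signal from multi-channel EMG.

    emg: array of shape (n_samples, n_channels), arbitrary units.
    fs:  sampling rate in Hz (an assumption; real specs are unpublished).
    """
    rectified = np.abs(emg - emg.mean(axis=0))  # remove DC offset, rectify
    b, a = butter(2, cutoff_hz / (fs / 2))      # low-pass -> smooth envelope
    envelope = filtfilt(b, a, rectified, axis=0).mean(axis=1)
    # Normalize against the strongest contraction seen (a stand-in for a
    # proper per-user calibration step).
    return np.clip(envelope / (envelope.max() + 1e-9), 0.0, 1.0)
```

The output could drive, say, a remote gripper's force setpoint: 0.1 for the wine glass, 0.9 for the beer mug.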
With that in mind, here are a few potential uses, beyond the ones in the above video and other videos, that you can't easily achieve with Leap Motion:
Life recording

You can record your entire day, including all your covert hand and finger movements, for efficient searching later.
For instance: say you are giving a lecture, and someone asks for lecture notes. You could work out when you gave that lecture, and then translate your recorded neural signals into a set of "hand-written" slides for anyone who wants them. Remember all those notes you wrote down in a notebook you have long since thrown away? They are preserved for later searching -- no need to remember to take a picture with your phone.
Say you are at a restaurant with friends and want to make a note about what your favorite dish is for the next time you are there. You could simply move your hand in your pocket, pretending that you are writing with a pen, and even drawing diagrams -- and it will be recorded.
Everything you type during the day will also be recorded for efficient and accurate searching, including the files you deleted, messages you forgot about, all the lines you erased and wrote over -- it's all there for searching later on.
All the musical instruments you played, and how you played them, are also there for searching. Apps might offer you advice on your playing -- "Your rhythm is a bit off. I recommend you play the following pieces to improve."
"Where did I leave my laptop computer?" -- Maybe an app can search based on how you pinch your hand around it. You could perhaps search by asking an assistant, "When and where were the last times I pinched my hand like *this* [pinches as though grabbing a laptop]."
All the credit card slips and documents you signed, all the pictures you drew, all the clay statues you sculpted, all the home repair you did, all the meals you cooked, all the games of baseball you played, the times you fed the dog, the times you watered the plants, whether you unplugged the coffeemaker, the items you inspected in the grocery store, and so on -- it's all there for searching!
Nothing is forgotten!
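How might that kind of search -- "when did I last pinch my hand like this?" -- actually work? Here is a toy sketch (Python) of nearest-neighbor matching of a query gesture against the day's recording. The feature shapes and distance metric are my own guesses at one plausible approach; a real system would more likely use dynamic time warping or a learned embedding:

```python
import numpy as np

def find_similar_gestures(day_features, query, top_k=5):
    """Find moments in a day-long feature stream that resemble a query gesture.

    day_features: (n_frames, n_dims) per-frame features from the recording.
    query: (m_frames, n_dims) features recorded while performing the gesture.
    Returns start indices of the top_k best-matching windows.
    """
    m = len(query)
    # Brute-force sliding-window distance; fine for a sketch, far too slow
    # for a real day-long recording.
    dists = np.array([
        np.linalg.norm(day_features[i:i + m] - query)
        for i in range(len(day_features) - m + 1)
    ])
    return np.argsort(dists)[:top_k]
```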
Control of robots
Well, it's obvious that controlling robots with the device will be a big use, and they even show examples in their demo videos. So let me say something that is not so obvious: it is probably *exhausting* to control robots using Leap Motion, if you have to do this for more than about an hour at a time. Imagine you are a factory worker controlling a robot remotely using Leap Motion, where you have to keep moving your arms to have the robot pick up items off of a conveyor belt and place them in a box -- minute after minute after minute of repetitive motion. Leap Motion lowers the amount of physical labor considerably, but the potential for exhaustion is still there.
A device like CTRL-Labs's, however, could all but eliminate the physical effort needed. One would just have to act as though one is going to move the robot arm and hand, with one's own arms resting comfortably on an easy chair, not moving. About the only thing tiring would be the dull monotony of factory work.
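As a sketch of the control loop I have in mind: decoded intent would go through a smoothing and scaling stage before it reaches the robot, so tiny involuntary twitches don't jerk the arm around. Everything below is a hypothetical placeholder, not any real decoder or robot API:

```python
import numpy as np

class TeleopMapper:
    """Map decoded motor intent to robot joint targets, with smoothing so
    small involuntary twitches don't jerk the robot around.

    Hypothetical sketch; no real robot interface is assumed.
    """

    def __init__(self, alpha=0.2, gain=1.0):
        self.alpha = alpha  # exponential-smoothing factor (0..1)
        self.gain = gain    # scale from intended motion to robot motion
        self.state = None

    def step(self, decoded_joint_angles):
        """Call once per tick with the decoder's output; returns smoothed
        joint targets to send to the robot controller."""
        target = self.gain * np.asarray(decoded_joint_angles, dtype=float)
        if self.state is None:
            self.state = target
        else:
            self.state = (1 - self.alpha) * self.state + self.alpha * target
        return self.state
```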
With the device you could even command a robot arm to clean up your house while flying on an airplane. Nobody sitting next to you would even notice. They'd just see you in AR glasses (looking through the robot's eyes), wearing funny watch-like devices on your arms, moving your hands ever so slightly, as though with a mild case of palsy. You could make the robot pick things up off the floor and put them away; open the fridge to see what you need; turn off the coffee pot; check on the pets; and so on.
Or maybe you want to look in on elderly relatives? Let's suppose their home is outfitted with a robot arm that can be controlled remotely. While riding on a noisy tram you can subtly command the robot to tidy up, and maybe even do something as complex as doing the laundry or making the bed. The muscles in your arm barely ripple.
Expanding the repertoire of gestures
I think a big use of the device might be control of smartphones and AR glasses. CTRL-Labs gives a few examples of typing with an imaginary keyboard and playing Space Invaders with a smartphone, without moving one's hands. Those are interesting use cases; however, with a reworked smartphone user interface, there are many more possibilities:
One obvious possibility is to expand the set of gestures. For example, maybe you act like you want to clench your hand to raise the volume; or spread your fingers out twice (like sending out radio waves) to turn on Bluetooth. Maybe you can also call apps with a single gesture -- pinch your fingers like you are holding money to open a banking app; flick your fingers like a bird tweeting to open Twitter; act like you are going to turn a page to open up your ebook library; hold your hand vertical like a flat TV screen to open Netflix; and so on.
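In software terms this is just a dispatch table from classified gesture labels to app launches. A minimal sketch, where the labels and the launch_app callback are made-up placeholders, not any real API:

```python
# Hypothetical gesture-label -> app dispatch table.
GESTURE_APPS = {
    "pinch_money":   "banking",
    "bird_flick":    "twitter",
    "page_turn":     "ebooks",
    "flat_screen":   "netflix",
    "clench":        "volume_up",
    "double_spread": "bluetooth_toggle",
}

def on_gesture(label, launch_app):
    """Call launch_app(app_name) whenever a known gesture label arrives
    from the (hypothetical) gesture classifier."""
    app = GESTURE_APPS.get(label)
    if app is not None:
        launch_app(app)
```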
Apps you use a lot will probably be on the home screen, so simply tapping the screen will suffice. Or you can just speak to your phone; but there are situations where you don't want to announce in public what you are doing. What about apps you have almost forgotten about and don't use all that often? Maybe you can't even remember their names, but can easily remember the gesture that calls them up. I find that I forget many of the apps I have downloaded.
In social media, when sending messages to people, you can communicate your feelings much more precisely and intimately than with emojis. Imagine a love letter with a gentle caress gesture attached -- a piece of your very being sent across the Internet.
VR and Videogames
Everything you can do with Leap Motion, but then add:
* Reduced arm and hand fatigue;
* Ability to convey effort / pressure / intensity;
* No need to carry an IR sensor around, however small it may be;
* Covert play, though that wouldn't be all that useful with VR.
Health tracking

There are, of course, also the health-tracking uses.
I imagine many signs of physical and mental illness are determinable from bodily neural data. For example, diabetes and heart disease might subtly increase the amount of noise in the neural signals; cognitive decline might show up as a loss of precision in fine motor control; and Parkinson's and other neurodegenerative diseases are almost certainly detectable.
It's possible that CTRL-Labs's device -- being wearable basically all day long -- could track more disorders, more accurately, than any other device yet created.
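To make one of these concrete: Parkinsonian resting tremor typically sits around 4-6 Hz, so an all-day wearable could watch the fraction of motion-signal power in that band. A minimal sketch -- the sampling rate, band edges, and the idea of flagging a rising trend to a doctor are all my assumptions:

```python
import numpy as np

def tremor_band_ratio(signal, fs=200.0, band=(4.0, 6.0)):
    """Fraction of signal power in the 4-6 Hz band, where Parkinsonian
    resting tremor typically lies. A ratio that trends upward over weeks
    of all-day wear might be worth flagging (thresholds are assumptions).
    """
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal - np.mean(signal))) ** 2
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return power[in_band].sum() / (power[freqs > 0].sum() + 1e-12)
```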
Machine Learning

There are also applications to Machine Learning, through the new training datasets the device would open up; though these aren't things consumers care about directly.
It would take me a long time to list all the different uses of the data, but there are some obvious ones: training robots in dexterous manipulation; training systems to imitate your handwriting; training systems to correct robot errors based on visceral responses; and so on.
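What might one record of such a dataset look like? Here is a hypothetical schema (every field name below is my invention), pairing the neural data with the decoded pose and the outcome -- roughly what you'd need for training dexterous manipulation or learning to correct errors from responses:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ManipulationSample:
    """One hypothetical training record pairing neural data with outcomes.

    emg:       (n_samples, n_channels) raw EMG during the action.
    hand_pose: (n_frames, n_joints) decoded joint angles over the action.
    label:     e.g. "grasp_mug", "sign_receipt", "water_plants".
    success:   whether the action achieved its goal (for error correction).
    """
    emg: np.ndarray
    hand_pose: np.ndarray
    label: str
    success: bool
```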
I also wrote a post (again, about 1 year ago) on how CTRL-Labs's tech is better than the Myo Armband:
I wrote a post a few days ago about how the CTRL-Labs neural interface will be better than Leap Motion (at least for a large number of quite useful applications), but I should also say something about a forerunner neural interface produced by Thalmic Labs a few years ago: the Myo Armband.
I found a Hacker News post that is relevant.
Some comments (assuming they are credible):
> I have a Myo, and it's the device which taught me that what I'm actually interested in is interface technology that reduces my movement cost, and the Myo -- while being a pretty cool device and also one of the few that actually works within 10% of what's advertised -- unfortunately increases my movement cost. It's also got a pretty narrow range of actions. More than a few, but not enough to make the increase in movement cost worthwhile, IMO.
(A side note: recall what I said about how Leap Motion might be exhausting to use to control robots, because you still have to move your arms a lot. CTRL-Labs will fix that problem.)
> I had a job writing software for piloting a drone with Myo. Myo recognizes a small set of palm and finger gestures, not continuous movements, so, for example, you couldn't record yourself typing on the keyboard. It does stream arm movement and rotation though. No mind reading either, which is the gist of the Ctrl-labs' armband.
> According to the article, better machine learning leading to more consistency.
I had figured that this last one was a big differentiator. Here is what I think are the differences, based on what I have read:
* The CTRL-Labs device is smaller, and can be worn near the wrist like a watch. The Myo Armband does not look like something you would wear all day long.
* I recall reading that CTRL-Labs put a lot of effort into increasing the Signal-to-Noise Ratio (SNR), so maybe it is much superior in that regard -- and, being closer to the hand, it is probably even *more* accurate for decoding fine hand and finger motions (see the sketch after this list).
* CTRL-Labs has far more strong Machine Learning talent extracting as much capability from the device as possible. That ML work can make all the difference.
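For reference, the SNR figure I have in mind is the standard power ratio in decibels. A tiny sketch; the framing of "contraction segment vs. rest segment" is my own assumption about how one would measure it:

```python
import numpy as np

def snr_db(signal_segment, noise_segment):
    """Signal-to-noise ratio in dB: 10 * log10(P_signal / P_noise).
    The segments might be EMG recorded during a contraction vs. at rest."""
    p_sig = np.mean(np.square(signal_segment))
    p_noise = np.mean(np.square(noise_segment)) + 1e-12
    return 10.0 * np.log10(p_sig / p_noise)
```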
Speaking of the power of ML, I think it may even be possible to use Machine Learning to build BCIs with existing near-infrared "diffuse optical tomography" (DOT): there is some work on using Deep Learning to invert the scattering, without needing ultrasound to assist in localizing where to focus the light; but it's still at the experimental stage.