All was going to plan. Zuckerberg had displayed a welcome humility about himself and his company. And then he described what really excited him about the future—and the familiar Silicon Valley hubris had returned. There was this promising new technology, he explained, a brain-computer interface, which Facebook has been researching.
The idea is to allow people to use their thoughts to navigate intuitively through augmented reality—the neuro-driven version of the world recently described by Kevin Kelly in these pages. No typing, no speaking, even, to distract you or slow you down as you interact with digital additions to the landscape: driving instructions superimposed over the freeway, short biographies floating next to attendees of a conference, 3-D models of furniture you can move around your apartment.
The Harvard audience was a little taken aback by the conversation’s turn, and Zittrain made a law-professor joke about the constitutional right to remain silent in light of a technology that allows eavesdropping on thoughts. “Fifth Amendment implications are staggering,” he said to laughter. Even this gentle pushback was met with the tried-and-true defense of big tech companies when criticized for trampling users’ privacy—users’ consent. “Presumably,” Zuckerberg said, “this would be something that someone would choose to use as a product.”
In short, he would not be diverted from his self-assigned mission to connect the people of the world for fun and profit. Not by the dystopian image of brain-probing police officers. Not by an extended apology tour. “I don’t know how we got onto that,” he said jovially. “But I think a little bit on future tech and research is interesting, too.”
Zuckerberg wants Facebook to build a mind-reading machine
Posted 07 March 2019 - 12:55 PM
Posted 07 March 2019 - 01:17 PM
I don't do Facebook, but this kind of technology sounds revolutionary! I mean, just by thinking, an AR screen pops up. I hope something like this comes out in the 2020s.
Posted 07 March 2019 - 02:20 PM
The implications for when we use BCIs on a daily basis will be far-reaching. One implication I haven't written about much is that if the BCI has low noise and high information rate, it could be used to partially resurrect someone after they die. You could build a model to imitate some of their thoughts -- not today, not tomorrow... but eventually.
Posted 07 March 2019 - 07:02 PM
Facebook building BCIs is like monarchs building villages for their serfs to live in. I'm not saying houses are bad, and neither are BCIs, but should Facebook or the king really be the people we're excited to see building them?
This is the exact reason I originally took issue with the futurist community and was pushed left, into the margins of common discourse. Futurist communities are really good at getting excited about new technology without questioning the social consequences of who is creating that tech and how they are using it. Should we not talk about this new research, then? Of course we should. But at every avenue we should also acknowledge and critique the fact that the same company that has been violating privacy rights shouldn't be the one directly interfacing with our brains. That's genuinely dystopian.
Current status: slaving away for the math gods of Pythagoras VII.
Posted 07 March 2019 - 08:10 PM
I'm excited by the signal this will send to hardware makers. If Facebook can build it, they will reason that it isn't as hard to build as they thought -- and so they will try to build their own versions, too. Then there will be cheap China-made knockoffs that can scan at much higher resolution, just as happened with CD and DVD players.
I envision it won't be long before we see devices with the following characteristics:
* 1,000+ or maybe even 10,000+ high signal-to-noise-ratio channels.
* Spatially localized to within 3 millimeters.
* Temporal resolution at 100 milliseconds or lower (able to track brain signal changes over short time intervals).
* Able to scan at a depth of up to 2.5 to 3 centimeters.
* As light and easy to wear as a baseball cap -- or even lighter.
* Price of $500 or less.
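To get a feel for what those specs imply, here is a rough back-of-the-envelope estimate of the raw data such a device would generate. The bit depth and hours-worn figures are illustrative assumptions of mine, not part of the spec list above:

```python
# Back-of-the-envelope data-rate estimate for a hypothetical wearable BCI
# matching the specs above. Bit depth and daily wear time are assumptions.

def daily_data_volume_mb(channels: int,
                         temporal_resolution_ms: float,
                         bits_per_sample: int = 16,
                         hours_worn: float = 8.0) -> float:
    """Return megabytes of raw data produced per day of wear."""
    samples_per_second = 1000.0 / temporal_resolution_ms  # 100 ms -> 10 Hz
    bytes_per_second = channels * samples_per_second * bits_per_sample / 8
    return bytes_per_second * 3600 * hours_worn / 1e6

# 10,000 channels sampled every 100 ms, worn 8 hours a day:
print(daily_data_volume_mb(10_000, 100))  # a few gigabytes per day
```

Even at these modest sampling rates, a single user produces gigabytes of brain data per day -- which is exactly why the privacy questions raised earlier in this thread matter.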
When we have that, the world will change. So many doors will unlock for humanity. There will be risks, too, especially when it is placed in the hands of governments like China's. AI will advance using all that brain data.
Posted 07 March 2019 - 10:01 PM
^ I agree with you on all of the bullet points. But I don't understand the focus on China. The Chinese State is an enemy, sure, but the oligarchy here is fine? How can one focus only on China? There is a clear pathway to exploitation and oppression with BCI in both America and China.
* Mass incarceration historically outpacing the Soviet Union.
* Mass poisoned water supplies caused by the agricultural industry.
* Mass poisoned water supplies caused by lead in the United States.
* Imperialism and, with it, genocide. I don't have to list individual massacres caused by U.S. policy; anyone who cares can easily find numerous examples.
* Mass corporate and government surveillance, alongside the non-consensual (there's no such thing as implied consent) commodification of user data for virtual marketplaces. (Particularly relevant to BCI.)
(These are common facts; I really shouldn't have to post sources, but will if prompted.)
The United States is not a good country. Its companies and institutions are broadly not good, and while the people running them might be good-hearted, they themselves are not good people. Everything we do is political, and that includes scientific research. This conversation is genuinely not being had in futurist communities in a substantive, honest way. No one will actually say this outright, because of how obviously flawed it is, but these communities tend to act as if technology alone can and will solve our problems, without acknowledging the social change needed to use it responsibly. Even if someone is a self-interested Machiavellian, it's important to recognize that the vast majority of us are not in positions of substantive power, and those who are, by the logic of self-interest, will exploit us. Everyone being selfish is actually a really good argument for cooperation and caring about social wellbeing, because more than likely you, the reader, are not an elite businessman or political figure.
Yes, the Chinese State is horrible, but it's a spectrum. Both America and China are horribly authoritarian, and just because China is attempting to institute a citizenship score based on mass surveillance right now doesn't mean that American enterprise won't use BCI to exploit and abuse Americans in its own unique way. This is not a question of who is worse; both are uniquely horrible in their own ways. China's ills do not justify America's in any way; that is inconsistent, makes no sense, and common discourse conveniently ignores America's issues constantly. That's not even to mention that China isn't actually capable of instituting mass surveillance outside of certain districts in first-tier cities. Common Western discourse constantly plays up the Chinese State's ability to actually institute its dystopic policies, because doing so lets that same discourse use China as a propaganda piece. The bias is blatant and damning to the argument.
(Before anyone says, "well, you have a bias too": yes, of course I have a bias. But I'm getting better at recognizing it, and in doing so I can better recognize and curtail everyone else's too. For example, here's some left bias: the left has completely ignored the subjective individual needs of White Men, and in doing so has alienated them and pushed them right. As a whole, by focusing so steadfastly on materialism, the left has ignored other vital aspects of human experience.)
Mass surveillance and the commodification of human beings through data violations are a breach of fundamental human rights. It's not that people don't care about privacy; it's that they either don't understand what's happening to them or don't have the capacity to live a life without interfacing with these networks, because the networks are so prolific in every aspect of life. The moment you point out to nearly anyone what their phone is actually doing, and they understand, you can see the discomfort on their face. There's no such thing as implied consent; this is all blatant, textbook exploitation. BCIs in the hands of corporate America will only exacerbate this problem.
As a more fundamental critique of data collection in general: yes, a lot of data can give us a lot of information that can help solve a lot of problems. But not everything can be quantified. I'm entirely unconvinced that simply reading the brain's signals can actually help us understand how a person truly feels. Consciousness and subjective experience are more elusive than that; they can't be reduced to a string of numbers. We might be able to jury-rig things through trial and error. As in, "What do you feel when I send this signal, Mr. Doe? How about now? Are you lounging on a tropical beach yet?" However, there has yet to be any indication that we, the observers, can actually understand what Mr. Doe truly feels. Even if we could experience the same feeling he feels, our subjective experience would be different from his interpretation of that feeling, adding more layers of complexity. BCIs are good when they are responsibly used. But I think it's far too early to claim that simply aggregating data can solve the world's problems in their entirety. I'm not really arguing against you right now, Starspawn; I'm more taking a stab at the entire idea of Dataism itself. That's not even to mention how intertwined the Dataist community is with corporate capitalism, and how fundamentally flawed it can be because of that relationship. Can we truly save the world when our technology will only be used to maintain the status quo? A status quo that permits the vast majority of the species to be treated as subhuman and exploited as slaves?
Posted 09 March 2019 - 02:01 AM
This is a great article about Alex Huth's work:
Alex Huth, Assistant Professor of Computer Science and Neuroscience at the University of Texas at Austin, has an ambitious goal — to scan the brains of individual people for hundreds of hours to get fMRI data sets large enough to produce an accurate and detailed model of how language is represented in the brain.
AH: One of the things we did in the Gallant lab at Berkeley was really kind of different from the rest of the field. Instead of the standard MRI experiment [where] you take a bunch of people, scan each of them for maybe an hour, show them the same small set of stimuli, and average across these people's brains to get some result.
What we did in the Gallant lab is take a smaller group of people, like 5-10 people, and scan each of them for many, many, many hours. In the paper that I published, it spanned from maybe 8-10 hours at least in the scanner, from which we got 2-3 hours of usable data. That’s a lot of time. Then we’d be able to do all these fantastic analyses. You could build these high-dimensional models because we’d have all this data from each person’s brain.
So one of the things I’m trying to do in my new lab is to take that idea and push it to the extreme. So ask — how much data can we get on a single subject? My goal is to have 200 hours from one person. So scan the same person over and over again, probably something like once a week for a couple years. The cool thing is that this allows us to really change how we think about the models that we would fit to this data.
Using our old approach of getting 2-3 hours per subject, we were kind of stuck in this mode of guess and check. We’d always have to guess — maybe this kind of thing is represented in the brain, maybe this kind of feature is important. Then we’d build models using that feature, then test to see if they work well.
But if you can get enough hours of data, enough data points, then we can let the data tell us what kind of features are important, instead of being forced to guess. So we can sort of flip the equation around. I think that’s really exciting, that’s the pivot point that we’re trying to get toward.
It’s getting a really, really big data set, getting enough that we can learn directly from that data what the features are, and that will tell us something about how the brain processes language. That’s the main thrust of my lab.
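The kind of model Huth describes, mapping stimulus features to per-voxel fMRI responses, is commonly fit with regularized linear regression. Below is a minimal sketch of that idea using synthetic data; the array sizes, the ridge penalty, and the train/test split are placeholder assumptions of mine, not his actual pipeline, which involves much more elaborate feature extraction and preprocessing:

```python
# Minimal voxelwise encoding-model sketch: ridge regression from stimulus
# features to per-voxel fMRI responses. Synthetic data stands in for real
# recordings; all sizes and the regularization strength are assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_timepoints, n_features, n_voxels = 600, 50, 200    # placeholder sizes
X = rng.standard_normal((n_timepoints, n_features))  # stimulus features
W_true = rng.standard_normal((n_features, n_voxels)) # hidden "true" weights
Y = X @ W_true + 0.5 * rng.standard_normal((n_timepoints, n_voxels))

# Ridge solution W = (X'X + aI)^-1 X'Y, fit on the first half of the data
alpha = 10.0
X_tr, Y_tr = X[:300], Y[:300]
W = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(n_features), X_tr.T @ Y_tr)

# Evaluate by correlating predicted and actual responses on held-out data
Y_pred = X[300:] @ W
r = np.array([np.corrcoef(Y_pred[:, v], Y[300:, v])[0, 1]
              for v in range(n_voxels)])
print(f"median held-out correlation: {np.median(r):.2f}")
```

The "let the data tell us the features" move he mentions corresponds to making X richer and higher-dimensional, which is exactly why hundreds of hours per subject matter: with only 2-3 hours of data, a model with that many features can't be fit reliably.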
And next-gen BCIs will push that much further, since they will have much higher temporal resolution than fMRI, and spatial resolution in the ballpark of fMRI's. It may take a device generation or two for BCIs to get to that level, but given the rate of progress I could see it happening within a small number of years. It will be so much easier to build super-large datasets that way.
When we can build good models of the brain's capacity to understand language, then we will be much closer to having machines with True Natural Intelligence.
The clock is ticking... we are getting closer and closer...