
Would an advanced BCI allow us to create a 3D/2D simulation just by "thinking" about it?


7 replies to this topic

#1
Metalane

    Member

  • Members
  • PipPip
  • 32 posts

Imagine software that can convert thoughts to text; relatively simple, right? How about a program (an advanced AI) that can convert your thoughts into code, thus materializing your idea right in front of you on your screen! There would of course be hardware limitations (also depending on how "free" the software is, meaning it might not be able to code things it wasn't programmed to do), and ethical limitations. Also, the video game and entertainment industry would be disrupted, as there would be debates over whether or not this type of tech would undermine actual humans who put physical effort and work into their craft.



#2
starspawn0

    Member

  • Members
  • PipPipPipPipPipPipPipPip
  • 1,933 posts
Yes, I think BCIs will enable 
 
thought --> code
 
translation, though through an indirect path.  I wrote about it here:
 
https://www.futureti...-and-not-write/

Closely related to helping disambiguate for chatbots and virtual assistants is the use of BCIs to extend what you can do with “Natural Language Programming”, which is where you specify in English, say, what you want a program to do -- like you would to an expert programmer you want to hire -- and then the system writes the code for you.

You can, of course, specify a program in natural language in such detail that the computer doesn’t have to “think” very hard to figure out what you mean. For example, if you say, “Input two floating point numbers x and y from the user. Then, print their sum, x+y.” -- you’ve essentially written a program, and doing it in English doesn’t seem like it improves productivity very much; you might as well just write it in C.
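To make the point concrete, here is that description transcribed essentially one-to-one into code (in Python rather than C, but the observation is the same): the English spec and the program carry the same information, at about the same length.

    # The English spec, transcribed line-for-line:
    # "Input two floating point numbers x and y from the user.
    #  Then, print their sum, x+y."
    x = float(input("x: "))
    y = float(input("y: "))
    print(x + y)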

And if you want to be much more vague in your descriptions, then you lose creative control, as the computer has to guess what you mean, to fill in the details. For example, if you are designing a game, and use natural language programming, you might tell the computer, “I want this level to be in a big, wide city." The computer maybe has some stock game levels that it could fit to your description; but it doesn't know if you are talking about a city at night... near a body of water... ancient, modern, or futuristic... etc. Your description is under-specified. So, it will have to guess, and may guess wrongly.

BCIs could fix these problems. First of all, they will allow you to speak in vague terms about what you want -- "I want this level to be in a big, wide city." Second, while you're (vaguely) describing what you want to see, you're also thinking about it; and these thoughts can be read, and turned into specifications that resolve the ambiguity due to the vagueness.

I'm not asserting here that you have a crystal-clear picture in your mind of what you want the scene to look like -- many people have trouble holding a stable image in their minds. Rather, as you think about that scene, there is sure to be "semantic information" that can be read off from your brain state, which could specify the time of day and other attributes of the scene.
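As a sketch of what that resolution step might look like: suppose a decoder can extract coarse scene attributes, each with a confidence, from the recorded brain state. Everything below (the decoder, the attribute names, the confidence values) is invented for illustration; no such API exists yet.

    # Sketch: merge a vague spoken request with semantic attributes
    # decoded from brain activity. The decoder below is a stub standing
    # in for a trained BCI decoding model (hypothetical).

    def decode_semantic_attributes(brain_state):
        # In reality: neural features in, (value, confidence) pairs out.
        return {
            "time_of_day": ("night", 0.81),
            "era": ("futuristic", 0.74),
            "near_water": (True, 0.55),
        }

    def build_scene_spec(spoken_request, brain_state, threshold=0.6):
        spec = {"description": spoken_request}
        for name, (value, conf) in decode_semantic_attributes(brain_state).items():
            # Keep confidently decoded attributes; leave the rest ambiguous
            # (the system would then have to guess, or ask).
            spec[name] = value if conf >= threshold else None
        return spec

    print(build_scene_spec("a big, wide city", brain_state=None))
    # {'description': 'a big, wide city', 'time_of_day': 'night',
    #  'era': 'futuristic', 'near_water': None}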

Extend this principle to any type of software, and I think BCIs could truly pose a danger to professional programmers, as they -- and the right decoding and program synthesis algorithms -- would allow just about anybody to produce complex software using vague descriptions.



#3
Erowind

    Anarchist without an adjective

  • Members
  • PipPipPipPipPipPipPip
  • 1,432 posts

Yes, a fair amount of Greg Egan's novels touch on this. Read a story like Diaspora or Permutation City and you'll encounter it.



#4
Metalane

    Member

  • Members
  • PipPip
  • 32 posts

starspawn0, in post #2 above, said:

"Yes, I think BCIs will enable thought --> code translation, though through an indirect path. [...]"

Thank you so much; I actually do recall reading that in the past. Overall an excellent and well-written post. This tech sounds too good to be true (and scary). Although, I don't recall you predicting a date for when these BCIs (advanced and efficient versions) will enter the mainstream. Do you think that the 2030s is too optimistic?



#5
Metalane

    Member

  • Members
  • PipPip
  • 32 posts

Erowind, in post #3 above, said: "Yes, a fair amount of Greg Egan's novels touch on this. Read a story like Diaspora or Permutation City and you'll encounter it."

Thanks, I'll look into those!



#6
starspawn0

    Member

  • Members
  • PipPipPipPipPipPipPipPip
  • 1,933 posts

It's hard to say when it would be a consumer product, given how unpredictable the path is from

 

prototype --> product.

 

Virtual assistants are a good case study: some company has a good idea about how to build one, and gets developers on board. Then, a big company like Google, Apple, or Samsung buys them out for several hundred million dollars. And they take a working plan and try to expand it to a monstrous degree, serving every kind of language, while not doing something offensive to the brand, and not interfering with the company's other products or those of its partners; and this causes delays. And then there are inter-company fights. And then the CEO decides they don't need to build it, because they have a kind of "understanding" with competitors, the effect of which is to slow it all down and proceed more conservatively. Eventually, the original CEO of the startup that was bought out leaves the company, disillusioned.

 

But it really is possible to make a virtual assistant significantly more capable than what exists today.  The holdup is not with the technology.

 

....

 

So, with that said, here is one possible path to a thought --> code app:

 

Perhaps late in 2020 or in 2021, some scientists who have had access to a new kind of BCI device, or even just MEG or fMRI, get the idea to see if they can improve on existing methods that map the text description of a program to the program itself -- e.g. OpenAI's recent work:

 

https://www.youtube....h?v=fZSFNUT6iY8

 

(And similar work on code synthesis.)

 

Basically, as a proof of concept, the scientists consider simply adding input from a BCI to the text. To add a little bit of fun, they consider the problem of game synthesis, which has a well-established history (game synthesis is a favorite topic in some parts of CS). They work with a platform for creating Mario-style side-scrolling games. There are all sorts of different styles, character types, abilities, enemies, goals, and so on that you have to consider. These all have to be described somehow. The scientists break the problem down into a set of about 10 types of slots that have to be filled. The problem then becomes one of mapping

 

spoken descriptions + BCI input --> slot values.

 

In addition, once the slots are filled, they read the BCI output to see if the person is satisfied.  If the person notices an error, that will be detected, and the slot value will be updated.  
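A minimal sketch of that slot-filling-plus-correction loop, assuming roughly ten slots of the kind a Mario-style side-scroller needs. The slot names, the stub model, and the error-signal reader are all invented here; the setup above doesn't specify them.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class GameSpec:
        # About 10 slot types describing the game (names invented).
        art_style: Optional[str] = None
        character_type: Optional[str] = None
        abilities: list = field(default_factory=list)
        enemies: list = field(default_factory=list)
        goal: Optional[str] = None
        level_theme: Optional[str] = None
        difficulty: Optional[str] = None
        music_style: Optional[str] = None
        powerups: list = field(default_factory=list)
        level_length: Optional[str] = None

    def fill_slots(spoken_text, bci_features):
        # Stand-in for the trained model that maps
        #   spoken descriptions + BCI input --> slot values.
        return GameSpec(level_theme="castle", difficulty="easy")

    def refine(spec, detect_slot_error):
        # After the slots are filled, keep reading the BCI: if the user
        # notices an error, it is detected and that slot is corrected.
        while (err := detect_slot_error()) is not None:
            slot_name, corrected_value = err
            setattr(spec, slot_name, corrected_value)
        return spec

    # e.g. spec = refine(fill_slots("an easy castle level", bci_features=None),
    #                    detect_slot_error=lambda: None)  # no errors flagged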

 

The model will basically be a fine-tuned large language model that can accept both text and BCI data as input.
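One plausible way to wire that up (an illustrative assumption, not an architecture anyone has specified here) is to project the BCI features into the language model's embedding space and prepend them to the text tokens as a soft prefix. A PyTorch sketch, with placeholder dimensions:

    import torch
    import torch.nn as nn

    class TextPlusBCIModel(nn.Module):
        # Sketch of a language model conditioned on BCI data: neural
        # features are projected into token-embedding space and prepended
        # as "soft prompt" vectors before the text tokens.
        def __init__(self, vocab=32000, d=512, bci_dim=256, n_prefix=8):
            super().__init__()
            self.embed = nn.Embedding(vocab, d)
            self.bci_proj = nn.Linear(bci_dim, n_prefix * d)
            self.n_prefix, self.d = n_prefix, d
            layer = nn.TransformerEncoderLayer(d, nhead=8, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=4)
            self.lm_head = nn.Linear(d, vocab)

        def forward(self, token_ids, bci_features):
            # token_ids: (batch, seq) ints; bci_features: (batch, bci_dim)
            prefix = self.bci_proj(bci_features).view(-1, self.n_prefix, self.d)
            x = torch.cat([prefix, self.embed(token_ids)], dim=1)
            n = x.size(1)  # causal mask so each position sees only its past
            mask = torch.triu(torch.full((n, n), float("-inf")), diagonal=1)
            return self.lm_head(self.encoder(x, mask=mask))  # per-position logits

    # e.g.:
    # logits = TextPlusBCIModel()(torch.randint(0, 32000, (2, 16)), torch.randn(2, 256))

Fine-tuning would then proceed as with any language model, with the BCI projection trained jointly on the paired brain data.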

 

About 10 grad students will be recruited to help train the machine learning models, using their brain data.  Before each round of testing, all 10 students will be told the target game to produce.  5 will use spoken words and BCI to get the model to reach the target; and 5 will use only spoken words.   
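The comparison described above reduces to a time-to-target measurement across the two groups of five. A sketch with placeholder numbers (not real data):

    from statistics import mean

    # Minutes to reach the target game; placeholder values, 5 students each.
    bci_minutes  = [12, 15, 11, 14, 13]   # spoken words + BCI
    text_minutes = [25, 28, 22, 30, 26]   # spoken words only

    speedup = mean(text_minutes) / mean(bci_minutes) - 1
    print(f"speedup: {speedup:.0%}")  # 102% here: the BCI group is ~2x faster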

 

And my guess is that they'll show that the BCI data speeds the whole process up by more than 50%, maybe even 100%.

 

The paper will be written up and sent to a conference like the one organized by SIGCHI:

 

https://sigchi.org/conferences/

 

It will be accepted, and there will be press articles about it, like in TechCrunch or IEEE Spectrum or New Scientist, with a catchy title like, "Computer scientists program a computer with their mind."

 

This could all happen by late 2021.  

 

Now, at this point, high-end BCIs might be available for purchase as a consumer product, but at considerable expense (like $5,000 to $10,000).  Some labs might decide to see if they can push the technology further, and repeat OpenAI's text --> Python experiment.  

 

So, maybe by 2022 or 2023 you might see another group of researchers who demonstrate

 

spoken text + BCI data --> Python

 

that can produce programs considerably longer than OpenAI has shown with just text input, considerably more accurate, and produced considerably quicker and with far less effort. The overall speedup in code development for modest-sized programs might be 50% to 100% on average, and perhaps more for longer programs.

 

But this probably will all be happening in academia, not as a consumer product.  

 

Maybe by 2025 some further experiments will refine all this, and show that it can be made to work even better, on even longer programs, and even greater levels of efficiency gains and accuracy.



#7
Metalane

    Member

  • Members
  • PipPip
  • 32 posts

starspawn0, in post #6 above, said:

"It's hard to say when it would be a consumer product, given how unpredictable the path is from prototype --> product. [...]"

Thank you, again! The near future is going to be incredible. Food for thought: Do you think that non-invasive BCIs will work just as well as invasive ones (a chip in the brain itself)? And do you think that society will accept and favor the non-invasive ones (akin to a helmet) more? Of course, I think that eventually the invasive ones will become the dominant option.



#8
engram

    Member

  • Members
  • PipPip
  • 10 posts

Metalane, in post #7 above, asked: "Do you think that non-invasive BCIs will work just as well as invasive ones (a chip in the brain itself)? [...]"


Non-invasive BCIs will be just as good as invasive ones when it comes to reading signals from the brain. But when it comes to writing signals to the brain, I'm pretty sure invasive BCIs will remain on top forever.
There's no non-invasive computer-to-brain interface technology that will be just as good as implants/nanobots.



