Difficult to predict with certainty. It's my understanding they will release tranches of data each year, finishing in 5 years; and they will probably experiment with the data each year. So, perhaps 1 to 2 years from now we will see the first AI models they have built with this data. It will probably work something like this: you send either a raw (down-sampled) bitmap or an encoding of video frames + audio from a video game to the network, and it imitates a human player, complete with motor signals for how to move the joystick. The system then literally plays the game. The same brain-based system will be able to play a wide variety of games out-of-the-box -- say, play pretty much any game the way a human who isn't familiar with it would. So it won't be an expert player right away, but it will still be impressive.
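To make that concrete, here's a rough sketch (in PyTorch, my choice) of the kind of behavior-cloning network I have in mind: down-sampled frames plus audio features go in, predicted joystick/motor commands come out, and you'd train it against recorded human play. All names, shapes, and sizes here are made-up assumptions just to illustrate the idea, not anything the project has announced.

```python
# Hypothetical behavior-cloning sketch: game frame + audio features in,
# predicted motor commands out. Shapes and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class HumanPlayImitator(nn.Module):
    def __init__(self, n_audio_features=64, n_motor_outputs=8):
        super().__init__()
        # Small conv stack over a down-sampled 84x84 RGB frame.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.audio = nn.Linear(n_audio_features, 128)
        # 64 channels * 9 * 9 spatial = 5184 features for an 84x84 input.
        self.policy = nn.Sequential(
            nn.Linear(5184 + 128, 256), nn.ReLU(),
            nn.Linear(256, n_motor_outputs),  # e.g. stick axes + button logits
        )

    def forward(self, frame, audio):
        v = self.vision(frame)
        a = torch.relu(self.audio(audio))
        return self.policy(torch.cat([v, a], dim=-1))

# Trained by imitation: minimize the difference between predicted motor
# commands and the human player's recorded ones.
model = HumanPlayImitator()
frame = torch.randn(1, 3, 84, 84)   # one down-sampled game frame
audio = torch.randn(1, 64)          # one window of audio features
predicted_action = model(frame, audio)
```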
Now, such a system can be used in lots of ways. One way is that it can serve as a "critic" in some larger system that learns to play at a superhuman level -- crucially, it will learn to do this even for very complex games that have previously required complex architectures and add-ons, and it will do it while playing far, far fewer games than those systems required in the past.
Oh, and I don't think complex architectures will be required. Basic ones will do pretty well, given enough data.
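For what it's worth, here's one speculative way the "critic" idea could be wired up: freeze the brain-trained imitator and blend the game's own reward with how closely the learner's actions match what the imitator predicts a human would do. The blend weight, the similarity measure, and the HumanPlayImitator class from the sketch above are all illustrative assumptions on my part.

```python
# Speculative reward shaping with a frozen human-imitation "critic".
# The weight, similarity measure, and imitator interface are assumptions.
import torch
import torch.nn.functional as F

def shaped_reward(game_reward, agent_action, frame, audio, imitator, weight=0.1):
    """Blend the game's own reward with how 'human-like' the action was."""
    with torch.no_grad():
        human_action = imitator(frame, audio)              # what a human would likely do
    similarity = -F.mse_loss(agent_action, human_action)   # higher = more human-like
    return game_reward + weight * similarity.item()
```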
But while this is going on... DeepMind, OpenAI, Microsoft, Facebook, and others may continue to improve their own game-playing systems that don't use brain data. It's uncertain which approach will win the race towards a general-purpose, efficient game-playing system.
In parallel with this work, Neuromod and also Alex Huth's group are pursuing brain imitation of language understanding (have a person listen to a recording, record their brain, and then try to model it). It will probably take at least 2 years to acquire enough data to do something really, really impressive; but, again, there will probably be releases after the first year (actually, I'm not sure whether Neuromod will work on this in the first year, or will focus 100% on video games). Once the data are acquired and the first models are trained, I expect they will work very well and will generalize easily to whole other categories of language input. Probably in about 2 years Huth will write a paper with his student on this. They'll try various networks and report a few very surprising behaviors. For example, when a story turns sad, certain parts of the artificial brain will light up, and this will persist until the mood of the story lifts -- showing that the system really understood what was going on. Or maybe understanding a story requires applying logic or counting, and that will be reflected in the brain-imitation system.
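To give a flavor of what "model the brain's response to a story" could mean in practice, here's a minimal sketch loosely in the spirit of published encoding models: regress recorded brain activity onto features of the words being heard, then use the fitted model to predict activity for new stories. The feature extractor, the data shapes, and the ridge-regression choice are all my assumptions.

```python
# Minimal encoding-model sketch: language features -> predicted brain activity.
# Data here is random placeholder; shapes and the Ridge choice are assumptions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Pretend data: 1000 time points of story listening.
n_timepoints, n_language_features, n_voxels = 1000, 300, 5000
story_features = rng.standard_normal((n_timepoints, n_language_features))  # e.g. word embeddings
brain_activity = rng.standard_normal((n_timepoints, n_voxels))             # recorded brain responses

# Fit one regularized linear map from language features to every voxel.
encoding_model = Ridge(alpha=100.0)
encoding_model.fit(story_features, brain_activity)

# For a new story, the model predicts what the listener's brain "should" do --
# the artificial brain whose activity you could then inspect for, say,
# regions that light up during sad passages.
new_story_features = rng.standard_normal((50, n_language_features))
predicted_activity = encoding_model.predict(new_story_features)
print(predicted_activity.shape)  # (50, 5000)
```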
The first systems may not work quite at full human levels, but will show shocking, scary hints of human-like understanding. Shivers down the spine... "It's alive!"
Now, Huth might get the idea to integrate this system into a chatbot framework like I described here: https://www.reddit.c...ual_assistants/
He may not have the right data to do it; on the other hand, he just might.
He could use GPT-2 as the language model / generator. The brain-imitation network would be the critic. The two of them together would work far and away better than either one alone. The combined system might produce shockingly human-like conversation -- so good as to have national security implications.
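A speculative sketch of how that generator + critic pairing might work: GPT-2 proposes several candidate replies, and the brain-imitation critic scores them so the most human-brain-plausible one is returned. The critic below is just a stand-in stub; the real one would be the brain-trained model discussed above.

```python
# Speculative generator + critic pairing: GPT-2 generates candidates,
# a (stubbed) brain-imitation critic picks the most human-like one.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
generator = GPT2LMHeadModel.from_pretrained("gpt2")

def brain_critic_score(text: str) -> float:
    """Placeholder for the brain-imitation critic's plausibility score."""
    return float(len(text))  # stub: a real critic would score human-likeness

def reply(prompt: str, n_candidates: int = 5) -> str:
    input_ids = tokenizer.encode(prompt, return_tensors="pt")
    outputs = generator.generate(
        input_ids,
        do_sample=True,
        max_length=input_ids.shape[1] + 40,
        num_return_sequences=n_candidates,
        pad_token_id=tokenizer.eos_token_id,
    )
    candidates = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
    # The critic picks the reply it judges most human-like.
    return max(candidates, key=brain_critic_score)

print(reply("How was your day?"))
```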