We've reached the rough outer edges of what GPT-3 can do out of the box, and further gains will come only from bleeding-edge experimentation. It's become clear what it can and can't do.
As I've never used GPT-3 directly myself, I can't tell you the nuances of its capabilities, only that they lie on the inner border of a twilight zone between what we've come to expect from AI and how AI will soon develop.
Still, I can dream, and I'd like to imagine that we are only a few months away from seeing the next iteration of OpenAI's language model.
Let me throw out some predictions for what it will be able to do.
I've been having some misgivings that the parameter count might be disappointing: people are hoping for something like 10 trillion or more, but it may come in at under 1 trillion overall. However, if OpenAI optimizes things differently and trains the model on image and audio as well as text, a high-quality multimodal 800-billion-parameter GPT-4 could be functionally superior to a more generic text-only 20-trillion-parameter GPT-4. The sequence length may also be pushed much further, such as to 10,280 tokens or more.
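To make those numbers concrete, here's a back-of-the-envelope sketch using the standard ~12 · n_layer · d_model² approximation for a decoder-only transformer's non-embedding parameters. The 800B shape is my own invention for illustration, not anything OpenAI has announced:

```python
# Rough parameter counting for a decoder-only transformer, using the
# ~12 * n_layer * d_model**2 approximation for non-embedding weights
# (attention + feed-forward blocks; embeddings are ignored).
def approx_params(n_layer: int, d_model: int) -> float:
    return 12 * n_layer * d_model ** 2

# Calibration against GPT-3 175B (96 layers, d_model = 12288):
print(f"GPT-3-ish:  {approx_params(96, 12288) / 1e9:.0f}B")   # ~174B

# One hypothetical shape for an ~800B multimodal model (pure guess):
print(f"800B guess: {approx_params(140, 22000) / 1e9:.0f}B")  # ~813B
```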
We've never seen a fine-tuned GPT-3 either, so I'd imagine one would approach par-human capability at certain specific tasks. As for what an off-the-shelf GPT-4 can do, however, I can only expect it to be par-human at natural language generation and related tasks. If it's trained on multiple kinds of data, its image output will probably resemble GANs circa 2016 or 2017.
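For reference, fine-tuning a causal language model is conceptually simple: keep training the pretrained weights on a task-specific corpus. GPT-3's weights aren't public, so this minimal sketch uses GPT-2 from the Hugging Face `transformers` library as a stand-in, with a hypothetical `short_stories.txt` as the task corpus:

```python
# Toy causal-LM fine-tuning loop; real runs would use batching,
# multiple documents, and many epochs.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.train()

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

text = open("short_stories.txt").read()  # hypothetical task corpus
enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

for step in range(100):
    # Labels equal the inputs: the model learns next-token prediction
    # on the new corpus, shifting its distribution toward the task.
    outputs = model(input_ids=enc.input_ids, labels=enc.input_ids)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```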
Its text-generating capabilities will range from generating short stories with a coherent beginning, middle, and end all the way to what I call "stylistic editing," or literary style transfer (i.e., rewriting one piece in another writer's style). With sufficient memory, it ought to be able to hold conversations as well. GPT-3 can do something like this, but by "hold a conversation" I mean that it can actively draw on what's been said previously as well as on other pools of knowledge, so that it's an actual conversation and not a series of prompts that merely feels like one. GPT-3 could probably do this too if it were fine-tuned for it.
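The usual way to get that kind of memory out of a pure completion model is a rolling transcript: resend the whole conversation so far on every turn, so each reply can condition on everything said earlier. A minimal sketch against the GPT-3-era OpenAI completions API; the persona line and variable names are my own illustration:

```python
# Rolling-transcript conversation with a text-completion model.
import openai

openai.api_key = "sk-..."  # your API key

history = "The following is a conversation with a helpful assistant.\n"

def ask(user_line: str) -> str:
    global history
    history += f"Human: {user_line}\nAI:"
    resp = openai.Completion.create(
        engine="davinci",
        prompt=history,       # the full conversation so far, every turn
        max_tokens=150,
        stop=["\nHuman:"],    # stop before the model writes the next human turn
    )
    reply = resp.choices[0].text.strip()
    history += f" {reply}\n"  # append the reply so later turns can reference it
    return reply

print(ask("Earlier you mentioned sequence length. Why does it matter here?"))
```

This is also why sequence length matters so much for conversation: the entire transcript has to fit inside the context window, or the model forgets the start of the exchange.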
It'll probably be revealed sometime in mid-2021.
I'll stop here just in case others want to chime in with more realistic visions.
"Cut the crap, Yuli— you know what we want. Is it an AGI powering your waifu?"
1. I doubt it. Indeed, I strongly doubt it. The first AGI may very well come from a language model and include no brain data whatsoever, but GPT-4 is likely to be more of an experiment in testing large multimodal transformer networks than an attempt at AGI. More of a "look at all the different things we can make a single AI do!"
2. I doubt even GPT-5 will be an AGI. Of course, by "AGI" I mean something closer to an "oracle" AGI, not a "godlike" one: first-gen AGI, weak general AI, or whatever you want to call it. But by that point (~2023 or so), plenty of very knowledgeable people will likely start second-guessing themselves, and there will be debates about just how close we really are to AGI, with estimates shifting from "maybe 2045" to "we could make one within a year or two."
Also:
[D] Biggest roadblock in making "GPT-4", a ~20 trillion parameter transformer