You ignored the rest of what he said.
Not at all. For example, I fully agree with this part:
We've yet to see whatever it is that OpenAI's really working on.
Exactly, let's wait and see.
Well, it's not the Singularity or anything, but what GPT-3 alone has accomplished is shocking.
It's exactly what I expected GPT-3 to be: something in between artificial narrow intelligence and artificial general intelligence. It even has the architectural differences. You can get GPT-3 to do a wide variety of tasks, including things it wasn't even trained to do. You know, artificial expert intelligence, or AXI.
What I absolutely didn't expect was how easy it would be to jump from AXI to AGI. It's not a certainty, but there is a path towards a very functional kind of zombie AGI within a year. I'm not kidding— that's how close we are, if the people working on it recognize the same limitations. Of course, it wouldn't be just GPT-3; it'd have to be the secret project OpenAI's working on (of which GPT-3 and Jukebox are likely just small parts). But the main takeaway ought to be that we may have a proto-AGI functional within twelve months.
It's so close it doesn't even sound possible. But it's right there; you can read the paper yourself and come to the same deductions about what it's missing.
I wish OpenAI would publish a blog post about it so it'd be easier to grasp. There are also a lot of complaints about the paper on arXiv and GitHub that it wasn't as amazing an advance over GPT-2, or that it's running into limits, neither of which really makes sense— the GPT-2 paper itself said that scaling up with more parameters would lead to greater generalization, and the "running into limits" part was bandied about even with GPT-2 itself. There's clearly a lot, lot more that can be done, and considering GPT-3 doesn't even use image, audio, video, or other biofeedback data, the actual limits are likely much, much higher than anything people are comfortable with.
Hmmm, most of what I have been reading suggests that GPT-3 is simply a scaled-up version of GPT-2 and not much more. It's intuitive that with more parameters GPT-3 outperforms GPT-2, but people much more knowledgeable than me believe this advance to be more or less linear and not groundbreaking. It was fun to fork GPT-2 and train it a bit. Also, there is an easily accessible GPT-2 transformer online.
Here is an example:
Input: Are Yuli Ban's predictions of the future correct?
Output: Can we not be a little more optimistic? Can we not hope for a future that does not revolve around our iPhones, tablets, and other fast-charging devices? Does Yuli Ban truly believe that we will never see further innovation in technology? Can we, as proponents of open-source software, ever not want to see the opportunities provided by the thought-leaders and researchers in academia, government, and industry? Can we really hope that Yuli Ban can see the picture a little clearer? Please let us know what you think of his opinion piece
Of course the transformer generates a lot of garbage but it is otherwise fun to use.
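For anyone who wants to try reproducing that kind of output themselves, here's a minimal sketch using the Hugging Face transformers library. It assumes the base "gpt2" checkpoint and illustrative sampling settings; the online demo mentioned above may run a larger variant with different parameters.

```python
# Minimal GPT-2 text generation via the Hugging Face pipeline API.
# Assumption: the base "gpt2" checkpoint; the online demo may differ.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampling repeatable
generator = pipeline("text-generation", model="gpt2")

prompt = "Are Yuli Ban's predictions of the future correct?"
# Generate one continuation of up to 60 tokens (prompt included).
result = generator(prompt, max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])
```

As the post says, the output is often garbage, but it's fun to play with; the generated text always starts with the prompt itself by default.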
Can you show me some of those people that believe it's not groundbreaking? I've seen the complete opposite so far, and based on what I've seen in the papers I can't help but disagree with this.
Once I get more time I will include the rest.
Here is the one that immediately came to mind: article
"Utopia is the hope that the scattered fragments of good that we come across from time to time in our lives can be put together, one day, to reveal the shape of a new kind of life. The kind of life that yours should have been." - Bostrom
And don't forget to read the comments at the end of that post.
How much computational power is required to run GPT-3? Are we talking the world's most powerful supercomputers type of power, or less?
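A rough back-of-envelope answer, assuming the 175-billion-parameter figure from the GPT-3 paper, fp16 weights, and the common ~2 FLOPs-per-parameter-per-token rule of thumb (these are hedged estimates, not official deployment numbers):

```python
# Back-of-envelope estimate of what it takes to run GPT-3 inference.
# Assumptions: 175B parameters (per the paper), 2-byte fp16 weights,
# and ~2 FLOPs per parameter per generated token (rule of thumb).
params = 175e9

# Memory just to hold the weights in fp16:
weight_memory_gb = params * 2 / 1e9   # ~350 GB

# Rough FLOPs to generate a single token:
flops_per_token = 2 * params          # ~3.5e11 FLOPs

print(f"weights: ~{weight_memory_gb:.0f} GB (fp16)")
print(f"~{flops_per_token:.1e} FLOPs per generated token")
```

So inference alone needs hundreds of gigabytes of fast memory, i.e. a multi-GPU server rather than anything a consumer owns, but nothing like a full national-lab supercomputer. Training is another story: the paper's own estimate is on the order of thousands of petaflop/s-days of compute.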
Well, we're past the halfway point of the year, yet to be entirely honest, it feels like these past 6 months lasted half the decade already.
I didn't make any predictions on this thread at the end of last year since I figured they'd be pretty bland and vague, especially compared to the ones posted by wjfox or Yuli. I simply don't have my finger on the pulse quite like they do.
In any case, so far this year has shaped up to be pretty unpredictable and I'm excited to hear how the predictions from last year have fared so far. I think it would also be worth the time to reorient and "start over", meaning that now might be a good chance to cast a few more predictions before the year ends since we can adjust for the state of the world amidst SARS-CoV-2 and other things.
I might as well throw a few out there; nothing too substantial or impressive.
That's all I can think of at the moment. 2020 is setting the stage for many big motions in this next decade and a lot of the things happening now will truly be appreciated in their effect on what comes next. I have my hopes about what will or won't happen.
Well, we're about near the end of the year. How did you guys do?
This is Scatman's world, and we're living in it.
As of today, we have 4 days 'til the end of 2020 and the dawn of 2021. How are your 2020 predictions? Did they come true?
So I'm still juggling what's going to be in the 2021 Predictions thread and all, but I figured I might as well get around to doing this.
As codified by ItalianUFO
Green: Fully confirmed, mostly confirmed, or essentially confirmed. It comes true just as I said it would, sometimes even down to the very words used by articles to describe what happened.
Blue: Inconclusively confirmed— very likely to be true, but it cannot be confirmed until a later date due to the data not yet being available (e.g. we won't know how much new solar capacity was constructed or how many EVs were sold this year until around April next year when the actual reports are released). Also to be used for predictions that are "75% true" in that they're not true enough to be green but not contentious enough to be yellow.
Yellow: Half confirmed, inconclusive to confirm or deny, tenuous and objectionable, or up to interpretation due to vague wording (e.g. "bionic enhancements are now available to the public" could include everything from wireless earbuds to full transhumanism; "a new treatment for cancer" could mean everything from a research study that saw lab improvements for one type of cancer all the way to a full cure for all cancers). Aspects of what I said come true, it's true to a certain extent, or implied aspects come true. That, or the trends are there and we need more time to see if I was even remotely right. Also includes "inconclusively debunked/untrue" if it's unlikely something came true but we'll only know until a later date.
Red: Completely debunked or mostly untrue. No possible way to spin it into even being kind of true. I completely missed the mark, something was canceled, or there was absolutely no mention of it. Just because a prediction is listed as red doesn't mean it's permanently debunked, vaporware, or a complete failure—especially in a year where one of history's deadliest pandemics has caused severe global disruption. I say this because some people get overly upset over seeing too much red and think "It was a terrible year" when a prediction can be deemed a failure just because "this new super-smartphone is predicted to be released in 2020" when it's actually releasing on January 1st, 2021.
Purple: Prediction itself failed, but not because it didn't come true or was delayed. In fact, the prediction I gave came too late or put something out into the future that happened this year (e.g. "the first flying saucer is planned for release in 2030" but it's actually released this year)
If there are certain specific aspects that are conclusively true but are marred by other specific aspects that are conclusively untrue, then colors'll mix, e.g. "GPT-3 is a powerful and generalized autoregressive language model that is as capable as humans and thus qualifies as the first artificial general intelligence ever made."
Obviously the first part is true. The second part is very tenuous because it's only roughly as capable as humans in certain areas (it can conceivably become superhuman in these areas if fine-tuned and can become roughly capable in entirely new areas only if the network is retrained on new data), and the third part is just a flat-out lie.
I'll also add comments in brackets where need be, mostly to explain or comment on contentious or failed predictions.
Main Timeline predictions
My Predictions
Disclaimer: Due to my atrocious batting average with them, I'm completely excluding any geopolitical predictions, save for those related to technology in some way.
Previous year predictions that came true...
2017:
2019:
Donald Trump becomes the 3rd U.S. President to get impeached by the House of Representatives, although he is likely not to be convicted by the Senate. He will finish his first term only to lose to a Democrat challenger during his bid for re-election. The Mueller investigations will not indict Trump until after he finishes his Presidency but some of his children and close associates will not be as lucky.
And remember my friend, future events such as these will affect you in the future.
- Deepfake videos & images will be retweeted and reposted on social media upwards of 2+ million times in the lead-up to the 2020 US Presidential Election [Surprisingly less of an issue than expected]
I maintain that though this year didn't see the mass proliferation, it will at least begin sometime this decade and become widespread for propaganda and disinformation warfare over the coming decades. The sheer scale of current disinformation warfare is so grand that it would be astonishing if this didn't happen at some point. Qboomers and Flatearthers are the prelude to the main act: microcultures of different relative realities forming en masse, spurred by the wired exerting itself on physical space.
Even if this doesn't affect elections in a major way sometime soon, we should expect to see more people believing things that are completely disconnected from reality. I wouldn't be surprised to see a mass subcultural occult revival where people genuinely believe magick exists and is affecting our world, based on doctored footage and fake information online. Not the sort of minor occult organizations we see today; more on the scale of hundreds of thousands of people for a given sect, acting mostly independently, scattered around the world, with dozens or even hundreds of these sects developing independently from one another in a similar timeframe. Especially in rural, low-income areas where people have no way to go see the supposed evidence that "exists" for themselves and are dependent on the internet for information. People who distrust institutions may even come to magickal explanations for the effects of the climate crisis in their regions, or any other explanation that isn't climate change itself, for that matter.
It doesn't have to be magick either. That's just one possible example. It could be aliens or any combination of things really. One group of people might believe we're under active invasion during a particularly bad firestorm caused by climate change while another group thinks some lovecraftian beast is storming through the woods obscured by the smoke. Dear god what a meeting that would be.
Saw this in the city the other day, not Satanists. I'm getting Aleister Crowley vibes. Got me thinking about the topic.
Anywho, thanks again for the predictions and keeping these going every year Yuli. You da man.
Ah, the much anticipated rainbow splat that highlights our miserable failures and our triumphant successes, courtesy of our resident writer-in-chief, a Mr. Yuli Ban: the gynoid-loving, nuclear-war-obsessed, devout futurist. Thank you for your service, you obstreperous, fantastical aberration of a human.
- Joe Biden wins the presidency (whether he will be able to finish his term remains to be seen), raising temper tantrums and uproar among Trump's base like nothing else seen. This might lead to a handful of domestic terror acts
Well. Does being a week out from the year still count?