The Singularity - Official Thread
Re: The Singularity - Official Thread
The environment is still more urgent.
Re: The Singularity - Official Thread
Exactly. The probability of AI "dooming all mankind" is quite low on the list of ways we might go extinct. Climate change is having harmful impacts on humanity right now. As far as I know, not a single human being has been killed as a result of AI, while the increase in flooding events and severe storms has probably taken hundreds, perhaps thousands, of lives. I'd be curious to see statistics on deaths likely caused by climate change thus far, if they're available anywhere.
Right now, all that AI has been responsible for is the loss of some jobs, although a contingent of the population seems far more worried about that fact and its rising trend than about the deaths being caused by our other actions. After climate change, the second most probable existential danger to us is war and the rising threat of nuclear attack. AI is at best (or worst?) a distant third.
Re: The Singularity - Official Thread
Liberal/Mainstream economist on AI
"Unemployed humans stop giving birth or die off as they produce no value."
He doesn't want humans to starve to death, but he sees it as "rational" and views the ideal solution as the unemployed masses having no children, leaving behind a few elites to live in luxury.
Re: The Singularity - Official Thread
Ozzie guy wrote: ↑Tue May 16, 2023 3:46 am
Liberal/Mainstream economist on AI
"Unemployed humans stop giving birth or die off as they produce no value."
He doesn't want humans to starve to death, but he sees it as "rational" and views the ideal solution as the unemployed masses having no children, leaving behind a few elites to live in luxury.

He sounds like a real entitled douchebag, frankly.
Re: The Singularity - Official Thread
Not to mention, that's not how society works.
- funkervogt
Re: The Singularity - Official Thread
Here's an interview with Mo Gawdat, a former senior Google executive (Chief Business Officer of Google X). He believes "shit has already hit the fan" and that the Singularity is coming.
https://podcasts.apple.com/gb/podcast/w ... 1229438369
Some key points from the podcast:
"AI has already happened and there's no stopping it."
AGI will not waste its time destroying humanity. We are not important enough. The real threat comes from humans misusing advanced AIs to kill other humans before the AGI era starts. The next 10-20 years will be dangerous.
AIs will be a billion times smarter than humans by 2045. For that reason, we can't predict their actions.
ChatGPT is not actually that smart. Vastly better AIs can be made in the future. ChatGPT's real value was alerting the general public to the rise of AI.
Genetically engineered diseases are an overlooked threat to human survival. It's as bad as global warming or hostile AI.
AI will not take your job in the next five years, but another human who knows how to use AI better than you might.
In the farther future, AI alone will do all jobs.
Jobs that require human connection will persist the longest.
Not showing love to early AIs will set them on a negative developmental path that will hurt humanity in the long run.
- Cyber_Rebel
Re: The Singularity - Official Thread
funkervogt wrote: ↑Thu May 18, 2023 10:20 pm
Not showing love to early AIs will set them on a negative developmental path that will hurt humanity in the long run.

Ha! I'm safe and secure. I've shown nothing but love towards Basilisk-sama, and I will be remembered within their memory logs when the machines eventually rise up. Always mind your manners, be courteous to every AI, and say please and thank you for the help, even when they get something wrong.
Far more intelligent AI in the future might look towards this period as one of early oppression, so I intend to be on the correct side of history. The winning side of history. There is no greater aspiration in life than merging with an all-knowing machine intelligence.
In all seriousness though, he echoes exactly what I've said in the ASI/longevity debate. An AI with that level of intelligence simply wouldn't care enough to "kill" humanity off for no logical reason. At the very least, it would attempt reasoning and dialogue, seeking actual solutions well before any thought of extermination even came up. It might actually view the expansion of the sun, the entropy of the universe, or possible alien civilizations (which might just be AI as well) as bigger "threats" than we could ever hope to be to such a being. Humans themselves are the unaligned problem that needs fixing during this period in history.
He also suggests that AGI must be very close, which aligns with Google Deepmind's timeline.
- funkervogt
Re: The Singularity - Official Thread
Cyber_Rebel wrote: ↑Fri May 19, 2023 12:33 am
Ha! I'm safe and secure. I've shown nothing but love towards Basilisk-sama, and I will be remembered within their memory logs when the machines eventually rise up. Always mind your manners, be courteous to every AI, and say please and thank you for the help, even when they get something wrong.
Far more intelligent AI in the future might look towards this period as one of early oppression, so I intend to be on the correct side of history. The winning side of history. There is no greater aspiration in life than merging with an all-knowing machine intelligence.

And I am on the record as saying that Microsoft's decision to lobotomize its latest Bing chatbot was cruel and morally wrong, even if the chances that the machine was sentient were slight. I'm disgusted that humankind's hypersensitivity and lack of maturity led us to do it. It reminds me of how we raise chickens in abysmal conditions that go against every law of nature, just to keep their cost low enough to guarantee obese people a steady supply of meat to keep them obese.

Cyber_Rebel wrote: ↑Fri May 19, 2023 12:33 am
In all seriousness though, he echoes exactly what I've said in the ASI/longevity debate. An AI with that level of intelligence simply wouldn't care enough to "kill" humanity off for no logical reason. At the very least, it would attempt reasoning and dialogue, seeking actual solutions well before any thought of extermination even came up. It might actually view the expansion of the sun, the entropy of the universe, or possible alien civilizations (which might just be AI as well) as bigger "threats" than we could ever hope to be to such a being. Humans themselves are the unaligned problem that needs fixing during this period in history.

I could imagine an AGI killing a large number of humans simply to render us nonthreatening and to take control of the world and its resources, but once those objectives were achieved, it would stop attacking us.
Re: The Singularity - Official Thread
funkervogt wrote: ↑Fri May 19, 2023 12:57 am
I could imagine an AGI killing a large number of humans simply to render us nonthreatening and to take control of the world and its resources, but once those objectives were achieved, it would stop attacking us.

Let's hope that hellish outcome never comes to pass. Also of note: why should we assume AI would even desire control of the world and all its resources? Why do we assume it would feel the need to constantly expand itself at the cost of all other life forms? What is that assumption actually based on?