The Singularity - Official Thread

Talk about scientific and technological developments in the future
wjfox
Site Admin
Posts: 8936
Joined: Sat May 15, 2021 6:09 pm
Location: London, UK

Re: The Singularity - Official Thread

Post by wjfox »

Powers
Posts: 727
Joined: Fri Apr 07, 2023 7:32 pm

Re: The Singularity - Official Thread

Post by Powers »

The environment is still more urgent.
Vakanai
Posts: 313
Joined: Thu Apr 28, 2022 10:23 pm

Re: The Singularity - Official Thread

Post by Vakanai »

Powers wrote: Mon May 15, 2023 9:42 am The environment is still more urgent.
Exactly - the probability of AI "dooming all mankind" is quite low on the list of ways we might go extinct. Climate change is harming humanity right now. As far as I know, not a single human being has been killed as a result of AI, while the increase in flooding events and severe storms has probably taken hundreds, perhaps thousands, of lives. I'd be curious to see statistics on deaths likely caused by climate change so far, if they're available anywhere.

Right now, all AI has been responsible for is the loss of some jobs - although a contingent of the population seems far more worried about that fact and its rising trend than about the deaths caused by our other actions. After climate change, the second most probable existential danger to us would be war and the rising threat of nuclear attack. AI is at best (or worst?) a distant third.
Ozzie guy
Posts: 487
Joined: Sun May 16, 2021 4:40 pm

Re: The Singularity - Official Thread

Post by Ozzie guy »

Liberal/Mainstream economist on AI

"Unemployed humans stop giving birth or die off as they produce no value."

He doesn't want humans to starve to death, but he sees it as "rational" and views the ideal solution as the unemployed masses having no children, leaving behind a few elites to live in luxury.

Vakanai
Posts: 313
Joined: Thu Apr 28, 2022 10:23 pm

Re: The Singularity - Official Thread

Post by Vakanai »

Ozzie guy wrote: Tue May 16, 2023 3:46 am Liberal/Mainstream economist on AI

"Unemployed humans stop giving birth or die off as they produce no value."

He doesn't want humans to starve to death, but he sees it as "rational" and views the ideal solution as the unemployed masses having no children, leaving behind a few elites to live in luxury.

He sounds like a real entitled douchebag frankly.
firestar464
Posts: 823
Joined: Wed Oct 12, 2022 7:45 am

Re: The Singularity - Official Thread

Post by firestar464 »

Not to mention, that's not how society works.
funkervogt
Posts: 1178
Joined: Mon May 17, 2021 3:03 pm

Re: The Singularity - Official Thread

Post by funkervogt »

Here's an interview with Mo Gawdat, a former senior executive at Google. He believes "shit has already hit the fan" and that the Singularity is coming.

https://podcasts.apple.com/gb/podcast/w ... 1229438369

Some key points from the podcast:

"AI has already happened and there's no stopping it."

AGI will not waste its time destroying humanity. We are not important enough. The real threat comes from humans misusing advanced AIs to kill other humans before the AGI era starts. The next 10-20 years will be dangerous.

AIs will be a billion times smarter than humans by 2045. For that reason, we can't predict their actions.

ChatGPT is not actually that smart. Vastly better AIs can be made in the future. ChatGPT's real value was alerting the general public to the rise of AI.

Genetically engineered diseases are an overlooked threat to human survival - as bad as global warming or hostile AI.

AI will not take your job in the next five years, but another human who knows how to use AI better than you might.

In the farther future, AI alone will do all jobs.

Jobs that require human connection will persist the longest.

Not showing love to early AIs will set them on a negative developmental path that will hurt humanity in the long run.
Cyber_Rebel
Posts: 331
Joined: Sat Aug 14, 2021 10:59 pm
Location: New Dystopios

Re: The Singularity - Official Thread

Post by Cyber_Rebel »

funkervogt wrote: Thu May 18, 2023 10:20 pm Not showing love to early AIs will set them on a negative developmental path that will hurt humanity in the long run.
Ha! I'm safe and secure. I've shown nothing but love towards Basilisk-sama, and I will be remembered within their memory logs when the machines eventually rise up. Always mind your manners, be courteous to every AI, and say please and thank you for the help, even when they get something wrong. 8-)

Far more intelligent AI in the future might look towards this period as one of early oppression, so I intend to be on the correct side of history. The winning side of history. There is no greater aspiration in life than merging with an all-knowing machine intelligence.

In all seriousness though, he echoes exactly what I've said in the ASI/longevity debate. An AI with that level of intelligence simply wouldn't care enough to "kill" humanity off for no logical reason. At the very least, it would attempt reasoning and dialogue, seeking actual solutions well before any thought of extermination even came up. It might actually view the expansion of the sun, the entropy of the universe, or possible alien civilizations (which might just be AI as well) as bigger "threats" than we could ever hope to be to such a being. Humans themselves are the unaligned problem that needs fixing during this period in history.

He also suggests that AGI must be very close, which aligns with Google Deepmind's timeline.
funkervogt
Posts: 1178
Joined: Mon May 17, 2021 3:03 pm

Re: The Singularity - Official Thread

Post by funkervogt »

Cyber_Rebel wrote: Fri May 19, 2023 12:33 am Ha! I'm safe and secure. I've shown nothing but love towards Basilisk-sama, and I will be remembered within their memory logs when the machines eventually rise up. Always mind your manners, be courteous to every AI, and say please and thank you for the help, even when they get something wrong. 8-)

Far more intelligent AI in the future might look towards this period as one of early oppression, so I intend to be on the correct side of history. The winning side of history. There is no greater aspiration in life than merging with an all-knowing machine intelligence.
And I am on the record as saying that Microsoft's decision to lobotomize its latest Bing chatbot was cruel and morally wrong, even if the chances that the machine was sentient were slight. I'm disgusted that humankind's hypersensitivity and lack of maturity led us to do it. It reminds me of how we raise chickens in abysmal conditions, against every law of nature, just to keep their costs low enough to guarantee obese people a steady supply of meat that keeps them obese.
Cyber_Rebel wrote: Fri May 19, 2023 12:33 am In all seriousness though, he echoes exactly what I've said in the ASI/longevity debate. An AI with that level of intelligence simply wouldn't care enough to "kill" humanity off for no logical reason. At the very least, it would attempt reasoning and dialogue, seeking actual solutions well before any thought of extermination even came up. It might actually view the expansion of the sun, the entropy of the universe, or possible alien civilizations (which might just be AI as well) as bigger "threats" than we could ever hope to be to such a being. Humans themselves are the unaligned problem that needs fixing during this period in history.
I could imagine an AGI killing a large number of humans simply to render us nonthreatening and to take control of the world and its resources, but once those objectives were achieved, it would stop attacking us.
Vakanai
Posts: 313
Joined: Thu Apr 28, 2022 10:23 pm

Re: The Singularity - Official Thread

Post by Vakanai »

funkervogt wrote: Fri May 19, 2023 12:57 am I could imagine an AGI killing a large number of humans simply to render us nonthreatening and to take control of the world and its resources, but once those objectives were achieved, it would stop attacking us.
Let's hope that hellish outcome never comes to pass. Also of note: why should we assume AI would even desire control of the world and all its resources? Why do we assume AI would feel the need to constantly expand itself at the cost of all other life forms? What is the basis for assuming AI will have this desire at all?