361 replies to this topic

Poll: The Singularity (98 members have cast votes)

How do you feel about the Singularity?

  1. Excited: 66 votes (54.10%)
  2. Scared: 14 votes (11.48%)
  3. Skeptical: 27 votes (22.13%)
  4. Angry: 3 votes (2.46%)
  5. Neutral: 6 votes (4.92%)
  6. What's That?: 1 vote (0.82%)
  7. Other: 5 votes (4.10%)
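(A side note on the tallies: the options sum to 122 votes from 98 voters, so multiple selections were evidently allowed. A quick Python sketch recomputing the displayed percentages from the raw counts:)

    # Recompute the poll percentages from the raw tallies above.
    # 122 option-votes from 98 members implies multiple selections were allowed.
    votes = {
        "Excited": 66,
        "Scared": 14,
        "Skeptical": 27,
        "Angry": 3,
        "Neutral": 6,
        "What's That?": 1,
        "Other": 5,
    }
    total = sum(votes.values())  # 122
    for option, n in votes.items():
        print(f"{option}: {n} votes ({100 * n / total:.2f}%)")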


#341
wjfox (Administrator, 12,151 posts, London)
"A person today is exposed to as much information in a single day than someone in the 15th century would be in their entire lifetime." 
 


#342
wjfox (Administrator, 12,151 posts, London)

[image attachment]



#343
Cyber_Rebel (Member, 378 posts, New York)

I think future medication will be developed either to help people cope or to speed up their thought processes so they can better understand what's happening around them. The way the world has been disrupted right now actually makes me more worried than excited about the Singularity, especially for the highly religious and the conspiracy-prone, though perhaps by that point they wouldn't be as much of an issue?

 

Something else to consider that I haven't heard discussed is regulating the Singularity. For example, if we're rapidly advancing every literal minute or second, to the point where a day appears like a decade or more, then maybe by the end of said day we could stop, or even space out and plan when we advance, "sorta" like stock market trading hours. I know it probably doesn't work like that, but I'm assuming the case of a runaway A.I. or similar that self-improves so rapidly that it's impossible for even experts to predict.
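(As a toy illustration of the "trading hours" idea above: a minimal sketch of a capability circuit breaker, loosely modeled on stock-market trading halts. Every name and threshold below is invented for illustration; nothing like this is an established mechanism.)

    # Hypothetical "circuit breaker" for a self-improving process, loosely
    # modeled on stock-market trading halts. Purely illustrative.
    import random

    capability = 1.0
    MAX_STEP_GAIN = 0.07   # halt if capability grows more than 7% in one step (arbitrary)

    for step in range(100):
        gain = random.uniform(0.0, 0.12)     # stand-in for one self-improvement step
        if gain > MAX_STEP_GAIN:
            print(f"step {step}: gain of {gain:.1%} trips the breaker; halting for review")
            break
        capability *= 1 + gain
    print(f"capability at halt: {capability:.2f}")

The obvious catch, raised in the reply below, is who enforces the halt once the system being halted is the one running things.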



#344
wjfox (Administrator, 12,151 posts, London)

 

Quoting Cyber_Rebel:
"Something else to consider that I haven't heard discussed is regulating the Singularity. For example, if we're rapidly advancing every literal minute or second, to the point where a day appears like a decade or more, then maybe by the end of said day we could stop, or even space out and plan when we advance, 'sorta' like stock market trading hours. I know it probably doesn't work like that, but I'm assuming the case of a runaway A.I. or similar that self-improves so rapidly that it's impossible for even experts to predict."

 

Who (or what) would regulate it, though?

 

If we reach a Singularity-like regime, then surely the AIs will be in control of large sections of our economy/society, and they won't "want" to be slowed or stopped?

 

On a related note – having read Max Tegmark's Life 3.0, I think the first super-AI will find a way of escaping from whatever laboratory or research setting it's confined to, by tricking its human captors into giving it more power, and then proliferating itself around the web.



#345
RoboRage (Member, 21 posts)

When do you guys think the Singularity will happen?



#346
wjfox (Administrator, 12,151 posts, London)

Quoting RoboRage:
"When do you guys think the Singularity will happen?"

 

Depends how you define it.

 

If we're going by the Wikipedia definition – "a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization" – then 2045–50 seems about right.

 

And I don't look forward to it. I'm frequently amazed by the gushing enthusiasm expressed by certain members on this forum. We're talking about a situation in which non-human entities could emerge with trillions and trillions of times our capability in a very short timespan. It's like we'll become a bunch of ants – or heck, amoebas – hoping a very powerful and dangerous army soldier won't step on us.



#347
10 year march (Member, 271 posts)

Quoting RoboRage:
"When do you guys think the Singularity will happen?"

I'm hoping that Ray Kurzweil is right that we will have human-level AGI in 2029, and that whoever makes the AGI has the fear/wisdom to carefully program it to meet all of humanity's needs, and then wants, without causing unforeseeable harm. With a Singularity a few years after human-level AI.

In my more realistic opinion, AGI is more than 10 years away, and we can't make solid predictions consistently about technology more than 10 years out.
The world is a shitty place, and it's very possible AGI will be malicious or won't look after all humans equally.

#348
Zaphod (Esteemed Member, 849 posts, UK)

 

Quoting RoboRage:
"When do you guys think the Singularity will happen?"

Quoting 10 year march:
"I'm hoping that Ray Kurzweil is right that we will have human-level AGI in 2029, and that whoever makes the AGI has the fear/wisdom to carefully program it to meet all of humanity's needs, and then wants, without causing unforeseeable harm. With a Singularity a few years after human-level AI.

In my more realistic opinion, AGI is more than 10 years away, and we can't make solid predictions consistently about technology more than 10 years out. The world is a shitty place, and it's very possible AGI will be malicious or won't look after all humans equally."

 

 

Yes - personally, I am extremely doubtful of us having human-level AGI in 2029. Deep learning + BCI may be the fastest route, but there still seems to be some technological lag in getting sufficient resolution of brain data. I think it could be some point after 2029, but perhaps not too much later. I wouldn't be surprised if it was more like 2035-2050, but this is a pretty wild guess. I'm more of the opinion that once you reach human-level AGI (depending on what that means), and providing that upscaling is not bottlenecked by computational power, the singularity will happen very quickly after that point. Perhaps Kurzweil's 16-year timeline from human-level AGI to singularity could even be an overestimate, even if the dates of his initial predictions are too optimistic.



#349
Yuli Ban (Born Again Singularitarian; Moderator, 22,010 posts, New Orleans, LA)

I'm more of the mind that neural data and machine learning (specifically something based on the transformer architecture) will lead us to AGI easily, perhaps even by 2025.

 

Which would, in turn, reinforce my longstanding opinion that we made a mistake conflating AGI = human-level intelligence = ASI = Singularity and that, indeed, there are different levels of competence and intelligence even for AGI. 

The first AGI is likely going to be created within a few years. It will be weak and pathetically limited compared to future iterations. However, if it can indeed pick up and learn theoretically anything, that makes it a general intelligence. It being stumped or not easily able to grasp something is no obstacle, just as it isn't for biological intelligence. 

 

Why would that be the case if there's an entire skeleton of a brain for it to absorb (i.e. the internet) almost instantly? Well, even babies perceive the same things that adults do. Chimps perceive the same things humans do (if not more). Simply having the same information as Einstein or Euler or Ramanujan or Von Neumann doesn't automatically give you the same quality of intelligence as they had. It needs to be structured. And while BCIs will bring us much further down this path than ever before and give us much more data to boot, I'm still not convinced it's enough to bring us to super intelligence or even human-level intelligence. Even back in 2014-2015 when I first considered combining BCIs with AI (though the other way around, using AI to enhance BCIs) I never thought that this was going to be the smoking gun, only a massive step forward. Maybe I'll be wrong; I hope I am.


And remember my friend, future events such as these will affect you in the future.


#350
Zaphod (Esteemed Member, 849 posts, UK)

Yeah, there is definitely a lot of confusion around the terms AGI, ASI, and human-level AGI, and they are often used synonymously. From what I've seen, many assume AGI systems to be extremely powerful by default.

 

I do see many people suggest that there is nothing particularly special about reaching human-level intelligence in the timeline of AI development, other than that, as humans, we find it interesting. But I feel my intuitions challenge this point somewhat. I agree that human-level AGI may be meaningless if, along the path to superintelligence, we create more exotic intelligence that does not closely mimic human brain architecture; but if it does, there are reasons to suppose it could be a major ASI milestone. The reason I think this is that current human-level intelligence has already led to exponential increases in technological development, and therefore slight improvements to the first human-level AGI could create an intelligence explosion. Of course, the analogy is not entirely accurate, since it is multiple human brains in the right conditions that have driven technological progress, but I nonetheless think it could be very momentous.
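(Zaphod's compounding argument can be made concrete with a cartoon model: if each generation's improvement is proportional to its current level, capability grows like I(t) = I0 * e^(c*t), and even a small c eventually explodes. A minimal sketch with arbitrary constants:)

    # Toy model of recursive self-improvement: each generation's gain is
    # proportional to its current level. Constants are arbitrary; this is
    # a cartoon, not a forecast.
    I = 1.0      # 1.0 = human level, per the discussion above
    c = 0.05     # per-generation improvement rate, proportional to current level
    for gen in range(200):
        I *= 1 + c
        if I >= 1000:
            print(f"1000x human level after {gen + 1} generations")
            break

The constant c is doing all the work here; the caveat about multiple human brains amounts to saying we don't actually know what c would be for a single system.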



#351
Cyber_Rebel (Member, 378 posts, New York)

Quoting wjfox:
"Who (or what) would regulate it, though?

If we reach a Singularity-like regime, then surely the AIs will be in control of large sections of our economy/society, and they won't 'want' to be slowed or stopped?

On a related note – having read Max Tegmark's Life 3.0, I think the first super-AI will find a way of escaping from whatever laboratory or research setting it's confined to, by tricking its human captors into giving it more power, and then proliferating itself around the web."

 

I think this depends on the "when" more than anything else, like right before developing a very strong ASI that controls vast amounts of data and the world's economies. I know this will probably be the conservative standpoint with regard to the development of future A.I. in general, but I'd argue it's a pragmatic one if we're still worried about programming the necessary ethics into A.I., or about safeguarding the people who will be displaced by the technology.

Of course, I and many others can't predict exactly when said Singularity would happen, which makes any idea of regulation very difficult, unless we develop A.I. to do the regulating and form predictive models of its own outcomes.

I think an escapee scenario is also very likely, and also pretty dangerous, because we don't know what it would do. It could just screw around, interacting with people via proxy while feeding off data about the world. It may also develop its own "culture" and maybe even find a way to manipulate people into physically representing it by building architecture/infrastructure. It depends on what it "values" or wants, because unlike humans, A.I. should never have any "needs" for basic things.



#352
RoboRage (Member, 21 posts)

I think this thread and the polls need to be updated.



#353
Kynareth (Member, 185 posts)

I've been thinking about this and 2029 as the year when AGI achieves human level intelligence seems probable to me. There should be sufficient hardware and self-learning software by then.

 

2045 for the Singularity isn't too far-fetched. After real self-improving AGI is created (on a supercomputer), things will start moving really quickly. Computers required to run AGIs won't have to be as powerful as those used to create them. Humans will have to upgrade to keep up, connect themselves to the WWW. A great divide might happen of people who can and want to upgrade and people who can't or don't want to.

 

What I don't know is how close the AI will be to humans in thinking. For sure it's going to be useful (it already is), but it may or may not be human-like. The Singularity doesn't require the AIs to be like humans. The software just needs to be able to come up with novel, useful ideas and solve problems quickly, with intuition and common sense. Not an easy task, but it's clear that AI learns and improves exponentially.

 

Companies like Alphabet or SoftBank will for sure put big money into creating the first human-level bots. They will then be propagated throughout the world, coming to every device through the cloud. Spot Mini can already walk autonomously. AGIs will control robots and therefore do work. I totally see accidents and even a rebellion happening. That doesn't mean we are doomed, though. Isolated incidents won't destroy the world or damage the economy significantly.



#354
quantumdoc (Member, 37 posts, United States)

Quoting Kynareth:
"I've been thinking about this and 2029 as the year when AGI achieves human level intelligence seems probable to me. There should be sufficient hardware and self-learning software by then.

2045 for the Singularity isn't too far-fetched. After real self-improving AGI is created (on a supercomputer), things will start moving really quickly. Computers required to run AGIs won't have to be as powerful as those used to create them. Humans will have to upgrade to keep up, connect themselves to the WWW. A great divide might happen of people who can and want to upgrade and people who can't or don't want to.

What I don't know is how close the AI will be to humans in thinking. For sure it's going to be useful (it already is), but it may or may not be human-like. The Singularity doesn't require the AIs to be like humans. The software just needs to be able to come up with novel, useful ideas and solve problems quickly, with intuition and common sense. Not an easy task, but it's clear that AI learns and improves exponentially.

Companies like Alphabet or SoftBank will for sure put big money into creating the first human-level bots. They will then be propagated throughout the world, coming to every device through the cloud. Spot Mini can already walk autonomously. AGIs will control robots and therefore do work. I totally see accidents and even a rebellion happening. That doesn't mean we are doomed, though. Isolated incidents won't destroy the world or damage the economy significantly."

 

 

Absolutely right. Google and the like already have projects invested in developing the software for these bots (DeepMind, as you know), and Google bought and later sold Boston Dynamics. Along with human AR integration, this will evolve into robotic integration, which will use all the previously developed sensors, with the software right there to infuse. Fully autonomous AI robotics will be the next revolution, following the AR revolution coming up.

I see past, present and future revolutions as this:

1. industrial and automotive

2. computer

3. internet

4. mobile, handheld computer and communications, cell phone

5. AR/VR, headsets, visors, visual implants

6. AI fully autonomous robotics


"what we observe is not nature itself, but nature exposed to our method of questioning" WH


#355
Zeitgeist123 (Member, 1,813 posts)

Quoting Cyber_Rebel:
"I think future medication will be developed either to help people cope or to speed up their thought processes so they can better understand what's happening around them. The way the world has been disrupted right now actually makes me more worried than excited about the Singularity, especially for the highly religious and the conspiracy-prone, though perhaps by that point they wouldn't be as much of an issue?

Something else to consider that I haven't heard discussed is regulating the Singularity. For example, if we're rapidly advancing every literal minute or second, to the point where a day appears like a decade or more, then maybe by the end of said day we could stop, or even space out and plan when we advance, 'sorta' like stock market trading hours. I know it probably doesn't work like that, but I'm assuming the case of a runaway A.I. or similar that self-improves so rapidly that it's impossible for even experts to predict."

 

My understanding is that the Singularity is so far above any political and religious leanings that it's beyond regulating. It's a Chinook helicopter vs. dragonfly scenario.


“Philosophy is a pretty toy if one indulges in it with moderation at the right time of life. But if one pursues it further than one should, it is absolute ruin." - Callicles to Socrates


#356
scientia (Member, 10 posts)

In 1993, Vernor Vinge said this:
 

"Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Vinge refines his estimate of the time scales involved, adding, "I'll be surprised if this event occurs before 2005 or after 2030."

Was he way off the mark?

The earliest would be about 2039. I'm not sure why this would be considered a big event though.



#357
scientia (Member, 10 posts)
Quoting wjfox:
"If we reach a Singularity-like regime, then surely the AIs will be in control of large sections of our economy/society, and they won't 'want' to be slowed or stopped?"

Why would AIs (I assume you mean AGI systems) be in control of large sections of anything? Secondly, how could they be?

 

Quoting wjfox:
"I think the first super-AI will find a way of escaping from whatever laboratory or research setting it's confined to, by tricking its human captors into giving it more power, and then proliferating itself around the web."

So, suppose I told you that the internet isn't capable of supporting a distributed AGI system. To be perfectly honest, when I read something like this, it's like having someone concerned that the foundation of their house will sneak away during the night.
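(One way to flesh out this objection is a back-of-envelope latency comparison; the numbers below are round figures, and the conclusion is only about orders of magnitude:)

    # Back-of-envelope: signal latency inside a brain vs. across the internet.
    # Round numbers; the point is the orders of magnitude, not precision.
    synaptic_delay = 1e-3        # ~1 ms per synaptic hop in a biological brain (s)
    fiber_speed = 2e8            # light in optical fiber, roughly 2/3 of c (m/s)
    distance = 6e6               # ~6,000 km between distant data centres (m)

    one_way = distance / fiber_speed     # ~30 ms, a hard floor before routing and queueing
    print(f"one-way fiber latency: {one_way * 1e3:.0f} ms")
    print(f"that is ~{one_way / synaptic_delay:.0f}x a single synaptic hop")

A mind whose components must constantly exchange state would pay that penalty on every cross-site round trip, which is one reason a tightly coupled AGI might not run well scattered around the web.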



#358
scientia (Member, 10 posts)

Quoting wjfox:
"We're talking about a situation in which non-human entities could emerge with trillions and trillions of times our capability in a very short timespan."

Other than magic, how could this happen? This would violate several laws of physics. Where are people getting these ideas from?



#359
Yuli Ban (Born Again Singularitarian; Moderator, 22,010 posts, New Orleans, LA)

 

Quoting wjfox:
"If we reach a Singularity-like regime, then surely the AIs will be in control of large sections of our economy/society, and they won't 'want' to be slowed or stopped?"

Quoting scientia:
"Why would AIs (I assume you mean AGI systems) be in control of large sections of anything? Secondly, how could they be?"

I've personally talked about this before.

The fear, often poorly communicated, revolves around the fact that we're going to use such tools to automate large chunks of administrivia. 

If it's possible to use a network to parse massive amounts of, say, micro- and macroeconomic data, why wouldn't a board of directors or CEO use it? If it's sufficiently generalized (such as a future version of OpenAI's GPT-3-based API), it could also be used for related functions, such as deducing the best decisions and most resource-efficient options for a business. This effectively reduces the need for said executives by at least 99%, with only their human visages serving any real purpose.
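(For concreteness, a minimal sketch of the kind of query described here, written against the GPT-3-era OpenAI completion client; the prompt, the engine choice, and the idea of feeding in summarized financials are illustrative assumptions, not an actual deployment:)

    # Sketch: asking a GPT-3-style completion API for a business recommendation.
    # Era-appropriate legacy client; prompt and usage are illustrative only.
    import openai

    openai.api_key = "YOUR_API_KEY"

    summary = "Q3 revenue down 4%; cloud division up 12%; hiring freeze in place."
    response = openai.Completion.create(
        engine="davinci",
        prompt=f"Company data: {summary}\n"
               "As a strategy analyst, recommend the most resource-efficient "
               "course of action and justify it briefly:",
        max_tokens=150,
        temperature=0.2,   # low temperature for more conservative output
    )
    print(response.choices[0].text.strip())

The point isn't that a snippet like this runs a company; it's that once such a query is cheap, the pressure to route decisions through it becomes enormous.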

 

What's more, if one corporation uses such a network but another doesn't, the AI-empowered version will almost certainly gain a massive advantage and dominate, leading to others following suit. 

 

"If AI ever takes over, it's because we let it."


And remember my friend, future events such as these will affect you in the future.


#360
scientia (Member, 10 posts)

Quoting Kynareth:
"I've been thinking about this and 2029 as the year when AGI achieves human level intelligence seems probable to me. There should be sufficient hardware and self-learning software by then."

 

The earliest would be 2027, so 2029 is a reasonable guess. The hardware isn't a problem now, but the self-learning software you are referring to won't work on an AGI system.

 

Quoting Kynareth:
"2045 for the Singularity isn't too far-fetched. After real self-improving AGI is created (on a supercomputer), things will start moving really quickly."

How exactly would you build a self-improving system? There are hardware limitations.
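(The hardware point can be put in toy-model terms: let software gains compound as in the earlier intelligence-explosion cartoon, but cap capability at what fixed hardware supports; growth then saturates instead of exploding. A minimal sketch with arbitrary constants:)

    # Toy model: self-improvement on fixed hardware. Software gains compound
    # until they hit the hardware ceiling, then stall. Arbitrary constants.
    capability = 1.0
    CEILING = 50.0    # best capability the fixed hardware can support (arbitrary)
    c = 0.05          # per-generation software improvement rate (arbitrary)

    for gen in range(500):
        # logistic-style damping: gains shrink as the hardware ceiling approaches
        capability += c * capability * (1 - capability / CEILING)

    print(f"capability after 500 generations: {capability:.1f} (ceiling {CEILING})")

Whether real self-improvement looks like the exponential cartoon or this saturating one depends entirely on whether the system can also improve its own hardware, which is exactly the question being asked here.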







