365 replies to this topic

Poll: The Singularity (98 members have cast votes)

How do you feel about the Singularity?

  1. Excited (66 votes, 54.10%)
  2. Scared (14 votes, 11.48%)
  3. Skeptical (27 votes, 22.13%)
  4. Angry (3 votes, 2.46%)
  5. Neutral (6 votes, 4.92%)
  6. What's That? (1 vote, 0.82%)
  7. Other (5 votes, 4.10%)


#361
scientia
Member, 10 posts

> we're going to use such tools to automate large chunks of administrivia.
>
> If it's possible to use a network to parse massive amounts of, say, micro- and macroeconomic data, why wouldn't a board of directors or CEO use it?

One aspect of AGI theory is the ability to handle massive, noisy information. However, this is not actually related to automation. I would hope that a CEO or board of directors would use it; that is the point.

> it could also be used for different related functions, such as deducing the best decisions and most resource-efficient options for a business.

Finding relevant information is one function. If the problem is sufficiently constrained, you could use this information with traditional AI via brute-force optimization, or reduce the scope with machine learning tools. If it is not constrained, then you could use abstractive tools. That is the purpose of these systems.
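To make "brute-force optimization over a sufficiently constrained problem" concrete, here is a minimal sketch. The decision space (candidate prices and marketing budgets), the `UNIT_COST` constant, and the `projected_profit` model are all invented purely for illustration, not taken from any real business system:

```python
import itertools

# Hypothetical, constrained decision space: a handful of candidate
# unit prices and marketing budgets. Brute force is viable only
# because the space is tiny (3 x 3 = 9 combinations).
PRICES = [9.99, 14.99, 19.99]
BUDGETS = [10_000, 50_000, 100_000]
UNIT_COST = 7.50  # assumed cost per unit, made up for this example

def projected_profit(price: float, budget: float) -> float:
    """Toy demand model: higher prices suppress demand, ad spend lifts it."""
    demand = 50_000 * (20.0 / price) ** 1.5 + 0.8 * budget
    return demand * (price - UNIT_COST) - budget

# Exhaustively evaluate every combination and keep the best one.
best = max(itertools.product(PRICES, BUDGETS),
           key=lambda combo: projected_profit(*combo))
print(f"Best (price, budget): {best}; "
      f"projected profit: {projected_profit(*best):,.0f}")
```

Once the space is unconstrained or much larger, exhaustive search like this stops being feasible, which is where the machine learning and abstractive tools mentioned above would come in.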

 


> This effectively reduces the need for said executives by at least 99%, with only their human visages offering any real purpose.

We're already at that point. Corporate executives have a very low productivity rate. Reducing them by 90% would increase efficiency. I don't know what you mean by "human visage".

 

> What's more, if one corporation uses such a network but another doesn't, the AI-empowered version will almost certainly gain a massive advantage and dominate, leading to others following suit.

Well, yes, that's the general idea. Also, personal units.

 

So, what is the problem you foresee?



#362
lechwall
Member, 181 posts, Location: Sunny England

> When do you guys think the Singularity will happen?
>
> Depends how you define it.
>
> If we're going by the Wikipedia definition – "a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization" – then 2045–50 seems about right.
>
> And I don't look forward to it. I'm frequently amazed by the gushing enthusiasm expressed by certain members on this forum. We're talking about a situation in which non-human entities could emerge with trillions and trillions of times our capability in a very short timespan. It's like we'll become a bunch of ants (or heck, amoebas) hoping a very powerful and dangerous army soldier won't step on us.

On the first point, the timeline: I lean more towards the end of this century, not based on anything scientific, just on how long the problem has been underestimated, and I suspect there will be a lot more roadblocks ahead. Ultimately it's impossible to predict when this will occur, given that it will require conceptual breakthroughs rather than just scaling up the capability of an existing system. When will some genius develop a full explanation of what makes a human intelligent? It could be next week or it could be 70 years from now; it's impossible to predict. A better question is: assuming we had a full understanding of what makes a person intelligent, when is the earliest we could build it, given sufficient available processing power? On that basis you could say AGI becomes possible post-2030/40/50.

Yes, there are definite risks, and it is almost certainly opening Pandora's box, but ultimately humanity is doomed regardless of whether we birth AGI into the world. Given the amount of suffering that exists in the world, it seems immoral not to try to create an AGI that could help to alleviate it.


"The future will be better tomorrow.  If we do not succeed, then we run the risk of failure.   For NASA, space is still a high priority. The Holocaust was an obscene period in our nation's history. No, not our nation's, but in World War II. I mean, we all lived in this century. I didn't live in this century, but in this century's history. Republicans understand the importance of bondage between a mother and child. We're going to have the best-educated American people in the world."  Dan Quayle

 


#363
10 year march
Member, 290 posts

With a robot doing chemistry, we may have AI helping AGI researchers right now:


https://www.reddit.c...ntist_that_has/

#364
wjfox
Administrator, 12,410 posts, Location: London

Elon Musk says he's terrified of AI taking over the world and most scared of Google's DeepMind AI project

 

Ben Gilbert
Jul 27, 2020, 5:49 PM

Elon Musk has been sounding the alarm about the potentially dangerous, species-ending future of artificial intelligence for years.

In 2016, the billionaire said human beings could become the equivalent of "house cats" to new AI overlords. He has since repeatedly called for regulation and caution when it comes to new AI technology.

But of all the various AI projects in the works, none has Musk more worried than Google's DeepMind.

[...]

He said AI could surpass human intelligence in the next five years, even if we don't see the impact of it immediately.

"That doesn't mean that everything goes to hell in five years," he said. "It just means that things get unstable or weird."

 

https://www.business...020-7?r=US&IR=T



#365
10 year march
Member, 290 posts

I don't trust Elon. I think he just wants to shit on his competition at DeepMind and push regulations on them, so he has a greater chance of making AI first.

I hope he's truthful and right, and we end up with human-level AI in 5 years.

#366
Kynareth
Member, 189 posts

GPT-5 may be the general AI we are waiting for.

 

By 2030, computational capabilities will certainly be ready, but the software is another challenge.






