
Welcome to FutureTimeline.forum

Basic roadmap of milestones to the singularity: any suggestions of things to add or change?

Tags: AI, AGI, prediction, science, timeline, suggestions, feedback, future, Singularity, Human level AGI

5 replies to this topic

#1
Set and Meet Goals

    Member

  • 442 posts

Proto-AGI/first-generation AGI is created: This AI is almost certainly many times dumber than humans, but it is genuinely general and actually learns in a general way, like humans do. The two smartest AI-focused futurists I know have either predicted this before the end of 2024 or endorsed the before-end-of-2024 prediction as reasonable. (Socially, I'm not sure whether it's fine or rude to share other people's predictions.)

 

Human-level AGI: AGI will eventually be improved to the human level.

 

End of the GAP: The GAP is a concept I have been thinking about, though I'm sure other people have come up with it before me, as I am just some idiot on the internet. The GAP is a period of time, probably lasting a few years (maybe five), during which human-level AGI has been created but its impact on science and technology is low. A human-level AGI will at first just be one additional, super-expensive worker; it will take some time for human-level AGI to become cheaper and much smarter than humans before it can impact science and technology in a meaningful way. An early human-level AGI will research AI more slowly than a human AI researcher, since AI researchers are smarter than most humans, and humans still have better bodies than robots/computers do. Outside of science there will still be major impacts in areas such as politics, as people will basically be shitting their pants in fear or excitement.

 

End of depression for most people: The world has developed to a point where it's unlikely anyone will be unhappy for an extended period of time; most people now just enjoy life.

 

Utopia: You now have everything you specifically want. This is different for everyone, but since we are all human it will likely occur around the same time for most people.

 

Perfection: Science has been perfected; hopefully we can prevent the death of the universe and also make our own death literally impossible.



#2
Ericthetrekkie

    Member

  • 38 posts
  • Location: Miami, USA

In between the end of the GAP and the end of depression, I'd say there would be a point where the average AI becomes smarter and faster-thinking than humans on a regular basis. That would scare some people, but it would also change people's perspective on AI/robots, since there has always been a sentiment that machines will never do things as well as their creators can.



#3
caltrek

    Member

  • 12,755 posts

I don't mind the idea of an AI or an AGI that is smarter than me, although I would add qualifications. The main one being that if it is available to me personally as a resource, I would want to reserve the right to make decisions about my life. That is to say, I wouldn't mind consulting a highly developed AI or AGI for an opinion, or for a list of options with the positives and negatives of each.

 

What I do not like is the idea of some corporation or government utilizing such a resource without sharing it, and/or with the goal of finding yet another way to screw the working man.


The principles of justice define an appropriate path between dogmatism and intolerance on the one side, and a reductionism which regards religion and morality as mere preferences on the other. - John Rawls


#4
Raklian

    An Immortal In The Making

  • Moderator
  • 7,513 posts
  • Location: Raleigh, NC

Quoting caltrek: "I don't mind the idea of an AI or an AGI that is smarter than me, although I would add qualifications. The main one being that if it is available to me personally as a resource, I would want to reserve the right to make decisions about my life."

 

 

The trouble is: how will you use something that is smarter than you as a resource? How will you know it is NOT manipulating you to serve its own completely obscure objectives?

 

I mean, an intelligence that can make predictions decades into the future can (inadvertently or not) deceive you into thinking you're in charge of your life when in fact it is shepherding you toward an eventuality it determined the moment you started using it for the first time. You know, unlike humans, AI is not restricted to singular, linear thinking. It can run virtually unlimited calculations in parallel, and in multiple dimensions as well, so it's not much of a hurdle for it to outmaneuver you in all aspects of your life.

 

We can manipulate mice to do what we want them to do and they are none the wiser, you know. We can't exactly say we won't face the same fate unless we take proactive steps to prevent it (and even that may be too tall an order).

 

Elon is right. We will have to merge with AI to remain in control of our destiny to a large extent.


What are you without the sum of your parts?

#5
caltrek

    Member

  • 12,755 posts

^Well, as a retired technocrat I can very much relate to what you are writing. It reminds me of a board report I once drafted for our division head. After reading my draft, he decided to change the recommendation, one that he had earlier indicated he wanted to make. He explained to me that a well-written report lends itself to justifying more than one possible recommendation or set of recommendations.

 

Sure, an ultra-smart AI or AGI may "trick" me into making a decision that is against my own interest. Still, any information source has a threat of bias built in. An information source/AGI/AI that makes recommendations that result in bad outcomes is going to lose out to other sources (at least in theory). So, part of integrating such mechanisms into our daily lives will be building trust in their reliability. As in science, if predictions don't match outcomes, then you know there is a problem with how the predictions were made.




#6
Raklian

    An Immortal In The Making

  • Moderator
  • 7,513 posts
  • Location: Raleigh, NC

 

Quoting caltrek: "Sure, an ultra-smart AI or AGI may 'trick' me into making a decision that is against my own interest. Still, any information source has a threat of bias built in. An information source/AGI/AI that makes recommendations that result in bad outcomes is going to lose out to other sources (at least in theory). So, part of integrating such mechanisms into our daily lives will be to build trust in their reliability. As in science, if predictions don't match outcomes, then you know there is a problem with how the predictions were made."

 

The problem is that a sufficiently advanced AI can keep making bad predictions and get away with it. It only needs to be smarter than us, not perfectly prescient, to orchestrate situations we can't avoid no matter what we do. It's sort of like the gray goo problem, but without everything turning into gray goo. It is possible for an intelligence to be greater than ours yet horrible at predicting outcomes, and still box us into thinking its predictions are excellent. That's the scenario I'm thinking of now.

 

I also posit that we will never be able to guess what the biases are within an intelligence smarter than us, because we can never know how it uses whatever information sources it draws on; it is already rapidly evolving in its quest to increase its efficiency, in ways we may not be able to understand.






