
The Singularity - Official Thread

Tags: Singularity, AI, Deep Learning, Technology, Artificial Intelligence, Future, Science, Culture, Government, Computers

94 replies to this topic

Poll: The Singularity (69 members have cast votes)

How do you feel about the Singularity?

  1. Excited: 43 votes (51.19%)
  2. Scared: 9 votes (10.71%)
  3. Skeptical: 20 votes (23.81%)
  4. Angry: 3 votes (3.57%)
  5. Neutral: 5 votes (5.95%)
  6. What's That?: 1 vote (1.19%)
  7. Other: 3 votes (3.57%)

#81
starspawn0

    Member

  • Members
  • 522 posts
The title is a little misleading. Many of the things on this list won't be "taken over" for a long time; but machines might do well enough to beat humans in competitions well before then.

One item on the list that I think is off is "research math". It's the kind of problem that machines can be programmed to get better at all on their own -- just like the games Chess, Go and Dota 2. Thus, I wouldn't be surprised if machines can beat humans in about 10 years on IMO-style tests (which isn't "research math", but how else do you measure it?); and maybe even in the next 5 years, or even 3.

The only thing that might stand in the way is this: part of what causes people to recognize a mathematical result as "impressive" is the fact that humans are just bad at coming up with those particular proofs. Some things are just counterintuitive to humans. There are proofs that are very short, that took humans a very long time to come up with, that are lauded as "gems", that blind search would have come up with a lot sooner (if anyone had tried) -- and that humans find mind-bogglingly counter-intuitive.

I would hope that building good math AIs wouldn't require that we understand human intuition better -- that we can just bulldoze the problem away with brute computational power. But there is a possibility that it's really important, if we want to beat humans at their own game in this one respect.
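
To make "blind search" concrete, here is a minimal Python sketch: a breadth-first search over a few made-up rewrite rules in a toy algebraic system, looking for the shortest derivation that reduces a starting string to the empty string. It is nothing like a real theorem prover (the rules and the "theorem" are invented purely for illustration), but it shows how a machine can stumble onto a short derivation by exhaustive enumeration, with no intuition involved.

from collections import deque

# A toy rewriting system over the alphabet {a, b}, with made-up rules.
# The "theorem" is that the start string reduces to the empty string.
RULES = [
    ("aa", ""),    # a is its own inverse
    ("bbb", ""),   # b has order three
    ("ba", "ab"),  # a and b commute (lets the letters be sorted)
]

def neighbours(s):
    """All strings reachable from s by applying one rule at one position."""
    for lhs, rhs in RULES:
        start = 0
        while True:
            i = s.find(lhs, start)
            if i == -1:
                break
            yield s[:i] + rhs + s[i + len(lhs):], (lhs, rhs, i)
            start = i + 1

def blind_search(start, goal=""):
    """Breadth-first search for the shortest rewrite sequence from start to goal."""
    queue = deque([start])
    parent = {start: None}  # string -> (previous string, rule applied)
    while queue:
        s = queue.popleft()
        if s == goal:
            steps = []
            while parent[s] is not None:   # walk back to reconstruct the derivation
                prev, rule = parent[s]
                steps.append((prev, rule, s))
                s = prev
            return list(reversed(steps))
        for t, rule in neighbours(s):
            if t not in parent:
                parent[t] = (s, rule)
                queue.append(t)
    return None  # no derivation exists

if __name__ == "__main__":
    for prev, (lhs, rhs, i), new in blind_search("babbaabbba"):
        print(f"{prev}  --[{lhs} -> {rhs or 'empty'} at {i}]-->  {new}")

Scale the alphabet, the rule set and the search depth up by enough orders of magnitude and you get the (very hard) engineering problem the labs are actually working on; the point is only that nothing in the loop above requires the machine to find the result "intuitive".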
  • Yuli Ban and Erowind like this

#82
Alislaws

    Democratic Socialist Materialist

  • Members
  • 1,583 posts
  • Location: London

DkDbu2BWwAE39_s.jpg

Is this "when we will have the technological capability to automate this" or "when humans will have been mostly replaced in this task",

 

Because the idea that folding laundry will be mostly done by machines in 2021 seems absurd. Not everyone even has a washing machine or a dishwasher today; there's no way we're all buying laundry-folding robots that fast.

 

At the same time, the idea that it will take until 2026 to have the technological capability to automate truck driving is equally absurd. The hard part of driving is interpreting and understanding the world around you, not manoeuvring the vehicle. Trucks will not be much more difficult than cars in this respect.

 

EDIT:

Computers can already read text aloud?

"beat the fastest human in a 5k race" is only hard if your robot/AI cant use wheels, or flight, which is exactly what you would want for any practical application. 


  • Erowind likes this

#83
funkervogt

    Member

  • Members
  • 480 posts

exponential-growth-singularity.jpg

See: https://kk.org/thete...he-singularity/



#84
Alislaws

    Democratic Socialist Materialist

  • Members
  • 1,583 posts
  • Location: London

That's an interesting article, but it assumes Ray Kurzweil has arrived at his date for the singularity by plotting charts and picking a point that looks vertical. 

 

Instead his target date for the singularity is shortly after he expects us to have:

 

A) sufficient processing power available to equal the human mind in raw power (for a reasonable price). 

B) sufficiently high resolution brain imaging technology to observe in great detail the functions of the human brain. 

(assuming no major changes to the exponential growth in price/performance of computers, and in brain imaging resolution; a rough worked example of that compounding assumption is sketched below)
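
Here is what that compounding assumption looks like as a back-of-the-envelope calculation. All three numbers below are illustrative placeholders, not Kurzweil's actual inputs: a commonly cited ballpark of ~10^16 calculations per second for brain-equivalent compute, an assumed ~10^10 cps per $1,000 of hardware around 2010, and an assumed 18-month price/performance doubling time.

import math

# Illustrative assumptions only -- change any of them and the answer moves by years.
BRAIN_EQUIVALENT_CPS = 1e16   # rough, commonly cited brain-equivalent compute estimate
CPS_PER_1000_USD     = 1e10   # assumed compute per $1,000 of hardware in the base year
BASE_YEAR            = 2010
DOUBLING_TIME_YEARS  = 1.5    # assumed price/performance doubling time

# If the doubling trend simply continues, when does $1,000 of compute cross the threshold?
doublings_needed = math.log2(BRAIN_EQUIVALENT_CPS / CPS_PER_1000_USD)
years_needed = doublings_needed * DOUBLING_TIME_YEARS

print(f"Doublings needed: {doublings_needed:.1f}")          # ~20 under these assumptions
print(f"Crossover year:   ~{BASE_YEAR + years_needed:.0f}")  # ~2040 under these assumptions

That sensitivity to the inputs is exactly why the "no major changes to the trend" caveat above is doing so much work.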

 

The singularity is not some kind of natural function caused by exponential growth; you won't see anything growing exponentially eventually hit a "singularity" moment.

 

It is specifically when we are able to create intelligence that is equal to our own that the singularity theoretically begins, and it begins because of the great differences in how computerised and biological intelligence work (i.e. you cannot copy and paste biological intelligence (well, not yet anyway!), or copy and paste knowledge and experience between biological minds).

 

So computers that are equal to humans will result in a huge intelligence explosion. (Even if they couldn't make smarter computers, we could still make billions of human-level AIs, all linked together and sharing experience and knowledge, which would have a similar effect.) It may not result in a world beyond our ability to comprehend, but it would make humanity obsolete.


  • Yuli Ban likes this

#85
wjfox

    Administrator

  • Administrators
  • 9,590 posts
  • Location: London

Peter Diamandis | The Future Is Faster Than You Think | Global Summit 2018 | Singularity University

 

https://youtu.be/FTTgdtl8FvM


  • Yuli Ban likes this

#86
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 19,700 posts
  • Location: New Orleans, LA

"I hear LFADS has become conscious... beware..."
 
From Starspawn0:


First, the guy is a Stanford Neuroscience professor, and this is his only Twitter post (aside from replies).
 
A Deep Learning method to model the primate motor cortex using brain activity recordings. When the models were trained, they found them to be shockingly accurate at reproducing neural population response patterns. For the first time, Deep Learning has crossed over from simply modelling sensory areas (like the visual and auditory cortices), towards higher cortical areas. The next things to be tried might be the higher cognition parts of the frontal cortex -- if that succumbs to a Deep Learning modelling approach, it would be powerful evidence that one might be able to model the whole brain with Deep Learning.
 
We stand between the worlds of possibility...


  • Outlook likes this

And remember my friend, future events such as these will affect you in the future.


#87
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 19,700 posts
  • Location: New Orleans, LA

Artificial General Intelligence Is Here, and Impala Is Its Name

One of the most significant AI milestones in history was quietly ushered into being this summer. We speak of the quest for Artificial General Intelligence (AGI), probably the most sought-after goal in the entire field of computer science. With the introduction of the Impala architecture, DeepMind, the company behind AlphaGo and AlphaZero, would seem to finally have AGI firmly in its sights.
Let’s define AGI, since it’s been used by different people to mean different things. AGI is a single intelligence or algorithm that can learn multiple tasks and exhibits positive transfer when doing so, sometimes called meta-learning. During meta-learning, the acquisition of one skill enables the learner to pick up another new skill faster because it applies some of its previous “know-how” to the new task. In other words, one learns how to learn — and can generalize that to acquiring new skills, the way humans do. This has been the holy grail of AI for a long time.
As it currently exists, AI shows little ability to transfer learning towards new tasks. Typically, it must be trained anew from scratch. For instance, the same neural network that makes recommendations to you for a Netflix show cannot use that learning to suddenly start making meaningful grocery recommendations. Even these single-instance "narrow" AIs can be impressive, such as IBM's Watson or Google's self-driving car tech. However, these are nothing like an artificial general intelligence, which could conceivably unlock the kind of recursive self-improvement variously referred to as the "intelligence explosion" or "singularity."
Those who thought that day would be sometime in the far distant future would be wise to think again. To be sure, DeepMind has made inroads on this goal before, specifically with their work on Psychlab and Differentiable Neural Computers. However, Impala is their largest and most successful effort to date, showcasing a single algorithm that can learn 30 different challenging tasks requiring various aspects of learning, memory, and navigation.
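
To make "positive transfer" slightly more concrete, here is a toy numpy sketch. It has nothing to do with DeepMind's actual Impala architecture; it only illustrates the much weaker idea underneath the jargon, namely that parameters learned on one task can give a measurable head start on a related one.

import numpy as np

rng = np.random.default_rng(0)

# Two synthetic regression tasks whose true weight vectors share most coordinates.
d, n = 20, 200
w_shared = rng.normal(size=d)
w_task_a = w_shared + 0.05 * rng.normal(size=d)
w_task_b = w_shared + 0.05 * rng.normal(size=d)

def make_task(w_true):
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.01 * rng.normal(size=n)
    return X, y

def train(X, y, w_init, lr=0.01, loss_target=0.01, max_steps=10_000):
    """Plain gradient descent on mean squared error; returns (weights, steps taken)."""
    w = w_init.copy()
    for step in range(max_steps):
        err = X @ w - y
        if np.mean(err ** 2) < loss_target:
            return w, step
        w -= lr * (2 / n) * (X.T @ err)
    return w, max_steps

X_a, y_a = make_task(w_task_a)
X_b, y_b = make_task(w_task_b)

w_a, _ = train(X_a, y_a, np.zeros(d))              # learn task A first
_, steps_scratch  = train(X_b, y_b, np.zeros(d))   # task B from scratch
_, steps_transfer = train(X_b, y_b, w_a)           # task B warm-started from A

print(f"Task B from scratch:        {steps_scratch} steps")
print(f"Task B warm-started from A: {steps_transfer} steps")

Warm-starting is a far weaker trick than the meta-learning the article describes, but the printout makes the core claim tangible: reuse what was learned on task A and task B takes noticeably fewer steps.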


  • Casey, Erowind, Hyndal_Halcyon and 1 other like this

And remember my friend, future events such as these will affect you in the future.


#88
wjfox

    Administrator

  • Administrators
  • 9,590 posts
  • Location: London

10 years ago this month, I read this book.

 

I can honestly say it changed my whole outlook on life. :)

 

 

singularity-is-near-book-amazon.jpg


  • Zaphod, Casey and Yuli Ban like this

#89
Zaphod

    Esteemed Member

  • Members
  • 736 posts
  • Location: UK

Did anyone else sort of come up with an approximate idea of the singularity without really knowing about it?

 

I remember just having a thought experiment that simply was: if you create a computer capable of designing a more powerful computer, and that resultant computer can do the same, then given enough iterations you will quickly get a superintelligence. It was only after thinking that through that I realised there were people taking this idea seriously, and that they had called it the singularity.
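
That thought experiment fits in a few lines of Python. Every number here is made up; it is a cartoon of the compounding, not a forecast of anything:

# Toy version of the "computer designs a better computer" recurrence.
capability = 1.0             # call generation 0 "as capable as its human designers"
improvement_per_gen = 0.10   # assume each machine designs a successor 10% more capable
threshold = 1_000_000.0      # arbitrary stand-in for "superintelligence"

generation = 0
while capability < threshold:
    capability *= 1 + improvement_per_gen
    generation += 1

print(f"Generations needed: {generation}")        # ~145 with these made-up numbers
print(f"Capability reached: {capability:.3g}")

The part the toy model leaves out is the interesting one: how long each generation takes, and whether that interval itself shrinks as capability grows. That second assumption is what turns steady compounding into the runaway people call the singularity.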


  • wjfox, Yuli Ban and Erowind like this

#90
TranscendingGod

    2020's the decade of our reckoning

  • Members
  • 1,886 posts
  • Location: Georgia

I rate it as the best book I've read, fiction or otherwise. It expanded my mind and colored my world. Thank you, Raymond Kurzweil. It also gave me chronic anxiety, because I'm afraid I'm going to die before we get a grip on death. Damn you, Raymond Kurzweil.


  • wjfox, Zaphod, Casey and 3 others like this

The growth of computation is doubly exponential.


#91
Futurology2039

    New Member

  • Members
  • 6 posts

Actually, I see the end result of this technology looking more like humans incorporating it into themselves: "AI-humans", completely human but assisted by technology, specifically in utilizing the huge percentage of the human brain which we don't seem to be able to access completely at this time. We will never become God, but we will always try to make ourselves gods through better utilization of our brains. Obviously "techno-human" is not a new idea or term; I prefer "AI-human".

 

Techno-humanism: the merging of technology and humans. Also known as "posthumanism" and "transhumanism", although there are nuances implied in these and other similar theories.

 

Why don't I see AI standing on its own? Human ego. Most of us present three normal personalities to some extent: the seeker of recognition, the daydreamer, and the hedonist. All three lead almost all of us to refuse to see ourselves as second-best or inferior, which is undoubtedly our major stumbling block to a total acceptance of God. But God we can't change or control; AI in robotic machine form we can limit. I do not believe our deep need for dominance and our universe-sized egos would allow our humanity to be supplanted by something initially of our own invention. But at some point I think almost all of us would accept technology that would make us vastly more cerebral, without removing our humanity, and in the process capaciously feed our egos.

 

Because of this opinion I don't concern myself with the ethics of AI, although that may be both naïve and cheeky of me.

 

 

 
 



#92
wjfox

    Administrator

  • Administrators
  • 9,590 posts
  • Location: London

DGsJKuQ.jpg


  • Zaphod and Yuli Ban like this

#93
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 19,700 posts
  • Location: New Orleans, LA

What's funny is that it's still going to look flat compared to the wealth created by asteroid mining and nanofabrication.


  • wjfox, Zaphod and FrogCAT like this

And remember my friend, future events such as these will affect you in the future.


#94
wjfox

    Administrator

  • Administrators
  • 9,590 posts
  • Location: London

The Singularity - feat. Ray Kurzweil & Alex Jones [RAP NEWS 28]

 

https://youtu.be/dHVtUw5wToA



#95
caltrek

    Member

  • Members
  • 8,768 posts

Yuli Ban wrote:
What's funny is that it's still going to look flat compared to the wealth created by asteroid mining and nanofabrication.

 

Also, this may not be a very good way to measure actual wealth. That is to say, the chart may be a better indication of the success of capitalism. Capitalism turns everything into commodities, so more and more wealth gets measured in dollars.


The principles of justice define an appropriate path between dogmatism and intolerance on the one side, and a reductionism which regards religion and morality as mere preferences on the other.   - John Rawls





