
The Singularity - Official Thread

Tags: Singularity, AI, Deep Learning, Technology, Artificial Intelligence, Future, Science, Culture, Government, Computers

89 replies to this topic

Poll: The Singularity (64 members have cast votes)

How do you feel about the Singularity?

  1. Excited (38 votes, 50.00%)
  2. Scared (8 votes, 10.53%)
  3. Skeptical (20 votes, 26.32%)
  4. Angry (2 votes, 2.63%)
  5. Neutral (5 votes, 6.58%)
  6. What's That? (1 vote, 1.32%)
  7. Other (2 votes, 2.63%)

#81
starspawn0

    Member

  • Members
  • 505 posts
The title is a little misleading. Many of the things on this list won't be "taken over" for a long time; but machines might do well enough to beat humans in competitions well before then.

One on the list that I think is off is "research math". It's the kind of problem that machines can be programmed to get better at all on their own -- just like the games Chess, Go, and Dota 2. Thus, I wouldn't be surprised if machines can beat humans within about 10 years on IMO-style tests (which isn't "research math", but how else would you measure it?); maybe even in the next 5 years, or even 3.

The only thing that might stand in the way is this: part of what causes people to recognize a mathematical result as "impressive" is the fact that humans are just bad at coming up with those particular proofs. Some things are just counterintuitive to humans. There are proofs that are very short, that took humans a very long time to come up with, that are lauded as "gems", that blind-search would have come up with a lot sooner (if they had tried) -- and that humans find mind-bogglingly counter-intuitive.

I would hope that building good math AIs wouldn't require that we understand human intuition better -- that we can just bulldoze the problem away with brute computational power. But there is a possibility that it's really important, if we want to beat humans at their own game in this one respect.
  • Yuli Ban likes this

#82
Alislaws

    Democratic Socialist Materialist

  • Members
  • 1,511 posts
  • Location: London

[image: DkDbu2BWwAE39_s.jpg — chart of predicted dates for AI milestones]

Is this "when we will have the technological capability to automate this" or "when humans will have been mostly replaced in this task"?

 

Because the idea that folding laundry will be mostly done by machines in 2021 seems absurd: people don't all have washing machines and dishwashers today, so there's no way we're all buying laundry-folding robots that fast.

 

At the same time, the idea that it will take until 2026 to have the technological capability to automate truck driving is equally absurd. The hard part of driving is interpreting and understanding the world around you, not manoeuvring the vehicle. Trucks will not be much more difficult than cars in this respect.

 

EDIT:

Computers can already read text aloud?

"Beat the fastest human in a 5k race" is only hard if your robot/AI can't use wheels or flight, which is exactly what you would want for any practical application.



#83
funkervogt

    Member

  • Members
  • 449 posts

[image: exponential-growth-singularity.jpg]

See: https://kk.org/thete...he-singularity/



#84
Alislaws

    Democratic Socialist Materialist

  • Members
  • 1,511 posts
  • Location: London

That's an interesting article, but it assumes Ray Kurzweil has arrived at his date for the singularity by plotting charts and picking a point that looks vertical. 

 

Instead his target date for the singularity is shortly after he expects us to have:

 

A) sufficient processing power available to equal the human mind in raw power (for a reasonable price). 

B) sufficiently high resolution brain imaging technology to observe in great detail the functions of the human brain. 

(assuming no major changes to the exponential growth in price/performance of computers, and brain imaging resolution)

 

The singularity is not some kind of natural function caused by exponential growth; you won't see something that grows exponentially eventually hit a "singularity" moment.

 

It is specifically when we are able to create intelligence that is equal to our own that the singularity theoretically begins, and it begins because of the great differences in how computerised and biological intelligence work (i.e. you cannot copy and paste biological intelligence (well, not yet anyway!), or copy and paste knowledge and experience between biological minds).

 

So computers that are equal to humans will result in a huge intelligence explosion. (Even if they couldn't make smarter computers, we could still make billions of human-level AIs, all linked together and sharing experience and knowledge, which would have a similar effect.) It may not result in a world beyond our ability to comprehend, but it would make humanity obsolete.
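The arithmetic behind that copy-and-paste point can be sketched in a few lines of Python. This is my own illustrative model, not anything from the post -- the agent count and learning times are made-up numbers:

```python
# Toy model of copyable knowledge: N digital minds that can copy skills
# between them collectively outpace minds that cannot. All numbers here
# are arbitrary illustrative assumptions.

def skills_per_agent(agents, hours_per_skill, total_hours, copyable):
    """Skills each agent ends up with after `total_hours` of learning."""
    learned_individually = total_hours // hours_per_skill
    if copyable:
        # every skill any agent learns is instantly copied to all agents
        return agents * learned_individually
    # biological-style minds keep only what they learned themselves
    return learned_individually

digital = skills_per_agent(agents=1_000, hours_per_skill=100,
                           total_hours=1_000, copyable=True)
human = skills_per_agent(agents=1_000, hours_per_skill=100,
                         total_hours=1_000, copyable=False)
print(digital, human)  # 10000 vs 10
```

With these (made-up) numbers, a thousand linked human-level AIs each end up with a thousand times the skills of an unlinked mind -- which is the "similar effect" being described.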


  • Yuli Ban likes this

#85
wjfox

    Administrator

  • Administrators
  • 9,369 posts
  • Location: London

Peter Diamandis | The Future Is Faster Than You Think | Global Summit 2018 | Singularity University

 

https://youtu.be/FTTgdtl8FvM


  • Yuli Ban likes this

#86
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 19,430 posts
  • Location: New Orleans, LA

"I hear LFADS has become conscious... beware..."
 
From Starspawn0:


First, the guy is a Stanford Neuroscience professor, and this is his only Twitter post (aside from replies).
 
A Deep Learning method to model the primate motor cortex using brain activity recordings. When the models were trained, they found them to be shockingly accurate at reproducing neural population response patterns. For the first time, Deep Learning has crossed over from simply modelling sensory areas (like the visual and auditory cortices), towards higher cortical areas. The next things to be tried might be the higher cognition parts of the frontal cortex -- if that succumbs to a Deep Learning modelling approach, it would be powerful evidence that one might be able to model the whole brain with Deep Learning.
 
We stand between the worlds of possibility...


  • Outlook likes this

And remember my friend, future events such as these will affect you in the future.


#87
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 19,430 posts
  • Location: New Orleans, LA

Artificial General Intelligence Is Here, and Impala Is Its Name

One of the most significant AI milestones in history was quietly ushered into being this summer. We speak of the quest for Artificial General Intelligence (AGI), probably the most sought-after goal in the entire field of computer science. With the introduction of the Impala architecture, DeepMind, the company behind AlphaGo and AlphaZero, would seem to finally have AGI firmly in its sights.
Let’s define AGI, since it’s been used by different people to mean different things. AGI is a single intelligence or algorithm that can learn multiple tasks and exhibits positive transfer when doing so, sometimes called meta-learning. During meta-learning, the acquisition of one skill enables the learner to pick up another new skill faster because it applies some of its previous “know-how” to the new task. In other words, one learns how to learn — and can generalize that to acquiring new skills, the way humans do. This has been the holy grail of AI for a long time.
As it currently exists, AI shows little ability to transfer learning to new tasks. Typically, it must be trained anew from scratch. For instance, the same neural network that recommends Netflix shows to you cannot use that learning to suddenly start making meaningful grocery recommendations. Even these single-instance "narrow" AIs can be impressive, such as IBM's Watson or Google's self-driving car tech. However, they are nothing like an artificial general intelligence, which could conceivably unlock the kind of recursive self-improvement variously referred to as the "intelligence explosion" or "singularity."
Those who thought that day would be sometime in the far distant future would be wise to think again. To be sure, DeepMind has made inroads on this goal before, specifically with their work on Psychlab and Differentiable Neural Computers. However, Impala is their largest and most successful effort to date, showcasing a single algorithm that can learn 30 different challenging tasks requiring various aspects of learning, memory, and navigation.
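The "positive transfer" idea the article describes can be illustrated with a toy optimisation problem -- a sketch of the general concept only, not of DeepMind's Impala architecture:

```python
# Toy illustration of positive transfer: solving task B from a "warm
# start" (task A's solution) takes fewer gradient steps than starting
# cold, because the two tasks are related. Targets, learning rate, and
# tolerance are arbitrary illustrative values.

def steps_to_solve(target, start, lr=0.1, tol=1e-3):
    """Gradient-descent steps to minimise (x - target)^2 within tol."""
    x, steps = start, 0
    while abs(x - target) > tol:
        x -= lr * 2 * (x - target)  # gradient of (x - target)^2
        steps += 1
    return steps

task_a, task_b = 5.0, 5.5                    # two similar tasks
cold = steps_to_solve(task_b, start=0.0)     # learn B from scratch
warm = steps_to_solve(task_b, start=task_a)  # reuse task A's solution
print(cold, warm)  # warm start converges in fewer steps
```

The warm start "knows" most of what task B needs, so it converges in fewer steps -- which is the learning-to-learn effect, in miniature.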


  • Casey, Hyndal_Halcyon and Alislaws like this



#88
wjfox

    Administrator

  • Administrators
  • 9,369 posts
  • Location: London

10 years ago this month, I read this book.

 

I can honestly say it changed my whole outlook on life. :)

 

 

[image: singularity-is-near-book-amazon.jpg — cover of The Singularity Is Near]


  • Zaphod, Casey and Yuli Ban like this

#89
Zaphod

    Esteemed Member

  • Members
  • 727 posts
  • Location: UK

Did anyone else sort of come up with an approximate idea of the singularity without really knowing about it?

 

I remember just having a thought experiment that was simply: if you create a computer capable of designing a more powerful computer, and that resultant computer can do the same, then given enough iterations you will quickly get a superintelligence. It was only after having that thought that I realised there were people taking this idea seriously who had called it the singularity.
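That thought experiment can even be written down as a toy calculation. Purely for illustration, assume each generation designs a successor 1.5x as capable as itself (the gain factor and iteration count are my own arbitrary assumptions):

```python
# Toy version of the recursive self-improvement thought experiment:
# each computer designs a successor a fixed factor more capable than
# itself, and the successor repeats the process.

def recursive_improvement(initial=1.0, gain=1.5, iterations=20):
    """Return capability after each generation of self-redesign."""
    capability = initial
    history = [capability]
    for _ in range(iterations):
        capability *= gain  # successor is `gain` times as capable
        history.append(capability)
    return history

levels = recursive_improvement()
print(f"after 20 generations: {levels[-1]:.0f}x starting capability")
```

Even a modest per-generation gain compounds into a very large multiplier after a handful of iterations, which is the whole force of the thought experiment.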


  • wjfox and Yuli Ban like this

#90
TranscendingGod

    2020's the decade of our reckoning

  • Members
  • 1,869 posts
  • Location: Georgia

I rate it as the best book I've read, fiction or otherwise. It expanded my mind and colored my world. Thank you, Raymond Kurzweil. It also gave me chronic anxiety because I'm afraid I'm going to die before we get a grip on death. Damn you, Raymond Kurzweil.


  • wjfox, Casey, Yuli Ban and 1 other like this

The growth of computation is doubly exponential growth. 





