Welcome to FutureTimeline.forum

214 replies to this topic

#1
Caiman

    Administratus Extremus

  • Administrators
  • 952 posts
  • Location: Manchester, England
Wikipedia defines this well enough:

'A technological singularity is a hypothetical event occurring when technological progress becomes so rapid and the growth of artificial intelligence is so great that the future after the singularity becomes qualitatively different and harder to predict.'

There are those who believe that the singularity is a matter not of IF but WHEN, and some of those believers think we're not so far from it - perhaps even seeing it happen in the first half of this century.

What are your thoughts on the matter? Do you question whether we're going to reach a point where we build computers that can build better computers, which in turn design even better computers, triggering an exponential explosion of processing power like nothing we've seen so far? Is it inevitable, whether in the next fifty years or five hundred? Can we control or stop it? Should we?

In 1993, Vernor Vinge said this:

"Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Vinge refines his estimate of the time scales involved, adding, "I'll be surprised if this event occurs before 2005 or after 2030."

Was he way off the mark?
  • Chronomaster likes this
~Jon

#2
wjfox

    Administrator

  • Administrators
  • 7,963 posts
  • Location: London
I used to think a Singularity was "inevitable" - but the more I learn about the world's problems, the less likely I think it is.

I think people underestimate just how devastating peak oil will be. It's going to turn the world upside down. I would encourage everyone to watch the videos on this page:

http://www.postpeakl...-post-peak-life

Even if peak oil is mitigated, there's still the problem of overpopulation, resource depletion, water shortages, climate change, terrorism etc. We will face multiple converging problems before the Singularity occurs, any one of which would be bad, but together will be catastrophic. In my opinion, what's "inevitable" is a collapse of our monetary system, then some sort of transition to a more sustainable society - then maybe a Singularity in 100 years or so... maybe.
  • atoblade likes this

#3
CamGoldenGun

    Member

  • Members
  • 595 posts
  • Location: Fort McMurray, AB, Canada
There have been too many movies and novels that have instilled in us an inherent distaste for letting AI evolve far enough to destroy us. On the other hand, creating a "super-human" like Khan doesn't sound too far-fetched to me.

I think as a species we're always looking to improve ourselves, so as far as increasing our own abilities goes, the sky is the limit. But when it comes to giving our tools the ability to think for themselves, there will likely be laws to prevent sentience beyond a certain level in AI.

#4
OrbitalResonance

    Cosmic Emperor

  • Members
  • 1,224 posts
  • Location: Deep Space
Or become the AI ourselves - transition into posthumans?

We make our world significant by the courage of our questions and the depth of our answers. - Carl Sagan


#5
Prolite

    Member

  • Members
  • 609 posts
I don't think a singularity in technology will occur, but I do believe the pace of technology will keep increasing. Ray Kurzweil's biggest blunder is going to be the so-called "singularity".
I'm a business man, that's all you need to know about me.

#6
Shimmy

    Member

  • Members
  • 601 posts
I don't believe the world's problems and politics will be able to stop the singularity. The singularity only needs to occur in one specific place, and it will then spread easily everywhere else. Even if most governments try to prevent it, it only takes one country or company to get there. With the wonders of the internet, progress will never be lost, so anyone can pick it up from where the last people left off. At worst, the world's problems will only delay it by ten years or so. There's a chance that whoever reaches it first will try not to share the knowledge, as it would instantly give them global power, but it will be impossible to contain for long.
  • Shimmy and ddmkm122 like this

#7
Prolite

    Member

  • Members
  • 609 posts
INTRODUCTION: I do not agree with most futurists who insinuate that intelligent A.I. - a computer or android you can have an intelligent conversation with, one with intelligence, wit, humor, and creativity equal to yours or mine - will transpire during this century. Nor do I think the human brain can be completely reverse engineered in 100 years. We can probably reverse engineer the human brain at an extremely primitive level, but I'm not sure that would equate to creating "very intelligent A.I.".

SINGULARITY 1 - [1995-2000]: The technological singularity, in my opinion, has already occurred as "version 1.0" with the mass onslaught of the personal computer in 1994-1995. Within 2-3 years, the entire planet drastically changed its culture, pastimes, and conduct of business. By the year 2000, most of the intricacies of society were either dependent upon or reliant on computers, and in a significant way. When I was 15 years old (1996), most people played sports or went outside and did things when they came home from school. By 2000, the streets looked like a ghost town and I could practically see the tumbleweeds rolling up and down my block; gaming had drastically changed how we spend our free time. In technology, 1994 was the year right before "version 1.0" of the technological singularity, with little of its effect or infiltration yet. By 2000, mobile phones, MP3 players, BlackBerrys and personal computers were <almost> ubiquitous.

SINGULARITY 2 - [2015-2020]: In my opinion, there will be a second technological singularity, "version 2.0", wherein the internet becomes smart. It's already happening (2010), sort of. A lot of computer geeks are calling this phase "Web 3.0". This will be an internet where artificial intelligence grows at a significant rate and makes better use of our knowledge and organizational skills. Search engines, for instance, will be able to understand what you're asking of them <notice I said "them"> rather than spitting out useless results based on a search string. Already there are programs for mobile computers that can search for hotels, flights, or restaurants when you simply ask the computer a question or make a statement. This type of artificial intelligence will lead to universal connection of devices, such as automatically sharing files or information between your mobile computer, your personal computer, and your TV - which will also have the internet. Augmented reality, telepresence, and AI will combine in such a way that everyday society will reflect them: train stations, airports, billboards, electronic newspapers, signs, etc. Over the next 10 to 20 years, the internet will be so smart that people will expect to be able to say things like "look up everything about black holes in our galaxy that was researched ONLY by this <named scientist> and compare that with these other results".

SINGULARITY 3 - "The Technological Singularity" - [2045-2050]: This is the singularity that Ray Kurzweil commonly refers to, and the one I dare to challenge. Firstly, as said in the introduction, I honestly don't think we're going to reach a point in this century where A.I. can make itself super smart. Sorry, I just don't. Secondly, the "Technological Singularity" is not going to be what everyone thinks it is - almost like the whole "Y2K" thing. What a load of **** that was. The most likely way - and probably the only way - we're going to get to the Technological Singularity is to help our machines become smarter at helping us arrange and organize information in a way that our human community can build upon. Almost like Wikipedia, but with A.I. and personal accounts, chatbots, graphs, audio and video such as YouTube, database technology, speech recognition, virtual assistants, Twitter (repeated data), artificial photo recognition, and an online contributing community, ALL COMBINED TOGETHER IN ONE MASSIVE EXPLODING APPLICATION. I would call this, perhaps: The Human Intelligence Project. Only a computer with the speed of all of humanity - expected in 2045 by Ray Kurzweil - would be able to do this. It would be the most massive and sophisticated database ever created, including all of human knowledge. The application would basically be a search engine with cloud computing (computations are made on a supercomputer and results are rendered on a single portal or webpage), but in the form of a virtual assistant, and/or video, audio, graphics, etc. Information could either be retrieved or taught to the search engine, by way of chatbot or voice recognition (virtual assistant). The virtual assistant A.I. would be smart enough to understand what you're saying - ideas or concepts, per se - and apply that information to its database(s). Nothing so far insinuates that we need "self-aware" A.I. to do this. That's just bull****. It can be done using quantitative mathematics, heuristic algorithms, speech recognition, and probably some other geeky programming stuff. The point I am making is that right now, the internet is ONLY a place to RETRIEVE or SEND information. The internet is not smart. The internet needs to be able to compute, analyze, combine, and communicate information as one method or idea, rather than resourcing countless webpages to point you in the right direction.

So if someone from China taught the search engine the idea of "customer appreciation", and someone from New York did the same thing, the search engine A.I. would compare their data, and would learn what data was used over and over again in multiple ways to predict the future of a given idea, concept, or situation. That information would be given a "confidence coefficient", basically, and could be formed into a sentence or paragraph that bears great value to what you're asking of the search engine and can be spoken back to you. The search engine is not "super smart" per se because it has "awareness"; the search engine only knows the data it's been given, in the vast multitude of contexts the same data has been taught in.

One of the ways in which Google is trying to develop artificial intelligence is to create programs which interpret visual data and are able to make a prediction about the nature of an image. In order to do this, there needs to be an underlying foundation of programming that understands human thinking. For instance, if you were to picture a regular-sized bird, you would think of something you'd see in your backyard or in nature that is common to you. But if you were to picture a large bird, you'd think of something along the lines of a vulture or eagle. Would you think of an ostrich if someone said the word "bird" to you? People are trying to make computer programs understand this "visual-spatial-connection relationship". Based on our experiences, our brains are wired to think a certain way; they are networked to make completing repeated tasks automatic, and these connections obviously are plastic. Well, how do you make a computer plastic? Here's how Google is doing it:

"Imagine, at your fingers, a computing power so potent that it is capable of doing as many operations in a second as there are particles in our observable universe. Leaving aside some of its apparently "blunt" applications like cracking all the cryptographic codes invented so far, searching and instantly finding elements from databases so big that they wouldn't fit on all the servers on the internet, factorizing numbers so large that no network of present-day supercomputers could ever have the chance of succeeding in our lifetimes, imagine how this could give us the power to build all of our future, but highly advanced and unimplementable on today's computers, artificial intelligence systems. With the help of quantum computers we could build super brains, simulate complex molecule interactions that are completely intractable on present day supercomputers, find out the secrets to unlimited resources, and maybe discover the ultimate secrets of reality."

Quantum computers can be used to do only certain things. "In fact, they are: factoring, unstructured search, and quantum simulation". IBM's "Watson" uses unstructured search. "How does Watson differ from Deep Blue? We are now at a similar juncture. This time it is about how vast quantities of digitally encoded unstructured information (e.g., natural language documents, corporate intranets, reference books, textbooks, technical reports, blogs, etc.) can be leveraged by computers to do what was once considered the exclusive domain of human intelligence: rapidly answer and rationalize open-domain natural language questions confidently, quickly and accurately."
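The "confidence coefficient" idea above can be sketched in a few lines of Python. This is purely illustrative: the class, method names, and scoring rule are inventions for this sketch, not anything an actual search engine is known to use. The score simply grows with the number of independent sources that taught the engine the same idea:

```python
# Hypothetical sketch of the post's "confidence coefficient": the more
# independent contexts that teach the engine the same fact, the more
# confident the engine is when repeating it back. All names are made up.
from collections import defaultdict

class TaughtFacts:
    def __init__(self):
        # fact -> set of sources that have taught it
        self.contexts = defaultdict(set)

    def teach(self, fact, source):
        self.contexts[fact].add(source)

    def confidence(self, fact):
        # Normalize by the most widely taught fact, so scores fall in [0, 1].
        if fact not in self.contexts:
            return 0.0
        max_support = max(len(s) for s in self.contexts.values())
        return len(self.contexts[fact]) / max_support

facts = TaughtFacts()
facts.teach("customer appreciation drives repeat business", "user_china")
facts.teach("customer appreciation drives repeat business", "user_new_york")
facts.teach("ostriches are birds", "user_china")

print(facts.confidence("customer appreciation drives repeat business"))  # 1.0
print(facts.confidence("ostriches are birds"))  # 0.5
```

The point of the sketch matches the post's claim: no "self-awareness" is involved, just counting how often the same data recurs across contexts.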
  • archikind likes this
I'm a business man, that's all you need to know about me.

#8
Trezoristo

    Member

  • Members
  • 19 posts
  • Location: Leiden, The Netherlands
I'm only just now familiar with the concept, but it does seem unlikely to me. I'm not sure raw computing power alone would get us there, unless in the process we create some giant artificial intelligence, though it is hard to imagine what that would look like. Part of it is a nature/nurture thing: it seems likely that eventually we'll build computers with processing power similar to the human mind (perhaps we're already there; it's hard to compare), but can such a machine become 'smart' in the human sense of the word without ever growing up?

Also - and I'd like to state up front that I have a very mechanical view of the world - I do not believe we have any idea what a species (artificial or biological) has to do to become self-aware and smart and all those things we attribute to our own species. In nature, different gradations of those properties can be observed among different species. Computers, on the other hand, though they have become a lot faster in the past decades, have as far as I know not shown any of those properties at all (they still do pretty much exactly what we tell them; it is our instructions that have become more complex). What I'm trying to say is that I'm not convinced by the hypothesis that artificial intelligence is inevitable once computers become powerful enough.

Getting to a singularity by science seems even less likely to me. The pace of scientific discovery may be increasing, but resources are spread ever thinner over ever-broader fields, so at least some momentum is lost there. If we look at the frontier, we see that answering open questions becomes increasingly expensive. Just look at the amount of money and people it costs to build and start a new particle accelerator (CERN), or to design and build a fusion power plant (ITER). Make no mistake, I think both are worthy projects, but they have taken enormous resources and will take still more. In some sense it seems that, at least in physics, the easy experiments have been done. There is more to do still, but I don't see exactly how that would happen increasingly quicker. Lastly, bio-engineering is a field with a lot of potential, but even if you genetically engineer one or several super-smart individuals, it will still take a lot of time, work, and (expensive) experiments to get scientific results.
  • Caiman and Chronomaster like this

#9
Andy

    Member

  • Members
  • 41 posts
  • Location: United Kingdom
Wikipedia being what it is, the opening paragraph has changed since this thread was started to reflect the probability that a technological singularity does not rely on artificial intelligence as a catalyst:

    A technological singularity is a hypothetical event occurring when technological progress becomes so rapid and the growth of super-human intelligence is so great that the future after the singularity becomes qualitatively different and harder to predict.


I don't know much about the subject, but I wonder whether a sufficiently augmented intelligence would qualify as super-human enough to trigger a singularity-like event. And not in the technologically/genetically modified human sense, but a regular human with powerful tools at their disposal.

For instance, take Wolfram Research. They're pretty much all about making computer programs that do complex maths for us, and the discoveries that come out of those programs, and they've given a couple of TED talks about their work. One discussed the idea that instead of teaching kids how to calculate in maths lessons, we should teach them how to imagine and develop complex problems to be solved, and get them used to letting computers do the calculating for them. Basically, training humans to use computers more effectively than we do now. It's kind of turned into a little movement of theirs: http://www.computerbasedmath.org/ (the education TED talk is linked on that page, fyi)

While that's an unlikely scenario given the politics of the world, and it might not be a very effective approach, it's definitely not the only idea aimed at augmenting human beings with technology to make us more productive over our relatively short life spans. Most of the ideas might wither and die, or get lost to greed, but if the right kind of augmentation happens in the right place at the right time, then maybe.

But the singularity seems too isolated a concept, so I wouldn't say it was inevitable. Just a possibility.
  • Caiman and Chronomaster like this
For everyone's sake, watch this video

#10
Chronomaster

    Member

  • Members
  • 80 posts
Ultimately, unless we stop research and development of ever faster, more powerful computers, can we really say that this is not an inevitability? Isn't the concept of the technological singularity really based on the moment we design a computer capable, without human intervention, of designing a better computer? Doesn't it then become a runaway effect of computers designing better computers ad infinitum?
Counting down...

#11
OrbitalResonance

    Cosmic Emperor

  • Members
  • 1,224 posts
  • Location: Deep Space
And the ability to use resources and invent new things.

We make our world significant by the courage of our questions and the depth of our answers. - Carl Sagan


#12
atoblade

    Time Lord

  • Members
  • 47 posts

wjfox said:

    I used to think a Singularity was "inevitable" - but the more I learn about the world's problems, the less likely I think it is.

    I think people underestimate just how devastating peak oil will be. It's going to turn the world upside down. I would encourage everyone to watch the videos on this page:

    http://www.postpeakl...-post-peak-life

    Even if peak oil is mitigated, there's still the problem of overpopulation, resource depletion, water shortages, climate change, terrorism etc. We will face multiple converging problems before the Singularity occurs, any one of which would be bad, but together will be catastrophic. In my opinion, what's "inevitable" is a collapse of our monetary system, then some sort of transition to a more sustainable society - then maybe a Singularity in 100 years or so... maybe.

Just watched the 3-part peak oil video!
That's insane! I thought it was coming in 20 or 30 years, but it seems just around the corner after watching that!
  • wjfox likes this

#13
psikeyhackr

    New Member

  • Members
  • 6 posts
This Singularity business is associated with Artificial Intelligence and direct brain interfacing so much that I only use the term Singularity to mean that kind of techno-cultural change. I don't think they are going to happen in the next 50 years.

The problem with Artificial Intelligence is the association between SYMBOLS and REALITY. You can put the word CAT into a computer, and the DEFINITION OF CAT, but that definition is just MORE SYMBOLS. Is the computer going to know what it feels like to be scratched by a cat? Does the definition include that cats do that? Intelligence is about comprehending reality. Symbols are not enough.

But these computers could be the crux of the next global cultural change. I will probably upgrade to a quad-core in the next few months, but that is really because so much software is so crappy and inefficient, not because I really think I need the processing power. When are people going to start realizing that these are all von Neumann machines, and that they are more powerful than we need despite Moore's Law? The cultural change that can occur doesn't require most people to have more processing power; it is about what to do with it. We need a sufficiently decent language so most people can program on their own, and the system calls to operating systems and drivers need to be clearly documented. The problem is the CULTURAL INFORMATION HIDING. I never saw the term von Neumann machine in years at IBM. So it is the distribution of comprehensible information that can trigger the singularity, not AI or more transistors. We could have a culture of VULCANS. :rofl:

psik

#14
wjfox

    Administrator

  • Administrators
  • 7,963 posts
  • Location: London


atoblade said:

    Just watched the 3-part peak oil video!
    That's insane! I thought it was coming in 20 or 30 years, but it seems just around the corner after watching that!

Terrifying, isn't it? Just shows how utterly complacent we've become.

#15
psikeyhackr

    New Member

  • Members
  • 6 posts



wjfox said:

    atoblade said:

        Just watched the 3-part peak oil video!
        That's insane! I thought it was coming in 20 or 30 years, but it seems just around the corner after watching that!

    Terrifying, isn't it? Just shows how utterly complacent we've become.


I don't think it is complacency so much as the inability to filter out the bullshit.

We are constantly bombarded with so much BS information it is nearly impossible to figure out what is important and what is correct.

So we have hundreds of millions of people making bad decisions on the basis of bad information then we wonder why the world is so screwed up. The economists can't even subtract the depreciation of all of the junk that is designed to become obsolete.

Buy more junk it increases GDP. That is good for the economy.

Who cares if it is bad for you?

psik

#16
Chronomaster

    Member

  • Members
  • 80 posts

psikeyhackr said:

    But when are people going to start realizing that these are all von Neumann machines and they are more powerful than we need despite Moore's Law.

It needs to be this way, really. They are more powerful than we need now, perhaps... but we will be using them to the capacity they provide within a few years. Of course, by then there'll be a new generation of more powerful machines to replace them, but thus is the march of technology; it goes on and on... until global economic collapse or the singularity - the only two options, according to this thread? Is there an in-between?
Counting down...

#17
Prolite

    Member

  • Members
  • 609 posts
I believe we've already been through a technological singularity. It occurred in the mid-1990s with the advent of personal computers and the world wide web. And I could easily argue that if we were to time travel back to 1990 and talk about the technology and the world we live in TODAY, no one would believe us except the technology gurus and the sci-fi fans. Moreover, the world changed so fast between 1995-2000 and thereafter that most people, before that time, could not have predicted how we'd be living our lives today - especially with mobile computing, and with our economy and our automobiles being totally reliant on computers.

Some can argue that we're just hitting "paradigm shifts". But go back to the original Singularity argument: you can't predict past a certain moment in time. Well, I agree with that up until the mid-1990s. And I also think we're going to hit another technological singularity within 15 years, before Kurzweil's singularity. It will be caused by two things: the first is the advent of IBM's Watson, wherein personal desktop computers become powerful enough to handle such programs. The second is the quantum computer (a powerful one that solves life's hardest problems; I am thinking solar panels and our energy crisis). This I believe will occur around 2025, or 2030 at the latest. Every week there are major breakthroughs in quantum computers. I read about these breakthroughs in Google's news section. In fact, here's proof.

Then a third technological singularity will occur some time later in the future, which I cannot predict, but I think this one will be a singularity in which we amplify the processing power of the human brain. Imagine a computer chip that works like caffeine does on the brain, only 100X better. Maybe there will be a drug instead? And if this doesn't cause it, it will be caused by a radical shift in our education system: namely, students being taught process, analysis, problem solving and creativity ... NOT memorization and standardized testing. QUESTION: when math is taught in school, WHY are students taught how to figure out a math equation? Who the hell cares. They should be taught HOW to USE the math equation to solve real-life problems. DUH.


  • wjfox likes this
I'm a business man, that's all you need to know about me.

#18
Chronomaster

    Member

  • Members
  • 80 posts
By definition, a technological singularity has occurred when we can no longer reliably predict what is going to happen next. I don't think it's accurate to say we've arrived at that point, or seen it happen, yet. The very fact that Moore's law has remained so consistent for decades would seem to imply as much.
Counting down...

#19
Prolite

    Member

  • Members
  • 609 posts

Chronomaster said:

    By definition, a technological singularity has occurred when we can no longer reliably predict what is going to happen next. I don't think it's accurate to say we've arrived at that point, or seen it happen, yet. The very fact that Moore's law has remained so consistent for decades would seem to imply as much.


Ray Kurzweil's singularity, I don't think, will happen during this century. You seem to define it based on Moore's Law. Basing something on Moore's Law alone is short-sighted, to be honest; society is much more complicated than that. MY technological singularities are ones that make it hard, if not impossible, for the vast majority of the population to predict the future of technology. You can ask: "what does vast majority mean? How many people would that be?" Vast majority means 95% of the population. Most people are clueless about the world around them, so perhaps I'm making it too easy to fit the description of a singularity. Oh well. I made up my own definition of technological singularity. Don't be a sheep and follow Ray's exact definition. Life's too complicated to be a follower. In 1990, 95% of the population of the world would NOT have predicted the future we're living in today. THAT = Singularity. Period. lol.
I'm a business man, that's all you need to know about me.

#20
Chronomaster

    Member

  • Members
  • 80 posts
No, I don't base my understanding of the technological singularity on Moore's Law; I simply used it as one example of why I disagree with your position. The fact that Moore's Law has remained consistent is evidence, contrary to your position, that we CAN still predict with accuracy how computer technology will unfold in the near future. It doesn't matter if 95% of people cannot predict what the future is going to be like; that's not the singularity. The singularity occurs when NO ONE can predict what's going to happen, because we're no longer in control. That hasn't happened yet.
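The kind of near-term predictability Moore's Law gives can be shown with a back-of-the-envelope sketch: model transistor counts as doubling on a fixed period and see how close the projection lands. This assumes the textbook two-year doubling period and the 2,300-transistor Intel 4004 (1971) as the anchor; both are just the commonly cited figures, not precise data:

```python
# Moore's law as a simple doubling model: transistor counts double roughly
# every two years, starting from the Intel 4004's ~2,300 transistors in 1971.
def projected_transistors(year, base_year=1971, base_count=2300, doubling_years=2.0):
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Projecting ~40 years out (20 doublings) gives ~2.4 billion transistors,
# within an order of magnitude of real circa-2011 high-end chips.
print(f"{projected_transistors(2011):.2e}")  # 2.41e+09
```

That a one-line model tracks four decades of hardware is exactly the consistency the post points to; no equivalent model exists for predicting what happens past a singularity.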
  • Caiman and Shimmy like this
Counting down...




