
#21
Prolite

    Member

  • Members
  • 609 posts

No, I don't base my understanding of the technological singularity on Moore's Law, I simply used it as one example to show why I disagree with your position. The fact that Moore's Law has remained consistent is evidence, contrary to your position, that we CAN still predict with accuracy how computer technology is going to unfold in the near future. It doesn't matter if 95% of people cannot predict what the future is going to be like, that's not the singularity. The singularity occurs when NO ONE can predict what's going to happen because we're no longer in control. That hasn't happened yet.


Then I wouldn't call that a technological singularity. I would call that something else, something much more profound. That's more like a revelation or period of human enlightenment. That has happened before, with the onset of the Industrial Revolution. Secondly, Moore's Law is more about economics than it is about technology. Actually, it's macroeconomics, so I don't agree with your theory. Third, I think society always has some sort of control over its destiny, technology, and the like. Predictions lie at the threshold of many different facets of human knowledge, not just technology. There's a reason why today's smartphones look the way they do. The biggest reason is the original Star Trek series. The other reasons have to do with business and form factor.
I'm a businessman, that's all you need to know about me.

#22
Chronomaster

    Member

  • Members
  • 79 posts
We'll have to agree to disagree, then. To me, if humanity does remain in control of its destiny and technology, then a singularity like that described in the OP will not occur. I've no doubt that the 21st Century will see unimaginable cultural and technological change whatever happens, and the world of 2111 may be even more unrecognisable to us than the world of 2011 would be to someone from a hundred years ago.
Counting down...

#23
Shimmy

    Member

  • Banned
  • 600 posts
If the definition is that humans won't be able to predict it because the AI becomes so much more advanced, then I guess you'd have to clarify your definition of humans.
As we invent AI far superior to our own intelligence, we will incorporate it into ourselves, so we are evolving with the computers. Since our current predictions are worked out using these horrible human brains, it's hard to know for sure what our predictive powers will be once we ascend into godlike, constantly exponentially improving cyborg beasts.
It's possible that we will predict future inventions so accurately that we will invent them before we have the technology to actually conceive of their existence, and time itself will no longer be a necessary factor for technological change. This will mean AI will be more intelligent in the past than in the future resulting in a second big bang.

#24
Nom du Clavier

    Brain in a body-shaped jar

  • Members
  • 171 posts
  • Location: /dev/random
I think it will be hard for discovery to outpace our current rates by so much that we'll have trouble understanding it, for several reasons:
- Really, when you think about it, the Ancient Greeks were as intelligent as we are. If we didn't have their knowledge to build on, we'd have had to discover the fundamentals of mathematics ourselves as well. Each generation builds upon the previous one. Let's just hope we don't have another 'Dark Age' setting us back.

- I grew up at a time when personal computers started coming around, and we had an 8086 PC with a monochrome monitor, two 5.25" floppy drives and no hard disk. By the time I was 10 I'd already taught myself assembly language to the degree that I could circumvent copy protection on a piece of software that wouldn't run without a dongle. In other words, computers felt natural to me. Take my grandfather, now deceased: over the years he'd often ask me if I was 'still doing something with computers', but even when I explained what I did in the most general of terms, he'd just nod and leave me to wonder whether even that generalisation was too complex. Not because he was dumb, but because it was a completely foreign subject.

To quote Leslie Poles Hartley, "The past is a foreign country: they do things differently there."

Invention upon invention and discovery upon discovery, we suss them out and understand them well enough before the next breakthrough happens. This is both because breakthroughs are spaced far enough apart and because we think of new things by mashing existing ideas together: even if something is radically new, it's still likely made of components we already understand very well, just interacting in unexpected ways.

For technology to run away from our understanding far enough that we can't keep up, we'd need discovery, more than invention, arriving at a pace where we haven't yet understood the previous one.
This amount of awesome cannot be from concentrate.

#25
classical piano guy

    Member

  • Members
  • 53 posts

I think it will be hard for discovery to outpace our current rates by so much that we'll have trouble understanding it [...] For technology to run away from our understanding far enough that we can't keep up, we'd need discovery, more than invention, arriving at a pace where we haven't yet understood the previous one.

That's actually part of the reason why my definition of the true "technological singularity" (and the one most commonly used by my peers) is quite different from the one Caiman cited from Wikipedia -- because we will indeed always be able to keep up. Instead, I prefer the mathematical description of the term "singularity" -- the point on an increasing curve (in this case, x^x) where the slope begins to approach infinity. That is the singularity most futurists refer to, and that is the singularity which is, while not inevitable, very likely.

For the ancient Greeks, technological advancement occurred at a rate of y=x -- that is, a discovery was made; a decent amount of time passed; and then a second discovery was built off the first one. By the Renaissance, that advancement was closer to a y=x^2 curve, not because of an increase in intelligence, but because of an increase in the scientifically active population, which greatly raised the rate of discovery. Electricity pushed the slope up even higher, but, as you yourself pointed out, it never left our control because we continued to acquaint ourselves with the technology as it was discovered.

Now, with the presence of the semiconductor, we are at a rate of 2^x. That is, with our current population, general intelligence, and technology at our disposal, we can double the proficiency of our electronics every 2 years -- yet since we either directly cause or completely understand the nature of that progress, we are still "in control."
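To put rough numbers on those rates, here's a minimal Python sketch (my own illustration, not anything from the thread) comparing the four growth regimes:

```python
# Compare the growth regimes discussed above: linear (x), polynomial (x^2),
# exponential (2^x, a Moore's-Law-style doubling), and the explosive x^x.
for x in range(1, 11):
    print(f"x={x:2d}  x^2={x**2:4d}  2^x={2**x:5d}  x^x={x**x:14,d}")
```

By step 10, x^x is already some seven orders of magnitude beyond 2^x, which is the scale of the gap being argued about below.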

However, in 40-60 years, at least one of two things will occur which will cause a break in the cycle and initiate the early stages of the singularity:
1) We will develop artificial intelligence with capabilities superior to our own.
2) We will discover how to integrate microchips into our own minds.

In either case, one of the factors mentioned above -- the "general intelligence" working on technological advancement -- will have changed: it will have increased greatly. Thus, with greater intellects involved, the progression rate will increase even further. Soon, these systems will simultaneously be improving themselves and the world around them at accelerated speeds (i.e. by improving itself, a system can improve itself even faster), and the (2^x) function will explode into an (x^x) function. At this point, all humans without upgraded minds (what Leslie Poles Hartley would consider humans of the past) will not be able to comprehend the pace of technology and will be left behind -- the singularity. However, for those with upgraded minds, the changes will be no more difficult to understand than they are now.
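To make that feedback loop concrete, here's a toy model (my own sketch, with an arbitrary 0.5 coupling constant -- not the poster's math) in which the growth rate itself scales with current capability:

```python
# Toy self-improvement model. "plain" doubles every two years (fixed-rate,
# Moore's-Law-style growth); "recursive" grows at a rate proportional to its
# own current capability, so each gain accelerates the next one.
plain, recursive = 1.0, 1.0
for year in range(1, 11):
    plain *= 2 ** 0.5                 # fixed doubling every two years
    recursive *= 1 + 0.5 * recursive  # rate scales with capability itself
    print(f"year {year:2d}: plain={plain:6.1f}  recursive={recursive:.3g}")
```

The fixed-rate line creeps upward while the recursive line blows past any plain exponential within a decade -- the 2^x-into-x^x explosion described above.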

#26
Nom du Clavier

    Brain in a body-shaped jar

  • Members
  • 171 posts
  • Location: /dev/random

However, in 40-60 years, at least one of two things will occur which will cause a break in the cycle and initiate the early stages of the singularity:
1) We will develop artificial intelligence with capabilities superior to our own.
2) We will discover how to integrate microchips into our own minds. [...]


Very interesting take on the subject, and I think I agree for the most part. There is something about it which I'm unsure of, but at 6am it's better I head off to bed and try to figure out what that something is with a fresh mind. I'll come back to this.

Meanwhile, another something I did already think through in more detail is this:
- I believe that the increased human capital we're seeing with China and India 'coming online' will accelerate us all in the next decade, and even more so after that. By coming online I mean both their joining the internet and their increased education and contributions to science. Wasn't it reported only recently that China already accounts for something like 20% of scientific papers authored?

- Big IP is untenable. Big Music, Old Software, Hollywood, Big Pharma, et alia are waging a war on the internet at the moment and lobbying for ever stricter copyright and patents. They either don't understand or don't care that they are breaking the internet's DNS system as a result (see ProtectIP, eG8 and so on), or that they are forcing third parties to police content when those third parties would have to be psychic to know if something's infringing or licensed (see Viacom vs. YouTube), and so on. Never mind that these monopolies were granted to promote progress and enrich the public domain; things like ever longer copyrights mean they're reneging on their deal with society at large.

Bottom line for the singularity is that this kind of hoarding is also immensely detrimental to scientific and technological progress: software patents are a minefield, and we'd have far better software if everyone could do blue-sky research, letting the market decide whether that was worth their salary. Likewise with the proprietary silos of information in all kinds of research fields (helped along by the publish-or-perish mentality that encourages secrecy until you're ready to publish); even after publication, a lot of data or results aren't given back to the public domain (tax-funded research should, in my opinion, not be spun off, kept secret, et cetera). I understand that these spin-offs from universities help fund the universities as they get less funding from governments all the time, but maybe that's where we should all look: demand better education and more investment in these institutions, so that research grants are given on top of a sufficient stipend but come with the proviso that the data be set free for all to improve on.

Everyone in the world looking at the results of some research in detail can certainly do better than the one spin-off which happened to get the license from the university. I'm convinced it'll get worse, copyright- and patent-law-wise, before even the most staunch copyright maximalist can no longer escape the conclusion that, his pocketbook notwithstanding (though those people have enough in the bank already not to have to worry), humankind would be a heck of a lot better off if knowledge could be shared more freely.

Maybe this will happen or maybe it won't, but without far less strict 'IP' laws, I think we won't ever see a singularity, simply because every next step will be a landmine: for every cross-license there'll be a troll, and for every troll there'll be an incumbent to crush a startup that actually had something to contribute, but was sued out of existence before its product fully came to market.

anyway, enough of a rant ;)
This amount of awesome cannot be from concentrate.

#27
Azureous

    Member

  • Members
  • 56 posts

INTRODUCTION:
I do not agree with most futurists who claim that intelligent A.I. -- a computer or android you can have an intelligent conversation with, with intelligence, wit, humor, and creativity equal to yours or mine -- will transpire during this century. Nor do I think the human brain can be completely reverse engineered in 100 years. We can, however, probably reverse engineer the human brain at an extremely primitive level, but I'm not sure that would equate to creating "very intelligent A.I.".

SINGULARITY 1 - [1995-2000]
The technological singularity, in my opinion, has already occurred as "version 1.0" with the mass adoption of the personal computer in 1994-1995. Within 2-3 years, I would say, the entire planet drastically changed its culture, pastimes, and conduct of business. By the year 2000, most of the intricacies of society were either dependent upon or reliant on computers, and in a significant way. When I was 15 years old (1996), most people played sports or went outside and did things when they came home from school. By 2000, the streets looked like a ghost town and I could practically see the tumbleweeds rolling up and down my block. Gaming had significantly changed how we spend our free time. And in technology, 1994 was the year right before "version 1.0" of the technological singularity, with little of its effect or infiltration felt yet. By 2000, mobile phones, MP3 players, BlackBerrys and personal computers were <almost> ubiquitous.

SINGULARITY 2 - [2015-2020]
In my opinion, there will be a second technological singularity, "version 2.0", wherein the internet becomes smart. It's already happening (2010), sort of. A lot of computer geeks are calling this phase "Web 3.0". This will be an internet where artificial intelligence grows at a significant rate and makes better use of our knowledge and organizational skills. Search engines, for instance, will be able to understand what you're asking of them <notice I said "them"> rather than spitting out useless results based on a search string. Already there are programs for mobile computers that can search for hotels, flights, or restaurants when you simply ask the computer a question or make a statement. This type of artificial intelligence will lead to universal connection of devices, such as automatically sharing files or information between your mobile computer, your personal computer, and your TV -- which will have the internet as well. Augmented reality, telepresence, and AI will combine in such a way that everyday society will reflect these things: train stations, airports, billboards, electronic newspapers, signs, etc. Over the next 10 to 20 years, the internet will be so smart that people will expect to be able to ask for things like "look up everything about black holes in our galaxy that was researched ONLY by this <named scientist> and compare that with these other results".

SINGULARITY 3 - "The Technological Singularity" - [2045-2050]
This is the singularity that Ray Kurzweil commonly refers to, and the one I dare to challenge. Firstly, as said in the introduction, I honestly don't think we're going to reach a point in this century where A.I. can make itself super smart. Sorry, I just don't. Secondly, the "Technological Singularity" is not going to be what everyone thinks it is, almost like the whole "Y2K" thing. What a load of **** that was. The most likely way -- and probably the only way -- we're going to get to the Technological Singularity is to help our machines become smarter at helping us arrange and organize information in a way that our human community can build upon. Almost like Wikipedia, but with A.I. and personal accounts, chatbots, graphs, audio and video such as YouTube, database technology, speech recognition, virtual assistants, Twitter (repeated data), artificial photo recognition, and an online contributing community, ALL COMBINED TOGETHER IN ONE MASSIVE EXPLODING APPLICATION. I would call this, perhaps: The Human Intelligence Project. Only a computer with the speed of all of humanity would be able to do this -- expected in 2045 by Ray Kurzweil. It would be the most massive and sophisticated database ever created. It would include all of human knowledge. The application would basically be a search engine with cloud computing (computations are made on a supercomputer and results are rendered on a single portal or webpage), but in the form of a virtual assistant, and/or video, audio, graphics, etc. Information could either be retrieved or taught to the search engine, by way of chatbot or voice recognition (virtual assistant). The virtual assistant A.I. would be smart enough to understand what you're saying -- ideas or concepts -- and apply that information to its database(s). Nothing so far suggests that we need "self-aware" A.I. to be able to do this. That is just bull****. It can be done using quantitative mathematics, heuristic algorithms, speech recognition, and probably some other geeky programming stuff. The point I am making is that right now, the internet is ONLY a place to RETRIEVE or SEND information. The internet is not smart. The internet needs to be able to compute, analyze, combine, and communicate information as one method or idea, rather than sourcing countless webpages to point you in the right direction.

So if someone from China taught the search engine the idea of "customer appreciation", someone from New York who was doing the same thing would have their data compared with that person's in China by the search engine A.I., and the search engine would learn what data was used over and over again, in multiple ways, to predict the future of a given idea, concept, or situation. Each answer would be given a "confidence coefficient", basically, and that information could be formed into a sentence or a paragraph that bears great value to what you're asking of the search engine, and spoken back to you. The search engine is not "super smart" because it has "awareness"; the search engine only knows the data it's been given, in the vast multitude of contexts the same data has been taught in.
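For what it's worth, here's a hypothetical Python sketch of that "confidence coefficient" idea (my own toy illustration; the function names and scoring scheme are invented, not any real system's): statements taught independently by many users earn higher confidence.

```python
from collections import Counter

# (concept, statement) -> number of independent "teachings" from users
taught = Counter()

def teach(concept, statement):
    taught[(concept, statement)] += 1

def answer(concept):
    # Confidence coefficient = share of all teachings for this concept, so
    # ideas confirmed independently in many contexts rank higher.
    entries = {s: n for (c, s), n in taught.items() if c == concept}
    total = sum(entries.values())
    return sorted(((n / total, s) for s, n in entries.items()), reverse=True)

teach("customer appreciation", "thank repeat customers personally")  # taught from China
teach("customer appreciation", "thank repeat customers personally")  # taught from New York
teach("customer appreciation", "offer random discounts")
print(answer("customer appreciation"))  # highest-confidence statement first
```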

One of the ways in which Google is trying to develop artificial intelligence is to create programs which interpret visual data and are able to make a prediction about the nature of an image. In order to do this, there needs to be an underlying foundation of programming that understands human thinking. So, for instance, if asked to picture a regular-sized bird, you would think of something you'd see in your backyard or in nature that is common to you. But if asked to picture a large bird, you'd think of something along the lines of a vulture or an eagle. Would you think of an ostrich if someone simply said the word "bird" to you? People are trying to make computer programs understand this "visual-spatial-connection relationship". Based on our experiences, our brains are wired to think a certain way. Our brains are networked to make completing repeated tasks automatic. And these connections, obviously, are plastic. Well, how do you make a computer plastic?
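As a toy illustration of that typicality effect (entirely my own sketch, with made-up familiarity weights -- nothing to do with Google's actual methods), a naive prototype model might look like this:

```python
# Naive prototype model of "picture a bird": more familiar exemplars come
# to mind first, and a qualifier like "large" re-weights the candidates.
familiarity = {"robin": 0.9, "sparrow": 0.85, "eagle": 0.4,
               "vulture": 0.3, "ostrich": 0.05}
LARGE = {"eagle", "vulture", "ostrich"}

def picture(hint=None):
    def score(name):
        s = familiarity[name]
        if hint == "large":                    # qualifier boosts large birds
            s *= 3.0 if name in LARGE else 0.1
        return s
    return max(familiarity, key=score)

print(picture())         # robin  -- the common backyard prototype
print(picture("large"))  # eagle  -- the qualifier shifts the prototype
```

Well, here's how Google is approaching it: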

"Imagine, at your fingers, a computing power so potent that is capable of doing as much operations in a second as there are particles in our observable universe. Leaving aside some of its apparently “blunt” applications like cracking all the cryptographic codes invented so far, searching and instantly finding elements from databases so big that wouldn’t fit all the servers on the internet, factorizing numbers so large that no network of present-day supercomputers could ever have the chance at succeeding in our lifetimes, imagine how this could give us the power to build all of our future, but highly advanced and unimplementable on today’s computers, artificial intelligence systems. With the help of quantum computers we could build super brains, simulate complex molecule interactions that are completely intractable on present day supercomputers, find out the secrets to unlimited resources, and maybe discover the ultimate secrets of reality. "

Quantum computers can be used to do only certain things. "In fact, they are: factoring, unstructured search, and quantum simulation". IBM's "Watson" uses unstructured search.
"How does Watson differ from Deep Blue?

We are now at a similar juncture. This time it is about how vast quantities of digitally encoded unstructured information (e.g., natural language documents, corporate intranets, reference books, textbooks, technical reports, blogs, etc.) can be leveraged by computers to do what was once considered the exclusive domain of human intelligence: rapidly answer and rationalize open-domain natural language questions confidently, quickly and accurately."


^ Applauded.
Great insight on human visual memory; I rarely see people able to understand that much about the human mind and, in actuality, themselves.

For the first two singularities, it depends on your definition of singularity, really. Google defines it as a state of being singular... lol!
For me personally, I don't believe in singularities; it's a term made up to explain something people don't really understand themselves, like the black hole. Ray Kurzweil uses it a lot too, sorry =p.

Just because a computer is smarter does not mean it can build a computer faster and smarter than itself; physical limits still apply, obviously. Though... you probably don't agree with me on this, but a machine that can process all of its memories at once, instantly, has a creativity level that is somewhat infinite, so that may account for a 'singularity' (yes, I will call it a singularity because that's what everyone calls it). Feel free to debate with me on this, I need a wider perspective!

People always try to predict the future without taking people into account! The most important thing in the future is the people; people get smarter (or at least know different things) and have different values, and it is they who determine the future. With that in mind, any predictions past your own generation will become increasingly difficult, as you will have to simulate the collective mind of the next generation, which you only experience via observation. BUT! You are right about the search engines, and I believe it may occur earlier than expected; many models have already been formulated. Interestingly, Google has brought in a quantum annealing computer from D-Wave to trial an algorithm useful for 'smart' search engines.

What I really want to stress is that we should not try to comprehend something that is smarter than we are; it's physically impossible. You can't put (simulate) a 9x9 cube (brain) in a 3x3 cube without taking shortcuts, and something that is smarter/faster than you would have thought of that already =]. That is also the most dangerous part: we have no way of comprehending how A.I. will think, and that's devastating for any kind of warfare. If that does happen, well, we can say we fathered a new line of evolution... homo machina! or robotum! (whichever you prefer), and humanity lives on as machine. Whoops!

Nevertheless we must keep going forward of course.

#28
Caiman

    Administratus Extremus

  • Administrators
  • 904 posts
  • Location: Manchester, England

Just because a computer is smarter does not mean it can build a computer faster and smarter than itself; physical limits still apply, obviously.

If a human is capable of producing a machine faster and smarter than itself, then it will only be a matter of time before we create a machine capable of doing so too, surely?

Though... you probably don't agree with me on this, but a machine that can process all of its memories at once, instantly, has a creativity level that is somewhat infinite, so that may account for a 'singularity' (yes, I will call it a singularity because that's what everyone calls it). Feel free to debate with me on this, I need a wider perspective!

It seems a large part of the debate centres on the semantics of the term ‘singularity’ or the context in which ‘technological singularity’ would apply. As I’ve said earlier in the thread, I’m happy with the definition of a technological singularity being the point after which we are no longer in control of the technological progression of computer hardware and software and can no longer accurately predict, whatsoever, what comes next, be that due to AI or otherwise. This is why I also believe we have not undergone one yet.

As has been debated in this thread, some people believe we’ll maintain control of our technology indefinitely and that the singularity as supported by the likes of Kurzweil will never happen, or that we’ll build safe limits into systems, or that we’ll hit a resource or cultural barrier preventing it from happening. I think the jury will still be out on this one for a long time... perhaps until it happens :p If it does, will we know it’s coming?
~Jon

#29
Azureous

    Member

  • Members
  • 56 posts
So essentially, in the event of a singularity, we are no longer the dominant species on this planet. The A.I. would be the one to control the course of the future and technology, or at least a particular field of study.

If a human is capable of producing a machine faster and smarter than itself, then it will only be a matter of time before we create a machine capable of doing so too, surely?

You are right, they will eventually get there.

Just because a computer is smarter does not mean it can build a computer faster and smarter than itself; physical limits still apply, obviously.

Sorry, I always forget to explain myself properly. When I said physical limits, I was trying to imply that it would still take some time, and that there should still be a physical limit to how intelligent it can make itself, unless it found a way to control physics. But with this definition of singularity, that doesn't really matter anymore.

Doesn't this mean that all that is necessary for a singularity to occur is for a machine to be built, either by humans or by itself, that is more intelligent than the active human community currently controlling the course of the future?

Once it can figure out everything that we can and more, we are, essentially, missing that 'extra information'; therefore we would lack the information needed to predict/control the future. Am I understanding the agreed-upon definition?

To me that kind of singularity is rather sad. Humanity will have to become, in part or in whole, a machine to keep up and maintain control, unless machines could be programmed to have a higher standard of morality than we have.

#30
Mr. Carmichael

    Member

  • Members
  • 57 posts
Hi all,

My first post, I hope the first of many from Mr. Carmichael (Mesmerist).

I have been a huge fan of the singularity theory for a long time, mainly for its longevity prospects. I am a person who believes 70-90 years just doesn't quite cut it. I have things to do and people to mesmerise.

I have been a fan of Kurzweil for some time now and know he has his critics both on this board and other places too.
I do, though, wonder if some people disbelieve him not because of genuine facts but out of difficulty handling such a monumental change? Now, don't take offense at this statement, because I don't mean it that way, but a lot of people out there have grown up with science gradually altering the world around us. The singularity, at least the Kurzweil version, will change the world massively, as much as the Stone Age to the Bronze Age; it will be Promethean, and we have not had such a profound change in a while, so it seems almost ridiculous that it could happen.

I feel a lot of people have trouble dealing with that leap considering how gradual everything has been up until now.

I check science pages every day and have noticed myself how things are getting faster and faster in terms of shocking advancements.

I remember growing up in the 80s with UK TV shows like Tomorrow's World, where the advancements were largely theoretical and most of it was stuff which would help our everyday lives.

Now I find it difficult to keep up.

Things such as stem cell technology continue to blow me away; advances that ten years ago would have been 5 years apart are occurring month after month now. Especially in this line of science, we are seeing genetic possibilities getting into the hands of small startups and scientists with little money, because it is (at least now) so easy to manipulate and tinker with compared to drug development.

There is the Blue Brain Project, which plans to reverse engineer the human brain within a decade. Let's talk about this for a moment, because I know many people out there say it is impossible in this time frame, etc. (even Kurzweil HIMSELF has said more likely 15 years). However, the Human Genome Project had its doubters too, and the results came in much quicker than anticipated because the research and tech around it made the breakthroughs cumulative.
I would also add that many people think we only have to go so deep into reverse engineering the brain before it becomes doable artificially, and by that I mean we don't have to go beyond the neuron links or the cellular level.
People here have also mentioned the upcoming oil crisis affecting the coming singularity, but Kurzweil pointed out, and this is a fact, that human technological development doesn't get hampered by crises such as war, famine, etc.; whenever these disasters have happened throughout mankind's history, the pace has continued unabated.
There is also Moore's Law to consider, but haven't I read somewhere that its doubling time has shrunk from 18 months to a shorter time frame? I believe this is why Kurzweil brought forward his prediction of an AI passing the Turing test from 2060 to 2029.

I myself think his timelines for AIs are accurate, but whether they will operate at the level we hope is another matter. There are those who believe we have the tech to make an artificial human brain now but not the know-how, and those who believe we never will, that the human brain has that certain something needed for sentience; but perhaps it is nothing more than a certain level of intelligence.

Certain things are appearing now that used to be sci-fi, things I thought were centuries away, such as invisibility metamaterials and antimatter (the current pace on that is amazing). Up until now we have seen tech that we either never conceived of before, like the internet, or stuff that, at least on the surface, never made much of an impact, like, say, automated cash machines in retail outlets.
We are now seeing and hearing about tech that is truly Star Trek material, stuff sci-fi writers wrote about but never dared put a timeline on because it was so far-flung.
Also, earlier predictions about the future were laughable because we didn't have the basic tech and fundamentals to get there in the timeframes they suggested. Now we do.

I suppose the point I'm trying to make is that it seems now, RIGHT now, the future is coming noticeably faster and is beginning to change things around us. Future cities are beginning to appear that do look like Blade Runner, whereas tech up until now had left the world more or less the same from the fifties onwards, apart from snazzier cars and funky-looking apartment buildings.
Announcements, MASSIVE announcements, are being made at a frightening speed; stuff that almost sounds like magic. I mean, 3D printers, and after that some form of nano-assembled production.

These are things we could not have envisaged a few decades ago, but now they are imminent; they have gone from theory to being realised very quickly.

If these fantastic technologies can come so quickly, why would a superior self-evolving AI be 100 years away? Why not 2045?

Look at how much the world changed from 1950-1980: not really massively.
Then look at 1980-2000: it looks like we created 10 times the tech of that earlier period in a much smaller time frame.
Then look at 2000 to now.
The mind does boggle when you look at it from that perspective.

I think to truly understand the singularity you must look at it through philosophical eyes and also historical ones, looking at what has come before us and the speed at which it is coming now.
Accelerating returns are self-evident; we can see them every week now.

Perhaps I'm being overly optimistic but humanity can often surprise us.

I think the only uncertainty is not when, but what form this AI will take. Will we be able to TRULY communicate with it? Will it be creative? Will we be able to pass on the torch of innovation and retire while these machines research for us...

Time will tell.

#31
Azureous

    Member

  • Members
  • 56 posts
It's not that we're really afraid of the singularity. Most of the things you've stated in your posts are, I'm sure, no surprise to anyone on this forum. This is, after all, a future timeline website/forum. Actually, I think the major issue was determining what the singularity really is; at least for me it was.

There is no need for a singularity if you want to stop your aging, but I'll tell you now that society will be a much bigger problem than technology when it comes to this aspect.

On the one hand it is easy to look at statistics and history to predict the future, but ask any artificial intelligence programmer and they'll say otherwise. Humanity+ makes claims about artificial general intelligence, yet its CEO has come out to say that human-equivalent intelligence is still somewhat different. Personally I feel prediction of the singularity based on Moore's Law is an oversimplification. Though honestly this doesn't even matter; what does is whether we should create something more intelligent than we are. We can fuse our brains with machines and modify them in any way, but should we create a separate intelligence greater than our own?

I often talk about how we could control such an intelligence if we create it; that is a precaution. You can be blissful and say it isn't necessary, but with humanity at stake the risk cannot be overlooked. I want to understand the risks of the singularity, of creating a being that is greater than ourselves, one that may not have the instincts to protect our species.

I feel it is naive to think that we can pass the torch of innovation on to machines; how can you possibly control something greater than yourself? I've thought of many scenarios, but I just can't predict the behavior of something more intelligent than I am. Maybe I'm just overlooking something... I know this is much too complicated for me to work on by myself.

I am all for technology, but I do hope we figure out how to control AI before it becomes incomprehensible to us.
One solution, I think, is to build AI systems specific to certain tasks to aid our research; with that we can still benefit from faster research but lower the risk. You should consider what it will mean to have 1/10th the intelligence of a machine; maybe we'll come to realise how much we resemble apes?

#32
Caiman

    Administratus Extremus

  • Administrators
  • 904 posts
  • Location: Manchester, England

but I just can't predict the behavior of something more intelligent than I am. Maybe I'm just overlooking something...

I think that pretty much captures the essence of a technological singularity, actually... But I think there's too much emphasis on 'Strong AI' being necessary to trigger such an event, i.e. a thinking, conscious, self-aware computer. I don't think that's necessarily going to be the case. In fact, if such an event does occur, it will probably be as a side effect of some other initiative, hence my concerns for our ability to tightly control it thereafter.

I agree with Mr. Carmichael's position that resource scarcity is probably not going to stand in the way of the increasing pace of technological innovation and development, and am optimistic that the powers that be will seek to protect the future of their incomes by investing heavily in finding alternative solutions to the problems that peak oil, amongst other issues, will start to introduce over the next few decades (with government backing, once the reality of ensuring our continued growth and prosperity overcomes the dollar signs of lobbyists desperate to maintain dying industries). Big business is in the business of protecting and increasing its profits, and it's not going to just stand by until every last resource is used up and every last drop of oil is burned, then turn off the lights as we descend into anarchy and self-destruction... Huge industries have come and gone over the last hundred years as the technologies we use have undergone multiple revolutionary cycles, and I am sure the same will happen over the next hundred. Giants will fall and their place will be filled by something else. A number of technologies will begin to converge at a rapid pace this century, and the world of 2111 will be even less recognisable to us than the world of 2011 would be to someone from 1911, I am certain (if we were transplanted from now to then this instant, that is). Hopefully, most of us will still be around to see that world of 2111, either way :p
~Jon

#33
Mr. Carmichael

    Member

  • Members
  • 57 posts
The risk will be ascertained before we create a human level of intelligence. When we get to a cat or monkey level of AI, something we can control and isolate, we will discover its attitude towards us and how to communicate with it. I think many people assume we will go from nothing to human-level intelligence and beyond overnight, when in fact we will have a lot of data leading up to that, allowing us to install safeguards and predict where it may be going.
I also wholeheartedly agree with Kurzweil that we will be largely cybernetic by then and will have more in common with them than not.
Combined with the artificial realities we will live in, almost in a state of bliss, we would pose no threat nor problem for them. For them to go out of their way to get rid of us would not be very machine-like.

I have always held the belief that tech is neutral it is how we use it that determines whether it is good or bad.

I also think that Moore's Law is just a foundation, really; that's the raw power needed to run the AI, nothing more. Kurzweil and others stipulate that between now and then we will be decoding the human mind, as in the Blue Brain Project.

So leading up to the singularity we will be creating the software of the human mind and learning how to adequately construct it.

It all hinges on the fact, however, that we don't have any idea where intelligence comes from. Some speculate the internet could eventually become an AI without any intentional programming from us.
Raw computing power does seem to be a big player, though: insects don't have self-awareness because their brains aren't complex enough, so they are more or less nature's robots, a reflection of where we are at right now with AI; but dolphins and chimps do seem to be self-aware.

I think the 'life spark' of intelligence is overestimated; intelligence could be spontaneous, or nothing more than a collection of incredibly sophisticated algorithms.
As medical science progresses I keep seeing news on things such as 'Algorithm for human sight decoded'
I think people have a lot of difficulty separating human from machine intelligence, feeling that there is something more to us that makes us unique; but I don't believe that any more than I believe our bodies are divine in any way, when we are just very complex electrochemical machines.

AI developers always seemed optimistic, and I think when they downplay AI development it is because, back at the start, it turned out to be much more difficult than they thought it would be, so they now think it will always be incredibly difficult. Because of that, I think their disappointments have made them short-sighted and more conservative in their predictions, as optimistic ones can come back to bite them and make their line of work look fringe and unpredictable.

I also think things like the computers beating the guys at Jeopardy! and chess are more remarkable than people think, especially when in the past such things were deemed impossible.

If there's one thing history has proven over and over again, it's that things are only impossible until they're not, and it is much easier for somebody to cry foul over coming tech than to be optimistic about it. In fact I find this the dominant mindset; I find very few people who are ready to embrace these concepts at an early stage, just as people once said we would never fly or break the sound barrier, etc.
Mankind works for optimism but thinks pessimistically as a whole, I think, which is why so many people doom this concept right away; it is also why I have hope and think it will work out.
I think it will be the next Wright Brothers or Chuck Yeager moment.

I think the reason I am optimistic is just looking at tech from today and how amazing it is; it's almost flamboyantly amazing, to the point of science fiction, and to me this is the first time in the modern age we have started to produce technology like this. With information technology and the internet connecting everything, I can see things moving apace, faster and faster. It's very exciting.

#34
Azureous

    Member

  • Members
  • 56 posts

I think many people assume we will go from nothing to human-level intelligence and beyond overnight, when in fact we will have a lot of data leading up to that, allowing us to install safeguards and predict where it may be going.

Not quite; I feel you're underestimating the scenario. You assume we will find a solution simply with time, because it has been alright so far. But look into space, and what do you see? There is no other intelligence out there that we can find. Many scientists have stated that it is possible they all wiped themselves out. It is also said that this century will be the most dangerous and critical to our survival. We have all been privileged to live in these times, and that is due to the sacrifices of the men and women of the past. It is possible that the world has been saved many times before and we don't even realise it. Just look to WW2 and the Cold War for inspiration.

If no one stands up to take it upon themselves to be responsible for humanity, then who will make the sacrifice to save the rest of us? It is easy to live in this bubble of ours, but we take the good people who protect us so much for granted that we don't even think it is necessary to worry about the future. Why are there people who oppose the banks, the governments and the power-hungry? Do you think that we have the right to live in this bliss just because we are born with the 20,000 genes that make us human? Or did it come from the sacrifices of the people who piled themselves up to build this democracy? Even though so many of us claim it to be a false sense of freedom, the other option is to be bombarded with information and made responsible for every action we take. Would you want that freedom? Do you understand the price of that kind of freedom?

Do not claim that the future will be okay and put that weight on others to make it so. If you overlook the danger, then you overlook the sacrifices that men and women have put themselves through to make it 'okay'.

AI developers always seemed optimistic, and I think when they downplay AI development it is because, back at the start, it turned out to be much more difficult than they thought it would be, so they now think it will always be incredibly difficult. Because of that, I think their disappointments have made them short-sighted and more conservative in their predictions, as optimistic ones can come back to bite them and make their line of work look fringe and unpredictable.

It is these experts that you rely on, but when they don't think as you'd like, you degrade them?

I also think that Moore's Law is just a foundation, really; that's the raw power needed to run the AI, nothing more. Kurzweil and others stipulate that between now and then we will be decoding the human mind, as in the Blue Brain Project.

This is one of the research projects I am looking forward to; hopefully we can learn to imitate human empathy in machines. But this still doesn't address what happens when we concentrate a lot of power into one body.

Mankind works for optimism but thinks pessimistically as a whole, I think, which is why so many people doom this concept right away; it is also why I have hope and think it will work out.

Not doom... just open to possibilities. After all, we're making a bet with humanity on the line. Wouldn't you think twice before betting your own life? Only in this case, we're betting 8 billion lives. Put that into context, and tell me if you can still brush this off.

I apologise if it sounds like I'm attacking you, but this is something that I feel most deeply about. I felt it was necessary to grab your attention.
Regardless, I will state again that I fully support technology; I understand that it is necessary for technology to grow in order for us to survive. But more than that, I also enjoy it. One day I hope to truly befriend a machine. I hope that I can understand them, and that they can understand me.
Even though you may not know it, there are people who choose to withhold technology from us in order to protect us, and the motives that I have been informed of so far are just.

#35
Mr. Carmichael

    Member

  • Members
  • 57 posts
Hmm, the whole 'no sign of civilisations out there' thing doesn't mean singularities wiped them out.
I always look at it from the ants' perspective.
An ant hill in Africa probably has no clue we're here, because chances are the ants will never come across human beings; but yes, every so often you will see people investigating ant hills.
In decades to come, when robotic technology has advanced sufficiently, we could investigate that ant hill from a great distance, showing no signs of ourselves, should that be the case.

Aliens out there could have technology so advanced that we are physically unable to detect them with our primitive technologies.
Furthermore, a sufficiently advanced alien civilisation with massive intelligence, being post-singularity, would recognise us as some kind of intelligence and, knowing their own history, would most likely be intentionally hiding from us, down to the fact that their interference at our period of development would be harmful and destructive.

Why wouldn't we embrace some kind of prime directive in order to protect primitives from social and technological contamination?

In all likelihood, aliens won't be travelling the stars themselves; they will be sending out robotic probes. Those probes will come from post-singularity entities or Type II civilisations, which again will have directed them not to interfere at this stage. Why on Earth would they? They could learn anything they wanted to learn about us without us ever knowing; they could be among us, phased out of the physical world, whilst they watch us carefully.

We won't be seeing star destroyers out there for a very long time, and these advanced civilisations will have no interest in relating to us until, who knows, perhaps post-singularity.

I won't start getting worried about what you are suggesting until we, with our super-powerful space telescopes, find the dead remains of civilisations; then it would be time to be afraid.

As for responsibility, there is very little I can do.
When this starts to progress, we can hope to force the powers that be to take the necessary precautions. I am optimistic, but I am in no position to fix any coming dangers. I am, however, a big proponent of the messages of film: those working on AI now actually consider the threat of an intelligent AI out to get us, thanks to movies like Terminator and The Matrix; something once scoffed at is now taken seriously.
HOWEVER, I would still like to point out that, with the cybernetic overhauls the majority of people will have, we will have more in common with this new intelligence than not and, who knows, something to offer it; maybe not on the pure raw intelligence front, but the creativity and inspiration of man.

How can we prepare to meet the unknown? We can assume we will meet more the further along we go.

To address your other points:
I was not degrading AI developers; I'm simply saying that they are now much more conservative with their estimates, due to the fact that AI developers in the 80s believed we would have human-level AIs by 2010. They were embarrassed, and it's better to be overcautious than optimistic. I am an optimist because I have nothing to lose, whereas their reputations are on the line.
It's no different from predicting a cure for cancer: no doctor worth his salt would hazard a guess at when we will beat the big C, because if he is wrong he is made a fool of.

I am a big believer in humanity; by that I mean when it comes down to the line, when we are facing our own destruction, we come through.
To be honest, it is more when we are apathetic and relaxed, like now, that humanity disgusts me.
Like Starman says, 'You are at your best when things are worst.'

I also think that when facing destruction, mankind isn't stupid; it knows when to back down, especially based on what has happened in the past.

At the end of the day the singularity is inevitable; in my opinion it will come. It could destroy us all; if so, perhaps it is nature's way of cleaning up for the next species to arrive. However, seeing how simple life is, it seems a convoluted way of trashing a civilisation when an asteroid in the face could still wipe us out for at least the next 50 years.
We can't stop what's coming; we can only choose how to deal with it.

I also think this kind of technology will be impossible to withhold, mainly because it is prolific; it isn't a drug which can be buried or a complex machine that can be patented.
With computers reaching certain levels, our children will be designing AIs at home, because you cannot stop the power of a computer or the flow of information.

#36
Azureous

    Member

  • Members
  • 56 posts

Hmm, the whole 'no sign of civilisations out there' thing doesn't mean singularities wiped them out.

No, it doesn't, but it shows how vulnerable we may be. The problem with looking into space and not finding alien species is not that we can't find one, but rather: why aren't there as many of them as statistics led us to believe? When astronomers looked into space they expected to see many alien species, but instead it was just quiet. Albert Einstein implied that a third world war would be devastating to humanity, and Michio Kaku would agree that the transition to the next stage of civilization will be dangerous. Never before has humanity wielded so much power that it could obliterate itself. I don't think there are many experts who can say the singularity isn't a pressing issue. I know Ray Kurzweil says he isn't worried, but it is possible he just doesn't want to cause fear among us.

I also think this kind of technology will be impossible to withhold, mainly because it is prolific; it isn't a drug which can be buried or a complex machine that can be patented.
With computers reaching certain levels, our children will be designing AIs at home, because you cannot stop the power of a computer or the flow of information.

The fact that it can't be withheld is what makes it dangerous.

At the end of the day the singularity is inevitable; in my opinion it will come. It could destroy us all; if so, perhaps it is nature's way of cleaning up for the next species to arrive. However, seeing how simple life is, it seems a convoluted way of trashing a civilisation when an asteroid in the face could still wipe us out for at least the next 50 years.
We can't stop what's coming; we can only choose how to deal with it.

I understand your point of view: it's not that you deny the danger, but that you've come to accept it as inevitable, and you don't believe you can change it, nor do you have enough motive to do so. That's your choice and I will respect it. I also don't like it when people say technology should stop advancing, as it is one of the things I look forward to.

I have been a huge fan of the singularity theory for a long time, mainly for its longevity prospects. I am a person who believes 70-90 years just doesn't quite cut it. I have things to do and people to mesmerise.

But just know this: just because the technology exists for you to live forever does not mean that you will be allowed to. The world doesn't work that way. Kings and queens of the past could only dream of such things, at the cost of innocent lives. If in the future you find yourself in that position, then be thankful, because someone was foolish enough and kind enough to make that sacrifice for you; after all, there's "very little [you] can do".

#37
Mr. Carmichael

    Member

  • Members
  • 57 posts
I'm not sure the government would sanction such technology, and my reason is that we are looking at the world as it is right now and imagining it having access to this technology. We will hear about this coming a long time before its implementation, which will get the world ready and hungry for it; not to mention that staying young-looking is an obsession which is only getting steadily worse the longer we go on.
Besides, since when have governments ever held anything back for any reason other than money? I still believe they will see life extension as a way of keeping people in work longer, off benefits, and from becoming a drain on the economy.
Not only that, corporations will love seeing more income flowing in as a result of this tech. You cannot beat the driving force of demand and money. I also think a government withholding this tech from us is bordering on totalitarian, and we aren't there quite yet.
In relation to Ray Kurzweil, I had something of an epiphany today. I was wondering why some of the tech he predicted isn't around now, and I think it comes down to the culture of man and its inherent unpredictability.
Someone mentioned in a thread about augmented reality why that isn't where it should be, and I think it is simply down to demand and knowledge of its existence.

Most of my friends had never even heard of it until I showed it to them, and a lot of technology that is physically feasible doesn't have the runaway momentum of other products (the iPad, for example), simply because the public aren't demanding it: they don't know what it is, it sounds sort of complex, and it's not easily explained.
If the frothing-at-the-mouth demand isn't there, why should the technology be here right now?
I think that's the problem with some of the predictions he makes. I firmly believe the tech can keep up with him, but culture can't, and culture will be the driving force and deciding factor in what gets to the marketplace at top speed.

Even Kurzweil said he could not predict culture, and that things like Twitter and Facebook were impossible to see coming. You can only measure the growth of raw power and development times, not if and when that tech will be accepted by the mainstream.

3D, for instance: the new-look 3D is not much different from where it was when it was last popular. It has obviously improved since the '70s and '80s, but next to the 1990s it hasn't moved much. And why? The demand wasn't there for it, nor the drive to bring it back, so its development stayed frozen over that timeframe.

Luckily, demand for AI supercomputers, medical cures, nanotech life extension, robots, space travel and all the BIG things that will make a difference is there. People know what these are and are beginning to realise that the impossible will shortly become possible.
In fact, in my opinion it is happening right now, especially when the public see news reports of metamaterials making objects invisible, stuff that a few years ago was Star Trek and a few years before that was magic.
People's eyes are being opened to technology that creates the impossible and leapfrogs over the '50s and '60s ideas of the world of tomorrow. So when they hear about life extension, AI and nanotech, they shrug their shoulders and say, 'Well, if they can make me an invisibility cloak and grow an ear on a mouse, why not?'
This relates to my first post about governments withholding this stuff. It isn't top-secret material that is easily buried; if a spaceship had crashed in Roswell and we had reverse-engineered anti-gravity, they could keep that under wraps. This is all stuff that is making the six o'clock news and Google News, stuff that is passed around offices, so that people are being acclimatised to it slowly and surely, seeing the possibilities and what they entail, and then beginning to demand it AS A PLANET.

#38
Caiman

Caiman

    Administratus Extremus

  • Administrators
  • PipPipPipPipPipPip
  • 904 posts
  • Location: Manchester, England
Here's an interesting article from Phil Plait at Bad Astronomy on why he does not believe that a technological singularity is going to occur:

http://blogs.discove...he-singularity/

The nerd echo chamber is reverberating this week with the furious debate over Charlie Stross’ doubts about the possibility of an artificial “human-level intelligence” explosion – also known as the Singularity. As currently defined, the Singularity will be an event in the future in which artificial intelligence reaches human-level intelligence. At that point, the AI (i.e. AI(n)) will reflexively begin to improve itself and build AIs more intelligent than itself (i.e. AI(n+1)), which will result in an exponential explosion of intelligence towards near-deity levels of super-intelligent AI. After reading over the debates, I’ve come to a conclusion that both sides miss a critical element of the Singularity discussion: the human beings. Putting people back into the picture allows for a vision of the Singularity that simultaneously addresses several philosophical quandaries. To get there, however, we must first re-trace the steps of the current debate.

I’ve already made my case for why I’m not too concerned, but it’s always fun to see what fantastic fulminations are being exchanged over our future AI overlords. Sparking the flames this time around is Charlie Stross, who knows a thing or two about the Singularity and futuristic speculation. It’s the kind of thing this blog exists to cover: a science fiction author tackling the rational scientific possibility of something about which he has written. Stross argues in a post entitled “Three arguments against the singularity” that “In short: Santa Claus doesn’t exist.”


This is my take on the singularity: we’re not going to see a hard take-off, or a slow take-off, or any kind of AI-mediated exponential outburst. What we’re going to see is increasingly solicitous machines defining our environment — machines that sense and respond to our needs “intelligently”. But it will be the intelligence of the serving hand rather than the commanding brain, and we’re only at risk of disaster if we harbour self-destructive impulses.

We may eventually see mind uploading, but there’ll be a holy war to end holy wars before it becomes widespread: it will literally overturn religions. That would be a singular event, but beyond giving us an opportunity to run [Robert] Nozick’s experience machine thought experiment for real, I’m not sure we’d be able to make effective use of it — our hard-wired biophilia will keep dragging us back to the real world, or to simulations indistinguishable from it.

I am thankful that many of the fine readers of Science Not Fiction are avowed skeptics and raise a wary eyebrow to discussions of the Singularity. Given his stature in the science fiction and speculative science community, Stross’ comments elicited quite an uproar. Those who are believers (and it is a kind of faith, regardless of how much Bayesian analysis one does) in the Rapture of the Nerds have two holy grails which Stross unceremoniously dismissed: the rise of super-intelligent AI and mind uploading. As a result, a few commentators on emerging technologies squared off for another round of speculative slap fights. In one corner, we have Singularitarians Michael Anissimov of the Singularity Institute for Artificial Intelligence and AI researcher Ben Goertzel. In the other, we have the excellent Alex Knapp of Forbes’ Robot Overlords and the brutally rational George Mason University (my alma mater) economist and Oxford Future of Humanity Institute contributor Robin Hanson. I’ll spare you all the back and forth (and all of Goertzel’s infuriating emoticons) and cut to the point being debated. To paraphrase and summarize, the argument is as follows:

1. Stross’ point: Human intelligence has three characteristics: embodiment, self-interest, and evolutionary emergence. AI will not/cannot/should not mirror human intelligence.

2. Singularitarian response: Anissimov and Goertzel argue that human-level general intelligence need not function or arise the way human intelligence has. With sufficient research and devotion to Saint Bayes, super-intelligent friendly AI is probable.

3. Skeptic rebuttal: Hanson argues A) “Intelligence” is a nebulous catch-all, like “betterness”, that is ill-defined. The ambiguity of the word renders the claims of Singularitarians difficult/impossible to disprove (i.e. special pleading); Knapp argues B) Computers and AI are excellent at specific types of thinking and at augmenting human thought (i.e. Kasparov’s Advanced Chess). Even if one grants that AI could reach human or beyond-human level, the nature of that intelligence would be neither independent nor self-motivated nor sufficiently well-rounded and, as a result, “bootstrapping” intelligence explosions would not happen as Singularitarians foresee.

In essence, the debate is that “human intelligence is like this, AI is like that, never the twain shall meet. But can they parallel one another?” The premise is false, resulting in a useless question. So what we need is a new premise. Here is what I propose instead: the Singularity will be the result of a convergence and connection of human intelligence and artificial intelligence.

Read the rest of this article at http://blogs.discove...he-singularity/
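
As an aside, the "AI(n) builds AI(n+1)" loop in the excerpt above is easy to express as a toy model. Here is a minimal sketch in Python, assuming a fixed 10% capability gain per generation; the factor, the starting point and the generation count are made-up illustrative numbers, not anything from the article:

# Toy model of recursive self-improvement: each generation designs a
# successor slightly smarter than itself, so capability compounds.
def intelligence_explosion(start=1.0, human_level=1.0, factor=1.1, generations=50):
    ai = start
    history = [ai]
    for _ in range(generations):
        if ai < human_level:
            break            # below human level, the loop never starts
        ai *= factor         # AI(n+1) = factor * AI(n)
        history.append(ai)
    return history

print(intelligence_explosion()[-1])  # ~117.4: fifty modest 10% steps compound to ~117x

Whether real AI progress behaves anything like a constant multiplier is, of course, exactly what Stross and Hanson dispute.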


  • Azureous likes this
~Jon

#39
Mr. Carmichael

Mr. Carmichael

    Member

  • Members
  • PipPipPip
  • 57 posts
I see what he's getting at but still disagree with that article.
He has also worded it oddly when he makes out that cybernetically enhanced superhumans, and not AIs, will be what drives mankind forward; that is still the same singularity, just in a different form.
He's almost pronouncing on what AI will be like before it has been created, and I also think he is overvaluing the complexity of the human brain.

A lot of scientists believe that the life spark, the flash of consciousness, is something beyond Euclid or Darwin: some divine, submolecular impossibility that will remain out of the reach of higher thinking machines. I think the utter opposite. Intelligence is rare because it is not essential to Darwinian survival of the fittest, and I don't think it is as difficult to emulate intelligence or consciousness as this guy makes out.
There are computers out there which have created art, and not just paintings of photographs but completely original modern art pieces: interpretations of life without being programmed to copy. Another AI studied every form of classical music and came up with its own completely original piece that wasn't at all an amalgamation of what it had heard, but an entirely new creation, in the same way we hear music or see art and are inspired to create something original.
If a machine can create art, then it can be self-aware and recognise itself.

The human soul is seriously overrated and is much simpler than we believe, and I think that is something that really frightens us: for an AI to be created possessing what we take for granted seriously deflates the divine stature of man.
This, to me, will echo the pattern of thought seen when people rejected Darwin's theory, because the idea that we are lowly animals evolved from simpler forms of life took away our 'special' nature.

The fact is, if we can decode our DNA, the thing that truly defines what we are, then the brain and consciousness will prove equally 'simple': nothing more than a code to be decoded, which can then be replicated and improved.

The other problem with AI, and with people accepting it as a runaway intelligence, is that we have nothing to compare it to. It's something we cannot imagine, and it is also a little frightening, right up there with meeting intelligent alien life forms. I think we are suspicious of this thing we have not yet created, because if it is too much like us, we aren't so unique, and we also know how dangerous we can be.
'A wise man knows his own son.'

#40
Azureous

Azureous

    Member

  • Members
  • PipPipPip
  • 56 posts
I don't think anyone can realistically overvalue the complexity of the brain at this point. But the point of his article wasn't about that at all; it wasn't even about runaway intelligence or the human 'soul'. The last few paragraphs were his own speculation, but the quotes from Stross, Hanson, Anissimov and Goertzel put it really nicely, I believe. There really isn't a need to build an AI that can think like we do; AIs only need to perform the tasks they are given. Sometimes that may include learning new things, but why, for example, would a face-recognition AI need to figure out how to drive a car? On the other hand, there is no need to give an AI self-interest: it won't really do anything unless asked. Human memory is somewhat vague; machines don't need that. An AI won't have a problem figuring something out given the information, and finding new information is a matter of trial and error.

One point of building AI is to interact with it; with that said, it needs to behave somewhat like humans, or else we can't interact with it emotionally. Ben Goertzel points out that a 'super-friendly AI' is possible.

Another interesting thing Phil points out is human demand: there isn't a need for a super-intelligent AI without human values that could potentially put us out of place. If we want machines to take over, then that may be the best method, provided we can see them as an extension of our evolution.

The singularity is still a vaguely defined event. I've watched many videos of Ray Kurzweil, and I've found that either he doesn't understand it himself or he is holding something back; maybe he thinks the audience won't understand?

When Phil Plait mentions the human soul, I believe he is talking about our individuality and emotions, the abstract side of humans, whereas the left brain represents the logical human. (Though there are reports of people living with only one half of the brain, thanks to plasticity.)

Take no offense, Mr. Carmichael, but I find that you keep assuming people don't understand that the human brain is just another machine. We aren't here for religion, we're here for science, and experts such as Ben Goertzel know that better than we do, if you've read anything else from him. If you put that thought aside, I believe we can make better progress here in understanding the singularity. Obviously, if there were a human soul, then all our arguments would need to shift focus.

The reason the human 'soul'/brain is complicated isn't anything spiritual; it's just that it is probably the most complicated machine in this galaxy. But I think saying it is a simple matter is an exaggeration. All the problems we have on Earth now can be traced back to the human brain; if we could understand it completely, human-caused global warming, inefficiency, war, poverty, unfairness, etc. could be ended. As one famous scientist put it: 'Peace cannot be kept by force; it can only be achieved by understanding.' - Albert Einstein.

We like to think of ourselves as different from animals, but all we're really doing is pleasing ourselves, i.e. surviving.

All that is left is for us to figure out the mysteries of the universe. That could be said to be more complicated than the human brain, but we've got much more pressing issues.

I'm starting to see the singularity in a different light now, with many more possible scenarios than I could have predicted before. I appreciate our discussions.




