INTRODUCTION:
I do not agree with most futurists who suggest that intelligent A.I., meaning a computer or android you can hold an intelligent conversation with, one with the same intelligence, wit, humor, and creativity as you or I, will arrive during this century. Nor do I think the human brain can be completely reverse engineered within 100 years. We can probably reverse engineer the human brain at an extremely primitive level, but I'm not sure that would equate to creating "very intelligent A.I.".
SINGULARITY 1 - [1995-2000]
The technological singularity, in my opinion, has already occurred as "version 1.0" with the mass onslaught of the personal computer in 1994-1995. Within two to three years, I would say, the entire planet drastically changed its culture, its pastimes, and its conduct of business. By the year 2000, most of the intricacies of society were either dependent upon or reliant on computers, and in a significant way. When I was 15 years old (1996), most people played sports or went outside and did things when they came home from school. By 2000, the streets looked like a ghost town and I could practically see the tumbleweeds rolling up and down my block. Gaming had significantly changed how we spent our free time. And in technology, 1994, the year right before "version 1.0" of the technological singularity, had seen little of this effect or infiltration yet. By 2000, mobile phones, MP3 players, BlackBerrys, and personal computers were <almost> ubiquitous.
SINGULARITY 2 - [2015-2020]
In my opinion, there will be a second technological singularity, "version 2.0", wherein the internet becomes smart. It's already happening (2010), sort of. A lot of computer geeks are calling this phase "Web 3.0". This will be an internet where artificial intelligence grows at a significant rate and makes better use of our knowledge and organizational skills. Search engines, for instance, will be able to understand what you're asking of them <notice I said "them"> rather than spitting out useless results based on a search string. Already there are programs for mobile computers that can search for hotels, flights, or restaurants by simply asking the computer a question or making a statement. This type of artificial intelligence will lead to universal connection of devices, such as automatically sharing files or information from your mobile computer to your personal computer to your TV, which will have the internet as well. Augmented reality, telepresence, and A.I. will combine in such a way that everyday society will reflect them: train stations, airports, billboards, electronic newspapers, signs, etc. Over the next 10 to 20 years, the internet will be so smart that people will expect certain things to happen, such as "look up everything about black holes in our galaxy that was researched ONLY by this <named scientist> and compare that with these other results". A toy sketch of what that kind of structured, meaning-aware query might look like follows below.
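As a rough illustration (the records, field names, and functions here are hypothetical and not tied to any real search engine), the difference between today's keyword matching and a query the engine actually understands might look something like this:

```python
# Toy illustration of keyword search vs. a structured, meaning-aware query.
# The records, field names, and functions below are hypothetical examples.

papers = [
    {"title": "Black hole candidates in the Milky Way",     "author": "Dr. A. Example", "topic": "black holes"},
    {"title": "Black Sabbath: a retrospective",              "author": "Music Weekly",   "topic": "music"},
    {"title": "Accretion disks around galactic black holes", "author": "Dr. A. Example", "topic": "black holes"},
]

def keyword_search(query):
    """Old-style search: match the raw string anywhere, relevant or not."""
    return [p for p in papers if query.lower() in p["title"].lower()]

def structured_query(topic, author):
    """'Web 3.0'-style search: filter on what the user actually meant."""
    return [p for p in papers if p["topic"] == topic and p["author"] == author]

print(keyword_search("black"))                            # includes the Black Sabbath article
print(structured_query("black holes", "Dr. A. Example"))  # only this scientist's black hole research
```

The point of the sketch is only that the second function operates on meaning (topic and author) rather than on a raw search string.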
SINGULARITY 3 - "The Technological Singularity" - [2045-2050]
This is the singularity that Ray Kurzweil commonly refers to, and the one I dare to challenge. Firstly, as said in the introduction, I honestly don't think we're going to reach a point in this century where A.I. can make itself super smart. Sorry, I just don't. Secondly, the "Technological Singularity" is not going to be what everyone thinks it is, almost like the whole "Y2K" thing. What a load of **** that was.

The most likely way, and probably the only way, we're going to get to the Technological Singularity is to help our machines become smarter at helping us arrange and organize information in a way that our human community can build upon. Almost like Wikipedia, but with A.I. and personal accounts, chatbots, graphs, audio and video such as YouTube, database technology, speech recognition, virtual assistants, Twitter (repeated data), artificial photo recognition, and an online contributing community, ALL COMBINED TOGETHER IN ONE MASSIVE EXPLODING APPLICATION. I would call this, perhaps, The Human Intelligence Project. Only a computer with the speed of all of humanity would be able to do this, which Ray Kurzweil expects by 2045. It would be the most massive and sophisticated database ever created. It would include all of human knowledge.

The application would basically be a search engine that uses cloud computing (computations are made on a supercomputer and results are rendered on a single portal or webpage), but in the form of a virtual assistant, and/or video, audio, graphics, etc. Information could either be retrieved from or taught to the search engine, by way of chatbot or voice recognition (virtual assistant); a rough sketch of that "teach it, then ask it" idea appears below. The virtual assistant A.I. would be smart enough to understand what you're saying, such as ideas or concepts, and apply that information to its database(s). Nothing so far insinuates that we need "self-aware" A.I. to be able to do this. That is just bull****. It can be done using quantitative mathematics, heuristic algorithms, speech recognition, and probably some other geeky programming stuff.

The point I am making is that right now, the internet is ONLY a place to RETRIEVE or SEND information. The internet is not smart. The internet needs to be able to compute, analyze, combine, and communicate information as one method or idea, rather than serving up countless webpages that merely point you in the right direction.
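To make that concrete, here is a minimal sketch of a "teach it, then ask it" engine. The class, the method names, and the storage scheme are all hypothetical; this is only meant to show teaching and retrieval living behind one interface, not how any real system is built:

```python
# Minimal sketch of a "teach it / ask it" engine, as opposed to a link-returning search box.
# The class, method names, and storage scheme are hypothetical illustrations only.

from collections import defaultdict

class HumanIntelligenceProject:
    """Toy knowledge store: contributors teach it statements about a concept,
    and it answers by combining everything it has been taught."""

    def __init__(self):
        self.knowledge = defaultdict(list)   # concept -> list of (contributor, statement)

    def teach(self, contributor, concept, statement):
        """A chatbot, voice assistant, or web form could all feed into this."""
        self.knowledge[concept.lower()].append((contributor, statement))

    def ask(self, concept):
        """Answer with a combined summary instead of a pile of links."""
        entries = self.knowledge.get(concept.lower())
        if not entries:
            return f"I haven't been taught anything about '{concept}' yet."
        lines = [f"- {statement} (taught by {contributor})" for contributor, statement in entries]
        return f"What I know about '{concept}':\n" + "\n".join(lines)

engine = HumanIntelligenceProject()
engine.teach("contributor A", "supply and demand", "Prices tend to rise when supply falls.")
engine.teach("contributor B", "supply and demand", "Scarcity with steady demand pushes prices up.")
print(engine.ask("supply and demand"))
```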
So if someone from China taught the search engine the idea of "customer appreciation", and someone from New York was doing the same thing, the search engine A.I. would compare their data, and it would learn which data was used over and over again, in multiple ways, to predict the future of a given idea, concept, or situation. That information would be given a "confidence coefficient", basically, and could then be formed into a sentence or a paragraph that bears great value to what you're asking of the search engine and can be spoken back to you. The search engine is not "super smart" because it has "awareness"; the search engine only knows the data it's been given, in the vast multitude of contexts that same data has been taught in.
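This is roughly what I mean by a confidence coefficient. The scoring rule below (distinct contributors who taught a statement, divided by total contributors) is invented purely for illustration:

```python
# Crude sketch of a "confidence coefficient": the more independent contributors
# teach the engine the same statement about a concept, the more confident the answer.
# The scoring rule here is invented purely for illustration.

from collections import defaultdict

taught = defaultdict(set)   # (concept, statement) -> set of contributors who taught it

def teach(contributor, concept, statement):
    taught[(concept, statement)].add(contributor)

def confidence(concept, statement, total_contributors):
    supporters = len(taught.get((concept, statement), set()))
    return supporters / total_contributors   # ranges from 0.0 to 1.0

teach("user in China",    "customer appreciation", "thank repeat customers personally")
teach("user in New York", "customer appreciation", "thank repeat customers personally")
teach("user in New York", "customer appreciation", "ignore complaints")   # taught only once

print(confidence("customer appreciation", "thank repeat customers personally", 2))   # 1.0
print(confidence("customer appreciation", "ignore complaints", 2))                   # 0.5
```

A statement everyone keeps teaching the engine, in different contexts, ends up with a score worth speaking back to you; a one-off claim does not.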
One of the ways in which Google is trying to develop artificial intelligence is to create programs which interpret visual data and are able to make a prediction about the nature of an image. In order to do this, there needs to be an underlying foundation of programming that understands human thinking. So, for instance, if you were to picture a regular-sized bird, you would think of something you'd see in your backyard or somewhere in nature that is common to you. But if you were to picture a large bird, you'd think of something along the lines of a vulture or an eagle. Would you think of an ostrich if someone simply said the word "bird" to you? People are trying to make computer programs understand this "visual-spatial-connection relationship". Based on our experiences, our brains are wired to think a certain way. Our brains are networked to make completing repeated tasks automatic, and these connections obviously are plastic.
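A toy sketch of that idea, using made-up sighting counts as a stand-in for experience (nothing here reflects how Google's actual image systems work):

```python
# Toy sketch of a "commonness prior" for the bird example: given the cue "large bird",
# the guess is weighted toward birds we encounter often, so "eagle" beats "ostrich"
# even though an ostrich is technically the larger bird. The counts are made up.

sightings = {        # how often each bird turns up in our (imaginary) experience
    "sparrow": 500,
    "pigeon": 400,
    "eagle": 30,
    "vulture": 20,
    "ostrich": 2,
}

large_birds = {"eagle", "vulture", "ostrich"}

def guess_bird(cue):
    """Pick the most familiar bird that fits the cue, the way a person would."""
    candidates = large_birds if cue == "large bird" else sightings.keys()
    return max(candidates, key=lambda bird: sightings[bird])

print(guess_bird("bird"))        # sparrow: the common backyard answer
print(guess_bird("large bird"))  # eagle, not ostrich, despite the ostrich being bigger
```

Well, how do you make a computer itself plastic? Well, here's how Google is going about it: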
"Imagine, at your fingers, a computing power so potent that is capable of doing as much operations in a second as there are particles in our observable universe. Leaving aside some of its apparently “blunt” applications like cracking all the cryptographic codes invented so far, searching and instantly finding elements from databases so big that wouldn’t fit all the servers on the internet, factorizing numbers so large that no network of present-day supercomputers could ever have the chance at succeeding in our lifetimes, imagine how this could give us the power to build all of our future, but highly advanced and unimplementable on today’s computers, artificial intelligence systems. With the help of quantum computers we could build super brains, simulate complex molecule interactions that are completely intractable on present day supercomputers, find out the secrets to unlimited resources, and maybe discover the ultimate secrets of reality. "
Quantum computers can be used to do only certain things. "In fact, they are: factoring, unstructured search, and quantum simulation". IBM's "Watson", for its part, does unstructured search in the classical sense: it searches through unstructured information such as natural language documents.
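To get a feel for what "unstructured search" buys you on a quantum machine, here's a back-of-the-envelope comparison. This is just the standard query-count arithmetic for Grover's algorithm, not a simulation of it:

```python
# Back-of-the-envelope query counts for finding one marked item among N unsorted items.
# A classical search expects about N/2 lookups; Grover's algorithm needs
# roughly (pi/4) * sqrt(N) queries.

import math

def classical_queries(n):
    return n / 2                          # expected lookups when there is no structure to exploit

def grover_queries(n):
    return (math.pi / 4) * math.sqrt(n)   # quantum query count for unstructured search

for n in (1_000_000, 1_000_000_000):
    print(f"N = {n:>13,}: classical ~{classical_queries(n):,.0f} queries, "
          f"Grover ~{grover_queries(n):,.0f} queries")
```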
"How does Watson differ from Deep Blue?
We are now at a similar juncture. This time it is about how vast quantities of digitally encoded unstructured information (e.g., natural language documents, corporate intranets, reference books, textbooks, technical reports, blogs, etc.) can be leveraged by computers to do what was once considered the exclusive domain of human intelligence: rapidly answer and rationalize open-domain natural language questions confidently, quickly and accurately."
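As a toy illustration of what "leveraging unstructured information" means in practice, here is a bare-bones keyword scorer over a handful of passages. It is nowhere near Watson's actual DeepQA pipeline; the passages and the confidence formula are invented for the example:

```python
# Bare-bones illustration of answering a question from unstructured text:
# score each passage by keyword overlap with the question and report a confidence.
# A toy only; a real system like Watson's DeepQA is vastly more sophisticated.

import re

passages = [
    "Toronto is the largest city in Canada, on the shore of Lake Ontario.",
    "Ottawa is the capital city of Canada, located on the Ottawa River.",
    "Chicago sits on the shore of Lake Michigan in the United States.",
]

def words(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def answer(question):
    q = words(question)
    best_passage = max(passages, key=lambda p: len(q & words(p)))
    confidence = len(q & words(best_passage)) / len(q)   # crude confidence estimate
    return best_passage, confidence

passage, conf = answer("What is the capital of Canada?")
print(f"Best passage (confidence {conf:.2f}): {passage}")   # picks the Ottawa passage
```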
I'm a businessman; that's all you need to know about me.