The countdown until 2030


10 replies to this topic

#1
10 year march

    Member

  • Members
  • 11 posts

Ray Kurzweil expects human-level AI to arrive before January 1st, 2030.

I am going to count down the time until this date, and post news and updates here that I think are relevant to human-level AI, or to my preparations for it.

My current goal is to get through the next few weeks to reach a clean 10 years to go, and to post relevant things until then.

Once that clean 10-years-to-go mark has been reached, I will set a new date to work towards.

According to a website (which I think is based on a US time zone), I have 29 days to go until January 1st 2020, which is my first waiting-for-the-singularity goal.

 

https://howlongagogo.../2030/january/1
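For anyone who'd rather compute it than trust the site, a minimal Python sketch of the same countdown (the exact count depends on the date and time zone where it runs):

    from datetime import date

    # Days remaining until the first goal: January 1st, 2020.
    remaining = date(2020, 1, 1) - date.today()
    print(remaining.days, "days to go")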


  • SkyHize, starspawn0, johnnd and 1 other like this

#2
Alislaws

    Democratic Socialist Materialist

  • Members
  • 2,050 posts
  • Location: London

Well this sounds like fun!

(It does sort of overlap with a few other thread topics, though, esp. the singularity thread.)

It certainly doesn't look, at this point, like human-level AI is on target for an end-of-2030 release. But I guess the whole thing with exponentials is that you see the vast majority of the progress in the last few years, so maybe things are still on track.

I think it could depend a lot on various other technologies (BCIs, etc.) and how fast those come through.

Also, if I'm right in my random speculation that Ray K. underestimates or even ignores the gap between "technically possible with current technology" and "consumer-ready product released", then it might be 2035 or 2040 before we figure out how to get a human brain's worth of computing power to perform as a human-level AI, even if we have the needed computer by 2030.



#3
Sephiroth6633

    New Member

  • Members
  • 4 posts
  • Location: UK

Ten years is a long road. It will be interesting to see the world change with the rise of advanced technologies. I personally think China could be the first to invest in a human-level AI, before an 'AI President of China'.

 

"On 8 July 2017, the Chinese Sate Council announced plans to turn China into the world leader in Artificial Intelligence (AI) by 2030, seeking to make the industry worth 1 trillion yuan. The State Council published a three-step road map to that effect in which it outlined how it expects AI to be developed and deployed across a wide number of industries and sectors, such as in areas from the military to city planning. According to the road map, China plans to catch up to current AI world leaders' technological abilities by 2020, make major breakthroughs by 2025 and be the world leader in 2030."

~Science and Technology in China (Wikipedia)

 

I've been watching BBC's Reggie in China. China's technology has been rising unbelievably fast, perhaps three times faster than in Western countries. While watching, I felt as if I were watching a sci-fi show, not a documentary. It leaves me thinking that China could be the very first country in the world to make a major breakthrough.

 

I personally am counting on AI because it may be able to find a way to restore hearing for deaf people in a blink!


  • johnnd likes this

#4
starspawn0

    Member

  • Members
  • 1,319 posts
Just this year we got chatbots that can do a pretty good job in short conversations. I discussed it here, and also later on in that thread:

https://www.futureti...-like/?p=270632

I hate to guess what 10 more years of refinement will do for that.

Already, these so-called "language models" like GPT-2 and Megatron can do a fair bit of "improvising", as I discussed here:

https://www.futureti...-like/?p=271299

(I recommend clicking that link and reading it. It's totally insane!)

It just can't do it perfectly reliably.

Here is what I would guess is true: let's say you have a person engage in an online chat with GPT-2, where a human picks out the best of 25 responses that the program generates. The best response is probably an almost perfect thing to say in the situation more than 95% of the time. This means that all that separates us from hard-to-distinguish-from-humans chatbots is a sufficiently good "critic" module to pick the best-of-25.

I base this on several pieces of evidence I've seen -- e.g. an article in The Economist, and then also the performance of that DialoGPT chatbot, when selecting from 16 possible responses.
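In code, that scheme is tiny: sample many candidate replies, score each with the critic, keep the winner. A minimal sketch in Python, where generate_reply and critic_score are hypothetical stand-ins (not DialoGPT's or GPT-2's actual API):

    import random

    def generate_reply(prompt: str) -> str:
        # Hypothetical stand-in for sampling one reply from a language model.
        return random.choice(["Hello!", "Nice weather today.", "Tell me more."])

    def critic_score(prompt: str, reply: str) -> float:
        # Hypothetical critic module: rates how apt a reply is (higher = better).
        return random.random()

    def best_of_n(prompt: str, n: int = 25) -> str:
        # Sample n candidates and keep the one the critic likes most.
        candidates = [generate_reply(prompt) for _ in range(n)]
        return max(candidates, key=lambda r: critic_score(prompt, r))

    print(best_of_n("How was your day?"))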

This is not to say that it has no limitations. It does; but during most casual conversations -- even ones that require a little bit of logic -- you would not be able to tell what they are.

For example, let's say you were to test it out:
 
 

User: What is the first word, alphabetically, in this sentence?
 
Chatbot: "Alphabetically."
 
User: How the $%#$^@ did you do that! NO WAY someone programmed you to answer that question!



That's what it will look like. Somehow, it knew to sort the words in that sentence and find "alphabetically".
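(For what it's worth, the expected answer is easy to check mechanically -- a tiny Python check, assuming simple whitespace splitting and punctuation stripping:)

    sentence = "What is the first word, alphabetically, in this sentence?"
    words = sorted(w.strip(",?.").lower() for w in sentence.split())
    print(words[0])  # -> alphabetically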

To find its limits, you are going to have to try harder:
 
 

User: Sort the words in this sentence, and then write down the fourth word in that list.

Chatbot: "in" (appears twice)

User: Again, how the #%@%& did you do that!!!! Is somebody pulling my leg? Is there a human on the other end?



Still not good enough. Try even harder:
 
 

User:  Sort the words in this sentence, write the list in reverse order, and then write down the nth word, where n is 2 times 3.

Chatbot: "And."

User: Ah, hah! Gotcha! You're just a dumb machine!
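(Both of the harder puzzles can be checked the same way. Under the same simple tokenization, and Python's default string ordering, which sorts digits before letters, the "in" answer holds up -- but the chatbot's "And." does not:)

    # Puzzle 2: the fourth word of the sorted list.
    s2 = "Sort the words in this sentence, and then write down the fourth word in that list."
    w2 = sorted(w.strip(",.").lower() for w in s2.split())
    print(w2[3], w2.count(w2[3]))  # -> in 2  ("in", appearing twice)

    # Puzzle 3: sort, reverse the list, take the nth word, n = 2 * 3 = 6.
    s3 = ("Sort the words in this sentence, write the list in reverse order, "
          "and then write down the nth word, where n is 2 times 3.")
    w3 = sorted((w.strip(",.").lower() for w in s3.split()), reverse=True)
    print(w3[6 - 1])  # -> times, so "And." was wrong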




But that's if they only do limited depth of reasoning, like language models currently do. In 10 years, who knows how far they will come?

Perhaps you think you can prove its non-humanness by asking it questions that require commonsense reasoning? That probably won't work -- it will have so much "world knowledge", represented by statistical relations among word patterns, that it will get your questions right as often as an above-average-intelligence human.
 

User:  Suppose you set a dictionary, an apple, a bible, and a copy of War and Peace on a table.  How many books are on the table?
 
Chatbot: Three.  What a stupid question!


Maybe you can ask it philosophical questions to catch it out? Again, that won't work -- it will answer those perfectly. See the DialoGPT examples in that thread above; the chatbot can handle these types of questions, too.

Maybe ask it to write (bad) poetry, like a limerick? Again, it will spin out as much poetry as you are willing to read -- and it will be as good as an above-average human poet.

To tell that the thing isn't human in 2029, you're going to have to pay very close attention and ask very subtle questions. Kurzweil may very well be proved right by 2029, that machines can pass an official Turing Test. 10 years is a long time, and I could see enough of the gaps being filled by then to prove him right.

....

I recently took a test to see how well I can spot GPT-2-generated headlines -- I forget the website it was on; it might have been vox.com. I got pretty much all of them right, and the computer said that I scored above the 99th percentile among people who took it. I see that as confirmation that I am very good at discriminating the subtleties of machine- and human-generated text.

And here is what this ability to "parse" machine-generated text is telling me: it's getting better at a frightening pace; and people who think it won't pose a danger (e.g. acting as an online troll army) are seriously deluded. You better pray there are adequate safeguards in place by 2029! -- maybe even 2025!
  • johnnd likes this

#5
funkervogt

    Member

  • Members
  • 839 posts

 

 

User:  Sort the words in this sentence, write the list in reverse order, and then write down the nth word, where n is 2 times 3.

Chatbot: "And."

User: Ah, hah! Gotcha! You're just a dumb machine!

That doesn't actually prove it's a machine. Science academics who think about the Turing Test a lot are...a bit smarter than average...and probably don't realize that the User's initial instructions are too complicated for most humans to understand and execute correctly. I'm not sure I understand them myself. So getting a wrong answer, or having the Chatbot respond with "I don't know", is not by itself proof that the Chatbot is a machine.

 

Ironically, if Chatbot responded correctly to questions that were too detailed or complex, then it would look too smart for a human and would fail the Turing Test. 

 

BTW, the points that I'm making here will probably be some of the arguments people (including us on this forum) will put forth in a few years when examining the transcripts of the first Turing Tests passed by a machine.

 

"Ah, but you can tell it is actually a machine if you carefully look at this answer and consider this!..."

 

"Ah, but no! That is the natural response you would get from many people! Such as..." 

 

And on and on it will go. 


  • Yuli Ban, SkyHize and starspawn0 like this

#6
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 20,771 posts
  • Location: New Orleans, LA

^ Exactly something I was going to mention. 

 

If I were tasked with hosting a Turing Test and I asked my client a bunch of innocuous questions that they all pass believably only to throw a curveball with "what is 7,686,369,774,870 multiplied by 2,465,099,745,779" (with no calculator use) and they give the correct answer in under 30 seconds, I'd immediately assume that my subject is either Shakuntala Devi or a computer— and I don't think the prospects are too likely for the former. 
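(Those two numbers, incidentally, appear to be the pair from Shakuntala Devi's 1980 record attempt, answered in about 28 seconds. Any interpreter does it in microseconds:)

    # The 13-digit multiplication from above, instant for a computer.
    print(7_686_369_774_870 * 2_465_099_745_779)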

Humans occupy a stratum of competence. Anything that occupies almost the same stratum but goes too low or too high in some particular area will be seen as nonhuman; hence things like the Uncanny Valley. That's something I imagine the Turing Test and similar tests have to compensate for: not only will a machine have limits, but humans have limits too, ones an insufficiently advanced machine might not realize we have.


  • starspawn0 and funkervogt like this

And remember my friend, future events such as these will affect you in the future.


#7
starspawn0

    Member

  • Members
  • 1,319 posts
Both of you could be right. However, I can imagine that one of these chatbots will learn that the appropriate thing to write when the questioning gets too tough is "I don't know." or maybe "Wow! That's such a hard question!" And it won't be programmed to do that -- it will learn it from the data, too. (It's really like a spirit protruding from another plane of existence -- the skills these language models acquire are just uncanny; you can't imagine it being possible to acquire them in the way that it does!) It may well be that 90% of humanity will not be able to tell these chatbots apart from humans by 2030. That is a very dangerous thing for our fragile world.
  • Yuli Ban and johnnd like this

#8
10 year march

    Member

  • Members
  • 11 posts
I might reply to some of the posts here later by editing this post. I am ultra sleepy, so I don't think I can do it coherently now.

I did want to say I looked at your links, starspawn0; they were very interesting and promising.


Also, there are only just over 318 million seconds to go. This made my day; I thought we were billions of seconds away.

When I reach my goal of getting through to Jan 1st 2020, I will set goals of shaving off a few million seconds, in a similar way to an obese person having a goal of losing a few pounds.

In fact, if you treat a million seconds as a kilogram, and divide the seconds in a day by 1,000 as well, you will notice the seconds shaved off each day come to around the grams of weight someone loses each day at a moderate weight-loss pace.

In a way, we can be seen as 318 kg overweight and losing 86.4 grams a day.

This may not make sense as I'm tired; I might try and edit it later.
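The arithmetic does check out, for what it's worth. A quick Python sketch of the analogy, using the round 318-million-second figure from above:

    SECONDS_PER_DAY = 24 * 60 * 60   # 86,400 seconds pass each day
    SECONDS_PER_KG = 1_000_000       # the post's scale: one million seconds = 1 "kg"

    remaining = 318_000_000          # roughly the seconds left until 2030
    kg_left = remaining / SECONDS_PER_KG                    # ~318 "kg" overweight
    g_per_day = SECONDS_PER_DAY / (SECONDS_PER_KG / 1000)   # 86.4 "g" shed daily

    print(f"{kg_left:.0f} kg to lose, at {g_per_day} g per day")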
  • johnnd likes this

#9
johnnd

    Member

  • Members
  • 50 posts

Happy to join this ride on the ground floor; it's going to be wild!



#10
Miky617

    Member

  • Members
  • 33 posts

You've got a lot of dedication to pledge to follow this forum and give updates for 10 years. I've been following FutureTimeline for about 3 or 4 years now, and I'm wondering if I'll still be paying visits here in 10 years, which prompts me to ask: what do you guys think you'll be doing when 2030 rolls around? How will your lives be different or the same, and how do you think they'll be affected by whatever developments we foresee happening between now and then?



#11
10 year march

    Member

  • Members
  • 11 posts
Clocked off work, and I now have 317,832,922 seconds until January 1st 2030, New York time.
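(That figure is easy to reproduce. A sketch using Python's zoneinfo module, available in Python 3.9+, assuming the America/New_York time zone:)

    from datetime import datetime
    from zoneinfo import ZoneInfo

    # Seconds remaining until midnight, January 1st 2030, New York time.
    target = datetime(2030, 1, 1, tzinfo=ZoneInfo("America/New_York"))
    now = datetime.now(ZoneInfo("America/New_York"))
    print(int((target - now).total_seconds()), "seconds to go")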

I kinda regret making my first goal waiting for USA New Year's Day 2020; shaving off seconds feels more fun.

Today I watched Ray Kurzweil at CES 2019 (might not be CES, but a word like it) and Ray Kurzweil talking to Neil deGrasse Tyson (on phone, so I can't Google the right spelling) to get myself more pumped for human-level AI.

I must say I have doubts about 2029 and the prediction being met perfectly, but at minimum AI is going to be so much better.



