
OpenAI News and Discussions

Tags: OpenAI, AGI, weak general AI, Elon Musk, friendly AI, deep learning, DeepMind, deep reinforcement learning, AI, artificial intelligence

31 replies to this topic

#21
Alislaws

    Democratic Socialist Materialist

  • Members
  • 1,970 posts
  • Location: London

The original stated goal of OpenAI was to ensure good/safe AGI is used to benefit mankind. And now they are slapping a great big profit motive over the top of that.

 

As we have seen time and time again, for-profit businesses chase profit to the exclusion of all else. The more profitable the business, the more rich shareholders will have invested in it, and the greater the pressure on the management team to provide maximal ROI.

 

"No this AGI we're working on is too poorly understood, we need to halt development for 3 years while we test some things" is just not acceptable in a for-profit company.

 

They have every right to try to do this, of course, but it's certainly not what they said they were going to do, and someone else will need to step up and try to do the job they originally claimed they would do, and hopefully make sure ASIs don't murder us for paperclip parts.


  • Yuli Ban and Enda Kurina like this

#22
starspawn0

    Member

  • Members
  • 1,067 posts

The original stated goal of OpenAI was to ensure good/safe AGI is used to benefit mankind.


Yes, but the messaging in the original pitch for OpenAI didn't make it sound like it was close to being a possibility. One thing that's new here is that they seem to be suggesting they think it could be near. Actually, Sutskever has been saying that for the past year or two (OpenAI is at least 3 years old). He doesn't commit to a timeline -- if he did, he would get attacked on social media for having drunk his own Kool-Aid.

#23
Alislaws

    Democratic Socialist Materialist

  • Members
  • 1,970 posts
  • Location: London

Yeah definitely!

 

I don't think the "you're making it sound like AGI might happen sooner than I think it will" criticism is valid.

 

If someone believes this is premature, they can just not invest. And if someone believes AGI is imminent based on this announcement alone, with no research of their own? A fool and their money are soon parted; if it wasn't this, they'd be sticking the money in some sensationalist Kickstarter.

 

EDIT: Also, on further reading, it does sound like they have taken steps to stop the for-profit stakeholders from overwhelming everything else, so I'll be watching with great interest to see if they are successful!

 

If they are, it could be a model for other tech businesses that are likely to cause enormous disruption. And also I won't get made into paperclips, so a double win!



#24
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 20,577 posts
  • Location: New Orleans, LA

GPT-2 As Step Toward General Intelligence

A very careless plagiarist takes someone else’s work and copies it verbatim: “The mitochondria is the powerhouse of the cell”. A more careful plagiarist takes the work and changes a few words around: “The mitochondria is the energy dynamo of the cell”. A plagiarist who is more careful still changes the entire sentence structure: “In cells, mitochondria are the energy dynamos”. The most careful plagiarists change everything except the underlying concept, which they grasp at so deep a level that they can put it in whatever words they want – at which point it is no longer called plagiarism.
GPT-2 writes fantasy battle scenes by reading a million human-written fantasy battle scenes, distilling them down to the concept of a fantasy battle scene, and then building it back up from there. I think this is how your mom (and everyone else) does it too. GPT-2 is worse at this, because it’s not as powerful as your mom’s brain. But I don’t think it’s doing a different thing. We’re all blending experience into a slurry; the difference is how finely we blend it.
“But don’t humans also have genuinely original ideas?” Come on, read a fantasy book. It’s either a Tolkien clone, or it’s A Song Of Ice And Fire. Tolkien was a professor of Anglo-Saxon language and culture; no secret where he got his inspiration. A Song Of Ice And Fire is just War Of The Roses with dragons. Lannister and Stark are just Lancaster and York, the map of Westeros is just Britain (minus Scotland) with an upside down-Ireland stuck to the bottom of it – wake up, sheeple! Dullards blend Tolkien into a slurry and shape it into another Tolkien-clone. Tolkien-level artistic geniuses blend human experience, history, and the artistic corpus into a slurry and form it into an entirely new genre. Again, the difference is how finely you blend and what spices you add to the slurry.
“But don’t scientists have genuinely original ideas?” Scientists are just finding patterns in reality nobody has ever seen before. You say “just a pattern-matcher”, I say “fine, but you need to recognize patterns in order to copy them, so it’s necessarily a pattern-recognizer too”. And Einstein was just a very good pattern-recognizer.
“But don’t humans have some kind of deep understanding that pattern-recognition AIs don’t?”
Here’s a fun question: the human brain is undoubtedly the most powerful computer in the known universe. In order to do something as simple as scratch an itch it needs to solve exquisitely complex calculus problems that would give the average supercomputer a run for its money. So how come I have trouble multiplying two-digit numbers in my head?
The brain isn’t directly doing math, it’s creating a model that includes math and somehow doing the math in the model. This is hilariously perverse. It’s like every time you want to add 3 + 3, you have to create an entire imaginary world with its own continents and ecology, evolve sentient life, shepherd the sentient life into a civilization with its own mathematical tradition, and get one of its scholars to add 3 + 3 for you. That we do this at all is ridiculous. But I think GPT-2 can do it too.
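The article's "blending experience into a slurry" point can be made concrete with a toy character-level n-gram sampler. This is a vastly cruder pattern-matcher than GPT-2, and not how GPT-2 actually works internally (GPT-2 is a neural network, not a lookup table), but it shows the same recombination idea: every character it emits continues a context it saw in training, so its output is, by construction, a remix of its "experience".

```python
import random
from collections import defaultdict

def train_ngram(text, n):
    """Count which character follows each n-character context in the text."""
    model = defaultdict(list)
    for i in range(len(text) - n):
        model[text[i:i+n]].append(text[i+n])
    return model

def generate(model, n, seed, length, rng):
    """Repeatedly sample a continuation for the current context from the
    continuations seen in training -- pure pattern recall, finely blended."""
    out = seed
    for _ in range(length):
        followers = model.get(out[-n:])
        if not followers:
            break  # context never seen in training: nothing to blend
        out += rng.choice(followers)
    return out

corpus = "the mitochondria is the powerhouse of the cell. " * 3
model = train_ngram(corpus, n=4)
sample = generate(model, 4, "the ", 30, random.Random(0))
```

Every 5-character window of `sample` appears somewhere in `corpus`: the model can only recombine patterns it has seen, which is the "careless plagiarist" end of the spectrum the article describes.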


  • Alislaws likes this

And remember my friend, future events such as these will affect you in the future.


#25
Alislaws

    Democratic Socialist Materialist

  • Members
  • 1,970 posts
  • Location: London

 

GPT-2 As Step Toward General Intelligence

A very careless plagiarist takes someone else’s work and copies it verbatim: “The mitochondria is the powerhouse of the cell”. A more careful plagiarist takes the work and changes a few...

 

I really like this article; it appeals to my love of futurism, my love of childish insults, and my stubborn conviction that there is no magic or conceptually revolutionary stuff going on in the brain.


  • Casey and Yuli Ban like this

#26
starspawn0

    Member

  • Members
  • 1,067 posts
https://mobile.twitt...166996917870592

OpenAI Five makes history by winning a best-of-three versus @OGesports, the Dota 2 world champions!!!! Huge congrats to the team and to OG for an extremely well-played match.


  • Yuli Ban likes this

#27
starspawn0

    Member

  • Members
  • 1,067 posts
Why the world’s leading AI charity decided to take billions from investors

https://www.vox.com/...-ilya-sutskever
 

And the thing is — the naysaying prediction will be right most of the time. But the question is what happens when that prediction is false.


So true. If you want to be right most of the time without thinking, just say "no" when asked by people about whether some impressive-sounding tech mentioned in the press will be a reality in the next couple years.

#28
Sciencerocks

    Member

  • Banned
  • 13,326 posts

Why the world’s leading AI charity decided to take billions from investors

https://www.vox.com/...-ilya-sutskever
 

And the thing is — the naysaying prediction will be right most of the time. But the question is what happens when that prediction is false.


So true. If you want to be right most of the time without thinking, just say "no" when asked by people about whether some impressive-sounding tech mentioned in the press will be a reality in the next couple years.
Friend, that's how I feel about most tech and science advances too, as most of them turn out to be dead ends. I believe the rate of advancement has been slowing little by little over the last 50 years, depending on the field.

 

Even computers are running into the end of Moore's law.


  • starspawn0 likes this

#29
starspawn0

    Member

  • Members
  • 1,067 posts
There are whole fields that are slowing down, but there are some narrow ones that are speeding up. Progress in those narrow fields will be enough to build exciting futures.

Fundamental physics, for instance, has slowed considerably compared to the 1970s and earlier. Solid-state physics and other parts of physics, though, have probably seen a lot of advances.

....

The future may not provide the things people expect (e.g. faster than light travel), but it will still be fun (e.g. AI and BCIs).
  • Sciencerocks likes this

#30
starspawn0

    Member

  • Members
  • 1,067 posts
This is amazing:
 
https://mobile.twitt...719459977584641
 

Releasing the Sparse Transformer, a network which sets records at predicting what comes next in a sequence — whether text, images, or sound. Improvements to neural 'attention' let it extract patterns from sequences 30x longer than possible previously:


https://openai.com/b...se-transformer/
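For a sense of what the improved "attention" means here: below is an illustrative sketch (the function and parameter names are mine, not OpenAI's actual code) of a strided sparse attention mask of the kind the Sparse Transformer paper describes, where each position attends to a recent local window plus a strided lattice of earlier positions, instead of to every earlier position. With a stride of roughly the square root of the sequence length, connections per layer drop from O(n²) to roughly O(n·√n), which is what makes much longer sequences tractable.

```python
import numpy as np

def strided_sparse_mask(n, stride):
    """Causal attention mask in which position i may attend to j <= i only if
    j is within the last `stride` positions (local window) or i - j is a
    multiple of `stride` (strided lattice reaching back through the sequence)."""
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1):  # causal: attend only to the past (j <= i)
            local = (i - j) < stride
            strided = (i - j) % stride == 0
            mask[i, j] = local or strided
    return mask

n, stride = 64, 8  # stride ~ sqrt(n)
mask = strided_sparse_mask(n, stride)
dense_connections = n * (n + 1) // 2   # full causal attention: 2,080
sparse_connections = int(mask.sum())   # ~ n*stride + n**2/(2*stride): 708
```

At n = 64 the sparse mask keeps 708 of the 2,080 dense causal connections, and the gap widens quadratically as sequences grow, since the dense count scales with n² while the sparse count scales with n·√n.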

And now, I shall take a break for a month. Be back in late May.
  • Yuli Ban likes this

#31
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 20,577 posts
  • Location: New Orleans, LA

Teaching and grading for finals, I presume? No need to answer that; have a good break.


And remember my friend, future events such as these will affect you in the future.


#32
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 20,577 posts
  • Location: New Orleans, LA

OpenAI on Twitter: "GPT-2 6-month follow-up: we're releasing the 774M parameter model, an open-source legal doc organizations can use to form model-sharing partnerships, and a technical report about our experience coordinating to form new publication norms: "


And remember my friend, future events such as these will affect you in the future.





