
OpenAI News and Discussions

Tags: OpenAI, AGI, weak general AI, Elon Musk, friendly AI, deep learning, DeepMind, deep reinforcement learning, AI, artificial intelligence


#41
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 22,919 posts
  • Location: New Orleans, LA

OpenAI’s text-generating system GPT-3 is now spewing out 4.5 billion words a day

One of the biggest trends in machine learning right now is text generation. AI systems learn by absorbing billions of words scraped from the internet and generate text in response to a variety of prompts. It sounds simple, but these machines can be put to a wide array of tasks — from creating fiction, to writing bad code, to letting you chat with historical figures.
The best-known AI text-generator is OpenAI’s GPT-3, which the company recently announced is now being used in more than 300 different apps, by “tens of thousands” of developers, and producing 4.5 billion words per day. That’s a lot of robot verbiage. This may be an arbitrary milestone for OpenAI to celebrate, but it’s also a useful indicator of the growing scale, impact, and commercial potential of AI text generation.
OpenAI started life as a nonprofit, but for the last few years, it has been trying to make money with GPT-3 as its first salable product. The company has an exclusivity deal with Microsoft which gives the tech giant unique access to the program’s underlying code, but any firm can apply for access to GPT-3’s general API and build services on top of it.
As OpenAI is keen to advertise, hundreds of companies are now doing exactly this. One startup named Viable is using GPT-3 to analyze customer feedback, identifying “themes, emotions, and sentiment from surveys, help desk tickets, live chat logs, reviews, and more”; Fable Studio is using the program to create dialogue for VR experiences; and Algolia is using it to improve its web search products which it, in turn, sells on to other customers.
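
For a sense of what "building on top of" GPT-3 looked like in practice, here is a minimal sketch of a feedback-analysis call of the kind Viable describes, using the openai Python package of that era; the prompt wording, model choice, and parameters are assumptions for illustration, not Viable's actual pipeline.

```python
# Hedged sketch: prompt GPT-3 to pull theme, emotion, and sentiment out of
# a piece of customer feedback via the completions API (openai<1.0 style).
import openai

openai.api_key = "YOUR_API_KEY"  # granted after applying for API access

feedback = "The new checkout flow is confusing and the coupon field is hidden."
prompt = (
    "Classify the customer feedback below.\n"
    f"Feedback: {feedback}\n"
    "Theme, emotion, and sentiment:"
)

response = openai.Completion.create(
    engine="davinci",   # base GPT-3 model exposed through the API
    prompt=prompt,
    max_tokens=40,
    temperature=0.0,    # keep the output deterministic for classification-style use
)
print(response["choices"][0]["text"].strip())
```

The other use cases the article mentions (VR dialogue, search) follow the same basic pattern: compose a prompt, request a completion, and post-process the returned text.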


And remember my friend, future events such as these will affect you in the future.


#42
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 22,919 posts
  • Location: New Orleans, LA

The Inherent Limits of GPT

A new natural language AI model launched by OpenAI, GPT-3, has been making waves in the artificial intelligence community. GPT-3 is a transformer model (simplifying greatly, a neural network approach with modifications for better performance on text) trained on one specific task: predicting the next word, given all previous words within some text. This simple goal means that GPT-3 can be trained on any text (no labeling required), and the model has certainly made use of this fact, with training conducted over 499B tokens (for context, the entirety of Wikipedia is 3B tokens – and is included in the training set, along with 429B tokens from the web and 67B from books). This massive training set is used to optimize values for a correspondingly massive set of 175B parameters, more than 10x any previous model.

The upgrade has paid off, with GPT-3 demonstrating proficiency in the areas of storytelling, poetry, coding, and more (though some areas still require work). This progress is certainly significant, and has been hyped as such. However, there are inherent limitations to the GPT approach, and these limitations are often skipped over, especially by those heralding GPT-3 and its successors as the start of human-level (and eventually superhuman) artificial general intelligence. The primary issue is one of domain; GPT-3's domain of natural language is insufficient for general intelligence in the natural world. OpenAI called out this limitation with the release of the first GPT (see below); this post aims to drive home just how significant it is.
....
GPT-3 is impressive because the domain of natural language is far wider than the previous domains we’ve conquered with AI; while it’s governed by syntactical rules, its semantics are as flexible and open-ended as the natural world it seeks to describe. GPT-3 has taken this wide domain and made sense of it, recognizing the deep patterns in how words are used together to craft stories, communications, code, and more. However, while the domain of natural language is wider than the more limited mathematical domains of previous AI efforts, it’s still far more narrow than the natural world.
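
To make the "one specific task" in the excerpt concrete, here is a toy sketch of the next-word (next-token) prediction objective in PyTorch; the tiny model, vocabulary size, and random batch are stand-ins and bear no resemblance to GPT-3's actual 175B-parameter architecture, only to its training signal.

```python
# Toy sketch of autoregressive language-model training: predict token t+1
# from tokens 0..t and minimize cross-entropy. Sizes are illustrative only.
import torch
import torch.nn as nn

vocab_size, d_model, seq_len = 50257, 256, 128  # toy dimensions

embed = nn.Embedding(vocab_size, d_model)
block = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
to_logits = nn.Linear(d_model, vocab_size)

tokens = torch.randint(0, vocab_size, (4, seq_len))   # pretend training batch
inputs, targets = tokens[:, :-1], tokens[:, 1:]        # shift by one position

# causal mask so each position can only attend to earlier positions
causal_mask = torch.triu(
    torch.full((seq_len - 1, seq_len - 1), float("-inf")), diagonal=1
)

hidden = block(embed(inputs), src_mask=causal_mask)
logits = to_logits(hidden)

# cross-entropy over the next token is the entire training objective;
# "no labeling required" because the text itself supplies the targets
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), targets.reshape(-1)
)
loss.backward()
```

Scaling that loop to 499B tokens and 175B parameters is, in essence, the recipe the excerpt describes; both the capabilities it praises and the limits it criticizes follow from this single objective.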


Essentially the point Starspawn0 and I have been making: GPT-3 can't be an AGI or even a proto-AGI as it currently exists, but the ability to model the world that is inherent in natural language is what allows it to generalize as far as it does. It can only go so far, though. In order to go further, it needs a way to model the other senses and experiences, hence the repeated use of the term "multimodal": multiple modes of qualia and communication.

 

However, I do feel the need to say these things repeatedly, because these exact same arguments are often levied by skeptics who note that GPT-3 is incapable of generalizing further than it has and thus conclude that it's a dead end, little more than the kind of algorithmic magic trick it replaced.


And remember my friend, future events such as these will affect you in the future.


#43
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 22,919 posts
  • Location: New Orleans, LA


And remember my friend, future events such as these will affect you in the future.





