OpenAI News & Discussions

User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: OpenAI News & Discussions

Post by Yuli Ban »

OpenAI is committed to the safe deployment of AI. Since the launch of our API, we’ve made deploying applications faster and more streamlined while adding new safety features. Our progress with safeguards makes it possible to remove the waitlist for GPT-3. Starting today, developers in supported countries can sign up and start experimenting with our API right away.

Improvements to our API over the past year include the Instruct Series models that adhere better to human instructions, specialized endpoints for more truthful question-answering, and a free content filter to help developers mitigate abuse. Our work also allows us to review applications before they go live, monitor for misuse, support developers as their product scales, and better understand the effects of this technology.

Other changes include an improved Playground, which makes it easy to prototype with our models, an example library with dozens of prompts to get developers started, and Codex, a new model that translates natural language into code.
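To give a feel for what "start experimenting with our API" looks like in practice, here's a minimal sketch of the request body a completion call sends. The field names (engine, prompt, max_tokens, temperature) match the public API of that era, but treat the helper function and defaults as illustrative, not official client code.

```python
def build_completion_request(prompt, engine="davinci", max_tokens=64, temperature=0.7):
    """Assemble the JSON body for a POST to /v1/engines/{engine}/completions.

    This only builds the payload; a real call would send it with an HTTP
    client plus an "Authorization: Bearer <API key>" header.
    """
    return {
        "engine": engine,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

request = build_completion_request("Write a haiku about transformers.")
print(request["engine"])  # davinci
```

The same payload shape works for the Instruct Series models mentioned above; you'd just swap the engine name.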
And remember my friend, future events such as these will affect you in the future
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: OpenAI News & Discussions

Post by Yuli Ban »



Way I see it, it could be one of three things:
1: GPT-4, which would undoubtedly be an astoundingly powerful language model, possibly even trained on multiple data modalities for even more accuracy
2: DALL-E 2, ten times bigger than the first, able to generate anything in high definition, and possibly even videos just from text
3: Jukebox 2, far more coherent than the first, without the "AM radio from another dimension" limitation, plus an actual understanding of rondo form so that it can generate basically any verse-chorus-verse song coherently. It might even be able to generate arbitrary audio, e.g. "birds chirping" or "dogs barking" or "explosion", with obvious text-to-speech capabilities that sound indistinguishable from an actual person speaking
And remember my friend, future events such as these will affect you in the future
User avatar
funkervogt
Posts: 1171
Joined: Mon May 17, 2021 3:03 pm

Re: OpenAI News & Discussions

Post by funkervogt »

With the recent advances in narrow AI language models, can we say that there now exists an equivalent to "GPT-3.5"?
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: OpenAI News & Discussions

Post by Yuli Ban »

funkervogt wrote: Sun Dec 12, 2021 2:26 pm With the recent advances in narrow AI language models, can we say that there now exists an equivalent to "GPT-3.5"?
By that same metric, Turing-NLG circa 2020 was GPT-2.5.
Even then, the sequence length for a lot of these transformers doesn't look much longer than GPT-3's.
Megatron's context window is still 2,048 tokens, exactly the same as GPT-3's:
https://github.com/NVIDIA/Megatron-LM

A true mid-generation upgrade ought to have a longer context window, if you ask me, regardless of the increased parameter count.
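For a sense of why a 2,048-token window is a real constraint, here's a toy budget check. Real GPT models use BPE subword tokenization, so the whitespace split below is only a crude stand-in I'm using for illustration; it usually undercounts actual tokens.

```python
def rough_token_count(text):
    # Crude proxy: real GPT models use BPE subword tokenization,
    # which generally produces more tokens than whitespace words.
    return len(text.split())

def fits_in_context(prompt, max_completion_tokens, window=2048):
    # The prompt and the requested completion share the same context
    # window, so both must fit inside it together.
    return rough_token_count(prompt) + max_completion_tokens <= window

prompt = "word " * 2000
print(fits_in_context(prompt, 100))  # False: 2000 + 100 > 2048
print(fits_in_context(prompt, 48))   # True: 2000 + 48 <= 2048
```

The key point: every token of prompt you spend is a token of output you give up, which is why a longer window would count as a genuine mid-generation upgrade.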
And remember my friend, future events such as these will affect you in the future
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: OpenAI News & Discussions

Post by Yuli Ban »

Improving the factual accuracy of language models through web browsing
We’ve fine-tuned GPT-3 to more accurately answer open-ended questions using a text-based web browser. Our prototype copies how humans research answers to questions online – it submits search queries, follows links, and scrolls up and down web pages. It is trained to cite its sources, which makes it easier to give feedback to improve factual accuracy. We’re excited about developing more truthful AI, but challenges remain, such as coping with unfamiliar types of questions.

Language models like GPT-3 are useful for many different tasks, but have a tendency to “hallucinate” information when performing tasks requiring obscure real-world knowledge. To address this, we taught GPT-3 to use a text-based web browser. The model is provided with an open-ended question and a summary of the browser state, and must issue commands such as “Search ...”, “Find in page: ...” or “Quote: …”. In this way, the model collects passages from web pages, and then uses these to compose an answer.
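The command format quoted above implies a simple dispatch loop on the environment side: read a line the model emitted, work out which action it is, and run it against the browser. The sketch below parses only the three command shapes named in the announcement; the actual grammar the system uses is surely richer, so this is just an illustrative toy.

```python
def parse_browser_command(line):
    """Split one model-issued command into (action, argument).

    Only the three command shapes quoted in the announcement are
    handled; anything else is reported as unknown.
    """
    prefixes = {
        "Search ": "search",
        "Find in page: ": "find_in_page",
        "Quote: ": "quote",
    }
    for prefix, action in prefixes.items():
        if line.startswith(prefix):
            return action, line[len(prefix):]
    return "unknown", line

print(parse_browser_command("Search GPT-3 context window"))
# ('search', 'GPT-3 context window')
print(parse_browser_command("Quote: The model collects passages."))
# ('quote', 'The model collects passages.')
```

A real loop would then execute the action, append the result to the browser-state summary, and feed that back to the model for its next command.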
This sounds suspiciously similar to my hypothesis about "cognitive agents"
Agents that respond to natural language requests to accomplish multi-step tasks. Say I need a certain type of file: I ask a cognitive agent, and it goes online, finds something that seems to fit my description, downloads it, and helps install it. The example I remember using was a mod for a video game. I want, say, to add a literal "Easter egg" into a video game, so I prompt the agent to find a mod online that's an Easter egg. It scours the internet, finds the best and safest match, downloads it (or at least prompts me to), and then tells a second cognitive agent specialized in installing mods to start the installation process. All I did was ask it to do something and click a box once or twice.

I don't see this being exactly that at this juncture, but it's an early step towards it. Such a multi-step agent that can browse websites will go a very long way in creating a conversational "friendly" web.
And remember my friend, future events such as these will affect you in the future
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: OpenAI News & Discussions

Post by Yuli Ban »

GPT-3 proved beyond doubt to be a master of language generation tasks, from writing poetry and songs to mimicking human-made essays to coding. More than a few startups have built products on top of the model, and some have found impressive success. However, setting aside GPT-3's tendency to engage in toxic and biased behaviors and generate misinformation when prompted to do so, users would probably agree that GPT-3's limitations are mainly tied to prompt engineering.
GPT-3 is a jack of all trades. It's good for a broad array of language tasks, but it isn't great at any in particular. Prompt engineering is the easiest way to bypass this issue: users can improve GPT-3's abilities through conditioning. For instance, if I want it to write a story about the moon and the stars, I could input three full examples and the first sentence of a fourth. Then, the model will unmistakably get that I want it to continue the fourth story.
This method works, but it's quite laborious. Users who aren't familiar with the model's inner workings will have a hard time making it work adequately. GPT-3's optimal performance often remains unreachable despite their efforts.
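The conditioning described above can be mechanized. Here's a sketch of how such a few-shot prompt might be assembled: three complete example stories followed by the opening sentence of a fourth, which the model is expected to continue. The separator is my own convention, not anything GPT-3 requires.

```python
def build_few_shot_prompt(examples, opening_sentence, separator="\n###\n"):
    # The complete examples condition the model on the task; the final,
    # unfinished entry is the one we want it to continue.
    parts = list(examples) + [opening_sentence]
    return separator.join(parts)

examples = [
    "The moon rose over the hills. The stars followed, one by one. All was calm.",
    "A star fell into the sea. The moon watched it sink. Morning came quietly.",
    "The stars argued all night. The moon said nothing. Dawn settled the matter.",
]
prompt = build_few_shot_prompt(examples, "The moon hid behind a cloud.")
print(prompt.count("###"))  # 3 separators between 4 entries
```

Note how much of the context window this eats: every full example is prompt overhead that recurs on every single request.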
OpenAI has now addressed this shortcoming. They've introduced a new version of the GPT family named InstructGPT. After "overwhelmingly positive feedback," they dropped the 'Instruct' descriptor and made these models the default in the API, recommending this version for all language tasks instead of the original GPT-3. (In the API, the 175B InstructGPT model is named text-davinci-001.)
This version of GPT-3 (which I'll call InstructGPT in this article for the sake of clarity) is optimized to follow instructions, instead of predicting the most probable word. This change largely removes the need to write good prompts to extract all the power from the models. It not only makes them easier to use for most people (you don't need to learn as much prompt engineering anymore) but also makes the models more reliable and functional. The quality of the completions isn't nearly as dependent on the prompt as it is for the original GPT-3 models, which prevents many mistakes that would otherwise be introduced by poorly written prompts.
And remember my friend, future events such as these will affect you in the future
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: OpenAI News & Discussions

Post by Yuli Ban »

And remember my friend, future events such as these will affect you in the future
User avatar
Ozzie guy
Posts: 486
Joined: Sun May 16, 2021 4:40 pm

Re: OpenAI News & Discussions

Post by Ozzie guy »

OpenAI releasing this video makes me think they might have something big up their sleeves. Maybe this is evidence that proto-AGI does indeed exist in a lab. It's hard to explain, but the video really does feel suspect. It's a short, high-production-value video on how they are working on AI safety and why that is important.
Edit: the video cannot be watched on Future Timeline; simply click the "Watch on YouTube" link in the video square.

User avatar
andmar74
Posts: 389
Joined: Mon May 24, 2021 9:10 am
Location: Denmark

Re: OpenAI News & Discussions

Post by andmar74 »

Set and Meet Goals wrote: Wed Feb 16, 2022 10:53 am OpenAI releasing this video makes me think they might have something big up their sleeves. Maybe this is evidence that proto-AGI does indeed exist in a lab. It's hard to explain, but the video really does feel suspect. It's a short, high-production-value video on how they are working on AI safety and why that is important.
Edit: the video cannot be watched on Future Timeline; simply click the "Watch on YouTube" link in the video square.

I don't know, the video seems pointless. Maybe it's for the average person who has no clue.
Anyway, they can't solve the "alignment problem".
Post Reply