OpenAI News & Discussions

Post by wjfox »

[Image]
Damn.

Post by ººº »

wjfox wrote: Tue Mar 14, 2023 7:34 pm
[Image]
Damn.

You asked this? If so, then I will seriously consider joining.

Post by wjfox »

ººº wrote: Wed Mar 15, 2023 5:32 am
You asked this? If so, then I will seriously consider joining.
Not my image - somebody else posted it.

Post by Miky617 »

I find it a little surprising that it scored relatively poorly on the AP English Literature exam as well as the AP English Language exam, given that language and composition are its strong suits. Maybe for the literature exam it didn't have many of the test's books represented in its training data, but I find that hard to believe since many of them are well-known classics. It's been a while since I took the AP Lit exam, but from what I remember it focused heavily on interpreting literature passages abstractly, drawing analogies, and inferring author intent, tone, etc. Seems like there's still room to grow.

Post by weatheriscool »

GPT-4 Has the Memory of a Goldfish (The Atlantic)
https://www.theatlantic.com/technology/ ... ow/673426/
Archive page at https://archive.ph/fsh6b

By this point, the many defects of AI-based language models have been analyzed to death—their incorrigible dishonesty, their capacity for bias and bigotry, their lack of common sense. GPT-4, the newest and most advanced such model yet, is already being subjected to the same scrutiny, and it still seems to misfire in pretty much all the ways earlier models did. But large language models have another shortcoming that has so far gotten relatively little attention: their shoddy recall. These multibillion-dollar programs, which require several city blocks’ worth of energy to run, may now be able to code websites, plan vacations, and draft company-wide emails in the style of William Faulkner. But they have the memory of a goldfish.

-snip-

The trouble is that ChatGPT’s memory—and the memory of large language models more generally—is terrible. Each time a model generates a response, it can take into account only a limited amount of text, known as the model’s context window. ChatGPT has a context window of roughly 4,000 words—long enough that the average person messing around with it might never notice but short enough to render all sorts of complex tasks impossible. For instance, it wouldn’t be able to summarize a book, review a major coding project, or search your Google Drive. (Technically, context windows are measured not in words but in tokens, a distinction that becomes more important when you’re dealing with both visual and linguistic inputs.)

-snip-

GPT-4 still can’t retain information from one session to the next. Engineers could make the context window two times or three times or 100 times bigger, and this would still be the case: Each time you started a new conversation with GPT-4, you’d be starting from scratch. When booted up, it is born anew. (Doesn’t sound like a very good therapist.)

But even without solving this deeper problem of long-term memory, just lengthening the context window is no easy thing. As the engineers extend it, Millière told me, the computation power required to run the language model—and thus its cost of operation—increases exponentially. A machine’s total memory capacity is also a constraint, according to Alex Dimakis, a computer scientist at the University of Texas at Austin and a co-director of the Institute for Foundations of Machine Learning. No single computer that exists today, he told me, could support, say, a million-word context window.

-snip-
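
To make the context-window point concrete, here's a rough sketch of why a chat client has to resend the whole conversation on every turn and trim it once it stops fitting. This is purely an illustration: it assumes the open-source tiktoken tokenizer, and the 4,000-token budget is a stand-in for the article's "roughly 4,000 words" figure, not the model's actual limit.

[code]
# Rough illustration only: a model with a fixed context window keeps no
# state between requests, so the client resends the conversation each turn
# and drops the oldest messages once the token budget is exceeded.
# Assumes the open-source "tiktoken" tokenizer; the 4,000-token budget is
# a stand-in for the article's "roughly 4,000 words", not an exact limit.
import tiktoken

CONTEXT_BUDGET = 4000  # illustrative per-request token budget

enc = tiktoken.get_encoding("cl100k_base")

def count_tokens(text: str) -> int:
    """Number of tokens the tokenizer produces for this text."""
    return len(enc.encode(text))

def trim_history(messages: list[str], budget: int = CONTEXT_BUDGET) -> list[str]:
    """Keep only the most recent messages that still fit in the window."""
    kept, total = [], 0
    for msg in reversed(messages):      # walk from newest to oldest
        cost = count_tokens(msg)
        if total + cost > budget:
            break                       # everything older falls out of "memory"
        kept.append(msg)
        total += cost
    return list(reversed(kept))

# A long-running conversation quietly loses its earliest turns:
history = [f"Turn {i}: " + "some discussion text " * 40 for i in range(200)]
visible = trim_history(history)
print(f"{len(visible)} of {len(history)} turns still fit in the window")
[/code]

Anything trimmed out of "visible" simply doesn't exist for the model on that turn, and starting a new conversation empties the list entirely, which is the "born anew" behaviour the article describes.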

Post by lechwall »

weatheriscool wrote: Sun Mar 19, 2023 12:03 am GPT-4 Has the Memory of a Goldfish (The Atlantic)
https://www.theatlantic.com/technology/ ... ow/673426/
-snip-

Good article. While LLMs can be a useful tool, just blindly scaling them is not a pathway to AGI, and we will eventually hit a brick wall with them. They can be part of the solution for getting to AGI, but they are not the full solution themselves.