
Re: OpenAI News & Discussions

Posted: Tue Mar 14, 2023 7:34 pm
by wjfox
Image

Re: OpenAI News & Discussions

Posted: Tue Mar 14, 2023 8:42 pm
by spryfusion

Re: OpenAI News & Discussions

Posted: Wed Mar 15, 2023 5:32 am
by ººº
wjfox wrote: Tue Mar 14, 2023 7:34 pm Image
Damn.

You asked this? If so, then I will seriously consider joining.

Re: OpenAI News & Discussions

Posted: Wed Mar 15, 2023 7:40 am
by wjfox

Re: OpenAI News & Discussions

Posted: Wed Mar 15, 2023 8:32 am
by wjfox

Re: OpenAI News & Discussions

Posted: Wed Mar 15, 2023 8:48 am
by wjfox
ººº wrote: Wed Mar 15, 2023 5:32 am
You asked this? If so, then I will seriously consider joining.
Not my image - somebody else posted it.

Re: OpenAI News & Discussions

Posted: Wed Mar 15, 2023 2:28 pm
by Miky617
I find it a little surprising that it scored relatively poorly on the AP English Literature exam as well as the AP English Language exam, given that language and composition are its strong suits. Maybe for the literature exam it didn't have many of the test's books represented in its training data, but I find that hard to believe since many of them are well-known classics. It's been a while since I took the AP Lit exam, but from what I remember it focused heavily on interpreting literature passages abstractly and drawing analogies, inferring author intent, tone, etc. Seems like there's still room to grow.

Re: OpenAI News & Discussions

Posted: Sun Mar 19, 2023 12:03 am
by weatheriscool
GPT-4 Has the Memory of a Goldfish (The Atlantic)
https://www.theatlantic.com/technology/ ... ow/673426/
Archive page at https://archive.ph/fsh6b

By this point, the many defects of AI-based language models have been analyzed to death—their incorrigible dishonesty, their capacity for bias and bigotry, their lack of common sense. GPT-4, the newest and most advanced such model yet, is already being subjected to the same scrutiny, and it still seems to misfire in pretty much all the ways earlier models did. But large language models have another shortcoming that has so far gotten relatively little attention: their shoddy recall. These multibillion-dollar programs, which require several city blocks’ worth of energy to run, may now be able to code websites, plan vacations, and draft company-wide emails in the style of William Faulkner. But they have the memory of a goldfish.

-snip-

The trouble is that ChatGPT’s memory—and the memory of large language models more generally—is terrible. Each time a model generates a response, it can take into account only a limited amount of text, known as the model’s context window. ChatGPT has a context window of roughly 4,000 words—long enough that the average person messing around with it might never notice but short enough to render all sorts of complex tasks impossible. For instance, it wouldn’t be able to summarize a book, review a major coding project, or search your Google Drive. (Technically, context windows are measured not in words but in tokens, a distinction that becomes more important when you’re dealing with both visual and linguistic inputs.)

-snip-

GPT-4 still can’t retain information from one session to the next. Engineers could make the context window two times or three times or 100 times bigger, and this would still be the case: Each time you started a new conversation with GPT-4, you’d be starting from scratch. When booted up, it is born anew. (Doesn’t sound like a very good therapist.)

But even without solving this deeper problem of long-term memory, just lengthening the context window is no easy thing. As the engineers extend it, Millière told me, the computation power required to run the language model—and thus its cost of operation—increases exponentially. A machine’s total memory capacity is also a constraint, according to Alex Dimakis, a computer scientist at the University of Texas at Austin and a co-director of the Institute for Foundations of Machine Learning. No single computer that exists today, he told me, could support, say, a million-word context window.

-snip-
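
To make the context-window point concrete (a rough sketch of my own, not from the article): the window is measured in tokens rather than words, and since the model keeps no state between requests, the client has to resend the conversation each time and trim whatever no longer fits. The sketch below uses the tiktoken tokenizer; the 8,192-token budget (GPT-4's launch-era window) and the oldest-first trimming policy are illustrative assumptions, not OpenAI's actual serving logic.

```python
# Rough sketch: why a fixed context window forces clients to drop older turns.
# Assumptions for illustration: an 8,192-token budget and naive oldest-first trimming.
import tiktoken

CONTEXT_WINDOW = 8_192        # assumed total token budget for the model
RESERVED_FOR_REPLY = 1_000    # leave room for the model's answer

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by GPT-4-era models

def count_tokens(text: str) -> int:
    """Token count, which is what the window is measured in (not words)."""
    return len(enc.encode(text))

def trim_history(messages: list[dict]) -> list[dict]:
    """Drop the oldest turns until the remaining history fits the budget."""
    budget = CONTEXT_WINDOW - RESERVED_FOR_REPLY
    kept: list[dict] = []
    used = 0
    # Walk backwards so the most recent turns survive.
    for msg in reversed(messages):
        cost = count_tokens(msg["content"])
        if used + cost > budget:
            break  # everything older than this point is forgotten
        kept.append(msg)
        used += cost
    return list(reversed(kept))

# Because the model keeps no state between requests, the *client* resends
# the (trimmed) history on every call -- anything trimmed out is gone for good.
history = [
    {"role": "user", "content": "Chapter 1 of my novel: ..."},
    {"role": "assistant", "content": "Noted."},
    {"role": "user", "content": "Now summarise the whole book so far."},
]
print(count_tokens(history[0]["content"]), "tokens in the first message")
print(len(trim_history(history)), "messages actually fit in the window")
```

Whatever falls outside that budget is simply invisible to the model, which is the "goldfish memory" the article is describing.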

Re: OpenAI News & Discussions

Posted: Mon Mar 20, 2023 3:19 pm
by lechwall
weatheriscool wrote: Sun Mar 19, 2023 12:03 am GPT-4 Has the Memory of a Goldfish (The Atlantic)
https://www.theatlantic.com/technology/ ... ow/673426/
Archive page at https://archive.ph/fsh6b

-snip-

Good article. While LLMs can be a useful tool, just blindly scaling them is not a pathway to AGI, and we will eventually hit a brick wall with them. They can be part of the solution in getting to AGI, but they are not the full solution themselves.

Re: OpenAI News & Discussions

Posted: Tue Mar 21, 2023 9:56 am
by wjfox