
Can LLMs produce AGI?

Posted: Sat Nov 18, 2023 11:34 am
by erowind
I suppose I've become the antithesis of the general consensus here. Oh well, I see what I see, friends, and I'm curious what you all think. This question isn't "Is AGI possible?"; it's specifically about LLMs. Another method of creating AI that isn't purely LLM-based could come about in the next 20 years and completely revolutionize the AI field, just like the breakthrough with LLMs did.

EDIT: "Never" Should say "LLMs alone will never produce AGI" if a mod could edit the poll I'd appreciate it :)

Re: Can LLMs produce AGI?

Posted: Sat Nov 18, 2023 1:03 pm
by wjfox
Current LLMs are very impressive, but they're still classed as "weak" or "narrow" AI. The leap from LLMs to true AGI will be orders of magnitude greater in terms of capability. We're talking human-level competence across a vast range of tasks: not just language processing, but many real-world tasks as well.

We've arguably seen proto-AGI already, but a true AGI will have a sense of self, memories, autonomous learning, and subjective experiences. I know we keep getting hints from OpenAI and others about next year, but I think the milestone of AGI is probably further out.

Re: Can LLMs produce AGI?

Posted: Sat Nov 18, 2023 1:43 pm
by funkervogt
I doubt LLMs can directly evolve into AGIs, though they can be upgraded to the point that they accurately imitate intelligent behavior 99.9% of the time. The 0.1% of the time when advanced LLMs make mistakes will remind human users that the models don't actually have general intelligence. If you interact heavily with LLMs, those "slips of the mask" will happen several times a day. I think the next ~20 years will be "The Age of Fake AI."

LLMs will instead serve as tools that human researchers use to build other, fundamentally different machines that are real AGIs. My timeline:

1) Give LLMs another 10 years to max out their potential (algorithms optimized and all possible training data consumed).

2) Give them another 10 years to work on the problem of building an AGI.

If LLMs haven't solved the problem after that much time, then they're inherently incapable of doing it.

Re: Can LLMs produce AGI?

Posted: Sat Nov 18, 2023 2:04 pm
by funkervogt
wjfox wrote: Sat Nov 18, 2023 1:03 pm Current LLMs are very impressive, but they're still classed as "weak" or "narrow" AI. The leap from LLMs to true AGI will be orders of magnitude greater in terms of capability. We're talking human-level competence across a vast range of tasks: not just language processing, but many real-world tasks as well.

We've arguably seen proto-AGI already, but a true AGI will have a sense of self, memories, autonomous learning, and subjective experiences. I know we keep getting hints from OpenAI and others about next year, but I think the milestone of AGI is probably further out.
All true, but keep in mind that human-level and even superhuman-level narrow AIs are achievable in the near future. In fact, they already exist for a few domains. Multitudes of narrow AIs specialized for specific tasks will massively change the world.

Re: Can LLMs produce AGI?

Posted: Sat Nov 18, 2023 5:40 pm
by Cyber_Rebel
In other words, Sam is either deluded or lying imo
Sam Altman is not the one, or should I say the only one, giving hints that they're close to achieving a workable definition of AGI. Sam himself is actually the one who kept making rather vague comments on the matter, including ones that even aligned with yours in the chat, and that may have been for other reasons considering the drama happening at OpenAI. It's Ilya Sutskever who thinks otherwise, and as he's the chief person actually building the frontier models to begin with, I have more reason to trust his opinion on the matter than Sam's.


Re: Can LLMs produce AGI?

Posted: Sun Nov 19, 2023 5:58 am
by erowind
I was very wrong! There isn't a consensus here on LLMs. The community here always surprises me :)