Artificial General Intelligence (AGI) News and Discussions

Re: Proto-AGI/First Generation AGI News and Discussions

Post by Grace2Paulson »

Yuli Ban wrote: Wed Apr 27, 2022 12:19 am
Thanks for the interesting article

Post by agi »

deleted
Last edited by agi on Tue Apr 04, 2023 12:34 pm, edited 1 time in total.

Post by Yuli Ban »

Artificial general intelligence: Are we close, and does it even make sense to try?
But Legg and Goertzel stayed in touch. When Goertzel was putting together a book of essays about superhuman AI a few years later, it was Legg who came up with the title. “I was talking to Ben and I was like, ‘Well, if it’s about the generality that AI systems don’t yet have, we should just call it Artificial General Intelligence,’” says Legg, who is now DeepMind’s chief scientist. “And AGI kind of has a ring to it as an acronym.”

The term stuck. Goertzel’s book and the annual AGI Conference that he launched in 2008 have made AGI a common buzzword for human-like or superhuman AI. But it has also become a major bugbear. “I don’t like the term AGI,” says Jerome Pesenti, head of AI at Facebook. “I don’t know what it means.”

He’s not alone. Part of the problem is that AGI is a catchall for the hopes and fears surrounding an entire technology. Contrary to popular belief, it’s not really about machine consciousness or thinking robots (though many AGI folk dream about that too). But it is about thinking big. Many of the challenges we face today, from climate change to failing democracies to public health crises, are vastly complex. If we had machines that could think like us or better—more quickly and without tiring—then maybe we’d stand a better chance of solving these problems. As the computer scientist I.J. Good put it in 1965: “the first ultraintelligent machine is the last invention that man need ever make.”


“Talking about AGI in the early 2000s put you on the lunatic fringe,” says Legg. “Even when we started DeepMind in 2010, we got an astonishing amount of eye-rolling at conferences.” But things are changing. “Some people are uncomfortable with it, but it’s coming in from the cold,” he says.

Post by Yuli Ban »

agi wrote: Mon May 09, 2022 1:02 pm
It's entirely possible that scale is all we ever needed. Hence the success of deep learning.
My opinion is that there's still more to it than JUST scale; some clever engineering is necessary to make sure said scaling is used efficiently. For example, as far as we know, Homo sapiens are more intelligent than Neanderthals were, despite having smaller brains on average, simply because of the way our brains are structured (though we may never know 100% for sure until we can perfectly simulate a Neanderthal).
In any case, it's clear that there are multiple AGI projects underway as we speak. I expect either OpenAI or Baidu will be the first to get there, but DeepMind's will be the best and most intelligent when it arrives.

Post by Yuli Ban »

On that note, there's still the question of why the Hell it took us so long to realize scale was what we needed. AI as a field has been around for roughly 70 years. Why now? Why so much faffing about with symbolic logic, fuzzy logic, expert systems, and the other titans of GOFAI, plus decades of small-scale connectionism?

Because we couldn't scale models in the 1960s even if we wanted to. It’s part of that "foundational futurism" tripe I love talking about, a good showcase that I might actually know what I'm talking about on occasion.

Modern models require a sick amount of data to train, as well as an enormous amount of compute. We simply didn't have that even ten years ago, let alone further back. We needed massive infrastructural development, physical and digital. We absolutely needed petascale and exascale supercomputers, terascale personal computing, the Cloud, widespread smartphone proliferation, ultra-large data storage, YouTube and other video streaming services, Wikipedia, open-source access to books, peer-to-peer communications, THE INTERNET ITSELF— just an absurd number of things coming together.

People in the 1960s and '70s didn't have any of it. Quite frankly, I'm truly amazed those geniuses managed to develop AI as good as it was for the time. Imagine trying to create GPT-3 when even supercomputers ran at only a few megaflops, could store only a few megabytes, and the total extent of the Internet was four nodes at a handful of universities in the American West.

This is also a good reason why the late 2000s and 2010s were so incredibly frustrating as an AI-obsessed futurist— because it was clear that we were ALMOST but NOT QUITE there. And that was with the doubly thick fog of not even knowing that scale was all you needed, so AGI could've been anywhere from ten to hundreds of years away as far as you knew. At least now it's coming into focus that we're almost certainly no more than two to five years away from at least proto-AGI.

Post by Tadasuke »

Yes, I now agree that proto-AGI is coming in the next few years. But the widely available AI will probably be far from the best possible AI. I wonder what petascale analog processors in smartphones/laptops will change. The Turing Test (the Kurzweil-Kapor version of it) may even be passed by 2027 or 2028. I would say there's over a 50% chance of it being passed before this decade ends.

Post by Yuli Ban »

Starspawn0:
I don't think it's drifted too much. We've been commenting on the latest sci-tech developments on this forum for years... with the occasional big futurist projection.

One thing I've learned over the years is just how big the gap is between what is possible and what big companies will actually produce to sell to consumers. Academia can't do much to close this gap, either, since competing with the big companies requires hundreds of millions of dollars in funding to scale up AI models. E.g., something like DeepMind's Flamingo, but with some improvements to make it more accurate, is probably possible within the next year or two as a consumer product, provided people are willing to pay for the compute (a monthly subscription fee). But I doubt we'll see that consumer product so soon -- the probable outcome is that we'll have to wait until the end of the decade.

As far as what is possible in the near term, in terms of AI, I'd say something like this: by 2025 I expect Tesla to finally have a level-4 driverless car system that works in most areas, except for the most highly congested ones and areas of heavy ice and snow. It won't be perfect, but it will drive about as well as a human -- or perhaps a little better. Of course, this assumes that Karpathy doesn't leave Tesla and that Musk keeps pushing it. Skeptics will still point out every little failure on social media and in the tech press; but the reality will be that it's pretty good. I don't buy the skeptics' claims about "the long tail" -- that it will forever plague AI models. I don't believe them because humans know how to overcome the long tail, and humans don't have an infinite list of contingency plans or methods; eventually, machines will absorb enough methods that they can handle about as many long-tail surprises as the best human -- it's just a matter of time.

On the AI front, for the next several years we will continue to read about improvements to large language models. They'll get longer context windows, show improved reasoning when paired with chain-of-thought and self-consistency, use other modalities to help learn better representations, and generally seem intelligent in most ways humans care about. The new things you'll see that you haven't seen yet from these models are:

(1) People will teach the models via conversation to play new games or complete new tasks, and then the model will ace it right on the spot (e.g. describe a new variant of chess and have it play a decent game; see the sketch just after this list). I expect this to be an emergent phenomenon of models just getting better at predicting the next token. Beyond some point, it will look less like just shallowly predicting patterns, and more like actually using what is said in the text to help it predict things better.

(2) Ability to do long-range, hierarchical planning much better. This is important for writing coherent works of fiction, for instance. Currently, models can't do this past a few paragraphs.

(3) The robustness and accuracy will improve a lot, to where it will actually be pretty hard to trick the model into making a mistake -- and if it makes a mistake, it will be human-like.

(4) It's possible models may show some signs of "understanding" their own limitations (this will help improve accuracy on "I don't know" questions), a kind of "self-awareness".
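
To make (1) concrete, here is a minimal sketch of what "teaching via conversation" looks like mechanically. Everything below -- the `generate` function, the variant rule, the prompt wording -- is an illustrative assumption, not any particular product's API:

```python
# Minimal sketch of in-context teaching: describe a brand-new rule in
# the prompt, then ask for a move on the spot. `generate` is a
# hypothetical stand-in for whatever LLM completion call you use.

def generate(prompt: str) -> str:
    # Placeholder: swap in a real completion API call here.
    return "<model's move and explanation would appear here>"

prompt = (
    "We are playing a chess variant with one new rule: knights may also "
    "move one square diagonally. All other rules are standard chess.\n"
    "Starting position, White to move. Propose a strong first move that "
    "is legal under the variant rules, and explain your reasoning."
)

print(generate(prompt))
```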

These improvements I expect to trickle in over the next couple of years. In the next year or two I expect to see examples like (1), where people teach models new things like playing new games, and the models do an OK job on the spot. The long-range planning (2) might be solved by 2025, by changing the "loss function" of these models so that it doesn't just try to predict the next token: part of the loss would also incorporate a prediction about the next several tokens, and perhaps even some guess at a representation of what the model "plans" to write after that. I expect by about 2025 this will lead to coherent short-story generation, with stories about 1 or 2 pages long.
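
A minimal sketch of what such a modified loss could look like, in PyTorch -- the auxiliary "look-ahead" heads and the 0.1 weighting are my own illustrative assumptions, not anyone's actual training recipe:

```python
import torch.nn.functional as F

def multi_horizon_loss(hidden, next_head, ahead_heads, targets, alpha=0.1):
    """
    hidden:      (batch, seq, d_model) transformer outputs
    next_head:   nn.Linear(d_model, vocab) -- standard next-token head
    ahead_heads: list of nn.Linear(d_model, vocab); head k predicts token t+1+k
    targets:     (batch, seq) token ids; assumes seq > 1 + len(ahead_heads)
    """
    # Standard term: position t predicts token t+1.
    logits = next_head(hidden[:, :-1])
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           targets[:, 1:].reshape(-1))
    # Auxiliary look-ahead terms: position t also predicts t+2, t+3, ...
    for k, head in enumerate(ahead_heads, start=1):
        logits_k = head(hidden[:, :-(1 + k)])
        loss = loss + alpha * F.cross_entropy(
            logits_k.reshape(-1, logits_k.size(-1)),
            targets[:, 1 + k:].reshape(-1))
    return loss
```

The down-weighted auxiliary terms nudge the representations toward "planning ahead" without letting the harder long-horizon predictions swamp the main next-token objective.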

Chain-of-thought, self-consistency, and other methods that improve the "reasoning" should lead to models capable of assisting scientists in the next year or two. At first, the areas where they might help will be medicine and biology, where the reasoning is more about using lots of background knowledge + relatively short inference chains; there is also a lot more biomedical training data to work from, compared to other fields like math. By about 2024 to 2029 I expect models to do a pretty good job at math competition problems; OpenAI is already making progress, and I think that will continue -- though I don't like their approach to it.
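
For reference, self-consistency is simple enough to sketch in a few lines: sample several reasoning chains at nonzero temperature and majority-vote over the final answers. `sample_cot` here is a hypothetical stand-in for a sampled LLM call:

```python
from collections import Counter

def sample_cot(question: str, temperature: float = 0.7) -> str:
    """Hypothetical stand-in: one sampled chain-of-thought, returning
    only the final answer parsed from the end of the chain."""
    return "<final answer from one sampled reasoning chain>"

def self_consistent_answer(question: str, n_samples: int = 20) -> str:
    # Sample several independent reasoning chains, then majority-vote
    # over the final answers (ties break toward the first one seen).
    answers = [sample_cot(question) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]
```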

By about 2025 I expect conversational models to be so good that for a 15 minute conversation most people won't be able to tell they are talking to a computer. Even general-purpose conversational systems will show an ability to learn from instructions (like learning to play new games), an ability to solve reasoning problems that require stringing together long chains of inference, and will show a strong ability at doing "planning" as needed to write short stories. So, there won't be any easy test of ability you can stump it with and prove that it isn't human.

Experts will still find flaws with these systems; but as I said, they will get harder and harder to find. These systems, when given tests of many different skills, will perform at human level or above on almost all of them -- e.g. the BigBench 150 tests.

You won't be able to buy these systems, probably -- but you'll read about them.

What will they be lacking? Well, long-term autobiographical memory may still not be completely fixed (it's not just a matter of looking up previous conversations); baking skills into long-term procedural memory may also not be completely fixed (it could be done by fine-tuning); models may still make glaring errors 2% of the time; they may still not have real-time access to a sensory stream (as it's computationally costly to do so); giving agents stable goals and a stable personality might be a challenge; learning from experience in a strong way might also pose challenges; and so on.

The AI by 2025 won't be reliable enough to where Google could just turn it loose and have it write software for them. But they will probably have something like a next-next-gen version of Codex that they can use internally to help their engineers write code.

By 2030, all bets are off on how advanced it will be. I don't think, though, that it will have as big an impact on the world as AI-boosters think it will -- even if it's able to generate scientific theories to help solve aging, say. As I've said before, experiments will still need to be done; and there is still all the physical labor that needs to be done before we can have some kind of future utopia. We'll need physical robots to do this kind of labor, and the main problem that needs to be solved is having a sophisticated enough AI brain to drive them.

What makes robotics harder than building conversational models is that we don't have the data. The reason it is possible to build conversational models is that there are petabytes of text data on the internet that humans have been generating for decades; there isn't a similar source of robotics data. But that is only part of the problem. There is also the problem that different robot bodies require slightly different control data -- the analogous problem for text would be if, instead of having 5 or 6 main languages, there were hundreds or thousands of languages, each with less than 1% of the data. If that happened with text, it would dilute the training data to such a degree that Google wouldn't have been able to train a model like Chinchilla.
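
A quick back-of-envelope version of that dilution point (the token counts are illustrative assumptions, loosely pegged to Chinchilla's reported ~1.4T training tokens):

```python
total_tokens = 1.4e12     # ~Chinchilla-scale text corpus (illustrative)
n_languages = 6           # a handful of dominant "languages" for text
n_bodies = 1000           # one "dialect" of control data per robot body

print(f"per major language: ~{total_tokens / n_languages:.1e} tokens")  # ~2.3e11
print(f"per robot body:     ~{total_tokens / n_bodies:.1e} tokens")     # ~1.4e9
# Each body's share is ~170x smaller than a major language's share --
# far below what a Chinchilla-style training run would want to see.
```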

One solution to the problem might be that language models contain enough "general, cognitive modules and representations" (https://arxiv.org/abs/2103.05247) to where they can be fine-tuned to drive robots (or perhaps frozen models are good enough, as in that arxiv paper), using relatively little data. It's not clear if that will work as well as hoped -- if it does, then I'd say by 2030 we'll have robots that do some crazy-impressive things, and complete automation of most forms of physical labor will arrive a lot sooner than people think.
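
In the spirit of that paper's frozen-transformer experiments, here is a skeleton of the idea in PyTorch -- freeze the pretrained backbone and train only small input/output projections for the new control modality. The class name, the dimensions, and the assumption that the backbone maps embeddings to embeddings are all mine, a sketch rather than the paper's exact setup:

```python
import torch
import torch.nn as nn

class FrozenBackboneController(nn.Module):
    """Train only the input/output projections; keep the pretrained core fixed."""
    def __init__(self, backbone: nn.Module, obs_dim: int, act_dim: int, d_model: int):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False                   # frozen pretrained weights
        self.embed_obs = nn.Linear(obs_dim, d_model)  # trained from scratch
        self.to_action = nn.Linear(d_model, act_dim)  # trained from scratch

    def forward(self, obs_seq: torch.Tensor) -> torch.Tensor:
        # obs_seq: (batch, seq, obs_dim) -> actions: (batch, seq, act_dim)
        return self.to_action(self.backbone(self.embed_obs(obs_seq)))

# Toy usage with a stand-in backbone; a real run would load pretrained LM weights.
layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
backbone = nn.TransformerEncoder(layer, num_layers=2)
ctrl = FrozenBackboneController(backbone, obs_dim=17, act_dim=6, d_model=64)
actions = ctrl(torch.randn(8, 50, 17))                # -> shape (8, 50, 6)
```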

BCIs might also help provide the needed data, assuming that revolution takes off quickly enough. If enough people use them, and all that data can be collected, then it'd be like running the rise of the internet again, but with "brain signals" as the medium instead of "text". We'd have another type of media with which to train AI models like Chinchilla and PaLM. This new type of data should have a lot more of the kind of "motor control" information we'd need to drive robots.

Regardless, I don't expect it to take more than a small number of decades before we have robots that can do pretty much everything.