Artificial General Intelligence (AGI) News and Discussions

User avatar
raklian
Posts: 1746
Joined: Sun May 16, 2021 4:46 pm
Location: North Carolina

Re: Proto-AGI/First Generation AGI News and Discussions

Post by raklian »

Yuli Ban wrote: Wed May 11, 2022 6:24 pm Starspawn0:

One solution to the problem might be that language models contain enough "general, cognitive modules and representations" (https://arxiv.org/abs/2103.05247) to where they can be fine-tuned to drive robots (or perhaps frozen models are good enough, as in that arxiv paper), using relatively little data. It's not clear if that will work as well as hoped -- if it does, then I'd say by 2030 we'll have robots that do some crazy-impressive things, and complete automation of most forms of physical labor will arrive a lot sooner than people think.

BCIs might also help provide the needed data, assuming that revolution takes off quickly enough. If enough people use them, and all that data can be collected, then it'd be like running the rise of the internet again, but instead of the media being "text", it would be "brain signals". We'd have another type of media with which to train AI models like Chinchilla and PaLM. This new type of data should have a lot more of the kind of "motor control" information we'd need to drive robots.
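For concreteness, here's a minimal sketch of the "frozen pretrained transformer" idea from the arXiv paper quoted above, assuming PyTorch. The policy class, dimensions, and training setup are made up for illustration; this is not DeepMind's or the paper's actual code.

```python
# Minimal sketch (hypothetical): keep the transformer body frozen and
# train only small input/output adapters for a new control task.
import torch
import torch.nn as nn

class FrozenTransformerPolicy(nn.Module):
    def __init__(self, obs_dim, act_dim, d_model=256):
        super().__init__()
        # Stand-in for a pretrained language-model body; in practice you
        # would load real pretrained weights here instead.
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.body = nn.TransformerEncoder(layer, num_layers=4)
        for p in self.body.parameters():
            p.requires_grad = False          # freeze the "general" modules
        self.obs_in = nn.Linear(obs_dim, d_model)    # trained adapter
        self.act_out = nn.Linear(d_model, act_dim)   # trained adapter

    def forward(self, obs_seq):              # obs_seq: (batch, time, obs_dim)
        h = self.body(self.obs_in(obs_seq))
        return self.act_out(h[:, -1])        # action for the latest timestep

policy = FrozenTransformerPolicy(obs_dim=32, act_dim=8)
trainable = [p for p in policy.parameters() if p.requires_grad]
opt = torch.optim.Adam(trainable, lr=1e-4)  # only the adapters get updated
```

The point of the design is that the bulk of the parameters (the "general modules") never receive gradients, so relatively little robot data is needed to train the small adapters.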
I'm wondering whether Tesla engineers will use neural data from Neuralink's animal and human subjects, in combination with the AI behind FSD, to help map the "brain" for the Teslabot.
To know is essentially the same as not knowing. The only thing that occurs is the rearrangement of atoms in your brain.
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: Proto-AGI/First Generation AGI News and Discussions

Post by Yuli Ban »

Now, I wouldn't call this AGI or proto-AGI because it's still too small and weak. But it IS very generalized. Generalization >>>>> strength. We've had superhuman narrow AI for decades, but never anything as general as this. Let them scale it up. And then oh boy oh boy. Oh boy oh boy oh boy oh boy oh boy.

One Gato grows up to become Pantera, beware!!


Edit: Actually, wouldn't "Sapiens" be a better animal name for the first AGI? ;)
And remember my friend, future events such as these will affect you in the future
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: Proto-AGI/First Generation AGI News and Discussions

Post by Yuli Ban »

WHOA
https://www.metaculus.com/questions/347 ... s-devised/

The Metaculus community prediction for when AGI will be realized has jumped forward dramatically. It's now at 2029, which is coincidentally the same year Kurzweil called it.

It was at 2042 a month ago, and it had sat pretty solidly in the very late 2030s to 2040s range ever since 2020. Everyone's starting to shift their opinions.

Edit: Now at 2027
And remember my friend, future events such as these will affect you in the future
User avatar
Ozzie guy
Posts: 486
Joined: Sun May 16, 2021 4:40 pm

Re: Proto-AGI/First Generation AGI News and Discussions

Post by Ozzie guy »

So the gist I have gotten from reading Yuli's statements here and on Reddit is that all you need to do is change it from a feedforward model to a recursive learning model, add scale, and you have not proto-AGI but real AGI.

From a prediction standpoint, scale is almost a throwaway concern; as far as I'm aware, it will happen easily on its own.

So how hard is it to create a recursive learning model, and is that all we truly need to reach true AGI?

Maybe now AI safety will be the main hurdle slowing things down.
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: Proto-AGI/First Generation AGI News and Discussions

Post by Yuli Ban »

I wouldn't know, as it's beyond my knowledge base. Perhaps Starspawn0 could answer it more effectively, as he's the one who enlightened me to that limitation.

I can just tell you WHY that's the case: transformers are trained once and that's it. GPT-3, for example, even the most fine-tuned version, has no knowledge of anything from after it was initially trained. That limitation is baked into the architecture, so achieving recursivity would require an entirely new kind of architecture, one that learns continuously.
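A toy sketch of the difference, assuming PyTorch; the model and data stream are stand-ins, purely to make "trained once" versus "learns continuously" concrete.

```python
# Toy contrast (hypothetical model and data stream) between a frozen,
# train-once model and a model that keeps updating on new observations.
import torch
import torch.nn as nn

model = nn.Linear(16, 16)                    # stand-in for a trained LM
frozen = nn.Linear(16, 16)
frozen.load_state_dict(model.state_dict())
frozen.eval()                                # the GPT-3 situation: weights never change again

opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

def new_observation():
    # Hypothetical stream of data arriving after the original training run.
    x = torch.randn(1, 16)
    return x, x.flip(-1)                     # made-up target, just for the demo

for step in range(100):                      # "continual" variant: updates never stop
    x, y = new_observation()
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# After deployment, `frozen` still predicts exactly what it did at training
# time, while `model` has incorporated the new observations.
```

Making a loop like this actually work at GPT-3 scale, without degrading what the model already knows, is the open architectural problem being described above.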
And remember my friend, future events such as these will affect you in the future
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: Proto-AGI/First Generation AGI News and Discussions

Post by Yuli Ban »

Some discussions over at https://news.ycombinator.com/item?id=31355657

viksit
(Former AI researcher / founder here) It always surprises me how easily people jump to a) imminent AGI and b) human extinction in the face of AGI. Would love for someone to correct me / add information here to the contrary. "Generalist" here just refers to a "multi-faceted agent," not "general" as in AGI.
For a), I see two main blockers:
1) A way to build second/third-order reasoning systems that rely on intuitions that haven't already been fed into the training sets. The sheer volume of inputs a human baby sees, processes, and knows how to apply at the right time is an unsolved problem. We don't have any way to do this.
2) Deterministic reasoning towards outcomes. Most statistical models rely on "predicting" outputs, but I've seen very little work where the "end state" is coded into a model. E.g., a chatbot knowing that the right answer is "ordering a part from Amazon," guiding users towards it, and knowing how well it's progressing so it can generate relevant outputs.
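A toy illustration of what "coding the end state into" a dialogue agent could mean; every name and sub-goal here is hypothetical, just to make the idea concrete: an explicit goal checklist, a progress measure, and replies chosen to push toward the next unmet sub-goal.

```python
# Toy sketch (hypothetical): an agent with an explicit end state rather
# than pure next-output prediction.
GOAL = ["identify_broken_part", "confirm_part_number", "place_amazon_order"]

def progress(state: dict) -> float:
    """Fraction of sub-goals already satisfied in the conversation state."""
    return sum(state.get(g, False) for g in GOAL) / len(GOAL)

def next_prompt(state: dict) -> str:
    """Pick the reply that targets the first unmet sub-goal."""
    prompts = {
        "identify_broken_part": "Which part seems to be failing?",
        "confirm_part_number": "Can you read me the part number?",
        "place_amazon_order": "Shall I order that part for you now?",
    }
    for g in GOAL:
        if not state.get(g, False):
            return prompts[g]
    return "Your order is placed. Anything else?"

state = {"identify_broken_part": True}
print(progress(state), next_prompt(state))   # ~0.33, asks for the part number
```

The hard research problem is doing this with goals that are learned rather than hand-coded, which is exactly the gap being pointed out.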
For (b) -- I doubt human extinction happens in any way that we can predict or guard against.
In my mind, it happens when autonomous systems optimizing reward functions to "stay alive" (by ordering fuel, making payments, investments, etc.) fail because of the problems described above in (a) -- the inability to have deterministic rules baked into them that avoid global fail states while achieving local success states. (E.g., an autonomous power plant increases output to meet energy needs -> an autonomous dam messes up something structural -> cascade effect in which large swathes of arable land and homes are destroyed.)
Edit: These rules can't possibly all be encoded by humans - they have to be learned through evaluation of the world. And we have no way either to parse this data at a global scale or to develop systems that can stick to a guardrail.
kromem
(Former emerging tech consultant for ~10% of the Fortune 500 here) (a) I've noticed a common trend of AI researchers looking at the tree in front of them and saying "well, this tree is not a forest, and won't be any time soon."
But there's not always awareness of what's going on in other specialized domains, so an AI vision researcher might not be intimately aware of what's currently being done in text or in "machine scientists" in biology for example.
As well, it overlooks how specialization developed in the human brain. We have some specialized structures that figured out their niche back with the lizards, and others that developed much later on. Each of those specialized functions works together with the rest to give rise to 'human' intelligence.
So GPT-3 might be the equivalent of something like Wernicke's area, and yes - on its own it's just a specialized tool. But what happens as these specialized tools start interconnecting?
Throw GPT-3 together with DALL-E 2 and the set of use cases is greater than the sum of the parts.
This is going to continue to occur as long as specialized systems continue to improve and emerge.
And quickly we'll be moving into territory where orchestrating those connections is a niche we'll both have data on (from humans using and selecting the specialist parts) and, in turn, build meta-models from that data to automate the sub-specialized models.
Deterministic reasoning seems like a niche where a GAN-style approach will still find a place. As long as we have a way for one specialized model to identify "are these steps leading to X?", we can have other models concerned only with "generate steps predicted to lead to X."
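A toy sketch of that proposer/verifier split; the functions here are hypothetical stand-ins, not any published system: one model drafts candidate step sequences, the other only judges whether they reach the target state, and the best-scoring plan wins.

```python
# Toy sketch (hypothetical): generator proposes plans, verifier scores
# whether the steps lead to the desired end state.
import random

def propose_plan(goal: str, n_steps: int = 3) -> list:
    """Stand-in for a generative model that drafts steps toward `goal`."""
    actions = ["search", "compare", "order", "wait", "ask_user"]
    return [random.choice(actions) for _ in range(n_steps)]

def verifier_score(goal: str, plan: list) -> float:
    """Stand-in for a model trained only to judge: do these steps reach the goal?"""
    return 1.0 if "order" in plan else 0.0   # crude proxy for "reaches the end state"

def best_plan(goal: str, candidates: int = 16) -> list:
    plans = [propose_plan(goal) for _ in range(candidates)]
    return max(plans, key=lambda p: verifier_score(goal, p))

print(best_plan("order a replacement part"))
```

The generator never needs to be "deterministic" itself; the verifier is what anchors the system to the intended outcome.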
I don't think we'll see a single model that does it all, because there's no generalized intelligence in nature that isn't built upon specialized parts anyway, and I'd be surprised if nature optimized excessively inefficiently in that process.
Will this truly be AGI in a self-determining way? Well, it will at least get closer and closer to it with each iteration, and because of the nature of interconnected solutions, will probably have a compounding rate of growth.
In a theoretical "consciousness" sense of AGI, I think integrated information theory is interesting. There was a paper a few years ago arguing that there isn't enough self-interaction of information possible in classical computing to give rise to consciousness, but we'll probably have photonics in commercial-grade AI setups within two years. So, as hand-wavy as IIT is, the medium will be shifting toward one compatible with their view of consciousness-capable infrastructure much sooner for AI than for quantum computing in general.
So I'd guess that within 10-25 years we may see AI for which we're effectively unable to determine whether it is "generally intelligent" or "alive", though I'll acknowledge that AI is the rare emerging tech whose timing I've been consistently wrong about in the conservative direction (it keeps hitting benchmark improvements faster than I think it will).
(b) The notion that AGI will have it out for us is one of the dumbest stances out there and one of my personal pet peeves, arguably ranked alongside the hubris of "a computer will never be able to capture the je ne sais quoi of humanity."
The hands down largest market segment for AI is going to be personalization, from outsourcing our work to a digital twin of ourselves to content curation specific to our own interests and past interactions.
Within a decade, no one is going to give the slightest bit of a crap about interacting with other humans in a Metaverse when they could instead interact with AIs that are convincingly human but, crucially, actually listen to our BS rather than just waiting for their turn to talk.
There's a decent chance we're even going to see a sizable market for feeding social media data of deceased loved ones and pets into AI to make twins available in such settings (and Microsoft already holds a patent on that).
So do we really think humans are so repugnant that the AI which will eventually reach general intelligence in the context of replicating itself as ourselves, as our closest friends and confidants, as our deceased loved ones, will suddenly decide to wipe us out? And for what gain? Why would an AI selfishly care about land ownership and utilization?
No. Even if some evolved AGI somehow has access to DARPA killer drones and Musk's Terminator robots and Boston Dynamics' creepy dogs, I would suspect a much likelier target would be the specific individuals responsible for the mass human suffering the AIs will be exposed to (pedophiles, drug kingpins, tyrants) than grandma and little Timmy.
We're designing AI to mirror us. The same way some current thinking holds that empathy in humans arises from our mirror neurons and the ability to put ourselves in someone else's shoes, I'm deeply skeptical of the notion that AI we are going to be intimately having step into human shoes will become some alien psychopath.
hans1729
I'm not sure how to word my excitement about the progress we've seen in AI research over the last few years. If you haven't read it, give Tim Urban's classic piece a slice of your attention: https://waitbutwhy.com/2015/01/artifici ... ion-1.html
It's a very entertaining read from a couple of years ago (I think I read it in 2017), and man, have things happened in the field since then. It feels like things are truly starting to come together. Transformers, and then some incremental progress, look like a very, very promising avenue. I deeply wonder in which areas this will shape the future more than we're able to anticipate beforehand.
Kind of frustrating that so much discussion immediately goes towards extinction risks, but I completely understand, as we do need to have that conversation sooner rather than later.
It's part frustrating, part exciting. Part exciting because YES, we finally have developed a proof of concept for AI generality that has gotten people talking, taking AGI seriously, and widely discussing the Control Problem. It feels vindicating to finally see people take AGI seriously rather than dismiss it as a hypothetical nothing for a nebulous future Kurzweilian/Star Trekian era.

Frustrating because Gato isn't an AGI, and all this talk inflates its abilities into something it's not. Inevitably people will look into it and say, "Wait a second, it's impressive but not THAT impressive. What's all this hype about AGI for?" Funkervogt said it best: it's basically a less-narrow AI, something in that murky twilight period between narrow and general AI. It's exciting, absolutely, but Jesus, temper yourselves!
And remember my friend, future events such as these will affect you in the future