Artificial General Intelligence (AGI) News and Discussions

Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: Proto-AGI/First Generation AGI News and Discussions

Post by Yuli Ban »




starspawn0
https://arxiv.org/abs/2002.06177

Systematicity might be one area where language models still aren't perfect. More general sorts of tests than we've seen in these tweets would reveal how systematic they are. I have some theories about why large neural nets should do OK with systematicity. My theory uses the fact that if you take different kinds of "objects" (words, phrases, paragraphs, events, objects in images, etc.) and feed them through a neural net, at each layer (higher layers learn more abstract representations) the neural net maps them to a common representation space. So, a paragraph about vampires will activate some of the same (high-level) "neurons" as an image of a vampire, and those neurons will also activate when the model hears the word "vampire" spoken (audio input to the model). There will even be neurons that represent generic "objects" the same way -- when you talk about some initial object and how it relates to some other, secondary object, there will be neurons that light up for "first object" versus "second object"; and this will happen regardless of the modality, the context, and what "object" refers to (vampire, werewolf, President Lincoln, the Blue Danube suite).
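
(To make the "common representation space" idea concrete -- this is a minimal sketch of mine, not something from starspawn0's post -- here is roughly what it looks like with an off-the-shelf contrastive model such as CLIP, which maps captions and images into one shared embedding space. The model name and the local image file are placeholder choices.)

# Minimal sketch: text and images land in one shared embedding space, so a
# vampire picture should sit closer to a vampire caption than to an
# unrelated one. Assumes transformers, torch, and Pillow are installed;
# "vampire.jpg" is a placeholder local file.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("vampire.jpg")
captions = ["a vampire in a dark castle", "a bowl of fruit on a table"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**inputs)

# image_embeds and text_embeds come back L2-normalised, so a dot product
# is cosine similarity in the shared space.
sims = (out.image_embeds @ out.text_embeds.T).squeeze(0)
print(dict(zip(captions, sims.tolist())))

(CLIP only shares the final projection layer between modalities, whereas the post imagines shared "neurons" higher up a single multimodal net; the sketch is just the smallest runnable illustration of the general idea.)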

But, now, the model may learn to apply "variables" at any level of this abstraction hierarchy (the higher up in the hierarchy, the higher the layer in the neural net); the neural net has the same topology regardless of what layer you look at, so anything that can happen at the "lower levels", which usually encode "low level features" of the input, can happen at the "higher levels" as well. Maybe initially, when you train the model on very specific types of problems using variable binding, it learns to apply some rule at a low level, and can't apply the rule in some other modality, or to objects in the same modality but of a different type, say. But train it long enough and, if it doesn't learn to use rules at the more abstract level, it keeps making errors -- eventually, in order to reduce the training loss yet further, it develops the right "circuits" at these higher levels, and starts showing a capacity to apply the same rule across multiple modalities. Maybe it was only trained to apply new rules across a few hundred contexts; but as a consequence of learning to apply rules at those higher levels, it shows a capacity to apply them to all or almost all contexts. That's one of the hallmarks of systematicity!
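
(A toy way to probe for this kind of rule transfer -- again my illustration, not starspawn0's -- is to demonstrate a rule on familiar words and then test it on nonce words the model has presumably never seen in that pattern. The small stand-in model below will likely fail the probe; the point is only what such a test looks like.)

# Toy systematicity probe: demonstrate a rule ("swap the two words") on
# familiar pairs, then check whether it transfers to nonce words.
# gpt2 is just a small stand-in model and will probably get this wrong.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "Rule: swap the two words.\n"
    "cat dog -> dog cat\n"
    "red blue -> blue red\n"
    "wug dax -> "
)
result = generator(prompt, max_new_tokens=5, do_sample=False)
print(result[0]["generated_text"])  # a systematic model would continue "dax wug"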
And remember my friend, future events such as these will affect you in the future
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: Proto-AGI/First Generation AGI News and Discussions

Post by Yuli Ban »



Edit: I just noticed the thumbnail was generated by DALL-E 2. Incredible.
And remember my friend, future events such as these will affect you in the future
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: Proto-AGI/First Generation AGI News and Discussions

Post by Yuli Ban »

Why I'm skeptical of Szegedy's 2029 bet for "autoformalization" and advanced forms of theorem-proving...
Starspawn0

It really depends on how thorough he wants the "autoformalization" (plain text --> Lean) to be. For example, if you read a lot of math proofs, what you will find is that a lot of important steps are missing. A good example is when a proof breaks down into a bunch of cases that are all kind of similar, and then after proving the first case, the author says, "And the other cases follow from a similar argument." A person with a fairly high level of mathematical ability can then piece together how those other cases can be proved, given how it worked in the first case; but it's usually not "trivial".

Another example where mathematicians leave things out is when some step in an argument boils down to applying a standard technique like induction -- they'll write, "The inequality follows from a simple induction argument." The thing about that is that it often takes a non-trivial amount of work to do (often you need a trick or two); but to someone sufficiently skilled it will be considered "standard" or "routine".
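
(To make the gap concrete -- this example is mine, not part of the quote -- here is what even a genuinely trivial "simple induction argument" costs once Lean 4 demands every step; the routine inductions papers wave at are far messier than this.)

-- A one-line "follows by a simple induction" spelled out in Lean 4 (core
-- library only): the induction variable, the base case, and the rewrite
-- in the successor case all have to be stated explicitly.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => simp [Nat.add_succ, ih]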

One final sticking point worth mentioning is that proofs often make heavy use of geometric and topological intuition to save time. They might say something like, "It's obvious that you can deform the surface into a rectangle." Or maybe they'll say, "It's obvious that the two curves intersect, with the intersection point having 0 < x < 1." Both of these would be murder to translate into Lean and make formally verifiable.

This is why I have said that a better way to produce a powerful theorem-prover is to just use raw natural language proofs as the training data, without translating into Lean first. There are gargantuan amounts of such training data out there, and I have little doubt that a sufficiently powerful language model will learn to prove theorems in natural language -- and will cut corners in human-like ways using tricks like the above (e.g. "the other cases follow from a similar argument"). However, if autoformalization is the first step in the path that they take, I'm afraid all these torturous steps (where humans leave lots out) will ultimately mean they can't formalize more than 90% of math text. In fact, I would put the amount they could do by 2029 at maybe about 50%.
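
(For what "just train on raw natural-language proofs" would mean mechanically -- my sketch, not starspawn0's recipe -- here is a bare-bones causal-LM fine-tuning loop over a hypothetical file of plain-English proofs. The file name, base model, and hyperparameters are all placeholders.)

# Bare-bones sketch: fine-tune a causal language model directly on
# natural-language proofs, with no Lean translation step.
# "nl_proofs.txt" (one proof per line) and the gpt2 base model are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

dataset = load_dataset("text", data_files={"train": "nl_proofs.txt"})

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_data = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="nl-prover", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=train_data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()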

So, I think what's going to happen is that Marcus might win the bet. But at the same time, it will look as though they had made enormous progress. They'll maybe be able to formalize 50% of the math out there, including math with lots of key steps missing. Thus, Marcus will probably say to himself, "Phew!... that was a close call! They got a lot further along than I imagined!" but publicly he'll say something like, "Szegedy and his team lost the bet, and for exactly the reasons I have been saying for almost my entire career. See my recent Medium piece where I discuss the failures of deep learning to solve hard reasoning problems."

On the other hand, if I'm wrong and they do succeed, then I think it will be hard to deny that AGI (in a very strong sense) is close. That's because this autoformalization challenge requires solving a lot of very hard problems that are core obstacles to producing a machine that can reason at a very high level -- in basically any field. If Szegedy succeeds, it won't be long before we have AI scientists that can, for example, derive General Relativity from first principles when given some experimental data to work from. In fact, computers will be able to develop whole theories of physics that far surpass anything we have to date. And they'll also be able to make advances in chemistry, biology, medicine, and almost every other field.
And remember my friend, future events such as these will affect you in the future