Artificial General Intelligence (AGI) News and Discussions

User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: Proto-AGI/First Generation AGI News and Discussions

Post by Yuli Ban »

We may be very close to the rise of the first proto-AGIs alright.

Looking back at this graphic made by Ray Kurzweil, people have surely noticed that we've passed through everything up to mouse brains, and yet there are still no generally intelligent AIs.

Mother Jones' graphic also shows just how powerful computers have become:


Well, if we've been able to develop insect-level AI since the 1980s-1990s, why haven't we? The answer is rooted not in hardware but in software.

I've been thinking about this lately: GANs and transformers show unexpected flashes of intelligence. I wouldn't call them intelligences, but it takes some amount of abstraction to learn that water reflects, and to maintain that even in generated images that aren't overfitted. And GPT-2 and GPT-3 clearly operate on some level like a human, most notably in their mathematical abilities:
THE OBLIGATORY GPT-3 POST
But again, only the delusional, religious, and seriously over-optimistic would call GPT-3 an AGI. And yes, they have! I recently saw someone who was aggressively, combatively serious in their insistence that GPT-3 is an AGI.

It isn't. But could it still be somewhere along the spectrum of biological intelligence if it were coalesced into a more effective form? To be conservative, could it be as generally intelligent as an insect? Insects can't speak English or do math, and a language model clearly isn't meant to replicate insectoid intelligence, so we're dealing with two wholly different kinds of architecture. Still, it's fascinating to consider that training a general-knowledge model purely on insect experiences might produce something indistinguishable from an insect. It's probably possible to do something like this, but there's no real reason to do it other than to prove it can be done, and I doubt any group would want to waste compute just to make a computer think it's a dragonfly. Still, if this were confirmed to be feasible, it would give us the best proof yet that we're on the right track.
And remember my friend, future events such as these will affect you in the future
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: Proto-AGI/First Generation AGI News and Discussions

Post by Yuli Ban »

We may be very close to the rise of the first proto-AGIs alright.

DeepMind has finally started catching up to OpenAI's work, and may even have surpassed them in a few areas. OpenAI isn't the most advanced group, nor do they have the best minds and researchers; DeepMind snagged all the real talent. It was a quirk of fate that OpenAI chased after a far more fruitful methodology while DeepMind was left running after what now look like the dead ends of the 2010s.

I've long guessed that, as soon as DeepMind accepted they had lost the lead and followed the exciting path of large language modeling and world-knowledge modeling, they'd easily surpass OpenAI and perhaps reach general AI.

[2106.13884] Multimodal Few-Shot Learning with Frozen Language Models

Starspawn0's comments: DeepMind. I think I might have posted the tweet thread on this before. It's *amazing*. It's like the kind of thing you would expect from GPT-4 -- super-fast, few-shot learning of new combined visual-and-text skills.


So what do we expect from GPT-4? We might expect it to have few-shot capability, whereby you can show it an image and then teach it a new task on the fly. For example, maybe it's an analogy task: {image} is to X as Y is to ....? [fill in the blank], and it quickly learns to output Z (where Z is the correct answer). Or maybe you can teach it to play chess -- you show it a board and say, "white to move," and it gives a decent move. Maybe you need to give it a few examples first, so that it gets the idea of what you want it to do -- just like the few-shot learning in GPT-3, except here it's with text and images combined.
What's missing is the image synthesis. That's what OpenAI's DALL-E is all about. If you combine what DALL-E can deliver with the model in this DeepMind paper, and then scale it way, way up, you'll have something mind-blowing. So, take that chess example: instead of you always supplying the board for it to decide the next move, it could also generate the board! A sufficiently powerful version of this would literally allow you to create a chess game on the fly, just by giving it a few examples.
You could even make up a whole new board game, and teach it how to play with some examples, and then it would maybe do a passable, amateur-level job as your opponent -- and would even generate subsequent game boards for you.
Just think of the business applications. You could show it some graphs and ask if there is anything that "stands out", and it might generate a paragraph or two -- and it would use its world-knowledge about other companies, industries, supply chains, and so on, to give a plausible answer.
Or maybe you're a student in a chemistry class. You took some hand-written notes about some of the molecules the teacher drew at the board. You could show it one of your drawings, and ask it some questions about it. Maybe you made a mistake, and ask it to correct -- and it will do that, similar to doing "grammar correction".
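The interleaved image-and-text prompting described above is essentially the mechanism in the Frozen paper: a vision encoder turns each image into a short run of embeddings that the frozen language model treats like word embeddings, and example pairs are concatenated into one prompt sequence. A toy sketch of just the prompt construction (all function names and dimensions here are invented for illustration, not the paper's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)
D_MODEL = 64        # embedding width (toy; the real model uses thousands)
IMG_PREFIX_LEN = 2  # each image becomes this many "visual" token embeddings

def embed_text(tokens):
    """Stand-in for the frozen LM's token embedding table."""
    return rng.standard_normal((len(tokens), D_MODEL))

def embed_image(image):
    """Stand-in for the trained vision encoder: one image becomes a short
    sequence of embeddings the frozen LM treats like word embeddings."""
    return rng.standard_normal((IMG_PREFIX_LEN, D_MODEL))

def build_few_shot_prompt(examples, query_image, query_tokens):
    """Interleave (image, text) example pairs, then the query image and
    its partial text, into one embedding sequence for the frozen LM."""
    parts = []
    for image, tokens in examples:
        parts.append(embed_image(image))
        parts.append(embed_text(tokens))
    parts.append(embed_image(query_image))
    parts.append(embed_text(query_tokens))
    return np.concatenate(parts, axis=0)

# Two worked examples teach the task; the model would complete the third.
examples = [("img_cat", ["This", "is", "a", "cat", "."]),
            ("img_dog", ["This", "is", "a", "dog", "."])]
prompt = build_few_shot_prompt(examples, "img_bird", ["This", "is", "a"])
print(prompt.shape)  # (19, 64): (2+5) + (2+5) + (2+3) positions
```

Only the vision encoder is trained; gradients flow through the frozen LM into it, which is why the visual embeddings end up "speaking the LM's language."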

Addendum: Take a look at the example in Figure 1. It's amazing that it knew to map Macaulay Culkin's scream pose to a scream emoji. Look also at Figure 4 -- learns on the fly.
I haven't read it through that deeply yet, but it doesn't seem they are revealing what language model they used -- I could be totally wrong, though. They say, on page 13 in A.2:
The pretrained transformer language model we used has a GPT-like architecture [29]. It consists of a series of identical residual layers, each comprised of a self-attention operation followed by a positionwise MLP. The only deviation from the architecture described as GPT-2 is the use of relative position encodings [36]. Our seven billion parameter configuration used 32 layers, with each hidden layer having a channel dimensionality of 4096 hidden units. The attention operations use 32 heads each with key/value size dimensionality of 128, and the hidden layer of each MLP had 16384 hidden units. The 400 million parameter configuration used 12 layers, 12 heads, hidden dimensionality of 1536, and 6144 units in the MLP hidden layers.
They trained their own GPT-2??
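For what it's worth, the quoted configurations roughly check out against a back-of-the-envelope GPT-style parameter count. The vocabulary size below is a guess (the quoted passage doesn't give one), and biases, layer norms, and position encodings are ignored:

```python
def transformer_params(layers, d_model, d_mlp, vocab=32_000):
    """Rough GPT-style count: 4*d^2 for the attention projections
    (Q, K, V, output) plus 2*d*d_mlp for the MLP, per layer, plus
    the token-embedding table."""
    per_layer = 4 * d_model**2 + 2 * d_model * d_mlp
    return layers * per_layer + vocab * d_model

big = transformer_params(32, 4096, 16_384)   # the "seven billion" config
small = transformer_params(12, 1536, 6_144)  # the 400 million config
print(f"{big/1e9:.2f}B, {small/1e6:.0f}M")   # prints "6.57B, 389M"
```

Both land close to the stated 7B and 400M figures, so the architecture description is internally consistent.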
User avatar
Ozzie guy
Posts: 486
Joined: Sun May 16, 2021 4:40 pm

Re: Proto-AGI/First Generation AGI News and Discussions

Post by Ozzie guy »

Yuli Ban wrote: Tue Jul 06, 2021 12:42 am We may be very close to the rise of the first proto-AGIs alright.
Has this changed your predicted date for proto-AGI at all, or is it more added confirmation that reality is on track to meet your prediction? :)
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: Proto-AGI/First Generation AGI News and Discussions

Post by Yuli Ban »

The latter.
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: Proto-AGI/First Generation AGI News and Discussions

Post by Yuli Ban »

Generally capable agents emerge from open-ended play
In recent years, artificial intelligence agents have succeeded in a range of complex game environments. For instance, AlphaZero beat world-champion programs in chess, shogi, and Go after starting out with knowing no more than the basic rules of how to play. Through reinforcement learning (RL), this single system learnt by playing round after round of games through a repetitive process of trial and error. But AlphaZero still trained separately on each game — unable to simply learn another game or task without repeating the RL process from scratch. The same is true for other successes of RL, such as Atari, Capture the Flag, StarCraft II, Dota 2, and Hide-and-Seek. DeepMind’s mission of solving intelligence to advance science and humanity led us to explore how we could overcome this limitation to create AI agents with more general and adaptive behaviour. Instead of learning one game at a time, these agents would be able to react to completely new conditions and play a whole universe of games and tasks, including ones never seen before.

Today, we published "Open-Ended Learning Leads to Generally Capable Agents," a preprint detailing our first steps to train an agent capable of playing many different games without needing human interaction data. We created a vast game environment we call XLand, which includes many multiplayer games within consistent, human-relatable 3D worlds. This environment makes it possible to formulate new learning algorithms, which dynamically control how an agent trains and the games on which it trains. The agent’s capabilities improve iteratively as a response to the challenges that arise in training, with the learning process continually refining the training tasks so the agent never stops learning. The result is an agent with the ability to succeed at a wide spectrum of tasks — from simple object-finding problems to complex games like hide and seek and capture the flag, which were not encountered during training. We find the agent exhibits general, heuristic behaviours such as experimentation, behaviours that are widely applicable to many tasks rather than specialised to an individual task. This new approach marks an important step toward creating more general agents with the flexibility to adapt rapidly within constantly changing environments.

starspawn0:
This is one of the tasks I once wrote it would be nice to see brain data applied to. In an old post of mine, I wondered: could one use brain data to build a game-playing agent that does decently on a new game out of the box? You see, humans can be shown a new game and, if they have some game-playing experience, do an OK job on the first try -- e.g. they won't die immediately; won't run into enemies; will predict where the enemies are moving, using physical commonsense reasoning; will infer what the goal might be; and so on. That's a much, much harder problem than training an agent to solve any particular game. It requires something closer to AGI than we've seen in game-playing AIs in the past.

I would say this is as much a breakthrough and shock as GPT-3 (and GPT-2). Scale this up and use more real-world tasks (instead of games), and you could probably make something that genuinely seems intelligent, if put in a robot body and allowed to interact with the world. Add in some language capability, and you're going to have something that needs to be watched carefully!

....

The success of this work will lead to even larger attempts by other groups. The perceived risk in attempting something like this is now a lot lower. Before this work, some teams might have had the same idea, but then thought, "Ahh... probably won't work. And if we try, we'll have wasted large numbers of hours and millions of dollars, with little to show for it, except marginally better game-playing agents. Could we really make this work?..." and the doubt and skepticism set in.
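The "learning process continually refining the training tasks" idea from the blog post can be caricatured in a few lines: keep sampling tasks near the frontier of the agent's ability, so it never runs out of solvable-but-not-trivial challenges. This is a deliberately crude stand-in, not DeepMind's actual algorithm:

```python
import random

random.seed(0)

# Toy open-ended training loop: tasks are just difficulty levels, the
# "agent" is a single skill scalar, and the curriculum keeps resampling
# tasks near the edge of its ability.
tasks = [d / 10 for d in range(1, 11)]   # difficulties 0.1 .. 1.0
skill = 0.0

def solve_rate(skill, difficulty):
    """Chance the agent solves a task: high when skill >= difficulty."""
    return max(0.0, min(1.0, 1.0 + skill - difficulty))

for step in range(2000):
    # Dynamic task selection: prefer tasks the agent solves sometimes
    # but not always -- the learning frontier. r*(1-r) peaks at r=0.5.
    weights = [solve_rate(skill, d) * (1 - solve_rate(skill, d)) + 1e-3
               for d in tasks]
    task = random.choices(tasks, weights=weights)[0]
    if random.random() < solve_rate(skill, task):
        skill = min(1.0, skill + 0.001)  # learning from a solved task

print(f"final skill: {skill:.2f}")
```

The point of the sketch is the feedback loop: as skill rises, the weighting automatically shifts training toward harder tasks, which is the "agent never stops learning" dynamic described above, minus everything that makes the real system hard.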
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: Proto-AGI/First Generation AGI News and Discussions

Post by Yuli Ban »

Astounding!!

DeepMind’s XLand trains AI agents to complete complex tasks
DeepMind today detailed its latest efforts to create AI systems capable of completing a range of different, unique tasks. By designing a virtual environment called XLand, the Alphabet-backed lab says that it managed to train systems with the ability to succeed at problems and games including hide and seek, capture the flag, and finding objects, some of which they didn’t encounter during training.

The AI technique known as reinforcement learning has shown remarkable potential, enabling systems to learn to play games like chess, shogi, Go, and StarCraft II through a repetitive process of trial and error. But a lack of training data has been one of the major factors limiting reinforcement learning–trained systems’ behavior being general enough to apply across diverse games. Without being able to train systems on a vast enough set of tasks, systems trained with reinforcement learning have been unable to adapt their learned behaviors to new tasks.

DeepMind designed XLand to address this, which includes multiplayer games within consistent, “human-relatable” digital worlds. The simulated space allows for procedurally generated tasks, enabling systems to train on — and generate experience from — tasks that are created programmatically.
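To make "procedurally generated tasks" concrete: an XLand-style task pairs a generated world with a goal predicate over objects in it. A minimal sketch, with the object and relation names invented for illustration (not DeepMind's actual task format):

```python
import random

random.seed(1)

# A task = a procedurally generated world (object placements on a grid)
# plus a goal predicate over pairs of objects.
OBJECTS = ["black pyramid", "yellow sphere", "purple cube"]
RELATIONS = ["near", "on", "holding"]

def sample_task():
    world = {obj: (random.randint(0, 9), random.randint(0, 9))
             for obj in OBJECTS}                    # random placements
    a, b = random.sample(OBJECTS, 2)                # ordered object pair
    goal = f"{random.choice(RELATIONS)}({a}, {b})"  # goal predicate
    return world, goal

tasks = [sample_task() for _ in range(1000)]
unique_goals = {g for _, g in tasks}
print(len(unique_goals), "distinct goals in 1000 samples")
```

Even this tiny space yields 18 distinct goals; the real XLand composes predicates and world geometry programmatically, which is how it gets a "vast" universe of tasks to train across.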
Just imagine where this'll be in five years, with cognitive agents that can accomplish a long, logical sequence of convoluted tasks, including tasks that require learning from entirely different tasks without any retraining. It'll genuinely start becoming hard to tell where the "A" ends and the "I" begins.
User avatar
Ozzie guy
Posts: 486
Joined: Sun May 16, 2021 4:40 pm

Re: Proto-AGI/First Generation AGI News and Discussions

Post by Ozzie guy »

https://www.lesswrong.com/posts/mTGrrX8 ... open-ended

"EDIT: My warm take: The details in the paper back up the claims it makes in the title and abstract. This is the GPT-1 of agent/goal-directed AGI; it is the proof of concept. Two more papers down the line (and a few OOMs more compute), and we'll have the agent/goal-directed AGI equivalent of GPT-3. Scary stuff."

I am not sure how much clout Daniel has; however, I think basically everyone who posts to LessWrong is a serious person.
User avatar
Ken_J
Posts: 241
Joined: Sun May 16, 2021 5:25 pm

Re: Proto-AGI/First Generation AGI News and Discussions

Post by Ken_J »

I'm increasingly convinced that a major factor limiting the development of AI in most of the Western world will be anxiety about data privacy and the general population's unwillingness to entrust so much authority to someone or something we don't think of as part of our tribe. That doesn't seem to be hindering places like China.

Americans in particular would freak out if all cars suddenly had recorders tracking every aspect of their driving. But such a thing would provide amazing amounts of data that could help regulate supply chains and traffic management, and even identify the hours when it would be most useful for companies to schedule their employees, instead of staffing the same number every hour of the day.

Data is key to AI development. Not only does China have a government and population willing to collect and look at all that data, but the size of the population itself provides an absolutely massive sample.

So I'm increasingly sure that the AI future is likely to be written in Mandarin.
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: Proto-AGI/First Generation AGI News and Discussions

Post by Yuli Ban »

Initially, we'd freak out. See what happened with Google Glass as an example. But we'd grow used to it over time. It'd just be delayed.
User avatar
suroy
Posts: 2
Joined: Fri Aug 06, 2021 3:25 pm

Re: Proto-AGI/First Generation AGI News and Discussions

Post by suroy »

I think the first true AGI will be created in the late 2030s to 2040s, inside a 12K-to-16K-resolution VR world; however, I can't dismiss the possibility that some proto-AGIs will be created in the 2020s, albeit quite crude ones. We will definitely have the tools to create a true AGI this decade, but we'll be limited by the resolution of brain-scanning technologies. We need the latter to extract models/algorithms from the brain and incorporate them into an AGI.

Right now OpenBCI seems focused on language/speech models to achieve AGI, but a generally intelligent human is not solely dependent on language/speech. A body is needed to learn the environment, objects, etc., but we don't need robotics for that. We could take AGI models like GPT, along with other models extracted from the brain (sensation, perception, motor action, memory, etc.), and incorporate them into an NPC avatar. VR is needed for a human to assist the AGI in real-time learning: the human will have an avatar of their own, interact with the AGI/NPC, and teach it.

Learning as we know it is accompanied by growing, so we'd need time-adaptive rendering in this VR world to simulate aging. The first generation of these AGIs in NPC bodies would have baby avatars, and over time their bodies and minds would grow into adults. That means we need self-generating, self-coding models that are quite adaptable and reactive. The baby AGIs would have crude models at first; then, through nurturing by humans in the VR world, the models within these AGIs would grow and essentially become like ours. I also think it's important to give them mortality so that humans are not required to teach them indefinitely: AGIs would grow, give birth to other AGIs, pass down what they know, grow old, and die. The cycle continues.

It's scary because we could simulate the VR world as, say, a medieval village, and only allow medieval technology to exist and persist. The AGIs wouldn't even know that their reality is simulated and created by us: at 12K to 16K resolution, the VR world would have roughly submillimeter precision, so there would be no way for them to even tell it's simulated.

The likelihood of JOIs (the AI companion from the movie "Blade Runner 2049") seems inevitable, except we would interact with them through VR rather than holograms. I can't dismiss that some people would give them an android body like in Ex Machina, but that would be too expensive, crude, and robotic compared to a VR avatar with a human body indistinguishable from a real one (submillimeter precision) and real human-like movements. Even something akin to Westworld could be done in VR, and we could do away with robotics. But to have Westworld-level resolution, we'd have to wait until the 2040s.
Post Reply