Google AI and DeepMind News and Discussions

Post Reply
User avatar
funkervogt
Posts: 1165
Joined: Mon May 17, 2021 3:03 pm

Re: Google DeepMind News and Discussions

Post by funkervogt »

funkervogt wrote: Thu May 12, 2022 4:18 pm DeepMind builds an AI that is neither narrow nor general. "Narrowly general" or "multipurpose" might be the right term.
Inspired by progress in large-scale language modeling, we apply a similar approach towards building a
single generalist agent beyond the realm of text outputs. The agent, which we refer to as Gato, works as a
multi-modal, multi-task, multi-embodiment generalist policy. The same network with the same weights
can play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based
on its context whether to output text, joint torques, button presses, or other tokens. In this report we
describe the model and the data, and document the current capabilities of Gato.
https://storage.googleapis.com/deepmind ... 0Agent.pdf
Someone else has invented the term "subhuman AGI" for Gato.

https://www.lesswrong.com/posts/TwfWTLh ... -early-agi
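To make the "one network, many token types" idea concrete, here's a rough Python sketch (mine, not DeepMind's code) of how a Gato-style policy can flatten text, proprioception and button presses into a single token stream for one autoregressive transformer. The vocabulary size, bin count and offsets are assumptions loosely following the report's description of mu-law encoding and per-modality token ranges.

import numpy as np

TEXT_VOCAB = 32_000                      # assumed subword vocabulary size
NUM_BINS = 1_024                         # assumed bins for continuous values
CONT_OFFSET = TEXT_VOCAB                 # continuous-value tokens sit after text tokens
ACTION_OFFSET = CONT_OFFSET + NUM_BINS   # discrete-action tokens come last

def mu_law(x, mu=100.0, m=256.0):
    # Companding transform that squashes large-magnitude values toward [-1, 1].
    return np.sign(x) * np.log(np.abs(x) * mu + 1.0) / np.log(m * mu + 1.0)

def tokenize_continuous(values):
    # Mu-law encode, clip to [-1, 1], then bucket into NUM_BINS discrete tokens.
    squashed = np.clip(mu_law(np.asarray(values, dtype=np.float64)), -1.0, 1.0)
    bins = np.round((squashed + 1.0) / 2.0 * (NUM_BINS - 1)).astype(int)
    return (bins + CONT_OFFSET).tolist()

def tokenize_discrete_actions(actions):
    # Shift raw discrete action ids (e.g. Atari button indices) into their own range.
    return [ACTION_OFFSET + int(a) for a in actions]

def build_episode_sequence(text_tokens, proprio, actions):
    # Interleave modalities into the single flat stream the shared transformer consumes.
    return list(text_tokens) + tokenize_continuous(proprio) + tokenize_discrete_actions(actions)

if __name__ == "__main__":
    seq = build_episode_sequence(
        text_tokens=[17, 942, 3051],   # pretend subword ids for a caption
        proprio=[0.03, -1.7, 12.5],    # e.g. joint angles from a robot arm
        actions=[2, 2, 5],             # e.g. Atari button presses
    )
    print(seq)  # one flat token list; the same weights predict whatever token type comes next

The point is that once everything is a token, "play Atari" and "caption an image" stop being architecturally different problems; only the context decides what kind of token the network should emit.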
Nanotechandmorefuture
Posts: 478
Joined: Fri Sep 17, 2021 6:15 pm
Location: At the moment Miami, FL

Re: Google DeepMind News and Discussions

Post by Nanotechandmorefuture »

funkervogt wrote: Mon May 16, 2022 9:16 pm
funkervogt wrote: Thu May 12, 2022 4:18 pm DeepMind builds an AI that is neither narrow nor general. "Narrowly general" or "multipurpose" might be the right term.
Inspired by progress in large-scale language modeling, we apply a similar approach towards building a
single generalist agent beyond the realm of text outputs. The agent, which we refer to as Gato, works as a
multi-modal, multi-task, multi-embodiment generalist policy. The same network with the same weights
can play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based
on its context whether to output text, joint torques, button presses, or other tokens. In this report we
describe the model and the data, and document the current capabilities of Gato.
https://storage.googleapis.com/deepmind ... 0Agent.pdf
Someone else has invented the term "subhuman AGI" for Gato.

https://www.lesswrong.com/posts/TwfWTLh ... -early-agi
Jeez.
Vakanai
Posts: 313
Joined: Thu Apr 28, 2022 10:23 pm

Re: Google DeepMind News and Discussions

Post by Vakanai »

funkervogt wrote: Mon May 16, 2022 9:16 pm
funkervogt wrote: Thu May 12, 2022 4:18 pm DeepMind builds an AI that is neither narrow nor general. "Narrowly general" or "multipurpose" might be the right term.
Inspired by progress in large-scale language modeling, we apply a similar approach towards building a
single generalist agent beyond the realm of text outputs. The agent, which we refer to as Gato, works as a
multi-modal, multi-task, multi-embodiment generalist policy. The same network with the same weights
can play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based
on its context whether to output text, joint torques, button presses, or other tokens. In this report we
describe the model and the data, and document the current capabilities of Gato.
https://storage.googleapis.com/deepmind ... 0Agent.pdf
Someone else has invented the term "subhuman AGI" for Gato.

https://www.lesswrong.com/posts/TwfWTLh ... -early-agi
Is it weird that even in the context of AI "subhuman" still sounds so...problematic?
User avatar
funkervogt
Posts: 1165
Joined: Mon May 17, 2021 3:03 pm

Re: Google DeepMind News and Discussions

Post by funkervogt »

DeepMind researcher and decorated computer scientist Richard Sutton says there's a "50% probability of human-level AI by 2040."

agi
Posts: 21
Joined: Tue Apr 05, 2022 3:03 pm

Re: Google DeepMind News and Discussions

Post by agi »

deleted
Last edited by agi on Tue Apr 04, 2023 12:31 pm, edited 1 time in total.
User avatar
Ozzie guy
Posts: 486
Joined: Sun May 16, 2021 4:40 pm

Re: Google DeepMind News and Discussions

Post by Ozzie guy »

DeepMind update

"Abstract
A longstanding goal of the field of AI is a strategy for compiling diverse experience into a highly capable, generalist agent. In the subfields of vision and language, this was largely achieved by scaling up transformer-based models and training them on large, diverse datasets. Motivated by this progress, we investigate whether the same strategy can be used to produce generalist reinforcement learning agents. Specifically, we show that a single transformer-based model -- with a single set of weights -- trained purely offline can play a suite of up to 46 Atari games simultaneously at close-to-human performance. When trained and evaluated appropriately, we find that the same trends observed in language and vision hold, including scaling of performance with model size and rapid adaptation to new games via fine-tuning. We compare several approaches in this multi-game setting, such as online and offline RL methods and behavioral cloning, and find that our Multi-Game Decision Transformer models offer the best scalability and performance. We release the pre-trained models and code to encourage further research in this direction."


https://sites.google.com/view/multi-game-transformers
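If it helps, the core trick behind these Decision Transformer-style agents is return-conditioned sequence modelling: each timestep becomes a (return-to-go, observation, action) triple, the transformer is trained offline to predict the action tokens, and at evaluation time you prompt it with a high target return and let it roll the actions out. Here's a rough Python sketch of the data format (my own illustration, not the released code; names and shapes are assumptions):

import numpy as np

def returns_to_go(rewards, gamma=1.0):
    # Suffix sums of (optionally discounted) rewards: how much return remains from step t.
    rtg = np.zeros(len(rewards), dtype=np.float64)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        rtg[t] = running
    return rtg

def build_training_sequence(observations, actions, rewards):
    # Interleave (return-to-go, obs, action) per timestep into one flat sequence.
    # Training objective: predict action_t given rtg and obs up to t, plus earlier actions.
    rtg = returns_to_go(rewards)
    sequence = []
    for t in range(len(actions)):
        sequence.append(("rtg", float(rtg[t])))
        sequence.append(("obs", observations[t]))
        sequence.append(("act", int(actions[t])))
    return sequence

if __name__ == "__main__":
    # A tiny made-up offline trajectory standing in for logged Atari play.
    obs = [np.zeros(4), np.ones(4), np.full(4, 2.0)]   # stand-in for frame embeddings
    acts = [1, 3, 0]                                   # discrete button presses
    rews = [0.0, 1.0, 5.0]
    for item in build_training_sequence(obs, acts, rews):
        print(item)

Because one set of weights sees trajectories from up to 46 games in this format, the scaling behaviour they report looks a lot more like language modelling than like classic per-game RL.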
Post Reply