

Re: Google DeepMind News and Discussions

Posted: Mon May 16, 2022 9:16 pm
by funkervogt
funkervogt wrote: Thu May 12, 2022 4:18 pm DeepMind builds an AI that is neither narrow nor general. "Narrowly general" or "multipurpose" might be the right term.
Inspired by progress in large-scale language modeling, we apply a similar approach towards building a single generalist agent beyond the realm of text outputs. The agent, which we refer to as Gato, works as a multi-modal, multi-task, multi-embodiment generalist policy. The same network with the same weights can play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens. In this report we describe the model and the data, and document the current capabilities of Gato.
https://storage.googleapis.com/deepmind ... 0Agent.pdf
Someone else has coined the term "subhuman AGI" for Gato.

https://www.lesswrong.com/posts/TwfWTLh ... -early-agi
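
The trick behind "same network with the same weights" is serialization: every modality (text, images, proprioception, actions) is flattened into one shared token vocabulary, and a single autoregressive transformer simply predicts the next token, whatever it happens to encode. A rough Python sketch of that idea follows; the bin counts, ranges, and function names are illustrative, not taken from the paper.

Code: Select all

import numpy as np

# Illustrative constants; Gato's actual vocabulary layout differs.
TEXT_VOCAB = 32_000   # hypothetical text-token range [0, TEXT_VOCAB)
NUM_BINS = 1_024      # discretization bins for continuous values

def tokenize_continuous(values, lo=-1.0, hi=1.0):
    # Clip continuous values (torques, joint angles, ...) and map each
    # one to a discrete bin placed after the text range, so text tokens
    # and control tokens live in a single shared vocabulary.
    clipped = np.clip(np.asarray(values, dtype=float), lo, hi)
    bins = ((clipped - lo) / (hi - lo) * (NUM_BINS - 1)).astype(int)
    return [TEXT_VOCAB + int(b) for b in bins]

def build_timestep(obs_tokens, action_values):
    # Interleave observation tokens and action tokens in time order, so
    # one model learns p(next token | context) across all modalities.
    return list(obs_tokens) + tokenize_continuous(action_values)

# A robot-arm timestep and a chat reply share the same token space:
robot_step = build_timestep(tokenize_continuous([0.1, -0.3]), [0.5])
chat_step = [101, 2054, 2003]  # plain text ids stay in [0, TEXT_VOCAB)

At sampling time the surrounding context determines how a predicted token is interpreted: ids below TEXT_VOCAB decode as text, ids above it decode back into a continuous action.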

Re: Google DeepMind News and Discussions

Posted: Tue May 17, 2022 2:12 am
by Nanotechandmorefuture
funkervogt wrote: Mon May 16, 2022 9:16 pm Someone else has coined the term "subhuman AGI" for Gato.

https://www.lesswrong.com/posts/TwfWTLh ... -early-agi
Jeez.

Re: Google DeepMind News and Discussions

Posted: Tue May 17, 2022 6:56 am
by Vakanai
funkervogt wrote: Mon May 16, 2022 9:16 pm Someone else has coined the term "subhuman AGI" for Gato.

https://www.lesswrong.com/posts/TwfWTLh ... -early-agi
Is it weird that even in the context of AI "subhuman" still sounds so...problematic?


Re: Google DeepMind News and Discussions

Posted: Thu May 26, 2022 11:14 pm
by funkervogt
DeepMind researcher and decorated computer scientist Richard Sutton says there's a "50% probability of human-level AI by 2040."



Re: Google DeepMind News and Discussions

Posted: Wed Jun 01, 2022 12:11 am
by Ozzie guy
DeepMind update:

"Abstract
A longstanding goal of the field of AI is a strategy for compiling diverse experience into a highly capable, generalist agent. In the subfields of vision and language, this was largely achieved by scaling up transformer-based models and training them on large, diverse datasets. Motivated by this progress, we investigate whether the same strategy can be used to produce generalist reinforcement learning agents. Specifically, we show that a single transformer-based model -- with a single set of weights -- trained purely offline can play a suite of up to 46 Atari games simultaneously at close-to-human performance. When trained and evaluated appropriately, we find that the same trends observed in language and vision hold, including scaling of performance with model size and rapid adaptation to new games via fine-tuning. We compare several approaches in this multi-game setting, such as online and offline RL methods and behavioral cloning, and find that our Multi-Game Decision Transformer models offer the best scalability and performance. We release the pre-trained models and code to encourage further research in this direction."


https://sites.google.com/view/multi-game-transformers
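
For anyone wondering what a "Multi-Game Decision Transformer" does mechanically: a Decision Transformer reframes offline RL as supervised sequence modeling. Each timestep is encoded as a (return-to-go, observation, action) triple, the triples are fed through a causal transformer, and the model is trained to predict the logged actions; conditioning on a high target return at test time then steers it toward expert behavior. A toy PyTorch sketch of that setup follows; the dimensions and names are made up for illustration, and the real released code is at the link above.

Code: Select all

import torch
import torch.nn as nn

class TinyDecisionTransformer(nn.Module):
    def __init__(self, obs_dim, num_actions, d_model=128, n_layers=2):
        super().__init__()
        self.embed_rtg = nn.Linear(1, d_model)        # return-to-go scalar
        self.embed_obs = nn.Linear(obs_dim, d_model)  # flattened observation
        self.embed_act = nn.Embedding(num_actions, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.act_head = nn.Linear(d_model, num_actions)

    def forward(self, rtg, obs, act):
        # rtg: (B, T, 1), obs: (B, T, obs_dim), act: (B, T) int64
        B, T = act.shape
        # Interleave tokens per timestep as (rtg_t, obs_t, act_t).
        tokens = torch.stack(
            [self.embed_rtg(rtg), self.embed_obs(obs), self.embed_act(act)],
            dim=2,
        ).reshape(B, 3 * T, -1)
        mask = nn.Transformer.generate_square_subsequent_mask(3 * T)
        h = self.backbone(tokens, mask=mask)
        # Predict act_t from the hidden state at the obs_t slot.
        return self.act_head(h[:, 1::3, :])  # (B, T, num_actions)

# One offline training step: a plain behavioral-cloning-style loss.
model = TinyDecisionTransformer(obs_dim=16, num_actions=18)  # 18 = full Atari action set
rtg = torch.randn(4, 8, 1)
obs = torch.randn(4, 8, 16)
act = torch.randint(0, 18, (4, 8))
logits = model(rtg, obs, act)
loss = nn.functional.cross_entropy(logits.reshape(-1, 18), act.reshape(-1))
loss.backward()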
