Re: Google DeepMind News and Discussions
Posted: Fri May 06, 2022 12:07 am
A community of futurology enthusiasts
https://www.futuretimeline.net/forum/
https://www.futuretimeline.net/forum/viewtopic.php?f=16&t=152
https://storage.googleapis.com/deepmind ... 0Agent.pdf
Inspired by progress in large-scale language modeling, we apply a similar approach towards building a
single generalist agent beyond the realm of text outputs. The agent, which we refer to as Gato, works as a
multi-modal, multi-task, multi-embodiment generalist policy. The same network with the same weights
can play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based
on its context whether to output text, joint torques, button presses, or other tokens. In this report we
describe the model and the data, and document the current capabilities of Gato.
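The core idea the abstract describes is serializing every modality into a single token sequence so that one network with one set of weights can consume and produce them all. The sketch below illustrates that scheme in the simplest possible form; the vocabulary sizes, bin counts, and token-range layout are illustrative assumptions, not Gato's actual values.

```python
# Sketch: flatten heterogeneous data (text, joint torques, button presses)
# into one stream of integer tokens. Each modality gets its own reserved
# range of the vocabulary, so the range a token falls in identifies its
# modality. All constants below are assumptions for illustration.

TEXT_VOCAB = 1000          # assumed text vocabulary size
NUM_BINS = 256             # uniform bins for continuous values in [-1, 1]
CONT_OFFSET = TEXT_VOCAB   # continuous-value tokens start here
BUTTON_OFFSET = CONT_OFFSET + NUM_BINS  # button-press tokens start here

def tokenize_continuous(x: float) -> int:
    """Map a continuous value in [-1, 1] to one of NUM_BINS tokens."""
    x = max(-1.0, min(1.0, x))
    bin_idx = min(int((x + 1.0) / 2.0 * NUM_BINS), NUM_BINS - 1)
    return CONT_OFFSET + bin_idx

def detokenize_continuous(token: int) -> float:
    """Invert tokenize_continuous, returning the bin centre."""
    bin_idx = token - CONT_OFFSET
    return -1.0 + (bin_idx + 0.5) * 2.0 / NUM_BINS

def tokenize_episode(text_ids, joint_torques, button_presses):
    """Concatenate all modalities into one flat token sequence."""
    tokens = list(text_ids)  # text IDs are already in [0, TEXT_VOCAB)
    tokens += [tokenize_continuous(v) for v in joint_torques]
    tokens += [BUTTON_OFFSET + b for b in button_presses]
    return tokens
```

Because each modality occupies a disjoint token range, a decoder can tell from a sampled token's value whether it represents text, a joint torque, or a button press, which is how a single sequence model can switch between output types based on context.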
funkervogt wrote: ↑Thu May 12, 2022 4:18 pm DeepMind builds an AI that is neither narrow nor general. "Narrowly general" or "multipurpose" might be the right term.
If my term had ever taken off, we'd have "artificial expert intelligence" to describe it: a type of AI in between narrow and general AI.
We are hiring for several roles in the Scalable Alignment and Alignment Teams at DeepMind, two of the subteams of DeepMind Technical AGI Safety trying to make artificial general intelligence go well. We elaborate on the problem breakdown between Alignment and Scalable Alignment below, and discuss details of the various positions. In brief:
- The Alignment Team investigates how to avoid failures of intent alignment, operationalized as a situation in which an AI system knowingly acts against the wishes of its designers. Alignment is hiring for Research Scientist and Research Engineer positions.
- The Scalable Alignment Team (SAT) works to make highly capable agents do what humans want, even when it is difficult for humans to know what that is. This means we want to remove subtle biases, factual errors, and deceptive behaviour even when they would normally go unnoticed by humans, whether due to reasoning failures or biases in humans, or due to very capable behaviour by the agents. SAT is hiring for Research Scientist - Machine Learning, Research Scientist - Cognitive Science, Research Engineer, and Software Engineer positions.