Google AI and DeepMind News and Discussions

User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: Google DeepMind News and Discussions

Post by Yuli Ban »

Acquisition of Chess Knowledge in AlphaZero
Researchers at DeepMind and Google Brain, in collaboration with Grandmaster Vladimir Kramnik, are exploring what chess can teach us about AI and vice versa. Using ChessBase's extensive historical chess data along with the AlphaZero neural network chess engine and components from Stockfish 8, they ask: what can we learn about chess history by studying AlphaZero, how does AlphaZero learn to evaluate positions, and is AlphaZero computing anything human-like? Their paper, "Acquisition of Chess Knowledge in AlphaZero", has just been published.
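
The heart of the paper is concept probing: take AlphaZero's internal activations for a large set of positions, fit a simple regularised linear model from those activations to a human chess concept (material balance, king safety, mobility, and so on), and check how well the concept can be read off each layer. A minimal sketch of that idea, with random arrays standing in for the real network internals and concept labels:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Hypothetical data: activations from one AlphaZero layer for N positions,
# plus a human-defined concept value (e.g. material balance) per position.
rng = np.random.default_rng(0)
N, D = 10_000, 256
activations = rng.normal(size=(N, D))        # stand-in for net internals
concept = activations @ rng.normal(size=D)   # stand-in for concept labels

X_train, X_test, y_train, y_test = train_test_split(
    activations, concept, test_size=0.2, random_state=0)

# A regularised linear probe: if it predicts the concept well on held-out
# positions, the layer linearly encodes that concept.
probe = Ridge(alpha=1.0).fit(X_train, y_train)
print(f"probe R^2 = {probe.score(X_test, y_test):.3f}")
```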
And remember my friend, future events such as these will affect you in the future
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: Google DeepMind News and Discussions

Post by Yuli Ban »

DeepMind, the AI research laboratory owned by Google's parent company, Alphabet, today published the results of a collaboration with mathematicians to apply AI toward discovering new insights in areas of mathematics. DeepMind claims that its AI technology helped to uncover a new formula for a previously unsolved conjecture, as well as a connection between different areas of mathematics elucidated by studying the structure of knots.

DeepMind’s experiments with AI run the gamut from systems that can win at StarCraft II and Go to machine learning models for app recommendations and datacenter cooling optimization. But the sciences remain of principal interest to DeepMind, not least because of their commercial applications. Earlier this year, DeepMind cofounder Demis Hassabis announced the launch of Isomorphic Labs, which will use machine learning to identify disease treatments that have thus far eluded researchers. Separately, the lab has spotlighted its work in the fields of weather forecasting, materials modeling, and atomic energy computation.

“At DeepMind, we believe that AI techniques are already sufficient to have a foundational impact in accelerating scientific progress across many different disciplines,” DeepMind machine learning specialist Alex Davies said in a statement. “Pure maths is one example of such a discipline, and we hope that [our work] can inspire other researchers to consider the potential for AI as a useful tool in the field.”
And remember my friend, future events such as these will affect you in the future
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: Google DeepMind News and Discussions

Post by Yuli Ban »

Games have a long history of serving as a benchmark for progress in artificial intelligence. Recently, approaches using search and learning have shown strong performance across a set of perfect information games, and approaches using game-theoretic reasoning and learning have shown strong performance for specific imperfect information poker variants. We introduce Player of Games, a general-purpose algorithm that unifies previous approaches, combining guided search, self-play learning, and game-theoretic reasoning. Player of Games is the first algorithm to achieve strong empirical performance in large perfect and imperfect information games -- an important step towards truly general algorithms for arbitrary environments. We prove that Player of Games is sound, converging to perfect play as available computation time and approximation capacity increase. Player of Games reaches strong performance in chess and Go, beats the strongest openly available agent in heads-up no-limit Texas hold'em poker (Slumbot), and defeats the state-of-the-art agent in Scotland Yard, an imperfect information game that illustrates the value of guided search, learning, and game-theoretic reasoning.
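
At a very high level, the unification is an AlphaZero-style loop in which the search step also handles imperfect information by reasoning over public states and counterfactual values. None of the real machinery fits in a few lines, but the outer loop looks roughly like the sketch below; every function name here is a hypothetical placeholder, not the paper's API.

```python
# Hypothetical outer loop for a Player-of-Games-style agent: self-play
# games generate search-improved targets that train the value/policy net.
def train(net, env, num_iterations, games_per_iter):
    replay = []
    for _ in range(num_iterations):
        for _ in range(games_per_iter):
            state = env.reset()
            while not state.is_terminal():
                # Guided search: grow a tree over public states, using the
                # net at the leaves; in imperfect-information games the
                # search returns counterfactual values per public state
                # rather than a single score.
                policy, values = guided_search(net, state)
                replay.append((state.observation(), policy, values))
                state = state.apply(sample_action(policy))
        # Learning: regress the net toward the search's improved targets.
        net.fit(replay)
        replay.clear()
    return net
```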
And remember my friend, future events such as these will affect you in the future
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: Google DeepMind News and Discussions

Post by Yuli Ban »

Language Modelling at Scale
Language, and its role in demonstrating and facilitating comprehension - or intelligence - is a fundamental part of being human. It gives people the ability to communicate thoughts and concepts, express ideas, create memories, and build mutual understanding. These are foundational parts of social intelligence. It’s why our teams at DeepMind study aspects of language processing and communication, both in artificial agents and in humans.

As part of a broader portfolio of AI research, we believe the development and study of more powerful language models – systems that predict and generate text – have tremendous potential for building advanced AI systems that can be used safely and efficiently to summarise information, provide expert advice and follow instructions via natural language. Developing beneficial language models requires research into their potential impacts, including the risks they pose. This includes collaboration between experts from varied backgrounds to thoughtfully anticipate and address the challenges that training algorithms on existing datasets can create.

Today we are releasing three papers on language models that reflect this interdisciplinary approach. They include a detailed study of a 280 billion parameter transformer language model called Gopher, a study of ethical and social risks associated with large language models, and a paper investigating a new architecture with better training efficiency.

Gopher - A 280 billion parameter language model
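
For a rough sanity check on where a number like 280 billion comes from: the paper lists Gopher at 80 layers with d_model = 16384 (and, assuming here, a 32k vocabulary). The usual 12·L·d² rule of thumb for the transformer blocks already lands within about 10% of the headline figure; relative-position parameters and other extras make up the rest.

```python
# Back-of-the-envelope parameter count for Gopher's reported shape:
# 80 layers, d_model = 16384 (assuming a 32k vocabulary).
layers, d_model, vocab = 80, 16_384, 32_000

attention = 4 * d_model * d_model       # Q, K, V and output projections
feedforward = 8 * d_model * d_model     # two d -> 4d -> d matrices
blocks = layers * (attention + feedforward)
embeddings = vocab * d_model

total = blocks + embeddings
print(f"~{total/1e9:.0f}B parameters")  # ~258B, close to the quoted 280B
```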

Wow!... a 280 billion parameter model that outperforms GPT-3 by a large margin in lots of different MMLU tasks. See the chart on page 56 of the paper:

https://storage.googleapis.com/deepmind ... Gopher.pdf

I seem to recall that poor performance on the RACE-h dataset by GPT-3 was a reason to believe "language models will never learn to understand language" or something like that. But the improvement of the new, larger model over GPT-3 is massive. Improvement on physical commonsense intuition is likewise massive.
And remember my friend, future events such as these will affect you in the future
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: Google DeepMind News and Discussions

Post by Yuli Ban »

DeepMind debuts massive language A.I. that approaches human-level reading comprehension
DeepMind, the London-based A.I. research company that is owned by Google-parent Alphabet, has created an artificial intelligence algorithm that can perform a wide range of language tasks—from reading comprehension to answering questions on a broad range of subjects—better than any existing similar software. In a few areas, such as a high school reading comprehension test, the software approaches human-level performance. But in others, including common sense reasoning and mathematical reasoning, the system falls well short of human abilities.

In announcing the new language model Wednesday, DeepMind signaled its intent to play a larger role in advancing natural language processing. The company is best known for creating an A.I. system that could beat the world’s top human player in the strategy game Go, a major milestone in computer science, and it recently achieved a breakthrough in using A.I. to predict the structure of proteins. But DeepMind has done far less work on natural language processing (NLP) than rival labs, such as OpenAI, the San Francisco-based A.I. research company, and the A.I. research arms of Facebook, Microsoft, Alibaba, Baidu, and even its sister company Google.
And remember my friend, future events such as these will affect you in the future
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: Google DeepMind News and Discussions

Post by Yuli Ban »

A common vision from science fiction is that robots will one day inhabit our physical spaces, sense the world as we do, assist our physical labours, and communicate with us through natural language. Here we study how to design artificial agents that can interact naturally with humans using the simplification of a virtual environment. We show that imitation learning of human-human interactions in a simulated world, in conjunction with self-supervised learning, is sufficient to produce a multimodal interactive agent, which we call MIA, that successfully interacts with non-adversarial humans 75% of the time. We further identify architectural and algorithmic techniques that improve performance, such as hierarchical action selection. Altogether, our results demonstrate that imitation of multi-modal, real-time human behaviour may provide a straightforward and surprisingly effective means of imbuing agents with a rich behavioural prior from which agents might then be fine-tuned for specific purposes, thus laying a foundation for training capable agents for interactive robots or digital assistants. A video of MIA’s behaviour may be found here.
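
The core recipe, imitation learning from human-human interaction data, is at bottom supervised learning: predict the human's next action (and utterance) from the observation history. A minimal behavioural-cloning sketch in PyTorch, with made-up shapes standing in for MIA's actual multimodal encoder:

```python
import torch
import torch.nn as nn

# Toy stand-in for MIA's multimodal encoder: maps an observation embedding
# to a distribution over a discrete action set (real MIA also emits text).
OBS_DIM, NUM_ACTIONS = 512, 64
policy = nn.Sequential(
    nn.Linear(OBS_DIM, 256), nn.ReLU(), nn.Linear(256, NUM_ACTIONS))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Behavioural cloning: maximise the likelihood of the human's actions.
for step in range(100):
    obs = torch.randn(32, OBS_DIM)                        # fake observations
    human_actions = torch.randint(0, NUM_ACTIONS, (32,))  # fake demonstrations
    loss = loss_fn(policy(obs), human_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```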
And remember my friend, future events such as these will affect you in the future
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: Google DeepMind News and Discussions

Post by Yuli Ban »

In a paper published today in the scientific journal Science, DeepMind demonstrates how neural networks can be used to describe electron interactions in chemical systems more accurately than existing methods.

Density Functional Theory, established in the 1960s, describes the mapping between electron density and interaction energy. For more than 50 years, the exact form of that mapping—the so-called density functional—has remained unknown. In a significant advance for the field, DeepMind has shown that neural networks can be used to build a more accurate map between electron density and interaction energy than was previously attainable.

By expressing the functional as a neural network and incorporating exact properties into the training data, DeepMind was able to train the model to learn functionals free from two important systematic errors—the delocalisation error and spin symmetry breaking—resulting in a better description of a broad class of chemical reactions.
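
Concretely, "expressing the functional as a neural network" means replacing a hand-designed exchange-correlation formula with a learned map from local density features to an energy density, trained against accurate reference energies. A deliberately simplified sketch of that idea (a generic illustration, not DeepMind's actual DM21 architecture or input features):

```python
import torch
import torch.nn as nn

# Toy learned functional: per-grid-point density features -> energy density.
# Real inputs would be spin densities, gradients, etc. on a quadrature grid.
FEATURES = 8
functional = nn.Sequential(
    nn.Linear(FEATURES, 64), nn.GELU(), nn.Linear(64, 1))

def total_energy(features, weights):
    """Integrate the learned energy density over the grid (quadrature sum)."""
    return (functional(features).squeeze(-1) * weights).sum()

# Training regresses predicted energies toward accurate reference data;
# per the paper, exact constraints are baked into the training set itself.
optimizer = torch.optim.Adam(functional.parameters(), lr=1e-3)
features = torch.randn(1000, FEATURES)  # fake grid features
weights = torch.rand(1000)              # fake quadrature weights
reference = torch.tensor(-1.0)          # fake reference energy
for step in range(200):
    loss = (total_energy(features, weights) - reference) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```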
And remember my friend, future events such as these will affect you in the future
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: Google DeepMind News and Discussions

Post by Yuli Ban »

Bigger is better—or at least that’s been the attitude of those designing AI language models in recent years. But now DeepMind is questioning this rationale, and says giving an AI a memory can help it compete with models 25 times its size.
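
The "memory" is retrieval: rather than storing all of its knowledge in the weights, DeepMind's RETRO model looks up nearest-neighbour text chunks from a large database and conditions on them while decoding. A toy sketch of the lookup step, with random embeddings in place of a real encoder:

```python
import numpy as np

# Toy retrieval "memory": embed text chunks, then fetch nearest neighbours
# for the current context so the model can condition on them.
rng = np.random.default_rng(0)
DIM = 64
database = ["chunk %d ..." % i for i in range(10_000)]
db_embeddings = rng.normal(size=(len(database), DIM))
db_embeddings /= np.linalg.norm(db_embeddings, axis=1, keepdims=True)

def retrieve(context_embedding, k=4):
    """Return the k chunks most similar to the context (cosine similarity)."""
    scores = db_embeddings @ context_embedding
    top = np.argsort(scores)[-k:][::-1]
    return [database[i] for i in top]

query = rng.normal(size=DIM)
query /= np.linalg.norm(query)
neighbours = retrieve(query)
# A RETRO-style model cross-attends to these neighbours while decoding.
```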

When OpenAI released its GPT-3 model last June, it rewrote the rulebook for language AIs. The lab’s researchers showed that simply scaling up the size of a neural network and the data it was trained on could significantly boost performance on a wide variety of language tasks.

Since then, a host of other tech companies have jumped on the bandwagon, developing their own large language models and achieving similar boosts in performance. But despite the successes, concerns have been raised about the approach, most notably by former Google researcher Timnit Gebru.

In the paper that led to her being forced out of the company, Gebru and colleagues highlighted that the sheer size of these models and their datasets makes them even more inscrutable than the average neural network, which is already notorious for being a black box. This is likely to make detecting and mitigating bias in these models even harder.
And remember my friend, future events such as these will affect you in the future
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: Google DeepMind News and Discussions

Post by Yuli Ban »

Competitive programming with AlphaCode
Solving novel problems and setting a new milestone in competitive programming.

Creating solutions to unforeseen problems is second nature in human intelligence – a result of critical thinking informed by experience. The machine learning community has made tremendous progress in generating and understanding textual data, but advances in problem solving remain limited to relatively simple maths and programming problems, or else retrieving and copying existing solutions. As part of DeepMind’s mission to solve intelligence, we created a system called AlphaCode that writes computer programs at a competitive level. AlphaCode achieved an estimated rank within the top 54% of participants in programming competitions by solving new problems that require a combination of critical thinking, logic, algorithms, coding, and natural language understanding.

In our preprint, we detail AlphaCode, which uses transformer-based language models to generate code at an unprecedented scale, and then smartly filters to a small set of promising programs.
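
The "generate, then filter" funnel is simple to state: sample a huge number of candidate programs, discard any that fail the problem's public example tests, then cluster the survivors and submit a handful. A sketch of the filtering step; `sample_program`, `problem_statement`, and `example_tests` are hypothetical stand-ins, not the paper's code:

```python
import subprocess

def passes_examples(source: str, examples: list[tuple[str, str]]) -> bool:
    """Run a candidate Python program on each public example; keep it only
    if every output matches exactly."""
    for stdin, expected in examples:
        try:
            result = subprocess.run(
                ["python3", "-c", source], input=stdin,
                capture_output=True, text=True, timeout=2)
        except subprocess.TimeoutExpired:
            return False
        if result.stdout.strip() != expected.strip():
            return False
    return True

# AlphaCode-style funnel: sample many, filter on example tests, then pick
# a few survivors (the paper clusters them by behaviour before submitting).
# `sample_program` etc. are hypothetical stand-ins for the language model.
candidates = [sample_program(problem_statement) for _ in range(100_000)]
survivors = [c for c in candidates if passes_examples(c, example_tests)]
submissions = survivors[:10]
```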

We validated our performance on Codeforces, a popular platform that runs regular competitions attracting tens of thousands of participants from around the world who come to test their coding skills. We selected 10 recent contests for evaluation, each newer than our training data. AlphaCode placed at about the level of the median competitor, marking the first time an AI code generation system has reached a competitive level of performance in programming competitions.

To help others build on our results, we’re releasing our dataset of competitive programming problems and solutions on GitHub, including extensive tests to ensure the programs that pass these tests are correct — a critical feature current datasets lack. We hope this benchmark will lead to further innovations in problem solving and code generation.
starspawn0 wrote:
Wow!!!!!! I'm really impressed! That's much more impressive than AlphaGo, in my opinion, since AlphaGo is restricted to a very narrow problem -- this new work, however, is the beginning of a revolution, mark my words! I'd also say it's much more impressive than OpenAI's Codex. There's no comparison.

Scary times ahead!....
These programming competition problems (I used to compete in them about 30 to 40 years ago, when I was much younger) are a quantum leap harder than the simple Codex-type problems. Maybe there's something being swept under the rug in this new work by DeepMind; but if not, then we're seeing some very deep progress in AI right now.

If you'd asked AI experts around 2017, as in Grace's survey, when they expected this to be a solved problem (at this level of performance), the typical answer would have been "2035 to 2050".

This is in another league of accomplishment compared to AlphaGo, AlphaFold, and most of the other work by DeepMind.
And remember my friend, future events such as these will affect you in the future
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: Google DeepMind News and Discussions

Post by Yuli Ban »

DeepMind has created an AI system named AlphaCode that it says “writes computer programs at a competitive level.” The Alphabet subsidiary tested its system against coding challenges used in human competitions and found that its program achieved an “estimated rank” placing it within the top 54 percent of human coders. The result is a significant step forward for autonomous coding, says DeepMind, though AlphaCode’s skills are not necessarily representative of the sort of programming tasks faced by the average coder.

Oriol Vinyals, principal research scientist at DeepMind, told The Verge over email that the research was still in the early stages but that the results brought the company closer to creating a flexible problem-solving AI — a program that can autonomously tackle coding challenges that are currently the domain of humans only. “In the longer-term, we’re excited by [AlphaCode’s] potential for helping programmers and non-programmers write code, improving productivity or creating new ways of making software,” said Vinyals.
Just holy SHIT
And remember my friend, future events such as these will affect you in the future