Google AI and DeepMind News and Discussions
Re: Google DeepMind News and Discussions
Acquisition of Chess Knowledge in AlphaZero
Researchers at DeepMind and Google Brain, in collaboration with Grandmaster Vladimir Kramnik, are exploring what chess can teach us about AI and vice versa. Using ChessBase’s extensive historical chess data along with the AlphaZero neural network chess engine and components from Stockfish 8, they ask: what can we learn about chess history by studying AlphaZero, how does AlphaZero learn to evaluate positions, and is AlphaZero computing anything human-like? Their paper, “Acquisition of Chess Knowledge in AlphaZero”, has just been published.
Re: Google DeepMind News and Discussions
DeepMind, the AI research laboratory funded by Google’s parent company, Alphabet, today published the results of a collaboration between it and mathematicians to apply AI toward discovering new insights in areas of mathematics. DeepMind claims that its AI technology helped to uncover a new formula for a previously-unsolved conjecture, as well as a connection between different areas of mathematics elucidated by studying the structure of knots.
DeepMind’s experiments with AI run the gamut from systems that can win at StarCraft II and Go to machine learning models for app recommendations and datacenter cooling optimization. But the sciences remain of principal interest to DeepMind, not least because of their commercial applications. Earlier this year, DeepMind cofounder Demis Hassabis announced the launch of Isomorphic Labs, which will use machine learning to identify disease treatments that have thus far eluded researchers. Separately, the lab has spotlighted its work in the fields of weather forecasting, materials modeling, and atomic energy computation.
“At DeepMind, we believe that AI techniques are already sufficient to have a foundational impact in accelerating scientific progress across many different disciplines,” DeepMind machine learning specialist Alex Davies said in a statement. “Pure maths is one example of such a discipline, and we hope that [our work] can inspire other researchers to consider the potential for AI as a useful tool in the field.”
Re: Google DeepMind News and Discussions
Games have a long history of serving as a benchmark for progress in artificial intelligence. Recently, approaches using search and learning have shown strong performance across a set of perfect information games, and approaches using game-theoretic reasoning and learning have shown strong performance for specific imperfect information poker variants. We introduce Player of Games, a general-purpose algorithm that unifies previous approaches, combining guided search, self-play learning, and game-theoretic reasoning. Player of Games is the first algorithm to achieve strong empirical performance in large perfect and imperfect information games -- an important step towards truly general algorithms for arbitrary environments. We prove that Player of Games is sound, converging to perfect play as available computation time and approximation capacity increase. Player of Games reaches strong performance in chess and Go, beats the strongest openly available agent in heads-up no-limit Texas hold'em poker (Slumbot), and defeats the state-of-the-art agent in Scotland Yard, an imperfect information game that illustrates the value of guided search, learning, and game-theoretic reasoning.
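For intuition about the "guided search" piece, here is a minimal sketch of search that backs off to a learned evaluator at the leaves. To be clear, this is not Player of Games itself (the real algorithm also brings game-theoretic reasoning to imperfect-information games); it only illustrates the generic pattern, and the game interface and value_net below are hypothetical stand-ins.

# Generic sketch: depth-limited negamax that consults a learned value
# function at the leaves. Not DeepMind's algorithm; `game` and `value_net`
# are hypothetical interfaces assumed for illustration.
def guided_search(game, state, value_net, depth):
    """Estimated value of `state` for the player to move."""
    if game.is_terminal(state):
        return game.terminal_value(state)
    if depth == 0:
        return value_net(state)  # trust the learned evaluator at the horizon
    best = float("-inf")
    for action in game.legal_actions(state):
        child = game.apply(state, action)
        # Negamax: a child's value from the opponent's perspective is negated.
        best = max(best, -guided_search(game, child, value_net, depth - 1))
    return best

def choose_action(game, state, value_net, depth=3):
    """Pick the move whose resulting position searches best."""
    return max(game.legal_actions(state),
               key=lambda a: -guided_search(game, game.apply(state, a),
                                            value_net, depth - 1))

The interesting part of the paper is making a loop like this sound in imperfect-information games as well, which plain negamax is not.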
Re: Google DeepMind News and Discussions
Language Modelling at Scale
Language, and its role in demonstrating and facilitating comprehension - or intelligence - is a fundamental part of being human. It gives people the ability to communicate thoughts and concepts, express ideas, create memories, and build mutual understanding. These are foundational parts of social intelligence. It’s why our teams at DeepMind study aspects of language processing and communication, both in artificial agents and in humans.
As part of a broader portfolio of AI research, we believe the development and study of more powerful language models – systems that predict and generate text – have tremendous potential for building advanced AI systems that can be used safely and efficiently to summarise information, provide expert advice and follow instructions via natural language. Developing beneficial language models requires research into their potential impacts, including the risks they pose. This includes collaboration between experts from varied backgrounds to thoughtfully anticipate and address the challenges that training algorithms on existing datasets can create.
Today we are releasing three papers on language models that reflect this interdisciplinary approach. They include a detailed study of a 280 billion parameter transformer language model called Gopher, a study of ethical and social risks associated with large language models, and a paper investigating a new architecture with better training efficiency.
Gopher - A 280 billion parameter language model
Wow!... a 280 billion parameter model that outperforms GPT-3 by a large margin in lots of different MMLU tasks. See the chart on page 56 of the paper:
https://storage.googleapis.com/deepmind ... Gopher.pdf
I seem to recall that poor performance on the RACE-h dataset by GPT-3 was a reason to believe "language models will never learn to understand language" or something like that. But the improvement of the new, larger model over GPT-3 is massive. Improvement on physical commonsense intuition is likewise massive.
Re: Google DeepMind News and Discussions
DeepMind debuts massive language A.I. that approaches human-level reading comprehension
DeepMind, the London-based A.I. research company that is owned by Google-parent Alphabet, has created an artificial intelligence algorithm that can perform a wide range of language tasks—from reading comprehension to answering questions on a broad range of subjects—better than any existing similar software. In a few areas, such as a high school reading comprehension test, the software approaches human-level performance. But in others, including common sense reasoning and mathematical reasoning, the system fell well short of human abilities.
In announcing the new language model Wednesday, DeepMind signaled its intent to play a larger role in advancing natural language processing. The company is best known for creating an A.I. system that could beat the world’s top human player in the strategy game Go, a major milestone in computer science, and it recently achieved a breakthrough in using A.I. to predict the structure of proteins. But DeepMind has done far less work on natural language processing (NLP) than rival labs, such as OpenAI, the San Francisco-based A.I. research company, and the A.I. research arms of Facebook, Microsoft, Alibaba, Baidu, and even its sister company Google.
Re: Google DeepMind News and Discussions
A common vision from science fiction is that robots will one day inhabit our physical spaces, sense the world as we do, assist our physical labours, and communicate with us through natural language. Here we study how to design artificial agents that can interact naturally with humans using the simplification of a virtual environment. We show that imitation learning of human-human interactions in a simulated world, in conjunction with self-supervised learning, is sufficient to produce a multimodal interactive agent, which we call MIA, that successfully interacts with non-adversarial humans 75% of the time. We further identify architectural and algorithmic techniques that improve performance, such as hierarchical action selection. Altogether, our results demonstrate that imitation of multi-modal, real-time human behaviour may provide a straightforward and surprisingly effective means of imbuing agents with a rich behavioural prior from which agents might then be fine-tuned for specific purposes, thus laying a foundation for training capable agents for interactive robots or digital assistants. A video of MIA’s behaviour may be found here.
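For anyone wondering what "imitation learning of human-human interactions" amounts to mechanically, the core ingredient is behavioural cloning: supervised learning that predicts the recorded human's action from the current observation. Here is a toy sketch with made-up sizes and random stand-in data (MIA itself is multimodal and adds self-supervised objectives and hierarchical action selection on top).

# Minimal behavioural-cloning sketch. All shapes and data are placeholders;
# a real agent would consume vision + language observations, not a vector.
import torch
import torch.nn as nn

obs_dim, n_actions = 128, 16
policy = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                       nn.Linear(256, n_actions))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for a dataset of (observation, human_action) pairs.
observations = torch.randn(1024, obs_dim)
human_actions = torch.randint(0, n_actions, (1024,))

for epoch in range(5):
    logits = policy(observations)
    loss = loss_fn(logits, human_actions)   # imitate the human's choice
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()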
Re: Google DeepMind News and Discussions
In a paper published today in the scientific journal Science, DeepMind demonstrates how neural networks can be used to describe electron interactions in chemical systems more accurately than existing methods.
Density Functional Theory, established in the 1960s, relates a system's electron density to its interaction energy. For more than 50 years, however, the exact form of that mapping—the so-called density functional—has remained unknown. In a significant advance for the field, DeepMind has shown that neural networks can be used to build a more accurate map of the density and interaction between electrons than was previously attainable.
By expressing the functional as a neural network and incorporating exact properties into the training data, DeepMind was able to train the model to learn functionals free from two important systematic errors—the delocalisation error and spin symmetry breaking—resulting in a better description of a broad class of chemical reactions.
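Concretely, "expressing the functional as a neural network" means replacing a hand-designed exchange-correlation formula with a learned function of density features, evaluated at each point of an integration grid and summed. A toy sketch of just that forward evaluation is below; the weights, features, and grid are all made up, and DeepMind's published functional uses richer inputs and trains against data that encodes the exact constraints mentioned above.

# Toy "functional as a neural network": E_xc approximated as a quadrature
# sum of a learned local energy density. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 16)), np.zeros(16)   # tiny random MLP
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def learned_energy_density(rho, grad_rho):
    """Per-grid-point energy density from local features (toy MLP)."""
    x = np.stack([rho, grad_rho], axis=-1)
    h = np.tanh(x @ W1 + b1)
    return (h @ W2 + b2).squeeze(-1)

def exc(rho, grad_rho, grid_weights):
    """E_xc ~= sum_i w_i * rho_i * f_theta(features_i) over the grid."""
    return np.sum(grid_weights * rho * learned_energy_density(rho, grad_rho))

# Fake grid data, just to show the call.
rho = rng.uniform(0.0, 1.0, size=1000)
grad_rho = rng.uniform(0.0, 1.0, size=1000)
weights = np.full(1000, 1e-3)
print(exc(rho, grad_rho, weights))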
Re: Google DeepMind News and Discussions
Bigger is better—or at least that’s been the attitude of those designing AI language models in recent years. But now DeepMind is questioning this rationale, and says giving an AI a memory can help it compete with models 25 times its size.
When OpenAI released its GPT-3 model last June, it rewrote the rulebook for language AIs. The lab’s researchers showed that simply scaling up the size of a neural network and the data it was trained on could significantly boost performance on a wide variety of language tasks.
Since then, a host of other tech companies have jumped on the bandwagon, developing their own large language models and achieving similar boosts in performance. But despite the successes, concerns have been raised about the approach, most notably by former Google researcher Timnit Gebru.
In the paper that led to her being forced out of the company, Gebru and colleagues highlighted that the sheer size of these models and their datasets makes them even more inscrutable than the average neural network, which is already notorious for being a black box. This is likely to make detecting and mitigating bias in these models even harder.
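The "memory" here is retrieval: at inference time the model looks up the most similar chunks of text in a large database and conditions on them, so the network's parameters don't have to store everything. A rough sketch of the lookup step follows; the embedding function, the three-item database, and the final answer step are placeholders, and DeepMind's retrieval model folds retrieved chunks in through cross-attention inside the network rather than by concatenating them onto a prompt.

# Sketch of retrieve-then-condition. Everything here is a stand-in.
import numpy as np

def embed(text):
    """Placeholder embedding: hash characters into a fixed-size vector."""
    v = np.zeros(64)
    for i, ch in enumerate(text.encode()):
        v[(i + ch) % 64] += 1.0
    return v / (np.linalg.norm(v) + 1e-8)

database = ["chunk about chess openings",
            "chunk about protein folding",
            "chunk about plasma control"]
index = np.stack([embed(c) for c in database])     # precomputed chunk vectors

def retrieve(query, k=2):
    scores = index @ embed(query)                  # similarity to each chunk
    return [database[i] for i in np.argsort(scores)[::-1][:k]]

def answer(query):
    context = "\n".join(retrieve(query))
    # A real system would condition a language model on context + query here.
    return "[model conditioned on]\n" + context + "\n---\n" + query

print(answer("How does magnetic confinement work?"))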
Re: Google DeepMind News and Discussions
Competitive programming with AlphaCode
starspawn0 wrote:
Solving novel problems and setting a new milestone in competitive programming.
Creating solutions to unforeseen problems is second nature in human intelligence – a result of critical thinking informed by experience. The machine learning community has made tremendous progress in generating and understanding textual data, but advances in problem solving remain limited to relatively simple maths and programming problems, or else retrieving and copying existing solutions. As part of DeepMind’s mission to solve intelligence, we created a system called AlphaCode that writes computer programs at a competitive level. AlphaCode achieved an estimated rank within the top 54% of participants in programming competitions by solving new problems that require a combination of critical thinking, logic, algorithms, coding, and natural language understanding.
In our preprint, we detail AlphaCode, which uses transformer-based language models to generate code at an unprecedented scale, and then smartly filters to a small set of promising programs.
We validated our performance using competitions hosted on Codeforces, a popular platform which hosts regular competitions that attract tens of thousands of participants from around the world who come to test their coding skills. We selected for evaluation 10 recent contests, each newer than our training data. AlphaCode placed at about the level of the median competitor, marking the first time an AI code generation system has reached a competitive level of performance in programming competitions.
To help others build on our results, we’re releasing our dataset of competitive programming problems and solutions on GitHub, including extensive tests to ensure the programs that pass these tests are correct — a critical feature current datasets lack. We hope this benchmark will lead to further innovations in problem solving and code generation.
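To make the "generate, then filter" loop above concrete, here is a rough sketch. The sampler is a trivial stand-in for the actual transformer model, and AlphaCode additionally clusters the surviving programs by behaviour before choosing its small set of submissions, but filtering against the problem's example tests is the step that cuts an enormous pool of samples down to a handful.

# Sketch of large-scale sampling followed by filtering on example tests.
# The "model" below always emits the same program; a real sampler would
# draw diverse candidates from a trained code model.
import subprocess
import sys

def passes_examples(program_source, examples):
    """Run a candidate Python program on each example input, compare output."""
    for stdin_text, expected in examples:
        try:
            result = subprocess.run([sys.executable, "-c", program_source],
                                    input=stdin_text, capture_output=True,
                                    text=True, timeout=2)
        except subprocess.TimeoutExpired:
            return False
        if result.stdout.strip() != expected.strip():
            return False
    return True

def filter_candidates(sample_program, examples, n_samples=1000):
    """Keep only the sampled programs that pass all provided example tests."""
    return [p for p in (sample_program() for _ in range(n_samples))
            if passes_examples(p, examples)]

examples = [("3 4\n", "7\n")]
survivors = filter_candidates(
    lambda: "a, b = map(int, input().split()); print(a + b)",
    examples, n_samples=20)
print(len(survivors), "candidates passed the example tests")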
Wow!!!!!! I'm really impressed! That's much more impressive than AlphaGo, in my opinion, since AlphaGo is restricted to a very narrow problem -- this new work, however, is the beginning of a revolution, mark my words! I'd also say it's much more impressive than OpenAI's Codex. There's no comparison.
Scary times ahead!....
These programming competition problems (I used to compete in them about 30 to 40 years ago, when I was much younger) are a quantum leap harder than simple Codex-type problems. Maybe there's something being swept under the rug in this new work by DeepMind; but if not, then we're seeing some very deep progress in AI right now.
If you'd asked AI experts around 2017 (as in Grace's survey) when they thought this would be a solved problem at this level of performance, the typical answer would have been in the range of 2035 to 2050.
This is in another league of accomplishment compared to AlphaGo, AlphaFold, and most of DeepMind's other work.
Re: Google DeepMind News and Discussions
Just holy SHIT.
DeepMind has created an AI system named AlphaCode that it says “writes computer programs at a competitive level.” The Alphabet subsidiary tested its system against coding challenges used in human competitions and found that its program achieved an “estimated rank” placing it within the top 54 percent of human coders. The result is a significant step forward for autonomous coding, says DeepMind, though AlphaCode’s skills are not necessarily representative of the sort of programming tasks faced by the average coder.
Oriol Vinyals, principal research scientist at DeepMind, told The Verge over email that the research was still in the early stages but that the results brought the company closer to creating a flexible problem-solving AI — a program that can autonomously tackle coding challenges that are currently the domain of humans only. “In the longer-term, we’re excited by [AlphaCode’s] potential for helping programmers and non-programmers write code, improving productivity or creating new ways of making software,” said Vinyals.
Re: Google DeepMind News and Discussions
As always, I and everyone else would love to grab your opinion on this.
Yuli Ban wrote: ↑Thu Feb 03, 2022 12:41 am
Just holy SHIT.
DeepMind has created an AI system named AlphaCode that it says “writes computer programs at a competitive level.” The Alphabet subsidiary tested its system against coding challenges used in human competitions and found that its program achieved an “estimated rank” placing it within the top 54 percent of human coders. The result is a significant step forward for autonomous coding, says DeepMind, though AlphaCode’s skills are not necessarily representative of the sort of programming tasks faced by the average coder.
Oriol Vinyals, principal research scientist at DeepMind, told The Verge over email that the research was still in the early stages but that the results brought the company closer to creating a flexible problem-solving AI — a program that can autonomously tackle coding challenges that are currently the domain of humans only. “In the longer-term, we’re excited by [AlphaCode’s] potential for helping programmers and non-programmers write code, improving productivity or creating new ways of making software,” said Vinyals.
Edit: I didn't see that you had already shared starspawn0's opinion above.
I'm gonna check Reddit and see what the hype train thinks.
Re: Google DeepMind News and Discussions
Starspawn0 also commented on this tweet:
Would this mean that AlphaCode is (soon) good enough to improve its own source code? How can it do that? Deep neural nets are like black boxes, where do you start improving them?
Here's a comment from Reddit/singularity:
This seems really important. If a larger language model (Gopher, which exists now) + inner monologue (easy implementation) can bring AlphaCode into the top 10%, that's kind of insane. They can build one today.
I don't know about that "light years away" comment. I also competed in programming competitions (though not for "thousands of hours", and I doubt this guy did either), though it's been a long, long time. Reaching about the 50th percentile in these competitions shows that the model is at least generating solutions that a human would say are "creative" (the people who compete in these competitions are young, yes, but they are people who already have considerable skill -- at hacking code together, yes, not doing software development -- and are usually quick on their feet). And I doubt that the 50th percentile is anywhere near the limit of what these models can achieve, given that they could have used a larger language model and didn't even use an inner monologue. Maybe if they had used Gopher it would have scored in the 75th percentile (getting maybe one or two more problems right per set); and then with an inner monologue, they might have boosted it to the 90th percentile or higher. And even if you take out the input-output examples, it might still score above the 75th percentile.
As to the comment about how the time limit is the main limiter, that is correct. However, that applies to just about any competition -- if you have more time, you can solve the problems, as you can "brute force" it (trying lots of combinations to see what works). Beating humans in a 6 hour test is really impressive! -- it shows that the probability distribution of "solutions" generated is not so spread out that the correct answers are too far to the tail to be found by a random search in reasonable time.
Here's another comment from Reddit/singularity:
This is massive. We knew it was coming with Codex, and now DeepMind has made it clear the next iteration of this is being built to be better than even the most skilled coders; there is little doubt future versions of this will be superhuman in their abilities. When Codex came out months ago, we knew it was over, and now we have something which DeepMind, if they really wanted, could bring up to speed very quickly, use to replace the average human coder, and rent out to software firms for a fraction of the price.
They won't do that now, because it's still a prototype and that would be a shit storm that no one wants, but this technology is coming and it will absolutely be orders of magnitude better and faster than even the best programmers.
Re: Google DeepMind News and Discussions
AlphaCode as a dog speaking mediocre English
Tonight, I took the time actually to read DeepMind’s AlphaCode paper, and to work through the example contest problems provided, and understand how I would’ve solved those problems, and how AlphaCode solved them.
It is absolutely astounding.
Consider, for example, the “n singers” challenge (pages 59-60). To solve this well, you first need to parse a somewhat convoluted English description, discarding the irrelevant fluff about singers, in order to figure out that you’re being asked to find a positive integer solution (if it exists) to a linear system whose matrix looks like
1 2 3 4
4 1 2 3
3 4 1 2
2 3 4 1.
Next you need to find a trick for solving such a system without Gaussian elimination or the like (I’ll leave that as an exercise…). Finally, you need to generate code that implements that trick, correctly handling the wraparound at the edges of the matrix, and breaking and returning “NO” for any of multiple possible reasons why a positive integer solution won’t exist. Oh, and also correctly parse the input.
Yes, I realize that AlphaCode generates a million candidate programs for each challenge, then discards the vast majority by checking that they don’t work on the example data provided, then still has to use clever tricks to choose from among the thousands of candidates remaining. I realize that it was trained on tens of thousands of contest problems and millions of solutions to those problems. I realize that it “only” solves about a third of the contest problems, making it similar to a mediocre human programmer on these problems. I realize that it works only in the artificial domain of programming contests, where a complete English problem specification and example inputs and outputs are always provided.
Forget all that. Judged against where AI was 20-25 years ago, when I was a student, a dog is now holding meaningful conversations in English. And people are complaining that the dog isn’t a very eloquent orator, that it often makes grammatical errors and has to start again, that it took heroic effort to train it, and that it’s unclear how much the dog really understands.
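For anyone curious about the trick the author leaves as an exercise: if the matrix really has the cyclic form above (entry (i, j) equal to ((j - i) mod n) + 1), then every row sums to n(n+1)/2, so adding all the equations pins down S = x_1 + ... + x_n, and subtracting cyclically adjacent rows gives S - n*x_i = b_i - b_(i+1). That yields an O(n) solution; a sketch in Python is below (the real contest problem's input format and exact constraints will differ, so treat this as the idea rather than a submission).

# Solve A x = b for the cyclic matrix A[i][j] = ((j - i) mod n) + 1,
# requiring positive integer x; returns None when no such solution exists.
def solve_cyclic_system(b):
    n = len(b)
    total = sum(b)
    row_sum = n * (n + 1) // 2            # every row of A sums to this
    if total % row_sum:
        return None                       # S = sum(x) would not be an integer
    s = total // row_sum
    x = []
    for i in range(n):
        t = s - b[i] + b[(i + 1) % n]     # from S - n*x_i = b_i - b_(i+1)
        if t <= 0 or t % n:
            return None                   # x_i must be a positive integer
        x.append(t // n)
    return x

print(solve_cyclic_system([30, 24, 22, 24]))   # -> [1, 2, 3, 4]
print(solve_cyclic_system([10, 10, 10, 11]))   # -> None, i.e. print "NO"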
Re: Google DeepMind News and Discussions
I wonder if the critics will ever be satisfied. Even when we have super AI that can do everything we can do and more, they might say: "Yes, it can solve those math problems which I can't, but it doesn't truly understand as we do."
Re: Google DeepMind News and Discussions
I don't think I will be happy even with human-level AGI; it will be more of a sign that I will be happy soon.
I want superhuman-level AGI because I want, through simulation or other means, to live in a perfect replica of a past Earth (e.g. Earth in 2022) as myself, but then modify certain things so I am living out a fantasy.
Re: Google DeepMind News and Discussions
YouTube is now using an A.I. that previously mastered games including chess and Go to compress its videos
YouTube has begun using an algorithm first developed to conquer board games such as chess and Go to improve its video compression.
The artificial intelligence algorithm, called MuZero, was developed by YouTube’s London-based sister company within Alphabet, DeepMind, which is dedicated to advanced A.I. research. When applied to YouTube videos, the system has resulted in a 4% reduction on average in the amount of data the video-sharing service needs to stream to users, with no noticeable loss in video quality.
While that might not sound like a major improvement, given YouTube’s scale it represents major savings in computing power and bandwidth. It also will help people in countries with very limited broadband to watch video content they would otherwise struggle to view, Anton Zhernov, a DeepMind researcher who worked to adapt the algorithm for YouTube, said. Already, video streaming occupies a good chunk of the world's internet capacity, and that figure is only expected to climb.
The system is now in active use across most, but not all, of the videos on YouTube, Zhernov said. The A.I. system specifically works to improve on an open-source video compression method called VP9 that is widely used by YouTube, although some of its content is compressed using other protocols.
Re: Google DeepMind News and Discussions
The inside of a tokamak—the doughnut-shaped vessel designed to contain a nuclear fusion reaction—presents a special kind of chaos. Hydrogen atoms are smashed together at unfathomably high temperatures, creating a whirling, roiling plasma that’s hotter than the surface of the sun. Finding smart ways to control and confine that plasma will be key to unlocking the potential of nuclear fusion, which has been mooted as the clean energy source of the future for decades. At this point, the science underlying fusion seems sound, so what remains is an engineering challenge. “We need to be able to heat this matter up and hold it together for long enough for us to take energy out of it,” says Ambrogio Fasoli, director of the Swiss Plasma Center at École Polytechnique Fédérale de Lausanne in Switzerland.
That’s where DeepMind comes in. The artificial intelligence firm, backed by Google parent company Alphabet, has previously turned its hand to video games and protein folding, and has been working on a joint research project with the Swiss Plasma Center to develop an AI for controlling a nuclear fusion reaction.
In stars, which are also powered by fusion, the sheer gravitational mass is enough to pull hydrogen atoms together and overcome their opposing charges. On Earth, scientists instead use powerful magnetic coils to confine the nuclear fusion reaction, nudging it into the desired position and shaping it like a potter manipulating clay on a wheel. The coils have to be carefully controlled to prevent the plasma from touching the sides of the vessel: this can damage the walls and slow down the fusion reaction. (There’s little risk of an explosion as the fusion reaction cannot survive without magnetic confinement).