Google AI and DeepMind News and Discussions

User avatar
Ozzie guy
Posts: 508
Joined: Sun May 16, 2021 4:40 pm

Re: Google DeepMind News and Discussions

Post by Ozzie guy »

Yuli Ban wrote: Thu Feb 03, 2022 12:41 am
DeepMind has created an AI system named AlphaCode that it says “writes computer programs at a competitive level.” The Alphabet subsidiary tested its system against coding challenges used in human competitions and found that its program achieved an “estimated rank” placing it within the top 54 percent of human coders. The result is a significant step forward for autonomous coding, says DeepMind, though AlphaCode’s skills are not necessarily representative of the sort of programming tasks faced by the average coder.

Oriol Vinyals, principal research scientist at DeepMind, told The Verge over email that the research was still in the early stages but that the results brought the company closer to creating a flexible problem-solving AI — a program that can autonomously tackle coding challenges that are currently the domain of humans only. “In the longer-term, we’re excited by [AlphaCode’s] potential for helping programmers and non-programmers write code, improving productivity or creating new ways of making software,” said Vinyals.
Just holy SHIT
As always, I (and everyone else) would love to hear your opinion on this. ;)

Edit: I didn't see that you had already shared Starspawn0's opinion above.

I'm gonna check Reddit and see what the hype train thinks.
User avatar
andmar74
Posts: 389
Joined: Mon May 24, 2021 9:10 am
Location: Denmark

Re: Google DeepMind News and Discussions

Post by andmar74 »

Starspawn0 also commented on this tweet:

I don't know about that "light years away" comment. I also competed in programming competitions (though not for "thousands of hours", and I doubt this guy did either), though it's been a long, long time. Reaching about the 50th percentile in these competitions shows that the model is at least generating solutions that a human would say are "creative" (the people who compete in these competitions are young, yes, but they are people who already have considerable skill -- at hacking code together, yes, not doing software development -- and are usually quick on their feet). And I doubt that the 50th percentile is anywhere near the limits of what these models can achieve, given that they could have used a larger language model and didn't even use an inner-monologue. Maybe if they had used Gopher it would have scored in the 75th percentile (getting maybe one or two more problems right per set); and then with an inner-monologue, they might have been able to boost it to the 90th percentile or higher. And even if you take out the input-output examples, it might still score above the 75th percentile.

As to the comment about how the time limit is the main limiter, that is correct. However, that applies to just about any competition -- if you have more time, you can solve the problems, as you can "brute force" it (trying lots of combinations to see what works). Beating humans in a 6-hour test is really impressive! -- it shows that the probability distribution of "solutions" generated is not so spread out that the correct answers are too far out in the tail to be found by a random search in reasonable time.
This seems really important. If a larger language model (Gopher, which exists now) plus an inner monologue (easy to implement) could bring AlphaCode into the top 10%, that's kind of insane. They could build one today.

Would this mean that AlphaCode is (or will soon be) good enough to improve its own source code? How would it do that? Deep neural nets are like black boxes; where do you start improving them?


Here's a comment from Reddit's r/singularity:
This is massive. We knew it was coming with Codex, and now DeepMind has made it clear that the next iteration of this is being built to be better than even the most skilled coders; there is little doubt future versions of this will be superhuman in their abilities. When Codex came out months ago, we knew it was over, and now we have something which DeepMind, if they really wanted, could bring up to speed very quickly, use to replace the average human coder, and rent out to software firms for a fraction of the price.

They won't do that now, because it's still a prototype and that would be a shit storm that no one wants, but this technology is coming and it will absolutely be orders of magnitude better and faster than even the best programmers.
User avatar
Yuli Ban
Posts: 4769
Joined: Sun May 16, 2021 4:44 pm

Re: Google DeepMind News and Discussions

Post by Yuli Ban »

AlphaCode as a dog speaking mediocre English
Tonight, I took the time actually to read DeepMind’s AlphaCode paper, and to work through the example contest problems provided, and understand how I would’ve solved those problems, and how AlphaCode solved them.

It is absolutely astounding.

Consider, for example, the “n singers” challenge (pages 59-60). To solve this well, you first need to parse a somewhat convoluted English description, discarding the irrelevant fluff about singers, in order to figure out that you’re being asked to find a positive integer solution (if it exists) to a linear system whose matrix looks like
1 2 3 4
4 1 2 3
3 4 1 2
2 3 4 1.
Next you need to find a trick for solving such a system without Gaussian elimination or the like (I’ll leave that as an exercise…). Finally, you need to generate code that implements that trick, correctly handling the wraparound at the edges of the matrix, and breaking and returning “NO” for any of multiple possible reasons why a positive integer solution won’t exist. Oh, and also correctly parse the input.
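
(Aside: a minimal sketch, in Python, of the kind of trick being hinted at, assuming the system is exactly the cyclic one shown above, i.e. row i is (1, 2, ..., n) rotated right by i; the real contest problem may add constraints this glosses over. Summing all n equations pins down S = x_1 + ... + x_n, and subtracting consecutive equations then isolates each x_i.)

# A rough sketch (mine, not AlphaCode's output) of one way to solve the cyclic
# system above: row i of the matrix is (1, 2, ..., n) rotated right by i, and we
# want a positive integer solution x, or "NO".
def solve_cyclic(b):
    n = len(b)
    total = sum(b)
    row_sum = n * (n + 1) // 2            # each column's coefficients sum to this
    if total % row_sum:
        return None                       # summing all equations: row_sum * S = total
    S = total // row_sum                  # S = x_1 + ... + x_n
    x = []
    for i in range(n):
        # equation i minus equation i+1 collapses to: S - n*x_i = b_i - b_{i+1}
        num = S - b[i] + b[(i + 1) % n]
        if num <= 0 or num % n:
            return None
        x.append(num // n)
    return x   # the differences plus the sum are equivalent to the original
               # system, so no further check is needed

print(solve_cyclic([30, 24, 22, 24]) or "NO")   # -> [1, 2, 3, 4]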

Yes, I realize that AlphaCode generates a million candidate programs for each challenge, then discards the vast majority by checking that they don’t work on the example data provided, then still has to use clever tricks to choose from among the thousands of candidates remaining. I realize that it was trained on tens of thousands of contest problems and millions of solutions to those problems. I realize that it “only” solves about a third of the contest problems, making it similar to a mediocre human programmer on these problems. I realize that it works only in the artificial domain of programming contests, where a complete English problem specification and example inputs and outputs are always provided.

Forget all that. Judged against where AI was 20-25 years ago, when I was a student, a dog is now holding meaningful conversations in English. And people are complaining that the dog isn’t a very eloquent orator, that it often makes grammatical errors and has to start again, that it took heroic effort to train it, and that it’s unclear how much the dog really understands.
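
For anyone curious what the "million candidates, then filter and cluster" pipeline amounts to in practice, here's a rough sketch in Python. The model call and the program runner are placeholders, not DeepMind's actual code; only the shape of the loop (sample, filter on the example I/O, cluster by behaviour, submit a handful) follows the description above.

# Rough sketch of the sample -> filter -> cluster -> submit loop described above.
# `model`, `run_program`, and `gen_inputs` are placeholders I made up.
from collections import defaultdict

def pick_submissions(problem, example_io, model, run_program, gen_inputs,
                     n_samples=1_000_000, k=10):
    # 1. Sample a huge pool of candidate programs from the language model.
    candidates = [model.sample(problem) for _ in range(n_samples)]

    # 2. Keep only candidates that reproduce the example outputs given in the problem.
    survivors = [c for c in candidates
                 if all(run_program(c, inp) == out for inp, out in example_io)]

    # 3. Cluster survivors by their behaviour on extra generated inputs, so that
    #    syntactically different but equivalent programs end up together.
    clusters = defaultdict(list)
    extra_inputs = gen_inputs(problem)
    for c in survivors:
        behaviour = tuple(run_program(c, inp) for inp in extra_inputs)
        clusters[behaviour].append(c)

    # 4. Submit one representative from each of the k largest clusters.
    largest = sorted(clusters.values(), key=len, reverse=True)[:k]
    return [cluster[0] for cluster in largest]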
And remember my friend, future events such as these will affect you in the future
User avatar
andmar74
Posts: 389
Joined: Mon May 24, 2021 9:10 am
Location: Denmark

Re: Google DeepMind News and Discussions

Post by andmar74 »

I wonder if the critics will ever be satisfied. Even when we have super AI that can do everything we can do and more, they might say: "Yes, it can solve those math problems which I can't, but it doesn't truly understand as we do."
User avatar
Ozzie guy
Posts: 508
Joined: Sun May 16, 2021 4:40 pm

Re: Google DeepMind News and Discussions

Post by Ozzie guy »

andmar74 wrote: Wed Feb 09, 2022 7:05 am I wonder if the critics will ever be satisfied. Even when we have super AI that can do everything we can do and more, they might say: "Yes, it can solve those math problems which I can't, but it doesn't truly understand as we do."
I don't think I will be happy even with human-level AGI; it will be more of a sign that I will be happy soon.

I want superhuman-level AGI, as I want to live, through simulation or other means, in a perfect replica of a past Earth (e.g. Earth in 2022) as myself, but then modify certain things so that I am living out a fantasy.
User avatar
Yuli Ban
Posts: 4769
Joined: Sun May 16, 2021 4:44 pm

Re: Google DeepMind News and Discussions

Post by Yuli Ban »

YouTube is now using A.I. that had previously mastered games including chess and Go on its videos
YouTube has begun using an algorithm first developed to conquer board games such as chess and Go to improve its video compression.

The artificial intelligence algorithm, called MuZero, was developed by YouTube’s London-based sister company within Alphabet, DeepMind, which is dedicated to advanced A.I. research. When applied to YouTube videos, the system has resulted in a 4% reduction on average in the amount of data the video-sharing service needs to stream to users, with no noticeable loss in video quality.

While that might not sound like a major improvement, given YouTube’s scale it is a major savings in computing power and bandwidth. It also will help people in countries with very limited broadband to watch video content they would otherwise struggle to view, Anton Zhernov, a DeepMind researcher who worked to adapt the algorithm for YouTube, said. Already, video streaming occupies a good chunk of the world's internet capacity, and that figure is only expected to climb.

The system is now in active use across most, but not all, of the videos on YouTube, Zhernov said. The A.I. system specifically works to improve on an open-source video compression method called VP9 that is widely used by YouTube, although some of its content is compressed using other protocols.
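
From DeepMind's own write-up, my understanding is that the agent picks the quantisation parameter (QP) for each frame, trading bits against quality. Very roughly, and with an encoder/agent interface invented purely for illustration (this is not libvpx's, VP9's, or MuZero's API), the decision loop looks like:

# Purely illustrative per-frame rate control loop; all interfaces are made up.
def encode_video(frames, encoder, agent, target_bits_per_frame):
    bits_used = 0
    for i, frame in enumerate(frames):
        # State: encoder statistics so far plus how much of the bit budget remains.
        budget_left = target_bits_per_frame * (i + 1) - bits_used
        qp = agent.choose_qp(encoder.stats(), budget_left)  # higher QP = fewer bits, lower quality
        packet = encoder.encode(frame, qp)
        bits_used += len(packet) * 8
    # The agent is trained so the stream stays under its bit budget while quality
    # (e.g. PSNR against the source frames) stays as high as possible.
    return encoder.finish()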
And remember my friend, future events such as these will affect you in the future
User avatar
Yuli Ban
Posts: 4769
Joined: Sun May 16, 2021 4:44 pm

Re: Google DeepMind News and Discussions

Post by Yuli Ban »

The inside of a tokamak—the doughnut-shaped vessel designed to contain a nuclear fusion reaction—presents a special kind of chaos. Hydrogen atoms are smashed together at unfathomably high temperatures, creating a whirling, roiling plasma that’s hotter than the surface of the sun. Finding smart ways to control and confine that plasma will be key to unlocking the potential of nuclear fusion, which has been mooted as the clean energy source of the future for decades. At this point, the science underlying fusion seems sound, so what remains is an engineering challenge. “We need to be able to heat this matter up and hold it together for long enough for us to take energy out of it,” says Ambrogio Fasoli, director of the Swiss Plasma Center at École Polytechnique Fédérale de Lausanne in Switzerland.

That’s where DeepMind comes in. The artificial intelligence firm, backed by Google parent company Alphabet, has previously turned its hand to video games and protein folding, and has been working on a joint research project with the Swiss Plasma Center to develop an AI for controlling a nuclear fusion reaction.

In stars, which are also powered by fusion, the sheer gravitational mass is enough to pull hydrogen atoms together and overcome their opposing charges. On Earth, scientists instead use powerful magnetic coils to confine the nuclear fusion reaction, nudging it into the desired position and shaping it like a potter manipulating clay on a wheel. The coils have to be carefully controlled to prevent the plasma from touching the sides of the vessel: this can damage the walls and slow down the fusion reaction. (There’s little risk of an explosion as the fusion reaction cannot survive without magnetic confinement).
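
To make the control problem concrete, here is a toy sketch of the kind of loop involved: a learned policy reads magnetic and plasma measurements and outputs a voltage command for each shaping coil, thousands of times per second. All the names below are invented for illustration, not DeepMind's or EPFL's actual software, and the real controller is trained in simulation before it ever touches the tokamak.

# Toy control loop: a learned policy maps plasma/magnetic measurements to one
# voltage command per shaping coil. All names here are invented for illustration.
import numpy as np

def run_shot(sensors, coils, policy, target_shape, n_steps, dt=1e-4):
    """Drive the shaping coils for one plasma shot of n_steps control ticks."""
    for _ in range(n_steps):
        obs = np.concatenate([sensors.read(), target_shape])  # measurements + desired shape
        voltages = policy(obs)                                 # one value per coil
        coils.apply(np.clip(voltages, -coils.v_max, coils.v_max))
        sensors.wait(dt)                                       # fixed, fast control tick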
And remember my friend, future events such as these will affect you in the future
User avatar
Yuli Ban
Posts: 4769
Joined: Sun May 16, 2021 4:44 pm

Re: Google DeepMind News and Discussions

Post by Yuli Ban »

Programming has long been a high-status, high-demand skill.
Companies and businesses across industries depend at a foundational level on the ability of human developers: people who write and understand the language of computers. Recently, with the advent of large language models, AI companies have begun to explore the possibilities of systems that can learn to code. OpenAI’s Codex — embedded into GitHub Copilot — was the first notable example. Codex can read simple natural language commands and instructions and write code that matches the intention of the user.
Yet, writing small programs and solving easy tasks is “far from the full complexity of real-world programming.” AI models like Codex lack the problem-solving skills that most programmers rely on in their day-to-day jobs. That’s the gap DeepMind wanted to fill with AlphaCode, an AI system that has been trained to “understand” natural language, design algorithms to solve problems, and then implement them into code.
AlphaCode displays a unique combination of natural language understanding and problem-solving ability with the statistical power characteristic of large language models. The system was tested against human programmers on the popular competitive programming platform Codeforces. AlphaCode achieved an average ranking in the top 54.3% across 10 contests, making it the first AI to reach the level of human programmers in competitive programming contests.
And remember my friend, future events such as these will affect you in the future
User avatar
Yuli Ban
Posts: 4769
Joined: Sun May 16, 2021 4:44 pm

Re: Google DeepMind News and Discussions

Post by Yuli Ban »

Restoring, placing, and dating ancient texts through collaboration between AI and historians.

The birth of human writing marked the dawn of History and is crucial to our understanding of past civilisations and the world we live in today. For example, more than 2,500 years ago, the Greeks began writing on stone, pottery, and metal to document everything from leases and laws to calendars and oracles, giving a detailed insight into the Mediterranean region. Unfortunately, it’s an incomplete record. Many of the surviving inscriptions have been damaged over the centuries or moved from their original location. In addition, modern dating techniques, such as radiocarbon dating, cannot be used on these materials, making inscriptions difficult and time-consuming to interpret.

In line with DeepMind’s mission of solving intelligence to advance science and humanity, we collaborated with the Department of Humanities of Ca' Foscari University of Venice, the Classics Faculty of the University of Oxford, and the Department of Informatics of the Athens University of Economics and Business to explore how machine learning can help historians better interpret these inscriptions – giving a richer understanding of ancient history and unlocking the potential for cooperation between AI and historians.
And remember my friend, future events such as these will affect you in the future
User avatar
Yuli Ban
Posts: 4769
Joined: Sun May 16, 2021 4:44 pm

Re: Google DeepMind News and Discussions

Post by Yuli Ban »

We investigate the optimal model size and number of tokens for training a transformer language model under a given compute budget. We find that current large language models are significantly undertrained, a consequence of the recent focus on scaling language models whilst keeping the amount of training data constant. By training over 400 language models ranging from 70 million to over 16 billion parameters on 5 to 500 billion tokens, we find that for compute-optimal training, the model size and the number of training tokens should be scaled equally: for every doubling of model size the number of training tokens should also be doubled. We test this hypothesis by training a predicted compute-optimal model, Chinchilla, that uses the same compute budget as Gopher but with 70B parameters and 4× more data. Chinchilla uniformly and significantly outperforms Gopher (280B), GPT-3 (175B), Jurassic-1 (178B), and Megatron-Turing NLG (530B) on a large range of downstream evaluation tasks. This also means that Chinchilla uses substantially less compute for fine-tuning and inference, greatly facilitating downstream usage. As a highlight, Chinchilla reaches a state-of-the-art average accuracy of 67.5% on the MMLU benchmark, greater than a 7% improvement over Gopher.
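
The headline rule (scale parameters and training tokens in equal proportion) is easy to turn into back-of-the-envelope arithmetic. A minimal sketch, using the common approximation that training compute is roughly 6 x parameters x tokens and anchoring on Chinchilla itself at 70B parameters and roughly 1.4T tokens; both of those are my assumptions for illustration, not formulas taken from the abstract:

# Back-of-the-envelope reading of "scale parameters and tokens together".
# Assumes training compute C ~ 6 * N * D and anchors on Chinchilla itself
# (70B parameters, ~1.4T tokens); both are my assumptions for illustration.
def compute_optimal(budget_flops, anchor_params=70e9, anchor_tokens=1.4e12):
    anchor_flops = 6 * anchor_params * anchor_tokens
    scale = (budget_flops / anchor_flops) ** 0.5   # N and D each grow as sqrt(C)
    return anchor_params * scale, anchor_tokens * scale

# Doubling the parameter count means doubling the tokens, hence 4x the compute:
n, d = compute_optimal(4 * 6 * 70e9 * 1.4e12)
print(f"{n / 1e9:.0f}B params, {d / 1e12:.1f}T tokens")   # ~140B params, ~2.8T tokens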
And remember my friend, future events such as these will affect you in the future
User avatar
Yuli Ban
Posts: 4769
Joined: Sun May 16, 2021 4:44 pm

Re: Google DeepMind News and Discussions

Post by Yuli Ban »

One key aspect of intelligence is the ability to quickly learn how to perform a new task when given a brief instruction. For instance, a child may recognise real animals at the zoo after seeing a few pictures of the animals in a book, despite differences between the two. But for a typical visual model to learn a new task, it must be trained on tens of thousands of examples specifically labelled for that task. If the goal is to count and identify animals in an image, as in “three zebras”, one would have to collect thousands of images and annotate each image with the animals’ quantity and species. This process is inefficient, expensive, and resource-intensive, requiring large amounts of annotated data and a new model to be trained each time it’s confronted with a new task. As part of DeepMind’s mission to solve intelligence, we’ve explored whether an alternative model could make this process easier and more efficient, given only limited task-specific information.

Today, in the preprint of our paper, we introduce Flamingo, a single visual language model (VLM) that sets a new state of the art in few-shot learning on a wide range of open-ended multimodal tasks. This means Flamingo can tackle a number of difficult problems with just a handful of task-specific examples (in a “few shots”), without any additional training required. Flamingo’s simple interface makes this possible, taking as input a prompt consisting of interleaved images, videos, and text, and then outputting associated language.

Similar to the behaviour of large language models (LLMs), which can address a language task by processing examples of the task in their text prompt, Flamingo’s visual and text interface can steer the model towards solving a multimodal task. Given a few example pairs of visual inputs and expected text responses composed in Flamingo’s prompt, the model can be asked a question with a new image or video, and then generate an answer.
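
The easiest way to picture that interface is as a single interleaved sequence of images and text. A toy sketch of what a few-shot prompt might look like; the Image type and the generate() call are hypothetical placeholders, since there is no public Flamingo API:

# Toy illustration of an interleaved few-shot prompt for a Flamingo-style model.
# `Image` stands in for pixel data and `generate()` for the model call; both are
# hypothetical placeholders, not a real DeepMind API.
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Image:
    path: str

Prompt = List[Union[Image, str]]

# Two worked examples, then the query the model should complete.
prompt: Prompt = [
    Image("zebras.jpg"),    "Q: What animals are shown? A: three zebras.",
    Image("flamingos.jpg"), "Q: What animals are shown? A: two flamingos.",
    Image("mystery.jpg"),   "Q: What animals are shown? A:",
]

# answer = model.generate(prompt)   # the model continues the text with its answer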
And remember my friend, future events such as these will affect you in the future