
#141
Yuli Ban

Google's DeepMind made an AI that can imagine the future

The so-called Imagination-Augmented Agents, or I2As, use an internal ‘imagination encoder’ that helps the AI decide what are and what aren’t useful predictions about its environment

Google’s London-based AI outfit DeepMind has created two different types of AI that can use their ‘imagination’ to plan ahead and perform tasks with a higher success rate than AIs without imagination. Sorry if I made you click because you wanted AIs that predict flying cars. I promise this is cool too.
In a post on their site, DeepMind researchers give a short review of “a new family of approaches for imagination-based planning.” The so-called Imagination-Augmented Agents, or I2As, use an internal ‘imagination encoder’ that helps the AI decide what are and what aren’t useful predictions about its environment.
The researchers argue that giving AI imagination is crucial for dealing with real-world environments, where it’s helpful to test a few possible outcomes of actions ‘in your head’ to predict which one is best.


  • Alislaws likes this
Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!

#142
Yuli Ban

Agents that imagine and plan | DeepMind

Imagining the consequences of your actions before you take them is a powerful tool of human cognition. When placing a glass on the edge of a table, for example, we will likely pause to consider how stable it is and whether it might fall. On the basis of that imagined consequence we might readjust the glass to prevent it from falling and breaking. This form of deliberative reasoning is essentially ‘imagination’; it is a distinctly human ability and a crucial tool in our everyday lives.
If our algorithms are to develop equally sophisticated behaviours, they too must have the capability to ‘imagine’ and reason about the future. Beyond that they must be able to construct a plan using this knowledge. We have seen some tremendous results in this area - particularly in programs like AlphaGo, which use an ‘internal model’ to analyse how actions lead to future outcomes in order to reason and plan. These internal models work so well because environments like Go are ‘perfect’ - they have clearly defined rules which allow outcomes to be predicted very accurately in almost every circumstance. But the real world is complex, rules are not so clearly defined and unpredictable problems often arise. Even for the most intelligent agents, imagining in these complex environments is a long and costly process.

In two new papers, we describe a new family of approaches for imagination-based planning. We also introduce architectures which provide new ways for agents to learn and construct plans to maximise the efficiency of a task. These architectures are efficient, robust to complex and imperfect models, and can adopt flexible strategies for exploiting their imagination.


Imagination-augmented agents
The agents we introduce benefit from an ‘imagination encoder’ - a neural network which learns to extract any information useful for the agent’s future decisions, but to ignore that which is not relevant. These agents have a number of distinct features:

  • they learn to interpret their internal simulations. This allows them to use models which coarsely capture the environmental dynamics, even when those dynamics are not perfect.
  • they use their imagination efficiently. They do this by adapting the number of imagined trajectories to suit the problem. Efficiency is also enhanced by the encoder, which is able to extract additional information from imagination beyond rewards - these trajectories may contain useful clues even if they do not necessarily result in high reward.
  • they can learn different strategies to construct plans. They do this by choosing between continuing a current imagined trajectory or restarting from scratch. Alternatively, they can use different imagination models, with different accuracies and computational costs. This offers them a broad spectrum of effective planning strategies, rather than being restricted to a one-size-fits-all approach which might limit adaptability in imperfect environments.
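
To make the idea concrete, here is a heavily simplified sketch in PyTorch of what an imagination-augmented agent boils down to: a learned (and possibly imperfect) environment model is rolled forward a few steps per candidate action, an encoder summarises each imagined trajectory, and those summaries are combined with a model-free path before an action is chosen. All module names, sizes and the toy dynamics model below are my own illustration, not DeepMind's code.

```python
# Minimal, illustrative sketch of an imagination-augmented agent (I2A-style).
# All names, sizes and the toy dynamics model are invented for illustration.
import torch
import torch.nn as nn

class EnvModel(nn.Module):
    """Learned (imperfect) model: predicts next state and reward from (state, action)."""
    def __init__(self, state_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + n_actions, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim + 1))        # next state + scalar reward
        self.n_actions = n_actions

    def forward(self, state, action):
        a = nn.functional.one_hot(action, self.n_actions).float()
        out = self.net(torch.cat([state, a], dim=-1))
        return out[..., :-1], out[..., -1:]           # next_state, reward

class I2AAgent(nn.Module):
    def __init__(self, state_dim, n_actions, rollout_len=3, hidden=64):
        super().__init__()
        self.env_model = EnvModel(state_dim, n_actions, hidden)
        self.rollout_policy = nn.Linear(state_dim, n_actions)            # cheap policy used inside imagination
        self.encoder = nn.LSTM(state_dim + 1, hidden, batch_first=True)  # the 'imagination encoder'
        self.model_free = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.policy_head = nn.Linear(hidden * (n_actions + 1), n_actions)
        self.rollout_len, self.n_actions = rollout_len, n_actions

    def imagine(self, state, first_action):
        """Roll the learned model forward, starting with a fixed first action."""
        frames, action = [], first_action
        for _ in range(self.rollout_len):
            state, reward = self.env_model(state, action)
            frames.append(torch.cat([state, reward], dim=-1))
            action = self.rollout_policy(state).argmax(dim=-1)
        traj = torch.stack(frames, dim=1)             # (batch, rollout_len, state_dim + 1)
        _, (h, _) = self.encoder(traj)                # encoder decides what in the rollout is useful
        return h[-1]                                   # (batch, hidden)

    def forward(self, state):
        codes = [self.imagine(state, torch.full((state.shape[0],), a, dtype=torch.long))
                 for a in range(self.n_actions)]      # one imagined trajectory per initial action
        combined = torch.cat(codes + [self.model_free(state)], dim=-1)
        return self.policy_head(combined)             # action logits

agent = I2AAgent(state_dim=8, n_actions=4)
logits = agent(torch.randn(2, 8))                     # batch of 2 toy states
print(logits.shape)                                   # torch.Size([2, 4])
```

The key point is the last line of `imagine`: the agent never trusts the rollout literally, it only keeps whatever the encoder learns to extract from it, which is why coarse or imperfect models remain usable.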




#144
Yuli Ban

DeepMind papers at ICML 2017 (part one)

The first of our three-part series, which gives brief descriptions of the papers we are presenting at the ICML 2017 Conference in Sydney, Australia.
 
Decoupled Neural Interfaces using Synthetic Gradients
Authors: Max Jaderberg, Wojciech Marian Czarnecki, Simon Osindero, Oriol Vinyals, Alex Graves, David Silver, Koray Kavukcuoglu
When training neural networks, the modules (layers) are locked: they can only be updated after backpropagation. We remove this constraint by incorporating a learnt model of error gradients, Synthetic Gradients, which means we can update networks without full backpropagation. We show how this can be applied to feed-forward networks, allowing every layer to be trained asynchronously; to RNNs, extending the time over which models can remember; and to multi-network systems, allowing communication between them.
For further details and related work, please see the paper.

There's a lot more than just this one, so it would be a good thing if you read this blog.
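
For the synthetic-gradients paper above, the core trick is small enough to sketch: a tiny auxiliary network predicts, from a layer's activations, the gradient that backpropagation would eventually deliver, so the layer can update immediately instead of waiting. The toy version below is my own illustration, not the authors' code.

```python
# Toy sketch of a Decoupled Neural Interface with synthetic gradients.
# Layer names and sizes are invented for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)
layer1 = nn.Linear(10, 20)                 # lower module, updated without waiting for backprop
layer2 = nn.Linear(20, 1)                  # upper module (the rest of the network)
grad_model = nn.Linear(20, 20)             # synthetic-gradient model: activations -> predicted dL/dh
opt1 = torch.optim.SGD(layer1.parameters(), lr=0.01)
opt2 = torch.optim.SGD(list(layer2.parameters()) + list(grad_model.parameters()), lr=0.01)

x = torch.randn(32, 10)
y = torch.randn(32, 1)

# Forward through layer1 and update it immediately using the *predicted* gradient.
h = torch.relu(layer1(x))
synth_grad = grad_model(h.detach())        # predicted gradient of the loss w.r.t. h
opt1.zero_grad()
h.backward(synth_grad.detach())            # layer1 is "unlocked": no waiting for the true gradient
opt1.step()

# Later (possibly asynchronously): compute the true loss and the true gradient at h.
opt2.zero_grad()
h2 = torch.relu(layer1(x)).detach().requires_grad_(True)
loss = nn.functional.mse_loss(layer2(h2), y)
loss.backward()                            # true gradient dL/dh lands in h2.grad

# Train the synthetic-gradient model to match the true gradient.
grad_loss = nn.functional.mse_loss(grad_model(h2.detach()), h2.grad.detach())
grad_loss.backward()
opt2.step()
print(float(loss), float(grad_loss))
```

The synthetic-gradient model itself is trained whenever the true gradient does become available, so over time the "unlocked" updates come to approximate ordinary backprop.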



#145
Yuli Ban

Google DeepMind AI Declares Galactic War on StarCraft

Tic-tac-toe, checkers, chess, Go, poker. Artificial intelligence rolled over each of these games like a relentless tide. Now Google’s DeepMind is taking on the multiplayer space-war videogame StarCraft II. No one expects the robot to win anytime soon. But when it does, it will be a far greater achievement than DeepMind’s conquest of Go—and not just because StarCraft is a professional e-sport watched by fans for millions of hours each month.
DeepMind and Blizzard Entertainment, the company behind StarCraft, just released the tools to let AI researchers create bots capable of competing in a galactic war against humans. The bots will see and do all the things human players can do, and nothing more. They will not enjoy an unfair advantage.
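
The released toolkit is the pysc2 Python library. A scripted agent is just a class with a step method that receives the same observations a human gets and returns one action per frame; roughly like the do-nothing sketch below (treat the details as approximate, version-dependent, and not an official example).

```python
# Rough sketch of a do-nothing pysc2 scripted agent.
from pysc2.agents import base_agent
from pysc2.lib import actions

class IdleAgent(base_agent.BaseAgent):
    """Receives the same observations a human player sees; must return one action per step."""
    def step(self, obs):
        super().step(obs)
        # obs.observation carries the feature-layer views of the screen and minimap
        # plus scalar data such as resources and the currently available actions.
        return actions.FUNCTIONS.no_op()   # do nothing this frame

# Typically launched through pysc2's own runner, e.g.:
#   python -m pysc2.bin.agent --map Simple64 --agent my_module.IdleAgent
```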

 

johnnd

This quote
"Churchill guesses it will be five years before a StarCraft bot can beat a human. He also notes that many experts predicted a similar timeframe for Go—right before AlphaGo burst onto the scene."
reminds me of this other one:
“It would appear that we have reached the limits of what it is possible to achieve with computer technology, although one should be careful with such statements, as they tend to sound pretty silly in 5 years.” - John von Neumann
Five years is a long time, indeed.


  • Casey likes this

#146
Yuli Ban

DeepMind AI teaches itself about the world by watching videos

To an untrained AI, the world is a blur of confusing data streams. Most humans have no problem making sense of the sights and sounds around them, but algorithms tend only to acquire this skill if those sights and sounds are explicitly labelled for them.
Now DeepMind has developed an AI that teaches itself to recognise a range of visual and audio concepts just by watching tiny snippets of video. This AI can grasp the concept of lawn mowing or tickling, for example, but it hasn’t been taught the words to describe what it’s hearing or seeing.
“We want to build machines that continuously learn about their environment in an autonomous manner,” says Pulkit Agrawal at the University of California, Berkeley. Agrawal, who wasn’t involved with the work, says this project takes us closer to the goal of creating AI that can teach itself by watching and listening to the world around it.
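
The mechanism behind the headline is audio-visual correspondence: one network looks at a single video frame, another listens to a short audio clip, and a small classifier has to decide whether the two came from the same moment of the same video. Mismatched pairs are made simply by pairing a frame with audio from a different video, so no human labels are needed. A toy sketch follows; the architectures and sizes are my own, not the paper's.

```python
# Toy sketch of audio-visual correspondence training: two encoders plus a
# classifier that must tell matching frame/audio pairs from mismatched ones.
import torch
import torch.nn as nn

vision = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
                       nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 64))
audio  = nn.Sequential(nn.Conv1d(1, 16, 9, stride=4), nn.ReLU(),
                       nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 64))
match  = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
params = list(vision.parameters()) + list(audio.parameters()) + list(match.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

frames = torch.randn(8, 3, 64, 64)     # stand-in for video frames
sounds = torch.randn(8, 1, 16000)      # stand-in for the one-second clips taken at those frames

# Positives: frame i with its own audio. Negatives: frame i with audio from another video.
# (A real pipeline would avoid accidentally re-pairing a clip with its own frame.)
shuffled = sounds[torch.randperm(8)]
audio_in = torch.cat([sounds, shuffled])
frame_in = torch.cat([frames, frames])
labels = torch.cat([torch.ones(8, 1), torch.zeros(8, 1)])

logits = match(torch.cat([vision(frame_in), audio(audio_in)], dim=-1))
loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```

Concepts like "lawn mowing" or "tickling" emerge because solving this matching task forces both networks to represent what is actually being seen and heard.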

 
 
dgamr

Novel and very interesting research, but the headline is pure clickbait and exaggerates the scope of the project (like most reporting on “AI” these days).
The effect of this isn't to “Learn like humans do” in an unlimited capacity. This is going to produce a very novel object-detection model that can detect more dogs in Youtube videos because it understands what barking sounds like.
INCREDIBLY interesting but headlines these days are out of control.


  • BasilBerylium likes this

#147
Yuli Ban

Google’s DeepMind AI has a new trick: taking a nap

Google has been pretty far ahead of the curve when it comes to its artificial intelligence research. The world was shocked when its AI beat a top human player at the game of Go. More recently the company taught AI to use imagination and make predictions. The latest trick in Google’s machine-learning research? Naps.
Google is making its AI more human — to a startling degree. It’s taught DeepMind how to sleep. In a recent blog post the company said:
 

At first glance, it might seem counter-intuitive to build an artificial agent that needs to ‘sleep’ – after all, they are supposed to grind away at a computational problem long after their programmers have gone to bed. But this principle was a key part of our deep-Q network (DQN), an algorithm that learns to master a diverse range of Atari 2600 games to superhuman level with only the raw pixels and score as inputs. DQN mimics “experience replay”, by storing a subset of training data that it reviews “offline”, allowing it to learn anew from successes or failures that occurred in the past.
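
The "nap" in question is just DQN's experience replay: transitions from play are stored in a buffer and later re-sampled in random minibatches during an offline learning phase. Something like this toy buffer (names and sizes are mine):

```python
# Toy experience-replay buffer in the spirit of DQN's "offline" review of past transitions.
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)   # old experiences fall out as new ones arrive

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        # Random sampling breaks the correlation between consecutive frames,
        # which is what makes replayed ("slept-on") experience useful for learning.
        return random.sample(self.buffer, batch_size)

buf = ReplayBuffer()
for t in range(1000):                          # pretend online play: store each transition
    buf.add(state=t, action=t % 4, reward=0.0, next_state=t + 1, done=False)
batch = buf.sample(32)                         # "offline" phase: learn from a random replayed minibatch
print(len(batch), batch[0])
```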

 


  • Jakob likes this


#149
Yuli Ban

DeepMind announces ethics group to focus on problems of AI

Deepmind, Google’s London-based AI research sibling, has opened a new unit focused on the ethical and societal questions raised by artificial intelligence.
The new research unit will aim “to help technologists put ethics into practice, and to help society anticipate and direct the impact of AI so that it works for the benefit of all”, according to the company, which hit headlines in 2016 for building the first machine to beat a world champion at the ancient Asian board game Go.
The company is bringing in external advisers from academia and the charitable sector, including Columbia development professor Jeffrey Sachs, Oxford AI professor Nick Bostrom, and climate change campaigner Christiana Figueres to advise the unit.


  • BasilBerylium likes this

#150
Yuli Ban

Here's Why Google's Assistant Sounds More Realistic Than Ever Before

If you’re playing around with Google’s new Home Max or Mini smart speakers, or if you’re just using an Android phone such as the new Pixel 2, you may be familiar with the Google Assistant virtual helper. And if you’ve done so in the last couple days, you may have noticed that the virtual assistant’s voice is sounding more realistic than before.
That’s because Alphabet’s Google has started using a cutting-edge piece of technology called WaveNet—developed by its DeepMind “artificial intelligence” division—in Google Assistant.
Synthesized speech is traditionally created by gluing together bits of recorded speech, in a technique known as “concatenative text-to-speech.” The result does not sound natural, although some versions of the technique are better than others.
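
WaveNet takes the opposite approach: instead of stitching recordings together, it generates raw audio one sample at a time with a stack of dilated causal convolutions, so every new sample is conditioned on thousands of previous ones. Below is a bare-bones sketch of the dilation idea only; the real model adds gated activations, skip connections and text conditioning, and the sizes here are purely illustrative.

```python
# Bare-bones sketch of WaveNet-style dilated causal convolutions (not the production model).
import torch
import torch.nn as nn

class CausalDilatedConv(nn.Module):
    def __init__(self, channels, dilation):
        super().__init__()
        self.pad = dilation            # left-pad so the output at time t never sees the future
        self.conv = nn.Conv1d(channels, channels, kernel_size=2, dilation=dilation)

    def forward(self, x):
        return self.conv(nn.functional.pad(x, (self.pad, 0)))

class TinyWaveNet(nn.Module):
    def __init__(self, channels=32, n_layers=8):
        super().__init__()
        # Dilations 1, 2, 4, ... double each layer, so the receptive field grows exponentially.
        self.layers = nn.ModuleList([CausalDilatedConv(channels, 2 ** i) for i in range(n_layers)])
        self.input = nn.Conv1d(1, channels, 1)
        self.output = nn.Conv1d(channels, 256, 1)    # distribution over 256 quantised sample values

    def forward(self, audio):                        # audio: (batch, 1, time)
        h = self.input(audio)
        for layer in self.layers:
            h = torch.relu(layer(h)) + h             # simple residual connection
        return self.output(h)                        # (batch, 256, time) logits for the next sample

net = TinyWaveNet()
logits = net(torch.randn(1, 1, 4000))                # a quarter-second of 16 kHz audio
print(logits.shape)                                  # torch.Size([1, 256, 4000])
```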


  • Zaphod likes this

#151
Yuli Ban

DeepMind’s elusive third cofounder is the man making sure that machines stay on our side

DeepMind, a London-based artificial intelligence (AI) lab bought by Google for £400 million in 2014, was cofounded by three people in 2010 but one of them remains relatively unknown.
Shane Legg, DeepMind's chief scientist, gives significantly fewer talks and far fewer quotes to journalists than his fellow cofounders, CEO Demis Hassabis and head of applied AI Mustafa Suleyman.
Last year, Hassabis saw his face splashed across the internet alongside Google cofounder Sergey Brin when DeepMind pitched its AlphaGo AI against Lee Se-dol, the world champion of the Chinese board game Go. Suleyman has also garnered much of the limelight due to DeepMind's work with the NHS, which has resulted in both positive and negative headlines.
But Legg remains somewhat of an unknown entity, choosing only to talk about his work at the occasional academic conference or university lecture. With the exception of this rare Bloomberg interview, you'll be hard pushed to find many stories about DeepMind that contain quotes from the safety-conscious cofounder, who mathematically defined intelligence as part of his PhD with researcher Marcus Hutter.


  • Zaphod, BasilBerylium and Nerd like this

#152
Maximus

In a major breakthrough for artificial intelligence, AlphaGo Zero took just three days to master the ancient Chinese board game of Go ... with no human help
Google’s artificial intelligence group, DeepMind, has unveiled the latest incarnation of its Go-playing program, AlphaGo – an AI so powerful that it derived thousands of years of human knowledge of the game before inventing better moves of its own, all in the space of three days.
 
Named AlphaGo Zero, the AI program has been hailed as a major advance because it mastered the ancient Chinese board game from scratch, and with no human help beyond being told the rules. In games against the 2015 version, which famously beat Lee Sedol, the South Korean grandmaster, AlphaGo Zero won 100 to 0.
 
The feat marks a milestone on the road to general-purpose AIs that can do more than thrash humans at board games. Because AlphaGo Zero learns on its own from a blank slate, its talents can now be turned to a host of real-world problems.
 
At DeepMind, which is based in London, AlphaGo Zero is working out how proteins fold, a massive scientific challenge that could give drug discovery a sorely needed shot in the arm.
 

 

 

DeepMind’s Go-playing AI doesn’t need human help to beat us anymore
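
For what it's worth, the training recipe is conceptually simple: the network plays itself, MCTS guided by the current network produces improved move probabilities, and the network is then trained so its policy matches those search probabilities and its value output matches the eventual winner. Below is a toy sketch of just the update step, with the network and the self-play data as stand-ins (the real network is a large residual CNN, and self-play/MCTS are omitted entirely).

```python
# Sketch of the AlphaGo Zero training step: policy trained toward MCTS visit counts,
# value trained toward the game outcome. Network and data here are toy stand-ins.
import torch
import torch.nn as nn

N_MOVES = 362                                  # 19x19 board positions + pass

net = nn.Sequential(nn.Linear(361, 256), nn.ReLU(), nn.Linear(256, N_MOVES + 1))
opt = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-4)

# One minibatch of training examples produced by self-play (random toy data here):
board = torch.randn(8, 361)                                   # batch of board positions
search_pi = torch.softmax(torch.randn(8, N_MOVES), dim=-1)    # MCTS visit-count distributions
winner_z = torch.sign(torch.randn(8, 1))                      # +1 if the player to move won, else -1

out = net(board)
policy_logits, value = out[:, :N_MOVES], torch.tanh(out[:, N_MOVES:])

# Loss from the paper: (z - v)^2 - pi^T log p, plus L2 regularisation via weight_decay.
value_loss = (winner_z - value).pow(2).mean()
policy_loss = -(search_pi * torch.log_softmax(policy_logits, dim=-1)).sum(dim=-1).mean()
loss = value_loss + policy_loss

opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```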


  • Zaphod, Yuli Ban, Alislaws and 1 other like this
If the world should blow itself up, the last audible voice would be that of an expert saying it can't be done. -Peter Ustinov
 

#153
Zaphod

In a major breakthrough for artificial intelligence, AlphaGo Zero took just three days to master the ancient Chinese board game of Go ... with no human help

Wow... AlphaGo Zero beat the original AlphaGo (already master of Go against all human players) 100-0 purely by learning and teaching itself Go in only 72 hours.

 

The potential power of the AI that DeepMind is working on is astounding. I personally believe that this development is more significant for the future of AI than when AlphaGo beat Lee Sedol and Ke Jie.


  • eacao, Yuli Ban and Alislaws like this

#154
Alislaws

In a major breakthrough for artificial intelligence, AlphaGo Zero took just three days to master the ancient Chinese board game of Go ... with no human help 

Assuming this isn't being massively over-hyped or distorted (as the media tends to do with science stories) then this is amazing!

 

How amazing depends on to what extent this is an AI that is designed to learn how to play Go from just being told the rules and to what extent it is an AI that can learn any competitive task after being given the rules/objectives. 


  • Jakob likes this

#155
Jakob

In a major breakthrough for artificial intelligence, AlphaGo Zero took just three days to master the ancient Chinese board game of Go ... with no human help 

Assuming this isn't being massively over-hyped or distorted (as the media tends to do with science stories) then this is amazing!

 

How amazing depends on to what extent this is an AI that is designed to learn how to play Go from just being told the rules and to what extent it is an AI that can learn any competitive task after being given the rules/objectives. 

 

Not sure about "no human help". Someone told it to learn Go, after all.


  • Alislaws likes this


#156
Yuli Ban

In this instance, "no human help" refers to its skillset. The code itself was programmed by humans, and it was directed to learn Go by humans. That hasn't changed.

What has is that we basically said "Here's Go, and here are the rules; we want you to be an unbeatable lord of this game by this time 168 hours from now. Go!"


  • Jakob and Alislaws like this

#157
Raklian

Let's do this with conquering aging.


What are you without the sum of your parts?

#158
Alislaws

Let's do this with conquering aging.

I don't think we know all the rules yet, unfortunately. 


  • Jakob and BasilBerylium like this

#159
Zaphod

Reddit AMA on AlphaGo Zero by project members David Silver and Julian Schrittwieser:

 

https://www.reddit.c..._schrittwieser/



#160
Raklian

Let's do this with conquering aging.

I don't think we know all the rules yet, unfortunately.

Then use AI to find all of them. No problem if we don't know how to program the AI to find those "hidden" rules - a self-learning AI can find them out all on its own.






