
#21
sasuke2490
    veteran gamer
  • Members
  • 464 posts

how is this important?


https://www.instagc.com/scorch9722

Use my link for free steam wallet codes if you want


#22
Yuli Ban
    Nadsat Brat
  • Moderators
  • 17,133 posts
  • Location: Anur Margidda

nerfviking

When I was in college in the early 2000s, neural networks were basically a non-starter. They were like, "well, neural nets seemed promising years ago, but people have basically given up on them and moved on."

When computers started beating human grandmasters at chess, it was a testament to the power of computer hardware. Sure, Deep Blue was an impressive feat of programming, but it really was just an optimization of a conceptually simple recursive algorithm.
This, on the other hand, is a fucking neural net. It's not winning by doing things computers do well, it's winning by doing things humans do well better than humans. That's really significant. It's simultaneously exciting and scary.
I suspect a computer is going to pass the Turing Test by 2020.


  • eacao, Casey, SkyHize and 1 other like this
Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!

#23
Rusakov
    Member
  • Validating
  • 339 posts
  • Location: Illinois

nerfviking

When I was in college in the early 2000s, neural networks were basically a non-starter. [...] I suspect a computer is going to pass the Turing Test by 2020.

 

 
I'll just post this:
 


  • eacao likes this

#24
Yuli Ban
    Nadsat Brat
  • Moderators
  • 17,133 posts
  • Location: Anur Margidda

how is this important?

Did no one answer this?
 
Let's go down the reasons, starting with this 10-year-old article. I feel that older articles are more brutally honest about how difficult victory seemed.
 
Inside IT: Computers just can't seem to get past Go

A board of possibilities

While simple to explain and to learn, Go has subtle gradations of ability. There are hundreds of professionals, mainly in Japan, Korea and China, yet even the best computer version is only as good as an average European club player, who is as far from being professional as the average tennis club player is from playing at Wimbledon. Even the best Go-playing program is presently only ranked about 9 kyu. Why are computers so bad at Go? First, playing Go plunges a computer into a sea of possibilities in which most drown. A chess board, with 64 squares, is comparatively tiny: each turn offers about 30 possible legal moves. In Go, with 361 points, few moves are illegal, offering more possibilities - on average, about 200 per turn. Thus the total number of possible moves in chess is between 10^60 and 10^70; in Go it is about 10^250.

Secondly, according to David Stern of the Cavendish Laboratory in Cambridge, who is working on a doctorate on computer Go with the team at Microsoft, it is very hard to determine, for each move, what its effects will be. Although the stones do not move, their presence affects the value and "strength" of the others; adjacent stones of the same colour form "groups" which are harder to capture. That's unlike chess, where it is comparatively easy to determine the "static value" of all the pieces at any time, because there are only 32 at most, whereas a Go board constantly fills with new pieces. "It is very difficult to produce an efficient 'static evaluation' function to compute the value of board positions in Go for a particular player," notes Stern. "The reason is that the stones on the Go board influence each other's value in complex ways. The value of a particular stone to a player is derived from its relationships with the surrounding stones, not from itself."

The effect is that in Go there are many non-ideal moves at any point, but because games last longer - typically about 200 moves (100 stones placed by each side) rather than 70 (35 by each side) in chess - it's harder to look far enough ahead to see a non-ideal move's defects show up. David Fotland - author of the Go-playing program Many Faces of Go, still ranked one of the strongest available - reckons that for humans, reading ahead is actually easier in Go than in chess. "People are visual, and the board configuration and relationships change less from move to move than they do in chess," he told the Intelligent Go website (intelligentgo.org).

It's the visual element of the game that nobody can quite put into code. A good player will reject a potential move because its "shape" - that is, the position of the stone being considered in relation to the stones already there - "looks bad". Such criteria are not intuitively obvious. Equally, good players also talk of stones and groups having "influence" on other parts of the board, or being "heavy" or "light" or "overextended". More simply, "urgent" moves are those that will bolster the player's position; good players consistently choose the most urgent moves.

But computer chess games don't understand chess; they just got better at crunching moves. Won't brute force do the job on Go, as it did in other games? No, says Bob Myers, who runs the Intelligent Go website. "A very rough estimate might be that the evaluation function [for computer Go] is, at best, 100 times slower than chess, and the branching factor is four times greater at each play; taken together, the performance requirements for a chess-like approach to Go can be estimated as 10^27 times greater than that for computer chess. Moore's Law holds that computing power doubles every 18 months, so that means we might have a computer that could play Go using these techniques sometime in the 22nd century."

Based on brute-force number-crunching and the AI that existed in 2006, that is.
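The "22nd century" arithmetic in that quote roughly checks out. A back-of-the-envelope sketch in Python (the 40-ply search depth is my assumption, chosen to make Myers' figures concrete; the 100x evaluation slowdown and 4x branching factor are his):

```python
import math

eval_slowdown = 100     # Go evaluation ~100x slower than chess (Myers)
branching_ratio = 4     # ~4x more moves per ply (Myers)
depth = 40              # assumed search depth in plies

work_ratio = eval_slowdown * branching_ratio ** depth
print(f"Go/chess work ratio: ~10^{math.log10(work_ratio):.0f}")  # ~10^26

# Moore's law: one doubling every 18 months
years = math.log2(work_ratio) * 1.5
print(f"~{years:.0f} years after 2006 -> ~{2006 + years:.0f}")   # ~130 years -> ~2136
```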
 
Even then, it was apparent that we needed a new method to achieve results sooner.
 

Now, though, Stern and the Microsoft team are trying a different tack. Instead of wondering how to get a computer to beat a human, they are showing the computer how humans beat each other - by creating a huge database of moves and positions from professional games. So far they have fed in around 180,000 games, adding them to a huge database so the program can pick the best available in any given situation. Thore Graepel, of the machine learning and perception research group at Microsoft Cambridge, who is helping coordinate the work, says that both the winner's and loser's moves are included: "From the point of view of the computer, these pros are so much better that any variation in their skills is minimal, compared to the computer's playing strength." In other words, it's better to play like a losing pro than the best computer.
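That database approach amounts to a lookup from positions to the moves professionals actually played. A toy Python sketch of the idea (exact-position lookup only; the real system generalized with pattern features, and the position keys below are stand-ins):

```python
from collections import Counter, defaultdict

pro_moves = defaultdict(Counter)  # position -> how often pros played each move

def record_game(positions_and_moves):
    for position, move in positions_and_moves:
        pro_moves[position][move] += 1

def suggest(position):
    seen = pro_moves.get(position)
    return seen.most_common(1)[0][0] if seen else None

record_game([("empty board", "Q16"), ("Q16 played", "D4")])
print(suggest("empty board"))  # Q16
```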

 
Go rankings: kyu to dan
40 kyu: Child beginner
25 kyu: Adult learner
15 kyu: Top-level social player
10 kyu: Weak club player
4-5 kyu: Average club player
1 kyu/1 dan: Transition to expert (equivalent to county-level chess player)
2 dan: Competent well-informed amateur
3-4 dan: Good amateur
5-6 dan: Strong amateur (top 100 in Europe, top 10,000 in South Korea!)
7 dan: Amateur good enough to be professional 1 dan; can play world's best without embarrassment
Pro 4 dan: can make a living at Go in East Asia (akin to golf tour professional)
Pro 9 dan: among top 100 in world; can win pro tournament
 
https://www.reddit.c..._for_computers/
 
"The number of possible configurations of the board is more than the number of atoms in the universe."
It's actually a googol times more complex than chess.
Google's blog post
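The "atoms in the universe" line is easy to sanity-check. A minimal Python sketch (3^361 counts raw board configurations, legal or not, so it slightly overstates the true figure):

```python
import math

go_configs = 3 ** 361  # each of 361 points is empty, black, or white
print(f"Go configurations: ~10^{math.log10(go_configs):.0f}")  # ~10^172
# versus roughly 10^80 atoms in the observable universe
```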

 
So now you can understand why it's so hard for AI to beat Go. Now to translate that into "what does this mean".
 
Here
https://www.reddit.c...le_can_now_win/

 

So tl;dr, Go requires strategy, reason, and long-term planning, which are basic aspects of general intelligence. Before now, computers were utterly incapable of such a task. Computers that can beat us at Go will lead us to computers that can beat us at anything.

 

An. Y. Thing.

 

This includes 'creative' jobs, executive positions, stock marketeering, and much more.

 

Couple this with news that AI has mastered simple 3D spaces and you have the basis for AGI. In fact, the computer that achieved this landmark goal was, technically, generally intelligent in a small way, because it learned everything it needed to learn in order to beat us at Go. It was not preprogrammed. Think of it as "less-narrow AI."


  • Casey likes this
Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!

#25
Yuli Ban
    Nadsat Brat
  • Moderators
  • 17,133 posts
  • Location: Anur Margidda

How Google’s AI Beat a Human at ‘Go’ a Decade Earlier Than Expected

Last week, news broke that the holy grail of game-playing AI—the ancient and complex Chinese game Go—was cracked by AI system AlphaGo.
AlphaGo was created by Google’s DeepMind, a UK group led by David Silver and Demis Hassabis. Last October the group invited three-time European Go champion Fan Hui to their office in London. Behind closed doors, AlphaGo defeated Hui 5 games to 0—the first time a computer program has beaten a professional Go player.
Google announced the achievement in a blog post, calling it one of the “grand challenges of AI” and noting it happened a decade earlier than experts predicted.
 
Why we thought Go required human intellect
With more potential board configurations than the number of atoms in the universe, Go is in a league of its own in terms of game complexity—and because of its vast range of possibilities, a game that requires human players to use logic, yes, but also intuition.
 
The rules of Go are relatively simple: two players go back and forth playing black or white stones on a 19-by-19 grid. The goal is to capture an opponent’s stone by surrounding it completely. A player wins when their color controls more than 50 percent of the board.
The twist is, there are too many possible moves for a player to comprehend, which is why even expert players often make their moves based on intuition.
Though intuition was thought to be a uniquely human element needed to master Go, DeepMind’s AlphaGo shows this isn’t necessarily the case.
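The capture rule described above is simple enough to sketch in a few lines of Python (a minimal illustration; the board representation and helper names are mine, not from any Go library). A stone, or a connected group of same-coloured stones, is captured when it has no adjacent empty points ("liberties") left:

```python
def neighbors(p, size=19):
    x, y = p
    return [(a, b) for a, b in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1))
            if 0 <= a < size and 0 <= b < size]

def group_and_liberties(board, p):
    """Flood-fill the connected group containing p.
    board maps (x, y) -> 'B' or 'W' for occupied points only."""
    color = board[p]
    group, liberties, frontier = {p}, set(), [p]
    while frontier:
        for n in neighbors(frontier.pop()):
            if n not in board:
                liberties.add(n)
            elif board[n] == color and n not in group:
                group.add(n)
                frontier.append(n)
    return group, liberties

# a corner stone has only two neighbours; occupy both and it is captured
board = {(0, 0): 'B', (0, 1): 'W', (1, 0): 'W'}
_, libs = group_and_liberties(board, (0, 0))
print(len(libs) == 0)  # True: the black stone has no liberties left
```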


  • Casey likes this
Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!

#26
Yuli Ban
    Nadsat Brat
  • Moderators
  • 17,133 posts
  • Location: Anur Margidda

Keep up with this thread to see how the real test will turn out!


Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!

#27
Yuli Ban
    Nadsat Brat
  • Moderators
  • 17,133 posts
  • Location: Anur Margidda

AlphaGo won! 4-1!
 
Let's rewind about 20 years...
 



New York Times, 1997 'Computer needs another century or two to defeat Go champion'

DEEP BLUE's recent trouncing of Garry Kasparov sent shock waves through the Western world. In much of the Orient, however, the news that a computer had beaten a chess champion was likely to have been met with a yawn.
While there are avid chess players in Japan, China, Korea and throughout the East, far more popular is the deceptively simple game of Go, in which black and white pieces called stones are used to form intricate, interlocking patterns that sprawl across the board. So subtle and beautiful is this ancient game that, to hear aficionados describe it, Go is to chess what Asian martial arts like aikido are to a boxing match.
And, Go fans proudly note, a computer has not come close to mastering what remains a uniquely human game.
Over the last decade, inspired in part by a $1.4 million prize offered by a Taiwanese organization for a computer program that can beat a champion human player, designers have been coming up with better and better Go-playing machines. Later this year, about $25,000 in prizes will be given to the best programs in two annual international contests in Japan and the United States.
As impressive as the winners of these tournaments have been, they can still be defeated by even an amateur player with perhaps a year's experience.
Deep Blue defeated the world chess champion by leveraging a moderate amount of chess knowledge with a huge amount of blind, high-speed searching power.
But this roughshod approach is powerless against the intricacies of Go, leaving computers at a distinct disadvantage. "Brute-force searching is completely and utterly worthless for Go," said David Fotland, a computer engineer for Hewlett-Packard who is the author of one of the strongest programs, called The Many Faces of Go. "You have to make a program play smart like a person."
To play a decent game of Go, a computer must be endowed with the ability to recognize subtle, complex patterns and to draw on the kind of intuitive knowledge that is the hallmark of human intelligence.
"It may be a hundred years before a computer beats humans at Go -- maybe even longer," said Dr. Piet Hut, an astrophysicist at the Institute for Advanced Study in Princeton, N.J., and a fan of the game. "If a reasonably intelligent person learned to play Go, in a few months he could beat all existing computer programs. You don't have to be a Kasparov."
When or if a computer defeats a human Go champion, it will be a sign that artificial intelligence is truly beginning to become as good as the real thing.
"Go is the highest intellectual game," said Dr. Chen Zhixing, a retired chemistry professor at Zhongshan University, in Guangzhou, China.


  • Casey and Ghostreaper like this
Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!

#28
Raklian
    An Immortal In The Making
  • Moderators
  • 6,508 posts
  • Location: Raleigh, NC

That's the Singularity for you. No one can see beyond the temporal event horizon.


What are you without the sum of your parts?

#29
Yuli Ban
    Nadsat Brat
  • Moderators
  • 17,133 posts
  • Location: Anur Margidda

Well to be fair, it's not really the Singularity as of yet. It's more like a pre-Singularity teaser. 
 
Deep reinforcement learning is what's accelerating all this. Our methods prior to the development and popularization of deep learning seem extraordinarily weak by comparison. In fact, there was a chart, lemme see if I can find it...
 
There we are! Red: uses deep learning. Blue: does not use deep learning.
[chart: performance of Go programs over time; red = uses deep learning, blue = does not]

So quite frankly, if it feels like we hit the knee of the curve, it's because of deep learning. We're a decade ahead of where we should be right now. '2016 Our Timeline' = '2026 Alternate Timeline'. In the alternate timeline, where deep learning is never utilized en masse, 2016 doesn't feel too different from 2010 or 2011. Some things are better, obviously. But Watson defeating humans at Jeopardy is still the biggest deal in AI. The best AI is still no better than maybe a very lucky 1-dan Go player and would be creamed, whipped, and stirred if it faced off against Lee Sedol. It doesn't matter if it plays 100 games; Mr. Sedol will win every single one of them so hard that the AI team will have to surrender each time out of sheer embarrassment. Things like computers learning to recognize whole paragraphs or understand that a dog is a dog and a cat is a cat, that's "magical stuff for the 2040s" in this ATL, whereas for us it's already been surpassed.
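For the curious: the workhorse behind most of those red-line systems is temporal-difference learning with a neural network as the function approximator. A minimal sketch with a tiny linear model standing in for the deep network (all shapes and constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_actions = 8, 4
W = rng.normal(scale=0.1, size=(n_actions, n_features))  # Q(s, a) ~ W[a] @ s

def q_update(W, s, a, r, s_next, alpha=0.01, gamma=0.99):
    """One Q-learning step: bootstrap a target from the next state,
    then nudge the approximator toward it."""
    td_target = r + gamma * np.max(W @ s_next)
    td_error = td_target - W[a] @ s
    W[a] += alpha * td_error * s
    return W

# one illustrative transition
s, s_next = rng.normal(size=n_features), rng.normal(size=n_features)
W = q_update(W, s, a=2, r=1.0, s_next=s_next)
```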

 
On KurzweilAI right now...
 
noam23

LOL, I think in his book "The Singularity Is Near", Kurzweil said that in the 20 years between 2000 and 2020, humanity will make the same progress it did in the 100 years between 1900 and 2000. So it seems the NYT was right in a way :)


Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!

#30
Yuli Ban
    Nadsat Brat
  • Moderators
  • 17,133 posts
  • Location: Anur Margidda

AlphaGo now world's No. 1 Go player

Superior To Ke Jie and Lee Sedol

AlphaGo is now the world's No 1 Go player, toppling humans' dominance of a game they have ruled for thousands of years.
According to the latest ranking of GoRating, AlphaGo replaced Chinese player Ke Jie Monday to become the world's No 1 Go player, the first non-human to win the honor.
AlphaGo is an artificial intelligence Go-playing program designed by Google DeepMind, a British artificial intelligence company.
It beat South Korean Go master Lee Sedol by 4 to 1 in a challenge match in March, causing a sensation around the world.
Rumors said AlphaGo would face off against Ke Jie after winning the match against Lee, but this was later denied by DeepMind co-founder Demis Hassabis.
According to GoRating rules, a player's score on the list will change according to that of his opponent. For example, AlphaGo will see its score going up if Lee Sedol's score increases.
 
Defeating a human Go master is not the only thing the AI can achieve. DeepMind also plans to apply its AI technology to improve human well-being, including promoting healthcare in cooperation with the National Health Service.
 
The success of AlphaGo drew global attention to AI, a sector attracting increasing investment.
 
Global AI revenue is expected to reach 119 billion yuan ($18.1 billion) in 2020, with an annual growth rate of 19.7 percent from 2015 to 2020, according to a report by consulting firm iResearch.
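GoRatings actually uses a whole-history rating method, but the opponent-dependent updating the article gestures at is easiest to see in a plain Elo update. A sketch (the ratings and K-factor below are illustrative stand-ins, not GoRatings' real parameters):

```python
def elo_update(rating, opp_rating, score, k=20):
    """Standard Elo: expected score from the rating gap, then adjust
    by how much the actual result beat or missed that expectation."""
    expected = 1 / (1 + 10 ** ((opp_rating - rating) / 400))
    return rating + k * (score - expected)

# a 4-1 series against a slightly higher-rated opponent
r = 3598.0
for result in (1, 1, 1, 0, 1):
    r = elo_update(r, 3611.0, result)
print(round(r))  # the winner's rating rises
```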


Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!

#31
Yuli Ban
    Nadsat Brat
  • Moderators
  • 17,133 posts
  • Location: Anur Margidda

Google's DeepMind has created a platform to generate speech that mimics human voice better than any other existing text-to-speech systems
Google's DeepMind AI fakes some of the most realistic human voices yet 
Google Deepmind makes breakthrough in making computers sound like humans
Google’s DeepMind Achieves Speech-Generation Breakthrough

Google’s DeepMind unit, which is working to develop super-intelligent computers, has created a system for machine-generated speech that it says outperforms existing technology by 50 percent.
U.K.-based DeepMind, which Google acquired for about 400 million pounds ($533 million) in 2014, developed an artificial intelligence called WaveNet that can mimic human speech by learning how to form the individual sound waves a human voice creates, it said in a blog post Friday. In blind tests for U.S. English and Mandarin Chinese, human listeners found WaveNet-generated speech sounded more natural than that created with any of Google’s existing text-to-speech programs, which are based on different technologies. WaveNet still underperformed recordings of actual human speech.
Many computer-generated speech programs work by using a large data set of short recordings of a single human speaker and then combining these speech fragments to form new words. The result is intelligible and sounds human, if not completely natural. The drawback is that the sound of the voice cannot be easily modified. Other systems form the voice completely electronically, usually based on rules about how certain letter combinations are pronounced. These systems allow the sound of the voice to be manipulated easily, but they have tended to sound less natural than computer-generated speech based on recordings of human speakers, DeepMind said.
 
WaveNet is a type of AI called a neural network that is designed to mimic how parts of the human brain function. Such networks need to be trained with large data sets.
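The loop that makes WaveNet different is autoregression over raw audio: each sample is predicted from the samples before it. A toy sketch of just that loop (the "model" below is a uniform stand-in; the real WaveNet is a stack of dilated causal convolutions producing a distribution over 256 mu-law levels):

```python
import numpy as np

def generate(model, seed, n_samples):
    """Autoregressive sampling: each new audio sample is drawn from a
    distribution conditioned on everything emitted so far."""
    samples = list(seed)
    for _ in range(n_samples):
        probs = model(np.array(samples))  # distribution over 256 levels
        samples.append(int(np.random.choice(256, p=probs)))
    return np.array(samples)

dummy_model = lambda history: np.full(256, 1 / 256)  # stand-in, not a real net
audio = generate(dummy_model, seed=[128], n_samples=100)
print(audio.shape)  # (101,)
```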

 

Voice actors tremble, while indie devs squeal. Couple this with the photorealistic CG we're now capable of, and you can see where this is going...


  • wjfox, eacao, Casey and 1 other like this
Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!

#32
eacao
    Member
  • Members
  • 219 posts
  • Location: Australia
Getting closer to unlocking the full potential of games. Unscripted dialogue with NPCs. Soon enough we'll be able to step into procedurally generated worlds with countless unique characters, each with artificial histories and memories of their own, and each able to generate conversation with the player on the fly based on those individual memories. Unbelievably exciting. Imagine stepping into the Mass Effect universe with characters that have their own memories, personalities, beliefs, and agendas, and then dynamically interacting with them as they react to your actions on the fly. Someday devs may not even need to construct preplanned endings to games; they could just flesh out the lore in spectacular detail for NPCs to operate within and allow the player to navigate the world and carve it out as they see fit.

Having characters that sound real to players is a big piece of that puzzle. Very exciting.

As a tangent, imagine NPCs with curiosity designing experiments, with their own intelligence, to test the laws of physics that have been programmed into the game. Envision for a sec walking through Skyrim and finding an NPC measuring how long it takes a plate to drop from a roof, and comparing that to a sword to see if they fall at the same rate. And then telling his friends about his findings, and having the information stored in their own personal memories. What if NPCs develop theories on their own, and some come up with religious theories of creation organically? What if they develop religious beliefs and traditions by themselves? Haha, fun thoughts.
  • Casey likes this
Only take advice from people who have what you want.
You don't decide your future. You decide your habits, and your habits decide your future.
Nearly all men can stand adversity, but if you want to test a man's character, give him power. - Abraham Lincoln.

#33
Whereas
    Member
  • Members
  • 469 posts

As a tangent, imagine NPCs with curiosity designing experiments, with their own intelligence, to test the laws of physics that have been programmed into the game. [...] What if NPCs develop theories on their own, and some come up with religious theories of creation organically? What if they develop religious beliefs by themselves? Haha, fun thoughts.

Not as fun as you might think... I could easily imagine this type of AI being prohibited. An NPC villain as smart, or close to as smart, as humans could well wreak havoc in the real world, and I don't just mean via a terrorist asking NPCs for advice on some real-world situation.

Imagine an NPC that's smart enough to figure out he's in a game, and who wants to deal damage to the real world. This could be either as revenge for his tragic personal history (which had essentially taken place for the benefit of our entertainment), or because he worships Cthulhu. How could he affect the real world? Through the player, most likely by radicalizing them somehow (though theoretically it could also trick them in other ways if it had, or gained access to, say, deeper knowledge of chemistry than the player's). Obviously such plots probably wouldn't work against most people, but how would you feel about a future in which some school shootings were *actually* caused by video games, or more specifically, by intelligent video game villains?

 

It would take some very crafty tricks (which would essentially boil down to thought-policing) to prevent AI NPCs from hurting us.


  • eacao and BasilBerylium like this

If you're wrong, how would you know it?


#34
Yuli Ban
    Nadsat Brat
  • Moderators
  • 17,133 posts
  • Location: Anur Margidda

I should also mention that DeepMind has managed to synthesize incredibly human-like piano playing as well.


Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!

#35
Water
    Member
  • Members
  • 176 posts

Getting closer to unlocking the full potential of games. Unscripted dialogue with NPCs. Soon enough we'll be able to step into procedurally generated worlds with countless unique characters, each with artificial histories and memories of their own, and each able to generate conversation with the player on the fly based on those individual memories. [...]

 

I was actually playing a Final Fantasy yesterday and had a moment where I realised how awkward NPCs really are. Gamers are conditioned to think, "I want to know everything this NPC has to say, so I'll keep bothering it until it starts repeating the same line." But this is so unrealistic and just weird.

 

It's cool how one day later I see this article here. Pretty sure that when that tech hits games, we'll soon have a generation laughing at the kind of NPCs we settle for now.


  • eacao likes this

#36
superexistence
    Member
  • Members
  • 273 posts

Hopefully further tech like this will bring the sky-high cost of making video games back down to earth.



#37
Yuli Ban
    Nadsat Brat
  • Moderators
  • 17,133 posts
  • Location: Anur Margidda

Google’s DeepMind AI grasps basic laws of physics

Google DeepMind’s artificial intelligence team, alongside researchers at the University of California, Berkeley, has trained AI machines to interact with objects in order to evaluate their properties without any prior awareness of physical laws.
The research project drew inspiration from child development and sought to train AI to mirror human capacity to interact with physical objects and infer properties such as mass, friction, and malleability.
The study, entitled Learning to perform physics experiments via deep reinforcement learning, explained that while recent advances in AI have achieved ‘superhuman performance’ in complex control problems and other processing tasks, the machines still lack a common sense understanding of our physical world – ‘it is not clear that these systems can rival the scientific intuition of even a young child.’
Lead researcher Misha Denil and his team set about various trials in different virtual environments in which the AI was faced with a series of blocks and tasked with assessing their properties.
In the first simulation, called Which is Heavier, the AI was given a set of four blocks which were the same size but varied in mass. The system had to identify which of the blocks was heaviest.
‘Assigning masses randomly… ensures it is not possible to solve this task from vision (or features) alone, since the color and identity of each block imparts no information about the mass in the current episode,’ wrote Denil.
The AI was rewarded if it correctly determined the heaviest block, and was given negative feedback if it answered incorrectly. Through this reinforcement technique, the AI was able to learn that the only way to obtain information on mass was to interact with the blocks and watch how they responded.

Google DeepMind's AI learns to play with physical objects
DeepMind is making machines 'feel' their way around virtual objects
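The "Which is Heavier" setup maps neatly onto a tiny simulated environment. A toy sketch (the environment, names, and the hand-written probing policy are all mine; the actual agent learns its probing strategy end-to-end with deep reinforcement learning):

```python
import random

def which_is_heavier_episode(policy, n_blocks=4, probe_steps=8):
    """Masses are random each episode, so vision alone can't solve it:
    the agent must poke blocks and watch how they respond."""
    masses = [random.uniform(1, 10) for _ in range(n_blocks)]
    observations = []
    for _ in range(probe_steps):
        i = random.randrange(n_blocks)        # poke a random block
        displacement = 1.0 / masses[i]        # lighter blocks move more
        observations.append((i, displacement))
    guess = policy(observations)
    return 1.0 if guess == masses.index(max(masses)) else -1.0  # reward

def least_movement(observations):
    """Hand-written stand-in policy: call the least-moved block heaviest."""
    seen = {}
    for i, d in observations:
        seen[i] = min(seen.get(i, float("inf")), d)
    return min(seen, key=seen.get)

print(which_is_heavier_episode(least_movement))  # usually 1.0
```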


  • Zaphod likes this
Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!

#38
Yuli Ban
    Nadsat Brat
  • Moderators
  • 17,133 posts
  • Location: Anur Margidda



Reinforcement Learning with Unsupervised Auxiliary Tasks ["significantly outperforms... state-of-the-art on Atari... 880% expert human performance, and... first-person, three-dimensional Labyrinth tasks... speedup in learning of 10× and averaging 87% expert human performance..."]


  • Zaphod, Casey and nomad like this
Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!

#39
Zaphod
    Esteemed Member
  • Members
  • 610 posts
  • Location: UK

I go through London via St Pancras roughly once a fortnight. Every time I get off the train I look out at the building that houses DeepMind and just imagine what they are getting up to. 

 

This thread is definitely warranted. As they apply their AGI to more and more problems, I predict this thread will become flooded in just a few years.


  • Casey likes this

#40
Yuli Ban
    Nadsat Brat
  • Moderators
  • 17,133 posts
  • Location: Anur Margidda

Google's DeepMind AI improves learning speed and performance through UNREAL (UNsupervised REinforcement and Auxiliary Learning)

Our primary mission at DeepMind is to push the boundaries of AI, developing programs that can learn to solve any complex problem without needing to be taught how. Our reinforcement learning agents have achieved breakthroughs in Atari 2600 games and the game of Go. Such systems, however, can require a lot of data and a long time to learn so we are always looking for ways to improve our generic learning algorithms.

Our recent paper “Reinforcement Learning with Unsupervised Auxiliary Tasks” introduces a method for greatly improving the learning speed and final performance of agents. We do this by augmenting the standard deep reinforcement learning methods with two main additional tasks for our agents to perform during training. A visualisation of our agent in a Labyrinth maze foraging task can be seen below.

[animation: UNREAL agent foraging in a Labyrinth maze]
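The auxiliary-task idea boils down to one shared network trained on several losses at once. A schematic sketch (the three auxiliary loss names match the paper's tasks; the weights are illustrative, not the published values):

```python
def unreal_loss(a3c_loss, pixel_control_loss, reward_prediction_loss,
                value_replay_loss, w_pc=1.0, w_rp=1.0, w_vr=1.0):
    """Base A3C objective plus the three auxiliary objectives, all
    backpropagated through the same shared network."""
    return (a3c_loss
            + w_pc * pixel_control_loss
            + w_rp * reward_prediction_loss
            + w_vr * value_replay_loss)
```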


  • Casey and Erowind like this
Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!




