167 replies to this topic

#81
Jakob

    Fenny-Eyed Slubber-Yuck

  • Members
  • 5,239 posts
  • Location: In the Basket of Deplorables

 

LOL THERE YOU GO PEOPLE AIS HAVE FLAWS TOO THEY'RE NOT PERFECT MARY SUES JUST A NEW FORM OF BEING!!!

No one here is saying they're perfect, but I'll repeat what has been said before.

They don't have to be perfect. They just have to be better.

 

???

It was strictly in response to the claim that they are perfect.


Click 'show' to see quotes from great luminaries.


#82
Yuli Ban

    Nadsat Brat

  • Moderators
  • 17,148 posts
  • Location: Anur Margidda

PathNet: Evolution Channels Gradient Descent in Super Neural Networks

For artificial general intelligence (AGI) it would be efficient if multiple users trained the same giant neural network, permitting parameter reuse, without catastrophic forgetting. PathNet is a first step in this direction. It is a neural network algorithm that uses agents embedded in the neural network whose task is to discover which parts of the network to re-use for new tasks. Agents are pathways (views) through the network which determine the subset of parameters that are used and updated by the forward and backward passes of the backpropagation algorithm. During learning, a tournament selection genetic algorithm is used to select pathways through the neural network for replication and mutation. Pathway fitness is the performance of that pathway measured according to a cost function. We demonstrate successful transfer learning: fixing the parameters along a path learned on task A and re-evolving a new population of paths for task B allows task B to be learned faster than it could be learned from scratch or after fine-tuning. Paths evolved on task B re-use parts of the optimal path evolved on task A. Positive transfer was demonstrated for binary MNIST, CIFAR, and SVHN supervised learning classification tasks, and a set of Atari and Labyrinth reinforcement learning tasks, suggesting PathNets have general applicability for neural network training. Finally, PathNet also significantly improves the robustness to hyperparameter choices of a parallel asynchronous reinforcement learning algorithm (A3C).
View Publication
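For the core loop in code form, here is a minimal, hypothetical PyTorch sketch of the idea (the dimensions, dummy task, and mutation rate are my own toy choices, not DeepMind's code): a pathway picks a few modules per layer, only the parameters on that pathway get trained, and a tournament overwrites the losing pathway with a mutated copy of the winner.

```python
import random
import torch
import torch.nn as nn

L_LAYERS, M_MODULES, N_ACTIVE = 3, 10, 3   # layers, modules per layer, modules active per layer
modules = nn.ModuleList(
    [nn.ModuleList([nn.Linear(16, 16) for _ in range(M_MODULES)]) for _ in range(L_LAYERS)]
)
head = nn.Linear(16, 2)  # task-specific readout layer

def forward(x, path):
    # path[l] lists the module indices active in layer l; their outputs are summed.
    for layer, active in zip(modules, path):
        x = torch.relu(sum(layer[i](x) for i in active))
    return head(x)

def fitness(path, steps=20):
    # Train only the parameters that lie on the path, then report negative loss as fitness.
    path_params = {p for layer, active in zip(modules, path)
                     for i in active for p in layer[i].parameters()}
    opt = torch.optim.SGD(list(path_params) + list(head.parameters()), lr=0.1)
    x = torch.randn(64, 16)
    y = (x.sum(dim=1) > 0).long()        # dummy binary task standing in for a real one
    for _ in range(steps):
        loss = nn.functional.cross_entropy(forward(x, path), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return -loss.item()

def mutate(path):
    # Each gene (module index) is re-drawn with small probability.
    return [[i if random.random() > 0.1 else random.randrange(M_MODULES) for i in layer]
            for layer in path]

population = [[random.sample(range(M_MODULES), N_ACTIVE) for _ in range(L_LAYERS)]
              for _ in range(8)]
for _ in range(10):
    a, b = random.sample(range(len(population)), 2)   # tournament between two random pathways
    if fitness(population[a]) < fitness(population[b]):
        a, b = b, a
    # The loser is overwritten by a mutated copy of the winner's pathway.
    population[b] = mutate([list(genes) for genes in population[a]])
```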


Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!

#83
Yuli Ban

    Nadsat Brat

  • Moderators
  • 17,148 posts
  • Location: Anur Margidda

DeepMind just published a mind-blowing paper: PathNet.

Potentially describing what general artificial intelligence will look like.
Since scientists started building and training neural networks, Transfer Learning has been the main bottleneck. Transfer Learning is the ability of an AI to learn from different tasks and apply its pre-learned knowledge to a completely new task. The implication is that, with this prior knowledge, the AI will perform better and train faster than de novo neural networks on the new task.
DeepMind is on the path to solving this with PathNet. PathNet is a network of neural networks, trained using both stochastic gradient descent and a genetic selection method.
PathNet is composed of layers of modules. Each module is a neural network of any type: convolutional, recurrent, feedforward, and so on.
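To make the "layers of modules, module of any type" structure concrete, here is a small hypothetical sketch (the sizes and module choices are mine, not from the paper): a single layer holds several candidate sub-networks of different kinds, and a pathway simply names which of them contribute.

```python
import torch
import torch.nn as nn

class MixedLayer(nn.Module):
    """One PathNet-style layer of interchangeable modules; a path chooses which ones fire."""
    def __init__(self, dim=32):
        super().__init__()
        self.candidates = nn.ModuleList([
            nn.Linear(dim, dim),                                                  # plain feedforward module
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)),  # small MLP module
            nn.GRUCell(dim, dim),                                                 # recurrent module
        ])

    def forward(self, x, active):
        # Only the modules named in the current path contribute; their outputs are summed.
        return sum(self.candidates[i](x) for i in active)

layer = MixedLayer()
x = torch.randn(4, 32)
print(layer(x, active=[0, 2]).shape)   # torch.Size([4, 32]); the MLP module (index 1) is skipped
```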

A network of neural networks? Where have we heard that one before, I wonder...?



  • eacao likes this
Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!

#84
Yuli Ban

    Nadsat Brat

  • Moderators
  • 17,148 posts
  • Location: Anur Margidda

DeepMind in talks with National Grid to reduce UK energy use by 10%

Alphabet's London-based AI outfit DeepMind and the National Grid are in early-stage talks to reduce the UK's power usage purely through neural networks and machine learning—no new infrastructure required.
 
Demis Hassabis, co-founder and CEO of DeepMind (and lead programmer on Peter Molyneux's Theme Park), hopes that the UK's energy usage could be reduced by as much as 10 percent, just through AI-led optimisation. The UK generated around 330 terawatt-hours (TWh) of energy in 2014, at a cost of tens of billions of pounds—so a 10 percent reduction could be pretty significant, both in terms of money spent and carbon dioxide produced.
 
The National Grid, owned by a publicly traded company of the same name, owns and operates the UK's power transmission network—that is, the country's power lines and major substations. The sources of energy—power stations, hydro plants, wind turbines, and a smattering of solar panels—are owned by other big companies (primarily EDF and E.On).

Why am I not surprised?


  • Casey likes this
Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!

#85
Alislaws

    Democratic Socialist Materialist

  • Members
  • 722 posts
  • Location: London

I wish I was smart/educated enough to work for DeepMind!

My opinion on human usefulness post artificial superintelligence:

Probably at some point an ASI will kill us all.

Let's assume that doesn't happen, because there's not much we (outside the AI research community) can do to stop it at this point if it's going to happen.

Let's also assume the people programming the AI that eventually became the ASI were not idiots, because then you'd be back to the extinction scenario.

So the ASI is going to be keen to serve humanity in some way. It's possible this will result in someone controlling it and becoming a world dictator, but by this point you could just get the ASI to design a TIVR universe where you are basically God, so I'm not sure what the payoff would be to oppressing the rest of humanity; plus it would be a lot of work.

Practically any job that needs doing, but that no human wants to do, is going to be done by automatic systems set up by the ASI at our request. This will leave the human race in a situation where no one has to do anything, and as a result we can all do whatever we want. (Assuming it doesn't infringe upon other people's rights etc. If you want to ignore other people's rights you'll need to take it to VR, where you can abuse imaginary people.)

There will be some people like Jakob who don't like the idea of having all their problems solved by an AI. I also think a lot of people define themselves by their contribution to society in the form of their work, and a lot of those people could be left feeling a bit lost. They could easily get a neural lace, use it to control a spaceship, and launch themselves off towards a distant star, far from the big AI nanny that's looking after humanity. Then they could build a whole new civilization, ideally in some system with multiple Goldilocks planets, where AI is limited to certain tasks or a certain maximum intelligence level.

I will be living in VR full time, until I eventually get bored of everything I, and the rest of the human race, can imagine. That will probably take a few thousand years though, so I think it's fair to say I'll figure out what to do after that when it becomes a problem.

Worst case, I'll just gradually escalate the level of real-world risky extreme sports I'm involved in. This would keep me entertained (the imminent threat of death would hopefully make things less boring) as well as eventually killing me; problem solved.


  • ddmkm122 and Nerd like this

#86
Yuli Ban

    Nadsat Brat

  • Moderators
  • 17,148 posts
  • Location: Anur Margidda

THIS IS IT. THIS IS IT.
THIS IS NOT A DRILL. 
 
THEY'VE DONE IT. THEY'VE ACTUALLY DONE IT.
 


DeepMind's new algorithm adds 'memory' to AI

The AI system is able to learn to play one Atari game and then use its knowledge to learn another

When DeepMind burst into prominent view in 2014 it taught its machine learning systems how to play Atari games. The system could learn to defeat the games, and score higher than humans, but not remember how it had done so.
For each of the Atari games, a separate neural network had to be created. The same system could not be used to play Space Invaders and Breakout without the information for both being given to the artificial intelligence at the same time. Now, a team of DeepMind and Imperial College London researchers have created an algorithm that allows its neural networks to learn, retain the information, and use it again.
"Previously, we had a system that could learn to play any game, but it could only learn to play one game," James Kirkpatrick, a research scientist at DeepMind and the lead author of its new research paper, tells WIRED. "Here we are demonstrating a system that can learn to play several games one after the other".
The work, published in the Proceedings of the National Academy of Sciences journal, explains how DeepMind's AI can learn in sequences using supervised learning and reinforcement learning tests. This is also explained in a blog post from the company.



 

Mark down today, 14 March, 2017, as one of those major days in computer science history.
DeepMind's actually done it: they've achieved general intelligence. And I say that without exaggeration— this one problem in AI has been the single biggest drawback to any meaningful progression in the field. Without the ability to have a single neural network learn multiple tasks (particularly using a previously learned task to deal with a new, separate one), AI has always been more "artificial" than "intelligence". I've even just said it to another Redditor— any system that cannot do something that simple doesn't deserve to be called 'intelligent.'

 

And now we've done it. They've done it.


  • Casey and sasuke2490 like this
Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!

#87
Yuli Ban

    Nadsat Brat

  • Moderators
  • 17,148 posts
  • Location: Anur Margidda

Enabling Continual Learning in Neural Networks

Computer programs that learn to perform tasks also typically forget them very quickly. We show that the learning rule can be modified so that a program can remember old tasks when learning a new one. This is an important step towards more intelligent programs that are able to learn progressively and adaptively.

Deep neural networks are currently the most successful machine learning technique for solving a variety of tasks including language translation, image classification and image generation. However, they have typically been designed to learn multiple tasks only if the data is presented all at once. As a network trains on a particular task, its parameters are adapted to solve the task. When a new task is introduced, new adaptations overwrite the knowledge that the neural network had previously acquired. This phenomenon is known in cognitive science as ‘catastrophic forgetting’, and is considered one of the fundamental limitations of neural networks.

By contrast, our brains work in a very different way. We are able to learn incrementally, acquiring skills one at a time and applying our previous knowledge when learning new tasks. As a starting point for our recent PNAS paper, in which we propose an approach to overcome catastrophic forgetting in neural networks, we took inspiration from neuroscience-based theories about the consolidation of previously acquired skills and memories in mammalian and human brains.

Neuroscientists have distinguished two kinds of consolidation that occur in the brain: systems consolidation and synaptic consolidation. Systems consolidation is the process by which memories that have been acquired by the quick-learning parts of our brain are imprinted into the slow-learning parts. This imprinting is known to be mediated by conscious and unconscious recall - for instance, this can happen during dreaming. In the second mechanism, synaptic consolidation, connections between neurons are less likely to be overwritten if they have been important in previously learnt tasks. Our algorithm specifically takes inspiration from this mechanism to address the problem of catastrophic forgetting.

A neural network consists of several connections in much the same way as a brain. After learning a task, we compute how important each connection is to that task. When we learn a new task, each connection is protected from modification by an amount proportional to its importance to the old tasks. Thus it is possible to learn the new task without overwriting what has been learnt in the previous task and without incurring a significant computational cost. In mathematical terms, we can think of the protection we attach to each connection in a new task as being linked to the old protection value by a spring, whose stiffness is proportional to the connection’s importance. For this reason, we called our algorithm Elastic Weight Consolidation (EWC).
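In code, the "spring" idea is compact. Here is a minimal, hypothetical PyTorch sketch (toy model, made-up data, and a simplified importance estimate rather than the paper's exact Fisher computation) of an EWC-style penalty:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

def importance(model, x, y):
    # Per-parameter importance for the old task, approximated here by the
    # squared gradient of the task loss (a crude stand-in for the diagonal Fisher).
    model.zero_grad()
    nn.functional.cross_entropy(model(x), y).backward()
    return {name: p.grad.detach() ** 2 for name, p in model.named_parameters()}

# After training on task A: snapshot the weights and how much each one mattered.
x_a, y_a = torch.randn(64, 10), torch.randint(0, 2, (64,))
theta_a = {name: p.detach().clone() for name, p in model.named_parameters()}
fisher = importance(model, x_a, y_a)

def ewc_loss(task_b_loss, lam=1000.0):
    # Task-B loss plus a quadratic "spring" anchoring each weight to its task-A value,
    # with stiffness proportional to that weight's importance on task A.
    penalty = sum((fisher[name] * (p - theta_a[name]) ** 2).sum()
                  for name, p in model.named_parameters())
    return task_b_loss + (lam / 2) * penalty

# Training on task B would then minimise ewc_loss(cross_entropy(model(x_b), y_b))
# instead of the plain task-B loss, so weights important to task A resist being overwritten.
```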

What's the policy for copy-pasting from blogs, anyway?


Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!

#88
superexistence

    Member

  • Members
  • 273 posts

Holy crap, we just developed weak AGI... Goertzel was right....

Within eight years... and that was a year ago.


  • Casey, Yuli Ban and sasuke2490 like this

#89
Yuli Ban

    Nadsat Brat

  • Moderators
  • 17,148 posts
  • Location: Anur Margidda

Google Created an AI That Can Learn Almost as Fast as a Human

Deep learning machines have been generating incredible amounts of buzz in recent months. Their extensive abilities can allow them to play video games, recognize faces, and, most importantly, learn. However, these systems learn 10 times more slowly than humans, which has allowed us to keep the creeping fears of a complete artificial intelligence (AI) takeover at bay. Now, Google has developed an AI that is capable of learning almost as quickly as a human being.
Claims of this advancement in speed come from Google’s DeepMind subsidiary in London. They say that not only can their machine assimilate and act on new experiences much more quickly than previous AI models, it will soon reach human-level speeds.
It seems as though every day some new development in AI technology is being revealed to the world. From altruistic robot lawyers to predictions about the singularity, AI technology has been barreling forward. However, we have not yet reached “true AI.” No robots exist whose AI capacity matches the intelligence of the human brain.


  • Casey and Infinite like this
Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!

#90
Yuli Ban

    Nadsat Brat

  • Moderators
  • 17,148 posts
  • Location: Anur Margidda

Damn it...

Demis Hassabis CSAR Talk Around 37:05: AlphaGo Training Time Was 3 Months; Now It Can Be Done in 1 Week


  • Casey and sasuke2490 like this
Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!

#91
Yuli Ban

    Nadsat Brat

  • Moderators
  • 17,148 posts
  • Location: Anur Margidda

Google’s AlphaGo AI will face its biggest challenge yet next month

"Ke will face off against AlphaGo in a three-game match."

It’s just over a year since Google’s DeepMind unit stunned the world when its AlphaGo AI beat Go legend Lee Se-dol 4-1 in a five-game match; the result demonstrated mastery of a feat that had eluded computer scientists for decades and sparked a flood of new interest in the field of artificial intelligence. But there was one possible “gotcha” that Go devotees could hold onto: Lee Se-dol was once, but is no longer, considered the greatest player on the planet.
That distinction is now considered to belong to Ke Jie, a 19-year-old Chinese player ranked number 1 worldwide. A professional since the age of ten, Ke has beaten Lee several times in high-profile matches in recent years, including three finals victories in the three months leading up to Lee’s AlphaGo match. And next month, Ke will get his own showdown with DeepMind's AI.
At the Future of Go Summit in Wuzhen, China, Ke will face off against AlphaGo in a three-game match. Another game will see five of China’s top pros attempt to team up to take down AlphaGo, while another will see a pro-vs-pro match where each player alternates turns with an AlphaGo teammate.

After what happened earlier this year with online Go matches, I don't have much hope for Ke Jie taking a single match. In fact, I'm kinda worried that AlphaGo will dazzle him with inhuman moves that he can't possibly see coming or work with.

 

It’ll take place between May 23rd and May 27th.


  • Casey likes this
Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!

#92
Yuli Ban

    Nadsat Brat

  • Moderators
  • 17,148 posts
  • Location: Anur Margidda

Google DeepMind open sources Sonnet so you can build neural networks in TensorFlow even quicker

Google’s DeepMind announced today that it was open sourcing Sonnet, its object-oriented neural network library. Sonnet doesn’t replace TensorFlow; it’s simply a higher-level library that meshes well with DeepMind’s internal best practices for research.
Specifically, DeepMind says in its blog post that the library is optimized to make it easier to switch between different models when conducting experiments, so that engineers don’t have to upend their entire projects. To this end, the team made changes to TensorFlow to make it easier to consider models as hierarchies. DeepMind also added transparency to variable sharing.
It’s in DeepMind’s own interest to open source Sonnet. If the community becomes acquainted with DeepMind’s internal libraries, it will become easier for the group to release models side-by-side with papers. Inversely, it also means the machine intelligence community can more feasibly contribute back by employing Sonnet in their own work.
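For anyone curious what that looks like in practice, a minimal sketch (assuming a current Sonnet 2 / TensorFlow 2 install; the layer sizes here are arbitrary): models are built by composing module objects rather than wiring up raw TensorFlow ops, so swapping models in an experiment means swapping objects.

```python
import sonnet as snt
import tensorflow as tf

# Modules are self-contained objects, so changing the model in an experiment
# means replacing this object rather than rewriting the whole graph.
mlp = snt.Sequential([
    snt.Linear(128), tf.nn.relu,
    snt.Linear(10),
])

logits = mlp(tf.random.normal([8, 64]))  # variables are created on the first call
print(logits.shape)                      # (8, 10)
```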


Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!

#93
TranscendingGod

    2020's the decade of our reckoning

  • Members
  • 1,554 posts
  • Location: Georgia

Can't wait! Yeah, as you said, Yuli, I don't expect him to be able to pull off what Lee Se-dol did when he faced AlphaGo, considering how much it has matured.


The growth of computation is doubly exponential.


#94
Yuli Ban

    Nadsat Brat

  • Moderators
  • 17,148 posts
  • Location: Anur Margidda

And let me remind people of something:
 

Demis Hassabis CSAR Talk Around 37:05: AlphaGo Training Time Was 3 Months; Now It Can Be Done in 1 Week


  • sasuke2490 and TranscendingGod like this
Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!

#95
Jakob

    Fenny-Eyed Slubber-Yuck

  • Members
  • 5,239 posts
  • Location: In the Basket of Deplorables

I'm going to bet on 5:0 to AlphaGo. However, soon we'll begin catching up. It'd surprise me if the top Go player in 2025 didn't have some sort of inorganic mental augmentation, however crude.


Click 'show' to see quotes from great luminaries.


#96
Yuli Ban

    Nadsat Brat

  • Moderators
  • 17,148 posts
  • Location: Anur Margidda

I'm going to bet on 5:0 to AlphaGo. However, soon we'll begin catching up. It'd surprise me if the top Go player in 2025 didn't have some sort of inorganic mental augmentation, however crude.

There are only three matches this time around.

 

And yes, I've no doubt that humans will eventually stage a comeback against AI but only with things like neural augmentation. Neural laces + cortical modems.


  • Jakob likes this
Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!

#97
Yuli Ban

    Nadsat Brat

  • Moderators
  • 17,148 posts
  • Location: Anur Margidda

The Next AlphaGo Match Is Official

Just over a year ago, we saw a major milestone in the field of artificial intelligence: DeepMind’s AlphaGo took on and defeated one of the world’s top Go players, the legendary Lee Sedol. Even then, we had no idea how this moment would affect the 3,000 year old game of Go and the growing global community of devotees to this beautiful board game.

Instead of diminishing the game, as some feared, artificial intelligence (A.I.) has actually made human players stronger and more creative. It’s humbling to see how pros and amateurs alike, who have pored over every detail of AlphaGo’s innovative game play, have actually learned new knowledge and strategies about perhaps the most studied and contemplated game in history. You can read more about some of these creative strategies in this blog post.
From May 23-27, we’ll collaborate with the China Go Association and Chinese Government to bring AlphaGo, China’s top Go players, and leading A.I. experts from Google and China together in Wuzhen, one of the country’s most beautiful water towns, for the “Future of Go Summit.”


Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!

#98
sasuke2490

    veteran gamer

  • Members
  • 464 posts

AlphaGo is better now due to training and new hardware/software.


https://www.instagc.com/scorch9722

Use my link for free steam wallet codes if you want


#99
Raklian

    An Immortal In The Making

  • Moderators
  • 6,512 posts
  • Location: Raleigh, NC

 

And yes, I've no doubt that humans will eventually stage a comeback against AI but only with things like neural augmentation. Neural laces + cortical modems.

 

 

Even with all forms of augmentation short of being fully AI, we'll still be at a huge disadvantage on all fronts. AI does not concern itself with or agonize over losing humanity in the process, so it'll recursively improve itself at full speed, without limitations.

 

That's why we'll always be behind the curve, getting exponentially further behind with time as long as we hold on to the anchor that's dragging the ocean floor - that's humanity in the essential sense.

 

We could "claim" we're better than AI at being human... but it's still a shallow boast. A sufficiently advanced AI can mimic being human perfectly and do much more at the same time. We will be utterly outmatched and outclassed. The only redeeming quality we have is that we are labelled as homo sapiens/evolutis, a species classified among the countless others in the animal kingdom. AI will merely recognize us as such.


  • Yuli Ban likes this
What are you without the sum of your parts?

#100
Jakob

    Fenny-Eyed Slubber-Yuck

  • Members
  • 5,239 posts
  • Location: In the Basket of Deplorables

Yeah, go on, lie down on the train tracks and get crushed. Cultist.


Click 'show' to see quotes from great luminaries.





