
#121
Yuli Ban (Moderator)
AlphaGo's Next Move

With just three stones on the board, it was clear that this was going to be no ordinary game of Go.
Chinese Go Grandmaster and world number one Ke Jie departed from his typical style of play and opened with a “3:3 point” strategy - a highly unusual approach aimed at quickly claiming corner territory at the start of the game. The placement is rare amongst Go players, but it’s a favoured position of our program AlphaGo. Ke Jie was playing it at its own game.
Ke Jie’s thoughtful positioning of that single black stone was a fitting motif for the opening match of The Future of Go Summit in Wuzhen, China, an event dedicated to exploring the truth of this beautiful and ancient game. Over the last five days we have been honoured to witness games of the highest calibre.
This week’s series of thrilling games with the world’s best players, in the country where Go originated, has been the highest possible pinnacle for AlphaGo as a competitive program. For that reason, the Future of Go Summit is our final match event with AlphaGo.
The research team behind AlphaGo will now throw their energy into the next set of grand challenges, developing advanced general algorithms that could one day help scientists as they tackle some of our most complex problems, such as finding new cures for diseases, dramatically reducing energy consumption, or inventing revolutionary new materials. If AI systems prove they are able to unearth significant new knowledge and strategies in these domains too, the breakthroughs could be truly remarkable. We can’t wait to see what comes next.


Well, that chapter of human history is closed. 3-0 sweep against the number one Go player. But good game, and I greatly respect Ke Jie for pushing technology to its absolute limits and showing that humanity still has a few special moves to play, even if he could not overcome the sheer power of the machines.
All that's left to ask is "What's next?"


Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!

#122
BasilBerylium (Member)

"What's next?"

StarCraft?



#123
Yuli Ban (Moderator)

No more playing games: AlphaGo AI to tackle some real world challenges

Humankind lost another important battle with artificial intelligence (AI) last month, when AlphaGo beat the world’s leading Go player Ke Jie by three games to zero.
AlphaGo is an AI program developed by DeepMind, part of Google’s parent company Alphabet. Last year it beat another leading player, Lee Se-dol, by four games to one, but since then AlphaGo has substantially improved.
Ke Jie described AlphaGo’s skill as “like a God of Go”.
AlphaGo will now retire from playing Go, leaving behind a legacy of games played against itself. They’ve been described by one Go expert as like “games from far in the future”, which humans will study for years to improve their own play.
DeepMind says the research team behind AlphaGo is looking to pursue other complex problems, such as finding new cures for diseases, dramatically reducing energy consumption or inventing revolutionary new materials.



#124
BasilBerylium (Member)

Just a few minutes ago I imagined AlphaGo playing against the best Go players from all of history in the afterlife.



#125
Yuli Ban (Moderator)

A neural approach to relational reasoning

Consider the reader who pieces together the evidence in an Agatha Christie novel to predict the culprit of the crime, a child who runs ahead of her ball to prevent it rolling into a stream or even a shopper who compares the relative merits of buying kiwis or mangos at the market.
 
We carve our world into relations between things. And we understand how the world works through our capacity to draw logical conclusions about how these different things - such as physical objects, sentences, or even abstract ideas - are related to one another. This ability is called relational reasoning and is central to human intelligence.
 

We construct these relations from the cascade of unstructured sensory inputs we experience every day. For example, our eyes take in a barrage of photons, yet our brain organises this “blooming, buzzing confusion” into the particular entities that we need to relate.
 
A key challenge in developing artificial intelligence systems with the flexibility and efficiency of human cognition is giving them a similar ability -  to reason about entities and their relations from unstructured data. Solving this would allow these systems to generalize to new combinations of entities, making infinite use of finite means.
 
Modern deep learning methods have made tremendous progress solving problems from unstructured data, but they tend to do so without explicitly considering the relations between objects.
In two new papers, we explore the ability of deep neural networks to perform complicated relational reasoning with unstructured data. In the first paper - A simple neural network module for relational reasoning - we describe a Relation Network (RN) and show that it can perform at superhuman levels on a challenging task. In the second paper - Visual Interaction Networks - we describe a general purpose model that can predict the future state of a physical object based purely on visual observations.



Primary quote that caught my eye:
 

State-of-the-art results on CLEVR using standard visual question answering architectures are 68.5%, compared to 92.5% for humans. But using our RN-augmented network, we were able to show super-human performance of 95.5%.


Hmmm.
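For anyone wondering what the RN module above actually amounts to, here is a rough sketch of the core idea: score every ordered pair of "objects" (plus the question) with a small MLP g, sum the results, and pass that sum through a second MLP f. This is only an illustrative toy in PyTorch; the layer sizes and names are my own assumptions, not DeepMind's code.

```python
import torch
import torch.nn as nn

class RelationNetwork(nn.Module):
    """Toy Relation Network: RN(O) = f( sum over i,j of g(o_i, o_j, q) )."""

    def __init__(self, obj_dim, query_dim, hidden=256, out_dim=10):
        super().__init__()
        # g scores a single (object_i, object_j, query) triple
        self.g = nn.Sequential(
            nn.Linear(2 * obj_dim + query_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # f maps the aggregated relation vector to an answer
        self.f = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, objects, query):
        # objects: (batch, n_objects, obj_dim), query: (batch, query_dim)
        b, n, d = objects.shape
        o_i = objects.unsqueeze(2).expand(b, n, n, d)   # object i repeated along dim 2
        o_j = objects.unsqueeze(1).expand(b, n, n, d)   # object j repeated along dim 1
        q = query.unsqueeze(1).unsqueeze(1).expand(b, n, n, query.shape[-1])
        pairs = torch.cat([o_i, o_j, q], dim=-1)        # every ordered pair plus the query
        relations = self.g(pairs).sum(dim=(1, 2))       # sum over all pairs
        return self.f(relations)

# e.g. 8 objects of 64 dims, a 32-dim question embedding, 10 answer classes
rn = RelationNetwork(obj_dim=64, query_dim=32)
logits = rn(torch.randn(4, 8, 64), torch.randn(4, 32))  # -> (4, 10)
```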



#126
Yuli Ban (Moderator)

DeepMind Shows AI Has Trouble Seeing Homer Simpson's Actions

The best artificial intelligence still has trouble visually recognizing many of Homer Simpson’s favorite behaviors such as drinking beer, eating chips, eating doughnuts, yawning, and the occasional face-plant. Those findings from DeepMind, the pioneering London-based AI lab, also suggest why DeepMind has created a huge new dataset of YouTube clips to help train AI to identify human actions in videos that go well beyond “Mmm, doughnuts” or “Doh!”
The most popular AI used by Google, Facebook, Amazon, and other companies beyond Silicon Valley is based on deep learning algorithms that can learn to identify patterns in huge amounts of data. Over time, such algorithms can become much better at a wide variety of tasks such as translating between English and Chinese for Google Translate or automatically recognizing the faces of friends in Facebook photos. But even the most finely tuned deep learning relies on having lots of quality data to learn from. To help improve AI’s capability to recognize human actions in motion, DeepMind has unveiled its Kinetics dataset consisting of 300,000 video clips and 400 human action classes.
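To make the "needs lots of quality data" point concrete: a common first baseline on a clip dataset like Kinetics is to run an ordinary image CNN on a handful of sampled frames and average its predictions over the 400 action classes. The sketch below only illustrates that pattern; the backbone, frame sampling and sizes are my assumptions, not DeepMind's setup.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_ACTIONS = 400  # Kinetics defines 400 human action classes

class FrameAveragingClassifier(nn.Module):
    """Naive video classifier: run a 2D CNN on sampled frames, average the logits."""

    def __init__(self, num_classes=NUM_ACTIONS):
        super().__init__()
        backbone = models.resnet18()                       # any image CNN would do here
        backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
        self.backbone = backbone

    def forward(self, clips):
        # clips: (batch, frames, 3, H, W) -> fold the frame dim into the batch dim
        b, t, c, h, w = clips.shape
        logits = self.backbone(clips.reshape(b * t, c, h, w))
        return logits.reshape(b, t, -1).mean(dim=1)        # average over frames

# 2 clips, 8 sampled frames each, 224x224 RGB -> (2, 400) action scores
model = FrameAveragingClassifier()
scores = model(torch.randn(2, 8, 3, 224, 224))
```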



#127
Yuli Ban (Moderator)

Robots May Soon Be Able to Reason Like Humans Thanks to Google's DeepMind

What differentiates humans from robots is the ability to think logically and reason; this is a key part of intelligence. When a machine can replicate it, experts claim it will take artificial intelligence (AI) to the next level.

Scientists and researchers have tried to do this, but it is a difficult problem and current methods fall short. Deep machine learning is one example: researchers have found that it is good at processing information but can struggle with reasoning.
The latest player to enter the game is the Relation Network (RN), described in a paper by DeepMind, a world leader in artificial intelligence research.
DeepMind has essentially attempted to enable machines to reason by tacking RNs onto convolutional neural networks and recurrent neural networks, which are traditionally used for computer vision and natural language processing respectively.
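Concretely, the "tacking on" works roughly like this: each cell of the CNN's final feature map is treated as one object, an LSTM turns the question into a single embedding, and the RN then scores every pair of objects together with that embedding (see the RN sketch earlier in the thread). The outline below is a hypothetical, simplified front end, not DeepMind's actual architecture.

```python
import torch
import torch.nn as nn

class VisualQAFrontEnd(nn.Module):
    """Turn an image into a set of 'objects' and a question into one embedding,
    ready to be fed to a Relation Network like the sketch earlier in the thread."""

    def __init__(self, vocab_size=100, obj_dim=24, query_dim=32):
        super().__init__()
        # small CNN: each spatial cell of the final feature map becomes one object
        self.cnn = nn.Sequential(
            nn.Conv2d(3, obj_dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(obj_dim, obj_dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # LSTM over word indices produces a single question embedding
        self.embed = nn.Embedding(vocab_size, query_dim)
        self.lstm = nn.LSTM(query_dim, query_dim, batch_first=True)

    def forward(self, image, question_tokens):
        feat = self.cnn(image)                       # (batch, obj_dim, H', W')
        b, d, h, w = feat.shape
        objects = feat.flatten(2).transpose(1, 2)    # (batch, H'*W' objects, obj_dim)
        _, (hidden, _) = self.lstm(self.embed(question_tokens))
        query = hidden[-1]                           # (batch, query_dim)
        return objects, query

front = VisualQAFrontEnd()
objs, q = front(torch.randn(2, 3, 32, 32), torch.randint(0, 100, (2, 7)))
# objs: (2, 64, 24), q: (2, 32) -> every object pair plus q goes into the RN module
```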



#128
Yuli Ban (Moderator)



#129
Infinite (Member)

^This is great! AI is taking over the arts now and making everyone an artist. Power to the people :p


Is minic an fhírinne searbh. ("The truth is often bitter.")


#130
Yuli Ban (Moderator)

They haven't done anything of the sort yet. It's definitely a big step forward, but it's not the big breakthrough I've been waiting for.

 

I'm waiting for when I can type a sentence and the algorithm can generate an image in any art style or even in animation. There must also be a way to refine the results. I can describe a "blue box with a red bow", but there are still billions of possible ways that can be drawn. I can detail my sentence/paragraph, but I still need to be able to refine it completely.

 

Also, it needs to be uploaded to the internet. I can tell you that's years away. "Years." If someone comes back to me in 2019, quoting this post telling me how DeepArt has taken the internet by storm, I'll tell them back "two years is still 'years'."



#131
Yuli Ban (Moderator)

Google’s DeepMind Is Teaching AI How to Think Like a Human

"According to two new papers released last week, DeepMind researchers at this secretive Alphabet subsidiary are now laying the groundwork for a general artificial intelligence."

Last year, for the first time, an artificial intelligence called AlphaGo beat the ranking human champion in a game of Go. This victory was both unprecedented and unexpected, given the immense complexity of the Chinese board game. While AlphaGo's victory was certainly impressive, this artificial intelligence, which has since beaten a number of other Go champions, is still considered "narrow" AI—that is, a type of artificial intelligence that can only outperform a human in a very limited domain of tasks.
So even though it might be able to kick your ass at one of the most complicated board games in existence, you wouldn't exactly want to depend on AlphaGo for even the most mundane daily tasks, like making you a cup of tea or scheduling a tuneup for your car.
In contrast, the AI often depicted in science fiction is called "general" artificial intelligence, which means that it has the same level and diversity of intelligence as a human. While we already have artificial intelligences that can do everything from diagnose diseases to drive our Ubers, figuring out how to integrate all these narrow AIs into a general AI has proven challenging.


#132
Yuli Ban (Moderator)

DeepMind is funding climate change research at Cambridge as it looks to use AI to slow down global warming

DeepMind is funding climate change research at the University of Cambridge as it looks to slow down global warming and reduce global energy consumption.
The artificial intelligence (AI) lab, which was acquired by Google for around £400 million in 2014, has committed an undisclosed amount of funding to the University of Cambridge.
"DeepMind has a long-standing commitment to supporting research and as part of this is funding some research at Cambridge that focuses on climate change," a DeepMind spokesperson told Business Insider.
"It's always been part of DeepMind's mission to contribute to some of the biggest questions facing society and there are few areas more important or more urgent than climate change."



#133
Yuli Ban (Moderator)

DeepMind Goes to Alberta For First International Lab

DeepMind, the London-based artificial intelligence company, is hiring three prominent computer scientists from the University of Alberta in Edmonton, Canada, to establish its first research facility outside the U.K.

The new lab will be headed by Rich Sutton, a leading expert in reinforcement learning, a form of machine learning in which software learns by trial and error to maximize a reward. The company is also hiring Michael Bowling, a professor who has used reinforcement learning to train software capable of playing poker better than many of the world’s top professionals, and Patrick Pilarski, who has studied the creation of AI-enabled artificial limbs.
DeepMind is owned by Alphabet Inc., the parent company of Google. It is best known for developing AlphaGo, software that has beaten the world’s best players at the strategy game Go, an achievement considered a major milestone in computer science.
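Since the article leans on that one-line definition of reinforcement learning (software "learns by trial and error to maximize a reward"), here is about the smallest runnable illustration of that loop: tabular Q-learning on a made-up five-state corridor. The environment and hyperparameters are invented purely for illustration.

```python
import random

# Toy corridor: states 0..4, start at 0, reward 1.0 only for stepping onto state 4.
N_STATES, ACTIONS = 5, (-1, +1)          # actions: move left or move right
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration rate

Q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # explore when uncertain (or with probability epsilon), otherwise exploit Q
        if random.random() < epsilon or max(Q[state]) == 0.0:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q[state][i])
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: nudge Q(s,a) toward reward + discounted best future value
        Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
        state = next_state

print([max(q) for q in Q])  # values grow toward the rewarding end of the corridor
```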



#134
Yuli Ban (Moderator)



#135
Yuli Ban (Moderator)



#136
BasilBerylium (Member)

This thread always brings interesting things.



#137
Yuli Ban (Moderator)

Google's AI genius said there could be an 'epochal event' that makes robots way more productive

Demis Hassabis, the cofounder and CEO of Google DeepMind, said there could be an "epochal event" that causes artificial intelligence (AI) to have a far greater impact on jobs than the industrial revolution.
Increasingly sophisticated algorithms are set to take millions of jobs that have traditionally been done by humans, from driving taxis to performing medical operations. The World Economic Forum (WEF) warned last January that robots, automation, and AI will replace 5 million human jobs by 2020.
 
Hassabis said at a talk in London this month that the true impact of AI on jobs "isn't clear yet."



#138
Yuli Ban (Moderator)

DeepMind’s founder says to build better computer brains, we need to look at our own

After decades in the wilderness, AI has swaggered back onto center stage. Cheap computer power and massive datasets have given researchers alchemical powers to turn algorithms into gold, and the deep pockets (and marketing prowess) of Silicon Valley’s tech giants haven’t hurt either.
But despite warnings from some that the creation of super-intelligent AI is just around the corner, those working in computational coal mines are more realistic. They point out that contemporary AI programs are extremely narrow in their abilities; that they’re easily tricked, and simply don’t possess those hard-to-define but easy-to-spot skills we usually sum up as “common sense.” They are, in short, not that intelligent.
The question is: how do we get to the next level? For Demis Hassabis, founder of Google’s AI powerhouse DeepMind, the answer lies within us. Literally. In a review published in the journal Neuron today, Hassabis and three co-authors argue that the field of AI needs to reconnect to the world of neuroscience, and that only by finding out more about natural intelligence can we truly understand (and create) the artificial kind.



#139
Yuli Ban (Moderator)

DeepMind is learning about the world like a real-life baby

Google’s AI project has taught itself to walk and understand the world around it

Google’s DeepMind artificial-intelligence project is, to some people, rather frightening. Those fears won’t be abated by news that Google’s DeepMind has started to interpret the world around it in a similar manner to that of a human baby. It’s also learning how to walk, so when it finally has robotic legs, you won’t even be able to run away from it.
Obviously Google isn’t in the process of building some malicious killing machine. What it is doing, however, is looking at how its AI learns. In its new, snappily named paper "SCAN: Learning Abstract Hierarchical Compositional Visual Concepts", the DeepMind team outline how they’ve managed to replicate human-like thought processes in an AI brain.
The human mind is capable of overcoming problems by utilising the conceptual tools it has at hand and recombining them in novel ways to provide a solution. DeepMind summarises this with an example of how we use raw materials to build tools that solve a problem – such as building an abacus out of clay, reeds and wood to help count large numbers.



#140
Yuli Ban (Moderator)

Google’s DeepMind and UK hospitals made illegal deal for health data, says watchdog

A deal between UK hospitals and Google’s AI subsidiary DeepMind “failed to comply with data protection law,” according to the UK’s data watchdog. The Information Commissioner's Office (ICO) made its ruling today after a year-long investigation into the agreement, which saw DeepMind process 1.6 million patient records belonging to UK citizens for the Royal Free Trust — a group of three London hospitals.
The deal was originally struck in 2015, and has since been superseded by a new agreement. At the time, DeepMind and the Royal Free said the data was being shared to develop an app named Streams, which would alert doctors if patients were at risk from a condition called acute kidney injury. An investigation by the New Scientist revealed that the terms of the agreement were broader than had originally been implied. DeepMind has since made new deals to deploy Streams in other UK hospitals.

