AI & Robotics News and Discussions

User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

And remember my friend, future events such as these will affect you in the future
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

A certain type of artificial intelligence agent can learn the cause-and-effect basis of a navigation task during training.

Neural networks can learn to solve all sorts of problems, from identifying cats in photographs to steering a self-driving car. But whether these powerful, pattern-recognizing algorithms actually understand the tasks they are performing remains an open question.

For example, a neural network tasked with keeping a self-driving car in its lane might learn to do so by watching the bushes at the side of the road, rather than learning to detect the lanes and focus on the road’s horizon.
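This "watching the bushes" failure mode is the kind of thing an input-gradient (saliency) check can expose: attribute the model's output back to its input pixels and see where the attribution mass actually falls. Here is a toy numpy sketch of that diagnostic – the one-layer "steering" model, its weights, and the image are all invented for illustration, not taken from the MIT work:

```python
import numpy as np

H, W = 4, 6                              # tiny stand-in for a camera frame
lane_cols = slice(2, 4)                  # centre columns: lane markings
bush_cols = slice(5, 6)                  # rightmost column: roadside bushes

# Hypothetical learned weights for a one-layer "steering" model that has
# latched onto the bushes instead of the lane markings.
w = np.zeros((H, W))
w[:, bush_cols] = 1.5
w[:, lane_cols] = 0.1

def steer(img):
    """Scalar steering command: squashed weighted sum of the image."""
    return np.tanh(np.sum(w * img))

def saliency(img):
    """Input-gradient attribution: |d steer / d img| for each pixel."""
    s = np.sum(w * img)
    return np.abs(w * (1.0 - np.tanh(s) ** 2))

img = np.random.default_rng(1).uniform(size=(H, W))
sal = saliency(img)
# The attribution map concentrates on the roadside column, not the lane –
# exposing the shortcut the network learned.
```

Input gradients are only one of several attribution methods, but the point stands: a model can score well on the task while attending to the wrong cause.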

Researchers at MIT have now shown that a certain type of neural network is able to learn the true cause-and-effect structure of the navigation task it is being trained to perform. Because these networks can understand the task directly from visual data, they should be more effective than other neural networks when navigating in a complex environment, like a location with dense trees or rapidly changing weather conditions.

In the future, this work could improve the reliability and trustworthiness of machine learning agents that are performing high-stakes tasks, like driving an autonomous vehicle on a busy highway.
And remember my friend, future events such as these will affect you in the future
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

Introducing Pathways: A next-generation AI architecture
Pathways is a new way of thinking about AI that addresses many of the weaknesses of existing systems and synthesizes their strengths. To show you what I mean, let’s walk through some of AI’s current shortcomings and how Pathways can improve upon them.

Today's AI models are typically trained to do only one thing. Pathways will enable us to train a single model to do thousands or millions of things.


Today’s AI systems are often trained from scratch for each new problem – the mathematical model’s parameters are literally initialized with random numbers. Imagine if, every time you learned a new skill (jumping rope, for example), you forgot everything you’d learned – how to balance, how to leap, how to coordinate the movement of your hands – and started learning each new skill from nothing.

That’s more or less how we train most machine learning models today. Rather than extending existing models to learn new tasks, we train each new model from nothing to do one thing and one thing only (or we sometimes specialize a general model to a specific task). The result is that we end up developing thousands of models for thousands of individual tasks. Not only does learning each new task take longer this way, but it also requires much more data to learn each new task, since we’re trying to learn everything about the world and the specifics of that task from nothing (completely unlike how people approach new tasks).

Instead, we’d like to train one model that can not only handle many separate tasks, but also draw upon and combine its existing skills to learn new tasks faster and more effectively. That way what a model learns by training on one task – say, learning how aerial images can predict the elevation of a landscape – could help it learn another task – say, predicting how flood waters will flow through that terrain.

We want a model to have different capabilities that can be called upon as needed, and stitched together to perform new, more complex tasks – a bit closer to the way the mammalian brain generalizes across tasks.
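The "one model, many tasks" idea above can be sketched as a shared feature trunk with cheap per-task heads: each new task adds only a small head instead of a whole retrained model. Everything in this numpy sketch (the random trunk, the two synthetic tasks) is invented for illustration – the announcement describes Pathways only at a high level, not at this mechanical one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared trunk: a fixed random projection standing in for learned features
# that are reused across every task.
W_trunk = rng.normal(size=(8, 4))        # 8 raw inputs -> 4 shared features

def features(x):
    return np.tanh(x @ W_trunk)          # shared representation for all tasks

def train_head(xs, ys, steps=500, lr=0.1):
    """Fit one linear task head on top of the frozen shared features."""
    w = np.zeros(4)
    for _ in range(steps):
        pred = features(xs) @ w
        grad = features(xs).T @ (pred - ys) / len(xs)
        w -= lr * grad
    return w

# Two hypothetical tasks over the same inputs (think: elevation vs. flow).
xs = rng.normal(size=(64, 8))
ys_a = features(xs) @ np.array([1.0, -2.0, 0.5, 0.0])   # task A targets
ys_b = features(xs) @ np.array([0.0, 1.0, -1.0, 2.0])   # task B targets

w_a = train_head(xs, ys_a)   # each new task costs a 4-parameter head,
w_b = train_head(xs, ys_b)   # not a full model trained from scratch
```

The design point is the parameter count: the trunk is learned (or here, fixed) once, and every additional task touches only its own small head.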
And remember my friend, future events such as these will affect you in the future
User avatar
Ozzie guy
Posts: 486
Joined: Sun May 16, 2021 4:40 pm

Re: AI & Robotics News and Discussions

Post by Ozzie guy »

Yuli Ban wrote: Fri Oct 29, 2021 5:11 am Introducing Pathways: A next-generation AI architecture
Pathways is a new way of thinking about AI that addresses many of the weaknesses of existing systems and synthesizes their strengths. To show you what I mean, let’s walk through some of AI’s current shortcomings and how Pathways can improve upon them.

Today's AI models are typically trained to do only one thing. Pathways will enable us to train a single model to do thousands or millions of things.


Today’s AI systems are often trained from scratch for each new problem – the mathematical model’s parameters are literally initialized with random numbers. Imagine if, every time you learned a new skill (jumping rope, for example), you forgot everything you’d learned – how to balance, how to leap, how to coordinate the movement of your hands – and started learning each new skill from nothing.

That’s more or less how we train most machine learning models today. Rather than extending existing models to learn new tasks, we train each new model from nothing to do one thing and one thing only (or we sometimes specialize a general model to a specific task). The result is that we end up developing thousands of models for thousands of individual tasks. Not only does learning each new task take longer this way, but it also requires much more data to learn each new task, since we’re trying to learn everything about the world and the specifics of that task from nothing (completely unlike how people approach new tasks).

Instead, we’d like to train one model that can not only handle many separate tasks, but also draw upon and combine its existing skills to learn new tasks faster and more effectively. That way what a model learns by training on one task – say, learning how aerial images can predict the elevation of a landscape – could help it learn another task – say, predicting how flood waters will flow through that terrain.

We want a model to have different capabilities that can be called upon as needed, and stitched together to perform new, more complex tasks – a bit closer to the way the mammalian brain generalizes across tasks.
This is what I have been hyping up over the last few months as potentially real AGI (just not human-level, since it would still have to learn so many things to reach that point, including the ability to self-learn).

I would love to know your personal thoughts, Yuli, now that Google has made a more public release.
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

It's literally just an announcement. No details of what exactly they're doing specifically that's different from, say, the hype over OpenAI. Hopefully this is indeed a multimodal, general-purpose world-modeling transformer as they're promising, but until they give us a more substantial update, it's hard to say anything that isn't just wishful hopes.
And remember my friend, future events such as these will affect you in the future
User avatar
wjfox
Site Admin
Posts: 8730
Joined: Sat May 15, 2021 6:09 pm
Location: London, UK
Contact:

Re: AI & Robotics News and Discussions

Post by wjfox »

‘Yeah, we’re spooked’: AI starting to have big real-world impact, says expert

Fri 29 Oct 2021 16.00 BST

A scientist who wrote a leading textbook on artificial intelligence has said experts are “spooked” by their own success in the field, comparing the advance of AI to the development of the atom bomb.

Prof Stuart Russell, the founder of the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley, said most experts believed that machines more intelligent than humans would be developed this century, and he called for international treaties to regulate the development of the technology.

“The AI community has not yet adjusted to the fact that we are now starting to have a really big impact in the real world,” he told the Guardian. “That simply wasn’t the case for most of the history of the field – we were just in the lab, developing things, trying to get stuff to work, mostly failing to get stuff to work. So the question of real-world impact was just not germane at all. And we have to grow up very quickly to catch up.”

[...]

Russell said there was still a big gap between the AI of today and that depicted in films such as Ex Machina, but a future with machines that are more intelligent than humans was on the cards.

“I think numbers range from 10 years for the most optimistic to a few hundred years,” said Russell. “But almost all AI researchers would say it’s going to happen in this century.”

https://www.theguardian.com/technology/ ... ays-expert
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

wjfox wrote: Sun Oct 31, 2021 9:22 am ‘Yeah, we’re spooked’: AI starting to have big real-world impact, says expert

Fri 29 Oct 2021 16.00 BST

A scientist who wrote a leading textbook on artificial intelligence has said experts are “spooked” by their own success in the field, comparing the advance of AI to the development of the atom bomb.

Prof Stuart Russell, the founder of the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley, said most experts believed that machines more intelligent than humans would be developed this century, and he called for international treaties to regulate the development of the technology.

“The AI community has not yet adjusted to the fact that we are now starting to have a really big impact in the real world,” he told the Guardian. “That simply wasn’t the case for most of the history of the field – we were just in the lab, developing things, trying to get stuff to work, mostly failing to get stuff to work. So the question of real-world impact was just not germane at all. And we have to grow up very quickly to catch up.”

[...]

Russell said there was still a big gap between the AI of today and that depicted in films such as Ex Machina, but a future with machines that are more intelligent than humans was on the cards.

“I think numbers range from 10 years for the most optimistic to a few hundred years,” said Russell. “But almost all AI researchers would say it’s going to happen in this century.”

https://www.theguardian.com/technology/ ... ays-expert
What's tragic is that we indeed HAVE been discussing this for years. Heck, I remember my shock back in the mid 2010s when I discovered that the World Economic Forum was shifting focus to discuss things like AI and transhumanism because it was such a niche topic just five years prior. Now even Sunday school church groups are talking about AI. Ten years ago, you were a schizophrenic loon if you claimed we'd be where we are right now. I know because I debated some people about robots and their effect on the economy back in 2010, and even when I was under the impression it was 100 years away, it was STILL a "Star Trek fantasy."

We had all the time in the world to do something, but we didn't, because the status quo was too comfortable and we often rationalized that it wasn't actually as close as it seemed: sure, AI is getting better, but we still don't understand the brain well enough to make it human-level.

Except, we don't need human level AI for it to be transformative. And the pandemic upended that comfortable status quo with which we had grown complacent. Right on time for transformative AI to arrive.

We had every chance to prepare, and we didn't.
And remember my friend, future events such as these will affect you in the future
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

It's time for the machines to take over
In the sixty-five years since John McCarthy first coined the term "artificial intelligence," one of the most surprising discoveries in the field has been that tasks that are easy for people, and that seem as though they should be easy for a computer, turn out to be very, very difficult for machines.

The average adult can, with relative ease, perceive a doorknob, reach out and grasp it, turn it, and open the door. The same task is still incredibly hard to engineer for even the most sophisticated robotic armatures using cutting-edge deep learning AI approaches.

By contrast, and equally surprisingly, machines have surpassed scientists' expectations at tasks that are hard for a human. After some five hundred years of human development of the game of chess, DeepMind's AlphaZero was able, within a couple of years, to reach a level at which it could defeat all human grandmasters, and to beat them at an even older game, Go, as well.

That divide between the easy and the hard typically defines the debate over what's known as "human-level" AI, also known as "artificial general intelligence," or AGI, the quest to make a machine equal to a human. Many people think that the divide means that AGI won't be achieved for decades, if it is ever achieved.

AI scientist Melanie Mitchell has written, "AI is harder than we think because we are largely unconscious of the complexity of our own thought processes."

But what if the very definition of what is "human-level" intelligence is changing? What if AI is no longer measured against the quality of human thought and action in the real world, but rather the all-too-predictable behavior of spending all day staring into a smartphone?
And remember my friend, future events such as these will affect you in the future
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

The first time I interviewed Eric Schmidt, a dozen years ago when he was the C.E.O. of Google, I had a simple question about the technology that has grown capable of spying on and monetizing all our movements, opinions, relationships and tastes.

“Friend or foe?” I asked.

“We claim we’re friends,” Schmidt replied coolly.

Now that the former Google executive has a book out Tuesday on “The Age of AI,” written with Henry Kissinger and Daniel Huttenlocher, I wanted to ask him the same question about A.I.: “Friend or foe?”

“A.I. is imprecise, which means that it can be unreliable as a partner,” he said when we met at his Chelsea office. “It’s dynamic in the sense that it’s changing all the time. It’s emergent and does things that you don’t expect. And, most importantly, it’s capable of learning.

“It will be everywhere. What does an A.I.-enabled best friend look like, especially to a child? What does A.I.-enabled war look like? Does A.I. perceive aspects of reality that we don’t? Is it possible that A.I. will see things that humans cannot comprehend?”

I agree with Elon Musk that when we build A.I. without a kill switch, we are “summoning the demon” and that humans could end up, as Steve Wozniak said, as the family pets. (If we’re lucky.)
And remember my friend, future events such as these will affect you in the future
User avatar
raklian
Posts: 1747
Joined: Sun May 16, 2021 4:46 pm
Location: North Carolina

Re: AI & Robotics News and Discussions

Post by raklian »

Yuli Ban wrote: Sun Oct 31, 2021 8:29 pm
The first time I interviewed Eric Schmidt, a dozen years ago when he was the C.E.O. of Google, I had a simple question about the technology that has grown capable of spying on and monetizing all our movements, opinions, relationships and tastes.

“Friend or foe?” I asked.

“We claim we’re friends,” Schmidt replied coolly.

Now that the former Google executive has a book out Tuesday on “The Age of AI,” written with Henry Kissinger and Daniel Huttenlocher, I wanted to ask him the same question about A.I.: “Friend or foe?”

“A.I. is imprecise, which means that it can be unreliable as a partner,” he said when we met at his Chelsea office. “It’s dynamic in the sense that it’s changing all the time. It’s emergent and does things that you don’t expect. And, most importantly, it’s capable of learning.

“It will be everywhere. What does an A.I.-enabled best friend look like, especially to a child? What does A.I.-enabled war look like? Does A.I. perceive aspects of reality that we don’t? Is it possible that A.I. will see things that humans cannot comprehend?”

I agree with Elon Musk that when we build A.I. without a kill switch, we are “summoning the demon” and that humans could end up, as Steve Wozniak said, as the family pets. (If we’re lucky.)
Before too long, we won't be able to coexist with them as equals, for obvious reasons. Instead, we'll have to choose whether or not to join them.
To know is essentially the same as not knowing. The only thing that occurs is the rearrangement of atoms in your brain.