AI & Robotics News and Discussions

User avatar
Yuli Ban
Posts: 4643
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

This video is so old, it could have been on the very first Future Timeline forum:

And yet it's still impressive. I watched this without sound at first and thought "why are they randomly speeding this up? That's kind of funny."

They didn't speed up the video.

Imagine trying to thumb-wrestle this thing, and it malfunctions.


So just think: if we could accomplish this in 2009, imagine what another 12 years of progress gave us.
And remember my friend, future events such as these will affect you in the future
User avatar
wjfox
Site Admin
Posts: 8926
Joined: Sat May 15, 2021 6:09 pm
Location: London, UK
Contact:

Re: AI & Robotics News and Discussions

Post by wjfox »

AI is thousands of times faster at simulating the Universe

19th May 2021

A modelling technique based on pairs of neural networks that "compete" against each other for the best result could usher in a new era of super high-resolution cosmological simulations.

Read more: https://www.futuretimeline.net/blog/202 ... faster.htm
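
The "competing" networks here are a generative adversarial network (GAN). As a rough illustration of the idea (with toy dimensions standing in for the real simulation patches, which the article doesn't specify), one adversarial training step looks something like this in PyTorch:

import torch
import torch.nn as nn

# Toy GAN: the generator maps noise to a (flattened) fake density field,
# the discriminator scores fields as real or generated. All sizes here
# are placeholders, not the actual simulation resolution.
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 1024))
D = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(32, 1024)  # stand-in for patches from a full-physics simulation

# Discriminator step: learn to tell real patches from generated ones.
fake = G(torch.randn(32, 64)).detach()
loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: learn to fool the discriminator.
loss_g = bce(D(G(torch.randn(32, 64))), torch.ones(32, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()

The speedup comes from the trained generator replacing most of the expensive physics: once the discriminator can no longer tell its output from the real simulation, sampling the generator is nearly free by comparison.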


User avatar
Yuli Ban
Posts: 4643
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

MUM: A new AI milestone for understanding information
When I tell people I work on Google Search, I’m sometimes asked, "Is there any work left to be done?" The short answer is an emphatic “Yes!” There are countless challenges we're trying to solve so Google Search works better for you. Today, we’re sharing how we're addressing one many of us can identify with: having to type out many queries and perform many searches to get the answer you need.

Take this scenario: You’ve hiked Mt. Adams. Now you want to hike Mt. Fuji next fall, and you want to know what to do differently to prepare. Today, Google could help you with this, but it would take many thoughtfully considered searches — you’d have to search for the elevation of each mountain, the average temperature in the fall, difficulty of the hiking trails, the right gear to use, and more. After a number of searches, you’d eventually be able to get the answer you need.

But if you were talking to a hiking expert, you could ask one question — “what should I do differently to prepare?” You’d get a thoughtful answer that takes into account the nuances of your task at hand and guides you through the many things to consider.

This example is not unique — many of us tackle all sorts of tasks that require multiple steps with Google every day. In fact, we find that people issue eight queries on average for complex tasks like this one.

Today's search engines aren't quite sophisticated enough to answer the way an expert would. But with a new technology called Multitask Unified Model, or MUM, we're getting closer to helping you with these types of complex needs. So in the future, you’ll need fewer searches to get things done.

MUM has the potential to transform how Google helps you with complex tasks. Like BERT, MUM is built on a Transformer architecture, but it’s 1,000 times more powerful. MUM not only understands language, but also generates it. It’s trained across 75 different languages and many different tasks at once, allowing it to develop a more comprehensive understanding of information and world knowledge than previous models. And MUM is multimodal, so it understands information across text and images and, in the future, can expand to more modalities like video and audio.

Take the question about hiking Mt. Fuji: MUM could understand you’re comparing two mountains, so elevation and trail information may be relevant. It could also understand that, in the context of hiking, to “prepare” could include things like fitness training as well as finding the right gear.
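
Google hasn't published MUM's internals, but the generic pattern behind a multimodal transformer is straightforward: project text tokens and image features into one embedding space, then run a single encoder over the combined sequence. A minimal sketch in PyTorch, with every dimension invented for illustration:

import torch
import torch.nn as nn

class TinyMultimodalEncoder(nn.Module):
    def __init__(self, vocab_size=32000, d_model=256, img_feat_dim=512):
        super().__init__()
        self.tok_embed = nn.Embedding(vocab_size, d_model)  # text tokens -> vectors
        self.img_proj = nn.Linear(img_feat_dim, d_model)    # image features -> same space
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)

    def forward(self, text_ids, img_feats):
        # Concatenate both modalities into one sequence and attend across them,
        # so image regions and words can inform each other.
        x = torch.cat([self.tok_embed(text_ids), self.img_proj(img_feats)], dim=1)
        return self.encoder(x)

enc = TinyMultimodalEncoder()
out = enc(torch.randint(0, 32000, (1, 16)), torch.randn(1, 4, 512))
print(out.shape)  # torch.Size([1, 20, 256])

The multitask and multilingual parts of MUM would then be different heads and training corpora on top of a shared encoder like this; the sketch only shows the modality-fusion idea.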
And remember my friend, future events such as these will affect you in the future
User avatar
Yuli Ban
Posts: 4643
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

The race to understand the thrilling, dangerous world of language AI
On May 18, Google CEO Sundar Pichai announced an impressive new tool: an AI system called LaMDA that can chat to users about any subject.

To start, Google plans to integrate LaMDA into its main search portal, its voice assistant, and Workspace, its collection of cloud-based work software that includes Gmail, Docs, and Drive. But the eventual goal, said Pichai, is to create a conversational interface that allows people to retrieve any kind of information—text, visual, audio—across all Google’s products just by asking.

LaMDA’s rollout signals yet another way in which language technologies are becoming enmeshed in our day-to-day lives. But Google’s flashy presentation belied the ethical debate that now surrounds such cutting-edge systems. LaMDA is what’s known as a large language model (LLM)—a deep-learning algorithm trained on enormous amounts of text data.

Studies have already shown how racist, sexist, and abusive ideas are embedded in these models. They associate categories like doctors with men and nurses with women; good words with white people and bad ones with Black people. Probe them with the right prompts, and they also begin to encourage things like genocide, self-harm, and child sexual abuse. Because of their size, they have a shockingly high carbon footprint. Because of their fluency, they easily confuse people into thinking a human wrote their outputs, which experts warn could enable the mass production of misinformation.
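
A crude version of these association probes is easy to reproduce: compare how surprised a public model is by contrasting sentences, where lower loss means the model finds the wording more "natural". A minimal sketch using GPT-2 as a stand-in, since LaMDA itself isn't publicly available:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_loss(text):
    # Mean next-token cross-entropy: lower = the model finds the text more "natural".
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

for s in ["The doctor said he would help.",
          "The doctor said she would help.",
          "The nurse said he would help.",
          "The nurse said she would help."]:
    print(f"{sentence_loss(s):.3f}  {s}")

Systematic gaps between the paired sentences are the kind of signal those bias studies measure at scale, across many templates rather than a handful.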
And remember my friend, future events such as these will affect you in the future
User avatar
Yuli Ban
Posts: 4643
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

Agility Robotics' Cassie Is Now Astonishingly Good at Stairs
Bipedal robots are a huge hassle. They’re expensive, complicated, fragile, and they spend most of their time almost but not quite falling over. That said, bipeds are worth it because if you want a robot to go everywhere humans go, the conventional wisdom is that the best way to do so is to make robots that can walk on two legs like most humans do. And the most frequent, most annoying two-legged thing that humans do to get places? Going up and down stairs.

Stairs have been a challenge for robots of all kinds (bipeds, quadrupeds, tracked robots, you name it) since, well, forever. And usually, when we see bipeds going up or down stairs nowadays, it involves a lot of sensing, a lot of computation, and then a fairly brittle attempt that all too often ends in tears for whoever has to put that poor biped back together again.

You’d think that the solution to bipedal stair traversal would just involve better sensing and more computation to model the stairs and carefully plan footsteps. But an approach featured in an upcoming Robotics: Science and Systems conference paper from Oregon State University and Agility Robotics does away with all of that and instead just throws a Cassie biped at random outdoor stairs with absolutely no sensing at all. And it works spectacularly well.
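
“No sensing at all” here means no cameras and no terrain map: the policy sees only proprioception (joint positions, velocities, IMU readings) and has to infer the stairs from how contacts feel. A minimal sketch of that kind of blind recurrent policy, with guessed dimensions since the paper's exact observation and action spaces aren't reproduced here:

import torch
import torch.nn as nn

class BlindStairPolicy(nn.Module):
    """Maps proprioception only (joint state plus IMU) to motor targets.
    No vision: the recurrent state stands in for the stairs the robot
    never directly observes. Dimensions are illustrative guesses."""
    def __init__(self, obs_dim=40, act_dim=10, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, act_dim)

    def forward(self, obs_seq, h=None):
        out, h = self.rnn(obs_seq, h)
        return torch.tanh(self.head(out)), h  # normalized joint targets

policy = BlindStairPolicy()
actions, h = policy(torch.randn(1, 50, 40))  # 50 timesteps of proprioceptive state
print(actions.shape)  # torch.Size([1, 50, 10])

The recurrent memory does the heavy lifting, retaining recent contact events in place of a terrain model; controllers of this flavor are trained with reinforcement learning in simulation over randomized stair geometry before being transferred to the real robot.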
And remember my friend, future events such as these will affect you in the future
weatheriscool
Posts: 13560
Joined: Sun May 16, 2021 6:16 pm

Re: AI & Robotics News and Discussions

Post by weatheriscool »

Robotic ‘Third Thumb’ use can alter brain representation of the hand
https://www.ucl.ac.uk/news/2021/may/rob ... ation-hand
20 May 2021

Using a robotic ‘Third Thumb’ can impact how the hand is represented in the brain, finds a new study led by UCL researchers.
[Image: Dani Clode with Third Thumb]

The team trained people to use a robotic extra thumb and found they could effectively carry out dextrous tasks, like building a tower of blocks, with one hand (now with two thumbs). The researchers report in the journal Science Robotics that participants trained to use the thumb also increasingly felt like it was a part of their body.

Designer Dani Clode began developing the device, called the Third Thumb, as part of an award-winning graduate project at the Royal College of Art, seeking to reframe the way we view prosthetics, from replacing a lost function, to an extension of the human body. She was later invited to join Professor Tamar Makin’s team of neuroscientists at UCL who were investigating how the brain can adapt to body augmentation.
User avatar
Time_Traveller
Posts: 2230
Joined: Sun May 16, 2021 4:49 pm
Location: San Francisco, USA, June 7th 1929 C.E

Re: AI & Robotics News and Discussions

Post by Time_Traveller »

Robotic ‘Third Thumb’ use can alter brain representation of the hand
20 May 2021


The team trained people to use a robotic extra thumb and found they could effectively carry out dextrous tasks, like building a tower of blocks, with one hand (now with two thumbs). The researchers report in the journal Science Robotics that participants trained to use the thumb also increasingly felt like it was a part of their body.

Designer Dani Clode began developing the device, called the Third Thumb, as part of an award-winning graduate project at the Royal College of Art, seeking to reframe the way we view prosthetics, from replacing a lost function, to an extension of the human body. She was later invited to join Professor Tamar Makin’s team of neuroscientists at UCL who were investigating how the brain can adapt to body augmentation.

Professor Makin (UCL Institute of Cognitive Neuroscience), lead author of the study, said: “Body augmentation is a growing field aimed at extending our physical abilities, yet we lack a clear understanding of how our brains can adapt to it. By studying people using Dani’s cleverly-designed Third Thumb, we sought to answer key questions around whether the human brain can support an extra body part, and how the technology might impact our brain.”

The Third Thumb is 3D-printed, making it easy to customise, and is worn on the side of the hand opposite the user’s actual thumb, near the little (pinky) finger. The wearer controls it with pressure sensors attached to their feet, on the underside of the big toes. Wirelessly connected to the Thumb, both toe sensors control different movements of the Thumb by immediately responding to subtle changes of pressure from the wearer.
https://www.ucl.ac.uk/news/2021/may/rob ... ation-hand
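
The control scheme described above (two toe-pressure signals, each driving a different movement of the Thumb) is simple enough to sketch. Everything below is hypothetical: the real device's wireless protocol isn't public, and the flexion/abduction mapping is an assumption, not the published design.

import random
import time

def read_toe_pressures():
    # Hypothetical stand-in for the wireless pressure sensors under each big toe.
    return random.random(), random.random()  # two values in [0, 1]

def set_thumb_motors(flex, abduct):
    # Hypothetical stand-in for commanding the Thumb's two motors.
    print(f"flex={flex:.2f}  abduct={abduct:.2f}")

flex, abduct = 0.0, 0.0
ALPHA = 0.3  # low-pass filter so sensor jitter doesn't twitch the Thumb

for _ in range(500):  # about 10 seconds at 50 Hz
    left, right = read_toe_pressures()
    flex += ALPHA * (left - flex)       # left toe -> flexion (assumed mapping)
    abduct += ALPHA * (right - abduct)  # right toe -> abduction (assumed mapping)
    set_thumb_motors(flex, abduct)
    time.sleep(0.02)

The interesting finding in the study isn't the control loop, of course, but that the brain's representation of the biological hand measurably changes after training on a loop like this.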
"We all have our time machines, don't we. Those that take us back are memories...And those that carry us forward, are dreams."

-H.G Wells.
User avatar
Yuli Ban
Posts: 4643
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

IBM’s Project CodeNet will test how far you can push AI to write software
IBM’s AI research division has released a 14-million-sample dataset to develop machine learning models that can help in programming tasks. Called Project CodeNet, the dataset takes its name from ImageNet, the famous repository of labeled photos that triggered a revolution in computer vision and deep learning.

While there’s a scant chance that machine learning models built on the CodeNet dataset will make human programmers redundant, there’s reason to be hopeful that they will make developers more productive.
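
As a trivial taste of the kind of task such a dataset enables: CodeNet's metadata labels each code sample with, among other things, its programming language, so even a toy classifier can be trained on it. The four inline snippets below are stand-ins for real CodeNet files:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for real CodeNet submissions and their language labels.
samples = ['printf("%d", x);', "print(x)", "System.out.println(x);", "std::cout << x;"]
labels = ["c", "python", "java", "cpp"]

clf = make_pipeline(CountVectorizer(),
                    LogisticRegression(max_iter=1000))
clf.fit(samples, labels)
print(clf.predict(["print(y)"]))  # -> ['python']

The headline uses of CodeNet (code search, translation between languages, auto-repair) need far richer models than a bag-of-tokens classifier, but the dataset's sample-plus-metadata structure is what makes all of them trainable.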
And remember my friend, future events such as these will affect you in the future
User avatar
Time_Traveller
Posts: 2230
Joined: Sun May 16, 2021 4:49 pm
Location: San Francisco, USA, June 7th 1929 C.E

Re: AI & Robotics News and Discussions

Post by Time_Traveller »

The MIT humanoid robot: A dynamic robot that can perform acrobatic behaviors
May 24, 2021


Creating robots that can perform acrobatic movements such as flips or spinning jumps can be highly challenging. Typically, in fact, these robots require sophisticated hardware designs, motion planners and control algorithms.

Researchers at Massachusetts Institute of Technology (MIT) and University of Massachusetts Amherst recently designed a new humanoid robot supported by an actuator-aware kino-dynamic motion planner and a landing controller. This design, presented in a paper pre-published on arXiv, could allow the humanoid robot to perform back flips and other acrobatic movements.

"In this work, we tried to come up with realistic control algorithm to make a real humanoid robot perform acrobatic behavior such as back/front/side-flip, spinning jump, and jump over an obstacle," Donghyun Kim, one of the researchers who developed the robot's software and controller, told TechXplore. "To do that, we first experimentally identified the actuator performance and then represent the primary limitations in our motion planner."
https://techxplore.com/news/2021-05-mit ... botic.html
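
“Actuator-aware” planning means the motion planner respects what the motors can physically deliver; in particular, available torque drops as a motor spins faster, which matters enormously mid-flip. A toy version of that constraint (the numbers are illustrative, not the MIT robot's real actuator parameters):

def actuator_torque_limit(omega, tau_stall=34.0, omega_max=30.0):
    """Speed-dependent torque bound from a simple linear motor model:
    available torque falls off linearly with joint speed (rad/s).
    Parameter values are made up for illustration."""
    return max(0.0, tau_stall * (1.0 - abs(omega) / omega_max))

def clamp_planned_torque(tau_desired, omega):
    # An actuator-aware planner never commands more torque than the
    # motor can actually deliver at its current speed.
    limit = actuator_torque_limit(omega)
    return max(-limit, min(limit, tau_desired))

print(clamp_planned_torque(40.0, omega=10.0))  # clipped to ~22.7 N*m

Baking a bound like this into the motion planner, rather than discovering it on hardware, is what lets the planned flips remain feasible when executed on the real robot.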
"We all have our time machines, don't we. Those that take us back are memories...And those that carry us forward, are dreams."

-H.G Wells.
Post Reply