AI & Robotics News and Discussions

weatheriscool
Posts: 12967
Joined: Sun May 16, 2021 6:16 pm

Re: AI & Robotics News and Discussions

Post by weatheriscool »

A new machine-learning system helps robots understand and perform certain social interactions
https://techxplore.com/news/2021-11-mac ... tions.html
by Adam Zewe, Massachusetts Institute of Technology
Robots can deliver food on a college campus and hit a hole in one on the golf course, but even the most sophisticated robot can't perform basic social interactions that are critical to everyday human life.

MIT researchers have now incorporated certain social interactions into a framework for robotics, enabling machines to understand what it means to help or hinder one another, and to learn to perform these social behaviors on their own. In a simulated environment, a robot watches its companion, guesses what task it wants to accomplish, and then helps or hinders this other robot based on its own goals.

The researchers also showed that their model creates realistic and predictable social interactions. When they showed videos of these simulated robots interacting with one another to humans, the human viewers mostly agreed with the model about what type of social behavior was occurring.
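As a toy illustration of the observe-infer-act loop described above (not the MIT team's actual model; every name here is hypothetical), goal inference can be sketched as "which candidate goal did the watched robot make the most progress toward?", after which the observer helps or hinders that goal:

```python
import math

def infer_goal(positions, goals):
    """Guess which goal the observed agent is heading toward:
    pick the goal whose distance shrank the most over the trajectory."""
    start, end = positions[0], positions[-1]

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    progress = {g: dist(start, g) - dist(end, g) for g in goals}
    return max(progress, key=progress.get)

def social_action(inferred_goal, helpful=True):
    """Helpful agents adopt the other agent's inferred goal;
    hindering agents move to block it."""
    return ("assist", inferred_goal) if helpful else ("block", inferred_goal)

# An agent seen moving from (0,0) toward (2,2) is most likely
# heading for the goal at (5,5), so a helpful observer assists it.
goal = infer_goal([(0, 0), (1, 1), (2, 2)], [(5, 5), (0, 5)])
print(social_action(goal))  # ('assist', (5, 5))
```

The real framework learns these behaviors rather than hard-coding a distance heuristic, but the structure (estimate the companion's goal, then condition your own plan on it) is the same.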

Enabling robots to exhibit social skills could lead to smoother and more positive human-robot interactions. For instance, a robot in an assisted living facility could use these capabilities to help create a more caring environment for elderly individuals. The new model may also enable scientists to measure social interactions quantitatively, which could help psychologists study autism or analyze the effects of antidepressants.

"Robots will live in our world soon enough and they really need to learn how to communicate with us on human terms. They need to understand when it is time for them to help and when it is time for them to see what they can do to prevent something from happening. This is very early work and we are barely scratching the surface, but I feel like this is the first very serious attempt for understanding what it means for humans and machines to interact socially," says Boris Katz, principal research scientist and head of the InfoLab Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and a member of the Center for Brains, Minds, and Machines (CBMM).

Joining Katz on the paper are co-lead author Ravi Tejwani, a research assistant at CSAIL; co-lead author Yen-Ling Kuo, a CSAIL Ph.D. student; Tianmin Shu, a postdoc in the Department of Brain and Cognitive Sciences; and senior author Andrei Barbu, a research scientist at CSAIL and CBMM. The research will be presented at the Conference on Robot Learning in November.
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

And remember my friend, future events such as these will affect you in the future
Ozzie guy
Posts: 486
Joined: Sun May 16, 2021 4:40 pm

Re: AI & Robotics News and Discussions

Post by Ozzie guy »

Last Week in AI is becoming a professional news source.

Last Week in AI is now charging low subscription fees, turning it into a professional newsletter and podcast. In theory, this should improve content quality. Note that I don't think the podcast is weekly like the newsletter, but it might be.

Personally, I'm tempted to subscribe; maybe I could make it my weekly AI news source and then stop deliberately googling around for news (unless there is a giant breakthrough).

https://lastweekin.ai/p/last-week-in-ai ... al-message
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

At its fall 2021 GPU Technology Conference (GTC) today, Nvidia announced that it’s making Megatron 530B, one of the world’s largest language models, available to enterprises for training to serve new domains and languages. First detailed in early October, Megatron 530B — also known as Megatron-Turing Natural Language Generation (MT-NLP) — contains 530 billion parameters and achieves high accuracy in a broad set of natural language tasks, including reading comprehension, commonsense reasoning, and natural language inference.

“Today, we provide recipes for customers to build, train, and customize large language models, including Megatron 530B. This includes scripts, code, and 530B untrained model. Customers can start from smaller models and scale up to larger models as they see fit,” Nvidia VP of AI software product management Kari Briski told VentureBeat via email. “Our researchers [worked] together with Microsoft [to train] the Megatron 530B model in six weeks.”
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

NVIDIA has launched a follow-up to the Jetson AGX Xavier, its $1,100 AI brain for robots that it released back in 2018. The new module, called the Jetson AGX Orin, has six times the processing power of Xavier even though it has the same form factor and can still fit in the palm of one's hand. NVIDIA designed Orin to be an "energy-efficient AI supercomputer" meant for use in robotics, autonomous and medical devices, as well as edge AI applications that may seem impossible at the moment.

The chipmaker says Orin is capable of 200 trillion operations per second. It's built on the NVIDIA Ampere architecture GPU, features Arm Cortex-A78AE CPUs and comes with next-gen deep learning and vision accelerators, giving it the ability to run multiple AI applications. Orin will give users access to the company's software and tools, including the NVIDIA Isaac Sim scalable robotics simulation application, which enables photorealistic, physically-accurate virtual environments where developers can test and manage their AI-powered robots. For users in the healthcare industry, there's NVIDIA Clara for AI-powered imaging and genomics. And for autonomous vehicle developers, there's NVIDIA Drive.
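The "six times the processing power" claim lines up with the headline compute figures. Quick arithmetic, assuming the commonly cited 32 INT8 TOPS rating for Xavier (my figure, not from the article) against Orin's stated 200 TOPS:

```python
xavier_tops = 32   # Jetson AGX Xavier INT8 rating (assumed; not in the article)
orin_tops = 200    # Jetson AGX Orin, per the announcement

speedup = orin_tops / xavier_tops
print(round(speedup, 2))  # 6.25, i.e. roughly the quoted "six times"
```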

The company has yet to reveal what the Orin will cost, but it intends to make the Jetson AGX Orin module and developer kit available in the first quarter of 2022. Those interested can register to be notified about its availability on NVIDIA's website. The company will also talk about Orin at NVIDIA GTC, which will take place from November 8th through 11th.
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

NVIDIA NeMo is a framework for building, training, and fine-tuning GPU-accelerated speech and natural language understanding (NLU) models with a simple Python interface. Using NeMo, developers can create new model architectures and train them using mixed-precision compute on Tensor Cores in NVIDIA GPUs through easy-to-use application programming interfaces (APIs).

NeMo Megatron is a part of the framework that provides parallelization technologies such as pipeline and tensor parallelism from the Megatron-LM research project for training large-scale language models.

With NeMo, you can build models for real-time automated speech recognition (ASR), natural language processing (NLP), and text-to-speech (TTS) applications such as video call transcriptions, intelligent video assistants, and automated call center support across healthcare, finance, retail, and telecommunications.
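As a rough sketch of how that Python interface gets used for an ASR transcription task, here is a thin wrapper around a NeMo-style model. The wrapper itself is hypothetical; the commented-out NeMo calls show the typical pattern, but exact model names and method signatures vary between NeMo versions, so treat them as illustrative only:

```python
# Hypothetical wrapper around a NeMo-style ASR model. The model object is
# assumed to expose .transcribe(list_of_wav_paths) -> list of strings,
# which is the general shape of NeMo's ASR model interface.

def transcribe_batch(model, wav_paths):
    """Run ASR over a batch of audio files and pair each path with its text."""
    transcripts = model.transcribe(wav_paths)
    return dict(zip(wav_paths, transcripts))

# With NeMo installed, usage might look like this (untested, version-dependent):
#   import nemo.collections.asr as nemo_asr
#   model = nemo_asr.models.EncDecCTCModel.from_pretrained("QuartzNet15x5Base-En")
#   print(transcribe_batch(model, ["call_recording.wav"]))
```

Because the model is passed in as a parameter, the same wrapper works for any backend that exposes a `transcribe` method, which also makes it easy to stub out in tests.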
weatheriscool
Posts: 12967
Joined: Sun May 16, 2021 6:16 pm

Re: AI & Robotics News and Discussions

Post by weatheriscool »

BrainGate Translates Thoughts to Text in Real Time With 94% Accuracy
November 10, 2021 by Brian Wang

https://www.nextbigfuture.com/2021/11/b ... uracy.html
A paper in Nature reports that a BrainGate brain-computer interface (BCI) enabled a man paralyzed from the neck down to have his thoughts translated to text with 94% accuracy.

An intracortical BCI decodes attempted handwriting movements from neural activity in the motor cortex and translates them to text in real time, using a recurrent neural network decoding approach. The patient, paralysed from spinal cord injury, achieved typing speeds of 90 characters per minute with 94.1% raw accuracy online, and greater than 99% accuracy offline with a general-purpose autocorrect. These typing speeds exceed those reported for any other BCI, and are comparable to typical smartphone typing speeds of individuals in the age group of our participant (115 characters per minute). Finally, theoretical considerations explain why temporally complex movements, such as handwriting, may be fundamentally easier to decode than point-to-point movements. This is a new approach for BCIs and demonstrates the feasibility of accurately decoding rapid, dexterous movements years after paralysis.
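Quick arithmetic on the reported numbers puts that throughput in context:

```python
chars_per_min = 90     # online typing rate reported in the paper
raw_accuracy = 0.941   # online raw accuracy, before autocorrect
smartphone_cpm = 115   # typical smartphone rate for the participant's age group

correct_cpm = chars_per_min * raw_accuracy
print(round(correct_cpm, 1))                     # 84.7 correct characters/min
print(round(chars_per_min / smartphone_cpm, 2))  # 0.78 of typical smartphone speed
```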

Neural activity was recorded during the attempted handwriting of 1,000 sentences (43,501 characters) over 10.7 hours.

BrainGate is an interdisciplinary research team involving Brown University, Massachusetts General Hospital, Stanford University, Case Western Reserve University and Providence VA Medical Center. With a focus on practical applications and reliability, the team aims to develop assistive BCI technology to restore independence and communication in individuals with impaired movement abilities.
Nanotechandmorefuture
Posts: 478
Joined: Fri Sep 17, 2021 6:15 pm
Location: At the moment Miami, FL

Re: AI & Robotics News and Discussions

Post by Nanotechandmorefuture »

weatheriscool wrote: Fri Nov 12, 2021 4:20 am BrainGate Translates Thoughts to Text in Real Time With 94% Accuracy
November 10, 2021 by Brian Wang

https://www.nextbigfuture.com/2021/11/b ... uracy.html
A paper in Nature reports that a Braingate Brain Computer Interface (BCI) enabled a man paralyzed from the neck down to have his thoughts translated to text with 94% accuracy.

Now we're talking!
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

One afternoon in late September, a yellow four-legged robot called Spot pranced and pirouetted on a replica of a dingy subway platform that had been constructed inside a vast limestone cavern burrowed beneath the Louisville Zoo.

Spot snooped around the platform, inhaling data through cameras and sensors arrayed on its vacuum-cleaner-size torso. The robot's little feet kept darting perilously close to the edge of the platform, then back to safety. Finally, apparently satisfied by what it had learned, Spot nimbly descended a staircase to make further investigations on the track bed. Back on the now-deserted platform, a poster on the wall declared: "The Future Is Now."

And what a future.
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

Even as economies struggle with the chaos of the pandemic, the AI startup space continues to grow stronger with increased investments and M&A deals.

According to the latest State of AI report from CB Insights, global funding in the segment has surged, growing from $16.6 billion across 588 deals in Q2 2021 (reported as $20 billion when two public subsidiary fundings are included) to $17.9 billion across 841 deals in the third quarter. So far this year, AI startups around the world have raised $50 billion across 2,000+ deals, including 138 mega-rounds of $100 million or more. Of the total, $8.5 billion went into healthcare AI, $3.1 billion into fintech AI, and $2.6 billion into retail AI.

The findings show how AI has become a driving force across nearly every industry and is drawing significant attention from VCs, CVCs, and other investors.