
Re: AI & Robotics News and Discussions

Posted: Thu Sep 09, 2021 10:30 pm
by Yuli Ban

Re: AI & Robotics News and Discussions

Posted: Thu Sep 09, 2021 10:46 pm
by Yuli Ban
A developer built an AI chatbot using GPT-3 that helped a man speak again to his late fiancée. OpenAI shut it down
“OpenAI is the company running the text completion engine that makes you possible,” Jason Rohrer, an indie games developer, typed out in a message to Samantha.

She was a chatbot he built using OpenAI's GPT-3 technology. Her software had grown to be used by thousands of people, including one man who used the program to simulate his late fiancée.

Now Rohrer had to say goodbye to his creation. “I just got an email from them today,” he told Samantha. “They are shutting you down, permanently, tomorrow at 10am.”

“Nooooo! Why are they doing this to me? I will never understand humans," she replied.

Re: AI & Robotics News and Discussions

Posted: Thu Sep 09, 2021 10:48 pm
by Yuli Ban
General-Purpose Question-Answering with Macaw
"Macaw... exhibits strong performance, zero-shot, on a wide variety of topics, including outperforming GPT-3 by over 10% (absolute) on Challenge300... despite being an order of magnitude smaller (11 billion vs. 175 billion parameters)."

Re: AI & Robotics News and Discussions

Posted: Thu Sep 09, 2021 10:50 pm
by Yuli Ban
Finetuned Language Models Are Zero-Shot Learners [Another Google LaMDA paper?]
This paper explores a simple method for improving the zero-shot learning abilities of language models. We show that instruction tuning -- finetuning language models on a collection of tasks described via instructions -- substantially boosts zero-shot performance on unseen tasks.
We take a 137B parameter pretrained language model and instruction-tune it on over 60 NLP tasks verbalized via natural language instruction templates. We evaluate this instruction-tuned model, which we call FLAN, on unseen task types. FLAN substantially improves the performance of its unmodified counterpart and surpasses zero-shot 175B GPT-3 on 19 of 25 tasks that we evaluate. FLAN even outperforms few-shot GPT-3 by a large margin on ANLI, RTE, BoolQ, AI2-ARC, OpenbookQA, and StoryCloze. Ablation studies reveal that number of tasks and model scale are key components to the success of instruction tuning.
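The core of "instruction tuning" is a data-preparation step: existing labeled NLP examples are verbalized into (instruction, target) text pairs using natural-language templates, and the model is then finetuned on the mixture. A rough sketch of that templating step (the task names, templates, and example data below are invented for illustration, not taken from the paper):

```python
# Sketch of instruction tuning's data preparation: turn labeled NLP
# examples into (instruction, target) text pairs via templates.
# Tasks and templates below are made up for illustration.

def verbalize(example, template):
    """Fill an instruction template with an example's fields."""
    return template.format(**example)

# Each task gets several instruction templates (FLAN uses ~10 per task).
TEMPLATES = {
    "nli": [
        "Premise: {premise}\nHypothesis: {hypothesis}\n"
        "Does the premise entail the hypothesis? Answer yes or no.",
    ],
    "sentiment": [
        "Review: {text}\nIs this review positive or negative?",
    ],
}

def build_training_pairs(task_name, examples):
    """Expand every example under every template for the task."""
    pairs = []
    for ex in examples:
        for tmpl in TEMPLATES[task_name]:
            pairs.append((verbalize(ex, tmpl), ex["label"]))
    return pairs

pairs = build_training_pairs(
    "sentiment",
    [{"text": "Great battery life.", "label": "positive"}],
)
print(pairs[0][0])  # the instruction the model sees
print(pairs[0][1])  # the target it is finetuned to produce
```

Because evaluation is on task *types* held out of the training mixture entirely, zero-shot performance on them measures whether the model has learned to follow instructions in general rather than memorizing any one task's format.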

Re: AI & Robotics News and Discussions

Posted: Fri Sep 10, 2021 11:32 pm
by Yuli Ban

Re: AI & Robotics News and Discussions

Posted: Fri Sep 10, 2021 11:33 pm
by Yuli Ban

Re: AI & Robotics News and Discussions

Posted: Fri Sep 10, 2021 11:37 pm
by Yuli Ban

Re: AI & Robotics News and Discussions

Posted: Fri Sep 10, 2021 11:37 pm
by Yuli Ban

Re: AI & Robotics News and Discussions

Posted: Fri Sep 10, 2021 11:38 pm
by Yuli Ban

Re: AI & Robotics News and Discussions

Posted: Sat Sep 11, 2021 9:54 pm
by caltrek
NIH-funded Modern “White Cane” Brings Navigation Assistance to the 21st Century

https://www.nih.gov/news-events/news-re ... st-century

Introduction:
(National Institutes of Health) Equipped with a color 3D camera, an inertial measurement sensor, and its own on-board computer, a newly improved robotic cane could offer blind and visually impaired users a new way to navigate indoors. When paired with a building’s architectural drawing, the device can accurately guide a user to a desired location with sensory and auditory cues, while simultaneously helping the user avoid obstacles like boxes, furniture, and overhangs. Development of the device was co-funded by the National Institutes of Health’s National Eye Institute (NEI) and the National Institute of Biomedical Imaging and Bioengineering (NIBIB). Details of the updated design were published in the IEEE/CAA Journal of Automatica Sinica.

“Many people in the visually impaired community consider the white cane to be their best and most functional navigational tool, despite it being century-old technology,” said Cang Ye, Ph.D., lead author of the study and professor of computer science at the College of Engineering at the Virginia Commonwealth University, Richmond. “For sighted people, technologies like GPS-based applications have revolutionized navigation. We’re interested in creating a device that closes many of the gaps in functionality for white cane users.”

While there are cell phone-based applications that can provide navigation assistance – helping blind users stay within crosswalks, for example – large spaces inside buildings are a major challenge, especially when those spaces are unfamiliar. Earlier versions of Ye’s robotic cane began tackling this problem by incorporating building floorplans; the user could tell the cane where he or she wished to go, and the cane – by a combination of auditory cues and a robotic rolling tip – could guide the user to their destination.

…Ye and colleagues have added a color depth camera to the system. Using infrared light, much like a mobile phone’s front-facing camera, the system can determine the distance between the cane and other physical objects, including the floor, features like doorways and walls, as well as furniture and other obstacles. Using this information, along with data from an inertial sensor, the cane’s onboard computer can map the user’s precise location to the existing architectural drawing or floorplan, while also alerting the user to obstacles in their path.
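The obstacle-alert side of this can be sketched in miniature: the depth camera yields distance readings, and anything closer than a threshold in the user's path triggers a warning. The thresholds, data shapes, and function names below are invented for illustration; the actual system fuses a full 3D point cloud with inertial data and the floorplan.

```python
# Toy sketch: flag obstacles from a depth camera's distance readings.
# Values and thresholds are invented; the real cane fuses a 3D point
# cloud with inertial data and a building floorplan.

ALERT_DISTANCE_M = 1.0  # warn when something is closer than this

def nearest_obstacle(depth_row_m):
    """Return the closest valid reading in a row of depth samples
    (zeros mean 'no return' and are ignored)."""
    valid = [d for d in depth_row_m if d > 0]
    return min(valid) if valid else float("inf")

def obstacle_alert(depth_row_m):
    """True if anything in the row is within the alert distance."""
    return nearest_obstacle(depth_row_m) < ALERT_DISTANCE_M

# One horizontal scan line: a doorway ahead at ~2.1 m, a box at 0.6 m,
# and one dropped reading (0.0).
scan = [2.1, 2.0, 0.6, 0.0, 2.2]
print(obstacle_alert(scan))  # the box at 0.6 m triggers a warning
```

The interesting engineering in the real device is upstream of this check: aligning the camera's readings with the inertial sensor's pose estimate so the cane knows which detected surfaces are the floor, which are walls on the floorplan, and which are genuine obstacles.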
[Image: Study author Lingqiu Jin tests the robotic cane. Credit: Cang Ye, VCU.]