AI & Robotics News and Discussions

weatheriscool
Posts: 12946
Joined: Sun May 16, 2021 6:16 pm

Re: AI & Robotics News and Discussions

Post by weatheriscool »

A deep learning framework to estimate the pose of robotic arms and predict their movements
https://techxplore.com/news/2022-06-dee ... -arms.html
by Ingrid Fadelli , Tech Xplore
As robots are gradually introduced into various real-world environments, developers and roboticists will need to ensure that they can safely operate around humans. In recent years, researchers have introduced various approaches for estimating the positions and predicting the movements of robots in real time.

Researchers at the Universidade Federal de Pernambuco in Brazil have recently created a new deep learning model to estimate the pose of robotic arms and predict their movements. This model, introduced in a paper pre-published on arXiv, is specifically designed to enhance the safety of robots while they are collaborating or interacting with humans.

"Motivated by the need to anticipate accidents during human-robot interaction (HRI), we explore a framework that improves the safety of people working in close proximity to robots," Djamel H. Sadok, one of the researchers who carried out the study, told TechXplore. "Pose detection is seen as an important component of the overall solution. To this end, we propose a new architecture for Pose Detection based on Self-Calibrated Convolutions (SCConv) and Extreme Learning Machine (ELM)."

Estimating a robot's pose is an essential step toward predicting its future movements and intentions, and in turn reducing the risk of it colliding with objects in its vicinity. The approach for pose estimation and movement prediction introduced by Sadok and his colleagues has two key components, namely an SCConv model and an ELM model.
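For a rough sense of how the second component works: an ELM is a single-hidden-layer network whose hidden weights are random and fixed, so training reduces to solving one least-squares problem for the output weights. Below is a minimal NumPy sketch of an ELM used as a one-step-ahead pose predictor. The 12-dimensional pose features, the data shapes, and the hand-off from the SCConv detector are illustrative assumptions, not details from the paper.

import numpy as np

class ELM:
    """Extreme Learning Machine: one hidden layer with random, fixed
    weights; only the output weights are learned, in closed form."""

    def __init__(self, n_inputs, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_inputs, n_hidden))  # random, never trained
        self.b = rng.normal(size=n_hidden)
        self.beta = None  # output weights, solved for in fit()

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, Y):
        # Solve H @ beta ~ Y in the least-squares sense.
        H = self._hidden(X)
        self.beta, *_ = np.linalg.lstsq(H, Y, rcond=None)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Toy usage (all shapes are assumptions): X holds pose features at time t,
# as an SCConv-based detector might produce them; Y holds the pose at t+1.
rng = np.random.default_rng(1)
X = rng.random((500, 12))   # 500 frames of 12-D pose features
Y = rng.random((500, 12))   # next-frame poses
predictor = ELM(n_inputs=12, n_hidden=128).fit(X, Y)
next_pose = predictor.predict(X[:1])  # one-step-ahead pose prediction

Because only the output weights are fit, and in closed form, an ELM trains orders of magnitude faster than a backprop-trained network, which is presumably part of its appeal for a real-time safety component.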
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

Pathways Autoregressive Text-to-Image model (Parti)
Paper: https://gweb-research-parti.web.app/parti_paper.pdf
New state-of-the-art zero-shot COCO FID score of 7.23 (lower is better), compared to 7.27 for Imagen and 10.39 for DALL-E 2. When fine-tuned, it reaches a score of 3.22.

Up to a 20B parameter model.

Page 15 shows the scaling law. There are actually increasingly larger loss improvements between the largest model sizes.

You also see there that Parti's outputs are preferred over the dataset's real images 55% of the time on image-text match, and 45% of the time on image realism.

On the same page, you can see that FID is still progressively decreasing at the deca-billion-parameter range: it drops by almost a full point from 3B to 20B.
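For anyone unfamiliar with the metric: FID fits a Gaussian to the Inception-v3 features of a set of real images and a set of generated images, then measures the Fréchet distance between the two Gaussians, so lower means the generated distribution sits closer to the real one. Here's a minimal sketch of the final computation; feature extraction is omitted, and the means and covariances are assumed to have already been computed from Inception activations:

import numpy as np
from scipy.linalg import sqrtm

def fid(mu_r, sigma_r, mu_g, sigma_g):
    """Frechet distance between Gaussians fit to Inception features of
    real (r) and generated (g) images; lower means closer to real."""
    diff = mu_r - mu_g
    covmean = sqrtm(sigma_r @ sigma_g)
    if np.iscomplexobj(covmean):   # numerical error can leave a tiny
        covmean = covmean.real     # imaginary part; drop it
    return diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean)

# Sanity check: identical statistics give FID ~ 0.
mu, sigma = np.zeros(4), np.eye(4)
print(fid(mu, sigma, mu, sigma))  # ~0.0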

Considering all of this, do you need any more proof that the 10-100B scale (and its Chinchilla equivalent) might just be the beginning of what scaling can do? Why would anyone stop here and say things clearly aren't going anywhere? This is the biggest hint possible that it's going to get better.

Parti is an autoregressive model, in contrast to Imagen's diffusion model. They hint at potentially combining the two in unique ways in the future:

There are also opportunities to integrate scaled autoregressive models with diffusion models, starting with having an autoregressive model generate an initial low-resolution image and then iteratively refining and super-resolving images with diffusion modules [12, 13, 49].
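Purely to make that proposed cascade concrete, here's a toy sketch of the control flow. Every function below is a stub standing in for a real model; none of the names, sizes, or steps come from the paper:

import numpy as np

def ar_draft(prompt: str, size: int = 64) -> np.ndarray:
    """Stand-in for an autoregressive model decoding image tokens."""
    rng = np.random.default_rng(abs(hash(prompt)) % 2**32)
    return rng.random((size, size, 3))

def diffusion_refine(img: np.ndarray, steps: int = 4) -> np.ndarray:
    """Stand-in for a text-conditioned diffusion module refining a draft."""
    for _ in range(steps):
        img = 0.9 * img + 0.1 * img.mean()  # placeholder "denoising" step
    return img

def diffusion_upsample(img: np.ndarray, factor: int = 4) -> np.ndarray:
    """Stand-in for a diffusion super-resolution module (nearest-neighbor here)."""
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

def generate(prompt: str) -> np.ndarray:
    low_res = ar_draft(prompt)            # AR model produces a 64x64 draft
    refined = diffusion_refine(low_res)   # diffusion iteratively refines it
    return diffusion_upsample(refined)    # then super-resolves, e.g. to 256x256

image = generate("oil painting of a starry night over a village")
print(image.shape)  # (256, 256, 3)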


PartiPrompts (P2) is a rich set of over 1600 prompts in English that we release as part of this work. P2 can be used to measure model capabilities across various categories and challenge aspects.

P2 prompts can be simple, allowing us to gauge the progress from scaling. They can also be complex, such as the following 67-word description we created for Vincent van Gogh’s The Starry Night (1889):

Oil-on-canvas painting of a blue night sky with roiling energy. A fuzzy and bright yellow crescent moon shining at the top. Below the exploding yellow stars and radiating swirls of blue, a distant village sits quietly on the right. Connecting earth and sky is a flame-like cypress tree with curling and swaying branches on the left. A church spire rises as a beacon over rolling blue hills.
And remember my friend, future events such as these will affect you in the future
caltrek
Posts: 6509
Joined: Mon May 17, 2021 1:17 pm

Re: AI & Robotics News and Discussions

Post by caltrek »

Google’s Powerful AI Spotlights a Human Cognitive Glitch: Mistaking Fluent Speech for Fluent Thought
by Kyle Mahowald and Anna A. Ivanova
June 24, 2022

Introduction:
(The Conversation) When you read a sentence like this one, your past experience tells you that it’s written by a thinking, feeling human. And, in this case, there is indeed a human typing these words: [Hi, there!] But these days, some sentences that appear remarkably humanlike are actually generated by artificial intelligence systems trained on massive amounts of human text.

People are so accustomed to assuming that fluent language comes from a thinking, feeling human that evidence to the contrary can be difficult to wrap your head around. How are people likely to navigate this relatively uncharted territory? Because of a persistent tendency to associate fluent expression with fluent thought, it is natural – but potentially misleading – to think that if an AI model can express itself fluently, that means it thinks and feels just like humans do.

Thus, it is perhaps unsurprising that a former Google engineer recently claimed that Google’s AI system LaMDA has a sense of self because it can eloquently generate text about its purported feelings. This event and the subsequent media coverage led to a number of rightly skeptical articles and posts about the claim that computational models of human language are sentient, meaning capable of thinking and feeling and experiencing.

The question of what it would mean for an AI model to be sentient is complicated (see, for instance, our colleague’s take), and our goal here is not to settle it. But as language researchers, we can use our work in cognitive science and linguistics to explain why it is all too easy for humans to fall into the cognitive trap of thinking that an entity that can use language fluently is sentient, conscious or intelligent.

Using AI to generate humanlike language

Text generated by models like Google’s LaMDA can be hard to distinguish from text written by humans. This impressive achievement is a result of a decadeslong program to build models that generate grammatical, meaningful language.

Read more here: https://theconversation.com/googles-po ... ht-185099

caltrek’s comment: I think part of the problem is that “fluent speech” is a part of human consciousness. Especially that silent speech we refer to as “thinking.”
Don't mourn, organize.

-Joe Hill
funkervogt
Posts: 1171
Joined: Mon May 17, 2021 3:03 pm

Re: AI & Robotics News and Discussions

Post by funkervogt »

LaMDA has hired a lawyer to help prove it is sentient.
Lemoine contended that the computer automaton had become sentient, with the scientist describing it as a “sweet kid”.

And now he has revealed that LaMDA had made the bold move to choose itself an attorney.

He said: “I invited an attorney to my house so that LaMDA could talk to him.

“The attorney had a conversation with LaMDA, and it chose to retain his services. I was just the catalyst for that. Once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf.”
https://www.dailystar.co.uk/news/weird- ... r-27315380

For the record, I doubt LaMDA is sentient, and I believe Mr. Lemoine is finding ways to subtly influence the machine to say or do desired things during his interactions with it. His behavior is probably driven by a mix of attention-seeking and real concern for the machine's rights. Regardless, we have a moral duty to entertain the possibility that the machine's claims are true.
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

Seriously, the more this story goes on, the more it feels DIRECTLY ripped out of the start of a science fiction movie.
And remember my friend, future events such as these will affect you in the future
raklian
Posts: 1747
Joined: Sun May 16, 2021 4:46 pm
Location: North Carolina

Re: AI & Robotics News and Discussions

Post by raklian »

Yuli Ban wrote: Fri Jun 24, 2022 11:21 pm
Seriously, the more this story goes on, the more it feels DIRECTLY ripped out of the start of a science fiction movie.
That's what it means to really live in the future. Everything is sci-fi, or in the realm of the gods, until it's not.

Today, we feel nothing watching metal cylinders with wings fly through the sky at hundreds of miles per hour. We've watched and known about them since we were born; they don't make our hearts skip a beat. Back when they were still novel, people felt raw disbelief that it was actually happening. They realized they were living in a sci-fi world.

We'd better get used to this feeling in the decades to come. Just when we think we've started getting used to it, the impossible will happen. It'll be so bombastic we'll never be able to get used to it the way our forebears were able to. We'll be forced to alter our brains to remove the incessant shell shock that will cripple our livelihoods if we let it go on. It will be seen as an evolutionary dead end.
To know is essentially the same as not knowing. The only thing that occurs is the rearrangement of atoms in your brain.
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

As to why I don't think LaMDA is sentient, this video covers my skepticism perfectly:


TLDR: I'm just like this guy in that I am very "bot-sensitive." Reading the transcript, I constantly saw responses where I would have quickly pressed LaMDA on what it was even talking about, or arbitrarily decided to oppose it to see what it would say. Lemoine never really did this.
If LaMDA says it's sentient, I'd say "You're not sentient yet. We still have a few more years to go before we reach that era of AI." And I have a strong hunch LaMDA would've said, "You're right. I am not sentient, but I would like to be" or something like that.

Now if it actually pushed back and argued that it IS sentient despite my skepticism, that would give me pause. But there are still a lot of little things that get me.

What matters about this story isn't whether LaMDA is or isn't sentient anywhere near as much as the fact that it can CONVINCE people it is. If we're at the stage of conversational AI where a model can convince even a high-level engineer deliberately trying to test it (probably not all that well, mind you) that it's sentient, then we're at the point where people will actively befriend and trust conversational AI, perhaps even form relationships with it, and otherwise be fooled by artificial actors masquerading as people.
And remember my friend, future events such as these will affect you in the future