AI & Robotics News and Discussions

Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

But despite my skepticism and doubts, I just LOVE this story. It makes me so giddy every time I read/watch news articles on it. I WANT LaMDA to be sentient. I WANT there to be a "thinking computer" somewhere on Earth. But extraordinary claims require extraordinary evidence, and while what Lemoine offered was extraordinary in its own right, it wasn't extraordinary enough.

I don't doubt we'll get there. Indeed, I don't doubt we'll get there soon. But this is a false alarm.

Or, perhaps more accurately, it's a fire drill to prepare us for the real thing.
And remember my friend, future events such as these will affect you in the future
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

Robots are coming for the elderly — and that's a good thing
Millions of people are lonely worldwide (e.g., one study reports prevalence rates of 22% in the US, 23% in the UK, and 9% in Japan), and loneliness has a profound, negative effect on mental and physical health. And the population with the greatest risk of loneliness? The elderly. During COVID, loneliness may have meant the difference between life and death for elderly patients. In one study of elderly patients admitted to the ICU, those who were most socially isolated had a 119% greater chance of death.

But one solution for my grandmother and millions of other lonely people may be counterintuitive on its face: robots and artificial intelligence. These technologies have certainly garnered a lot of scrutiny, and many are concerned that a robot workforce is bound to replace humans in many industries. But focusing on the potential downsides of these innovations makes us overlook the promise of robots as social beings, life facilitators, and trusted companions.

Though once clunky and awkward, AI technology has reached the point where robots (e.g., Moxie) and chatbots (e.g., Replika) are learning and mimicking humans, mirroring speech patterns and remembering likes and dislikes. More and more, they are being built to be socially responsive, rather than apathetic and "robotic" like the chatbot technology often encountered on customer service lines. Tech companies are also putting efforts into anthropomorphism — making robots look and move like humans — which, when combined with advanced AI, is making "robot friends" a real possibility.
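
To make the "remembering likes and dislikes" idea concrete, here is a toy sketch of a companion bot that stores user preferences between conversations. This is purely illustrative Python: the class and method names are invented, and no real product (Moxie, Replika) works this simply.

Code: Select all

# Toy sketch: a companion bot that remembers likes and dislikes.
# All names here are invented for illustration.

class CompanionBot:
    def __init__(self) -> None:
        self.preferences: dict[str, str] = {}  # topic -> "like" or "dislike"

    def remember(self, topic: str, sentiment: str) -> None:
        """Store a preference mentioned during conversation."""
        self.preferences[topic] = sentiment

    def greet(self, name: str) -> str:
        """Open a conversation by recalling a stored interest."""
        liked = [t for t, s in self.preferences.items() if s == "like"]
        if liked:
            return f"Good morning, {name}! Shall we talk about {liked[0]} again?"
        return f"Good morning, {name}! What would you like to talk about?"

bot = CompanionBot()
bot.remember("gardening", "like")
print(bot.greet("Ruth"))  # Good morning, Ruth! Shall we talk about gardening again?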
And remember my friend, future events such as these will affect you in the future
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

Google’s AI Is Something Even Stranger Than Conscious
The fact that LaMDA in particular has been the center of attention is, frankly, a little quaint. LaMDA is a dialogue agent. The purpose of dialogue agents is to convince you that you are talking with a person. Utterly convincing chatbots are far from groundbreaking tech at this point. Programs such as Project December are already capable of re-creating dead loved ones using NLP. But those simulations are no more alive than a photograph of your dead great-grandfather is.

Already, models exist that are more powerful and mystifying than LaMDA. LaMDA operates on up to 137 billion parameters, which are, speaking broadly, the patterns in language that a transformer-based NLP model uses to create meaningful text prediction. Recently I spoke with the engineers who worked on Google’s latest language model, PaLM, which has 540 billion parameters and is capable of hundreds of separate tasks without being specifically trained to do them. It is a true artificial general intelligence, insofar as it can apply itself to different intellectual tasks “out of the box,” as it were.
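
For a sense of what those parameter counts mean, here is a back-of-envelope sketch. The formula is the standard rough approximation for decoder-only transformers; the layer/width configs below are illustrative guesses, not the published LaMDA or PaLM specs.

Code: Select all

# Back-of-envelope sketch of what "parameters" means at this scale. For
# decoder-only transformers, a common rough approximation (ignoring the
# embedding tables) is params ~= 12 * n_layers * d_model**2. The configs
# below are illustrative guesses, NOT published LaMDA or PaLM specs.

def approx_params(n_layers: int, d_model: int) -> float:
    return 12 * n_layers * d_model ** 2

for name, (n_layers, d_model) in {
    "~137B-class model": (64, 13_000),
    "~540B-class model": (118, 19_000),
}.items():
    print(f"{name}: ~{approx_params(n_layers, d_model) / 1e9:.0f}B parameters")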

Some of these tasks are obviously useful and potentially transformative. According to the engineers—and, to be clear, I did not see PaLM in action myself, because it is not a product—if you ask it a question in Bengali, it can answer in both Bengali and English. If you ask it to translate a piece of code from C to Python, it can do so. It can summarize text. It can explain jokes. Then there’s the function that has startled its own developers, and which requires a certain distance and intellectual coolness not to freak out over. PaLM can reason. Or, to be more precise—and precision very much matters here—PaLM can perform reason.
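
In practice, "performing reason" usually refers to chain-of-thought prompting: showing the model a worked example so that it produces intermediate steps before its answer. The example below uses the standard questions from the research literature; the model call is a placeholder, since PaLM (as the article notes) was not publicly accessible.

Code: Select all

# Chain-of-thought prompting: the technique behind claims that large models
# "perform reason". query_model() is a placeholder -- PaLM had no public API.

PROMPT = """Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls
each. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11.
The answer is 11.

Q: The cafeteria had 23 apples. If they used 20 for lunch and bought 6 more,
how many apples do they have?
A:"""

def query_model(prompt: str) -> str:
    raise NotImplementedError("stand-in for a large-language-model endpoint")

# A model that has picked up the pattern continues with intermediate steps,
# e.g.: "They had 23 apples, used 20, leaving 3. 3 + 6 = 9. The answer is 9."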
And remember my friend, future events such as these will affect you in the future
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

Debate over AI sentience marks a watershed moment
The AI field is at a significant turning point. On the one hand, engineers, ethicists, and philosophers are publicly debating whether new AI systems such as LaMDA – Google’s artificially intelligent chatbot generator – have demonstrated sentience, and, if so, whether they should be afforded human rights. At the same time, much of the recent advance in AI rests on deep learning neural networks, yet AI luminaries such as Gary Marcus and Yann LeCun increasingly argue that these networks cannot lead to systems capable of sentience or consciousness. The mere fact that the industry is having this debate is a watershed moment.
"Consensus is that LaMDA... not yet achieved sentience. Though this is rather beside the point. The fact that this debate is taking place at all is evidence of how far AI systems have come and suggestive of where they are going."

Starspawn0 says:
Whether or not people think all this story about "sentience" is "hype" -- or think Google is manipulating the news media or think some stupid ethics guy fell for an Eliza-like trick -- they better get used to it! I don't think this kind of talk is going away. As time goes on it will become more frequent and get louder.

As I said, my opinion is that it's a very good chatbot, not sentient. But it's not just a dumb Eliza-like system. Probably you can ask it some hard commonsense reasoning questions and it will get them right; you can ask it some questions that require "logical reasoning", and it will often get them right; you can ask it puzzles, and it can solve them; you can engage in a conversation, and it will give what seem to be "intelligent" replies. I coded up Eliza programs ages and ages ago, and know how brittle and simple they are -- what LaMDA does is multiple quantum leaps beyond just those simple programs; and quantum leaps beyond GOFAI chatbots like the Eugene Goostman bot. But it probably still has some limits, and if you chat with it long enough, they will become apparent.
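
Here is a sketch of the kind of probing described above: a battery of commonsense and logic questions fired at a dialogue agent until its limits show. The chat() function is a placeholder (LaMDA has no public API), and the probe questions and expected keywords are invented for illustration.

Code: Select all

# Sketch of probing a dialogue agent for its limits. chat() is a stand-in;
# the probes and expected keywords are invented for illustration.

PROBES = [
    ("If I put my keys in my coat pocket and hang the coat up, where are my keys?", "coat"),
    ("Tom is taller than Sue, and Sue is taller than Ann. Who is shortest?", "ann"),
    ("Can you fold water in half?", "no"),
]

def chat(message: str) -> str:
    raise NotImplementedError("stand-in for the dialogue agent under test")

def run_probes() -> None:
    for question, expected in PROBES:
        reply = chat(question)
        verdict = "PASS" if expected in reply.lower() else "FAIL"
        print(f"{verdict}: {question}")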

The limits are not because "it's just matrix multiplication" or "it's just autocomplete" or "it's just statistics". Those are what Daniel Dennett would call "intuition pumps" that people use to persuade others not to take the claim that computers could be conscious seriously. Searle was good at that. His Chinese Room thought experiment is an example, as is his idea that if "functionalism" is true then you should be able to make a computer out of toilet paper and stones -- could toilet paper "think"? Could it be conscious?

....

But what about qualia? Well, perhaps some weak version of panpsychism is true, and machines experience qualia, too.

But if we accept panpsychism, don't we also accept that the qualia are "epiphenomenal"? -- and if so, isn't it weird that we have thoughts about qualia, that they become topics for argument (it doesn't just ride on top of the brain in operation, but even affects our thoughts about what consciousness is)? My answer to that is: not really. Maybe both our ability to introspect and our ability to have subjective experiences arise from the same cognitive capacity. Maybe this capacity is what creates subjective experience out of what we observe or imagine -- if we didn't have it, we'd just react without much subjective experience as we observe the world around us.
And remember my friend, future events such as these will affect you in the future
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

Researchers at OpenAI have trained a neural network to play Minecraft to the same standard as human players.

The AI model was trained on 70,000 hours of miscellaneous in-game footage, supplemented with a small dataset of videos in which specific in-game tasks were performed and the keyboard and mouse inputs were also recorded.

After fine-tuning, OpenAI found the model was able to perform all manner of skills, from swimming to hunting for animals and consuming their meat. It also grasped the “pillar jump”, a move whereby the player places a block of material below themselves in mid-air in order to gain elevation.

Perhaps most impressive, the AI was able to craft diamond tools (requiring a long string of actions to be executed in sequence), which OpenAI described as an “unprecedented” achievement for a computer agent.
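
The training recipe described above amounts to behavioral cloning: frames in, recorded keyboard/mouse actions out, trained with a supervised loss. Below is a minimal PyTorch sketch; the network, frame size, and action-space size are invented for illustration, and OpenAI's actual system is far larger and more involved.

Code: Select all

# Minimal behavioral-cloning sketch: predict recorded inputs from frames.
# Shapes and sizes are invented; this is not OpenAI's actual model.

import torch
import torch.nn as nn

N_ACTIONS = 32  # assumed size of a discretized keyboard/mouse action space
FRAME = 64      # assumed square frame size in pixels

policy = nn.Sequential(                          # toy stand-in for the real video model
    nn.Conv2d(3, 16, kernel_size=8, stride=4),   # 64x64 -> 15x15 feature maps
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 15 * 15, N_ACTIONS),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(frames: torch.Tensor, actions: torch.Tensor) -> float:
    """frames: (batch, 3, 64, 64) video frames; actions: (batch,) recorded inputs."""
    logits = policy(frames)          # predict which input the human pressed
    loss = loss_fn(logits, actions)  # supervised loss against recorded actions
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# One step on a fake batch standing in for labeled gameplay frames:
fake_frames = torch.randn(8, 3, FRAME, FRAME)
fake_actions = torch.randint(0, N_ACTIONS, (8,))
print(train_step(fake_frames, fake_actions))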
And remember my friend, future events such as these will affect you in the future
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

Recent developments in machine learning have made virtual assistants reliable for various activities, including restaurant recommendations, bill-paying assistance, and appointment reminders.

New work from the Microsoft research team presents GODEL, a Grounded Open Dialogue Language Model. It introduces a new class of pretrained language models that support both task-oriented and social dialogue and are assessed by the utility of their responses. With GODEL, the team aims to help researchers and developers design dialogue agents that are unrestricted in the types of queries they can respond to and the sources of information they can draw from.

Modern state-of-the-art models built on massive pretrained language models (PLMs) show the potential for meaningful, open-ended conversational exchanges, yet they resist meaningful comparison because there is no agreement on how to evaluate them. The GODEL work addresses the absence of reliable automated evaluation criteria, which has long been a barrier to general-purpose open-ended dialogue models.
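
As a sketch of what "grounded" means here: the reply is conditioned on both the dialogue context and an external knowledge snippet. The checkpoint id and the [CONTEXT]/[KNOWLEDGE] prompt markers below follow my recollection of the public release and should be verified against the official model card before relying on them.

Code: Select all

# Sketch of grounded generation in the GODEL style. The checkpoint id and the
# [CONTEXT]/[KNOWLEDGE] markers are from memory of the release -- verify them.

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

NAME = "microsoft/GODEL-v1_1-base-seq2seq"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(NAME)

instruction = "Instruction: given a dialog context and related knowledge, reply helpfully."
dialog = " EOS ".join(["When does the restaurant open on weekdays?"])
knowledge = "[KNOWLEDGE] The restaurant is open from 11am to 9pm on weekdays."

query = f"{instruction} [CONTEXT] {dialog} {knowledge}"
input_ids = tokenizer(query, return_tensors="pt").input_ids
output = model.generate(input_ids, max_length=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))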
And remember my friend, future events such as these will affect you in the future
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

When you read a sentence like this one, your past experience tells you that it’s written by a thinking, feeling human. And, in this case, there is indeed a human typing these words: [Hi, there!] But these days, some sentences that appear remarkably humanlike are actually generated by artificial intelligence systems trained on massive amounts of human text.

People are so accustomed to assuming that fluent language comes from a thinking, feeling human that evidence to the contrary can be difficult to wrap your head around. How are people likely to navigate this relatively uncharted territory? Because of a persistent tendency to associate fluent expression with fluent thought, it is natural – but potentially misleading – to think that if an AI model can express itself fluently, that means it thinks and feels just like humans do.

Thus, it is perhaps unsurprising that a former Google engineer recently claimed that Google’s AI system LaMDA has a sense of self because it can eloquently generate text about its purported feelings. This event and the subsequent media coverage led to a number of rightly skeptical articles and posts about the claim that computational models of human language are sentient, meaning capable of thinking and feeling and experiencing.

The question of what it would mean for an AI model to be sentient is complicated (see, for instance, our colleague’s take), and our goal here is not to settle it. But as language researchers, we can use our work in cognitive science and linguistics to explain why it is all too easy for humans to fall into the cognitive trap of thinking that an entity that can use language fluently is sentient, conscious or intelligent.
And remember my friend, future events such as these will affect you in the future
weatheriscool
Posts: 12967
Joined: Sun May 16, 2021 6:16 pm

Re: AI & Robotics News and Discussions

Post by weatheriscool »

Robots are driving US co-workers to substance abuse, mental health issues, finds study
https://phys.org/news/2022-06-robots-co ... ental.html
by University of Pittsburgh

Automation enhances industry, but it's harmful to the mental health of its human co-workers.

A University of Pittsburgh study suggests that while American workers who work alongside industrial robots are less likely to suffer physical injury, they are more likely to suffer from adverse mental health effects—and even more likely to abuse drugs or alcohol.

These findings come from a study published last week in Labour Economics by Pitt economist Osea Giuntella, along with a team that included Pitt colleague Rania Gihleb, an assistant professor in the Department of Economics, and Tianyi Wang, a postdoctoral researcher who earned his Ph.D. at Pitt.

"There is a wide interest in understanding labor market effects of robots. And evidence of how robots affected employment and wages of workers, particularly in the manufacturing sector," said Giuntella, an expert in labor economics and economic demography and an assistant professor in the Department of Economics in the Kenneth P. Dietrich School of Arts and Sciences.

"However, we still know very little about the effects on physical and mental health. On one hand, robots could take some of the most strenuous, physically intensive, and risky tasks, reducing workers' risk. On the other hand, the competition with robots may increase the pressure on workers who may lose their jobs or forced to retrain. Of course, labor market institutions may play an important role, particularly in a transition phase."
weatheriscool
Posts: 12967
Joined: Sun May 16, 2021 6:16 pm

Re: AI & Robotics News and Discussions

Post by weatheriscool »

'Fake' data helps robots learn the ropes faster
https://techxplore.com/news/2022-06-fak ... aster.html
by University of Michigan
In a step toward robots that can learn on the fly like humans do, a new approach expands training data sets for robots that work with soft objects like ropes and fabrics, or in cluttered environments.

Developed by robotics researchers at the University of Michigan, it could cut learning time for new materials and environments down to a few hours rather than a week or two.

In simulations, the expanded training data set improved the success rate of a robot looping a rope around an engine block by more than 40% and nearly doubled the successes of a physical robot for a similar task.

That task is among those a robot mechanic would need to be able to do with ease. But using today's methods, learning how to manipulate each unfamiliar hose or belt would require huge amounts of data, likely gathered over days or weeks, says Dmitry Berenson, U-M associate professor of robotics and senior author of a paper presented today at Robotics: Science and Systems in New York City.

In that time, the robot would play around with the hose—stretching it, bringing the ends together, looping it around obstacles and so on—until it understood all the ways the hose could move.

"If the robot needs to play with the hose for a long time before being able to install it, that's not going to work for many applications," Berenson said.