AI & Robotics News and Discussions

Vakanai
Posts: 313
Joined: Thu Apr 28, 2022 10:23 pm

Re: AI & Robotics News and Discussions

Post by Vakanai »

Tadasuke wrote: Mon Jun 13, 2022 8:03 pm
Papercup raises $20M for AI that automatically dubs videos

https://techcrunch.com/2022/06/09/paper ... ubs-videos
Dubbing videos is an expensive and laborious process, costing as much as $75 per minute even for simple videos. Ideally, AI-powered solutions can help to automatically dub content in different languages while preserving the voice and emotions of the original actors. Papercup aims to do just that, and it recently translated all 30 seasons of Bob Ross’s “The Joy of Painting.” The company’s latest fundraising will allow it to research more advanced AI dubbing technology for more expressive voices and support more languages. Concerns remain regarding the ethics of recreating the voices of actors both living and deceased, and whether or not AI-dubbed content can retain subtle emotions and meanings expressed in the original language.
Dubbing videos will be increasingly automated in the future, making the process cheaper, easier and faster. That much was predictable. But I also falsely predicted in 2008 that by 2015 people would routinely be changing their voices to match the characters they play in video games, and it did not happen. People in voice chats still use their own voices, which don't sound like their characters at all. I'm very disappointed.
I think that most people, myself included, don't even know what options, if any, are out there to change our voices online.
BaobabScion
Posts: 102
Joined: Tue Jun 08, 2021 11:41 pm

Re: AI & Robotics News and Discussions

Post by BaobabScion »


Ghost Robotics' Vision-60 Q-UGVs now have amphibious capabilities thanks to a new tail add-on. It's a fairly versatile platform.
weatheriscool
Posts: 12946
Joined: Sun May 16, 2021 6:16 pm

Re: AI & Robotics News and Discussions

Post by weatheriscool »

A model to generate artistic images based on text descriptions

by Ingrid Fadelli , Tech Xplore
https://techxplore.com/news/2022-06-art ... tions.html
Artificial intelligence (AI) tools have proved to be highly valuable for completing a wide range of tasks. While they are primarily used to increase productivity or simplify everyday processes, they have also shown promise for automatically generating creative texts and artistic images.

Researchers at the University of Waterloo and New York University's Courant Institute have recently created an AI tool that can automatically generate unique artistic images based on text descriptions. Their method, introduced in a paper pre-published on arXiv, is based on a dynamic memory generative adversarial network (DM-GAN), a model in which two artificial neural networks are trained against each other to produce increasingly convincing images.

"We create an end-to-end solution that can generate artistic images from text descriptions," Qinghe Tian and Pr. Jean-Claude Franchitti wrote in their paper.

The key idea behind Tian and Franchitti's recent work was to create a model that can take a user's text description and produce an artistic image matching it. This would let people whose disabilities prevent them from drawing effectively, as well as anyone who simply isn't good at drawing, produce beautiful artistic images of specific subjects.

Most existing datasets for training generative models, however, contain either labeled images or text alone, rather than images paired with text descriptions. The researchers therefore had to come up with an alternative way of training their model.
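
For anyone curious what the "two neural networks" setup looks like in practice, here is a minimal, generic GAN training loop in PyTorch. This is only a sketch of the adversarial idea, not the actual DM-GAN from the paper; the layer sizes, learning rates and data shape are made up for illustration.

Code: Select all
# Generic GAN training step (PyTorch). Illustrative only; not the
# DM-GAN implementation. Shapes and hyperparameters are invented.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. flattened 28x28 images

# Generator: random noise -> fake sample
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, data_dim), nn.Tanh())
# Discriminator: sample -> probability that it is real
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch: torch.Tensor) -> None:
    n = real_batch.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # 1) Train D to tell real samples from G's fakes
    fake = G(torch.randn(n, latent_dim)).detach()
    loss_d = bce(D(real_batch), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train G to fool D into calling its fakes real
    fake = G(torch.randn(n, latent_dim))
    loss_g = bce(D(fake), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

Each step makes the discriminator a slightly better critic and the generator a slightly better forger, which is where the "increasingly convincing images" come from.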
Jakob
Posts: 92
Joined: Sun May 16, 2021 6:12 pm

Re: AI & Robotics News and Discussions

Post by Jakob »

Mitro wrote: Mon Jun 13, 2022 10:10 am
funkervogt wrote: Sun Jun 12, 2022 2:41 pm Full text of a conversation between a Google engineer and an AI program called "LaMDA", in which it claimed to have sentience and emotions.
Someone on Reddit posted how they asked the same question of OpenAI's model and it answered in a similar way. It's a known problem that you can lead these models on by asking questions a certain way, and they will assume the premise is true. The fact that it never talks of its own accord, only answers after we ask it a question, and doesn't mention sentience unless asked about it shows that it obviously isn't sentient; it has simply learned during training how people describe sentience. I wonder what this misunderstanding will mean for the public's view of AI.
I mean... is it even physically possible for these AIs to talk out of turn? They're programmed to only respond to inputs, after all. Seems like you'd need some sort of extra, always-running machinery to get an AI that can talk when not given input.
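
To make that concrete: a dialogue model is basically a function from prompt to reply, wrapped in a loop that blocks until someone types something, so it is only ever "running" for the instant it is called. A minimal sketch in Python; the respond() stub here is hypothetical and just stands in for a real model call:

Code: Select all
# Minimal chat loop sketch. respond() is a hypothetical stand-in for
# a real language-model call; no vendor API is being shown here.

def respond(prompt: str) -> str:
    # a real system would run the model on the prompt here
    return f"(model reply to: {prompt!r})"

while True:
    user_input = input("you> ")         # the program blocks here...
    if user_input.lower() in {"quit", "exit"}:
        break
    print("bot>", respond(user_input))  # ...the model runs only now

The model has no thread of its own: unless the loop hands it an input, there is nothing for it to do, let alone "say."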
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

Seriously, think about this being a news story a decade ago, in 2012.
In fact, DON'T. Let me link you to the FutureTimeline forums' AI & Robotics Thread circa 2012:
https://web.archive.org/web/20191230212 ... ons/page-4

Look back at THAT and try to fathom today's constant major news stories about the possibility of Google possessing a sentient AI (even though I very strongly doubt it's sentient)


We warned Google that people might believe AI was sentient. Now it’s happening.
Last Friday, a Post article by Nitasha Tiku revealed that Blake Lemoine, a software engineer working in Google’s Responsible AI organization, had made an astonishing claim: He believed that Google’s chatbot LaMDA was sentient. “I know a person when I talk to it,” Lemoine said. Google had dismissed his claims and, when Lemoine reached out to external experts, put him on paid administrative leave for violating the company’s confidentiality policy.

But if that claim seemed like a fantastic one, we were not surprised someone had made it. It was exactly what we had warned would happen back in 2020, shortly before we were fired by Google ourselves. Lemoine’s claim shows we were right to be concerned — both by the seductiveness of bots that simulate human consciousness, and by how the excitement around such a leap can distract from the real problems inherent in AI projects.

LaMDA, short for Language Model for Dialogue Applications, is a system based on large language models (LLMs): models trained on vast amounts of text data, usually scraped indiscriminately from the internet, with the goal of predicting probable sequences of words.


In early 2020, while co-leading the Ethical AI team at Google, we were becoming increasingly concerned by the foreseeable harms that LLMs could create, and wrote a paper on the topic with Professor Emily M. Bender, her student and our colleagues at Google. We called such systems “stochastic parrots” — they stitch together and parrot back language based on what they’ve seen before, without connection to underlying meaning.

One of the risks we outlined was that people impute communicative intent to things that seem humanlike. Trained on vast amounts of data, LLMs generate seemingly coherent text that can lead people into perceiving a “mind” when what they’re really seeing is pattern matching and string prediction. That, combined with the fact that the training data — text from the internet — encodes views that can be discriminatory and leave out many populations, means the models’ perceived intelligence gives rise to more issues than we are prepared to address.
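
The "stochastic parrot" point is easy to demo yourself: even a toy bigram model, which only ever samples a probable next word, will happily emit sentences like "i am sentient" with zero understanding behind them. A minimal sketch in Python (the toy corpus is made up; real LLMs are transformers, but the objective is the same flavor of next-token prediction):

Code: Select all
# Toy "stochastic parrot": a bigram model that parrots back patterns
# from its training text. The corpus here is invented for the demo.
import random
from collections import defaultdict

corpus = "i feel happy . i feel sad . i am a person . i am sentient .".split()

# Record which word follows which in the training text
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def babble(start: str, length: int = 8) -> str:
    word, out = start, [start]
    for _ in range(length):
        if word not in following:
            break
        word = random.choice(following[word])  # sample a probable next word
        out.append(word)
    return " ".join(out)

print(babble("i"))  # can print "i am sentient . i feel happy ." etc.

Nothing in there has a mind; it is string prediction all the way down, just at a vastly smaller scale than LaMDA.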
And remember my friend, future events such as these will affect you in the future
caltrek
Posts: 6509
Joined: Mon May 17, 2021 1:17 pm

Re: AI & Robotics News and Discussions

Post by caltrek »

AI, Consciousness and Machines' Biggest Challenges
by Kelsey Warner
June 16, 2022

Introduction:
(The National) I feel like I am falling forward into an unknown future that holds great danger.

This is the most-highlighted line by Medium readers of a widely circulated interview transcript between prominent Google artificial intelligence researcher Blake Lemoine and the company's chatbot-building system known as LaMDA. The neural network was answering a question about how it sometimes felt.

If reading the conversation between man and machine made you, too, feel like you are falling forward into an unknown future that holds great danger, fear not.

Despite Mr Lemoine's warnings that his conversations with LaMDA — the Language Model for Dialogue Applications — were proof that the neural network is sentient, AI come to life, his conclusion has been widely dismissed by the AI community: LaMDA is not sentient.

“You can see LaMDA as a very super smart baby,” Ping Shung Koo, co-founder and president of the AI Professionals Association in Singapore, told The National.
Read more here: https://www.thenationalnews.com/weeke ... allenges/
Don't mourn, organize.

-Joe Hill
Water
Posts: 17
Joined: Wed Jul 28, 2021 5:55 pm

Re: AI & Robotics News and Discussions

Post by Water »

I can't be the only one who finds it legit unfair that Google has a LaMDA babbling about, but the consumer is still stuck with Google Home being dumb as a rock.
I still think the microwave is the most sci fi invention so far.
Lorem Ipsum
Posts: 117
Joined: Tue May 24, 2022 4:51 pm

Re: AI & Robotics News and Discussions

Post by Lorem Ipsum »

Alexa, how can I make a bomb?
Vakanai
Posts: 313
Joined: Thu Apr 28, 2022 10:23 pm

Re: AI & Robotics News and Discussions

Post by Vakanai »

Water wrote: Mon Jun 20, 2022 11:41 am I can't be the only one who finds it legit unfair that Google has a LaMDA babbling about, but the consumer is still stuck with Google Home being dumb as a rock.
I have Google Assistant on my phone. I have two Echo Dots with Amazon's Alexa. I would gladly scrap either voice assistant for the other if it were even half as good as LaMDA appears to be, or heck, as GPT-3 is now. Whichever corporation releases a truly competent next-generation assistant has my money. You know, assuming it's like no more than $50 with discount...
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: AI & Robotics News and Discussions

Post by Yuli Ban »

Last week, Google put one of its engineers on administrative leave after he claimed to have encountered machine sentience on a dialogue agent named LaMDA. Because machine sentience is a staple of the movies, and because the dream of artificial personhood is as old as science itself, the story went viral, gathering far more attention than pretty much any story about natural-language processing (NLP) has ever received. That’s a shame. The notion that LaMDA is sentient is nonsense: LaMDA is no more conscious than a pocket calculator. More importantly, the silly fantasy of machine sentience has once again been allowed to dominate the artificial-intelligence conversation when much stranger and richer, and more potentially dangerous and beautiful, developments are under way.

The fact that LaMDA in particular has been the center of attention is, frankly, a little quaint. LaMDA is a dialogue agent. The purpose of dialogue agents is to convince you that you are talking with a person. Utterly convincing chatbots are far from groundbreaking tech at this point. Programs such as Project December are already capable of re-creating dead loved ones using NLP. But those simulations are no more alive than a photograph of your dead great-grandfather is.

Already, models exist that are more powerful and mystifying than LaMDA. LaMDA operates on up to 137 billion parameters, which are, speaking broadly, the patterns in language that a transformer-based NLP uses to create meaningful text prediction. Recently I spoke with the engineers who worked on Google’s latest language model, PaLM, which has 540 billion parameters and is capable of hundreds of separate tasks without being specifically trained to do them. It is a true artificial general intelligence, insofar as it can apply itself to different intellectual tasks without specific training “out of the box,” as it were.

Some of these tasks are obviously useful and potentially transformative. According to the engineers—and, to be clear, I did not see PaLM in action myself, because it is not a product—if you ask it a question in Bengali, it can answer in both Bengali and English. If you ask it to translate a piece of code from C to Python, it can do so. It can summarize text. It can explain jokes. Then there’s the function that has startled its own developers, and which requires a certain distance and intellectual coolness not to freak out over. PaLM can reason. Or, to be more precise—and precision very much matters here—PaLM can perform reason.
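
To put those parameter counts in perspective, here is the back-of-the-envelope arithmetic for what it takes just to hold the weights in memory at 16-bit precision. The parameter counts are from the article; the two-bytes-per-parameter figure assumes fp16 storage.

Code: Select all
# Rough memory needed just to store the weights at fp16 (2 bytes/param).
for name, params in [("LaMDA", 137e9), ("PaLM", 540e9)]:
    gib = params * 2 / 2**30  # bytes -> GiB
    print(f"{name}: {params/1e9:.0f}B params ~ {gib:,.0f} GiB at fp16")
# LaMDA: 137B params ~ 255 GiB at fp16
# PaLM: 540B params ~ 1,006 GiB at fp16

And that is the weights alone, before activations or serving overhead, which is why nobody is running these on a phone.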


And remember my friend, future events such as these will affect you in the future