Synthetic Media & Generative AI News and Discussions

User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: Synthetic Media & Deepfakes News and Discussions

Post by Yuli Ban »

Neural Rendering: NeRF Takes a Walk in the Fresh Air
A collaboration between Google Research and Harvard University has developed a new method for creating 360-degree neural video of complete scenes using Neural Radiance Fields (NeRF). The approach takes NeRF a step closer to casual use in arbitrary environments, rather than being restricted to tabletop models or closed interior scenes.
Mip-NeRF 360 can handle extended backgrounds and ‘infinite’ content such as the sky because, unlike most previous iterations, it constrains how distant points along each light ray are represented, bounding the region the network must model and reining in otherwise lengthy training times. See the accompanying video embedded at the end of this article for more examples and an extended look at the process.
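The "limits" the article alludes to come from the paper's scene-contraction trick: unbounded sample points along a ray are warped into a bounded ball before the network ever sees them, so the sky and other distant content still get finite coordinates. A minimal NumPy sketch of that warp (the function name is mine, not the paper's):

```python
import numpy as np

def contract(x: np.ndarray) -> np.ndarray:
    """Map an unbounded 3D point into a ball of radius 2.

    Points inside the unit sphere pass through unchanged; points
    farther away are smoothly compressed toward the radius-2
    boundary, so 'infinite' content gets a finite coordinate.
    """
    norm = np.linalg.norm(x)
    if norm <= 1.0:
        return x
    return (2.0 - 1.0 / norm) * (x / norm)

near = contract(np.array([0.5, 0.0, 0.0]))    # unchanged
far = contract(np.array([1000.0, 0.0, 0.0]))  # lands just inside radius 2
print(near, np.linalg.norm(far))
```

Because every warped point lives inside a fixed-size ball, the network only has to allocate capacity over a bounded volume, which is what keeps training tractable for open outdoor scenes.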
And remember my friend, future events such as these will affect you in the future
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: Synthetic Media & Deepfakes News and Discussions

Post by Yuli Ban »

Andy Chanley, the afternoon drive host at Southern California's public radio station 88.5 KCSN, has been a radio DJ for over 32 years. And now, thanks to artificial intelligence technology, his voice will live on simultaneously in many places.

"I may be a robot, but I still love to rock," says the robot DJ named ANDY, derived from Artificial Neural Disk-JockeY, in Chanley's voice, during a demonstration for Reuters where the voice was hard to distinguish from a human DJ.

Our phones, speakers and rice cookers have been talking to us for years, but their voices have been robotic. Seattle-based AI startup WellSaid Labs says it has finessed the technology to create over 50 real human voice avatars like ANDY so far, where the producer just needs to type in text to create the narration.

Zack Zalon, CEO of Los Angeles-based AI startup Super Hi-Fi, said ANDY will be integrated into its AI platform that automates music production. So instead of a music playlist, ANDY can DJ the experience, introducing the songs and talking about them.

The next step will be for the AI to automate the text that is created by humans as well. "That's really the triumvirate that we think is going to take this to the next level," Zalon said.
And remember my friend, future events such as these will affect you in the future
weatheriscool
Posts: 12967
Joined: Sun May 16, 2021 6:16 pm

Re: Synthetic Media & Deepfakes News and Discussions

Post by weatheriscool »

Technique enables real-time rendering of scenes in 3D
https://techxplore.com/news/2021-12-tec ... es-3d.html
by Adam Zewe, Massachusetts Institute of Technology
To represent a 3D scene from a 2D image, a light field network encodes the 360-degree light field of the 3D scene into a neural network that directly maps each camera ray to the color observed by that ray. Credit: Massachusetts Institute of Technology

Humans are pretty good at looking at a single two-dimensional image and understanding the full three-dimensional scene that it captures. Artificial intelligence agents are not.

Yet a machine that needs to interact with objects in the world—like a robot designed to harvest crops or assist with surgery—must be able to infer properties about a 3D scene from observations of the 2D images it's trained on.

While scientists have had success using neural networks to infer representations of 3D scenes from images, these machine learning methods aren't fast enough to make them feasible for many real-world applications.
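The caption above states the core idea: the trained network maps each camera ray directly to a color, one forward pass per ray, with no sampling along the ray. Here is a toy NumPy sketch of that mapping, with random weights standing in for a trained model; the Plücker ray encoding follows the light-field-network literature, everything else is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def plucker(origin: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Encode a camera ray as 6D Plucker coordinates (direction, moment).
    The encoding depends only on the line itself, not on where along
    the ray the origin happens to sit."""
    d = direction / np.linalg.norm(direction)
    return np.concatenate([d, np.cross(origin, d)])

# A tiny 2-layer MLP standing in for the light field network.
# Real weights would come from training on posed images.
W1, b1 = rng.normal(size=(64, 6)) * 0.1, np.zeros(64)
W2, b2 = rng.normal(size=(3, 64)) * 0.1, np.zeros(3)

def ray_to_color(origin: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """One network evaluation per ray: no marching or sampling along
    the ray, which is why rendering can run in real time."""
    h = np.maximum(W1 @ plucker(origin, direction) + b1, 0.0)  # ReLU
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))               # RGB in [0, 1]

rgb = ray_to_color(np.array([0.0, 0.0, -4.0]), np.array([0.0, 0.0, 1.0]))
print(rgb.shape)  # (3,)
```

The speed claim in the article falls out of this structure: a NeRF-style renderer must query its network at hundreds of sample points per ray, while a light field network answers each ray with a single query.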
Nanotechandmorefuture
Posts: 478
Joined: Fri Sep 17, 2021 6:15 pm
Location: At the moment Miami, FL

Re: Synthetic Media & Deepfakes News and Discussions

Post by Nanotechandmorefuture »

Yuli Ban wrote: Sun Dec 05, 2021 6:51 pm
Andy Chanley, the afternoon drive host at Southern California's public radio station 88.5 KCSN, has been a radio DJ for over 32 years. And now, thanks to artificial intelligence technology, his voice will live on simultaneously in many places.

"I may be a robot, but I still love to rock," says the robot DJ named ANDY, derived from Artificial Neural Disk-JockeY, in Chanley's voice, during a demonstration for Reuters where the voice was hard to distinguish from a human DJ.

Our phones, speakers and rice cookers have been talking to us for years, but their voices have been robotic. Seattle-based AI startup WellSaid Labs says it has finessed the technology to create over 50 real human voice avatars like ANDY so far, where the producer just needs to type in text to create the narration.

Zack Zalon, CEO of Los Angeles-based AI startup Super Hi-Fi, said ANDY will be integrated into its AI platform that automates music production. So instead of a music playlist, ANDY can DJ the experience, introducing the songs and talking about them.

The next step will be for the AI to automate the text that is created by humans as well. "That's really the triumvirate that we think is going to take this to the next level," Zalon said.
Sounds like the brobot will do well.
User avatar
Ozzie guy
Posts: 486
Joined: Sun May 16, 2021 4:40 pm

Re: Synthetic Media & Deepfakes News and Discussions

Post by Ozzie guy »

Synthetic Media will become my biggest hobby until the singularity.

Before I can live in realistic FIVR simulations, what I likely can do in a few years is feed an AI a lot of information about myself and a fantasy scenario I want to live in. The AI could then create a book, or even a movie, about me living in that world and going through that fantasy. It would be a way to test out and pseudo-live in FIVR fantasies before FIVR exists. Even listening to audiobooks of fantasy worlds with me in them would be more fun than anything else that exists.
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: Synthetic Media & Deepfakes News and Discussions

Post by Yuli Ban »

Diffusion models have recently been shown to generate high-quality synthetic images, especially when paired with a guidance technique to trade off diversity for fidelity. We explore diffusion models for the problem of text-conditional image synthesis and compare two different guidance strategies: CLIP guidance and classifier-free guidance. We find that the latter is preferred by human evaluators for both photorealism and caption similarity, and often produces photorealistic samples. Samples from a 3.5 billion parameter text-conditional diffusion model using classifier-free guidance are favored by human evaluators to those from DALL-E, even when the latter uses expensive CLIP reranking. Additionally, we find that our models can be fine-tuned to perform image inpainting, enabling powerful text-driven image editing.
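The guidance trade-off in that abstract has a very small core. Under classifier-free guidance the diffusion model is run twice per denoising step, once with the caption and once with it dropped, and the two noise predictions are extrapolated; the guidance scale is the knob that trades diversity for fidelity. A minimal NumPy sketch of the combination step:

```python
import numpy as np

def classifier_free_guidance(eps_uncond: np.ndarray,
                             eps_cond: np.ndarray,
                             scale: float) -> np.ndarray:
    """Blend the unconditional and text-conditional noise predictions.
    scale = 1 recovers the plain conditional model; scale > 1 pushes
    samples toward the caption at the cost of diversity."""
    return eps_uncond + scale * (eps_cond - eps_uncond)

# Toy noise predictions standing in for two forward passes of the
# same diffusion model (with and without the caption).
e_u = np.array([0.1, -0.2, 0.3])
e_c = np.array([0.3,  0.0, 0.1])

print(classifier_free_guidance(e_u, e_c, 1.0))  # equals e_c exactly
print(classifier_free_guidance(e_u, e_c, 3.0))  # extrapolates past e_c
```

Note what makes this "classifier-free": unlike CLIP guidance, no separate classifier gradient is needed, just a second forward pass of the same network with the conditioning dropped.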
And remember my friend, future events such as these will affect you in the future
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: Synthetic Media & Deepfakes News and Discussions

Post by Yuli Ban »

And remember my friend, future events such as these will affect you in the future
User avatar
bretbernhoft
Posts: 112
Joined: Tue Oct 05, 2021 9:23 am
Location: USA
Contact:

Re: Synthetic Media & Deepfakes News and Discussions

Post by bretbernhoft »

Yuli Ban wrote: Sun Dec 26, 2021 4:17 am
With the ability to intelligently generate media variants (of any kind) in real-time, as would be supported by a cutting-edge video graphics engine, the outcome would be unusually incredible for media enjoyers. Eventually entire movies and video game storylines will be randomly and instantaneously created for each individual user, with lifelike realism.

Let's be real here, the potential for AI-assisted media generation is incredible. The future of media (in terms of creating artifacts/objects) is the ability to imagine, engineer and populate entire universes as simulations. We're getting so close to enabling this level of play.
I am a JavaScript Developer, who loves learning; especially when solving challenges as part of making unique applications for Internet users throughout the global Web. I began my journey in technology with WordPress and Web Analytics. More recently I've been working with React, TypeScript, Tailwind CSS, JavaScript, Vite, Node, Git, Netlify, Quickbase and RESTful JSON APIs.
User avatar
raklian
Posts: 1747
Joined: Sun May 16, 2021 4:46 pm
Location: North Carolina

Re: Synthetic Media & Deepfakes News and Discussions

Post by raklian »

bretbernhoft wrote: Sun Dec 26, 2021 7:53 pm
With the ability to intelligently generate media variants (of any kind) in real-time, as would be supported by a cutting-edge video graphics engine, the outcome would be unusually incredible for media enjoyers. Eventually entire movies and video game storylines will be randomly and instantaneously created for each individual user, with lifelike realism.
Like the blockchain, it looks like entertainment is also going down the decentralization route. No more relying on movie & video game studios with all of their money, manpower, and of course, long production times to offer us entertainment content that isn't even specifically catered to our own preferences most of the time.

Entertainment will be interactive, immediate and personalized. That's the future.
To know is essentially the same as not knowing. The only thing that occurs is the rearrangement of atoms in your brain.
User avatar
bretbernhoft
Posts: 112
Joined: Tue Oct 05, 2021 9:23 am
Location: USA
Contact:

Re: Synthetic Media & Deepfakes News and Discussions

Post by bretbernhoft »

raklian wrote: Sun Dec 26, 2021 8:11 pm
bretbernhoft wrote: Sun Dec 26, 2021 7:53 pm
With the ability to intelligently generate media variants (of any kind) in real-time, as would be supported by a cutting-edge video graphics engine, the outcome would be unusually incredible for media enjoyers. Eventually entire movies and video game storylines will be randomly and instantaneously created for each individual user, with lifelike realism.
Like the blockchain, it looks like entertainment is also going down the decentralization route. No more relying on movie & video game studios with all of their money, manpower, and of course, long production times to offer us entertainment content that isn't even specifically catered to our own preferences most of the time.

Entertainment will be interactive, immediate and personalized. That's the future.
IMO, what you've just described is an important observation about the future of media: it is becoming decentralized, or fractalized. Our media and entertainment are mutating right alongside everything else, powered by the increasing agency of our digital tools. It's hard to convey all this to media enthusiasts who haven't yet dreamed about or considered the subject.
I am a JavaScript Developer, who loves learning; especially when solving challenges as part of making unique applications for Internet users throughout the global Web. I began my journey in technology with WordPress and Web Analytics. More recently I've been working with React, TypeScript, Tailwind CSS, JavaScript, Vite, Node, Git, Netlify, Quickbase and RESTful JSON APIs.