
Synthetic Media & Deepfakes News and Discussions

Artificial intelligence, synthetic media, deepfakes, media synthesis, GANs, artificial imagination, image synthesis, natural language generation, automation, deep learning

183 replies to this topic

Poll: Synthetic Media Poll (5 members have cast votes)

When will deepfakes be perfected?

  1. 2020 to 2022: 1 vote (20.00%)
  2. 2022 to 2024: 3 votes (60.00%)
  3. 2024 to 2026: 0 votes (0.00%)
  4. After 2026: 1 vote (20.00%)

#81
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 21,025 posts
  • Location: New Orleans, LA

An A.I. is designing retro video games — and they’re surprisingly good

Google DeepMind demonstrated a few years back that artificial intelligence (A.I.) could learn to play retro video games better than the majority of human players, without requiring any instruction as to how they should accomplish the feat. Now, researchers from Georgia Tech have taken the next logical leap by demonstrating how A.I. can be used to create brand-new video games after being shown hours of classic 8-bit gaming action for “inspiration.”
The results? New titles like “Killer Bounce” and “Death Walls,” which look like they could have stepped directly out of some grungy 1980s video arcade, designed by machines way more sophisticated than any 1980s computer scientist could have imagined.
 
“Our system operates in several stages,” Mark Riedl, associate professor of Interactive Computing at Georgia Tech, told Digital Trends. “First, we take video of several games being played. In this case, the games are Super Mario Bros., Kirby, and Mega Man. Our system learns models of the level design and game mechanics [and] rules for each game. The machine learning algorithms we use are probabilistic graphical models for learning level design, and a form of causal inference for learning game mechanics.”

Killer Bounce actually looks pretty good.
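Riedl's description is high-level, so here is a toy sketch of the "probabilistic model of level design" idea and nothing more: it is not the Georgia Tech system, just an illustration under the assumption that a level can be read as a sequence of vertical tile columns. It counts which columns follow which in a made-up example level and samples a new level from those transition counts; the ASCII tiles and level data are invented for the example.

```python
# Toy sketch (not the Georgia Tech system): learn a Markov chain over level
# "columns" from an example level and sample a new level from it. The ASCII
# tiles are invented: '-' is empty space, 'X' is ground, '?' is a block.
import random
from collections import defaultdict

example_levels = [
    [
        "----------------",
        "-------??-------",
        "----------------",
        "XXXXXXXX--XXXXXX",
    ],
]

def columns(level):
    """Read a level as a sequence of vertical slices, left to right."""
    height, width = len(level), len(level[0])
    return ["".join(level[row][col] for row in range(height)) for col in range(width)]

# Count how often each column is followed by each other column.
transitions = defaultdict(lambda: defaultdict(int))
for level in example_levels:
    cols = columns(level)
    for a, b in zip(cols, cols[1:]):
        transitions[a][b] += 1

def sample_level(start, length=16):
    """Random walk through the learned transition table."""
    out = [start]
    for _ in range(length - 1):
        followers = transitions.get(out[-1])
        if not followers:
            break
        choices, weights = zip(*followers.items())
        out.append(random.choices(choices, weights=weights)[0])
    # Transpose the sampled columns back into printable rows.
    return ["".join(col[row] for col in out) for row in range(len(start))]

print("\n".join(sample_level(columns(example_levels[0])[0])))
```

Their real system learns from video frames and also infers game mechanics with causal inference, which this sketch doesn't attempt.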


And remember my friend, future events such as these will affect you in the future.


#82
Erowind

    Anarchist without an adjective

  • Members
  • 1,186 posts

Oh, I see.  Yes, definitely a difference between commercial entertainment and artistic expression.  I can see where commercial artists could gain from useful algorithms identifying "success" versus "failure" etc.


Manufactured art isn't real art because of the lack of expression, representation, and/or agency. Here, representation means either facilitating someone else's expression or stimulating expression in an audience. When that audience's expression is also manufactured, they cease to function as an audience. Instead, the manufactured art expresses nothing, demands nothing, and validates nothing.

Take the new Marvel movies. If you ask people why something like the movie Deadpool is good, a common response I've heard is, "because it's Deadpool." Okay, but what is Deadpool? Then they'll say something along the lines of, "well, Deadpool is funny because of x joke or y personality trait." Okay, but why is that funny to you? Is it funny to you because it's genuinely new material expressing something? Or is it funny because you're constantly told it's funny and the trope is recycled in a positive feedback loop through many forms of media and pop culture? Are you expressing anything? Does this art represent you in any way? What does it reflect in you? What are you able to see in it? If the answer is nothing more than circular repetition, how does that express anything but nihilism, fatalism, and despair? I guess one could say manufactured art is a genuine collective expression of capitalist alienation. But if it's only art in that particular way, it's the worst kind of art and needs to be destroyed. Good art is liberating; even the most painful art is vital to processing humankind's expressions and desires. Art that actively kills expression, kills agency, and kills representation through facilitation is anti-humanistic in nature; it must be abolished.

This isn't a critique of you, Caltrek; it's more just me getting my thoughts out since the topic of commercial art came up. I'm also not saying all commercial art is inherently manufactured art, but the overlap is great.

#83
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 21,025 posts
  • Location: New Orleans, LA

Programmer trains artificial intelligence to draw faces from text descriptions | Programmer Animesh Karnewar wanted to know how characters described in books would appear in reality, so he turned to artificial intelligence to see if it could properly render these fictional people


And remember my friend, future events such as these will affect you in the future.


#84
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 21,025 posts
  • Location: New Orleans, LA

This Painter AI Fools Art Historians 39% of the Time | For comparison, the Turing Test requires a computer to fool people 30% of the time


And remember my friend, future events such as these will affect you in the future.


#85
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 21,025 posts
  • Location: New Orleans, LA

The future of fake news: Can you tell a real video from a deepfake?

Can you imagine a world where you see a video of a world leader engaged in some kind of damning act, but you can’t tell whether it’s real or a completely computer-generated fake?
What about a world where pretty much anyone could make that fake video on a regular computer using free tools?
It might sound like science fiction, but at least one respected researcher has dropped everything to warn the world that we’re hurtling towards that reality.
We’ll meet him later — but first let’s put your fake-detecting skills to the test.


And remember my friend, future events such as these will affect you in the future.


#86
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 21,025 posts
  • Location: New Orleans, LA

Random generation of anime characters by sophisticated AI programs is now so good, it's unreal

Never would we have thought that characters designed by AI programs would jump from rudimentary to ultra-advanced in the space of three years.
 
In 2015, an artificial intelligence program called Chainer was introduced to the world, which generated anime characters based on users’ inputs and helped artists come up with their own ideas. It was relatively basic and created content that looked like it was haphazardly drawn.
It is now 2018, and the makers of MakeGirls.moe have unleashed Crypko, the most sophisticated character generator AI program to date that will give even professional illustrators a run for their money.



And remember my friend, future events such as these will affect you in the future.


#87
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 21,025 posts
  • Location: New Orleans, LA

Portrait by AI program sells for $432,000

The final price is far higher than the $7,000-$10,000 estimate put on it by Christie's in New York before the sale.
The painting, called Portrait of Edmond Belamy, was created by a Paris-based art collective called Obvious.
The artwork was produced using an algorithm and a data set of 15,000 portraits painted between the 14th and 20th Centuries.
To generate the image, the algorithm compared its own work to those in the data set until it could not tell them apart.
The portrait is the first piece of AI art to go under the hammer at a major auction house. The sale attracted a significant amount of media attention.
"AI is just one of several technologies that will have an impact on the art market of the future - although it is far too early to predict what those changes might be," said Christie's specialist Richard Lloyd, who organised the sale.


And remember my friend, future events such as these will affect you in the future.


#88
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 21,025 posts
  • Location: New Orleans, LA

China’s state-run press agency has created an ‘AI anchor’ to read the news

Xinhua, China’s state-run press agency, has unveiled new “AI anchors” — digital composites created from footage of human hosts that read the news using synthesized voices.
It’s not clear exactly what technology has been used to create the anchors, but they’re in line with the most recent machine learning research. It seems that Xinhua has used footage of human anchors as a base layer, and then animated parts of the mouth and face to turn the speaker into a virtual puppet. By combining this with a synthesized voice, Xinhua can program the digital anchors to read the news, far quicker than using traditional CGI. (We’ve reached out to AI experts in the field to see what their analysis is.)

Just Jesus Christ. It's literally like something out of sci-fi.


And remember my friend, future events such as these will affect you in the future.


#89
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 21,025 posts
  • Location: New Orleans, LA

This AI Shows Us the Sound of Pixels


And remember my friend, future events such as these will affect you in the future.


#90
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 21,025 posts
  • Location: New Orleans, LA

AI Generated Faces From Scratch



And remember my friend, future events such as these will affect you in the future.


#91
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 21,025 posts
  • Location: New Orleans, LA

Nvidia has created the first game demo using AI-generated graphics

The recent boom in artificial intelligence has produced impressive results in a somewhat surprising realm: the world of image and video generation. The latest example comes from chip designer Nvidia, which today published research showing how AI-generated visuals can be combined with a traditional video game engine. The result is a hybrid graphics system that could one day be used in video games, movies, and virtual reality.
“It’s a new way to render video content using deep learning,” Nvidia’s vice president of applied deep learning, Bryan Catanzaro, told The Verge. “Obviously Nvidia cares a lot about generating graphics [and] we’re thinking about how AI is going to revolutionize the field.”
The results of Nvidia’s work aren’t photorealistic and show the trademark visual smearing found in much AI-generated imagery. Nor are they totally novel. In a research paper, the company’s engineers explain how they built upon a number of existing methods, including an influential open-source system called pix2pix. Their work deploys a type of neural network known as a generative adversarial network, or GAN. These are widely used in AI image generation, including for the creation of an AI portrait recently sold by Christie’s.
But Nvidia has introduced a number of innovations, and one product of this work, it says, is the first ever video game demo with AI-generated graphics. It’s a simple driving simulator where players navigate a few city blocks of AI-generated space, but can’t leave their car or otherwise interact with the world. The demo is powered using just a single GPU — a notable achievement for such cutting-edge work. (Though admittedly that GPU is the company’s top of the range $3,000 Titan V, “the most powerful PC GPU ever created” and one typically used for advanced simulation processing rather than gaming.)
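Since the article names pix2pix and GANs, here is a small sketch of the conditioning idea it describes, assuming PyTorch and an invented set of semantic classes: the game engine supplies a label map for each frame, and an encoder-decoder network predicts the RGB pixels for it. This is only a generator stub, nowhere near Nvidia's full system.

```python
# Sketch of the pix2pix-style conditioning idea (not Nvidia's vid2vid model):
# a small encoder-decoder takes a semantic label map from a game engine and
# predicts an RGB frame for it. NUM_CLASSES and the label map are invented.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 8   # hypothetical classes: road, building, car, sky, ...

generator = nn.Sequential(
    nn.Conv2d(NUM_CLASSES, 64, kernel_size=4, stride=2, padding=1),   # 256 -> 128
    nn.ReLU(),
    nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),           # 128 -> 64
    nn.ReLU(),
    nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),  # 64 -> 128
    nn.ReLU(),
    nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1),    # 128 -> 256
    nn.Tanh(),                                                        # RGB in [-1, 1]
)

# A random 256x256 label map standing in for the engine's per-frame layout.
labels = torch.randint(0, NUM_CLASSES, (1, 256, 256))
one_hot = F.one_hot(labels, NUM_CLASSES).permute(0, 3, 1, 2).float()
frame = generator(one_hot)    # shape: (1, 3, 256, 256)
print(frame.shape)
```

A full pipeline would train this generator against a discriminator, pix2pix-style, and add machinery to keep consecutive video frames consistent; the sketch leaves all of that out.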


And remember my friend, future events such as these will affect you in the future.


#92
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 21,025 posts
  • Location: New Orleans, LA

Google AI generates images of 3D models with realistic lighting and reflections

Artificial intelligence (AI) that can synthesize realistic three-dimensional object models isn’t as far-fetched as it might seem. In a paper (“Visual Object Networks: Image Generation with Disentangled 3D Representation“) accepted at the NeurIPS 2018 conference in Montreal, researchers at the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory (MIT CSAIL) and Google describe a generative AI system capable of creating convincing shapes with realistic textures.
The AI system — Visual Object Networks, or VON — not only generates images that are more realistic than some state-of-the-art methods, it also enables shape and texture editing, viewpoint shifts, and other three-dimensional tweaks.
“Modern deep generative models learn to synthesize realistic images,” the researchers wrote. “Most computational models have only focused on generating a 2D image, ignoring the 3D nature of the world … This 2D-only perspective inevitably limits their practical usages in many fields, such as synthetic data generation, robotic learning, visual reality, and the gaming industry.”


And remember my friend, future events such as these will affect you in the future.


#93
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 21,025 posts
  • Location: New Orleans, LA

This AI Can Clone Any Voice, Including Yours | Lyrebird represents an exciting (and frightening) step forward in voice synthesis


And remember my friend, future events such as these will affect you in the future.


#94
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 21,025 posts
  • Location: New Orleans, LA

Phew! I've been meaning to necro this thread for months now, but I never got around to it. I think that was for the best, all things considered, because now I have an absolutely stellar load of articles and videos to post. Literally half a year's worth.


And remember my friend, future events such as these will affect you in the future.


#95
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 21,025 posts
  • Location: New Orleans, LA

The First Novel Written by AI Is Here—and It's as Weird as You'd Expect It to Be

Last year, a novelist went on a road trip across the USA. The trip was an attempt to emulate Jack Kerouac—to go out on the road and find something essential to write about in the experience. There is, however, a key difference between this writer and anyone else talking your ear off in the bar. This writer is just a microphone, a GPS, and a camera hooked up to a laptop and a whole bunch of linear algebra.
People who are optimistic that artificial intelligence and machine learning won’t put us all out of a job say that human ingenuity and creativity will be difficult to imitate. The classic argument is that, just as machines freed us from repetitive manual tasks, machine learning will free us from repetitive intellectual tasks.
This leaves us free to spend more time on the rewarding aspects of our work, pursuing creative hobbies, spending time with loved ones, and generally being human.
In this worldview, creative works like a great novel or symphony, and the emotions they evoke, cannot be reduced to lines of code. Humans retain a dimension of superiority over algorithms.
But is creativity a fundamentally human phenomenon? Or can it be learned by machines?

Related to what Erowind and Caltrek are arguing about.


And remember my friend, future events such as these will affect you in the future.


#96
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 21,025 posts
  • Location: New Orleans, LA

This algorithm can create 3D animations from a single still image

Chung-Yi Weng, a PhD student at the University of Washington, and some of his friends created something truly astonishing.
Their software called “Photo Wake-Up” allows character animations to simply “walk out” of a static image frame — without leaving a hole in the picture behind them. The results were published in a recently submitted paper.
Weng’s method identifies a 2D subject in a single photo as input, and creates a 3D animated version of that subject. The animation can then “walk out, run, sit, or jump in 3D.”
And it could redefine the way we interact with photos. “We believe the method not only enables new ways for people to enjoy and interact with photos, but also suggests a pathway to reconstructing a virtual avatar from a single image,” Weng and his collaborators explain in the paper.


And remember my friend, future events such as these will affect you in the future.


#97
Maximus

    Spaceman

  • Members
  • 1,831 posts
  • Location: Canada

Not even in Harry Potter. The world will be truly magical once we get AR-capable contact lenses.



#98
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 21,025 posts
  • Location: New Orleans, LA

Text-To-Speech Donald Trump Model
by hanyuqn

This is the result of many hours playing around with Tacotron and other publicly implemented TTS models. I started with a model trained on the LJ Speech dataset from https://github.com/keithito/tacotron and fine-tuned it with about 3 hours of Trump audio (mostly from weekly addresses and a few speeches where background noise is minimal), spread across around 4,000 audio files of a few words each. These were generated by splitting videos at silences using ffmpeg, which forced each audio file to start immediately at the beginning of a word and end precisely at the end of a word.

I then ran a script that used Google Cloud Speech to transcribe each file and save the results to a CSV in the same format as LJ Speech, then went through all of the audio and fixed or deleted incorrect transcriptions. While I experimented a lot with different code and changing hyperparameters, it was definitely making the training data as clean as possible that got the best results. This model is still very far from perfect, of course, and the results vary greatly across the different sentences you give it.

Take a listen...
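For anyone curious about the data-prep step described above, here is a rough sketch of how the ffmpeg silence-splitting and the LJ Speech-style metadata.csv could be wired together. The input filename is a placeholder and the Google Cloud Speech transcription is left as a stub; this is not the poster's actual script.

```python
# Rough sketch of the data-prep described above (not the poster's script):
# find silences with ffmpeg's silencedetect filter, cut the audio at those
# points, and write an LJ Speech-style metadata.csv. "speech.wav" and the
# transcribe() stub are placeholders.
import os
import re
import subprocess

SOURCE = "speech.wav"
os.makedirs("clips", exist_ok=True)

# 1. Ask ffmpeg where the silences are (it reports them on stderr).
probe = subprocess.run(
    ["ffmpeg", "-i", SOURCE, "-af", "silencedetect=noise=-35dB:d=0.4", "-f", "null", "-"],
    stderr=subprocess.PIPE, text=True,
)
starts = [float(x) for x in re.findall(r"silence_start: ([\d.]+)", probe.stderr)]
ends = [float(x) for x in re.findall(r"silence_end: ([\d.]+)", probe.stderr)]

# 2. Speech runs from the end of one silence to the start of the next.
segments = list(zip([0.0] + ends, starts + [None]))

def transcribe(path):
    # Placeholder: the poster used Google Cloud Speech here, then
    # hand-corrected the transcriptions.
    return ""

# 3. Cut each segment and append a "filename|transcript" row, LJ Speech style.
with open("metadata.csv", "w", encoding="utf-8") as meta:
    for i, (start, stop) in enumerate(segments):
        clip = f"clips/trump-{i:04d}.wav"
        cmd = ["ffmpeg", "-y", "-i", SOURCE, "-ss", str(start)]
        if stop is not None:
            cmd += ["-to", str(stop)]
        subprocess.run(cmd + [clip], check=True)
        meta.write(f"{clip}|{transcribe(clip)}\n")
```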


And remember my friend, future events such as these will affect you in the future.


#99
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 21,025 posts
  • Location: New Orleans, LA
Here's an old post of mine, an edited version of a post I made earlier in the thread.
 
 
[Possibilities] The Bedroom Multimedia Studio
 
When I was about 9 or 10 (which wasn't that long ago, just 15 years), I actually thought we already had media generation and synthesizing algorithms. I very distinctly remember looking up on AOL how to download a program that would create a cartoon. I'd type in my useless and baffling 10-year-old descriptions and the computer would self-destruct trying to decipher what I was trying to say, but eventually I'd get a 30-minute episode that I could watch and show off to others. I was confused and disappointed when my words didn't magically turn into a cartoon.
 
Eventually, when I was 13, I found this Anime Studios program at Walmart and thought "Aw sweet, Imma go create my own show". And I even asked myself "Do they already have the voices I want on there?" at one point like a dumbass.
I think I still have that disc, too. Never touched it since 2008, which should tell you whether or not I was able to create a cartoon that day. But I was a silly little kid who didn't really know much about technology. Computers and the internet seemed like magic, and this was the era right around when blogs became common, so I just watched Cartoon Network and its contemporaries not understanding the sheer effort and man-hours that went into creating even a single animated short, let alone an entire series. Cartoons just existed. I didn't know how they were created other than that some guy drew images over and over again and they somehow got put on TV, and I never thought to use the magical internet to research how cartoons are made. So surely, computers could just generate them, right?
 
Actually, I did find out about the extensive process of animation around the same time as my second little gaffe, because I genuinely did try to do something with it and was flabbergasted at how labor-intensive it was just to create a single piss-poor 2-second loop of stick figures. That brought me to do some actual research.
 
How disappointing, right? That was over a decade ago.
 
It feels surreal and sorta vindicating to know that, by the late 2020s, it's possible that some stupid 10-year-old who doesn't know better could think "Can I download something that makes a cartoon?" and the answer will be "Yes!"
 
If I were a bettin' man, I'd say that the sort of generative networks we'll see during the next decade, up to its latest years (2028 and 2029), will allow you to pick and choose what sort of show you want to create.
 
Like, to keep with the cartoon example, imagine that for the early 2020s: GoAnimate, but vastly superior. There are many presets you can choose. You can type in detailed descriptions of someone or something and get a character or object designed, and then you can pick out what you want in a specific style. You don't even have to stick with any one style for all objects and characters— you could have a character from classic Disney a la Snow White or Beauty and the Beast with a modern overly-childlike moe anime character in a single work with a background that looks like it was drawn in the style of Ed Edd 'n Eddy. The animation will probably be either too awkward or too fluid, making it too obvious that it's something created with software. And the voices will also be realistic, but with poorly cadenced intonations and weird inflections. But a particularly skilled creator or a small team could create something with $200 that looks like it cost a thousand times more than that (which is the actual cost of a single episode of a typical Western cartoon, believe it or not).
 
I can see that as being probable by 2022-2023 or so. Mainly if deep learning + destructive brain scans takes off.
 
By the late 2020s, it will all likely be many times more refined, sort of like the difference between creating websites in 1998 vs. today. Animation quality/fluidity could be entirely a choice rather than a limitation, and voice synthesis will be indistinguishable from reality. The OST can be anything you want it to be, however epic and sweeping or minimalist and unpretentious. What was rough and took a lot of your own effort in the early 2020s will be as easy as typing in descriptions and then refining the results with the help of AI, letting said AI run through various scenes, and choosing whichever one is best or closest so you can tweak it further. The AI could also help you edit it if you doubt your own hand. And if you were to release it to the world, you genuinely could, since NLP/NLU would allow you to translate everything (speech and text) into any other language. You could create an entire multimedia franchise in your bedroom.

And remember my friend, future events such as these will affect you in the future.


#100
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 21,025 posts
  • Location: New Orleans, LA

Forbes writers will use AI to pen their rough drafts

Contrary to popular belief, the steadfast march toward automation is affecting all sorts of fields — not just blue-collar industries like manufacturing and transportation. Already, artificially intelligent systems (AI) are reviewing contracts and mining documents in discovery, determining which job candidates get callbacks, and selecting the inventory retailers choose to highlight for particular customers.
Now, at least one publication is using it to help generate “thought starters” that might later become published articles.
According to a report in Digiday this morning, Forbes’ product team recently began internally testing an AI tool that supplies story threads. It builds on the publisher’s semi-automated topic recommendation feature in Bertie, its content management system (CMS), which produces writing prompts based on reporters’ previous work.
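Forbes hasn't published how Bertie works, so purely as an illustration of "writing prompts based on reporters' previous work," here is a tiny sketch that pulls a reporter's recurring terms with TF-IDF (scikit-learn) and turns them into prompts. The corpus and the prompt wording are invented.

```python
# Loose illustration only (Bertie's internals aren't public): surface the
# terms a reporter keeps returning to and turn them into draft prompts.
from sklearn.feature_extraction.text import TfidfVectorizer

past_articles = [   # hypothetical corpus of one reporter's previous pieces
    "Why GAN-generated art is flooding online marketplaces",
    "Deepfake detection startups raise record funding",
    "How synthetic voices are changing audiobook production",
]

vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
tfidf = vectorizer.fit_transform(past_articles)

# Average each term's weight across the corpus and keep the strongest ones.
scores = tfidf.mean(axis=0).A1
terms = vectorizer.get_feature_names_out()
top_terms = [terms[i] for i in scores.argsort()[::-1][:5]]

for term in top_terms:
    print(f"Thought starter: what's new this week in '{term}'?")
```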


And remember my friend, future events such as these will affect you in the future.





