
#21
Yuli Ban

So here's some more on this subject:
 
Based on nothing but conjecture, I feel that 2018 will be the year media synthesis begins taking off

Remember when Hayao Miyazaki called an AI-created animation "an insult to life itself"?
https://www.youtube.com/watch?v=ngZ0K3lWKRc
The cold fact is that it's not going away. If anything, we're on the cusp of an era where AI-created media is dominant.
A recent story that I liked was:
Nvidia’s new AI creates disturbingly convincing fake videos

Researchers from Nvidia have created an image translation AI that will almost certainly have you second-guessing everything you see online. The system can change day into night, winter into summer, and house cats into cheetahs with minimal training materials.

https://www.youtube.com/watch?v=9VC0c3pndbI
And you also get media synthesis with:
AI Generated Music Based on Listener Feedback
I suck at drawing, so I asked a deep neural net to draw a worldmap for me from this MS Paint sketch
deepart.io - become a digital artist
In other words, human creativity enhanced by artificial intelligence. I can't be 100% certain obviously, but I can guess. And I feel that, in 2018, we're going to start seeing AI programs that help you create things. Image translation, image synthesis, gif and video synthesis even.
A solid challenge, I think, would be for someone to create a comic or manga based purely on image synthesis. Find an art style that you want to replicate, then begin typing descriptions of scenes and text, and finally compile everything so that it makes sense. This might not be done in 2018 and the initial results will be sloppy, but hopefully it could get started then. If it's accomplished, it'll unleash a brave new world of media. And it will soon be translated into animation. Not just 2D animation, but 3D as well.
I believe someone mentioned that these sorts of content-synthesizing algorithms can also be used to smooth out CG. We achieved photorealistic CG a couple of years ago, but only as static images or with very, very limited animation. Realism costs money. The closer you get to photorealism, the harder it is to break out of the uncanny valley and the more expensive it gets, as you need more and more artists and programmers. And what good is photorealism if the physics are wonky? Physics costs even more money. Yeah, the way the water actually moves around a player character's hand may look amazingly realistic, but it's easy to forget just how much time and effort went into doing that. And if your system isn't powerful enough, it won't be able to run it anyway. AI can rectify all of this, meaning you could create a basic animation in DarkBASIC for all anyone cares and the algorithm would fill in all the blanks, turning it into something indistinguishable from real life.
And of course:
WaveNet: A Generative Model for Raw Audio
Lyrebird claims it can recreate any voice using just one minute of sample audio
Want realistic-sounding speech without hiring voice actors? There's an algorithm for that too.
Japanese AI Writes a Novel, Nearly Wins Literary Award
Want an epic, thought-provoking novel or poem but have virtually no writing skills? There's an algorithm for that too. And if you're like me and you prefer to write your own novels and stories, then there's going to be an algorithm that edits them better than any professional and turns that steaming finger turd into a polished platinum trophy.
I support these endeavors because, while it's true that content creators get screwed, I can only think of the millions more creators whose imaginations go untapped because they simply lack the time, the money, or the physical talent. For all we know, the average Joe who fucked up your cheeseburger could have the concept for the greatest movie franchise in history, but since he doesn't have $500 million lying around in his closet (unless he does and is just trying to piss you off), we'll never know about it. Maybe you despise the way a show ended (if it actually ended and wasn't just canceled) or even the way it played out, and feel you could have done better. Obviously you can't actually do that because you'd need millions of dollars, but if you had a program that could whip up such a thing...
Or maybe you love a classic game, but when you load it up, it's really showing its age. Maybe it could use more levels, or the levels it has need fleshing out, or the graphics could be wholly improved. There'll be a program for that one too.
In 2018, my prediction is that we'll see the public release of more media synthesis AI. A lot of it will be super rudimentary compared to what's coming down the pike (2020-2023 is when we'll likely get into the really fun territory, and it'll likely have matured by 2027). Hollywood and various creative studios might well object, but it'll be the start of something amazing.
And like I said, it will probably start with something like someone using these media synthesis AI to create a comic. A short comic, but a comic nonetheless. One that can be registered as an IP and sold for money. I'm tempted to say "animated short", but I'm being conservative. Even creating a comic/manga with AI would be a moonshot for the field.
As for the realm of general AI, I don't see much happening in that area in 2018. Narrow AI will get stronger, much stronger. We might see more clustered narrow AI as well. But I'm more excited about media synthesis.
Well, excited and frightened. Because if you can use a computer to create new entertainment, you can also use AI to create new "facts".
https://www.youtube.com/watch?v=ohmajJTcpNk
Yes, a brave new world indeed.

Starspawn0's take on this roughly reinforces my opinion:
 

OpenAI co-founder Greg Brockman thinks that in 2018 we will see "perfect" video synthesis from scratch, as well as speech synthesis:
https://www.reddit.c...ence_with_greg/
I don’t think it will be perfect, but maybe a lot better than exists today. It will be eerie.

...

Perfect, for long videos (more than 1 minute), is going to require human-level AI. You can turn any AI problem into a video-synthesis problem -- e.g. have an animated person answer your questions.
"Far off" means more than 20 years. But before then there will be systems that can imitate humans pretty well in conversation -- like the robot Shelia in this clip:
https://youtu.be/4sKYEjclen8
(Her movements are too graceful to imitate within 20 years, but her responses will be imitable in the next 10 to 15.)


And remember my friend, future events such as these will affect you in the future.


#22
Yuli Ban

There's also been a slew of news stories about this topic in the past few days.

And there are also these older ones (some dating back to 2014!)
 




#23
Yuli Ban

A year-old video talking about image synthesis: 

 

And a more recent one:




#24
Yuli Ban

This assists my prediction of how this will ultimately unfold. I hold no delusions about the time frame: very little of this is going to be on your computer within five years. You can use DeepDream and DeepArt and various DL voice synthesis programs, but it's all still very early in development. There will still be voice actors and animators in 2025. They'll still be fields you can get into and earn a career's pay. Comic and manga creators also won't be replaced anytime soon. If anything, it might take a bit longer for them precisely because of the nature of cartooning. Neural networks today are fantastic at repainting a pre-existing image or using an image they've seen before to create something new. But so far, they lack the ability to actually stylize an image. There's no way to exaggerate features like you'd see in a cartoon. We know networks understand anime eyes, but they don't seem to be able to create an actual anime character based on images they've seen. If you fed a computer 1,000 anime stills and then input your own portrait, it wouldn't give you huge eyes or unrealistically sharpened/cutened features; it'd just recolor your portrait to make it look toon-shaded. Likewise, I can't make my friend look like a character from The Simpsons with any algorithm that currently exists. He'd just have crayon-yellow skin and a flesh-colored snout, but otherwise wouldn't actually have his skeletal or muscular structure altered to fit the Simpsons' distinctive style.

No network today can do that. It might be possible within a couple of years to at least get a GAN to approximate it, but it won't be until the mid-2020s at the earliest that we'll see "filters" that could change my portrait into an actual cartoon. As of right now, making an algorithm "cartoonify" a person simply means adding vector graphics or cel-shading.
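To make "repainting" concrete, here's roughly what the current crop of style-transfer algorithms does under the hood. This is a bare-bones sketch of Gatys-style neural style transfer in PyTorch; the layer indices, loss weights, and step count are my own guesses, not anyone's published recipe.

```python
# Bare-bones Gatys-style neural style transfer (sketch; settings are guesses).
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = models.vgg19(pretrained=True).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

prep = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(256), transforms.ToTensor(),
])

def features(x, layers=(1, 6, 11, 20, 29)):
    """Collect activations at a handful of VGG layers."""
    out = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            out.append(x)
    return out

def gram(f):
    """Gram matrix: the texture statistics that define 'style' here."""
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return (f @ f.t()) / (c * h * w)

content = prep(Image.open("content.jpg")).unsqueeze(0).to(device)
style = prep(Image.open("style.jpg")).unsqueeze(0).to(device)
target = content.clone().requires_grad_(True)

style_grams = [gram(f) for f in features(style)]
content_feats = features(content)
opt = torch.optim.Adam([target], lr=0.02)

for step in range(300):
    opt.zero_grad()
    feats = features(target)
    content_loss = F.mse_loss(feats[3], content_feats[3])   # keep the layout
    style_loss = sum(F.mse_loss(gram(a), g)                 # match the textures
                     for a, g in zip(feats, style_grams))
    (content_loss + 1e4 * style_loss).backward()
    opt.step()
```

The whole thing lives in pixel space: the network never decides that an eye should be three times bigger, it just nudges colors and textures until the statistics match. That's the gap between today's repainting and actual stylization.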

 

Now, that won't be a problem if you use text-to-image synthesis. You could cut out the middleman and go straight to generating new characters from scratch. And in 2018, I bet we might see the first inklings of this in a very basic way. In a lab, we'll get a comic created entirely by algorithm.

Input text describing a character— if I had to come up with something, I'd make it simple and just go with "round head with stick figure body". 

Do the same thing for others. Describe the ways their limbs bend. If they have mouths, describe whether or not they're open. If there are speech bubbles, what do they look like and how big are they? Etc. etc.

Perhaps you could be more daring and feed a network thousands of images from a pre-chosen art style, but I'm being conservative. 
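If you want a picture of how that text conditioning would even work mechanically, here's a toy sketch. Everything in it is hypothetical and illustrative: the tiny vocabulary, the bag-of-words encoder, the dimensions. Real text-to-image work of the current era (e.g. StackGAN) uses learned RNN text encoders and adversarial training, but the skeleton is the same: embed the description, concatenate it with noise, decode an image.

```python
# Hypothetical text-conditioned generator; all names and sizes are toy choices.
import torch
import torch.nn as nn

VOCAB = {"round": 0, "head": 1, "with": 2, "stick": 3, "figure": 4, "body": 5}
TEXT_DIM = 16
text_proj = nn.Linear(len(VOCAB), TEXT_DIM, bias=False)  # toy text encoder

def embed(description: str) -> torch.Tensor:
    """Bag-of-words embedding; real systems use learned RNN encoders."""
    vec = torch.zeros(len(VOCAB))
    for word in description.lower().split():
        if word in VOCAB:
            vec[VOCAB[word]] = 1.0
    return text_proj(vec)

class Generator(nn.Module):
    def __init__(self, noise_dim=32, text_dim=TEXT_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + text_dim, 256), nn.ReLU(),
            nn.Linear(256, 64 * 64), nn.Tanh(),  # one 64x64 grayscale panel
        )

    def forward(self, noise, text_emb):
        return self.net(torch.cat([noise, text_emb], dim=-1)).view(-1, 64, 64)

gen = Generator()  # untrained: real use needs adversarial training on panels
panel = gen(torch.randn(1, 32),
            embed("round head with stick figure body").unsqueeze(0))
print(panel.shape)  # torch.Size([1, 64, 64])
```

Trained adversarially on a pile of captioned panels, a network shaped like this is what would let you type "round head with stick figure body" and get a drawing back.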

 

Right now, a neural network that can actually make narrative sense is a damn-near impossible thing to create. So if you want to achieve causality and progression in such a story, you'll still need a human to make sense of it. Thus, this comic will likely be organized by a human even if the images are entirely AI-generated. 

 

 

 

What happens by 2019 and 2020, then? Singularity? No, no, let's be real. It's not going to fundamentally improve by that much by 2020; AI will still lack narrative understanding. We're only just now getting AI that can understand sentences and paragraphs, so a whole narrative sequence is still way too much. But I can see people generating short cartoons. And when I say that, I don't mean video synthesis (that'll definitely happen too) but instead creating animations entirely through image synthesis. It'll be painstaking work, but it's still far less work than actual animation. There'll be this new breed of animator who can't draw for shit but can write detailed descriptions into a GAN textbox over and over again, slightly changing the pose and posture in each image. The more minute stuff will require manual editing, but the larger strokes can be generated. Again, this will require a lot of manual effort, so even by the early 2020s, you won't be able to fuck about in your bedroom creating Pixar-quality movies just by dribbling misspelled words into a text box. It would be shocking if you could even put together something longer than 10 minutes.
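That "type a description over and over, nudging the pose each time" workflow would look something like this in practice. The `generate` function here is a stand-in for whatever trained text-to-image model you'd have (nothing like it is publicly usable today), so treat this as a sketch of the workflow, not working tooling.

```python
# Sketch of the describe-every-frame animation workflow; `generate` is a
# hypothetical stand-in for a trained text-to-image model.
import torch

def generate(prompt: str) -> torch.Tensor:
    """Placeholder: a real model would return an image for the prompt."""
    return torch.rand(64, 64)

frames = []
for angle in range(0, 95, 5):  # nudge the pose description frame by frame
    frames.append(generate(f"stick figure waving, arm raised {angle} degrees"))

# The human still does the minute work: retiming, touch-ups, continuity.
print(f"{len(frames)} frames ready for manual cleanup")
```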

 

So in essence, it's human creativity augmented with a new tool that greatly democratizes the medium you're working in, a tool that possesses a very, very fleeting amount of creativity in and of itself.

 

As for voices, you'll be able to generate near-perfect-sounding voices within a couple of years, most likely by next year. No more monotonic Microsoft Sam or the classic Stephen Hawking voice. There'll be natural-sounding voices with natural intonations, inflections, and timbres.

 

You just likely won't be able to use it yourself. Oh sure, you could play around with it, but it'll be on GitHub if anywhere at all. A wide commercial release likely won't come for years. Siri and Cortana still sound pretty robotic. Oh sure, there are a few more idiosyncrasies to their speaking patterns, but you still know when you're listening to a real person and when you're listening to Siri. What's well past the horizon and fast approaching us is a voice synthesizer that sounds so natural that, listening to it versus a real person, you wouldn't be able to tell the difference unless you were very highly trained and the program talked for more than a minute nonstop. Right now, we still need real vocal talent to provide all these sounds that neural networks divvy up and reorganize into words, but genuine voice synthesis (creating a human voice from nothing but altered sound waves, and doing it so well that it sounds indistinguishable from a real person) is likely not far behind. There isn't a world of difference between generating an artificial human voice and generating an artificial instrumental tone. Being able to get a TTS voice to say the same word in different tones will be a game-changer in and of itself, as would getting it to understand emotional cues.
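For reference, the WaveNet paper linked earlier in the thread generates audio one sample at a time with stacks of dilated causal convolutions, so each new sample is conditioned on a long window of past samples. Here's a heavily simplified sketch of that core idea; the channel counts and depth are my guesses, and I've dropped the paper's gated activations and skip connections.

```python
# Simplified WaveNet-style stack of dilated causal 1-D convolutions.
import torch
import torch.nn as nn

class CausalConv1d(nn.Conv1d):
    """Left-pad so the convolution never peeks at future samples."""
    def forward(self, x):
        pad = (self.kernel_size[0] - 1) * self.dilation[0]
        return super().forward(nn.functional.pad(x, (pad, 0)))

class TinyWaveNet(nn.Module):
    def __init__(self, channels=32, layers=8):
        super().__init__()
        self.inp = nn.Conv1d(1, channels, 1)
        self.stack = nn.ModuleList(
            CausalConv1d(channels, channels, kernel_size=2, dilation=2 ** i)
            for i in range(layers)  # dilations 1, 2, 4, ..., 128
        )
        self.out = nn.Conv1d(channels, 256, 1)  # 256-way mu-law sample classes

    def forward(self, audio):
        x = self.inp(audio)
        for conv in self.stack:
            # Simplified residual block; the paper uses gated activations
            # (tanh * sigmoid) plus skip connections.
            x = x + torch.tanh(conv(x))
        return self.out(x)

net = TinyWaveNet()
logits = net(torch.randn(1, 1, 1600))  # 0.1 s of 16 kHz audio
print(logits.shape)  # (1, 256, 1600): a distribution over each next sample
```

The reason this matters for the voice-versus-instrument point above: the same stack models raw waveforms either way, and it doesn't care whether the training corpus was speech or piano.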

 

But again, in the early 2020s, such technology likely won't be in the hands of the common indie creator. Even TTS programs today cost a fair amount of money, and none of them sound natural. 

 

And even if they did sound natural, there's something else to consider. Have you ever had a TTS program, in the middle of reading to you, suddenly jerk to a new line and keep speaking as if there were no break? Or maybe it didn't understand that you don't say "dot dot dot" when there's an ellipsis? That's still something that will pose a problem without manual editing. I don't see AI overcoming it within three years.
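That ellipsis problem is really a text-normalization problem: the TTS front end has to rewrite symbols before the voice model ever sees them. A tiny example, with rules that are my own toys rather than any real product's pipeline:

```python
# Toy text-normalization rules of the kind a TTS front end needs; these
# particular rules are my own examples, not any real product's pipeline.
import re

def normalize(text: str) -> str:
    text = re.sub(r"\.\.\.|\u2026", ", ", text)  # read an ellipsis as a pause
    text = re.sub(r"\bDr\.", "Doctor", text)     # expand a common abbreviation
    text = re.sub(r"\s+", " ", text).strip()     # collapse stray whitespace
    return text

print(normalize("Well... I suppose Dr. Smith knows."))
# -> Well, I suppose Doctor Smith knows.
```

Every one of those rules is a judgment call, which is why the weird pauses and "dot dot dot" readings survive: nobody's rule list is ever complete.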

 

Translate these advances from voice into music and you have the same benefits and issues. 

 

 

 

 

The general point being made here is that we are so close to a new age in entertainment and media in general that we're already teasing our fingers across its nose. Like, we're stupid close. And we're technically already within the space of it as we've seen with DeepDream and DeepArt and image colorizers (/r/ColorizedHistory being one of the finest subreddits out there).

But if you want to know the day when you could go to Amazon and buy a disc that holds a "cartoon generator" where you could basically recreate the entirety of, say, Avatar: The Last Airbender without losing a single aspect of the show's design, then I'm still forced to say "definitely not the 2020s. Possibly not even the 2030s (but I won't be so bold as to say it's impossible before then)."

 

In the 2020s, a person like me (stupendously bad drawing skills and a seeming inability to grasp depth) will be able to use AI to generate very high quality art and even some animation. I could use it to master Photoshop, getting the network to generate just about any image I want and make it look real, rather than mostly real with obvious tampering and fake elements that are half-assedly covered up. I could use it to perfectly copy my mother's signature if I ever needed it for homework (just to use a crazy example). I could use it to add new voiceovers to existing properties without hiring actors. But the more complex stuff (creating complex animation, high-end video games, long videos, high-quality novels and novellas) is beyond me if I'm not willing to put in the effort.

 

Because the AI I can use will still allow me to create these things, but unlike the lower-hanging fruit, it also requires genuine effort on my part.

 

For example: I can use AI to create character and item designs in video games, perhaps even drawing up concept art and backgrounds and perhaps even usable assets. It could create and animate pixel art. It could even generate the music. But the process of actually creating said video game— coding it, putting all the pieces together, giving it narrative— is all on me. I can see there being AI that can partially code some aspects of a video game, perhaps even streamlining the process. And towards the end of the decade, there may even be a sort of "autocomplete" for coding. And maybe even English-to-Code translation. As long as I write what needs to happen and what needs to work, the AI could translate that into game code. Yet it's still on me to create the thing itself. I tried learning to code twice about half a decade ago with the intention of trying my hand at game design, and I failed both times because I just couldn't get into it, even though I did understand it after a while. By the end of next decade, I could probably bring those old ideas to life in some limited form.

 

 

 

I could go on and on about this subject, so I will.

 

I'm not a comp-sci major (as just mentioned). But if I had to make an uneducated guess, I'd say that by 2027 we'll start seeing major disruption in the entertainment industry. Mostly in the fields of comics/manga, modeling, graphic design, and content/business writing: the easiest stuff to automate, since it involves static images or mostly fact-based writing. And the keyword there is "start", because it's not like every mangaka in Japan is going to be on the dole, or every cover model eating cheap ramen at a homeless shelter while glaring at completely computer-generated physical gods and goddesses on the covers of magazines, come January 1st, 2027. It'll still take a lot of time, and plenty of these types will still get by purely on human stubbornness, tradition, and an increasing demand for the authentic.

 

Voice synthesis will likely be ironing out the few tiny imperfections that still exist, and the only real drawback remaining will be emotional variability. It's actually very hard to express emotion through writing (which is why so many stories lean on adverbs and overly flowery or simplistic emotional states), so getting a computer to understand what emotion to express, when to express it, and how to express it will be extremely difficult. Human emotion coaches might be needed for quite a while, since a subjective desire to get things right will likely mean multiple run-throughs rather than a simple post-and-done situation. But I doubt that's a career you should be looking into: even setting aside that AI will eventually figure out the emotional value of a scene (probably in the post-AGI days), it's likely going to devolve into one of two situations:

1: the mainstream works, where emotions are basically given pre-sets with tiny variations to satisfy the largest possible audience.

2: the auteur, where the creators want everything to fit their vision perfectly, even if the hoi polloi might see it as goofy, unrealistic, inhuman, or scenery-chewing.

The in-between— where finding the right emotion is something that can be figured out by salaried or commissioned experts— probably won't be too common.

 

 

That being said, the big studios will likely have sounded the alarm on this use of AI to enhance and alter entertainment, but not in the way some might think. They're out for money, so anything that reduces cost while increasing profits is welcome; in other words, that "alarm" is more like a celebratory airhorn, because now movie and gaming studios can spend as little as possible creating a product and leave almost all of the budget for advertising.

 

 

Comic artists might still be going strong (people have a natural affinity for what's canon, after all), but their years or decades of hard work refining their craft will have been devalued once computers can perfectly match their art styles. I can see some artists embracing this and the inevitable explosion of fanon, but I can also see just as many artists, if not more, threatening legal action against those who use their style via neural networks, or maybe even against the creators of those neural networks. People will try copyrighting styles rather than just IP (perhaps they'll make style part of their IP). Which isn't going to fare well against, say, blockchain-based neural networks that simply can't be stopped, or against people in other countries who don't care about trademark violations.

 

These days, fan works tend to be of variable quality because of both artist skills and writing skills— you could be a fantastic artist who completely nails the style of Jack Kirby or Akira Toriyama, but if your writing skills are no better than the average 14-year-old fanfiction writer who just discovered nu metal and swearing or that blood exists, people won't be coming back to you. Likewise, you could have really amazing writing skills, perhaps on par with David Foster Wallace or Vladimir Nabokov, but if your drawings look like mine, people won't subject themselves to your visual torture. 

 

Heaven help ye if you venture into the magical and masochistic world of fangames. I've seen some shit. Like Vietnam-tier shit. So I know that this sort of AI can help people out. But in a world where everyone who wants to can create their own media franchises, you can understand that it could get overwhelming after a while. So overwhelming that it could spur many to just not bother at all. I've always wanted to create comics, but I think it's been hammered into your head by now that I can't draw. More than that, I've always wanted to be behind a TV show, a video game, or even a movie. Again, that's just not happening. Now, I could write some fantastic stories that eventually get adapted into such things, but that's not what I'm talking about. So for me, waiting for the world to change is the only way. And it'll start with the easiest of the lot, which is comics.

 

This, I feel, will be a reality by 2029. Much sooner than many are comfortable accepting.

 

I'm focusing so much on comic artists because it's the first thing that came to mind, and because I wanted to focus on pure entertainment in regards to what will be possible in the very near future with media synthesis technology. We're not likely to generate whole shows, movies, and triple-A games with the tech anytime soon. I'm well aware of the potential to use media synthesis to craft false realities, fake news, and fraudulent videos; that's gonna be a post for another day.





#29
Unity
Actually, there is debate in math forums about this as well, as machine-assisted mathematical discoveries are beginning to be churned out. Some traditional mathematicians don't like it and refuse to use the technology, while others embrace it readily.

https://www.quantama...trust-20130222/

#30
Unity
I should also note that a lot of this is easier said than done, at least in the case of mathematics, and it will likely take several generations of programmers working on the different aspects of problems like these and fusing them together into a new paradigm that can be adopted by their respective communities at large.

https://www.quantama...atics-20150519/

#31
Yuli Ban

To borrow a post from the SomethingAwful forums (this is from 2016):
Semantic Style Transfer and Turning Two-Bit Doodles into Fine Artwork

Just doodle some really poor shit for a piss amount of seconds and the algorithm will do the rest. And like I said, this was from March 2016; it's going on two years later now. Just imagine how amazing GANs will be in 2018 when this is what we could do two years ago.
 
And here's what we've all been not waiting for: 
 
DeepDream + Model trained on Manga Comics
When this is real-time and capable of being put into glasses or contact lenses, or implanted directly into your retinas, you can live in your favorite animu forever.




#32
Jakob

I've downloaded TensorFlow and plan to learn about neural nets over the break. I'll let you people know how it goes. Perhaps I'll be able to make a simple OCR program by the end of the break.



#33
BasilBerylium

Imagine this being applied to horror games.


This website has a magic that makes people draw back here like moths to light.


#34
BasilBerylium


Poor penguins




#35
Yuli Ban

How generative AI is changing art and design

You've heard a lot about AI in art and design in 2017, with Adobe trumpeting its Sensei machine-learning technology for everything from image recognition in Lightroom to tech previews of turning photos into 'hand-drawn' sketches and animated graph creation at this year's Adobe Max conference.
But - from looking at what researchers at companies like Google and universities around the world are working on - this is just the beginning. Here James Kobielus, SiliconAngle Wikibon's lead analyst for AI, data science, and application development, takes a look at the latest imaging research that could affect how - and what - you create in the future.
A lot of the examples feature imagery that isn't what you'd call artistic - unsurprising, as they're largely created by people from a science rather than an arts background. They also often have a fractal distortion reminiscent of an acid trip (but that's just inherent in AI, it turns out, rather than the researchers behind them necessarily being Timothy Leary types). And their research is often presented in the ultra-detailed format of scientific papers - heavy on words and, strangely to us, light on visual examples. But persevere - perhaps as reading over the Christmas break - and you'll find both insight and inspiration from the next generation of technology for art and design.




#36
Yuli Ban

And now witness this.
 
Remember that time about a year and a half ago when a man fed Blade Runner into a neural network?
 
Turns out that the neural net was able to remember Blade Runner and managed to transpose its aesthetic onto other films.

The Neural Net That Recreated ‘Blade Runner’ Has the Movie Stuck in Its Memory

The AI that made ‘Blade Runner: Auto-encoded’ transposed the aesthetic of the movie onto other sci-fi classics

But what happens when the Blade Runner auto-encoder watches other films? Broad tried it out. When shown another Philip K. Dick adaptation, A Scanner Darkly, and a Soviet classic, Man with a Movie Camera, it could still recognize the composition of the frames, but it essentially transposed the aesthetic of Blade Runner: Auto-encoded. They were dimly lit, plagued by visual noise, and dreamy. The auto-encoded versions clearly came from the same memory.
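Mechanically, this is less mysterious than it sounds. Broad's actual model was a variational autoencoder trained with a learned similarity metric, but even a plain convolutional autoencoder shows the squeeze-and-reconstruct idea; this sketch (sizes are arbitrary) is only meant to show where the "memory" lives.

```python
# Plain convolutional autoencoder sketch; Broad's real model was a
# learned-similarity VAE, so treat this as the simplest possible cousin.
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(           # 3x64x64 frame -> small code
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(           # code -> reconstructed frame
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Train on every frame of one film; feeding it frames from *another* film
# then reconstructs them through the first film's learned bottleneck.
model = FrameAutoencoder()
frames = torch.rand(8, 3, 64, 64)              # stand-in batch of video frames
loss = nn.functional.mse_loss(model(frames), frames)
loss.backward()
```

Every frame it ever outputs has to pass through that bottleneck, which is why frames from A Scanner Darkly came out wearing Blade Runner's dim, noisy look.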




#37
Yuli Ban

AI Colors/Recolors Images
And of course the examples are anime girls.

 





#38
bgates276

Baby steps ... actually, the process is probably fairly complicated, but at least it's something. I'd like to see them generate unique characters from scratch, purely out of AI.



#39
Jakob

I've downloaded TensorFlow and plan to learn about neural nets over the break. I'll let you people know how it goes. Perhaps I'll be able to make a simple OCR program by the end of the break.

Update: TensorFlow is a complicated mess. I am going to make my own neural net using neuroevolution (genetic algorithms as a learning method). It's combining two startlingly simple, beautiful concepts. We'll see what we get out of it--I'll try to find some data sets from Kaggle instead of making my own.
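For anyone curious what "genetic algorithms as a learning method" looks like in practice, here's a bare-minimum version of the idea: keep a population of weight vectors for a fixed tiny network, score them, keep the best, and mutate. This is my own toy on XOR, not Jakob's code or any particular library.

```python
# Minimal neuroevolution: evolve the weights of a fixed 2-4-1 net on XOR.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

N_PARAMS = 2 * 4 + 4 + 4 * 1 + 1   # weights + biases of a 2-4-1 net = 17

def forward(params, x):
    w1 = params[:8].reshape(2, 4); b1 = params[8:12]
    w2 = params[12:16].reshape(4, 1); b2 = params[16]
    h = np.tanh(x @ w1 + b1)
    return 1 / (1 + np.exp(-(h @ w2 + b2)))       # sigmoid output

def fitness(params):
    return -np.mean((forward(params, X).ravel() - y) ** 2)  # higher is better

rng = np.random.default_rng(0)
pop = rng.normal(0, 1, size=(100, N_PARAMS))       # random initial population

for gen in range(300):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[-20:]]          # keep the top 20
    # Offspring: pick random elite parents and add Gaussian mutations.
    parents = elite[rng.integers(0, 20, size=80)]
    children = parents + rng.normal(0, 0.1, size=parents.shape)
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(p) for p in pop])]
print(np.round(forward(best, X).ravel(), 2))       # should approach [0 1 1 0]
```

No backprop anywhere, which is the appeal: you only ever run the network forward, so it's a gentle first project.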



#40
bgates276

 


 

 

Doesn't the new $3,000 Nvidia Titan V graphics card have tensor cores of some sort? What could programmers possibly do with them if, say, they made a video game application that utilized them?

 

Obviously, no one is going to make a commercial video game designed specifically for that kind of card, but maybe in the future we could see more mainstream graphics cards (say, an Nvidia 1200 or 1300 series) with some kind of built-in capabilities for AI enhancements. Eventually, it may even be an essential feature of console gaming, i.e. a PlayStation 5 or 6.






