^ While true in the slightly longer term, I'm thinking more of what might be possible through film media in the next 10 years. Sort of like if you went onto YouTube, searched "Die Hard Forever," and saw a video with no actual run time, perhaps alongside compilation videos of the "best bits" or "the past 24 hours of John McClane's life" uploaded every day for the past 5 months. You might click on the original video and see the movie playing out, then leave it on in the background while you go to work, and 8 hours later when you come back, it's still playing. Not a new movie. Not a rerun. Not rewound. It's the same storyline, still going on and still being generated. Depending on how well it understands things, it'll know not to bring back Hans Gruber, since he's been dead for decades (in more ways than one) (unless it generates some cybernetic resurrection plotline, which is entirely possible in a never-ending movie).
I say "ten years," but it might be even sooner than that for ideas that don't require such theatricality. Imagine something like an "indie movie" generator, where the plot is purely saccharine slice-of-life and you're just following two Millennial-looking lovebirds from San Francisco around the world, endlessly. Two people who don't exist except in that "movie" (would it even be a movie by that point?). Less need for cinematic shots or creative angles means it's easier for a neural network to pull off convincingly. There'd be hiccups in many places. Maybe a scene doesn't generate well. Maybe the script goes off the rails at certain points and there's 20 minutes of characters just repeating the same string of words over and over again. Maybe the text-to-speech program doesn't enunciate things properly.
Point is, if we manage to get human image synthesis down this year (and it's looking like we will), this might be feasible in closer to 3 to 5 years. By 2025, it ought to be possible to go online and view a never-ending movie at any time. Would make for a proper sequel to The NeverEnding Story, now that I think about it.
And if it's possible for live action movies, it might also be possible for animated ones, at least to an extent.
See, animation is actually trickier than live action. We have trillions, perhaps quadrillions of data references for live action media: that is, photographs and every frame of video of people, so it's easy for a neural network to figure out what a realistic human looks like, how we behave, and how we react to our environment via physics. You probably aren't going to see a person run off a cliff, look down, and then fall unless it's a live action piece parodying cartoons.
With animation, that model also has to understand exaggeration and an entirely different set of physics, with far fewer references to learn from. Animation often has a lot of stylization and creativity behind it. It'll still work, undoubtedly, because it already does, but there's a higher chance of the network needing to model something novel. So a never-ending episode of an otherwise 22-minute cartoon will have a lot to account for. An AI generating a dreamlike never-ending Looney Tunes "short" would have to handle slapstick that already borders on the dreamlike. A never-ending Family Guy episode would have to know to generate cutaway gags that only tangentially relate to what's going on. And so on. I can see experiments very soon in that regard with indie toons created by individuals, but it'll take a while for AI to understand all that well.