Welcome to FutureTimeline.forum

Machine Learning/AI for Physics Prediction

machine learning physics AI neuroscience

5 replies to this topic

#1
Alislaws

    Democratic Socialist Materialist

  • Members
  • 2,050 posts
  • Location: London

I was thinking about videogames and how machine learning advances could be applied to them, and it got me thinking:

 

We can run detailed physics simulations of collisions pretty fast, but the more detail we add, the slower your game or sim will run.

  • Could you set up a machine that solves physics problems posed by your videogame (e.g. "what happens if this bullet, moving at this speed, hits this wall at this angle?") through simulations that are too detailed to run in real time?
  • Then you set up a machine learning system which takes the same inputs and basically guesses the answer.
  • Then you set up a discriminator to guess which solutions were simulated in detail and which were guesses.

Then you leave them all running for ages: people playing your videogame keep producing novel collision problems for the system to work on, your detailed simulator produces correct answers to use as training data, your guessing system gets very good at guessing the right answer, and your discriminator gets better at spotting unrealistic collision physics.
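A minimal sketch of the first two steps, where everything is a made-up stand-in: the one-line `slow_simulator` plays the role of the detailed solver, and a least-squares fit plays the role of the neural-network "guesser". Generate correct answers offline with the expensive simulator, then fit a cheap model that answers the same question near-instantly.

```python
import numpy as np

# Stand-in for the detailed, too-slow-for-real-time simulator:
# rebound speed of a bullet hitting a wall (toy formula, not real physics).
def slow_simulator(angle, speed):
    return 0.8 * speed * np.cos(angle)

# Offline: let the detailed simulator label lots of random scenarios.
rng = np.random.default_rng(0)
angles = rng.uniform(0.0, np.pi / 2, 1000)
speeds = rng.uniform(100.0, 900.0, 1000)
targets = slow_simulator(angles, speeds)

# "Train" a cheap guesser: least-squares fit over simple basis features
# (a stand-in for training a neural network on the same data).
X = np.column_stack([speeds * np.cos(angles), speeds, np.ones_like(speeds)])
coef, *_ = np.linalg.lstsq(X, targets, rcond=None)

# Online: the guess is a single dot product -- far cheaper than the sim.
def fast_guess(angle, speed):
    return np.array([speed * np.cos(angle), speed, 1.0]) @ coef

print(abs(fast_guess(0.3, 500.0) - slow_simulator(0.3, 500.0)) < 1e-6)
```

The discriminator step would be a classifier trained to tell simulator outputs from guesser outputs, exactly as in a GAN; it is omitted here to keep the sketch short.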

 

Eventually you should get a machine learning system that is as good as a human at guessing what is about to happen in a physical collision. And assuming it is at least that good, it doesn't actually need to be very accurate, because human players would see the outcome and accept it as realistic.

 

The same technique could be used for physics other than collisions: chemical interactions (once computers are fast enough to simulate the training data), fluid dynamics for realistic water, or an entire system dedicated to guessing what different explosions will look like.

 

So, theoretically, would this work to produce a good game physics engine? Or would your machine learning system effectively end up as complicated as the simulation, since we actually do know how these things work and so a direct simulation is already faster?

 

Also, if it did work, would anything stop the same physics engine being used in the real world, maybe as part of a robot's AI?

 

Potentially you could have many systems working together, until you have a virtual world entirely guessed by machine learning systems. (which could then be improved over time, for an endlessly improving game engine!)

 

So besides the super cool reality simulation you could end up with, the other fun thing would be to see if your ML system could figure out things like Newton's laws, or whether it would eventually discover relativity if the calculations got detailed enough and your game had enough high-speed collisions.

 

Thoughts?


  • Casey, caltrek, Whereas and 1 other like this

#2
Whereas

    Member

  • Members
  • 482 posts

One can teach a deep learning system to simulate a game internally using fewer resources than running the game, but if you're going for producing comparable graphics quality from scratch, I'm guessing it'd require *more* resources. If you use AI to supplement physics-simulation-based techniques, though, there's plenty of potential.

For instance there have been some great results with merging ray-tracing graphics generation with low-grade AI: link

^ This is from the Two Minute Papers channel - I'm guessing most of the videos on there will be of interest to you, though I'll highlight a few more featuring successful research relevant to this topic.

  • Using human volunteers to optimize the parameters of fluid simulations for the best ratio of computational power / perceived accuracy: link
  • Using neural networks to fill in the computationally expensive bits of simulations: link
  • Deep learning AI produces "imagined" scenarios based on its learned internal model of a game, can use them to get better at playing the game: link

  • Alislaws likes this

If you're wrong, how would you know it?


#3
Alislaws

    Democratic Socialist Materialist

  • Members
  • 2,050 posts
  • Location: London

Thanks very much, I will check out those links later.

 

 

Found this article, which describes the same sort of approach I outlined, including a use case where they applied it to oil wells. So it seems to be a viable tactic, at least for problems with a limited number of variables/inputs.

 

How do you teach physics to machine learning models?

https://towardsdatas...ng-3a3545d58ab9

 

Virtual flow metering through a hybrid modelling scheme

We have, for instance, considered this approach for the specific task of virtual flow metering in an oil well...

 

The advantage of this approach is that we can perform all the computationally demanding parts off-line, where making fast real-time predictions is not an issue. By generating large amounts of training data from the physics-based model, we can teach the ML model the physics of the problem.

 

A trained ML model can use just the sensor measurements from the physical well, i.e., pressures and temperatures, to predict the oil, gas, and water rates simultaneously. More importantly, it can make these predictions within a fraction of a second, making it an ideal application for running on real-time data from the production wells.
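The offline/online split the article describes can be sketched like this (the toy `physics_model` and all numbers here are invented for illustration, not taken from the article): evaluate the expensive physics-based model offline to build a dataset, then serve predictions with a model whose run-time cost is a single small matrix multiply.

```python
import numpy as np

# Toy stand-in for the expensive physics-based well model: maps sensor
# readings (pressure, temperature) to (oil, gas, water) flow rates.
def physics_model(pressure, temperature):
    return np.array([0.5 * pressure - 0.1 * temperature,
                     1.2 * pressure,
                     0.3 * temperature])

# Offline (slow is fine here): sweep the model to build training data.
rng = np.random.default_rng(1)
sensors = rng.uniform([50.0, 20.0], [200.0, 90.0], size=(500, 2))
rates = np.array([physics_model(p, t) for p, t in sensors])

# Fit a linear surrogate by least squares (stand-in for an ML model).
X = np.column_stack([sensors, np.ones(len(sensors))])
W, *_ = np.linalg.lstsq(X, rates, rcond=None)

# Online (real time): one small matrix multiply per prediction gives
# all three rates simultaneously from the sensor readings.
def predict_rates(pressure, temperature):
    return np.array([pressure, temperature, 1.0]) @ W

oil, gas, water = predict_rates(120.0, 60.0)
```

A real virtual flow meter would use a nonlinear model and noisy data, but the division of labor is the same: all the expensive physics runs offline, and only the cheap trained model runs on live data.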



#4
starspawn0

    Member

  • Members
  • 1,319 posts

Of course it will work.  Your brain doesn't simulate a whole scene, and then decide if what it saw was realistic -- it uses shortcuts, and you only have to make sure your physics engine can use the same shortcuts to fool it.  Though, it's easier to find fault with a production than to generate one (the old adage, "It's easier to criticize than to build.")

 

Also, note that humans don't even notice fairly large changes to a scene when the eye saccades:

 

https://youtu.be/ySbhJEF9fXU

 

Even if a simulation of water, for example, is way-off-wrong, it still might fool a human.  It just needs to get certain cues right -- and ML models could figure out what those are, given enough training data.

 

I would guess you could produce animations humans couldn't tell apart from reality using only a tiny fraction of the compute required to simulate the scene accurately, to a high degree of fidelity.

 

....

 

I suspect what the human brain does, to work out whether what is going on in a scene is realistic, is it relies heavily on associations and memory.  It learns a little encyclopedia of actions and reactions -- e.g. "If you see this type of object in this type of situation, this is what will happen."  And it's probably "hierarchical", and has different "encyclopedias" at different scales.


  • Alislaws likes this

#5
Alislaws

    Democratic Socialist Materialist

  • Members
  • 2,050 posts
  • Location: London

The reason I suspected it might not be feasible is that (for all I know) simulating a physics problem using the underlying mathematics we have figured out might be faster than feeding lots of variables into an ML model to produce the same outcome.

 

I am aware that training ML systems takes large amounts of computing power, but I'm not familiar with the requirements of running the trained system. Presumably it depends on the number of inputs, but I have no idea what sort of scale we're talking about.
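For a rough sense of scale (the layer sizes below are made up for illustration): running a trained feed-forward network is just a few matrix multiplies, so the cost per prediction is roughly the number of weights, regardless of how expensive training was.

```python
# Multiply-adds for one forward pass through a small MLP
# (layer sizes are invented for illustration).
layers = [10, 64, 64, 3]  # inputs -> two hidden layers -> outputs
macs = sum(a * b for a, b in zip(layers, layers[1:]))
print(macs)  # 10*64 + 64*64 + 64*3 = 4928
```

A few thousand multiply-adds per prediction is negligible next to a detailed collision solve, which is why the offline-training/online-inference split can pay off.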

 

This sort of tech is already being used in animation to amazing success. (Any day now you'll be able to open one of the big mainstream game engines, design a humanoid(-ish) skeleton, hit "animate", and get something better than most animators could have produced 10 years ago!)

 

I really hope someone creates an AI/ML enabled engine that just continually improves over the years as it gains more and more training data. 

 

One other thought: it seems to me that the basic function of intelligence (in terms of where it first starts to appear as species evolved more complexity) is to predict what happens next in your environment (and then react to it in some way that increases the chances of evolutionary success).

 

This basic loop of "figure out what is going to happen in the future, then move to compensate or take advantage" is pretty much what all forms of intelligence originally evolved for*. Maybe someday we'll be able to create sapience the old-fashioned way, by evolving our AI from the most basic level. We'd need some automated way to evaluate each attempt and select for improvement.

 

Perfecting the cheap simulation of realistic environments would let us test and train our evolving AI in real-world scenarios, and because the simulation would be fast and efficient we could presumably speed it up a lot, since we don't have billions of years to spare.

 

*Later, things like imagination and creativity come along, which are basically your brain trying to predict optimal futures and then figure out what actions can be taken to get there.



#6
tomasth

    Member

  • Members
  • 243 posts

Here's an Uber paper on that:

https://eng.uber.com...eural-networks/






