It would cause the unraveling of society as we know it. To name just one consequence: if humans do not have free will, the justice system is undermined, since every act would be preordained.
1. About zero -- even centuries hence.
2. I have no way of answering this. It would require assigning probabilities to things about which I have almost no information.
3. Mass panic, breakdown of society.
Another thing to consider: let's say we don't live in a simulation. That doesn't mean there isn't a "program" we live by without knowing it. It could be the case that human nature acts as an equilibrating force, keeping the history of the cosmos on a set course.
Consider AI language models as they currently exist. They work by taking in large amounts of human-generated data, and they can predict how to complete a story, for example, when given a few lines at the beginning. The more data you train them with, the more plausible the completion reads.
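For concreteness, here is a minimal sketch of that kind of story completion. It assumes the Hugging Face transformers library and the small, publicly available gpt2 checkpoint; both are illustrative choices, and a larger model trained on more data would give a more plausible continuation, which is the point above.

```python
# Minimal sketch: a pretrained language model completing a story opening.
# Assumes the Hugging Face `transformers` library and the public `gpt2`
# checkpoint (illustrative choices, not ones the argument depends on).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

opening = "The ship had been drifting for three days when the signal came."
result = generator(opening, max_new_tokens=60, do_sample=True, top_p=0.9)

print(result[0]["generated_text"])  # the opening plus the model's continuation
```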
Imagine, in the not-too-distant future, this is pushed further and further, and the AI is asked to write newspaper stories dated one year hence. Maybe it turns out to be dead-on accurate.
Now, obviously, these AI models couldn't predict the weather or other natural disasters. Those occur with some random frequency. But it might still be able to predict some fairly specific things -- like who wins an election, how the stock market behaves, where wars will occur and who will instigate them and how many die and by what means, and so on.
This is basically Isaac Asimov's psychohistory.
But what I'm suggesting is a little different. It might work not because machines develop a deep analysis of unfolding causal patterns, but because, at some level, human nature has a direction given by evolution.
So certain combinations of genes lead to latent features, hidden behind our use of language and our actions, that reveal a "direction": a force pushing things toward a singular outcome. A sufficiently deeply trained AI system might pick up on those patterns. If a hurricane, say, looks likely to throw one of its predictions out of whack, humans might act as a counterbalancing force and push events back toward what the prediction says. One antecedent of this is the Gaia Hypothesis, except here humans play the role of Gaia, restoring the balance. Another is the planet Solaris from Lem's novel (and the films based on it), which has the capacity to correct its own orbit, keeping it a perfect circle.
It would be truly shocking if, when asked to predict the future, GPT-5 wrote page after page of "future history" that turned out to be what actually happens, because we humans will make it happen (so long as we don't get "contaminated" by GPT-5's predictions... unless GPT-5 becomes self-aware and takes its own predictions into account).
And now, what if those pages mention global thermonuclear annihilation?...