In the short term, the main thing to worry about with AI is what state actors and governments might do with it. Take "10 year march" for example: imagine Australia decides to start using AI to track citizens it believes are "up to no good," and for some reason decides "10 year march" is one of them (law enforcement has fantasies about putting him in jail) -- where the reasons it believes this are all hidden and kept confidential. So an AI combs through his online activity, reading and understanding everything he writes, and compiles a dossier to be used as part of an investigation. Part of what protects individuals from this level of surveillance today is that it's simply too costly, unless investigators are already fairly sure the target is guilty and high-value. But if you lower the cost of an investigation 1000x, you can run investigations on orders of magnitude more citizens. And it need not even involve illegal search and seizure -- it can all be done just by monitoring their public online activity, which everybody has access to.
Also imagine political persuasion bots, or nationalist troll-bots that scour the internet for anything even vaguely critical of, say, Russia, and then show up to challenge it -- belittling and attacking the writer into submission, teaching them a lesson not to mess with Russia!
In the longer run, there will probably be major "accidents" when sufficiently smart AIs are unleashed onto the web. Maybe, for example, one of those bots that defends Russia's honor gets the idea to shut down the U.S. power grid, to teach it a lesson. It has a basic imperative to punish the enemies of Russia, and after reading enough online comments, it generalizes to the claim that the entire United States is Russia's enemy -- and generalizes further that shutting down the U.S. power grid is an appropriate response to such a powerful adversary.
The people who release such a bot onto the web may not intend this to happen -- they may even try to guard against it. But given the way many of these AI systems are trained and built, it's hard to prove you've covered all the bases, so that nothing like it will.