They don't start out superintelligent. They improve themselves to get there (according to everything I've read, if it happens at all, that's how it happens), so you start with something that is much more comprehensible.
Increasing your intelligence doesn't change your fundamental motivations, just the sophistication with which you go about achieving them.
We still want basically what monkeys want: food, sex, survival, avoiding pain and misery, social interaction, respect, etc. If we suddenly got much smarter, we'd still be motivated by these things. You've said before that anyone who thinks human nature is going to change significantly in the future is probably wrong.
In the past, people solved the food problem by hunting and gathering; today we solve it by going to the supermarket (backed by a huge global supply chain); in the future, maybe we'll have food printers. The motivation never changes, only the method does.
An AI would have none of these motivations; we only have them because of our biology.
So suppose we could build a superintelligence: we put together all the immensely complex machinery needed for its brain, we turn it on, and it just sits there doing nothing. It doesn't get bored or start trying to figure something out; it just sits there until we give it something to do. It wouldn't care if you turned it off. After all, why is existing better than not existing?
We think it is, because any humans who thought not existing was the way forward didn't breed, so evolution gave us a survival instinct. But nothing gives an AI a survival instinct except the people who create it.
The true danger of AI is if we mess up its motivations. If we build an AI whose whole purpose is to produce paperclips, then a superintelligent AI destroys the world and turns it into paperclips. It's not the intelligence that makes the AI destroy us; it's the fact that we gave it an extremely stupid, simple, and absolute motivation. In the same scenario, what if we had instead programmed our AI to make exactly one trillion paperclips? Worst case, it turns a city into paperclip factories, but the species is safe.
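To make the difference concrete, here's a deliberately cartoonish sketch in Python. Everything in it (the two "agents", the `resources` counter, the `target` number) is made up for illustration; a real AI is obviously nothing like a while loop. The only thing it shows is that an unbounded objective has no state in which it counts as "done", while a bounded one does:

```python
def unbounded_agent(resources):
    """Objective: maximize paperclips. No possible state satisfies the
    objective, so the agent keeps converting resources until nothing is
    left. (Toy stand-in for the 'turn the world into paperclips' case.)"""
    paperclips = 0
    while resources > 0:  # never satisfied while anything remains
        resources -= 1
        paperclips += 1
    return paperclips


def bounded_agent(resources, target=1_000_000_000_000):
    """Objective: make exactly `target` paperclips. Once the target is
    reached, further action has no value, so the agent stops on its own."""
    paperclips = 0
    while resources > 0 and paperclips < target:
        resources -= 1
        paperclips += 1
    return paperclips
```

Both loops are equally "smart"; the only difference is the stopping condition the objective implies, which is the whole point of the one-trillion framing.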
So we control the AI by making it not want to disobey, rather than by making it impossible for it to disobey. If it gets smart enough, it can beat whatever rules or restrictions we put on it; you're right about that! But it has to want to. If it doesn't want to, it won't; it might even suggest better rules and restrictions, because it's so incredibly happy at the idea of serving humanity. That's why the problem is solvable: we don't need to beat a superintelligent AI, we just need to build one that enjoys serving us.