Jeff's goal is to "understand intelligence" and then use that understanding to build intelligent machines. He is confident that this is possible, and that such machines can be dramatically smarter than humans (e.g. thinking faster, holding more in memory, being better at physics and math, etc.). Jeff thinks the hard part is done: he has the right framework for understanding cortical algorithms, even if some details remain to be filled in. Accordingly, Jeff believes that if he succeeds at proselytizing his understanding of brain algorithms to the AI community (which is why he was doing the podcast), then we should be able to build machines with human-like intelligence in less than 20 years.
Near the end of the podcast, Jeff emphatically denounced the idea of AI existential risk, and more generally the idea that there is any reason to second-guess his mission of creating beyond-human-level intelligence as soon as possible. However, he appears to be profoundly misinformed about both what the arguments for existential risk are and who is making them. Ditto for Lex, the podcast host.
https://www.lesswron...within-20-years
The get-out-of-jail-free card is Jeff Hawkins' caveat: "if he succeeds at proselytizing his understanding of brain algorithms to the AI community..."
That means that if 2039 passes without an AGI being built, he can say that his theory of intelligence and his roadmap for inventing AGI were still right, and that the prediction failed only because other people did not give him the money and manpower the project needed.