Jeff Hawkins on neuromorphic AGI within 20 years


3 replies to this topic

#1 funkervogt (Member, 838 posts)

Jeff's goal is to "understand intelligence" and then use it to build intelligent machines. He is confident that this is possible, and that the machines can be dramatically smarter than humans (e.g. thinking faster, more memory, better at physics and math, etc.). Jeff thinks the hard part is done—he has the right framework for understanding cortical algorithms, even if there are still some details to be filled in. Thus, Jeff believes that, if he succeeds at proselytizing his understanding of brain algorithms to the AI community (which is why he was doing that podcast), then we should be able to make machines with human-like intelligence in less than 20 years.

 
Near the end of the podcast, Jeff emphatically denounced the idea of AI existential risk, or more generally that there was any reason to second-guess his mission of getting beyond-human-level intelligence as soon as possible. However, he appears to be profoundly misinformed about both what the arguments are for existential risk and who is making them. Ditto for Lex, the podcast host.

https://www.lesswron...within-20-years

 

The get-out-of-jail-free card is Jeff Hawkins' caveat: "if he succeeds at proselytizing his understanding of brain algorithms to the AI community..."

 

That means that if 2039 passes without an AGI being built, he can say his theory of intelligence and his roadmap for inventing AGI are still right, and that the prediction only failed because other people didn't give him the necessary money and manpower for the project.



#2 starspawn0 (Member, 1,319 posts)
I don't think Hawkins is going to go anywhere with this.

I remember, about 12 to 15 years ago (maybe I'm mistaken about the dates), trying to get an undergraduate student to code up some of the ideas from a little paper Hawkins wrote with Dileep George. The student read Hawkins's book On Intelligence and really liked it. I think my student actually had some email contact with Hawkins, but I don't remember the details. Anyhow, as students are wont to do, his interests shifted; he moved on to other things and became a freelance coder. I haven't heard from the guy in at least 10 years.

I continued to follow what Hawkins and George were up to for a few years. Then, one day, I saw that George had founded his own company, and that the newer ideas Hawkins was working on were COMPLETELY DIFFERENT from the work with George. George's stuff (with Hawkins) was based on principles like "belief propagation" and the "MAP rule (maximum a posteriori)"; the newer stuff by Hawkins was based more on the kinds of ideas you see in a classical introductory algorithms class (hashing, string manipulation, function iteration, and so on). Furthermore, George attracted a lot of big investors with his company Vicarious (that company came a few years after Hawkins went a different way). I haven't looked in depth at his later stuff -- and don't really remember his earlier stuff that well. My overall-and-not-very-well-thought-out impression, though, is that it's not going to lead to the kind of breakthroughs that were promised (or at least implied in video presentations by Dileep George and Scott Phoenix, who I think used to be named something like Scott Brown). As I suggested, the same is true of Jeff Hawkins.

The neuroscience community is full of people who are ticked off at Hawkins for giving the impression that he is making "serious progress" and getting attention for it; e.g. I think I posted some tweets by David Markowitz where he wrote some nasty things. But Hawkins also has his defenders, who like to see a fresh perspective.

Hawkins kind of has a chip on his shoulder after not getting into the neuroscience grad school of his choice (Berkeley), and perhaps the animosity is mutual (he dislikes them as much as they dislike him).

I think Hawkins's greatest legacy will probably be the influence he has had on getting people excited by neural nets. For example, Andrew Ng sometimes mentions how big an influence On Intelligence had on him -- it might even be the case that his early work at Google on Deep Learning would never have happened without Hawkins's influence.

As with Ben Goertzel, I'd like to see what he is doing pan out -- but I'm afraid I have to admit that I think the probability is about zero.

#3 Jakob (Stable Genius, 6,124 posts)

 

belief propagation" and the "MAP rule (maximum a posteriori)"

I never thought I'd see these terms in the wild.



#4 starspawn0 (Member, 1,319 posts)
Yes, it used some standard tools you have probably heard of before. Honestly, the approach in the paper didn't look that innovative. I think it was something like a multi-layered graphical model with a forward and backward pass to do inference and training.

Probably George did other things besides this for his Ph.D. thesis -- I don't recall.



