One of the most interesting points in the discussion came from Demis Hassabis:
I just want to caveat one thing about slow versus fast progress. Imagine there was a moratorium on AI research for 50 years, but hardware continued to accelerate as it does now. This is sort of what Nick's point was: there could be a massive hardware overhang, where many, many different approaches to AI, including seed AI, self-improving AI, all these things, could become possible. Maybe one person in their garage could do it, and it would be a lot more difficult to coordinate that kind of situation. So I think there is some argument to be made that you want to make fast progress while we are at the very hard point of the 's' curve, where you need quite a large team, you have to be quite visible, you know who the other players are, and in a sense society can keep tabs on who the major players are and what they are up to. As opposed to a scenario in, say, 50 or 100 years' time when a kid in their garage could create a seed AI or something like that.
This is nothing new, but I think a lot of people miss this point. First, he is saying that AGI becomes possible through improvements in hardware alone. Many people treat the creation of AGI as a problem like those theoretical physicists grapple with, such as finding a complete theory of the universe. That is a poor comparison: such a theory may be beyond human understanding, so even the finest minds working on it for decades might fail, and the problem may simply be unsolvable. AGI, by contrast, will be created given enough improvement in hardware.
Second, he argues that this hardware overhang creates an incentive for fast progress toward AGI now: large, visible research groups want to get past the goalpost before any nitwit in a garage can, and hopefully build in strong safety features along the way.