Of course, Moore's Law guarantees that the computers will get even faster quite soon.
Moore's Law actually ended in 2010 at 32nm. I still see people (like Kurzweil) pushing this myth, but it isn't real. We can go over this. Both Intel and AMD were producing 32nm products in 2010. Dropping down one full node every two years, with a half node in between, would have meant roughly 28nm in 2011, 22nm in 2012, 14nm in 2014, 10nm in 2016, and 7nm in 2018. Compare that with what actually happened:
AMD, with production at Global Foundries, dropped a half node to 28nm in 2014. GF, with process technology licensed from Samsung, had 14nm in 2017 and 12nm in 2018. GF currently has no plans for 7nm production. AMD's 7nm production began in 2019 at TSMC.
Intel began producing 22nm in 2012, 14nm in 2014, and 10nm in 2018 (which was effectively canceled, with volume production delayed to 2020).
It's worse than it looks. AMD's 14nm process was really closer to a 22nm-class process, and its 12nm process is closer to 16nm. And TSMC's and Samsung's 7nm are only about equal to Intel's 10nm. So we're at least three years behind.

None of this has been smooth. Intel was still making 22nm chips in 2015 because it couldn't get enough volume out of 14nm. In 2018, Intel could not meet demand for 14nm chips and was having trouble with heat dissipation on Coffee Lake. Cannon Lake was effectively canceled, so the parts that were supposed to move to 10nm kept shipping on 14nm. Today, Intel is still making 14nm chips because it can't get enough volume from 10nm. Both Intel and AMD have gone to hybrid chips with chiplets built on two different process sizes, because no single process can meet all the requirements.

Today, no one knows how to build a true 7nm chip. So when you hear numbers like 5nm thrown around, understand that they are talking about something more like 9nm. People also confuse what is possible for memory (smallest), what is possible for a low-power chip (middle), and what is possible for a high-performance chip (largest). In other words, you can build SRAM smaller than you can build a low-power processor (like the one in a cell phone), and the cell-phone chip can be made smaller than a desktop processor chip.

We are hitting the wall. We've had to resort to immersion lithography with quad patterning, and now EUV, and some of the recent gains came from rearranging the layout of the chip rather than making the components smaller. If you are counting on Moore's Law, you've probably hitched your wagon to a lame horse.
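To put the node-name games and the schedule slip side by side, here's a quick sketch in Python. The equivalences are the rough estimates claimed above (not vendor figures), and the cadence arithmetic just restates the two-year rule:

```python
# Marketing node names vs. the rough equivalents claimed above
# (these are the estimates from this post, not vendor figures).
marketing_vs_effective = {
    "GF/AMD 14nm":       "roughly 22nm-class",
    "GF/AMD 12nm":       "roughly 16nm-class",
    "TSMC/Samsung 7nm":  "roughly Intel 10nm-class",
    "marketed 5nm":      "more like 9nm-class",
}

# The old cadence: one full node every two years from 32nm in 2010
# (32nm -> 22nm -> 14nm -> 10nm) would have put 10nm around 2016.
expected_10nm = 2010 + 2 * 3
actual_10nm = 2020   # Intel's volume 10nm
print(f"10nm expected ~{expected_10nm}, shipped in volume ~{actual_10nm}: "
      f"{actual_10nm - expected_10nm} years behind")

for name, effective in marketing_vs_effective.items():
    print(f"{name:>17} -> {effective}")
```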
There are ways to get at least a few more generations once conventional scaling runs out, but they won't be popular since they can't use current development techniques. And we might still increase the wafer size from 300mm to 400mm.
For obvious reasons, I don't think we'll be able to create a human-level AGI until we have computers whose hardware is as powerful as a human brain. The best estimate is that a human brain does the equivalent of 10^16 calculations per second, and today, only the best supercomputers are that fast.
For 32-bit operations, 10^16 seems about right. However, Summit can do 14.8 * 10^16 FLOPS now, and Aurora and Frontier are both expected in 2021, with Aurora at 100 * 10^16 FLOPS and Frontier at 150 * 10^16 FLOPS. I am quite confident that none of them will be AGI systems.
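Just to make the magnitudes concrete, here's a quick back-of-the-envelope in Python using the figures quoted in this thread (the 10^16 brain estimate and the machine numbers above):

```python
# Back-of-the-envelope comparison using the figures quoted in this thread.
BRAIN_OPS_PER_SEC = 1e16   # the commonly cited "brain equivalent" estimate

machines_flops = {
    "Summit (2018, measured)":   14.8e16,
    "Aurora (expected ~2021)":   100e16,
    "Frontier (expected ~2021)": 150e16,
}

for name, flops in machines_flops.items():
    print(f"{name}: {flops / BRAIN_OPS_PER_SEC:.0f}x the 1e16 brain estimate")
```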
However, I'm struck by a troubling realization: Even today's PC desktops have processing speeds greater than insect brains, yet no one has figured out how to build insect-level AI. See this graph:
I'm not sure what you mean by this. What is it that you're saying an insect can do that a computer can't?
On the other hand, I wonder if some of our computers DO vastly exceed insect-level intelligence in some domains. For example, the computers that drive autonomous cars, play games like Go and Starcraft, and synthesize written text (GPT-3) might be much better at those tasks than an insect-level AGI ever could be.
There's no such thing as "insect-level AGI" that I'm aware of. The term 'artificial intelligence' was coined at the Dartmouth conference in 1956, and since that time the goal has been human-level AI. Many people, including Alan Turing and John Nash, assumed that brains were just a type of computer and that at some point computers would be able to do whatever brains could. The Turing Test, for example, would make sense if the assumption of brain/computation equivalence turned out to be true. Unfortunately, it didn't work out that way. The first Cray-1 was built in 1976 with 160 megaFLOPS, and people began to have doubts about human-level AI. It wasn't just that the hardware wasn't fast enough; computer scientists couldn't figure out how to even theoretically define human reasoning, much less explain how it worked.
Nevertheless, there were those who suggested that all we needed to do was reach some level of performance using the closest techniques available and then, once we were in the ballpark, we could figure the rest out. However, Japan's 5th and 6th generation projects collapsed without making progress, and neither did Cyc get anywhere over the following decade. Finally, in 1997, Gubrud coined the term 'artificial general intelligence' to try to distinguish the goal from AI. The Singularity Institute for Artificial Intelligence was founded by Yudkowsky in 2000, and Legg and Goertzel began using the term 'AGI' in 2002. By 2006, Cray's Red Storm, at 101 teraFLOPS, was nearly a million times faster than the Cray-1. Some confidence returned to the field, whether because of Yudkowsky or because of the more powerful hardware, and people began talking about it again. Today we have DeepMind, the Human Brain Project, and OpenAI, all of which talk around AGI while still being unable to define it, much less design or build it. The SIAI is now the Machine Intelligence Research Institute (MIRI), and Goertzel developed the OpenCog Prime software for OpenCog.
To me, the hype on this topic is staggering. According to self-driving car enthusiasts, the technology is either here already (but just hasn't been officially approved) or will arrive within months. This is all nonsense. My mother's 2015 Subaru Forester has features like lane-departure warning, automatic braking, and adaptive cruise control with vehicle following; that would be Level 2. The best systems available today, like Tesla's Autopilot or the Openpilot system, are Level 3. No one knows how to get to Level 4, while a truly autonomous system would be Level 5.
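For anyone who doesn't have the SAE levels memorized, here's a minimal reference sketch in Python; the one-line descriptions paraphrase the standard SAE J3016 definitions:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels, paraphrased for reference."""
    NO_AUTOMATION = 0           # human does all of the driving
    DRIVER_ASSISTANCE = 1       # one assist at a time (e.g. adaptive cruise OR lane keeping)
    PARTIAL_AUTOMATION = 2      # combined steering and speed assists; driver supervises constantly
    CONDITIONAL_AUTOMATION = 3  # car drives itself in limited conditions; driver must take over on request
    HIGH_AUTOMATION = 4         # no driver needed within a defined operating domain
    FULL_AUTOMATION = 5         # no driver needed anywhere a human could drive
```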
People seem constantly confused about the concept of development. Development is an engineering process: it's how you slowly improve something over time, making it more powerful, cheaper, or more suited to a task. For example, once tiny locomotives were available to replace horses, they were steadily developed over the next century until they weighed a million and a half pounds and put out over 4,000 HP. The triple-expansion steam engines on the Titanic delivered 15,000 HP. However, when development reaches its practical limit, you have to switch to something else. So piston steam engines on ships gave way to steam turbines, which are also used in power plants. Locomotives switched from steam piston engines to diesels. Eventually diesels got big enough to replace steam turbines on ships and to be used for power generation. Piston engines on aircraft were replaced by jet engines, and jet engines are also used for power generation.
The biggest misunderstanding about AGI seems to be the assumption that it can be developed from AI. This is incorrect. AGI is not a bigger, faster, or more complex version of AI -- it's a completely different technology, one that isn't derived from computational theory the way AI is. Watson and Alpha Zero will never be AGI, regardless of what hardware they are run on. In a similar fashion, Autopilot and OpenPilot will never be autonomous, nor will Alexa or Siri ever be a genuine personal assistant. AI to AGI is like switching from a piston engine to a jet engine -- they are fundamentally different things. You can't have insect-level AGI because insect functionality is similar to a finite automaton, while AGI is considerably more complex (there's a toy sketch of that distinction further below). Curiously, MIRI seems well aware of this:
MIRI researchers have expressed skepticism about the views of singularity advocates like Ray Kurzweil that superintelligence is "just around the corner". MIRI has funded forecasting work through an initiative called AI Impacts, which studies historical instances of discontinuous technological change
Based only on what is publicly known from the published research, AGI wouldn't seem likely within the next 50 years.
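To make the finite-automaton comparison concrete, here's a toy sketch in Python. The states and stimuli are invented purely for illustration, not taken from any real insect model; the point is that this kind of behavior is a fixed lookup table, not open-ended reasoning:

```python
# Toy finite automaton: a fixed (state, stimulus) -> next-state table.
# The states and stimuli are invented for illustration; the point is that
# behavior like this is a lookup, not open-ended reasoning.
TRANSITIONS = {
    ("wander", "food_scent"):    "follow_gradient",
    ("wander", "shadow"):        "freeze",
    ("freeze", "shadow_gone"):   "wander",
    ("follow_gradient", "food"): "eat",
    ("eat", "sated"):            "wander",
}

def step(state: str, stimulus: str) -> str:
    # Unrecognized (state, stimulus) pairs leave the state unchanged.
    return TRANSITIONS.get((state, stimulus), state)

state = "wander"
for stimulus in ["shadow", "shadow_gone", "food_scent", "food", "sated"]:
    state = step(state, stimulus)
    print(f"{stimulus:>11} -> {state}")
```

Nothing about scaling a table like that up gets you abstraction or generalization, which is exactly the gap the rest of this thread is about.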
I don't know. What do you guys think? Are you also troubled by the gap between the theoretical performance of our computer hardware and the actual performance of the best "AI" software running on it?
It's about what I would expect given the limitations of AI.
And the algorithms the brain uses to map this data to "programming" are probably pretty dumb. It isn't something that you need a "genius" to come up with. Listen to this podcast, for example, which has an interview with Princeton professor Uri Hasson:
I think you might be exaggerating what he said.
At the same time, these artificial networks, as opposed to humans, fail miserably in situations that require generalization and extrapolation across contexts
Yes, that's the problem that everyone has seen. So, does Hasson know how to solve it?
How high-level cognitive functions emerge from brute-force, over-parameterized BNNs is likely to be a central question for future cognitive studies.
No, he doesn't seem to.
But GPT-3 and similar systems might give machines enough information about how we think, to where they can emulate abstract human thought.
I wrote the theory for abstraction in 2016, but, because of specific conflicts, still have not submitted it for publication. So, I would naturally be interested if someone had an implementation.
What that says to me is that models like GPT-3 are not just "memorizing" the way skeptics seem to think. They are actually capturing fundamental human cognitive abilities, somehow.
That would be remarkable. Let's look:
Despite GPT-3's improved benchmark results over GPT-2, OpenAI cautioned that such scaling up of language models could be approaching or already running into fundamental capability limitations of the current approach
Fundamental scaling limitations at this level would not seem to fit with the idea of general abstraction or human-type cognition.