Demis Hassabis urges greater caution in AI development

funkervogt
Posts: 1171
Joined: Mon May 17, 2021 3:03 pm

Demis Hassabis urges greater caution in AI development

Post by funkervogt »

It's kind of remarkable for such a restrained man to say this.
It is in this uncertain climate that Hassabis agrees to a rare interview, to issue a stark warning about his growing concerns. “I would advocate not moving fast and breaking things,” he says, referring to an old Facebook motto that encouraged engineers to release their technologies into the world first and fix any problems that arose later. The phrase has since become synonymous with disruption. That culture, subsequently emulated by a generation of startups, helped Facebook rocket to 3 billion users. But it also left the company entirely unprepared when disinformation, hate speech, and even incitement to genocide began appearing on its platform. Hassabis sees a similarly worrying trend developing with AI. He says AI is now “on the cusp” of being able to make tools that could be deeply damaging to human civilization, and urges his competitors to proceed with more caution than before. “When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful,” he says. “Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.” Worse still, Hassabis points out, we are the guinea pigs.
https://time.com/6246119/demis-hassabis ... interview/
funkervogt
Posts: 1171
Joined: Mon May 17, 2021 3:03 pm

Re: Demis Hassabis urges greater caution in AI development

Post by funkervogt »

What Hassabis said in 2016:
In his view, public alarmism over AGI obscures the great potential near-term benefits and is fundamentally misplaced, not least because of the timescale. “We’re still decades away from anything like human-level general intelligence,” he reminds me. “We’re on the first rung of the ladder. We’re playing games.” He accepts there are “legitimate risks that we should be thinking about now”, but is adamant these are not the dystopian scenarios of science fiction in which super-smart machines ruthlessly dispense of their human creators.
https://www.theguardian.com/technology/ ... nd-alphago
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: Demis Hassabis urges greater caution in AI development

Post by Yuli Ban »

Probably due to two things:

• Unexpectedly rapid development of generalist models
• Unexpected capabilities unlocked by scaling laws

And maybe a third one:
• Repeated rediscovery of what ELIZA's maker found 60 years ago: humans are very easily fooled


All three together make this a fairly dangerous situation.

Edit: And now that I'm back, let me expand on this more than it needs to be expanded upon

When Hassabis first said that "AGI is decades away," I think he genuinely believed it. Circa 2016, who other than really hardcore Singularitarians and the more optimistic casuals of /r/Futurology believed that anything resembling AGI was near? DeepMind's AlphaGo defeated humans at Go, yes, but just as with Deep Blue two decades prior, all this proved was that a sufficiently advanced narrow AI could beat humans at a task we once believed required general AI.
AI circa 2016 was still a bunch of parlor tricks, and (almost) no one had any idea that large language models would provide a major shortcut to generality (some acutely aware types like Starspawn0 foresaw it, but even he wasn't making grand predictions of a transformative LLM arising within five years).

It wasn't until 2017 that "Attention Is All You Need" introduced the transformer, and it wasn't until 2019, with GPT-2, that we saw any glimmer of what LLMs could do. So effectively, before Valentine's Day 2019, there was no clear path towards AGI, proto or otherwise.

In the late 2010s, the field of AI was marked by a significant amount of hype and overpromising. Many companies and researchers were eager to showcase the capabilities of their AI systems, but in reality most of these systems could only perform specific, narrow tasks. They were essentially data science parlor tricks: good at certain tasks, such as image recognition or language translation, but unable to generalize their knowledge to other tasks or to understand the meaning behind the data they were processing.

Despite this, the field was often portrayed as being on the cusp of achieving true artificial general intelligence. This led to a perception that the field was akin to a giant Potemkin village, where impressive-looking advancements were being made, but the reality was that true AGI was still a long way off.

Furthermore, many of the advancements in AI were driven by large amounts of data and computing power, rather than significant breakthroughs in algorithms or theoretical understanding of intelligence. This led to a situation where the field was heavily reliant on access to massive amounts of data and computational resources, and progress was often hindered by the limitations of these resources.

In the early 2020s, something changed in the field. Despite there still being no major fundamental algorithmic breakthroughs, AI started doing something it never had before: generalizing. This was made possible by the rise of large language models such as GPT-3. These models, trained on vast amounts of data, were able to understand and generate human language with unprecedented fluency and accuracy.

DeepMind was caught off guard by the potential of these large language models. It had been focused on other routes to generality, chiefly deep reinforcement learning and neuroscience-inspired approaches, and was not initially aware of what these models could do. However, as their potential became more apparent, DeepMind quickly adapted and started incorporating them into its own research; a company so focused on quality over quantity must have realized very quickly that something was up with LLMs.

The ability of AI to generalize, thanks to these large language models, opened up new possibilities for AI in fields such as natural language processing, language translation, and even creative writing. It also raised new ethical concerns, such as the potential misuse of these models, and the need for responsible AI development.

Before the emergence of LLMs, the general perception among AI researchers was that AGI was still decades, if not centuries or even millennia, away. This belief rested on the idea that the only way to reach AGI was through reverse engineering the brain and advancing deep reinforcement learning. The emergence of LLMs suddenly compressed that timeline and changed the perception of what was possible. The cold fact is that there are hints of true generality emerging from contemporary large language models, hints that simply did not exist no matter how much one stretched the capabilities of pre-2020s models.

It's as if Victorian scientists had been trying to reverse engineer an F-35 with only a basic idea of how heavier-than-air flight might work, and then LLMs came along like the first airplanes sent back in time. These models are still far from AGI, but they are far ahead of anything developed before, and they have accelerated AGI timescales by an enormous, almost stupefying amount.

Demis Hassabis was among those who believed that AGI was still far away, based on the assumption that the only way to reach AGI was through reverse engineering the brain and deep reinforcement learning. However, with the emergence of LLMs, it became clear that there were other paths to AGI, and that the development of AGI was much closer than previously thought.


Furthermore, there was another issue: the general perception among AI researchers was that there were only three types of AI: narrow AI, general AI, and super AI. However, with the emergence of LLMs, it became clear that there was an intermediate phase between narrow and general AI, one that had not yet been named.

These multimodal and generalist models now exist in a "bizarro twilight stage" between narrow and general AI, a stage that has no name but has the potential to unleash transformative AI on the world. Even a moderately generalized proto-AGI could upend whole industries in a way even the strongest narrow AIs couldn't. Yet there was zero preparation for this development, because everyone from engineers to data scientists to economists agreed that AGI was still decades away, and thus not a concern for contemporary society.


One of the main concerns with the rise of LLMs is the lack of genuine and honest discussion about AI safety. Before the emergence of LLMs, the debate around AI safety was often dismissed as science fiction or limited to corporate discussions of "AI replicating biases." But now, as LLMs and diffusion models are being commercialized, society is in a massive gray area of uncharted territory.

LLMs alone have also revitalized and amplified the ELIZA effect: the tendency for people to attribute human-like intelligence and understanding to AI systems. The ELIZA effect was pronounced even when it emerged from purely narrow Markov-chain and fuzzy-logic chatbots, which could easily trick humans into believing the bots had some level of cognitive function. It has become supercharged now that LLMs show hints of true natural language understanding and commonsense reasoning. This can lead to confusion about the true capabilities of these systems, and about the risks they pose.
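
To show just how shallow the original trick was, here's a toy ELIZA-style sketch in Python (my own illustration, not Weizenbaum's actual DOCTOR script): a handful of regex reflection rules and a canned fallback, with zero representation of meaning, and mechanisms this crude were enough to convince 1960s users the machine understood them.

import re

# Toy ELIZA-style bot: a few pattern -> reframing rules and no model of
# meaning whatsoever. Everything it "understands" is a regex capture group.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."  # stock therapist prompt when nothing matches

def respond(user_input: str) -> str:
    # Return the first matching rule's reframing, else the stock prompt.
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("I feel like nobody listens to me"))    # Why do you feel like nobody listens to me?
print(respond("My mother is always criticizing me"))  # Tell me more about your mother.
print(respond("It rained all day"))                   # Please go on.

That's the entire mechanism, and people still attributed understanding to it. Swap it for a model that actually produces fluent, context-aware text, and the effect gets supercharged.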

Thus, it's no wonder Demis Hassabis is suddenly having strong hesitations about the rapid rate of AI progress. No one predicted what has happened with any real accuracy, even those who guessed the broad strokes. It's all happening too fast, too soon.


By the way, everything after "So effectively, before Valentine's Day 2019, there was no clear path towards AGI, proto or otherwise" was written by ChatGPT.
And remember my friend, future events such as these will affect you in the future
bluethought
Posts: 1
Joined: Mon Jan 16, 2023 12:59 pm

Re: Demis Hassabis urges greater caution in AI development

Post by bluethought »

Long time lurker, first time poster.

I was reading your post and thinking, "Wow, Yuli's writing style has significantly changed since the last time I checked in." Something seemed different, but I never considered it was ChatGPT's output! It continues to be amazing!
lechwall
Posts: 79
Joined: Mon Jan 02, 2023 3:39 pm

Re: Demis Hassabis urges greater caution in AI development

Post by lechwall »

Regardless of whether LLMs are the path forward to true AGI, one thing is certain: a lot of jobs are about to be automated away by GPT-4 (if it's a leap in improvement à la GPT-2 to GPT-3), and this is going to cause a big disruption.
ººº
Posts: 359
Joined: Fri Sep 16, 2022 3:54 am

Re: Demis Hassabis urges greater caution in AI development

Post by ººº »

Is GPT-4 really that revolutionary?