Artificial General Intelligence (AGI) News and Discussions

User avatar
Ozzie guy
Posts: 486
Joined: Sun May 16, 2021 4:40 pm

Re: Proto-AGI/First Generation AGI News and Discussions

Post by Ozzie guy »

Google actively building AGI announcement

About a month ago I purchased a TED talk in which Google discusses the AGI they are building, called Google Pathways.

I am not any kind of AI researcher, but it really seems like Google is building an AGI.

I am now able to download the TED talk, so I want to share it to get the word out there.

This TED video contains multiple speakers; PLEASE SKIP TO 27:55 for the relevant talk by Jeff Dean.

https://mega.nz/file/lxYiHb5S#HglcdIEUh ... 6r4oS4rekc

I may have to take down the talk at some point for copyright reasons.
User avatar
wjfox
Site Admin
Posts: 8730
Joined: Sat May 15, 2021 6:09 pm
Location: London, UK
Contact:

Re: Proto-AGI/First Generation AGI News and Discussions

Post by wjfox »

User avatar
andmar74
Posts: 389
Joined: Mon May 24, 2021 9:10 am
Location: Denmark

Re: Proto-AGI/First Generation AGI News and Discussions

Post by andmar74 »

Global governance system for AGI. Never going to happen.
Something much, much easier to handle: Covid-19. How did that go?
lkiannn
Posts: 1
Joined: Sun May 23, 2021 12:59 pm

Re: Proto-AGI/First Generation AGI News and Discussions

Post by lkiannn »

For COVID, they had to create a structure that would WORK. For AGI, they may (and very probably will) opt to create a structure that would outright FORBID any creative work. Bureaucracies are much better at tasks like that, so I expect that to happen.
Lariliss
Posts: 10
Joined: Tue Oct 12, 2021 7:33 am

Re: Proto-AGI/First Generation AGI News and Discussions

Post by Lariliss »

Human consciousness and intelligence are currently categorized by observable reactions and measurable brain-task results (consciousness, self-awareness, sentience, sapience), but they are far from understood.
In this respect, AGI is helpful as a technology in itself: it helps science gain a deeper understanding of the human brain through pieces of technology rather than through animal and human experiments.
This is the positive side, and it is a feature of all AI research. There is no need to wait 100 years and countless studies; we can feed it enough information, which can be processed overnight.
On the other hand, AGI raises questions of possible threats (why not control it with regulations from the start?) and of 'rights', a boundary that is still blurry even for animals.

I would like to watch the results, and the helping hand, from AGI, as from any technology. It is easy to remember both sides of the coin with nuclear power, space technologies, and the internet.
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: Proto-AGI/First Generation AGI News and Discussions

Post by Yuli Ban »

Human-Level Language Models
Language models have received a lot of attention recently, especially OpenAI's GPT-3 and Codex. A human-level language model remains out of reach for now, but when might one arrive? To help answer this question, I turn to some basic concepts in information theory, as pioneered by Claude Shannon.

Shannon was interested in understanding how much information is conveyed by English text. His key insight was that when a text is more predictable, less information is conveyed per symbol compared to unpredictable texts. He made this statement more precise by introducing the concept of entropy. Roughly speaking, entropy measures the average unpredictability of a sequence of text, in the limit of perfect prediction ability.

Since Shannon’s work, a popular hobby of computational linguists has been to invent new ways of measuring the entropy of the English language. By comparing these estimates with the actual performance of language models at the task of predicting English text, it is possible to chart the progress we have made toward the goal of human-level language modeling. Furthermore, there are strong reasons to believe that entropy is a more useful metric for tracking general language modeling performance when compared to performance metrics on extrinsic tasks, such as those on SuperGLUE.

My result is a remarkably short timeline: Concretely, my model predicts that a human-level language model will be developed some time in the mid 2020s, with substantial uncertainty in that prediction.

I offer a few Metaculus questions to test my model and conclude by speculating on the possible effects of human-level language models. Following Alan Turing, mastery of natural language has long been seen as a milestone achievement, signaling the development of artificial general intelligence (AGI). I do not strongly depart from this perspective, but I offer some caveats about what we should expect after AGI is developed.
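To make the entropy idea concrete, here is a minimal sketch (not from the post above; the sample text and the unigram model are illustrative assumptions) that computes the empirical per-character entropy of a text in bits per character, the same quantity Shannon estimated for English:

```python
import math
from collections import Counter

def unigram_entropy_bits(text: str) -> float:
    """Empirical per-character entropy H = -sum(p * log2(p)) under a
    unigram (independent-character) model. This upper-bounds the true
    entropy rate of the source: better models (n-grams, neural LMs)
    assign higher probability to the text and so report fewer bits
    per character, approaching the true entropy from above."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

sample = "the quick brown fox jumps over the lazy dog " * 50
print(f"{unigram_entropy_bits(sample):.2f} bits/char")  # ≈ 4.34 bits/char
```

Comparing a language model's cross-entropy on held-out text against estimates like this is exactly the progress-tracking the post describes: as models improve, their bits-per-character figure falls toward the human-level entropy estimate.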
And remember my friend, future events such as these will affect you in the future
User avatar
funkervogt
Posts: 1171
Joined: Mon May 17, 2021 3:03 pm

Re: Proto-AGI/First Generation AGI News and Discussions

Post by funkervogt »

AI researcher Andrew Critch predicts massive progress in AI technology and massive expansion of tech companies that control it by 2035.
Regarding my AI development timeline/forecast: I'm going to start saying things more publicly now. Privately, I've been saying that combining transformers and knowledge graphs was the Next Important Thing. This is the sort of thing I mean: http://research.baidu.com/Blog/index-view?id=160

On the whole, I expect the jobs of most machine learning engineers, as they currently exist at top tech companies, to be mostly-automated by around 2030, or maybe 2035 (assuming no international moratorium on AI research, or similar). Most of the same people will still be able to retain jobs in tech, but the jobs will be pretty different and operating at a higher level of abstraction away from the details of ML engineering work as it currently exists. In particular, progress in other areas of AI (not just NLP) will be aided and accelerated by this transition.

This transition will mean much more rapid development of capabilities for tech companies, yielding a "singularity" at the level of «what a big tech company can do». That is to say, while currently tech companies are only readily able to provide services in a few sectors of the economy (as defined by, say, Fidelity's Research division: https://eresearch.fidelity.com/.../sect ... rket.jhtml), by 2030-2035 most big tech companies (e.g., BATFAANG) will be readily able to provide profitable services in most or all sectors. By "readily able" I mean that if you're a big tech company, you could throw a dart at a chart of industry sectors, and wherever the dart lands, become a major player in that sector with less than 1 year of effort (assuming no major international moratorium on Letting Tech Companies Do Stuff), and repeat this process each year, probably in parallel to some extent.

Whether this level of rapid capabilities development leads to any kind of "AI lab leak" of dangerous AI technology will depend on how hard tech companies try to prevent lab leaks using very meticulous cybersecurity protocols (e.g., tons of checklists and meetings to second-guess their assumptions). I'm fairly uncertain as to whether the first major AI lab leak will be (a) a recoverable disaster (like the recent pandemic), (b) an unrecoverable catastrophe (like, say, the Ice Age), or (c) an event that inevitably precipitates human extinction (necessarily involving more potent tech than (a) or (b) would require). By "major AI lab leak" I mean something at least as bad for computers as the recent pandemic has been for humans. I'm currently putting probabilities of around 85% on the recoverable disaster outcome (a), 10% on the unrecoverable catastrophe outcome (b), and 5% on the extinction event (c).

I do think (p ~ 85% or maybe 90% if I reflect more) that there will be a serious AI lab leak sometime in the next 10-15 years, at least enough to knock out a major portion of the internet and attached devices. Here's hoping it's not of type (b) or (c)! Clearly, (c) can only occur as the *first* major AI lab leak if we avoid other major AI lab leaks with (a)-level and (b)-level tech, so if it happens, it happens a bit later.

Also, as a reminder, I continue to think most existential risk from AI technology will come some years *after* greater-than-human AI technologies have been adopted by a geopolitically multipolar combination of actors/institutions, so the number 5% above is not an upper bound on my overall estimate of AI x-risk. For a more comprehensive breakdown of scenarios, see http://acritch.com/arches .

Here's hoping y'all have an enjoyable singularity!
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: Proto-AGI/First Generation AGI News and Discussions

Post by Yuli Ban »

We solve university level probability and statistics questions by program synthesis using OpenAI's Codex, a Transformer trained on text and fine-tuned on code. We transform course problems from MIT's 18.05 Introduction to Probability and Statistics and Harvard's STAT110 Probability into programming tasks. We then execute the generated code to get a solution. Since these course questions are grounded in probability, we often aim to have Codex generate probabilistic programs that simulate a large number of probabilistic dependencies to compute its solution. Our approach requires prompt engineering to transform the question from its original form to an explicit, tractable form that results in a correct program and solution. To estimate the amount of work needed to translate an original question into its tractable form, we measure the similarity between original and transformed questions. Our work is the first to introduce a new dataset of university-level probability and statistics problems and solve these problems in a scalable fashion using the program synthesis capabilities of large language models.
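The abstract does not reproduce any of the generated programs; as an illustration of the kind of probabilistic program it describes (the question and code below are my own assumptions, not taken from the paper), here is how a textbook problem — "what is the probability that two fair dice sum to 7?" — becomes a short simulation that computes its own solution:

```python
import random

def prob_sum_is_7(trials: int = 100_000, seed: int = 0) -> float:
    """Monte Carlo estimate of P(die1 + die2 == 7) for two fair
    six-sided dice. The exact answer is 6/36 = 1/6 ≈ 0.1667."""
    rng = random.Random(seed)  # seeded for reproducibility
    hits = sum(1 for _ in range(trials)
               if rng.randint(1, 6) + rng.randint(1, 6) == 7)
    return hits / trials

print(prob_sum_is_7())  # close to 1/6 ≈ 0.1667
```

The "prompt engineering" step in the paper corresponds to rewriting the word problem into an explicit specification like the docstring above, from which a model such as Codex can emit the simulation.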
And remember my friend, future events such as these will affect you in the future
User avatar
Ozzie guy
Posts: 486
Joined: Sun May 16, 2021 4:40 pm

Re: Proto-AGI/First Generation AGI News and Discussions

Post by Ozzie guy »

With Yuli saying proto-AGI may now exist but is unreleased, I felt like sharing my plans for after proto-AGI is released.

I mentioned this before: I am going to become a minimalist. I have had enough of working crappy, slightly dangerous jobs, and I have failed to make it in self-employment.

I have become very close with my family again, so I may give up doing things I don't like and start seriously caring for my sick dad. I don't think it's that strange for a 24-year-old to live with family; my brother still does. I really don't want to work another depressing job (I may have to, but at least I won't waste time, effort, and money on becoming self-employed). I might go to university and study something fun rather than something that is good for a career, since in my country the government pays you to go to university, so that would cover my income if I needed one (you only have to pay them back once you have a decent income).

I would give myself permission to be a minimalist enjoying life, because I know it won't take long for proto-AGI to turn into human-level AGI, and then into a utopian or dystopian singularity where any amount of prior self-improvement or work is irrelevant.
User avatar
Yuli Ban
Posts: 4631
Joined: Sun May 16, 2021 4:44 pm

Re: Proto-AGI/First Generation AGI News and Discussions

Post by Yuli Ban »

And remember my friend, future events such as these will affect you in the future
Post Reply