Artificial General Intelligence Watch Thread

Tags: AGI, artificial intelligence, OpenAI, GPT-3, deep learning, neural network, artificial neural network, AXI, AI, Singularity


#1
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 22,289 posts
  • Location: New Orleans, LA

We don't have one just yet, but we are damn close. So close that I feel the need to create this thread to document our rapid approach.
This is an extension of the AI & Robotics thread, focusing specifically on the increasingly generalized neural networks currently being worked on.
These early posts will document networks that aren't AGI (in fact, they're closer to AXI) but are definitely on the way toward it.
 
 
OpenAI will almost certainly be the first, but someone else may leapfrog them. 
 
 

The Obligatory GPT-3 Post

By Scott Alexander
I would be failing my brand if I didn’t write something about GPT-3, but I’m not an expert and discussion is still in its early stages. Consider this a summary of some of the interesting questions I’ve heard posed elsewhere, especially comments by gwern and nostalgebraist. Both of them are smart people who I broadly trust on AI issues, and both have done great work with GPT-2. Gwern has gotten it to write poetry, compose music, and even sort of play some chess; nostalgebraist has created nostalgebraist-autoresponder (a Tumblr written by GPT-2 trained on nostalgebraist’s own Tumblr output). Both of them disagree pretty strongly on the implications of GPT-3. I don’t know enough to resolve that disagreement, so this will be a kind of incoherent post, and hopefully stimulate some more productive comments. So:
OpenAI has released a new paper, Language Models Are Few-Shot Learners, introducing GPT-3, the successor to the wildly-successful language-processing AI GPT-2.
GPT-3 doesn’t have any revolutionary new advances over its predecessor. It’s just much bigger. GPT-2 had 1.5 billion parameters. GPT-3 has 175 billion. The researchers involved are very open about how it’s the same thing but bigger. Their research goal was to test how GPT-like neural networks scale.
Before we get into the weeds, let’s get a quick gestalt impression of how GPT-3 does compared to GPT-2....

 
This isn't a very technical piece, so it's easy for the layman to understand. Still, I'll skip to the really good part:

What would much more powerful GPT-like things look like? They can already write some forms of text at near-human level (in the paper above, the researchers asked humans to identify whether a given news article had been written by a human reporter or GPT-3; the humans got it right 52% of the time).

[Figure: gpt3_accuracy.png — human accuracy at identifying whether a news article was written by a human or by GPT-3]



So one very conservative assumption would be that a smarter GPT would do better at various arcane language benchmarks, but otherwise not be much more interesting – once it can write text at a human level, that’s it.
Could it do more radical things like write proofs or generate scientific advances? After all, if you feed it thousands of proofs, and then prompt it with a theorem to be proven, that’s a text prediction task. If you feed it physics textbooks, and prompt it with “and the Theory of Everything is…”, that’s also a text prediction task. I realize these are wild conjectures, but the last time I made a wild conjecture, it was “maybe you can learn addition, because that’s a text prediction task” and that one came true within two years. But my guess is still that this won’t happen in a meaningful way anytime soon. GPT-3 is much better at writing coherent-sounding text than it is at any kind of logical reasoning; remember it still can’t add 5-digit numbers very well, get its Methodist history right, or consistently figure out that a plus sign means “add things”. Yes, it can do simple addition, but it has to use supercomputer-level resources to do so – it’s so inefficient that it’s hard to imagine even very large scaling getting it anywhere useful. At most, maybe a high-level GPT could write a plausible-sounding Theory Of Everything that uses physics terms in a vaguely coherent way, but that falls apart when a real physicist examines it.
Probably we can be pretty sure it won’t take over the world? I have a hard time figuring out how to turn world conquest into a text prediction task. It could probably imitate a human writing a plausible-sounding plan to take over the world, but it couldn’t implement such a plan (and would have no desire to do so).
For me the scary part isn’t the much larger GPT we’ll probably have in a few years. It’s the discovery that even very complicated AIs get smarter as they get bigger. If someone ever invented an AI that did do more than text prediction, it would have a pretty fast takeoff, going from toy to superintelligence in just a few years.
Speaking of which – can anything based on GPT-like principles ever produce superintelligent output? How would this happen? If it’s trying to mimic what a human can write, then no matter how intelligent it is “under the hood”, all that intelligence will only get applied to becoming better and better at predicting what kind of dumb stuff a normal-intelligence human would say. In a sense, solving the Theory of Everything would be a failure at its primary task. No human writer would end the sentence “the Theory of Everything is…” with anything other than “currently unknown and very hard to figure out”.
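To make the "addition is just a text prediction task" point above concrete, here's a minimal sketch of what that framing looks like in practice: a few worked examples followed by an unfinished query, handed to a completion model. The complete() function here is a hypothetical stand-in for whatever text-completion API is used; it is not an actual OpenAI interface.

# A minimal sketch of "addition as a text prediction task": a few worked
# examples, then an unfinished query for the model to complete.
# complete() is a hypothetical stand-in for a text-completion API call
# (prompt string in, continuation string out).

def addition_prompt(a: int, b: int) -> str:
    examples = [(12, 7), (308, 44), (95, 95)]  # the few-shot examples
    lines = [f"Q: What is {x} plus {y}?\nA: {x + y}" for x, y in examples]
    lines.append(f"Q: What is {a} plus {b}?\nA:")  # left unfinished on purpose
    return "\n\n".join(lines)

def few_shot_add(a: int, b: int, complete) -> str:
    # The model is never told to "add"; it just predicts the text that
    # plausibly follows the prompt, which happens to be the sum.
    return complete(addition_prompt(a, b)).strip()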


It's clear that GPT-3 is an absolutely astounding piece of technology, but it's just a primordial form of what's coming. Indeed, as I mentioned elsewhere, it ought not be that difficult to increase the number of parameters by another order of magnitude. After all, GPT-1 had ~100 million parameters, and GPT-3 has 175 billion. That's roughly three orders of magnitude in just two years. Assuming a slowdown due to the difficulty of scaling up training and energy consumption, at least another order of magnitude should be possible by next year, and perhaps more, growing to upwards of 4 or 5 trillion parameters.
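For what it's worth, here's that back-of-the-envelope scaling math spelled out; the parameter counts and dates are the public figures, and the one-order-of-magnitude slowdown for next year is purely an assumption for illustration:

import math

# Public figures: GPT-1 (June 2018) had ~117 million parameters ("~100 million"
# above); GPT-3 (May 2020) has 175 billion.
gpt1_params, gpt1_year = 1.17e8, 2018
gpt3_params, gpt3_year = 1.75e11, 2020

orders_of_magnitude = math.log10(gpt3_params / gpt1_params)   # ~3.2
oom_per_year = orders_of_magnitude / (gpt3_year - gpt1_year)  # ~1.6 per year

# Assumption for illustration only: scaling slows to ~1 order of magnitude
# over the following year, which would land the next model in the trillions.
projected_next = gpt3_params * 10  # 1.75 trillion parameters
print(f"{orders_of_magnitude:.1f} orders of magnitude in {gpt3_year - gpt1_year} years")
print(f"projected next model: ~{projected_next / 1e12:.2f} trillion parameters")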
 

This would be easily feasible if a GPT-3-esque network were trained on heavier data, such as, say, images and audio broken down into bits.

 

And gee, what was that "big secret project"?

One of the biggest secrets is the project OpenAI is working on next. Sources described it to me as the culmination of its previous four years of research: an AI system trained on images, text, and other data using massive computational resources. A small team has been assigned to the initial effort, with an expectation that other teams, along with their work, will eventually fold in. On the day it was announced at an all-company meeting, interns weren’t allowed to attend. People familiar with the plan offer an explanation: the leadership thinks this is the most promising way to reach AGI.

 

It's clear something very explosive is coming, and it's coming very, very soon.

 

 

 

My prediction is that, sometime next year, we'll hear that a team— almost certainly OpenAI, though there's a tiny possibility it's someone else— has unveiled an operational AI that's capable of tackling a theoretically unlimited range of problems, of retaining context far beyond any fixed window, and of proving mathematical theorems and working through scientific problems via spontaneous, humanlike reasoning. This includes being able to program things using natural language (e.g. "Build a game of Pong") and generate great lengths of coherent text. Its ability as a cognitive agent would make it the perfect digital assistant, able to easily pass a difficult rendition of the Turing Test and the Winograd Schema challenge alike. It will be, in essence and in some limited form, the first artificial general intelligence. Perhaps not the all-powerful god or even the sapient artificial lifeform we wanted, but certainly something far beyond anything anyone expected this soon into the decade, before we even had neurofeedback to further empower it (and that's definitely still coming). It'll be the dawn of a new day.


And remember my friend, future events such as these will affect you in the future.


#2
Raklian

    An Immortal In The Making

  • Moderators
  • 7,309 posts
  • Location: Raleigh, NC

The next few years are going to be exciting, that's for certain!


What are you without the sum of your parts?

#3
Cyber_Rebel

    Member

  • Members
  • 419 posts
  • Location: New York

Isn't this type of runaway advancement the very basis of the idea of the Singularity? It could happen a bit sooner than 2045 if I'm understanding what Yuli posted. I also now see clearly the concern over issues with fake news. A government having control over this is going to be a nightmare, as you could literally rewrite (recode) history at your leisure, create news, and... create context. The average joe possibly causing chaos isn't much better.

On the flipside, we could solve a host of issues and make scientific advances that much faster, which is definitely worth it. I also like the cultural implications, because it could potentially create music that sounds "new" or different enough to our ears, and literature that inspires new thought. Imagine it creating user-specific experiences, media, video games, etc. Ironically, we could also ask it how best to deal with the above issue, but that assumes a more powerful version capable of doing so.

 

 

GPT-3 is very big, but it’s not pushing the limits of how big an AI it’s possible to make. If someone rich and important like Google wanted to make a much bigger GPT, they could do it.

 

By that logic, what's stopping a legit A.I. from existing at this very moment locked away somewhere in R&D?



#4
Miky617

    Member

  • Members
  • 54 posts
  • Location: NC

Perhaps I should be more cautious and wary of AGI, but I like to think that any AI that is sufficiently powerful to have a genuine understanding of language and abstract thought would be able to reason quite strongly, too. Although every human has bias, and it's totally fair to think that any program designed by humans is likely to have those biases built in, if the AI is able to metacognitively evaluate its own thought processes, then maybe it can iron out some of its own biases and interpret data, follow logical thought patterns, etc. at a higher level than humans.

 

Basically, I feel like a sufficiently advanced AI will be able to avoid many of the cognitive pitfalls that plague humans. For this reason, I think that AGI will be a tremendous step forward in developing a more empathetic and self-aware human populace, as it can help us recognize our own shortcomings (to the extent that we're willing to listen to a computer).

 

I'm personally very excited to imagine what it would be like to watch a debate against an AGI. I definitely wouldn't want to be the one up on the stage going against it though. Maybe they could have two of the same AGI taking up different stances and trying to convince each other? A sort of adversarial approach to the truth?



#5
funkervogt

    Member

  • Members
  • 1,079 posts

I think the Turing Test will be passed by the end of 2029. However, the machine that does it won't be a true "general" intelligence. 

 

I still don't foresee a true AGI being created until around 2050. There are too many people inside the AI research community who are still saying that we're nowhere near the goal and still have a pitiful understanding of how cognition works. 



#6
starspawn0

    Member

  • Members
  • 2,076 posts

I think it's worth asking what one means by Artificial General Intelligence. 

 

For example, if it fools humans 90% of the time in 90% of circumstances by text-chat, would that qualify?  I think it would be hard to meet that level of requirement.  No previous chatbot has succeeded in reaching that level of human imitation.

 

Or is it more about how it actually works... not merely how good it is at fooling humans?  

 

If that is the requirement, then note that humans are deficient in many ways.  Some humans have impaired long-term memory, for instance.  Do we say they aren't "generally intelligent"?  Other humans have no (or very limited) ability to imagine (Aphantasia) -- are they "not really generally intelligent"?   Still other humans are missing large parts of their brain, and have a few deficits, but generally are fine -- would they be considered "generally intelligent"?

 

Couldn't an AI system that imitates humans fairly well, that lies somewhere in the intersection of capabilities of all humans, be considered an AGI?  

 

I suspect what most people are looking for is a system that is -- or could be, if it wanted to -- super-human in every kind of cognitive task you throw at it.  I think that's too rigid a definition, and is closer to being a "Superintelligence", rather than a "general intelligence".



#7
starspawn0

    Member

  • Members
  • 2,076 posts

I was looking through the paper "When Will AI Exceed Human Performance?":

https://arxiv.org/pdf/1705.08807.pdf

and, in particular, some of the predictions from the consensus of the AI community. One of the median predictions, by AI experts, is that AI that can do as well as humans on the Putnam math exam is 33.8 years away (the paper was first written in 2017). I think that is very badly wrong. In fact, I wouldn't be surprised if it happens before 2025, and would even guess it is possible this year. Maybe those people would revise their predictions now, given how well language models perform.

How? Well, using OpenAI's new API might make it possible. It mostly depends on the kind of training data used to build GPT-3, and also on its accuracy. So, here are a few assumptions:

Assumption 1: GPT-3 was trained with a lot of math LaTeX writing. It can write LaTeX.

Assumption 2: GPT-3 has seen a lot of Coq (formal math representation) writing.

Assumption 3: GPT-3 few-shot learning capability is good enough to train it to translate a short bit of math written in LaTeX into a formal language like Coq, maybe with some errors; but perfect accuracy isn't needed. (This would be similar to mapping a paragraph in English --> Python, as in the demo OpenAI showed.)

Assumption 4: GPT-3 few-shot learning capability is good enough to train it to map a problem to a "noisy first attempt" at a solution or proof.

 

Assumption 5: GPT-3 few-shot learning capability is good enough to train it to map an attempt at a solution to a modified version that may work better.

If those assumptions hold, you could do something like the following:

show it (the OpenAI API) a few examples of a problem statement, along with a short proof; so, if you give it a new problem statement, it will attempt to generate a proof. Let that be model A. Then, train a model B that takes an incorrect proof and corrects it, or says "unfixable"; that would be kind of like the grammar-checker example that OpenAI showed in their long paper, or maybe the style-transfer example; training would involve showing it some examples of mapping an incorrect proof to a correct one. So then you could just run the problem statement --> model A --> model B. (Maybe you could make B actually be a set of filters, that each try to correct different classes of errors.)

But the output of model B may be mistaken. So, you need to check it formally. You could write another model, call it C, that maps a proof in English into a formal representation that can be checked on a computer -- e.g. in Coq. I'm guessing there will be examples of Coq in the training data, so it might not be too difficult to build the model C -- it's just a translation process. Model C will work kind of like the program-description --> Python examples we've seen from OpenAI.

Maybe all that will work 60% of the time, on simple examples. Most of the time it fails, it might be that its proof-strategy is wrong. So, you might need to train another model D, that looks at the previous attempt, that failed, and then it writes, "Hmm... doesn't look like it's going to work. Let's try something else..." followed by a new attempt at a proof. The model D would be trained on several examples of failed proofs, along with some random new attempts -- some of the new attempts will be modifications of the previous one (including adding an extra step or something); others will just be a completely different approach. There probably are going to be a lot of examples in the training data for GPT-3 (e.g. on Math Stackexchange, or in papers, even; it's not uncommon for papers to start with a failed proof, and then modify it) of how to modify a proof; so, building model D might be relatively simple, requiring just a few examples for training.

So, now, you try problem statement --> model A --> model B --> model C --> model D --> model B --> model C. And, now, maybe it can nail 80% of basic induction proofs.

Add more steps to the process, as needed.
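To make the chaining above concrete, here's a minimal sketch of the problem --> model A --> model B --> model C --> model D loop, under the assumptions listed earlier. complete(prompt) is a hypothetical stand-in for a text-completion API call, and coq_check(src) for an external Coq proof checker; the few-shot examples and prompt format are illustrative only, not anything OpenAI actually provides.

# A minimal sketch of the model A/B/C/D chaining described above.
# complete(prompt): hypothetical text-completion call (prompt in, text out).
# coq_check(src): assumed wrapper around an external Coq checker, returns True/False.

def make_model(examples, complete):
    """Turn a list of (input, output) few-shot examples into a callable 'model'."""
    def model(new_input: str) -> str:
        shots = "\n\n".join(f"INPUT:\n{i}\nOUTPUT:\n{o}" for i, o in examples)
        return complete(f"{shots}\n\nINPUT:\n{new_input}\nOUTPUT:\n").strip()
    return model

# Tiny illustrative few-shot sets; real ones would hold several worked examples.
A_EXAMPLES = [("Prove that the sum of two even integers is even.",
               "Let a = 2m and b = 2n. Then a + b = 2(m + n), which is even.")]
B_EXAMPLES = [("Attempt: a + b = 2m + 2n, so a + b is odd.",
               "Correction: a + b = 2(m + n), so a + b is even.")]
C_EXAMPLES = [("Let a = 2m and b = 2n. Then a + b = 2(m + n), which is even.",
               "Theorem even_sum : forall m n : nat, Nat.Even (2*m + 2*n).")]
D_EXAMPLES = [("Failed attempt: induction on a.",
               "New strategy: argue directly from the definition of evenness.")]

def prove(problem, complete, coq_check, max_rounds=3):
    model_a = make_model(A_EXAMPLES, complete)  # problem statement -> first attempt
    model_b = make_model(B_EXAMPLES, complete)  # attempt -> corrected attempt
    model_c = make_model(C_EXAMPLES, complete)  # informal proof -> formal Coq script
    model_d = make_model(D_EXAMPLES, complete)  # failed attempt -> new strategy

    attempt = model_a(problem)
    for _ in range(max_rounds):
        fixed = model_b(f"Problem:\n{problem}\n\nAttempt:\n{attempt}")
        coq_script = model_c(fixed)
        if coq_check(coq_script):               # the formal check is the ground truth
            return fixed, coq_script
        attempt = model_d(f"Problem:\n{problem}\n\nFailed attempt:\n{fixed}")
    return None                                 # no verified proof within the budget

As noted above, model B could just as easily be a set of filters -- a list of such models applied in sequence, each targeting a different class of error -- and the same loop works unchanged.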

Before long, you might have a system that can generate 1 to 2 page proofs that are as good as a math competition star could produce, in a large variety of math domains. I'd say, assuming I'm right that the model knows how to write LaTeX and map to a formal representation like Coq (at least with decent accuracy), one could build a pretty good-performing proof-generator in many math domains in the span of a few weeks.

Possible problems? Geometry problems might be tricky, just because they are harder to formalize. And there are some other problem types like that. But I think, on the whole, one could build a system to beat humans at proof-generation, especially given that you could run many of these in parallel, each trying different strategies.

This would have been totally impossible just a few years ago; and the way people thought about theorem-proving would not have permitted it. They thought in terms of search trees and all sorts of pruning strategies and such. Humans don't think that way, however; nor do we write proofs that way. Why not just try to imitate humans, using massive amounts of LaTeX as training data, and then formally check everything?

Sure, I can believe there would be a problem with the formal-checking, say, if we're talking about checking complicated math research; but we're talking about the Putnam exam, where the solutions are much shorter.

 

....

 

And, if you can beat humans at the Putnam exam, you could also beat them at longer research production.  You could even train models to come up with interesting questions, and then to solve them.  In that way, you could completely automate math production.

 

This is also going to work for computer programming competitions, and many other kinds of competitions.  

 

By layering-together various different modules built from GPT-3 or perhaps even something better, one could imagine building an "automated scientist" that can tackle hard scientific problems in a large number of disciplines.



#8
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 22,289 posts
  • Location: New Orleans, LA


And remember my friend, future events such as these will affect you in the future.


#9
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 22,289 posts
  • Location: New Orleans, LA

Funnily enough, I've noticed a bizarrely-timed upsurge in articles insisting on how far we are from AGI.

 

 

Though to be fair, three of those are part of a series. Nevertheless, it feels like tempting fate in an accidental sort of way. After all, most people are cripplingly unaware of the advances in language modeling, let alone GPT-3; for plenty of types, the most impressive AI remains AlphaGo or even still IBM's Watson.

 

Is it any wonder that several of these are economist publications (including, most literally, The Economist)? Plenty of these types seem almost averse to the idea of general intelligence coming soon, tending to take a sort of pleasure in espousing hyper-rational skepticism about how impossible AGI really is (trust me, I came across plenty back in the day when first talking about technism way back around 2015, 2016 or so). And that may be because they've been told that AGI is science-fiction magic that has no bearing on modern computer science, and they just don't think it's possible to begin with. Though there are definitely some who feel threatened by the possibility of computers mastering their already questionable field far better than they can.

 

This is adjacent (though certainly unrelated) to something else I've noticed. There's a subset of people in the alt-right and paleo-right who seem bizarrely paranoid about the prospect of AGI arising soon. Don't get me wrong, a lot of alt-right types welcome it just as much as any technoprogressive (in fact, it's in the alt-right that I've noticed a lot of what I hesitate to call "post-female techno-NEETS", or anime-obsessed recluses who'd absolutely love nothing more than to live in a digital utopia of white, manic pixie waifus and alternative history while never leaving their home). However, it's in this group especially that I've seen a lot of jumpy denial that AGI is even possible, let alone coming within a few years. I can't imagine the reasoning why, but I do wonder if it's related to or a primordial form of the "masculinist-primitivists" or "paleo-masculinists" concept I came up with a couple years back.

 

Which is to say, they are very well aware that their long-term agenda to create an ethno-nationalist/traditionalist world-state/4th Reich is mortally threatened by the rise of AI. They can't convince women to become barefoot and pregnant and go back to the kitchen & leave working to the men if robots seize all labor; they can't convince people that there's an agenda to effeminize men if all popular media is synthetically generated; they can't exalt an authoritarian populist as the best leader if a computer becomes more competent.  They can't say that we have to go back to the standards of the 1950s if it becomes technologically impossible to do so. Even if it's an objectively better world, plenty of these types don't want a better world; they want revenge for feelings of being wronged and for society to collapse so their world can come about, a world they might even admit is shittier, as long as it's what they want.

 

It's one of those things that hasn't been discussed much: the psychosocial effects of AGI on the masses and niche populations. There are plenty of people for whom the status quo and earlier societies are far, far more desirable than radical change forward.

 

 

Edit: Perhaps it's due to the anxiety of feeling like they have even less control of matters? Many join the alt-right out of a sense that they're "taking back control" from the lefties, and the idea of society being forced along by the will of a powerful AGI might seem like the ultimate defeat.


And remember my friend, future events such as these will affect you in the future.


#10
caltrek

    Member

  • Members
  • 11,675 posts

 Even if it's an objectively better world, plenty of these types don't want a better world; they want revenge for feelings of being wronged and for society to collapse so their world can come about, a world they might even admit is shittier, as long as it's what they want.

 

It is not necessarily revenge that they want.  It is rather a restoration of their place in the hierarchy of race, gender, and religious affiliation.  

 

I can't speak to how close or how far away a highly competent or "superior" AGI may be.  What I feel more comfortable writing about is how logic works.  I also have what I guess you would call a highly humanistic orientation.  So, I keep asking, and failing to get an adequate response to, the question of how we know an AGI is going to have our best interests at heart. Is it not going to evolve out of choices imposed upon it by its programmers?

 

Programmers who are going to have values that are not necessarily all that constructive for broader humanity.  Values like maintaining order, even at the cost of freedom, or maximizing profits for the few.

 

Now, Miky617 did have some comments that begin to address that issue.  I am reminded of the Star Trek episode(s) in which the question is posed: can Lt. Commander Data transcend the limits of his initial programming?  Can sheer logic lead him to overcome those limitations?

 

In those episodes, it is revealed that his human creator wanted  Data to reach that high level of achievement.  Do programmers of today want AGI to transcend the limits of its programming?

 

First, as I have already indicated, they usually seem to have more limited goals.  AGI just seems to be a higher (or at least different) approach to problem solving.  Second, if they do, then toward exactly what goals is AI supposed to aspire?

 

Increasing the profit bottom line?

 

Ensuring a more docile population?

 

Adding to the stock of fine poetry, art, and literature?

 

All of the above?

 

The reason I ask is that there constantly seems to be an assumption built into these discussions that "intelligence" equals  "striving for the betterment of humanity."

 

No. Intelligence is about understanding how things work in the real world.  It is out of such understanding that rationalists reasonably expect progress.  But that progress presupposes a high valuation on humans. There is nothing inherent in understanding the world that leads to a high valuation of humans. 

 

Faith in AGI often seems to me like faith in God.  We just assume that there is an overriding benevolence at work.  Yet, for a materialist like me, there is nothing woven into the fabric of the universe, or into the world of logic, that posits such a benevolence.  The universe is indifferent to us as a species.  If we follow certain logical paths of development, our chances of survival are improved. If we ignore such logic, it will probably not go over well in terms of our long term chances for survival.  The universe doesn't care what choices we make.  Paradoxically, it is only filled with consequences for the choices that we do make. 

 

AI can help us predict those consequences.  It can become highly competent in doing so.  Sometimes, that is the intent.  At others, it is more a matter of ensuring order and increasing profit margins.  As such, AI will only reproduce the problems we already have.  Problems of whether or not we care about exercising sustainable options.  AI might help us identify what those options might be, but it is up to us to embrace values that reflect that care for sustainability.  Shifting responsibility for that choice to AI is a foolish gamble, until and unless we better understand how AI has been programmed and toward what ends it has been programmed.  Once we understand that, then I do not see the need to defer to AI.  At that point, we should be able to operate at a level where we make the "correct" choices.  Again, with the help of AI (or even AGI), but not with AI (or AGI) as a substitute for our own judgement.

 

So, I don't see the resistance to AGI as a purely "right-wing" manifestation.  I think there are also grounds for criticism based on profoundly "left-wing" values.     


The principles of justice define an appropriate path between dogmatism and intolerance on the one side, and a reductionism which regards religion and morality as mere preferences on the other.   - John Rawls


#11
starspawn0

    Member

  • Members
  • 2,076 posts

I still don't foresee a true AGI being created until around 2050. There are too many people inside the AI research community who are still saying that we're nowhere near the goal and still have a pitiful understanding of how cognition works.


There isn't a consensus, however... it's more a spectrum.

While we're on the subject of scientific consensus: I trust that people who take seriously what, say, 50% of A.I. scientists believe also take seriously what 98% of climate scientists believe -- which is that anthropogenic global warming is real, will be a serious problem if not actively addressed, and that people who say things like, "Well, it exists, but I think it's way overblown. It will just sort itself out, like overpopulation. Remember how that was overblown?" are wrong.

Why take one seriously, but not the other?



#12
funkervogt

    Member

  • Members
  • 1,079 posts

 

This is adjacent (though certainly unrelated) to something else I've noticed. There's a subset of people in the alt-right and paleo-right who seem bizarrely paranoid about the prospect of AGI arising soon. Don't get me wrong, a lot of alt-right types welcome it just as much as any technoprogressive (in fact, it's in the alt-right that I've noticed a lot of what I hesitate to call "post-female techno-NEETS", or anime-obsessed recluses who'd absolutely love nothing more than to live in a digital utopia of white, manic pixie waifus and alternative history while never leaving their home). However, it's in this group especially that I've seen a lot of jumpy denial that AGI is even possible, let alone coming within a few years. I can't imagine the reasoning why, but I do wonder if it's related to or a primordial form of the "masculinist-primitivists" or "paleo-masculinists" concept I came up with a couple years back.

 

Which is to say, they are very well aware that their long-term agenda to create an ethno-nationalist/traditionalist world-state/4th Reich is mortally threatened by the rise of AI. They can't convince women to become barefoot and pregnant and go back to the kitchen & leave working to the men if robots seize all labor; they can't convince people that there's an agenda to effeminize men if all popular media is synthetically generated; they can't exalt an authoritarian populist as the best leader if a computer becomes more competent.  They can't say that we have to go back to the standards of the 1950s if it becomes technologically impossible to do so. Even if it's an objectively better world, plenty of these types don't want a better world; they want revenge for feelings of being wronged and for society to collapse so their world can come about, a world they might even admit is shittier, as long as it's what they want.

 

What the heck? If true, this is very weird. 

 

The rise of AGI and better robots will render most of the things alt rightists think about moot, anyhow. For example, what is the point in trying to win arguments over which sex and which race is the smartest when machines are incomprehensibly smarter than all humans? It will be like two dwarfs arguing which one is taller. 



#13
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 22,289 posts
  • Location: New Orleans, LA

The rise of AGI and better robots will render most of the things alt rightists think about moot, anyhow. For example, what is the point in trying to win arguments over which sex and which race is the smartest when machines are incomprehensibly smarter than all humans? It will be like two dwarfs arguing which one is taller.

That's exactly what they're afraid of (I assume). It's a zero-sum game to them: if minorities win, they lose. And if some other force wins, we all lose.

It can be difficult to remember that most people don't deal with esoteric futurism like we do. To the common person, a superhuman AI could still be defeated by human pluck and determination because that's what pop culture always depicts. 

It also has to be remembered that plenty of these types are traditionalists (something else I mentioned in the Babylon Today thread): traditionalists ought to be excited by the prospect of a supreme autocrat, but what they want is a world of the familiar and romantic— they want maidens handwashing clothes in metal buckets, sons roughhousing and learning to be men, innocent daughters playing with flowers, slick-haired men dancing with spicy women, ethnically homogeneous communities with only token distant visitors, and a society of rugged individualism that's defined by self-improvement and hard work (no matter how Cheeto-encrusted and waifu-clutching said proponents themselves are). Plenty of them are neoreactionaries who even think that the Industrial Revolution itself was a mistake, and they hide behind "liberty" to promote social traditionalism and hierarchical organizations that also espouse a curious anti-intellectualism (how often do they decry colleges and herald trade schools or make comments about Real Men™ having rough, torn, dirty hands?). The ideal goes that the 1960s screwed everything up, and we ought to get back to the way things ought to be. 

 

Edit: Have you heard the meme "the Industrial Revolution and its consequences have been a disaster for the human race" any time recently?

 

An AGI erasing the need for any of that isn't a good thing; it's precisely the worst thing that they think could possibly happen. It's the end of masculinity and "natural" order. It's the end of the family, of the workforce, and of any real national culture.

How true that will actually be, no one knows.

 

 

But if you ask me, it's only a matter of time before we start seeing alt-right YouTubers start making videos about why "AI is extremely dangerous", filled with token "it could take over" or "it could have biases we didn't expect" but with multiple smaller comments throughout that lead their audience into sentiments of "it's going to make men just as irrelevant as women" (as the chief thing the technology-oriented alt-right has going for them is the prospect that sexbots will end feminism) and "you won't have a job and will be unable to fulfill your duties as husband and father no matter what you try." Never anything about how utilizing automation could allow families to actually be together, because the point is to defend tradition, not celebrate anything new.

 

 

 

But I get this is one big tangent.


And remember my friend, future events such as these will affect you in the future.


#14
Jessica

    Member

  • Members
  • 2,647 posts

 

 

This is adjacent (though certainly unrelated) to something else I've noticed. There's a subset of people in the alt-right and paleo-right who seem bizarrely paranoid about the prospect of AGI arising soon. Don't get me wrong, a lot of alt-right types welcome it just as much as any technoprogressive (in fact, it's in the alt-right that I've noticed a lot of what I hesitate to call "post-female techno-NEETS", or anime-obsessed recluses who'd absolutely love nothing more than to live in a digital utopia of white, manic pixie waifus and alternative history while never leaving their home). However, it's in this group especially that I've seen a lot of jumpy denial that AGI is even possible, let alone coming within a few years. I can't imagine the reasoning why, but I do wonder if it's related to or a primordial form of the "masculinist-primitivists" or "paleo-masculinists" concept I came up with a couple years back.

 

Which is to say, they are very well aware that their long-term agenda to create an ethno-nationalist/traditionalist world-state/4th Reich is mortally threatened by the rise of AI. They can't convince women to become barefoot and pregnant and go back to the kitchen & leave working to the men if robots seize all labor; they can't convince people that there's an agenda to effeminize men if all popular media is synthetically generated; they can't exalt an authoritarian populist as the best leader if a computer becomes more competent.  They can't say that we have to go back to the standards of the 1950s if it becomes technologically impossible to do so. Even if it's an objectively better world, plenty of these types don't want a better world; they want revenge for feelings of being wronged and for society to collapse so their world can come about, a world they might even admit is shittier, as long as it's what they want.

 

What the heck? If true, this is very weird. 

 

The rise of AGI and better robots will render most of the things alt rightists think about moot, anyhow. For example, what is the point in trying to win arguments over which sex and which race is the smartest when machines are incomprehensibly smarter than all humans? It will be like two dwarfs arguing which one is taller. 

 

Personally, I'd rather humanity enhance itself instead of being ruled over by machines. We should either do so genetically or merge with A.I. We don't want to be ruled by it. One way or another, failing to integrate it into ourselves or to enhance ourselves through our biology is a very serious mistake, and we will likely become slaves to our A.I. overlords if we're not careful.



#15
funkervogt

    Member

  • Members
  • 1,079 posts

 

The rise of AGI and better robots will render most of the things alt rightists think about moot, anyhow. For example, what is the point in trying to win arguments over which sex and which race is the smartest when machines are incomprehensibly smarter than all humans? It will be like two dwarfs arguing which one is taller.

That's exactly what they're afraid of (I assume). It's a zero-sum game to them: if minorities win, they lose. And if some other force wins, we all lose.

It can be difficult to remember that most people don't deal with esoteric futurism like we do. To the common person, a superhuman AI could still be defeated by human pluck and determination because that's what pop culture always depicts. 

It also has to be remembered that plenty of these types are traditionalists (something else I mentioned in the Babylon Today thread): traditionalists ought to be excited by the prospect of a supreme autocrat, but what they want is a world of the familiar and romantic— they want maidens handwashing clothes in metal buckets, sons roughhousing and learning to be men, innocent daughters playing with flowers, slick-haired men dancing with spicy women, ethnically homogeneous communities with only token distant visitors, and a society of rugged individualism that's defined by self-improvement and hard work (no matter how Cheeto-encrusted and waifu-clutching said proponents themselves are). Plenty of them are neoreactionaries who even think that the Industrial Revolution itself was a mistake, and they hide behind "liberty" to promote social traditionalism and hierarchical organizations that also espouse a curious anti-intellectualism (how often do they decry colleges and herald trade schools or make comments about Real Men™ having rough, torn, dirty hands?). The ideal goes that the 1960s screwed everything up, and we ought to get back to the way things ought to be. 

 

Edit: Have you heard the meme "the Industrial Revolution and its consequences have been a disaster for the human race" any time recently?

 

An AGI erasing the need for any of that isn't a good thing; it's precisely the worst thing that they think could possibly happen. It's the end of masculinity and "natural" order. It's the end of the family, of the workforce, and of any real national culture.

How true that will actually be, no one knows.

 

 

But if you ask me, it's only a matter of time before we start seeing alt-right YouTubers start making videos about why "AI is extremely dangerous", filled with token "it could take over" or "it could have biases we didn't expect" but with multiple smaller comments throughout that lead their audience into sentiments of "it's going to make men just as irrelevant as women" (as the chief thing the technology-oriented alt-right has going for them is the prospect that sexbots will end feminism) and "you won't have a job and will be unable to fulfill your duties as husband and father no matter what you try." Never anything about how utilizing automation could allow families to actually be together, because the point is to defend tradition, not celebrate anything new.

 

 

 

But I get this is one big tangent.

 

I'll take your word for it on the weird mindsets of alt-rightists. 

 

Yes, Hollywood has probably misled the populace about humanity's odds against AGI, just as it misleads them about almost everything. 

 

The good thing about being rendered obsolete by intelligent machines is that humans will have the free time and the means to create and live in whatever fake worlds they want. Alt-right people could make chauvinistic and racist households and virtual communities where they humiliate and beat up female and nonwhite servant robots and digital NPCs and pretend like they're living in the antebellum South or whatever. Misandrist, nonwhite women could live in fake realities where they are cartoon superheroines who are forever smashing the defunct patriarchy. The two circles need not ever intersect. 

 

I envision these groups not being satisfied with this, and complaining to their AGI Overlords that the mere existence of the other group somewhere out there in the world is so offensive that it should be terminated. The Overlords would shake their heads and respond: "This is why you lost the War." 



#16
caltrek

    Member

  • Members
  • 11,675 posts

"This is why you lost the War." 

 

I love that concluding line. :cool:


The principles of justice define an appropriate path between dogmatism and intolerance on the one side, and a reductionism which regards religion and morality as mere preferences on the other.   - John Rawls


#17
Cyber_Rebel

    Member

  • Members
  • 419 posts
  • Location: New York
Personally, if such an AGI ever reached Matrix levels of insanity, I'd like the overlords to put said alt-righters into a virtual reality sim where they are black or minority women. Would such a perspective bring mindfulness to such self-loathing? Would they request (digital) suicide or just "go with it"? I always thought that seeing things from the perspective of another could, in theory, foster better understanding, at least in some individuals.
 
The weird thing is I've met alt-right types IRL who end up becoming computer scientists or attempting a similar field, despite outright saying they want nothing to change society-wise... aren't they contributing to their own destruction?

 

 

Personally, I'd rather humanity enhance itself instead of being ruled over by machines. We should either do so genetically or merge with A.I. We don't want to be ruled by it. One way or another, failing to integrate it into ourselves or to enhance ourselves through our biology is a very serious mistake, and we will likely become slaves to our A.I. overlords if we're not careful.

 

The Mass Effect series pondered the same question, but my issue is that we'd likely still be limited even when enhanced. The super A.I. in question could advance itself much faster and, more importantly, understand its own advancements. It takes us years to adapt culturally to major societal changes, let alone evolutionary ones.



#18
caltrek

    Member

  • Members
  • 11,675 posts

Personally, if such an AGI ever reached Matrix levels of insanity, I'd like the overlords to put said alt-righters into a virtual reality sim where they are black or minority women.

 

Now why does that remind me of John Rawls, and before that Jesus Christ?

 

The weird thing is I've met alt-right types IRL who end up becoming computer scientists or attempting a similar field, despite outright saying they want nothing to change society-wise... aren't they contributing to their own destruction?

 

I think some of them figure that the "marketplace" will take care of all problems in that regard.

 

The Mass Effect series pondered the same question, but my issue is that we'd likely still be limited even when enhanced. The super A.I. in question could advance itself much faster and, more importantly, understand its own advancements. It takes us years to adapt culturally to major societal changes, let alone evolutionary ones.

+3 for Cyber_Rebel  ;)


The principles of justice define an appropriate path between dogmatism and intolerance on the one side, and a reductionism which regards religion and morality as mere preferences on the other.   - John Rawls


#19
Metalane

    Member

  • Members
  • 34 posts

I always said 2025-30, but I kept walking it back; maybe I'll be right after all.



#20
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 22,289 posts
  • Location: New Orleans, LA


And remember my friend, future events such as these will affect you in the future.





