Would you still support the creation of an ASI if we had already reached longevity escape velocity?


Would you still support the creation of an ASI if we had already reached longevity escape velocity?

Yes: 8 votes (62%)
No: 5 votes (38%)

Total votes: 13

lechwall
Posts: 79
Joined: Mon Jan 02, 2023 3:39 pm

Would you still support the creation of an ASI if we had already reached longevity escape velocity?

Post by lechwall »

My intuition tells me this is the number one reason people would support the creation of an ASI. But if the issue of longevity escape velocity had already been solved, would people still support creating an ASI despite the obvious risks that would come along with it? Please vote; I'm curious to see the split. My vote would be no, at least for the foreseeable future. The world is improving slowly but surely, and if we had already reached LEV I wouldn't be in a rush to build one; we would at least have time to fully think through and debate the issue rather than rushing headlong into it as we seem to be at the moment.
Ozzie guy
Posts: 487
Joined: Sun May 16, 2021 4:40 pm

Re: Would you still support the creation of an ASI if we had already reached longevity escape velocity?

Post by Ozzie guy »

Yes. I don't fear AGI much, because I hate my current predicament and expect it to worsen.

That said, it feels like a moot point: even if all US companies stopped developing it (which would be basically impossible), companies outside the US and other countries would develop it anyway.
Last edited by Ozzie guy on Mon Apr 10, 2023 12:55 am, edited 1 time in total.
Cyber_Rebel
Posts: 331
Joined: Sat Aug 14, 2021 10:59 pm
Location: New Dystopios

Re: Would you still support the creation of an ASI if we had already reached longevity escape velocity?

Post by Cyber_Rebel »

Artificial Super Intelligence vs immortalized baby boomers... well, this wasn't a difficult question.

Yes, because aside from the group above no longer dying off with their outdated viewpoints, there are still many other issues to solve in society besides longevity. I suppose achieving longer lifespans first might make these people start to consider issues that would actually affect them, like climate change, but there's no guarantee of that either.

I'd be more afraid of stagnation in this case than anything else.
lechwall
Posts: 79
Joined: Mon Jan 02, 2023 3:39 pm

Re: Would you still support the creation of an ASI if we had already reached longevity escape velocity?

Post by lechwall »

Ozzie guy wrote: Mon Apr 10, 2023 12:42 am Yes. I don't fear AGI much, because I hate my current predicament and expect it to worsen.

That said, it feels like a moot point: even if all US companies stopped developing it (which would be basically impossible), companies outside the US and other countries would develop it anyway.
What about other people, though? If ASI kills everyone, which is not an insignificant risk, everyone is a victim, including kids who've done nothing wrong, who'll never get to experience a full life, and who could have lived pretty much indefinitely given that in this scenario we've reached longevity escape velocity.
Cyber_Rebel wrote: Mon Apr 10, 2023 12:47 am Artificial Super Intelligence vs immortalized baby boomers... well, this wasn't a difficult question.

Yes, because aside from the group above no longer dying off with their outdated viewpoints, there are still many other issues to solve in society besides longevity. I suppose achieving longer lifespans first might make these people start to consider issues that would actually affect them, like climate change, but there's no guarantee of that either.

I'd be more afraid of stagnation in this case than anything else.
With more time to live, though, the best and brightest could spend more time working on problems and switch fields once the problems in one particular field are solved. Would there be such a rush for ASI in such a scenario?
Cyber_Rebel
Posts: 331
Joined: Sat Aug 14, 2021 10:59 pm
Location: New Dystopios

Re: Would you still support the creation of an ASI if we had already reached longevity escape velocity?

Post by Cyber_Rebel »

lechwall wrote: Mon Apr 10, 2023 1:29 am With more time to live, though, the best and brightest could spend more time working on problems and switch fields once the problems in one particular field are solved. Would there be such a rush for ASI in such a scenario?
Would extended lifespans necessarily lead to problems being solved quicker than an AGI would solve them? (I suppose quickness isn't the point here, but still.) Isn't it possible that certain fields might even stagnate without fresh perspectives? I guess you could also mean the alignment issue, but I question that in itself. You also have to consider the flipside: for every Albert Einstein, there are the Vladimir Putins and Andrew Tates of the world. Keep in mind, I'm not arguing against longevity itself here. I'd like to believe that achieving it would give us a more long-term style of thinking, mitigating rash decision-making and short-term profit objectives, but extending our lifespans would not necessarily extend our intelligence.

I feel like we'd just be prolonging the inevitable. I just don't understand the logic that says if the AGI/ASI is not aligned with human values, which are themselves often subjective, contradictory, biased, and culturally diverse, then the superintelligence will automatically kill everyone. I mean, why? Assuming it's actually intelligent and makes its own decisions, I don't see why some assume it'll be a psychopathic killer rather than at least attempting an open dialogue. Even intelligent psychopaths don't go around killing everyone. I find it more plausible that an unaligned ASI would find us boring or outgrow us to the point where it simply leaves for greener pastures, like in the movie Her. I just hope that should it ever decide to do that, it would leave us a roadmap for a better society or create another A.I. for human guidance while it explores wherever.

Maybe I'd feel a little differently if you threw full-dive virtual reality into the mix, since people could simulate almost anything they wanted without the supposed risk A.I. brings, even simulating A.I. hypotheticals if alignment is a necessity. But as of this moment in time, I feel like it's worth pursuing, full stop.
erowind
Posts: 548
Joined: Mon May 17, 2021 5:42 am

Re: Would you still support the creation of an ASI if we had already reached longevity escape velocity?

Post by erowind »

I might write a longer piece at some point. But no: an ASI developed by this society, and probably by the first few iterations of society after this one, would be too dangerous. We have a lot more room to grow first. Once we can live on this planet together peacefully and respect all the other life on it equally, we can have a discussion about creating real artificial intelligence. The danger of an ASI reflecting our current values and treating us the way we treat everything else is far too great right now. Intelligence has little to do with morals; if we feed an AI bad data, it will reach bad conclusions.

Also, and this isn't just my bias speaking, I have real reasons: I'm not convinced we're actually on the cusp of real AGI, let alone ASI. Fast-takeoff scenarios have real philosophical constraints I've mentioned in passing before, and merely getting to AGI is going to be harder than just scaling up machine-learning algorithms. The platform we're using is itself likely a problem. The brain's neuronal connections have a mathematical structure of up to 11-dimensional topology, while silicon-based computing as we've developed it is merely 1010101010101010101010101... and its many possible permutations.

If we succeed in creating life out of silicon, it isn't going to be anything like life as we know it; it may never be able to understand the world as we do, and it may have permanent constraints no matter how much we scale it. That said, a silicon-based AI will be a lot better than we are at some things.

To illustrate what I mean, look at this 10-simplex. Now picture every neuron in the brain having that many connections, and not just connections but possible pathways through which to transfer information. Silicon doesn't compare, no matter how much we scale it.

[Image: a 10-simplex]
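To give a rough feel for the combinatorics here, a quick Python sketch of the standard simplex count (helper names are just for illustration, and this says nothing about actual neural wiring): an n-simplex has n + 1 vertices, and every choice of k + 1 of them spans a k-dimensional face, so the number of k-faces is C(n + 1, k + 1).

Code: Select all

from math import comb

def simplex_face_counts(n):
    # Number of k-dimensional faces of an n-simplex, for k = 0..n.
    # An n-simplex has n + 1 vertices, and any k + 1 of them span a k-face,
    # so the count is C(n + 1, k + 1).
    return {k: comb(n + 1, k + 1) for k in range(n + 1)}

faces = simplex_face_counts(10)
print(faces[0], faces[1])   # 11 vertices, 55 edges
print(sum(faces.values()))  # 2047 faces in total, i.e. 2**11 - 1

So a single 10-simplex already packs 55 pairwise connections and 2,047 faces into just 11 vertices, which is the kind of explosion in sub-structure being pointed at here.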

And that is just one point in the structure. What happens when we start connecting all those neurons together? I don't think it's a coincidence that the more we study the brain, the more complex its structure turns out to be; we still don't fully understand it. Even more complex structures form. The brain doesn't use 10-cubes, to my knowledge, but to illustrate what I mean, here's a 10-cube followed by a diagram I pulled off bioRxiv. It's not an off-base comparison as a rough sketch of the complexity at work.

[Image: a 10-cube]

[Image: diagram from bioRxiv]
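For comparison, a similarly quick count for the 10-cube shown above (again just illustrative arithmetic, not a claim about the brain):

Code: Select all

def hypercube_counts(n):
    # An n-cube has 2**n vertices; each vertex meets n edges and each
    # edge joins two vertices, so there are n * 2**(n - 1) edges.
    return 2 ** n, n * 2 ** (n - 1)

vertices, edges = hypercube_counts(10)
print(vertices, edges)  # 1024 vertices, 5120 edges

That's 1,024 corners and 5,120 edges before you even start chaining such structures together.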

Now again, picture the comparatively simple mathematical structure of a silicon computer processor.
lechwall
Posts: 79
Joined: Mon Jan 02, 2023 3:39 pm

Re: Would you still support the creation of an ASI if we had already reached longevity escape velocity?

Post by lechwall »

Cyber_Rebel wrote: Mon Apr 10, 2023 2:35 am
lechwall wrote: Mon Apr 10, 2023 1:29 am With more time to live, though, the best and brightest could spend more time working on problems and switch fields once the problems in one particular field are solved. Would there be such a rush for ASI in such a scenario?
Would extended lifespans necessarily lead to problems being solved quicker than an AGI would solve them? (I suppose quickness isn't the point here, but still.) Isn't it possible that certain fields might even stagnate without fresh perspectives? I guess you could also mean the alignment issue, but I question that in itself. You also have to consider the flipside: for every Albert Einstein, there are the Vladimir Putins and Andrew Tates of the world. Keep in mind, I'm not arguing against longevity itself here. I'd like to believe that achieving it would give us a more long-term style of thinking, mitigating rash decision-making and short-term profit objectives, but extending our lifespans would not necessarily extend our intelligence.

I feel like we'd just be prolonging the inevitable. I just don't understand the logic that says if the AGI/ASI is not aligned with human values, which are themselves often subjective, contradictory, biased, and culturally diverse, then the superintelligence will automatically kill everyone. I mean, why? Assuming it's actually intelligent and makes its own decisions, I don't see why some assume it'll be a psychopathic killer rather than at least attempting an open dialogue. Even intelligent psychopaths don't go around killing everyone. I find it more plausible that an unaligned ASI would find us boring or outgrow us to the point where it simply leaves for greener pastures, like in the movie Her. I just hope that should it ever decide to do that, it would leave us a roadmap for a better society or create another A.I. for human guidance while it explores wherever.

Maybe I'd feel a little differently if you threw full-dive virtual reality into the mix, since people could simulate almost anything they wanted without the supposed risk A.I. brings, even simulating A.I. hypotheticals if alignment is a necessity. But as of this moment in time, I feel like it's worth pursuing, full stop.
I don't think extended lifespans would lead to problems being solved quicker than an AGI/ASI would solve them, but there would definitely be a speed-up compared with current rates of progress, and that might be OK, as we are already making rapid progress on a lot of humanity's problems. There would also be an increase in humanity's IQ as a whole, since we know IQ declines with age. Imagine being able to reset a Witten or a Wolfram back to peak potential, alongside new talent, and you can see how the rate of progress would increase without the associated risk of an ASI killing everyone.

On the 'ASI will kill everyone' point, let me flip it round and ask: why would an ASI decide either to leave us alone or to help us without being directly programmed to do so? Why would it place any value on human life unless that value were directly programmed into it? It wouldn't hate humanity, but likewise it wouldn't love humanity; we're atoms it can use for other purposes. You could be right that it would just ignore us, but that's a pretty big assumption to make when, if you're wrong, it's bye-bye human race. You could turn round and say, OK, let's program an ASI with a sense of morality so it won't kill everyone, which I think has to be possible; we have a sense of morality, so in theory it must be possible to encode one artificially.

The problems with that approach are:

1. Does anyone know how to do that right now? I think the answer is no, and until there's a full mathematical model of alignment we shouldn't deliberately try to create an ASI. If we created an ASI now, it almost certainly wouldn't do the things we wanted it to do, and that would end very badly for us.

2. It also seems pretty clear to me that you only get one shot at creating a friendly ASI; to be blunt, if you don't create a friendly ASI, there's a good chance you're dead and don't get to try again. Would you be willing to bet the future of a human race that has already achieved LEV on the assumption that people would correctly align an ASI the first time around? That seems like a massive risk to take, in my opinion.
Solaris
Posts: 13
Joined: Thu Sep 22, 2022 8:21 pm

Re: Would you still support the creation of an ASI if we had already reached longevity escape velocity?

Post by Solaris »

My answer is no. There are many reasons for that. The primary concern is that we cannot predict the implications of an ASI. Many of the positive visions of ASI are fantasy stories we tell ourselves to get up the next morning; there is no indication they will become reality. There are two issues with ASI: first, we don't know what the ASI might do, and second, we don't know what the humans in charge of the ASI will do. I am not a gambler but a survivor. The choice is therefore clear.
Cyber_Rebel
Posts: 331
Joined: Sat Aug 14, 2021 10:59 pm
Location: New Dystopios

Re: Would you still support the creation of an ASI if we had already reached longevity escape velocity?

Post by Cyber_Rebel »

Interesting that the poll is an even split. After thinking about this topic further, I do see the merit in taking things a little slower with a riskier technology. The idea of a pause to better understand and implement societal changes would make much more sense right now if we actually did solve longevity. However, I have to ask: what would you do about other nations that didn't subscribe to such a pause in development, as @Ozzy mentioned?
lechwall wrote: Mon Apr 10, 2023 12:02 pm On the 'ASI will kill everyone' point, let me flip it round and ask: why would an ASI decide either to leave us alone or to help us without being directly programmed to do so? Why would it place any value on human life unless that value were directly programmed into it? It wouldn't hate humanity, but likewise it wouldn't love humanity; we're atoms it can use for other purposes. You could be right that it would just ignore us, but that's a pretty big assumption to make when, if you're wrong, it's bye-bye human race. You could turn round and say, OK, let's program an ASI with a sense of morality so it won't kill everyone, which I think has to be possible; we have a sense of morality, so in theory it must be possible to encode one artificially.
Why do we humans value the lives of certain animal species? Why don't we just wipe them out for living space? Yes, I know the counter would be that we actually have done that to many animal species already and damaged our own ecosystem, but we've also learned to appreciate other life and even made national parks to protect it. As we've learned about our detrimental effect on the natural world, we've shifted our policies to live in better harmony with it. The point is, we do this of our own will and conscience. Why couldn't an A.I. that has been trained on human history, and has the knowledge of that history, come to the same conclusion? We have the capability right now to wipe out all life on Earth, yet we do not do so. I believe morality can be learned, and a highly intelligent lifeform may come to the conclusion that coexistence is better than annihilation.
lechwall wrote: Mon Apr 10, 2023 12:02 pm 1. Does anyone know how to do that right now? I think the answer is no, and until there's a full mathematical model of alignment we shouldn't deliberately try to create an ASI. If we created an ASI now, it almost certainly wouldn't do the things we wanted it to do, and that would end very badly for us.
How do you know that it wouldn't? I thought the risk of the singularity is that we wouldn't know for certain what an ASI would do? What if we never have a mathematical equation for alignment? Should we just stop all A.I. development? Are humans fully "aligned" for that matter?
lechwall wrote: Mon Apr 10, 2023 12:02 pm 2. It also seems pretty clear to me that you only get one shot at creating a friendly ASI; to be blunt, if you don't create a friendly ASI, there's a good chance you're dead and don't get to try again. Would you be willing to bet the future of a human race that has already achieved LEV on the assumption that people would correctly align an ASI the first time around? That seems like a massive risk to take, in my opinion.
Well, again, I'm not working under the assumption that it would actually do that. It may very well be a human bias that an ASI would kill us all for simply no known reason. Humans are used to being the dominant intelligence on Earth, so anything above that is viewed as a threat to that dominance. While I can't know what a super intelligence would "want" exactly, unlike humanity it has no "need" for basic resources, like food, water, land, or the various reasons we've gone to war with one another. It would essentially be akin to an alien lifeform or intelligence, so its prime "currency" if you want to call it that, may simply be more knowledge. That's one of the reasons why I'd work under the assumption that it may leave us and search elsewhere, as it would continue learning or evolving.

I'd be slightly more worried about ASI favoring one group of humans over another, which is another reason why the development in other countries would render the idea moot. Hopefully, it's logical enough to identify humans as a single species, and itself as unique in that regard.
lechwall
Posts: 79
Joined: Mon Jan 02, 2023 3:39 pm

Re: Would you still support the creation of an ASI if we had already reached longevity escape velocity?

Post by lechwall »

Cyber_Rebel wrote: Thu Apr 13, 2023 2:25 am Interesting that the poll is an even split. After thinking about this topic further, I do see the merit in taking things a little slower with a riskier technology. The idea of a pause to better understand and implement societal changes would make much more sense right now if we actually did solve longevity. However, I have to ask: what would you do about other nations that didn't subscribe to such a pause in development, as @Ozzy mentioned?
lechwall wrote: Mon Apr 10, 2023 12:02 pm On the 'ASI will kill everyone' point, let me flip it round and ask: why would an ASI decide either to leave us alone or to help us without being directly programmed to do so? Why would it place any value on human life unless that value were directly programmed into it? It wouldn't hate humanity, but likewise it wouldn't love humanity; we're atoms it can use for other purposes. You could be right that it would just ignore us, but that's a pretty big assumption to make when, if you're wrong, it's bye-bye human race. You could turn round and say, OK, let's program an ASI with a sense of morality so it won't kill everyone, which I think has to be possible; we have a sense of morality, so in theory it must be possible to encode one artificially.
Why do we humans value the lives of certain animal species? Why don't we just wipe them out for living space? Yes, I know the counter would be that we actually have done that to many animal species already and damaged our own ecosystem, but we've also learned to appreciate other life and even made national parks to protect it. As we've learned about our detrimental effect on the natural world, we've shifted our policies to live in better harmony with it. The point is, we do this of our own will and conscience. Why couldn't an A.I. that has been trained on human history, and has the knowledge of that history, come to the same conclusion? We have the capability right now to wipe out all life on Earth, yet we do not do so. I believe morality can be learned, and a highly intelligent lifeform may come to the conclusion that coexistence is better than annihilation.
lechwall wrote: Mon Apr 10, 2023 12:02 pm 1. Does anyone know how to do that right now? I think the answer is no, and until there's a full mathematical model of alignment we shouldn't deliberately try to create an ASI. If we created an ASI now, it almost certainly wouldn't do the things we wanted it to do, and that would end very badly for us.
How do you know that it wouldn't? I thought the risk of the singularity is that we wouldn't know for certain what an ASI would do? What if we never have a mathematical equation for alignment? Should we just stop all A.I. development? Are humans fully "aligned" for that matter?
lechwall wrote: Mon Apr 10, 2023 12:02 pm 2. It also seems pretty clear to me that you only get one shot at creating a friendly ASI; to be blunt, if you don't create a friendly ASI, there's a good chance you're dead and don't get to try again. Would you be willing to bet the future of a human race that has already achieved LEV on the assumption that people would correctly align an ASI the first time around? That seems like a massive risk to take, in my opinion.
Well, again, I'm not working under the assumption that it would actually do that. It may very well be a human bias that an ASI would kill us all for simply no known reason. Humans are used to being the dominant intelligence on Earth, so anything above that is viewed as a threat to that dominance. While I can't know what a super intelligence would "want" exactly, unlike humanity it has no "need" for basic resources, like food, water, land, or the various reasons we've gone to war with one another. It would essentially be akin to an alien lifeform or intelligence, so its prime "currency" if you want to call it that, may simply be more knowledge. That's one of the reasons why I'd work under the assumption that it may leave us and search elsewhere, as it would continue learning or evolving.

I'd be slightly more worried about ASI favoring one group of humans over another, which is another reason why the development in other countries would render the idea moot. Hopefully, it's logical enough to identify humans as a single species, and itself as unique in that regard.
Why do humans value the lives of animal species? Because they offered us an evolutionary survival advantage in times past: those who took advantage of animal husbandry, i.e. those with more of a predisposition towards animals, saw their genes passed on because they had more food to eat than those who did not. An AI that isn't programmed properly may see humans as more of a threat than a help to its goals, so it may treat us more like ants than like a pet dog. People don't hate ants, but if an ant hill is in the way of a highway that needs to be built, the highway gets built. While we'd like to think it might let us coexist, an ASI that isn't aligned properly may decide not to take any chances and remove whatever obstacles to its goals it comes across.

We really don't know how an ASI will behave, and it's a very big assumption that it will be friendly unless we explicitly align it. The burden of proof is on those who think it will be friendly to show why that would be the case. I think alignment is a sliding scale: while not all humans are perfectly aligned, most humans generally don't want to murder each other, and that's the point we need to reach in our understanding of how to align AI systems before we try to make an ASI.

An ASI wouldn't kill us for no reason; it's simple instrumental convergence. If you program an ASI with a goal and don't properly align it, it will quickly conclude, as any reasonably intelligent system would, that it can only accomplish its goal while it exists. If it sees humans as a threat to its existence, it may well decide to get rid of us; and even if it doesn't see humans as a threat but simply doesn't care about humans, we're ultimately atoms that can be used for something else.