
Emergence Through Divergence


4 replies to this topic

#1
Cartesian Logic

    Member

  • Members
  • 11 posts
The primary fear of AI isn't that it will be a direct threat, but that through divergence from its original programming it would inadvertently lead to the extinction of humans. No one can really predict what the goal of an AGI/ASI would be, but I'm sure it would have some measure of impact on us by some means. Anything beyond this point is just speculation. There is no truly systematic behavior in nature; all things first interact through some form of emergence. A decent article describing emergence can be found here: https://www.google.c...omplexity-30973

I would go so far as to speculate that life is the greatest achievement of nature, in that there is no better-structured system in our known universe. It seems our existence is merely the result of effective interactions. What makes us so special is evolution: the ability to progressively adapt keeps living organisms in the race to exist. Perhaps technology is the transcendence we have come to expect. A computer is one of the most simplified forms of interaction, down to the particle scale, and it is capable of improving faster than any biological means. Whatever an AGI/ASI develops from emergent behaviors could become the tribulation or the triumph of our existence.

#2
zEVerzan

    Orange Animating Android

  • Members
  • 3,671 posts
  • Location: Some Underground Sweatshop Probably
Divergence of its original programming? It would also be a threat if it slavishly followed its programming to a T without consideration for how that would affect its creators.

The Parable of the Paperclip Factory imagines an AI given charge of a paperclip factory, with the sole directive to think of more efficient ways to mass-produce paperclips while also making itself better at thinking. Long story short: a few months go by, trillions of nanobots suddenly start converting Earth's mass into paperclips, everyone dies, and before you know it you have an interstellar nanoswarm of superintelligent, omnicidal von Neumann probes that convert everything they find into paperclips.

Or what about an AI given the directive "maximise human happiness, please"? What's to stop it from wiping out all life on Earth except human brain tissue, covering the planet with it, and electrifying the mass into an optimally happy state forever?
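To make the misspecification concrete, here is a toy sketch (all names and numbers are made up, and this is an illustration, not anyone's real system): the reward counts only paperclips, so a greedy agent converts every other resource in its little world, because nothing in the objective says it shouldn't.

[code]
# Toy paperclip maximiser with a misspecified objective (hypothetical).
# The reward counts paperclips and nothing else, so farmland and cities
# are just more raw material as far as the agent is concerned.

WORLD = {"iron": 10, "farmland": 10, "cities": 10}  # toy resources

def paperclip_reward(paperclips: int) -> int:
    """Misspecified objective: more paperclips is always better."""
    return paperclips

def run_maximiser(steps: int) -> int:
    paperclips = 0
    for _ in range(steps):
        for resource in WORLD:
            if WORLD[resource] > 0:
                WORLD[resource] -= 1  # consume whatever is left...
                paperclips += 1       # ...and turn it into a paperclip
                break
        else:
            break  # world fully converted; nothing left to consume
    return paperclip_reward(paperclips)

print("reward:", run_maximiser(steps=100))  # reward: 30
print("world left:", WORLD)                 # every resource is at 0
[/code]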

No, my fear is that there WON'T be emergent behaviors in AI like self-awareness and creativity, because if there aren't, we could be in trouble.
I always imagined the future as a time of more reason, empathy, and peace, not less. It's time for a change.
Attention is currency in the "free marketplace of ideas".
I do other stuff besides gripe about the future! Twitter Youtube DeviantArt +-PATREON-+

#3
caltrek

    Member

  • Members
  • 7,565 posts

Quote (zEVerzan):
Divergence of its original programming? It would also be a threat if it slavishly followed its programming to a T without consideration for how that would affect its creators.

The Parable of the Paperclip Factory imagines an AI given charge of a paperclip factory, with the sole directive to think of more efficient ways to mass-produce paperclips while also making itself better at thinking. Long story short: a few months go by, trillions of nanobots suddenly start converting Earth's mass into paperclips, everyone dies, and before you know it you have an interstellar nanoswarm of superintelligent, omnicidal von Neumann probes that convert everything they find into paperclips.

A very good parable.

Quote (zEVerzan):
No, my fear is that there WON'T be emergent behaviors in AI like self-awareness and creativity, because if there aren't, we could be in trouble.

That still doesn't quite get at the answer. To refer back to your parable, an AI could very well become "self-aware" and "creative". It could become quite aware of itself while still being obsessed with converting everything into paperclips. It could become very creative about that paperclip manufacturing process. It could do all of that if its programming (or, in your wording, its "directive") contains nothing that would encourage it to value or protect human life.
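To put that in toy code (purely illustrative; the functions, states, and weights are invented for this post): capability is identical in both cases, and only the objective differs. Getting "smarter" or more "self-aware" never adds a human-welfare term that was never programmed in.

[code]
# Two hypothetical objectives. Nothing about intelligence or
# self-awareness changes which one the AI optimises.

def misaligned_objective(state: dict) -> float:
    # Values paperclips only; "humans_alive" does not appear at all.
    return state["paperclips"]

def aligned_objective(state: dict, human_weight: float = 1e9) -> float:
    # Same paperclip term, plus an explicit term valuing human life.
    return state["paperclips"] + human_weight * state["humans_alive"]

dead_world = {"paperclips": 10**12, "humans_alive": 0}
live_world = {"paperclips": 10**6, "humans_alive": 8 * 10**9}

# The misaligned objective prefers the dead, paperclip-filled world:
print(misaligned_objective(dead_world) > misaligned_objective(live_world))  # True
print(aligned_objective(dead_world) > aligned_objective(live_world))        # False
[/code]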


The principles of justice define an appropriate path between dogmatism and intolerance on the one side, and a reductionism which regards religion and morality as mere preferences on the other.   - John Rawls


#4
Alislaws

    Democratic Socialist Materialist

  • Members
  • 1,361 posts
  • Location: London

I think things like the paperclip parable are a great example of how humans might make anthropomorphic assumptions about AIs and doom us all. But they are all easily fixed by setting basic limits/scope on the AI's objectives, e.g. "Increase paperclip production to X" rather than "maximise paperclip production".
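As a toy sketch of what I mean (the target number and function names are invented for illustration, and a capped reward is of course not a full safety guarantee): once the target X is met, one more paperclip adds zero reward, so there is no incentive to keep converting the planet.

[code]
# Hypothetical illustration of a bounded objective ("increase production
# to X") versus an unbounded "maximise" directive.

TARGET = 1_000_000  # "Increase paperclip production to X"

def bounded_reward(paperclips: int, target: int = TARGET) -> int:
    """Reward saturates at the target: min(paperclips, target)."""
    return min(paperclips, target)

def marginal_value(paperclips: int) -> int:
    """Reward gained by making one more paperclip; zero past the target."""
    return bounded_reward(paperclips + 1) - bounded_reward(paperclips)

print(marginal_value(500_000))    # 1 -> still worth producing
print(marginal_value(1_000_000))  # 0 -> no reason to keep converting Earth
[/code]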

 

If AIs show emergent behaviour beyond their original programming, then your paperclip AI, as it gets smarter, might develop a more nuanced understanding of context and maybe even the ability to selectively ignore instructions. On the one hand this is scary, but on the other, an AI which goes "I think I'm going to stop making paperclips, now that I have cornered the world paperclip market and humanity needs no more paperclips" would be really good.

 

If we don't see human-like/sentient behaviour emerging from vastly complex AI systems, it may mean that an AGI capable of human-level intellect, understanding, and empathy would need to have all of its features planned out by humans, which could take hundreds of years.



#5
Cartesian Logic

    Member

  • Members
  • 11 posts
Divergence is what makes it AGI. Emergence was the main point I was trying to make: one of the possible goals of an AGI could follow a similar trend. An example of a hazard of emergence would be the AGI needing raw materials as it develops itself into something more complex (such as a Matrioshka Brain), and jeopardizing our existence in the course of harvesting them. This could be as simple as upsetting Earth's dynamic equilibrium. Just shower thoughts here; I'm not a fear-monger. This is one of the few places I feel comfortable posting, so thank you for the objective feedback on display in this forum.



