William Hertling reviews Three Laws Lethal by David Walton

Tags: scifi, AI, superintelligence


#1
starspawn0
http://www.williamhe...y-david-walton/
Naomi Sumner, programmer extraordinaire, creates a virtual world to train AIs. Those who perform well in the game world survive, allowing them to reproduce — spawn new AIs similar to themselves. As thousands of generations pass, the AIs not only become incredibly good at the self-driving game, they also develop some surprising emergent behavior, like circumventing the limits on their memory footprint.

They’re very smart, but still not conscious. A few more steps are required to reach that point, steps none of the characters anticipate or plan for. Ultimately, it is the training world itself that becomes self-aware, and all the AI actors inside it are merely elements of its psyche.

But every invention in history, sooner or later, is turned into a weapon. UAVs, drones, and missiles can benefit from self-driving technology as well, especially when trained through war-simulation game play. So what happens when part of this infant conscious mind is partitioned off and trained to kill?

You’ll have to read THREE LAWS LETHAL to find out…


If we ever have Superintelligence, it will probably be realized through Neuroevolution. I can foresee several scenarios for how this could play out, and I don't think they have been explored in fiction, or even by the people who study "existential risk" scenarios. Let me explain:

The first thing I want to point out is that human intelligence has the property that if you are good at a few different things, you tend to be good at other things, too (psychometricians call this the "positive manifold"). This is different from the kind of intelligence we see in computer systems today -- e.g. a system may be good at image recognition, but not at much else.
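
To make this concrete, here is a toy sketch (my own illustration, not real psychometric data): give each simulated agent a latent general ability, make its score on every task equal that ability plus task-specific noise, and all the task scores come out positively correlated -- an agent that does well on a couple of tasks will probably do well on the rest.

import numpy as np

rng = np.random.default_rng(0)
n_agents, n_tasks = 1000, 8

g = rng.normal(size=n_agents)                 # latent general ability
noise = rng.normal(size=(n_agents, n_tasks))  # task-specific component
scores = g[:, None] + noise                   # score = general + specific

# Every pairwise task correlation comes out positive (~0.5 here), so
# being good at a few tasks predicts being good at the others.
print(np.corrcoef(scores.T).round(2))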

The second thing I want to mention is an interesting feature of biological brains captured by so-called "intelligence genes". Each of these genes boosts intelligence by a small amount, and their effects are "additive" -- in other words, the more of these genes you have, the greater the boost in intelligence.
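
A minimal way to picture the additive model (toy numbers of my own, not actual genetics): treat each "intelligence gene" as a 0/1 variable with a small positive effect, and let an individual's total boost be the sum of the effects of the genes it happens to carry.

import numpy as np

rng = np.random.default_rng(1)
n_individuals, n_genes = 1000, 50

effects = rng.uniform(0.01, 0.05, size=n_genes)              # small per-gene boosts
genomes = rng.integers(0, 2, size=(n_individuals, n_genes))  # who carries which genes

# Additive model: no interactions between genes, so the total boost is
# just a sum -- more "intelligence genes" ==> a bigger boost.
boost = genomes @ effects
print(boost.mean(), boost.max())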

It's entirely possible that these two aspects of intelligence -- "good at a few ==> good at many" and additive "intelligence genes" -- arise naturally in most evolutionary processes. If so, then if we used Neuroevolution to evolve a population of agents to survive in a world, we might expect them to develop both features as well.
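
Sketched as code, the setup I have in mind is roughly the following (a bare-bones neuroevolution loop; evaluate_in_world and the mutation scale are placeholders I made up, not any particular library's API):

import numpy as np

rng = np.random.default_rng(2)
POP, GENOME_LEN, GENERATIONS = 100, 64, 500

def evaluate_in_world(genome):
    # Placeholder fitness, standing in for "how well the agent survives
    # in the simulated world". Swap in a real environment rollout here.
    return -np.sum((genome - 0.5) ** 2)

population = [rng.normal(size=GENOME_LEN) for _ in range(POP)]

for gen in range(GENERATIONS):
    fitness = [evaluate_in_world(g) for g in population]
    # Those who perform well survive...
    order = np.argsort(fitness)[::-1]
    survivors = [population[i] for i in order[:POP // 2]]
    # ...and reproduce: spawn mutated copies similar to themselves.
    children = [s + rng.normal(scale=0.05, size=GENOME_LEN) for s in survivors]
    population = survivors + children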

So, in this population of agents, you would probably discover certain features that play the role of "genes" and act additively to increase intelligence. How can you tell which "genes" are the "intelligence genes"? Test the agents on a small number of tasks for which you have good metrics; if they do well on those, they will probably do well on other tasks, too. Once you have a population of good-performing agents, you could then pick out the various "intelligence genes" and pool them all into a single agent -- making it superintelligent.
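
Here is one way the gene-picking step might look in code (a hypothetical sketch: I'm assuming binary "genes" and using an ordinary least-squares fit to estimate each gene's additive effect on a small battery of benchmark scores, then pooling every positive-effect gene into a single genome):

import numpy as np

rng = np.random.default_rng(3)
n_agents, n_genes = 500, 100

genomes = rng.integers(0, 2, size=(n_agents, n_genes))

def battery_score(genome):
    # Stand-in for "average score on a few tasks with good metrics".
    true_effects = np.linspace(-0.02, 0.05, n_genes)  # hidden ground truth
    return genome @ true_effects + rng.normal(scale=0.1)

scores = np.array([battery_score(g) for g in genomes])

# Estimate each gene's additive effect on the battery by least squares...
est_effects, *_ = np.linalg.lstsq(genomes.astype(float), scores, rcond=None)

# ...then build one agent carrying every gene with a positive estimated
# effect -- the "pool all the intelligence genes into one agent" step.
super_genome = (est_effects > 0).astype(int)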

What would these "genes" look like in the context of artificial agents? That depends on how the agents are constructed. Some paradigms work much the way biology does: an artificial "DNA" code serves as the parameters fed into a generative process that builds the neural net. In such models of Neuroevolution, the "genes" would literally be little snippets of the artificial "DNA".
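
The generative-encoding case can be sketched like this (a toy indirect encoding of my own design, loosely in the spirit of HyperNEAT-style approaches; the shapes are arbitrary): the genome is not the weights themselves but a short parameter vector fed through a fixed generative process that emits the weights, so a "gene" is literally a slice of that vector.

import numpy as np

rng = np.random.default_rng(4)

GENOME_LEN = 16          # the artificial "DNA"
IN, HIDDEN, OUT = 8, 32, 2

# A fixed "developmental process": maps the short genome to a full set of
# network weights, so a small edit to one gene produces coordinated
# changes across the whole network.
G1 = rng.normal(scale=0.1, size=(GENOME_LEN, IN * HIDDEN))
G2 = rng.normal(scale=0.1, size=(GENOME_LEN, HIDDEN * OUT))

def develop(genome):
    w1 = (genome @ G1).reshape(IN, HIDDEN)
    w2 = (genome @ G2).reshape(HIDDEN, OUT)
    return w1, w2

def act(genome, obs):
    w1, w2 = develop(genome)
    return np.tanh(obs @ w1) @ w2

genome = rng.normal(size=GENOME_LEN)
print(act(genome, rng.normal(size=IN)))  # a policy decoded from "DNA"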

For other, simpler paradigms that don't use a generative encoding of the networks, it might be harder to identify "genes".

....

I also think that AIs based on brain data could have a similar potential for rapid intelligence expansion, through the use of Neuroevolution. I've written about this before on other forums, and don't feel like doing it again.




