

The Importance of getting it right the first time and a centralized economy


1 reply to this topic

#1
Unity

    Information Organism

  • Members
  • 2,477 posts


Recently Stephen Hawking gave a talk about AI (linked above) in which he emphasized the importance of "getting it right the first time" with regard to key technologies such as Artificial Intelligence, Synthetic Biology, and Nanotechnology. He approaches this from the perspective that the story of intelligence is a story about information. However, in order to control information, it cannot be free. It has to be contained somehow, with security measures, restricted access, etc. First of all, I would like to ask: "Is this even possible?" If you look at the story from the vantage point of a network of evolving information, then based on the billions of years of history that we have as evidence, we should expect at least two things.

1. Information entropy increases. There is no perfect closed system that can contain it indefinitely. Once it escapes, it can replicate very rapidly.

2. The diversity and variety of information should continue to increase. The increase in the entropy of information seems to be closely related to what we call intelligence (a rough sketch of what that entropy measures follows this list). To try to curtail the process of increasing information entropy is to squelch the very intelligence we wish to create.
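As a rough illustration of what "information entropy" quantifies here (a sketch of my own, not something from Hawking's talk): start with a population of identical bit strings and let random copying errors accumulate. With no mechanism actively removing variants, the Shannon entropy of the population only climbs.

# Minimal sketch: Shannon entropy of a population of replicating bit strings
# rises as random mutation adds diversity. Parameters are arbitrary.
import random
from collections import Counter
from math import log2

def shannon_entropy(population):
    """Shannon entropy, in bits, of the distribution of distinct strings."""
    counts = Counter(population)
    total = len(population)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def mutate(s, rate=0.01):
    """Flip each bit independently with probability `rate`."""
    return "".join(b if random.random() > rate else ("1" if b == "0" else "0") for b in s)

population = ["0" * 32] * 256          # zero diversity: entropy starts at 0 bits
for generation in range(50):
    population = [mutate(s) for s in population]
print(f"entropy after 50 generations: {shannon_entropy(population):.2f} bits")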

Secondly, from a more pragmatic perspective: what political and economic systems would be necessary for a species to control these key technologies well enough to ensure survival?

I don't think it would be capitalism, because the short-term profit in creating autonomous weapons systems would generally seem to outweigh the potential existential threat they create, much as we are seeing with the global warming bullshit from industry and the possible ramifications of genetically modified foods. Hawking directly references the need to curtail the weaponization of autonomous systems.

Wouldn't political and economic control have to become more centralized in order to restrict the spread of information about these key technologies and ensure survival?

#2
Whereas

    Member

  • Members
  • 488 posts

AGI is, by necessity, going to be able to rewire itself / change the way it thinks / form new opinions. Eliezer Yudkowsky has suggested that whatever process it uses to modify itself should be constructed in such a manner that it *couldn't* lead to a state in which the AGI might turn on us. Ideally, we'd produce a mathematical proof that the modification process is unable to create a non-friendly AGI.
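To give a toy sketch of the shape such a proof might take (my own illustration, not Yudkowsky's actual proposal): if every self-modification step provably preserves a "friendliness" invariant, then induction extends it to every future version. The entire difficulty, of course, is hidden in establishing that preservation step for a real system.

\documentclass{article}
\usepackage{amsmath,amsthm}
\newtheorem{theorem}{Theorem}
\begin{document}
Let $A_0, A_1, A_2, \dots$ be successive versions of the system, where
$A_{n+1} = \mathrm{modify}(A_n)$, and let $F(A)$ stand for ``$A$ is friendly.''

\begin{theorem}[Invariant preservation]
If $F(A_0)$ holds and, for every $n$, $F(A_n) \implies F(\mathrm{modify}(A_n))$,
then $F(A_n)$ holds for all $n$.
\end{theorem}
\begin{proof}
Induction on $n$: the base case is $F(A_0)$; the inductive step is exactly the
preservation hypothesis applied to $A_n$.
\end{proof}
\end{document}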

I don't see this happening, though, precisely because of capitalism. Once someone manages to make an AGI, no matter how potentially unsafe, it's probably going to be put to widespread use very quickly.


If you're wrong, how would you know it?




