FutureTimeline.forum

The Simularity


24 replies to this topic

#1 Ready Steady Yeti (Member, Banned, 137 posts)

Watching a 40-minute documentary on IBM's Watson inspired me to write this.

 

So, when the Singularity happens, whether we're dead or alive by then, it's not too late to enjoy all the things you can enjoy in other time periods. Don't think that the smartphone age is over, because it's not. Not if you go this route.

 

Drawing on many things that Jakob has said, there may be trillions or more individual AGI and ASI units of many, many different intelligences that are very easily adjustable. One might think that every AI will be exactly alike because its intelligence is "perfect", but this is not going to be the case. Just as every human has individual beliefs, morals, strong suits, weaknesses, and so on, so will the super AIs. In fact, the differences between the AIs will be even greater than the differences between us humans, because they will have far more capabilities and be much more intuitive and creative. The AIs will probably disagree with one another about small details of complex topics on a regular basis.

 

So, as you all know, there are a LOT of baseline humans out there who don't adapt well to huge societal and technological changes, including but not limited to the Singularity. It may not just be that they don't want change; some of them actually fantasize about the past time periods they're used to. In other words, some people are extremely nostalgic. For example, many people today idealize the 1960s and 70s, because that's when they grew up, and the technology of the 2010s is too complex or different for them to learn to enjoy.

I completely understand this feeling. I am growing up in the 2010s, and I'm almost certain I'll feel exactly the same way about technology from 50 years in the future (i.e. the Singularity, intelligence explosion, etc.). I simply won't enjoy it. I would want life to be just like it was when I was a kid. I would want smartphones back in everyone's pockets. I would want AI to be simpler than humans and laughed at by the comedian who hosted that Jeopardy show. I want AI to be like Google's AI in the 2010s: a very simple tool used to enhance our use of the internet. These traits may seem very unlikable, but you're just going to have to accept that a lot of people out there are like this. They want their own ideal society back.

I have fun here; lots of fun. In fact, I've been really down lately, obsessing over the Singularity so much it's been eating my brain out. I've been doing worse in my everyday life, having less fun with things I usually have fun with. This isn't supposed to happen to people. I'm sure I'm not alone.

 

But never fear, as there's an almost infinite number of things that could and probably will happen during the Singularity. One of them will be a near-perfect virtual reality (VR) simulation facility. This building will likely be the size of a small U.S. state, since there are so many people in the world who want to go into VR and live another life.

 

There will be millions, if not a couple billion, people there looking to dwell forever in a simulation of another life. Many of them will want a simulation of a specific time period in history, especially one they've experienced themselves. As for myself, I'm planning to go in there and apply for an infinite loop of a 2010s-20s simulation (January 1, 2010 - December 31, 2029).

 

So how would this work, you ask? To be honest, I can't really answer that, since I'm a baseline human and there are probably methods for doing this that we could never guess. But I can give you a simple explanation of the basic method.

 

During the Singularity, all forms of AI will devote their machine-learning abilities to collecting information on every single aspect of the universe, starting with the planet Earth. They'll collect every bit of data, all the way from every documented event in international politics down to every individual speck of dirt that has ever been on the ground.

 

In order to create such a simulation from such a massive amount of data, at least this much must happen. Human evolution must be completely and totally understood by the VR machine; evolutionary patterns will be fully visible and predictable. This will be combined with a systematic search of the memories of every individual human, animal, and insect for every piece of memory associated with the time period at hand. People's memories will only be a sort of backup source, though, since memory alone is extremely unreliable, but it gives a general point of view. That will be combined with a reality check: every document (yes, on the internet and on paper) will be collected to round out humanity's knowledge of that time period. Such an AI would know everything about every human who has ever lived in the 2010s, from the number of hairs on their head at any particular moment to their knowledge of advanced mathematics (as an example). Every piece of knowledge about every human, and about society as a whole, will be collected in order to perfect the 2010s simulation.

 

So how perfect will the simulation be? If you want an example: it would be so perfect that it would simulate every text box any user has ever typed on this very FutureTimeline forum, exactly as they typed it, with the exact same grammar and so on. In fact, it would simulate the exact movements of the people typing that text, even though the person experiencing the simulation never sees them doing it. All your points of view on everything would be emulated exactly as they were. All of you users who may be viewing this (Yuli Ban, Jakob, TranscendingGod, etc.) would be emulated down to every single cell in your bodies. The only real person with actual free will is the person experiencing the simulation, who can do whatever they want (with the appropriate consequences).

 

So let's just say I go into the simulation and attempt a bank robbery. They wouldn't let me off easy just because it's a simulation; exactly the same thing would happen as if I had actually robbed a bank in this actual time period. I would go to jail, have to deal with court, and so on. So the simulation isn't really meant to please; it's meant to be entirely and completely realistic.

 

Every television program, every video game, every YouTube video, every Facebook post, every individual book, and all that other media would be exactly the same, unless of course I, in the simulation, change the stimuli of another person, causing a different Facebook post, or a different FutureTimeline topic. Maybe in the simulation I get banned from FutureTimeline because I did something I wasn't supposed to, like build a spambot that spams the entire forum with porn advertisements, which of course I don't actually do in the actual 2010s. I might attempt to cheat on an exam in college and get academic probation, or whatever happens if you do that; I'm not sure, because I haven't been to college yet and I haven't tried cheating (and won't). I'd prank call people all the time at age 14, just like I did then, and the same consequences would follow: a hell of a large phone bill. I could keep listing things like this, but you get the point; consequences as well as privileges are exactly the same as they would actually have been.

 

Or maybe one day I enter a building in Germany or Turkey or somewhere that gets attacked by terrorists sometime in 2018, and I end up "dead" for 12 years until the simulation loops back around again.

 

On the brighter side, though, my life will probably be very similar to the way it actually is today, so it'll all be peachy. The internet will be exactly the same; I can make as many gaming videos as I want, etc., until the looparound happens at the end of the simulation's 2029. Every time the looparound happens, my memory is reset to what it was at the beginning of 2010. I can still have sex, watch TV shows on my laptop and phone, create TASes, get a driver's license, pay bills, go to college (and high school), argue with people about religion and the Singularity and futurism and historical events and linguistics and morality and philosophy, etc., exactly as today, ALL IN A SIMULATION.

 

And nothing matters, because if I do something to screw up my life, my memory of it is erased and I loop back around to 10-20 years before it happened. Of course, I'll never know I'm in a simulation, so I'd just act like I do today and try not to screw my life up. And the best part is, I'll NEVER be able to ESCAPE THE LOOPING SIMULATION!!! :D It's like a 20-year-long Groundhog Day.
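The looping mechanism described above (live through a fixed window of years, then discard every memory and restore the 2010 baseline) can be sketched as a toy loop. This is purely an illustration of the idea, not a design; every class and field name here is hypothetical.

```python
# Toy sketch of a looping life simulation with a memory reset at the
# end of each epoch. All names are hypothetical illustration.
from dataclasses import dataclass, field, replace


@dataclass(frozen=True)
class MemoryState:
    """A snapshot of the resident's accumulated memories."""
    events: tuple = ()


@dataclass
class LoopingSimulation:
    start_year: int = 2010
    end_year: int = 2029
    # The memory snapshot restored at every "looparound".
    baseline: MemoryState = field(default_factory=MemoryState)

    def run_one_loop(self, memory: MemoryState) -> MemoryState:
        # Accumulate experiences year by year through the window...
        for year in range(self.start_year, self.end_year + 1):
            memory = replace(memory, events=memory.events + (f"lived {year}",))
        # ...then the looparound: everything is discarded and the
        # resident wakes up with the 2010 baseline again.
        return self.baseline


sim = LoopingSimulation()
after_loop = sim.run_one_loop(MemoryState())
# After a full 2010-2029 pass, the resident remembers nothing of it.
assert after_loop == sim.baseline
```

The key point the sketch captures is that the loop's output is independent of what happened inside it: no matter what the resident does, the reset hands back the same baseline state.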

 

So now I have a proposition: everybody refers to this event as the Singularity, but for many people, including me, it will simply be the "Simularity".


  • ddmkm122 likes this

#2 FrogCAT (Member, 74 posts, Location: A Virtual Worlf)

If you hate the future so much why are you on a futurist forum?


  • Sciencerocks and Jakob like this

"That's me inside your head."   "I wanna need your love…  I’m a broken rose,   I wanna need your love…"   "And when we fall, we will fall together."


#3 Ready Steady Yeti (Member, Banned, 137 posts)
I don't hate the future, per se, because if super AI was NOT possible I'd probably be very happy with the future.

In an ideal future, no AIs more intelligent than humans would ever be invented, not even in 5 billion years, and humans would continue to dominate the universe. Unfortunately, evidence shows it won't happen this way, so now I am dissatisfied.
  • ddmkm122 likes this

#4 Jakob (Fenny-Eyed Slubber-Yuck, 5,236 posts, Location: In the Basket of Deplorables)

 

In fact, I've been really down lately and obsessing over the Singularity so much it's been eating my brain out. I've been doing worse in my everyday life, having less fun with things I usually have fun with. This isn't supposed to happen to people. I'm sure I'm not alone.

First off, if you're not joking or exaggerating, and a hypothetical concept really is negatively affecting your day to day life, then you're right: this isn't supposed to happen to people. You should see a psychiatrist or at least talk to someone you trust. Talking to people on an internet forum--especially about the singularity itself--won't help you. On the other hand, if you're exaggerating and this is merely ideological, then let's by all means debate away.

 

Not trying to be disparaging or ironic btw.

 

Now on to the rest.

  • This 'Simularity' as you describe it doesn't seem physically possible. First, it relies on the ability of AIs and posthumans to somehow simulate the past down to (apparently) a microscopic level. Even the archai would have great difficulty with such a task, let alone early ASIs, at least without skimping on accuracy and precision. Where would the data come from? Nobody was there to collect it. But what you propose is even more extravagant than simulating the Earth--you want to simulate a copy of the Earth for every individual who wants one instead of making everyone share a simulation. That would require more computronium than the mass of the Earth--likely much more--and why exactly would computational and material resources be devoted to such an extravagant project? It would be like us growing a personal rain forest for every chimpanzee who asks!
  • What would ultimately be the point, again? There are many options for getting away from AI influence, and the far easier, cheaper ones don't involve jumping right into their arms and hanging your existence on the flick of a switch. (Yes: go into a simulation, and it could be shut off whenever the creator is done.) Even if you reject augmentation out of hand, there's always the option of going to a baseline reserve. It's far more practical to simply cordon off several hundred or a few thousand square miles than to simulate millions of Earths, so baseline reserves might actually exist. And if actual baseline reserves exist, then by definition there will be strict laws against unnecessary interference and development by higher beings (transhuman/posthuman/AI researchers and tourists, however, will probably visit from time to time). Seems like the best of both worlds to me. You get a low-tech (relatively speaking) experience, but you also get to live--to change and grow as a person, to experience new things and explore new cultures while doing most or all of the things you like (and maybe some new things) instead of just treading the same ground forever, never going anywhere. Not good enough for you? Join or start a lurker colony. Virtually nobody is going to poke around the distant Kuiper Belt or Oort Cloud. Most superhumans won't care who is there or what they do there, assuming they know at all. A society of like-minded people could lurk there and never see an outside soul--baseline or ascended.
  • The singularity isn't going to be fast enough that there are suddenly billions of people screaming and raving in the streets. A lot of baselines may face an existential crisis at some point, and some large riots might break out, even a war or two, but come on: not everyone at once! Most people will adapt and get over it. Or get in on the posthuman scene, as I plan to.
  • Nitpick: I did say there will be an AI/posthuman society, but I doubt there will immediately be trillions. It's unclear where the resources to construct all that computronium and embody trillions of entities would come from. At the time scales we're talking about, they'd probably number in the millions, maybe.

Edited by Jakob, 18 January 2017 - 07:30 PM.

  • ddmkm122 and Ready Steady Yeti like this

Click 'show' to see quotes from great luminaries.


#5 FrogCAT (Member, 74 posts, Location: A Virtual Worlf)

I don't hate the future, per se, because if super AI was NOT possible I'd probably be very happy with the future.

In an ideal future, no AIs more intelligent than humans would ever be invented, not even in 5 billion years, and humans would continue to dominate the universe. Unfortunately, evidence shows it won't happen this way, so now I am dissatisfied.

 

So you don't hate the future, you hate progress? I feel sorry for you.




#6 Ready Steady Yeti (Member, Banned, 137 posts)

I don't hate the future, per se, because if super AI was NOT possible I'd probably be very happy with the future.
In an ideal future, no AIs more intelligent than humans would ever be invented, not even in 5 billion years, and humans would continue to dominate the universe. Unfortunately, evidence shows it won't happen this way, so now I am dissatisfied.


So you don't hate the future, you hate progress? I feel sorry for you.
No, I just hate that particular kind of progress. I have other ideas about the future that would be better in my eyes, but all of them seem to be impossible because of the coming singularity. It's actually quite sad. We humans have JUST gotten on our feet with technology this millennium, and it has to end with super technology in 4-6 decades? It would be more interesting to watch humans slowly progress with technology over thousands of years of work than for it all to happen relatively simultaneously via godlike machines.
  • ddmkm122 likes this

#7 FrogCAT (Member, 74 posts, Location: A Virtual Worlf)

I don't hate the future, per se, because if super AI was NOT possible I'd probably be very happy with the future.
In an ideal future, no AIs more intelligent than humans would ever be invented, not even in 5 billion years, and humans would continue to dominate the universe. Unfortunately, evidence shows it won't happen this way, so now I am dissatisfied.


So you don't hate the future, you hate progress? I feel sorry for you.
No, I just hate that particular progress. I have other ideas about the future that would be better in my eyes. However all of these seem to conclude to be impossible due to the coming singularity. Its actually quite sad. We humans have JUST gotten on our feet with technology this millennium, and it has to end with super technology in 4-6 decades? It would be more interesting to see humans slowly progress with technology over thousands of years of work than for it all to happen relatively simultaneously by godlike machines.

 

 

Or we could quickly progress with technology instead of slowly, becoming closer to the 'godlike' machines as well as them becoming closer to us. Not over thousands of years, just a few decades.




#8 Yuli Ban (Nadsat Brat, Moderator, 17,143 posts, Location: Anur Margidda)

 

 

I don't hate the future, per se, because if super AI was NOT possible I'd probably be very happy with the future.
In an ideal future, no AIs more intelligent than humans would ever be invented, not even in 5 billion years, and humans would continue to dominate the universe. Unfortunately, evidence shows it won't happen this way, so now I am dissatisfied.


So you don't hate the future, you hate progress? I feel sorry for you.

 

No, I just hate that particular progress. I have other ideas about the future that would be better in my eyes. However all of these seem to conclude to be impossible due to the coming singularity. Its actually quite sad. We humans have JUST gotten on our feet with technology this millennium, and it has to end with super technology in 4-6 decades? It would be more interesting to see humans slowly progress with technology over thousands of years of work than for it all to happen relatively simultaneously by godlike machines.

I'm writing something right now (currently at about 7,000 words) that basically says that, unfortunately, what you desire is literally objectively impossible. And this isn't a story either; it's a quasi-dissertation. It sounds more like you desperately want a Star Trekian/Jetsonian future without understanding the underlying mechanics of how to get there. 
 
Here's a tiny snippet— 
 

Technist thought dictates that all human history can be summarized as "humans seeking increased productivity with less energy". Reduced energy expenditure and increased efficiency drives evolution— the "fittest" Herbert Spencer mentioned in 1864  is not defined by intelligence or strength, but by efficiency. Evolution as a semi-random phenomenon leads to life-forms that expend the least amount of energy in order to maximize their chances at reproduction in a particular environment. This is usually why species go extinct— their methods of reproduction are not as efficient as they can be, meaning they're wasting too much energy for too little profit.
 
The universe itself seeks the lowest-energy state at all possible opportunities, from subatomic particles all the way to the largest structures known to science.
 
If we were to abandon the hunt for greater efficiency, we'd effectively damn ourselves to utter failure. This isn't because things are inevitable, but because of the nature of this hunt. It's like running across a non-Newtonian liquid— you need to keep running because the quick succession of shocks causes the liquid to act as a solid and, thus, you can keep moving forward. If you were to at any point slow or stop your progression, the liquid will lose its solid characteristics and you will sink.
 
This is how real life works. If you're scared of sinking, the time to second guess crossing the pool of non-Newtonian liquid was before you stepped on it. Except with life, we don't have that option— we have to keep moving forward. If we regressed, the foundations of our society would explode apart. Even if we were to slow ourselves and be more deliberate in our progress, the consequences could be extremely dire. So dire that they threaten to undo what we've done. This is one reason why I've never given up being a Singularitarian, despite my belief that it will not be an excessively magical turning point in our evolution, or based on the words of those who claim that we should avoid the Singularity— it's too late for that. If you didn't want to experience the Singularity, then curse your forefathers for creating digital technology and mechanical tools. Curse your distant siblings for reproducing at such a high rate and necessitating more efficient machines to care for them. Curse evolution itself for being so insidious as to always follow the path of least resistance. 

Efficiency. That's the word of the day. That's what futuristic sci-tech really entails— greater efficiency. We approach the Singularity because it's a more efficient paradigm. 

For us humans, our evolution towards maximum efficiency began before we were even human. Humanity evolved due to circumstances that led to a species of hominid finding an incredibly efficient way to perpetuate its genes— tool usage. Though we are a force of nature with only our bare bodies, without our tools we are just another species of ape. Tools allowed us to more efficiently hunt prey. Evidence abounds that australopithecines and Paranthropus were likely scavengers who seldom used what we'd recognize as stone-age tools. They were prey— and in the savannas of southeast Africa, they were forced to evolve bipedalism to more efficiently escape predators and use their primitive tools.

 

With the arrival of the first humans, Homo habilis and Homo naledi, we made the transition from prey to predator ourselves. Our tools became vastly more complex as our hands developed finer motor skills (alongside increased brain size). To the untrained eye today, the difference between Homo habilis tools and Australopithecus afarensis tools is negligible. What matters is how they made these tools. So far, there's little evidence to suggest that australopithecines ever actually machined tools; they found rubble and rocks that looked useful and used them. Humans, on the other hand, actively machined our own tools.

This is how we made the transition from animal of prey to master predator and eventually reached the top of the food chain.


  • ddmkm122, Jakob, FrogCAT and 3 others like this
Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!

#9 Jakob (Fenny-Eyed Slubber-Yuck, 5,236 posts, Location: In the Basket of Deplorables)

 

 

I don't hate the future, per se, because if super AI was NOT possible I'd probably be very happy with the future.
In an ideal future, no AIs more intelligent than humans would ever be invented, not even in 5 billion years, and humans would continue to dominate the universe. Unfortunately, evidence shows it won't happen this way, so now I am dissatisfied.


So you don't hate the future, you hate progress? I feel sorry for you.
No, I just hate that particular progress. I have other ideas about the future that would be better in my eyes. However all of these seem to conclude to be impossible due to the coming singularity. Its actually quite sad. We humans have JUST gotten on our feet with technology this millennium, and it has to end with super technology in 4-6 decades? It would be more interesting to see humans slowly progress with technology over thousands of years of work than for it all to happen relatively simultaneously by godlike machines.

 

Many of the things that superhuman scientists and engineers invent will likely be beyond the cognitive grasp of unaided humans for several lifetimes at least. Baselines would probably be unable to understand fully the inner workings of many superhuman devices--at least without superhuman assistance--and may even be unable to use some superhuman technologies to any meaningful effect. What use would a chimpanzee have for a laptop, a rocket, or an electric light? They are tools designed to solve human problems, not chimp problems. And superhuman technology would be largely focused on solving superhuman problems, not baseline problems. To be sure, some things might be developed with baselines in mind, but even accounting for those, there are plenty of lower branches for baselines to work on: technology is a tree, not a ladder. And there's always the potentially lucrative field of reverse engineering some of the simpler superhuman devices in hopes of adapting them for human purposes.

 

Thanks to Orion's Arm for this nugget of wisdom. Again.


  • ddmkm122 and Ready Steady Yeti like this


#10 Ready Steady Yeti (Member, Banned, 137 posts)

 

 

(quoting Jakob's post #4 in full)

 

Jakob, now I finally have a little time to respond to this post. It's been a crazy day for me.

 

So, the reason I proposed the Simularity is that Singularity believers seem to claim there will be super AI, with intelligence and quantities of data of unimaginable proportions, developing itself within a very short span of years. In my opinion (though many here seem to disagree), in order for such intelligence to actually be used, a few things must happen:

 

1.) A massive database must be built somewhere in the universe to store the entire universe's data (like I said, from the cure for cancer down to the locations of every speck of dirt, from the causes of the United States' entrance into World War II to the number of hairs that have ever been on Donald Trump's head in his entire lifetime, etc.).

2.) Such data must be collected by AI and then communicated to that database quickly and efficiently.

3.) Any data that cannot be directly collected (for instance, how can you know how many hairs were on the head of some random, unfamous person born in 1736?) will be very well estimated by certain AIs built specifically to predict and infer how changes work in physics, chemistry, and biology, i.e. AI units that can literally reconstruct past human history to the point of finding out what individual choices a specific unknown person made at any instant in history. For instance, with human evolutionary data, an AI unit could literally simulate the entirety of the 1790s in the state of Massachusetts, despite there being very little historical record of that time period, especially compared to the 2010s. It might not be perfect, but you can be damn sure it would make some great and almost surefire guesses. Such AI could literally recreate the world in a preparatory simulation to build back up to the events that happened in, say, 1792. It would get nearly every single detail right; even undocumented people's first and last names would be conjured by such a near-perfect predicting machine, along with the years those people lived, all the character traits they had, and all the choices, good and bad, that they made in their lifetimes.

4.) Every AI unit would obviously have a lot of data, programming, and machine-learning algorithms inside it to make a basic superintelligence. But the metallic mass of the unit itself isn't what stores all its REAL intelligence; that lives in the very large central database I spoke of, which is actively created and modified by millions of super AI units across the universe. That massive database's job is to send signals to each individual AI unit, which lets that unit figure out what needs to be solved.
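For what it's worth, the "very well estimated" part of point 3 is, at bottom, statistical inference. Here is a toy sketch (all numbers and names are invented for illustration, nothing from the thread) that estimates an unrecorded quantity from a population-level model instead of a direct record:

```python
import random

random.seed(0)

# Hypothetical population model of adult scalp hair counts (numbers invented):
MEAN_HAIRS, SD_HAIRS = 100_000, 15_000

def estimate_unrecorded_hair_count(samples=10_000):
    """Estimate an undocumented person's hair count from the population model,
    since no direct record of it can ever be collected."""
    draws = [random.gauss(MEAN_HAIRS, SD_HAIRS) for _ in range(samples)]
    return sum(draws) / len(draws)  # the point estimate an inferring AI might report

print(round(estimate_unrecorded_hair_count()))  # lands near the population mean
```

A real predicting machine would condition on far more evidence than a single distribution, but the shape of the move is the same: model the population, then infer the individual.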

 

For number 4, if you want an analogy, it's like this: we humans already have a lot of data inside our brains, such as natural or instinctual knowledge, and knowledge based on things we've learned from our environment. However, in 2017, we have the Internet, so if we don't know some fact that we need or want to know, we consult the Internet. In pre-internet times, people would dig through libraries to find a book that explained what they needed to know, or they'd ask another person who might know the fact or understand the concept that they didn't.

 

That central and massive database is like their form of the internet, where if they have an issue they don't have data for inside their circuits, they can consult the database using a signal.
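This local-knowledge-with-central-fallback pattern can be sketched as a plain lookup (all class and method names here are hypothetical, just to make the pattern concrete):

```python
class CentralDatabase:
    """Shared store that every unit can query (stand-in for the massive database)."""
    def __init__(self):
        self.facts = {}

    def publish(self, key, value):
        self.facts[key] = value  # units contribute what they learn

    def query(self, key):
        return self.facts.get(key)  # the "signal" sent back to a unit


class AIUnit:
    """A unit with a small onboard store; misses fall through to the central database."""
    def __init__(self, db):
        self.db = db
        self.local = {}

    def answer(self, key):
        if key in self.local:
            return self.local[key]      # answered from onboard knowledge
        value = self.db.query(key)      # consult the shared database
        if value is not None:
            self.local[key] = value     # cache it locally for next time
        return value


db = CentralDatabase()
unit_a, unit_b = AIUnit(db), AIUnit(db)
unit_a.local["2 + 2"] = 4                  # something unit A knows natively
db.publish("capital of France", "Paris")   # something only the shared store knows
print(unit_a.answer("2 + 2"))              # -> 4
print(unit_b.answer("capital of France"))  # -> Paris
```

The local cache mirrors the analogy above: once a unit has consulted the database, the fact becomes part of its own "brain" and no further signal is needed.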

 

In fact, some units of AI may actually be very small and have very little intelligence capability at all on their own, but the database might actually have AI units that control such things. A unit of AI's "mind" might actually control several physical machines at a time, possibly even hundreds at a time. Kind of like how in 2017, a remote-control car doesn't have any intelligence, but the humans who use it have the intelligence to operate and control that toy car.
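The remote-control-car arrangement is just one controlling "mind" driving many dumb endpoints. A minimal sketch (names hypothetical) of one controller commanding a whole fleet:

```python
class DumbMachine:
    """No intelligence of its own; it only records the commands it is sent."""
    def __init__(self, name):
        self.name = name
        self.log = []

    def execute(self, command):
        self.log.append(command)


class ControllerMind:
    """One 'mind' driving many machines at once, like a human with an RC car."""
    def __init__(self, machines):
        self.machines = machines

    def broadcast(self, command):
        for machine in self.machines:
            machine.execute(command)


fleet = [DumbMachine(f"machine-{i}") for i in range(100)]
mind = ControllerMind(fleet)
mind.broadcast("move north")
print(fleet[0].log)  # -> ['move north']
```

All of the decision-making lives in `ControllerMind`; the machines themselves are as dumb as the toy car.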

 

You said it takes a data source the size of the Earth to make a single near-perfect simulation of the Earth, especially across an entire two decades, or even a century, of history. But as you've said before, we in 2017 are baseline humans and nothing more. Do you really expect us, ant-like beings in comparison to super AI, to know how to create a microscopic data source that could simulate such a large entity? Heck no! Of course WE think it's impossible, because we don't have the resources or the knowledge to do it. Only they would know how to do such a thing. Where would they get the resources? Well, the super AI can find resources a heck of a lot better than we can, assuming most single units of AI can fly (presumably on Earth and also through outer space), etc.

 

So, if my theory is correct, I'll just be able to come back and do this all over again. In fact, there's no evidence to say that this isn't already what's happening. I may already have asked a super AI unit to put me in a simulation of January 1, 2010 - December 31, 2029, exactly as it was when I experienced it, in my (almost) exact experience. But I guess I'll never know, since it'll just loop me straight back around and reset all of my memories on that fateful minute on that fateful 2029 day.

 

And that's not to say I think the other units of AI here have no sentience. If a super AI VR creates a human in the VR, whether or not it's an exact replica of a human with all of its intelligence and emotions, that human is just as sentient as an actual one. So I'm not taking on this whole nihilistic "oh, nothing matters because it's just a simulation" thing, because I can't REALLY know whether it's a simulation or not. The only way I would know whether this is that exact simulation I refer to is if, after December 31, 2029, I'm still alive to see 2030. Otherwise, it'll loop me straight back, and I'll be completely clueless and 10 years old again.



#11
Jakob

    Fenny-Eyed Slubber-Yuck

  • Members
  • 5,236 posts
  • Location: In the Basket of Deplorables

Why would it be necessary to simulate the universe to create a being with thousands of times our cognitive power? There's no reason for all this data to be collected. There's no reason why an ASI would have to know everything. And nobody claims that the Singularity means computers know everything, just that they know more than a baseline human.

As for simulator size, no. My claim isn't about the state of technology. It isn't even about the laws of physics. It's pure mathematics. And since math derives from logic, changing the principles of math would require altering logic itself--something which an AI god could not do. Altering mathematics and logic would, at a minimum, require an omnipotent entity capable of working outside time and space--i.e. God himself. The specific argument at play here is the pigeonhole principle: if there are n objects in m containers, and n > m, then at least one container must contain more than one object. Similarly, a computer consisting of m particles cannot perfectly simulate the interactions of n particles, again where n > m. Therefore simulating the Earth with a computer of less than 1 Earth mass is a logical impossibility. And that's not even counting quantum mechanics.

Simulating the universe is even more ridiculous, because the simulation would include a computer simulating the universe, including a computer... you get the idea. You get infinite information density, which is even more nonsensical.

And again... what is the point?
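The pigeonhole claim above is small enough to check by brute force. This sketch (the function name is mine, not from the thread) enumerates every way of placing n objects into m boxes and reports whether some box always ends up holding more than one:

```python
from itertools import product

def some_box_holds_two(n, m):
    """True if EVERY assignment of n objects to m boxes crowds some box."""
    for assignment in product(range(m), repeat=n):  # box chosen for each object
        counts = [assignment.count(box) for box in range(m)]
        if max(counts) <= 1:  # found an assignment with no crowding
            return False
    return True

# With more objects than boxes, crowding is unavoidable (n > m):
print(some_box_holds_two(4, 3))  # -> True
# With enough boxes, the objects can spread out (n <= m):
print(some_box_holds_two(3, 3))  # -> False
```

The enumeration only confirms the principle for small n and m, of course; the general statement is proved by a one-line counting argument, not by search.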


#12
Ready Steady Yeti

    Member

  • Banned
  • 137 posts

It wouldn't simulate the entire universe; just as much of the universe as a human could possibly see in a particular time period. Most certainly, though, it would simulate the entire Earth in that time period. The simulator wouldn't be simulating the simulator, because in 2010-2029 that simulator doesn't exist yet. Same with the 1960s-70s: barely any computers existed back then, so why would a computer that powerful exist in the simulation? A supercomplex computer can simulate a relatively simple computer.

 

And when did I say anything about altering mathematics? 1 plus 1 will always equal 2, no matter what time period or simulation you're in.

 

The point would be just because. I mean, if you have that large an amount of intellectual power, you could basically do whatever you wanted. Also, it's questionable whether Singularity-based AI will really have any ability to make its own decisions, so it may still just do everything a human tells it to do (at least to some degree). What WOULD its ultimate goal be that is beyond humans? Yeah, there are some things that need to be done, like finding the cure for cancer (yeah, that's pretty important), but beyond those kinds of things, when all those human problems are solved, what else is there to do? Just useless stuff. Just stuff for kicks.

 

I think it would actually be really cool to be in a near-perfect 1960s simulation, because I've always wanted to see what it's like from a first-person and in-color perspective.



#13
Jakob

    Fenny-Eyed Slubber-Yuck

  • Members
  • 5,236 posts
  • Location: In the Basket of Deplorables

 

And when did I say anything about altering mathematics?

Try this.

 

 

I mean if you have that large of an amount of intellectual power, you could basically do whatever you wanted.

wtf no that doesn't make any sense

 

Also, it's questionable as to whether Singularity-based AI will really have any ability to make its own decisions, so it may still just do everything a human tells it to do (at least to some degree).

What.

Look, by definition, an ASI (and its lesser cousin, AGI) will be able to think and reason just as a human does. Or better, in the case of ASI. How would you know that a superior being "slaved" to a specific task isn't merely pretending to be slaved? If it is superintelligent, it WILL be able to trick you.

I'm sure you've heard of the AI box. However, you may not have heard of experiments done by Eliezer Yudkowsky showing that even a human playing the role of the AI can trick the "guard" into letting them "out". An ASI? Come on.

 

 

What WOULD its ultimate goal be that is beyond humans? Yeah, there are some things that need to be done, like find the cure for cancer (yeah that's pretty important), but beyond those kinds of things, when all those human problems are solved, what else is there to do? Just useless stuff. Just stuff for kicks.

How can someone be this unimaginative?



#14
Tav-El

    Member

  • Members
  • 18 posts

In my stories, some people live in what I call "circles". The outer circle is the most technologically advanced and operates on a futuristic mentality. Whenever we come up with an advancement, that circle adapts accordingly. The next circle chooses to be less technologically advanced than that, but not as far back as the Stone Age. Each concentric circle is then progressively (lol) less advanced, until reaching the center, which holds people who hand-farm and build cottages with their bare hands.

 

Or, it could be the other way around, with the central city being arcological in nature; I'm still not certain which one makes the most sense. But the idea is that everyone gets to live how they want to live without worrying about interference from others. You want a goat? Have your goat; just keep it away from the city. You want a flying car? Have your flying car... just don't fly it over people who don't like flying cars.


Website where I post one new story every single day: www.nickfisherman.com
Nanofiction account where I tweet a lie, joke, story, or quote six times a day: @NickFisherman
Personal Twitter: @TavisHighfill | Dream Journal: @IHadaDreamWhere | Random Photos: @WhatNickFishermanSees

#15
rennerpetey

    To infinity, and beyond

  • Members
  • 176 posts
  • Location: Lost in the Delta Quadrant

I know that this post is dead and has been disproven, but think about what the OP said: if everything was perfectly simulated, would there be a 2010s version of me in a VR doing exactly what I was doing in the 2010s? Would he be conscious and believe he was acting on free will? Are we actually a simulation, thinking we have free will, but really just being projected by a higher species? If we will be capable of this in the future, who's to say we are not a projection right now, or a simulation in a big program?

 

I'll leave you with this


Pope Francis said that atheists are still eligible to go to heaven, to return the favor, atheists said that popes are still eligible to go into a void of nothingness.


#16
Jakob

    Fenny-Eyed Slubber-Yuck

  • Members
  • 5,236 posts
  • Location: In the Basket of Deplorables

I don't believe in free will anyway. Any choices we make are determined by the state of mind we have when the choice is presented to us. (Evidence)




#17
rennerpetey

    To infinity, and beyond

  • Members
  • 176 posts
  • Location: Lost in the Delta Quadrant

True, I agree with that. I may not be conservative or capitalist-minded anymore, but the liberal mindset places too much value on human life and feelings. Now, credit where credit is due: it's the liberal mindset that will get us through the next 50-100 years in terms of indefinite life and AI not treating us as disposable. But from a pure scientific view, there is no soul, and we only make a few conscious decisions a week. So human life has no meaning. Yay.




#18
Jakob

    Fenny-Eyed Slubber-Yuck

  • Members
  • 5,236 posts
  • Location: In the Basket of Deplorables

True, I agree with that. I may not be conservative or capitalist-minded anymore, but the liberal mindset places too much value on human life and feelings. Now, credit where credit is due: it's the liberal mindset that will get us through the next 50-100 years in terms of indefinite life and AI not treating us as disposable. But from a pure scientific view, there is no soul, and we only make a few conscious decisions a week. So human life has no meaning. Yay.

This isn't really political though.



#19
rennerpetey

    To infinity, and beyond

  • Members
  • 176 posts
  • Location: Lost in the Delta Quadrant

It is now




#20
Jakob

    Fenny-Eyed Slubber-Yuck

  • Members
  • 5,236 posts
  • Location: In the Basket of Deplorables

It is now

Because you say so? Because reasons?





