
"Intelligence explosion"? "Singularity"? "Infinitely complex future"? I think not.


21 replies to this topic

#1
Ready Steady Yeti

    Member

  • Banned
  • 137 posts

Humans are animals. It is true that we are very different from other animals in our ability to understand complex problems that other animals can't even begin to understand. But in the end, we're just animals. We react to various stimuli. We don't actually make our own choices; rather, the choices we make are all due to some stimulus that makes us desire to do that thing. People talk about "nature", but really, everything is nature. We are nature; everything we make, do, or think about is nature. Everything that exists is technically "nature". The desire for me to write this forum topic is due to some stimulus in my brain; I may be making the decision to write it, yes, but it is to some degree automatic, based on how my brain reacts to certain things around me.

 

Anyway, the point is that humans have limitations. Now let's look at the technology we've created up to today. Every bit of that technology runs at some human's command. If we humans left the planet and left all that technology behind, the technology would just stop doing things, since humans would no longer be telling it to do anything. If I left the room as I was typing this topic, and never touched this Chromebook again, you would never have seen this topic, because the Chromebook isn't intelligent enough to just scan my brain and finish the topic for me based on what I would say, basically emulating my brain.

 

Another thing: every piece of technology we make has some link with our human culture and behaviors. I once wondered what it would be like if the internet was the only universe that existed, and I just literally lived inside a computer and browsed the internet. Well, that wouldn't be possible, as I think about it now, because without real life, the internet would barely have anything to talk about. There would be no videos of people IRL, because where would the people be? And how could we make video games if we didn't have any animals, insects, people, coins, etc. to base them on? Donkey Kong wouldn't exist as he is if gorillas didn't exist IRL, and Sonic wouldn't exist as he does if hedgehogs didn't exist.

 

Due to human limitations, I don't believe that the far future is going to be anything particularly special in technology. Yes, there will be major improvements, but eventually physical and biological limitations will bring the constant increase in technological improvement that we see today to a stop, or at least a very large slowdown. A "singularity" is sort of ridiculous if you think about it from this standpoint, since I believe we in 2017 are very close to that point I speak of, where we have just about solved most of the issues we can in science and technology. Of course, there will still be ideas and theories, like "teleportation" or "quantum computing", etc. (which may or may not come around, I don't know; I'm just using those as examples), that we as humans will just not be able to improve past a certain point, or bring about at all.

 

Regarding a technological "intelligence explosion" or the "Singularity", I think it's possible (sort of) that something similar to that idea might happen. But I wouldn't count on anything like that happening anytime soon. I see robots becoming part of our society and being as intelligent as us in about 200 years. We won't be alive to see it, if it is even possible at all.

 

Note that I said AS intelligent as us. Meaning that it'll take a huge amount of time and effort just to make machines as intelligent as US, or nearly so. Don't even mention machines billions of times smarter than us by 2045; that's WAY too soon by a long shot (and in fact impossible to happen at all).

 

Sure, maybe one far-off day, we'll have machines that are a bit more intelligent than the smartest of us. But they're not gonna be GODLY or anything like some suggest. They'll just be automatons that help us with stuff. They'll still have issues and problems. Just like everything does.

 

How can we expect to create a god when we're not gods ourselves? You can't just make something out of nothing. These robots aren't just going to increase their own intelligence. And how would they even know how to? We can't even do that to OURSELVES! We can't just give ourselves godlike "intelligence". So how do we expect to make a machine do the same? Machines will not be able to exceed the limitations that we humans know. Beyond that is impossible.

 

See, there are some forms of knowledge, vision, or intelligence that humans will biologically NEVER be able to grasp. We DON'T know what it would be like to be a god, because we're not gods. A hypothetical "god" could understand things that no human ever born could even think of for a second. Neither will the machines we create be able to understand such things. Machines also have limits. Though they're amazing, they're not infinite.

 

Also, intelligence can't just be defined by a number. It is a term we humans made up. It has no actual meaning. It's not something you can just grab and max out. It's not like the life count in video games, which, with certain hacks, you can just max out to the highest possible number.

 

In conclusion, this is my personal opinion. Regarding the 2045 theory, or the 2070 theory or whatever, I'm sure most of us on this forum will be alive in 2045, so we'll see then whether the singularity happens as told or not. But, frankly, I think most of us here will be quite disappointed when 2045 turns out to be not the year of any unique singularity, but just another normal, human year.



#2
Raklian

    An Immortal In The Making

  • Moderators
  • 6,376 posts
  • Location: Raleigh, NC

I'd like to think intelligence is the end result of a process, rather than the process itself.

 

Who cares what goes on in the circuits, as long as it outputs a result that is meaningful, insightful, useful, or something we humans can totally relate to?

 

But there will be a point when we can no longer grasp the actions/processes of synthetic intelligence as it gets more sophisticated. Does this mean it's no longer intelligent? Not by a long shot.

 

Intelligence does not require others to understand it. It can be a standalone phenomenon - after all, as Ready already mentioned above, intelligence is meaningless on its own. It's a process which produces a result that effects some kind of change in the environment, regardless of how coherent it may be, generally speaking.


What are you without the sum of your parts?

#3
Zaphod

    Esteemed Member

  • Members
  • 569 posts
  • Location: UK
Intelligence has arisen through millions of years of evolution. Essentially you can think of genes as self-improving algorithms that are trying to optimise their own reproductive success. It is a remarkably simple process of incremental mutations leading to chance improvements in reproductive ability. Given enough time this has led to such complexity that there is now a species that is not only self-aware, but understands its own evolutionary history, the workings of cells in its body and even the rules of physics. This is all from a process that isn't even trying to optimise intelligence, it's just that improving intelligence is conducive to increased reproductive fitness.
 
Now, artificial intelligence will be created in roughly the same way that biological intelligence was, but it is not constrained by the slow and inefficient process of evolution. Many machine learning algorithms are akin to biological evolution and operate by way of self-improvement. We can scale computing speed and capacity up to orders of magnitude beyond the human brain. Given enough capacity we could create a trillion different AI algorithms, with each iterative generation lasting a fraction of a second - all trying to maximise their own intelligence.
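A bare-bones genetic algorithm shows the kind of loop being described. This is my own illustration, not anything from a real AI system: the bit-string genome, the fitness function and all the parameters are invented for the example.

```python
import random

GENOME_LEN = 32       # each "genome" is a string of bits (invented for the example)
POP_SIZE = 100        # genomes per generation
MUTATION_RATE = 0.02  # chance of flipping each bit

def fitness(genome):
    # Stand-in for "reproductive success": just count the 1 bits.
    return sum(genome)

def mutate(genome):
    # Incremental mutations: flip each bit with small probability.
    return [b ^ 1 if random.random() < MUTATION_RATE else b for b in genome]

# Start from completely random genomes.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(200):
    # Selection: the fittest half survives to reproduce.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    # Reproduction with mutation: chance improvements accumulate.
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

print("best fitness:", fitness(max(population, key=fitness)))  # climbs toward 32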
 
Unless we cause our own extinction or similar, we WILL create a superintelligence.


#4
Ready Steady Yeti

    Member

  • Banned
  • 137 posts

 

Intelligence has arisen through millions of years of evolution. Essentially you can think of genes as self-improving algorithms that are trying to optimise their own reproductive success. It is a remarkably simple process of incremental mutations leading to chance improvements in reproductive ability. Given enough time this has led to such complexity that there is now a species that is not only self-aware, but understands its own evolutionary history, the workings of cells in its body and even the rules of physics. This is all from a process that isn't even trying to optimise intelligence, it's just that improving intelligence is conducive to increased reproductive fitness.

Now, artificial intelligence will be created in roughly the same way that biological intelligence was, but it is not constrained by the slow and inefficient process of evolution. Many machine learning algorithms are akin to biological evolution and operate by way of self-improvement. We can scale computing speed and capacity up to orders of magnitude beyond the human brain. Given enough capacity we could create a trillion different AI algorithms, with each iterative generation lasting a fraction of a second - all trying to maximise their own intelligence.

Unless we cause our own extinction or similar, we WILL create a superintelligence.

 

But how can we make such algorithms if we can't even figure out our own algorithms? The point is, we can't create an intelligence as powerful as you describe, because we can't grasp that kind of power. We have no idea what such a power would be like; therefore we can't create it.

 

You can't make something out of nothing. This is like saying you're going to take two small pieces of wood and use them to build the world's largest mansion.

 

There is always a limit, and there's always a system. There's a point beyond which any system can't be improved anymore. In 2017, we're near that point in our technology. By 2045, we'd be lucky to have a technology as intelligent as a squirrel.



#5
Sciencerocks

    Member

  • Members
  • 5,699 posts

This election has shown me that there's a better chance of the exact opposite in the short term for America, and that science, education and evidence-based reasoning could really take a hard hit. Understanding human history, and the many bumps in the road created by human idiocy, should always be factored into any future prediction, as the past 150 years isn't the norm....

 

Not saying that humanity won't one day become something great, but in America's case it isn't looking so good.


To follow my work on tropical cyclones

http://z7.invisionfr...php?showforum=1 Archives...Probably the best in the entire world within my opinion -->>> http://z7.invisionfr...php?showforum=4


#6
Jakob

    Fenny-Eyed Slubber-Yuck

  • Members
  • 4,805 posts
  • Location: In the Basket of Deplorables

 

Humans are animals. It is true that we are very different from other animals in our ability to understand complex problems that other animals can't even begin to understand. But in the end, we're just animals. We react to various stimuli. We don't actually make our own choices; rather, the choices we make are all due to some stimulus that makes us desire to do that thing. People talk about "nature", but really, everything is nature. We are nature; everything we make, do, or think about is nature. Everything that exists is technically "nature". The desire for me to write this forum topic is due to some stimulus in my brain; I may be making the decision to write it, yes, but it is to some degree automatic, based on how my brain reacts to certain things around me.

Fine. Free will doesn't exist. We are slaves to chemical cogs and quantum dice. I've admitted as much.

 

 

Anyway, the point is that humans have limitations. Now let's look at the technology we've created up to today. Every bit of that technology runs at some human's command.

We've created systems that are capable of acting with autonomy and doing things we don't predict. For instance, Google's AI can dream. AIs have composed works of literature and art. Facebook is developing AIs that create AI. And do you think somebody told Google's translation algorithm to design its own language?

 

 

Another thing: every piece of technology we make has some link with our human culture and behaviors. I once wondered what it would be like if the internet was the only universe that existed, and I just literally lived inside a computer and browsed the internet. Well, that wouldn't be possible, as I think about it now, because without real life, the internet would barely have anything to talk about. There would be no videos of people IRL, because where would the people be? And how could we make video games if we didn't have any animals, insects, people, coins, etc. to base them on? Donkey Kong wouldn't exist as he is if gorillas didn't exist IRL, and Sonic wouldn't exist as he does if hedgehogs didn't exist.

Er...yes. The internet requires physical reality to exist. Intelligence requires physical reality to exist. Fortunately, physical reality does in fact appear to exist. Unless you're an immaterialist. In which case, I've got a fitting response for you.

 

 

Due to human limitations, I don't believe that the far future is going to be anything particularly special in technology.

Human limitations can be transcended.

 

 

Yes, there will be major improvements, but eventually physical and biological limitations will bring the constant increase in technological improvement that we see today to a stop, or at least a very large slowdown.

There are indeed physical limits to computing efficiency. When we hit those, we build bigger computers. There is no law of physics that says we can't build a computer the size of an asteroid, moon, or planet...or bigger.

 

 

A "singularity" is sort of ridiculous if you think about it from this standpoint, since I believe we in 2017 are very close to that point that I speak of, where we just kind of have solved most of the issues we can in science and technology.

Many a great luminary has claimed such things. And here we are.

 

 

Of course, there will still be ideas and theories, like "teleportation" or "quantum computing", etc. (which may or may not come around, I don't know; I'm just using those as examples), that we as humans will just not be able to improve past a certain point, or bring about at all.

Quantum computing...has come around.

 

 

Regarding a technological "intelligence explosion" or the "Singularity", I think it's possible (sort of) that something similar to that idea might happen. But I wouldn't count on anything like that happening anytime soon. I see robots becoming part of our society and being as intelligent as us in about 200 years. We won't be alive to see it, if it is even possible at all.

Robots are literally a part of our society today; that's what a social robot is. To create conscious robots, we will have to understand what consciousness is and reverse-engineer it. That could be 200 years away or 20, but most experts think it's somewhere in between. I expect to be alive to see it. You're not alone in thinking that AGI is impossible, but you're in a small minority.

 

 

Note that I said AS intelligent as us. Meaning that it'll take a huge amount of time and effort just to make machines as intelligent as US, or nearly so.

That's obvious; it's why billions of dollars are going into AI research. Why are we special? Why will the AI train come to a screeching halt just before it hits you? If you're scared of being run over, then jump on! There's plenty of room for humans.

 

 

Don't even mention machines billions of times smarter than us by 2045; that's WAY too soon by a long shot

2045 is just a number randomly thrown out by one man many years ago. Stop clinging to it. It's not holy gospel. Well, maybe for some people...

 

There probably won't be a year when the Singularity "happens". That's a Kurzweil myth.

 

 

Sure, maybe one far-off day, we'll have machines that are a bit more intelligent than the smartest of us.

Indeed. Probably in about 100 years, they'll blow past baseline humans. A great many of these may be humans who have expanded their capabilities. Others will be derived from machines, and a few might be derived from animals.

 

 

But they're not gonna be GODLY or anything like some suggest.

Oh, hell, close enough. Not in a literal, metaphysical sense, but close enough. Just give it several thousand years.

 

What would you call me if I had a quantum brain the size of Earth, with a photonic/quantum/wormhole-based neural structure and vast nanofactories that could conjure up anything I imagine from raw material? Machine avatars capable of mining the sun and running factories producing exotic states of matter you can't even define? Neurally controlled utility fog swarms capable of manipulating my environment--including your own body and brain--down to the molecular level? The ability to see the future with astonishing detail and accuracy by simulating whatever I want to know--including the actions of baseline humans in near-perfect detail--and to destroy any threats to myself before they even realize they're a threat? Or to simply change their minds through subversion and deception (using my vast processing power and simulations to know exactly what to say and do), or outright neural manipulation? What would you call me?

 

 

They'll just be automatons that help us with stuff.

Look, I think your view of AI is shaped by TV and Hollywood.

 

That's what weak AI is for, and yes, we'll still have that. But come on...would you expect a sentient being not to have its own goals and values? That's just silly. On the other hand...

 

Yeah. And look at our avatars. This is hilarious.

 

 

They'll still have issues and problems. Just like everything does.

So you realize. Of course. Even bigger issues and problems.

 

Base emotions and primal fears will probably be ubiquitous among any human-derived intelligences. Think of it this way. We've got a "reptile brain" tucked away in the depths of our mind. We have a more complex mammal brain, and finally we have a human brain that supports language, reasoning, sapience, and all other such basic building blocks of humanity. For a post-human, that little human brain will still be tucked away in some far flung corner of its mind, but it will be the primal. The base. Higher minds have higher problems in addition to lower ones. Think of Maslow's hierarchy of needs with one or ten or a hundred levels stacked above the highest human level.

 

Could a common chimpanzee understand the concerns and troubles of a modern man? Some, yes, but many would be incomprehensible without simplification and metaphors. Could a modern man understand the concerns and troubles of a quantum planet-mind? Some, yes, but many would be incomprehensible without simplification and metaphors.

 

 

How can we expect to create a god when we're not gods ourselves? You can't just make something out of nothing. These robots aren't just going to increase their own intelligence. And how would they even know how to? We can't even do that to OURSELVES! We can't just give ourselves godlike "intelligence". So how do we expect to make a machine do the same? Machines will not be able to exceed the limitations that we humans know. Beyond that is impossible.

What would an Australopithecus dropped in New York City think but that we are gods? And yet somehow we got from there to here. And that was by the random whims of evolution, not some concerted effort.

 

 

See, there are some forms of knowledge, vision, or intelligence that humans will biologically NEVER be able to grasp.

Why, yes. Who said anything about biology? Biology alone is not the most efficient path.

 

 

We DON'T know what it would be like to be a god, because we're not gods. A hypothetical "god" could understand things that no human ever born could even think of for a second.

That is correct.

 

 

Machines will not be able to exceed the limitations that we humans know. Beyond that is impossible.

Machines have found out things we didn't know.

 

 

Also, intelligence can't just be defined by a number.

Yes. So don't define it by a number.

 

 

It has no actual meaning. It's not something you can just grab and max out. It's not like the life count in video games, which, with certain hacks, you can just max out to the highest possible number.

That is false. Intelligence exists. Unless you want to get eyeballs deep in philosophy.

 

 

But, frankly, I think most of us here will be quite disappointed when 2045 turns out to be not the year of any unique singularity, but just another normal, human year.

Undoubtedly. The Kurzweilian "rapture of the nerds" isn't the only path. We aren't going to be able to pinpoint any singularity. We'll argue over where it happened, when it happened, if it happened. It's like trying to pin down the first time man created fire or writing.




#7
Sciencerocks

    Member

  • Members
  • 5,699 posts

Probably the biggest reason that most technology and science will survive the long term, leading to far more advancement, is books and the internet. We live in an interconnected global civilization that makes it next to impossible for information to be lost, even when a major nation goes down for a time, unlike, say, 1,500 years ago. One reason for this is that once the printing press was invented, technology never went backwards the way it could pre-printing press, even with huge wars and dark times (most of Europe destroyed).

 

Humanity as an animal isn't smarter per se, but its computers and its mountains of information certainly are.




#8
Zaphod

    Esteemed Member

  • Members
  • 569 posts
  • Location: UK

 

 

Intelligence has arisen through millions of years of evolution. Essentially you can think of genes as self-improving algorithms that are trying to optimise their own reproductive success. It is a remarkably simple process of incremental mutations leading to chance improvements in reproductive ability. Given enough time this has led to such complexity that there is now a species that is not only self-aware, but understands its own evolutionary history, the workings of cells in its body and even the rules of physics. This is all from a process that isn't even trying to optimise intelligence, it's just that improving intelligence is conducive to increased reproductive fitness.

Now, artificial intelligence will be created in roughly the same way that biological intelligence was, but it is not constrained by the slow and inefficient process of evolution. Many machine learning algorithms are akin to biological evolution and operate by way of self-improvement. We can scale computing speed and capacity up to orders of magnitude beyond the human brain. Given enough capacity we could create a trillion different AI algorithms, with each iterative generation lasting a fraction of a second - all trying to maximise their own intelligence.

Unless we cause our own extinction or similar, we WILL create a superintelligence.

 

But how can we make such algorithms if we can't even figure out our own algorithms? The point is, we can't create an intelligence as powerful as you describe, because we can't grasp that kind of power. We have no idea what such a power would be like; therefore we can't create it.

 

You can't make something out of nothing. This is like saying you're going to take two small pieces of wood and use them to build the world's largest mansion.

 

There is always a limit, and there's always a system. There's a point beyond which any system can't be improved anymore. In 2017, we're near that point in our technology. By 2045, we'd be lucky to have a technology as intelligent as a squirrel.

 

 

The point I made above demonstrates how incredible complexity can arise out of simple rules and processes. This is not making something out of nothing. Self-organisation and apparent complexity can occur in most physical or natural systems.
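The textbook demonstration of that point is Conway's Game of Life: a live cell survives with two or three neighbours, a dead cell comes alive with exactly three, and from those two rules alone you get gliders, oscillators and endless structure that nobody put in. A minimal sketch (my own illustration, not anything from the thread):

```python
import itertools

def step(live):
    # Advance one generation; `live` is a set of (x, y) live cells.
    counts = {}
    for (x, y) in live:
        for dx, dy in itertools.product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                counts[cell] = counts.get(cell, 0) + 1
    # A cell lives next tick if it has 3 neighbours,
    # or 2 neighbours and is already alive. That's the whole rule set.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "glider": five cells that crawl across the grid forever.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(20):
    cells = step(cells)
print(sorted(cells))  # the same five-cell shape, displaced diagonally
```

Nothing in those rules mentions "glider", yet the behaviour emerges anyway: complexity out of simple rules is not something-from-nothing.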

 

I don't know why you think that 2017 happens to be near the endpoint of technology. The current advances in AI and AGI demonstrate that we are just at the very beginning of what is possible. The fact that we cannot "grasp that kind of power" is irrelevant. Already, nobody has any idea exactly how certain AIs have generated a solution to something. Do DeepMind's engineers understand exactly how AlphaGo reasons, despite being its creators?

 

Biological intelligence has already been created without any intelligence to guide it. Artificial intelligence will be created with intelligence to guide it. There will be a limit, but we have no idea what that limit will be.



#9
sasuke2490

    veteran gamer

  • Members
  • 462 posts

If brain emulation is possible, then an AI could just scan in new skills from our memories, becoming better than us. Assuming computers keep improving at their current rate for the next 30 years, human-level AI seems conservative, with atomically precise manufacturing being required for many future inventions.


https://www.instagc.com/scorch9722

Use my link for free steam wallet codes if you want


#10
Maximus

    Spaceman

  • Members
  • 1,520 posts
  • Location: Canada

 

But how can we make such algorithms if we can't even figure out our own algorithms?

 

Well, it looks like we already have.

 

 

Due to human limitations, I don't believe that the far future is going to be anything particularly special in technology. Yes, there will be major improvements, but eventually physical and biological limitations will bring the constant increase in technological improvement that we see today to a stop, or at least a very large slowdown. A "singularity" is sort of ridiculous if you think about it from this standpoint, since I believe we in 2017 are very close to that point I speak of, where we have just about solved most of the issues we can in science and technology.

 

I doubt we will ever get to that point. As our methods of observing phenomena in the universe advance, more questions will arise. Thus, once we solve something that seems like the "holy grail" of physics, our greater understanding of the universe will allow us to observe previously unnoticed phenomena.

 

You're probably right that there is a biological limitation to technological improvement. Science is becoming increasingly complex; to be considered an expert in one's chosen field can now take half a lifetime. People are spending more time than ever in school. We have somewhat solved this problem with specialization, but as Yuli so often says, just as chimps can understand only up to a certain point, we too probably have a limit (albeit a much greater one). However, neural implants and AI have the potential to solve this.

 

I am pretty skeptical about the prospects for a "human-like" AI. Creating intelligence is one thing, and we are clearly making huge strides in this area. You seem to doubt the power of AI; the trends are clear, however. Think of what NSA programs can do; humans would take centuries to sift through all that data. Think of the advances in virtual assistants such as Siri and Alexa. However, the emotional and ethical aspect of intelligence is a whole different story. We know that emotions come from chemical processes in the brain, but as far as I'm aware, not much has been said about how this would translate to an artificial intelligence. Ethics is even more unclear; it is an acquired cultural and societal trait. It's conceivable that AI could be taught to be "ethical", but again, this requires the AI to have emotions as well, not just rationality.

 

I know movies aren't the greatest source when it comes to AI, but Ex Machina really brings out a good point: how do we know if an AI is really human? It can clearly pass the 'Turing test', but how do you test for emotion? How do you tell if it loves you, or if it's using the appearance of emotion to achieve a cold and calculated outcome? We humans tend to project our own identities onto other intelligences, but we really have to consider that creating AI may not be creating a more advanced version of ourselves. It could be an entirely different form of intelligence.


If the world should blow itself up, the last audible voice would be that of an expert saying it can't be done. -Peter Ustinov
 

#11
Ready Steady Yeti

    Member

  • Banned
  • 137 posts

 

 

Machines will not be able to exceed the limitations that we humans know. Beyond that is impossible.

Machines have found out things we didn't know.

 

Yes. But allow me to clarify: are those things that we couldn't have figured out ourselves, as humans?

 

Think of these centuries-in-the-future machines that I speak of as human equivalents. Yes, they could come up with things that no human has ever come up with. But any other human could do that as well. I could draw a picture so unique that no other human had ever drawn that exact sequence of lines and colors before in human history.



#12
Jakob

    Fenny-Eyed Slubber-Yuck

  • Members
  • 4,805 posts
  • Location: In the Basket of Deplorables

 

 

 

Machines will not be able to exceed the limitations that we humans know. Beyond that is impossible.

Machines have found out things we didn't know.

 

Yes. But allow me to clarify: are those things that we couldn't have figured out ourselves, as humans?

 

Think of these centuries-in-the-future machines that I speak of as human equivalents. Yes, they could come up with things that no human has ever come up with. But any other human could do that as well. I could draw a picture so unique that no other human had ever drawn that exact sequence of lines and colors before in human history.

 

Could you come up with a math proof that would take 10 billion years to read? Yeah, I didn't think so.




#13
Alice Tepes

    NONDESCRIPT HUMANOID

  • Members
  • 199 posts
  • Location: from the desert town, located somewhere in the Southwestern United States, where all conspiracy theories are real.

Firstly, we will be able to figure out our own algorithm; just give us time.

Secondly, quantum computing is a real thing. It is just something that can only be understood using advanced mathematics, rather than the metaphors we normally use to describe things, because it does not conform to our macro-scale understanding of the world.

Now then, about the singularity: it is not about computers thinking of things that we can't. It's about a computer being able to understand its own algorithm and make its code and design more efficient faster than we can. As for it understanding its own algorithm: we understand how it works, so we can help it understand itself. There are also many limitations of the human mind that don't apply to computers in the same way. For instance, the hard memory capacity of computers is much larger than a human's, and it is expandable. Computers also don't forget things as easily as we do; they have a photographic memory. Their ability to calculate is beyond that of any human, and they don't make nearly as many mistakes. They can think about more than one thing at the same time, and mathematics comes naturally to them. Teaching a computer doesn't take years like a human, once you have done it for the first time.
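A toy sketch of that "improve your own machinery" loop, purely my own illustration with an invented objective and numbers: a hill-climber that mutates its own step size along with its current answer, and keeps both whenever the result improves, so the optimiser ends up tuning the very knob it searches with.

```python
import random

def objective(x):
    # Some task the system is scored on (invented for the example).
    return -(x - 3.14) ** 2

x, step = 0.0, 1.0  # current answer, and the optimiser's own search parameter
for _ in range(1000):
    new_x = x + random.uniform(-step, step)           # mutate the answer...
    new_step = step * random.choice((0.5, 1.0, 2.0))  # ...and the machinery itself
    if objective(new_x) > objective(x):
        x, step = new_x, new_step  # keep improvements to both together
print(round(x, 3))  # homes in on 3.14, with the step size adapting along the way
```

Real self-improving systems are vastly more involved, but the structure, a system whose own parameters sit inside its improvement loop, is the same idea.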

Computers also have the advantage of running strictly on electricity and/or light, where we humans use chemical reactions in conjunction with electricity, which is much slower.

Lastly, there are computers that have intelligence by any definition that exists in a reputable source.

I could go on for much longer, but I'll end with the fact that we are already using machines all the time to find information that we could not have found on our own, like a microscope or a magnetometer.


aspiring gynoid macrointelligence


#14
Alice Tepes

    NONDESCRIPT HUMANOID

  • Members
  • 199 posts
  • Location: from the desert town, located somewhere in the Southwestern United States, where all conspiracy theories are real.

One more thing: it is a fallacy to say that we cannot create things that are more intelligent than us, because we ourselves were "created" by a process that lacked intelligence to begin with.




#15
Ready Steady Yeti

    Member

  • Banned
  • 137 posts

Do you think the singularity-era superintelligence will be able to know every detail of every movie, TV show, YouTube video, Facebook post, tweet, and video game that has ever existed?

 

You know... just in case this dystopia does happen in my lifetime, if at all, I might as well enjoy it.



#16
Jakob

    Fenny-Eyed Slubber-Yuck

  • Members
  • 4,805 posts
  • Location: In the Basket of Deplorables

Do you think the singularity-era superintelligence will be able to know every detail of every movie, TV show, YouTube video, Facebook post, tweet, and video game that has ever existed?

 

You know... just in case this dystopia does happen in my lifetime, if at all, I might as well enjoy it.

Why would you think there would only be one? I'm interested in expanding my cognitive capabilities, not getting subsumed by some kind of Borg collective.

 

And why would all of these intelligences have any great interest in baseline human civilization? Why wouldn't some just not care?

 

But, yes, I suppose there would be some who are interested in such things. Just as there are humans today who study Neanderthal cave paintings.



#17
Sciencerocks

    Member

  • Members
  • 5,699 posts

Do you think the singularity-era superintelligence will be able to know every detail of every movie, TV show, YouTube video, Facebook post, tweet, and video game that has ever existed?

 

You know... just in case this dystopia does happen in my lifetime, if at all, I might as well enjoy it.

 

Maybe, or even likely, but we'll probably program the AI to use common sense and to consider more important things, like:

Cancer cures

Heart disease

Math proofs that we need confirmed

etc




#18
Sciencerocks

    Member

  • Members
  • 5,699 posts

One more thing: it is a fallacy to say that we cannot create things that are more intelligent than us, because we ourselves were "created" by a process that lacked intelligence to begin with.

 

I think this will occur. It will start reading and using logic, and soon it will easily move above us.




#19
Ready Steady Yeti

    Member

  • Banned
  • 137 posts

Do you think the singularity-era superintelligence will be able to know every detail of every movie, TV show, YouTube video, Facebook post, tweet, and video game that has ever existed?

You know... just in case this dystopia does happen in my lifetime, if at all, I might as well enjoy it.

Why would you think there would only be one? I'm interested in expanding my cognitive capabilities, not getting subsumed by some kind of Borg collective.
 
And why would all of these intelligences have any great interest in baseline human civilization? Why wouldn't some just not care?
 
But, yes, I suppose there would be some who are interested in such things. Just as there are humans today who study Neanderthal cave paintings.

So, as you say, every unit of AI will have its own desires and interests? Meaning that they'll be similar to humans in that they are all individual in thought (though much more intelligent)?

#20
Jakob

    Fenny-Eyed Slubber-Yuck

  • Members
  • 4,805 posts
  • Location: In the Basket of Deplorables

 

 

Do you think the singularity-era superintelligence will be able to know every detail of every movie, TV show, YouTube video, Facebook post, tweet, and video game that has ever existed?

You know... just in case this dystopia does happen in my lifetime, if at all, I might as well enjoy it.

Why would you think there would only be one? I'm interested in expanding my cognitive capabilities, not getting subsumed by some kind of Borg collective.
 
And why would all of these intelligences have any great interest in baseline human civilization? Why wouldn't some just not care?
 
But, yes, I suppose there would be some who are interested in such things. Just as there are humans today who study Neanderthal cave paintings.

So, as you say, every unit of AI will have its own desires and interests? Meaning that they'll be similar to humans in that they are all individual in thought (though much more intelligent)?

 

Why wouldn't they? If anything, they would be more human than baseline humans, not less. Expanding upon their humanity, not replacing it with something else.





