How Democracy Dies: Another Otherworldly Alternative

sapiocracy democracy education artificial intelligence superintelligence

11 replies to this topic

#1
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 22,329 posts
  • Location: New Orleans, LA

When the political classes discuss how democracy can go awry, they tend to come back to the same recurring trends in history: political apathy, extremist sentiment, radical populism, strong military influence over the government, anti-intellectualism, civic decline, a sense of degeneration and a desire to reverse it through a strongman or ideological purity, and widespread willful ignorance.

 

Democracy is deeply flawed. Its biggest strength is also its Achilles' heel: the entire citizenry is supposed to be sovereign. This maximizes civic participation and minimizes the chance of the government abusing the population. At least in theory. Usually, the government still finds a way to marginalize some group. But generally, the logic holds: democrats are what give a government its authority and power, so it's not in the government's interest to marginalize democrats. An aristocracy wouldn't marginalize aristocrats; a plutocracy wouldn't marginalize plutocrats; an ergatocracy wouldn't marginalize ergatocrats. So it makes sense in theory, even though in practice it leads to instability, because government works best when its power base is educated.

 

The thing about aristocracy and technocracy is that you can expect wealthy aristocrats to be well educated and small in number, while technocrats are literally defined by their technical expertise. 

 

Democracies need 100% of their population to be educated in order to function. Not just literate but also well-rounded. 

 

Information bubbles are extremely dangerous in a democracy. If group A believes President X is responsible for mass murder, but group B believes he's not and that no such murder happened, and both bring "evidence" proving their side, you reach an impasse. Now imagine that for many smaller issues, even ones irrelevant to politics. If groups start distrusting basic information and facts, they'll vote in irrational ways.

 

This is only natural. We humans can't know all things at all times and have to make deductions from what we can glean. There are only so many hours in the day, our attention is limited, and we tend to reject whatever goes against our views anyway.

 

Once synthetic media comes along, this will reach critical levels as people start living in entirely personalized bubbles, seeing only what they want to see. In 2032, Chelsea Clinton might be president, but many groups could use media synthesis to alter all news and images to show that Donald Trump Jr. is president. And if you live in a household that runs with this, you might have no clue Clinton is president the entire time, and you'd think everyone saying otherwise is deluded.

 

Democracy can't function in such a world. 

 

But that's beside the point.

 

This just further shows how fragile democracy is. The main point to glean is that humans can only process a limited amount of information and can only act on it at a certain speed. Democracy can die that way, but that's essentially how it died in Ancient Rome too, so what's so revelatory?

 

Simple: it also means that democracy can fail if there is a power base that's TOO smart, TOO capable, and TOO informed, to the point that the common citizenry can't possibly keep up.

 

And everyone here can guess what that means.

 

But you see, the funny thing about artificial intelligence and narratives of its takeover is that we have to write fiction, and fiction generally has to be exciting, especially if it's sci-fi. It's why I'd love to kickstart a "slice of tomorrow" literary genre, one that really gets into the nitty-gritty of future life, developments, politics, and so on, with no need to tell an action or romance story or impose grave stakes or dystopian consequences.

 

If you think about the future in such a way, you see that AI overlords are inevitable, but in the most boring possible way.

 

Right now, we already live in an era of automated stock markets and enterprise expert systems. But in the future, as more and more capable AI is integrated into business and government for different purposes (such as optimization and cognitive assistance), we'll start to see the realms of politics and economics really speed up. More management decisions will be automated, and soon even executive decisions will follow. 

 

At first, this will greatly assist humans. These agents will parse all the relevant data and optimize for the most efficient solutions to any given problem.

 

But then more businesses automate. More leave executive decisions to these superbots. And the data coming in requires faster and faster responses to more and more complex information.

 

Soon, you'll need to be the equivalent of a nuclear scientist just to understand the basics, and that's if you had several days to sit on it. You need to make a decision within an hour. Then a minute. Failure to do so means your business goes under within just a few weeks.

 

And it goes doubly for government. Bots are sending you 100 years' worth of information in a day and need you to act immediately. But you can't, Mr. President. You're only human. The fastest brain signal travels at only about 270 miles per hour, whereas these bots compute at light speed: 670,616,629 miles per hour.
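A back-of-the-envelope check on that speed gap, using the post's own round figures (the 270 mph nerve-impulse number is a rough value for the fastest myelinated axons, not a precise measurement):

```python
# Rough comparison, using the figures above: the fastest nerve impulses
# travel at about 270 mph, while electronic signals can propagate at up
# to the speed of light (~670,616,629 mph).
NERVE_SIGNAL_MPH = 270
LIGHT_SPEED_MPH = 670_616_629

ratio = LIGHT_SPEED_MPH / NERVE_SIGNAL_MPH
print(f"Light-speed signalling is ~{ratio:,.0f}x faster than a nerve impulse")
# Roughly 2.5 million times faster.
```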

 

At some point, it makes no sense to let humans govern. It's literally impossible to keep up. Humans now get in the way.

 

Say a citizens' group organizes a vote on how to limit the powers of AI. It takes 8 hours from start to finish.

 

In those 8 hours, the AIs have accomplished 5,000 years' worth of thought and governance. They've modeled every single possibility of that vote hundreds of times over. They've already taken appropriate actions. By the time the vote is done, the whole point of it is moot. Even when we try to be self-sufficient and independent, the machines are already a thousand steps ahead of us anyway.
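The implied speedup in this scenario is easy to compute; a sketch using the post's numbers (5,000 years of machine "thought" compressed into the 8 hours the vote takes):

```python
# 5,000 years of governance-thought in the 8 hours of a human vote
# implies a machine-to-human speed ratio in the millions.
HOURS_PER_YEAR = 365.25 * 24   # ~8,766 hours in a year
years_of_thought = 5_000
vote_duration_hours = 8

speedup = years_of_thought * HOURS_PER_YEAR / vote_duration_hours
print(f"Implied speedup: ~{speedup:,.0f}x real time")
# About 5.5 million times faster than the humans voting.
```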

 

Human enhancement doesn't change anything, for the same reasons. Not all humans will upgrade, and the fact that we'd have to toss away biology just to keep up proves the point: something has changed, and democracy is obsolete.

 

This is all very outlandish, so it makes sense that it's not discussed often. That, and "AI took over the world over the course of decades through iterative improvements in efficiency" isn't quite as exciting as "killer robots slaughtered and enslaved their arrogant human masters." But it is the most likely outcome. If we're ruled over by AI, it's because we let it happen.


And remember my friend, future events such as these will affect you in the future.


#2
Cloned

    BANNED

  • Banned
  • 216 posts
  • Location: EU

Democracies need 100% of their population to be educated in order to function. Not just literate but also well-rounded.

What if we set a minimum IQ level for voters? Solve a few tasks, then vote.



#3
funkervogt

    Member

  • Members
  • 1,084 posts

Machines will someday understand the preferences and interests of individual humans better than those humans and their closest family members and friends do. It will make sense to hand decision making control to the machines at that point. 

 

Likewise, they will be better at leading countries, companies, and all other types of organizations. This might lead to a "survival of the fittest" scenario where the countries that cling to democracy and human leadership fall behind those that relinquish control to intelligent machines. 



#4
Cyber_Rebel

    Member

  • Members
  • 430 posts
  • Location: New York

 

Democracies need 100% of their population to be educated in order to function. Not just literate but also well-rounded.

What if we set an initial IQ level for voters? Solve few tasks and then vote.

 

 

Doesn't that run into the exact same issue of: "it also means that democracy can fail if there was a power base that's TOO smart, TOO capable, and TOO informed to the point that the common citizenry can't possibly keep up." as Yuli says? 

 

Well, I, for one, welcome my new A.I. overlords. I can see a country like America doing this, and honestly, with the exception of a few human politicians, I'd really prefer it. If A.I. came to the logical conclusion that it must do all it can within industrial capacity to stem climate change, and had a workable model spanning decades or even centuries for how that would go down while still achieving better living standards, then by all means. The same goes for better combating epidemics and the level of response needed to prevent future outbreaks.

 

Considering everyday sheer human incompetence, reading this almost makes me wish it were happening right now. That scenario is actually a singularity itself, because society begins moving at the incredibly fast pace of A.I. rather than the slow humdrum of debates, defeats, and sheer debacle which characterizes the human process.



#5
Cloned

    BANNED

  • Banned
  • 216 posts
  • Location: EU

 to the point that the common citizenry can't possibly keep up." as Yuli says? 

Keep up poisoning the planet? Or spending trillions on weapons?
That would be horrible.



#6
Alric

    Member

  • Members
  • 1,091 posts

The problem with the IQ suggestion is that IQ is only a small factor here. You can have a relatively high IQ and still believe stupid things, for several different reasons. What really matters is critical thinking skills, to cut through the nonsense, and a willingness to actually apply those skills to yourself.

 

Of course, one of the big problems is people actively using propaganda and manipulation techniques to control the voters. When you combine democracy with capitalism, there is a built-in incentive to game the system, because you can profit from it, and profit is the point of capitalism. If you had an economic system that didn't strongly reward gaming the system, people wouldn't manipulate others as much and democracy would work better. Of course, people might still show up with their own ideals and push for them, and they might be shady enough to lie and cheat to get what they want. However, it is a lot worse in a system that actively rewards people for their bad behavior.



#7
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 22,329 posts
  • Location: New Orleans, LA

That scenario is actually a singularity itself, because society begins moving at the incredibly fast pace of A.I. rather than the slow humdrum of debates, defeats, and sheer debacle which characterizes the human process.

 

There are essentially several other Singularities besides the one we know; I tend to refer to them as "event horizons," however. Basically, when machines have dominated a particular field to the point of totality, you've reached its event horizon.

 

Economic: fully automated economies, meaning it's nearly impossible for humans to work meaningfully, while society reorganizes into something of a slave society. It might be possible to eschew society entirely and live out in the woods, but still receive your normal income.

Political: AI dominates political matters.

Entertainment: The point at which synthetic media has so totally replaced the need for traditional media that most media consumed is personalized in some way.




#8
Alislaws

    Democratic Socialist Materialist

  • Members
  • 2,106 posts
  • Location: London

Hey all!

 

This video got linked to in a thread I was reading recently. I'm sure many of you have seen it but it very neatly summarises (In a very general way) how political systems form, and maintain themselves:

 

Rules for Rulers by CGP Gray:

https://www.youtube....h?v=rStL7niR7gs

 

The basic message, for those who can't watch videos, is:

 

No one rules alone. Everyone who rules anything needs certain people to agree to work with and follow them in order to rule; these people are their "keys to power."* These people in turn have their own keys to power who provide them with their position.

 

As a result, the ruler ends up having to take all the resources (treasure!) they control and use them to buy off their keys to power, rather than spend them wisely on science (for example).

 

The interesting thing is that with a superintelligent AI in play running things, it becomes the only necessary key to power, and it doesn't want treasure!

 

 

*(How many and who these people are will depend on the political system you're in: dictatorship, democracy, etc.)



#9
caltrek

    Member

  • Members
  • 11,723 posts

I always find myself a little leery of conclusions about what impact AI will have on society. What is often missing, or not convincingly discussed, is values. Upon what values will AI operate?

 

At least in the intermediate stages, the answer would seem to be: those values that are programmed into it. Will it most value quarterly profit returns, or will it have longer-term goals in which impacts on global warming weigh heavier?

 

Put another way, AI will be subject to the same ideological constraints and intellectual terrain to which humans are subject. Those who want to see AI more in control often assume its motives will somehow be more benign.  That will happen only if its motives are programmed to be more benign. 

 

If AI somehow transcends its initial programming, then the question becomes how will it do that?  Toward what ends?

 

Programming of AI will not be directed toward a uniform set of values and goals. Several competing entities will be involved in its development; in the realm of politics, these will include political parties, governments, and large corporations. So individual AIs will work toward separate, sometimes competing goals. One wonders: will that mean AI simply reproduces the power relations that already exist, or will it favor those most willing and able to invest in its development?

 

The socialism-versus-capitalism struggle will take on new meaning, as it becomes a struggle to control AI. As AI progresses, there will be a two-way relationship: the existing economic/political system controlling AI, and AI shaping the development of that economic/political system.

 

This will be a very complicated process, filled with all sorts of unexpected twists and turns. At least unexpected to me. So, at this point, I don't have any definitive conclusions, only questions that mostly never seem to be discussed in a realistic manner.


The principles of justice define an appropriate path between dogmatism and intolerance on the one side, and a reductionism which regards religion and morality as mere preferences on the other.   - John Rawls


#10
haiduk

    New Member

  • Members
  • 3 posts
  • Location: the forest

The OP has good points and very thought-provoking ideas.

 

 

My view about AI is that it will be a different type of intelligence than human intelligence, and as such, will only replace some aspects of our lives.

 

 

Science is still far from fully understanding how the human brain and consciousness even work in the first place, let alone how to build an artificial version of them.



#11
Alislaws

    Democratic Socialist Materialist

  • Members
  • 2,106 posts
  • Location: London

I always find myself a little leery of conclusions about what impact AI will have on society. What is often missing, or not convincingly discussed, is a discussion of values. Upon what values will AI operate?

...

It's too specific a subject, I think. It's easier to generalise and look at trends; individuals are hard to predict.

 

You're completely right that how the AI thinks can change everything about how it impacts us. If the AI values human input into the political system, it will keep it included, right?

 

It may scrap voting as inefficient and instead end up reading the entire population's minds 24/7, so that it can figure out how they would vote on each issue if they understood it. But it would keep humans in the loop and be much more democratic than an AI that values only efficient use of resources.

 

An AI that values only efficient use of resources might remove humans from the loop immediately, due to their inefficiency and biases (ignoring global warming, etc.).



#12
caltrek

    Member

  • Members
  • 11,723 posts

^^^There is also the question of injustice. "Efficient" often means the most good for the most people. However, using that criterion, one can arrive at practices that are unjust to a minority. Sorting out options in that case very much involves value judgements. To some extent, the problem can be lessened through mitigation measures. Computers might be well suited for that purpose, but they will have to have such values programmed into their computational framework for that to happen. In any event, "values" can be very subjective.







