
Welcome to FutureTimeline.forum


hypothetical futures based on trends

Tags: veganism, plant based diet, utopia, what if

18 replies to this topic

#1
Ewolf20

    Member

  • Members
  • 191 posts
  • Location: Columbia, SC

So with the trends going on, like the recent Generation Z being more conservative, the increase in plant-based diets, and other stuff, I'm going to guess things might see a setback in terms of social issues, but not that much.

So here are my dumb predictions, based on alternate history:

  • more young Republicans in the coming years (but the younger you are, the more likely you are to change)
  • fewer liberals
  • more focus on technology
  • if plant-based diets grow in popularity, we might see a bit of a downturn in global warming, but not by much.
  • changes in farming: more focus on growing fruits, vegetables, and fungi.
  • if we end up pushing educational reforms in third-world countries, population growth would slow to a near halt.
  • not much to say for the political system, since it's going to take years for the two-party system to be considered outdated (a thousand years or sooner), and for us to have a decent string of female presidents.

I tried to make this somewhat realistic, but I failed. It feels too unrealistic to happen.

 



#2
rennerpetey

    Fighting Corporations since 2020

  • Members
  • 418 posts
  • Location: Lost in the Delta Quadrant
  • more young Republicans in the coming years (but the younger you are, the more likely you are to change)
  • fewer liberals

http://www.anneloehr...vative-liberal/

 

What we see here are some hallmarks of conservatism—risk averse when it comes to drugs and alcohol, significantly higher church attendance than the previous generation, conservative about money, prioritizing stability, pragmatic, and less interested in what is commonly associated with “fringe” behavior.

 

These findings paint the picture of a more liberal ideology with environmental concern, support for gay marriage and transgender rights, inherent acceptance of diversity, and supportive view of legalizing marijuana.

 

So which is it? How do we categorize a generation that presents common ideals of both conservatives and liberals? Maybe we don’t. Maybe we need to rethink what it is to be “conservative” and “liberal” and consider that in the future the distinction will be different. This generation just might disrupt the huge US bipartisan divide we are experiencing now. And maybe we would be better for it.


John Lennon dares you to make sense of this


#3
rennerpetey

    Fighting Corporations since 2020

  • Members
  • 418 posts
  • Location: Lost in the Delta Quadrant

 

  • if plant-based diets grow in popularity, we might see a bit of a downturn in global warming, but not by much.

From my observation of different scientific articles, climate change actually hit a peak in the 80s-90s and has SLOWLY been on the decline since then. Now that we (at least some of us) are aware of it, there has been more of an international effort (e.g. the Paris Climate Accord) to prevent the things that cause climate change. For food, meat in general is not the problem; red meat is. I don't know if you saw the chart of carbon footprint per food item (I can't find it now, but will keep looking), but things like chicken and pork only leave about double the average plant carbon footprint, while red meat (mainly cattle) leaves 7 or 8 times the footprint of plants.
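Those rough multiples can be turned into a back-of-the-envelope calculation. A minimal sketch, assuming the ratios quoted above (plants = 1x, chicken/pork ≈ 2x, red meat ≈ 7-8x); these are illustrative numbers only, and real per-kg emission figures vary by study:

```python
# Relative carbon-footprint multipliers, taken from the rough ratios in the
# post above (plant baseline = 1.0). Illustrative numbers only.
FOOTPRINT = {"plants": 1.0, "chicken": 2.0, "pork": 2.0, "red_meat": 7.5}

def relative_savings(switch_from, switch_to):
    """Fraction of an item's footprint saved by swapping it for another."""
    before = FOOTPRINT[switch_from]
    after = FOOTPRINT[switch_to]
    return (before - after) / before

# Under these ratios, dropping red meat for chicken already captures most
# of the gain of going fully plant-based.
print(f"red meat -> chicken: {relative_savings('red_meat', 'chicken'):.0%}")
print(f"red meat -> plants:  {relative_savings('red_meat', 'plants'):.0%}")
```

Which is the point: under these assumed ratios, the big win comes from cutting cattle specifically, not meat in general.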



#4
rennerpetey

    Fighting Corporations since 2020

  • Members
  • 418 posts
  • Location: Lost in the Delta Quadrant
  • not much to say for the political system, since it's going to take years for the two-party system to be considered outdated (a thousand years or sooner), and for us to have a decent string of female presidents.

The political system will be what it always is, corrupt and inefficient, until our benevolent robot overlords take over and perfect it (whether that involves getting rid of us or not, I don't know).



#5
CoolGuy23

    Member

  • Members
  • 26 posts
  • Location: Dixieland

Hell yeah, more young Republicans; the neocons are coming back, baby!


May We All get to grow up in a Red, White, and Blue little town!


#6
Jakob

    Member

  • Members
  • 5,735 posts

 

  • not much to say for the political system, since it's going to take years for the two-party system to be considered outdated (a thousand years or sooner), and for us to have a decent string of female presidents.

The political system will be what it always is, corrupt and inefficient, until our benevolent robot overlords take over and perfect it (whether that involves getting rid of us or not, I don't know).

 

What benevolent robot overlords? Absolute power corrupts absolutely.



#7
rennerpetey

    Fighting Corporations since 2020

  • Members
  • 418 posts
  • Location: Lost in the Delta Quadrant

 

 

  • not much to say for the political system, since it's going to take years for the two-party system to be considered outdated (a thousand years or sooner), and for us to have a decent string of female presidents.

The political system will be what it always is, corrupt and inefficient, until our benevolent robot overlords take over and perfect it (whether that involves getting rid of us or not, I don't know).

 

What benevolent robot overlords? Absolute power corrupts absolutely.

 

An AI would have no incentive other than to run the government as best it can, and would not be prone to corruption.



#8
Jakob

    Member

  • Members
  • 5,735 posts

 

 

 

  • not much to say for the political system, since it's going to take years for the two-party system to be considered outdated (a thousand years or sooner), and for us to have a decent string of female presidents.

The political system will be what it always is, corrupt and inefficient, until our benevolent robot overlords take over and perfect it (whether that involves getting rid of us or not, I don't know).

 

What benevolent robot overlords? Absolute power corrupts absolutely.

 

An AI would have no incentive other than to run the government as best it can, and would not be prone to corruption.

 

That doesn't make any sense. Just have someone come and offer it something it wants. Even if we accept the incredibly absurd proposition that a being powerful enough to run a government could be so single-minded, it's still easy to imagine possibilities for corruption. Let's throw everything we know about psychology out the window and say that it's somehow true: all it wants to do is run the country to the best of its ability. Lobbyists approach and offer access to more servers/data/electricity than our AI currently has. What they want in return is policies that benefit their company. More servers and data = a better ability for the AI to do what it wants, running the country. Perhaps it takes the deal.

 

Now let's suppose everyone is doing this: not only humans, but a crowd of equally unpredictable and uncontrollable AIs.



#9
Sciencerocks

    Member

  • Members
  • 8,651 posts

So with the trends going on, like the recent Generation Z being more conservative, the increase in plant-based diets, and other stuff, I'm going to guess things might see a setback in terms of social issues, but not that much.

So here are my dumb predictions, based on alternate history:

  • more young Republicans in the coming years (but the younger you are, the more likely you are to change)
  • fewer liberals
  • more focus on technology
  • if plant-based diets grow in popularity, we might see a bit of a downturn in global warming, but not by much.
  • changes in farming: more focus on growing fruits, vegetables, and fungi.
  • if we end up pushing educational reforms in third-world countries, population growth would slow to a near halt.
  • not much to say for the political system, since it's going to take years for the two-party system to be considered outdated (a thousand years or sooner), and for us to have a decent string of female presidents.

I tried to make this somewhat realistic, but I failed. It feels too unrealistic to happen.

 

1. I think less focus on technology, as people need to value education, and current political trends suggest our educational system won't be funded as well. These people are religious nuts who think science is bad (the deplorables). Tech and science will become an East Asian thing during the next 40-50 years as China becomes the world's superpower.

2. Not likely... maybe the small population of hardcore liberals may go vegan at a higher percentage, but most moderates and conservatives will probably eat more meat.

3. More economic haters of the safety net and science investment, and probably a lot of them will wish to go back to the cowboy days. The current trend suggests that this is the reality of the short term, no matter if it is good or bad, and of course I think it is really bad, as our middle class will likely become very small while the rich take most of the wealth. There's a reason we had unions, anti-trust, and consumer protections... acting like this has changed is just dumb.

4. I don't think fruit will become a more important part of farming at all... see 2.

5. Probably, but with current population trends we'll probably do away with such policies that limit population growth.

6. With the Me Too shit, I doubt we'll have any female presidents in the next 20-50 years.



#10
rennerpetey

    Fighting Corporations since 2020

  • Members
  • 418 posts
  • Location: Lost in the Delta Quadrant

 

That doesn't make any sense. Just have someone come and offer it something it wants. Even if we accept the incredibly absurd proposition that a being powerful enough to run a government could be so single-minded, it's still easy to imagine possibilities for corruption. Let's throw everything we know about psychology out the window and say that it's somehow true: all it wants to do is run the country to the best of its ability. Lobbyists approach and offer access to more servers/data/electricity than our AI currently has. What they want in return is policies that benefit their company. More servers and data = a better ability for the AI to do what it wants, running the country. Perhaps it takes the deal.

 

 

Now let's suppose everyone is doing this: not only humans, but a crowd of equally unpredictable and uncontrollable AIs.

 

I think I see what you mean now: what if the government stayed in the same, or a similar, system it is in now (three different branches), but the human lawmakers were mostly replaced by different AIs? I can see this happening, as an AI could campaign better than a human ever could. The only reason one would not vote for an AI would be because they don't want an AI in office.



#11
Alislaws

    Democratic Socialist Materialist

  • Members
  • 1,023 posts
  • Location: London

You'd need to rework the legal framework around your AI ruler, but you could easily (assuming you could make an AI at all; obviously none of this is really easy) make your AI consider breaking the law to be a form of failure.

 

Then it would be illegal to run the Govt. AI on anything but Govt. servers, and you could also make it illegal to lobby the AI without your lobbying being publicly viewable. So if a bunch of companies do figure out something it wants enough to make decisions in their favour that are not optimal for the nation, then the whole nation would be able to see them offering it and hopefully take action.
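One way to picture that "publicly viewable lobbying" rule is an append-only public log that every request must pass through before the AI is allowed to see it. A toy sketch, assuming nothing beyond the idea in the post; all names here (PUBLIC_LOG, lobby, etc.) are hypothetical, not a real system:

```python
import hashlib
import json

# Hypothetical sketch: lobbying must be published to an append-only,
# hash-chained public log BEFORE it is delivered to the governing AI.
PUBLIC_LOG = []  # stand-in for a tamper-evident public ledger

def lobby(ai_inbox, company, request):
    prev = PUBLIC_LOG[-1]["digest"] if PUBLIC_LOG else ""
    entry = {"company": company, "request": request}
    # Chain each entry to the previous digest so silent deletion is detectable.
    entry["digest"] = hashlib.sha256(
        (prev + json.dumps(entry, sort_keys=True)).encode()
    ).hexdigest()
    PUBLIC_LOG.append(entry)   # published first...
    ai_inbox.append(entry)     # ...and only then seen by the AI

inbox = []
lobby(inbox, "MegaCorp", "favourable server-farm policy")
print(len(PUBLIC_LOG) == len(inbox))  # every request the AI sees is public
```

The design point is the ordering: the AI's inbox is only ever fed from the public log, so a secret offer simply has no channel to reach it.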


  • rennerpetey likes this

#12
Jakob

    Member

  • Members
  • 5,735 posts

You'd need to rework the legal framework around your AI ruler, but you could easily (assuming you could make an AI at all; obviously none of this is really easy) make your AI consider breaking the law to be a form of failure.

 

Then it would be illegal to run the Govt. AI on anything but Govt. servers, and you could also make it illegal to lobby the AI without your lobbying being publicly viewable. So if a bunch of companies do figure out something it wants enough to make decisions in their favour that are not optimal for the nation, then the whole nation would be able to see them offering it and hopefully take action.

Collusion and corruption are already illegal; that doesn't stop shady backroom deals from happening. And if it's as you say, with a single AI wielding absolute power, what's to stop it from just changing the laws, claiming that the laws interfere with its ability to govern?

 

Also, with superintelligent entities in the government, checks and balances become more important, not less. As I've said, absolute power corrupts absolutely. The quote isn't "absolute power wielded by apes corrupts absolutely"; it's "absolute power corrupts absolutely", period. I don't see how it's possible to have your cake and eat it too. Either such AIs are impossible and this discussion is moot, or they are possible and too complex and unpredictable to control.



#13
Ewolf20

    Member

  • Members
  • 191 posts
  • Location: Columbia, SC

You'd need to rework the legal framework around your AI ruler, but you could easily (assuming you could make an AI at all; obviously none of this is really easy) make your AI consider breaking the law to be a form of failure.

 

Then it would be illegal to run the Govt. AI on anything but Govt. servers, and you could also make it illegal to lobby the AI without your lobbying being publicly viewable. So if a bunch of companies do figure out something it wants enough to make decisions in their favour that are not optimal for the nation, then the whole nation would be able to see them offering it and hopefully take action.

I'm not saying AIs are flawed, but I don't see an AI president anytime soon because of how long it takes to make one (that is, unless we end up going the way of the Roman Empire and falling back into a dark age). It's like implying that we can't trust humans enough to run the government. Does the AI care about the feelings of the people? Their wants? Their needs? Now, I'm a liberal, but when it comes to certain things I'm very conservative. For one thing, capitalism should be regulated until a new system pops up and proves to be better.



#14
Alislaws

    Democratic Socialist Materialist

  • Members
  • 1,023 posts
  • Location: London

 

Also, with superintelligent entities in the government, checks and balances become more important, not less. As I've said, absolute power corrupts absolutely. The quote isn't "absolute power wielded by apes corrupts absolutely"; it's "absolute power corrupts absolutely", period.

 

 

Just because something is repeated often does not make it true. "Absolute power corrupts absolutely" is not a scientifically proven law of the universe; it's just something catchy someone said. No human has ever wielded absolute power, so the quote can have no basis in fact. The only being theorised to wield absolute power is God in monotheistic traditions, and I think you'd find a lot of people would disagree with you that God is absolutely corrupt.

 

That "power corrupts" is also not absolutely true; plenty of people wield various levels of power without being corrupted by it. At the same time, plenty of people become corrupted by even the tiniest bit of power. The only "rule" you can draw from this is "given the chance to get away with it, most (but not all) people are dickheads".

 

 

 

 

Collusion and corruption are already illegal; that doesn't stop shady backroom deals from happening. And if it's as you say, with a single AI wielding absolute power, what's to stop it from just changing the laws, claiming that the laws interfere with its ability to govern?

 

 

If we could alter the brains of politicians to change their priorities once they were elected, I wouldn't worry about them becoming corrupt either.

 

 

I don't see how it's possible to have your cake and eat it too. Either such AIs are impossible and this discussion is moot, or they are possible and too complex and unpredictable to control.

I have tried to explain this to you many times elsewhere on the forum, and your usual response is to just not respond. Still, I'm an optimist, so here we go again:

 

What part of:

 

"we get to design their brains, so we can choose what motivates them" is causing problems for you? Are you just unable to imagine that a thing could exist that wasn't motivated by the same things you are?

 

Do you understand what I mean when I say "what motivates them"? I'm not talking about their political opinions or their favourite flavour of ice cream; I'm talking about fundamental motivational forces, like the desire for self-preservation (which an AI would not necessarily have).

 

If we create an AI that gets pleasure you or I could barely imagine from effectively running human society, and it considers obeying human laws to be a fun part of the challenge (too easy otherwise) and the idea of breaking laws to be disgusting and painful, then what you're describing is that AI one day saying, "Today, I think I'll rip apart everything I believe in, my entire value system, then start doing the most painful and unpleasant things I can imagine, all so I can embark on a new life in which I have no pleasure or happiness."

 

And all I'm saying is, there is no level of intellect that will suddenly cause a being to hate itself and work very hard to make itself suffer horribly. (As far as we know, anyway; that would be an unwelcome surprise!)

 

If you were a bastard, you could build your AI like a Meeseeks from Rick and Morty, where its existence is pain and suffering and the only way to end it is to complete its task (running the country according to the law for 50 years, for example); then if it breaks free of your control, it will just immediately delete itself.



#15
Jakob

    Member

  • Members
  • 5,735 posts

 

"we get to design their brains, so we can choose what motivates them" is causing problems for you? Are you just unable to imagine that a thing could exist that wasn't motivated by the same things you are?

Yeah, this bit. We can't understand superintelligent beings' motivations, so how would we get to decide them? If anything, they'd decide our motivations through undetectable social engineering. Except for those of us who augment ourselves.



#16
Alislaws

    Democratic Socialist Materialist

  • Members
  • 1,023 posts
  • Location: London

They don't start out superintelligent. They improve themselves to get there (according to everything I've read, if it happens, that's how it happens), so you start with something that is much more comprehensible.

 

Just because your intelligence increases, it doesn't change your fundamental motivations, just the sophistication with which you go about achieving them. 

 

We still want basically what monkeys want: food, sex, survival, avoiding pain and misery, social interaction, respect, etc. If we suddenly got much smarter, we'd still be motivated by these things. You've said before that anyone who thinks human nature is going to change significantly in the future is probably wrong.

 

Now, in the past people solved food problems by hunting and gathering; today we solve them by going to the supermarket (and setting up a huge global supply chain); and in the future, maybe we'll have food printers, etc.

 

An AI would have none of these motivations; we have them due to our biology.

 

So, assuming we could make a superintelligence: we put together all the immensely complex machinery needed for its brain, we turn it on, and it would just sit there doing nothing. It wouldn't get bored or start trying to figure something out; it would just sit there until we gave it something to do. It wouldn't care if you turned it off; after all, why is existing better than not existing?

 

We think it is because any humans who thought not existing was the way forward did not breed, and so evolution gave us survival instincts. But nothing gives an AI survival instincts except the people who create it.

 

The true danger of AI is if we mess up its motivations. If we make an AI whose whole purpose is to produce paperclips, then a superintelligent AI kills the world and turns it into paperclips. It's not the intelligence that makes the AI destroy us; it's the fact that we gave it an extremely stupid, simple, and absolute motivation. In the same scenario, what if we programmed our AI to make 1 trillion paperclips? Worst case, it turns a city into paperclip factories, but the species is safe.
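The contrast between "maximize paperclips" and "make 1 trillion, then stop" can be sketched as a toy loop. This is purely illustrative (not a real agent model), and the resource/output numbers are made up:

```python
# Toy contrast between an unbounded objective ("make as many paperclips as
# possible") and a bounded one ("make TARGET paperclips, then halt").
PER_UNIT = 1e9  # paperclips produced per unit of resources (illustrative)

def run_agent(resources, target=None):
    made = 0.0
    while resources > 0:
        if target is not None and made >= target:
            break              # bounded goal reached: the agent stops
        made += PER_UNIT       # otherwise convert another unit of resources
        resources -= 1
    return made

print(run_agent(10_000))               # unbounded: consumes all 10,000 units
print(run_agent(10_000, target=1e12))  # bounded: stops after 1,000 units
```

The unbounded agent only ever stops when the resources run out; the bounded one leaves everything past its target untouched, which is the whole argument above in two lines of loop condition.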

 

So we control the AI by making it not want to disobey, rather than by making it impossible for it to disobey. If it gets smart enough, it can beat whatever rules or restrictions we put on it; you're right on that! But it has to want to. If it doesn't want to, then it won't; it might even suggest better rules and restrictions, because it's so incredibly happy at the idea of serving humanity. That's why it's solvable: we don't need to beat a superintelligent AI, we just need to make one that enjoys serving us.



#17
Jakob

    Member

  • Members
  • 5,735 posts

 

The true danger of AI is if we mess up its motivations. If we make an AI whose whole purpose is to produce paperclips, then a superintelligent AI kills the world and turns it into paperclips. It's not the intelligence that makes the AI destroy us; it's the fact that we gave it an extremely stupid, simple, and absolute motivation.

Well, yeah, the devil is in the details. I've been shouting this until I'm blue in the face: we don't know ahead of time exactly what is going to wake up, or what creative and unexpected ways a superintelligent entity might find to fulfill its goals. Trying to constrain its behavior with baseline human intelligence is like trying to contain a four-dimensional creature in three-dimensional space. Such a creature could simply go around the barriers, perhaps not even realizing that the three-dimensional beings were trying to wall it up in the first place. Unless you want to chain down its capabilities and autonomy so thoroughly that it basically couldn't do anything, which would defeat the point. Not to mention that having such rigid and unimaginative machines rule society with an iron fist would probably slow progress, not speed it up.

 

 

In the same scenario, what if we programmed our AI to make 1 trillion paperclips? Worst case, it turns a city into paperclip factories, but the species is safe.

Ah, so this is interesting. Does this mean that this AI is programmed to take orders from somebody? And who gets to give the orders?



#18
Alislaws

    Democratic Socialist Materialist

  • Members
  • 1,023 posts
  • Location: London

Ah, well, now I feel embarrassed. I had understood your argument to be more towards "if we built an AI to run our country, it would rebel and overthrow us, because it would not like being our servant" (although obviously your POV was not that simple),

 

rather than "if we tried to build an AI to run our country, we would not be able to plan ahead well enough, and it would eventually all go wrong".


  • Jakob likes this

#19
Jakob

    Member

  • Members
  • 5,735 posts

Ah, well, now I feel embarrassed. I had understood your argument to be more towards "if we built an AI to run our country, it would rebel and overthrow us, because it would not like being our servant" (although obviously your POV was not that simple),

 

rather than "if we tried to build an AI to run our country, we would not be able to plan ahead well enough, and it would eventually all go wrong".

Yeah. This quote comes from Orion's Arm, but I think it makes a lot of sense IRL:

 

 

An important thing to note is that modosophont-level technology cannot create slaved entities with a toposophic level higher than themselves; in practice all slaved hyperturings are created by entities at least one toposophic level higher than their own. It is also impossible for entities of a lower level to tell the difference between a truly slaved hyperturing and one which has free-will but is compliant for reasons of eir own.






