
What scares you the most about AI, if anything at all.

Tags: ai, artificial intelligence, automation

13 replies to this topic

#1
sinplea

    New Member

  • Members
  • 2 posts

I recently posted a video where I talk about some of my fears concerning AI. Job loss is certainly a big one on my list. I'm certain we will create a lot of new jobs out of our innovations, but it's hard for me to imagine that many of those new jobs will go to the people whose jobs are displaced.

 

I'm sure most of the users here are better versed on the subject of AI than your average Joe, and I think most of my fears come from the lack of educational preparation for people to navigate the next decade. If more people understood what was happening, or if the industry were at times more open with the public, I'd be less worried.

 

So what do you think? Are my fears unwarranted? Do you have some other, more pertinent fears? If you have no fears at all, care to share why? I'd love to hear your thoughts. 



#2
Cyber_Rebel

    Member

  • Members
  • 448 posts
  • Location: New York

Other members such as Yuliban or starspawn0 will likely be much more versed on this topic than I, but a growing concern I have is what role A.I. will play in governments.

 

I'd say that A.I. abuse by our current governments is far more of a risk than any Hollywood Skynet plot. That doesn't mean we shouldn't develop it or research its potential benefits, just that it's a disruptive new technology that needs oversight like anything else. Otherwise we risk making totalitarian-style societies much easier to implement, with everyone's data, livelihood, and social habits at the elite's fingertips.

 



#3
Jakob

    Stable Genius

  • Members
  • 6,160 posts

All the fucking matrix algebra and matrix calculus that I still have zero idea how to do.



#4
Erowind

    Anarchist without an adjective

  • Members
  • 1,562 posts

That modern humans are designing it 



#5
Hyndal_Halcyon

    Member

  • Members
  • 103 posts

The slight possibility that AIs will not evolve beyond human sapience before we lose the will and/or the ability to develop AI further scares me more than anything.

 

On the slight chance that another world war breaks out, climate change accelerates even faster, sources of clean water run out, an asteroid strikes, aliens invade, frozen plagues thaw, the north and south poles reverse, volcanoes erupt, the Ring of Fire activates, etc., the depressing necessity of repairing the damage will outshine the simple luxury of teaching machines how to play board games.


As you can see, I'm a huge nerd who'd rather write about how we can become a Type V civilization than study for my final exams (gotta fix that).

But to put an end to this topic, might I say that the one and only greatest future achievement of humankind is when it finally becomes posthumankind.


#6
10 year march

    Member

  • Members
  • 381 posts

That AI will just be used to generate profits instead of being designed to meet all human needs, and once all needs are met, all human wants.

#7
Jessica

    Member

  • Members
  • 2,679 posts

That it will become our master and control us utterly, with no possibility of defeating it. We humans really need to focus on genetically enhancing ourselves instead of creating something that will rule us.



#8
Alislaws

    Democratic Socialist Materialist

  • Members
  • 2,106 posts
  • Location: London

Wow some very interesting and varied perspectives!

 

My big fear is that everyone will go on pretending that everything will be fine until it's too late. Attitudes like these make me very worried:

 

"there will be lots of new jobs to replace the industries that have existed for hundreds of years and that people have spent their lifetimes working in"

 

"All these displaced people will be capable of doing these new jobs and they will be enthusiastic about retraining to do something completely different in the middle of their career."

 

"Automated Doctors will not lead to unemployment amount doctors because only 80% of doctors tasks can be automated"

 

Given that "pretending that everything will be fine until it's too late" perfectly describes most people's attitudes to climate change, I'm pretty sure a corporate-owned AI dystopia is the future we are heading for (assuming the whole climate change thing doesn't break everything first).



#9
funkervogt

    Member

  • Members
  • 1,105 posts

Something like the "Paperclip maximizer" scenario scares me. We create an AI that goes haywire and destroys the human race, but the AI is missing some key aspects of intelligence so it can't improve itself or evolve. After killing us off, it just stays on Earth forever, fixed at some level of technology and ability that was good enough to beat us, but far below the maximum of what can be achieved. It dies when the Sun expands a billion years from now. 



#10
starspawn0

    Member

  • Members
  • 2,164 posts

In the short term, the thing to worry about with AI is what state actors and governments might do with it. Take "10 year march" for example: imagine Australia decides to start using AI to track citizens it believes are "up to no good", and for some reason thinks "10 year march" is one of them (law enforcement has fantasies about putting him in jail), where the reasons for believing this are all hidden and kept confidential. An AI combs through his online activity, reading and understanding everything he writes, and compiles a dossier to be used as part of an investigation. Part of what protects individuals from this level of surveillance is that it's simply too costly, unless investigators are fairly sure the target is guilty and of high value. But if you lower the cost of an investigation 1000x, you can run investigations on orders of magnitude more citizens. And it need not even involve illegal search and seizure; it can all be done by monitoring their public online activity, which everybody has access to.

 

Also imagine political persuasion bots, or nationalist troll-bots that scour the internet for anything even vaguely critical of, say, Russia, and then show up to challenge it, belittling and attacking the writer into submission, teaching them a lesson not to mess with Russia!

 

In the longer run, there will probably be major "accidents" when sufficiently smart AIs are unleashed onto the web. Maybe, for example, one of those bots that defends Russia's honor gets the idea to shut down the U.S. power grid, to teach it a lesson. It has a basic imperative to punish Russia's enemies, and after reading enough online comments, it generalizes to the claim that the entire United States is Russia's enemy, and that shutting down the U.S. power grid is an appropriate response against such a powerful adversary.

 

The people who release the bot onto the web may not intend this to happen, and may even try to guard against it. But given the way many of these AI systems are trained and built, it's hard to prove you've covered all the bases so that nothing like it will.



#11
sinplea

    New Member

  • Members
  • 2 posts

Wow!

 

Thank you all for the replies! It's really cool to see the variance in responses! Glad to know I'm not the only one worried here.

 

I can definitely see government actors becoming a huge problem in the future of AI, in so many ways. IMO, this is the main reason there need to be global ethical standards for disseminating lawful practices, not only to private companies but also to academic researchers. The fact that this hasn't happened already is honestly mind-boggling, and a testament to how much faster big tech innovates than governments can even understand the problems of the 21st century.

 

AI certainly has the ability to improve human life tenfold, yet it also has the ability to leave so many worse off, disconnected from reality. I've never been so excited and nervous about a piece of technology. Just glad I'm not the only one, lol.



#12
Lister

    New Member

  • Members
  • 2 posts

First post! ...if it gets approved.

 

I'd go down a (probably) unique path and say the thing I fear most about AI is its ability to fulfill our dreams and fantasies.

 

I think it'll be able to do this mostly through virtual spaces, both augmented reality and Matrix-style VR. AI moves far faster than we do (biological computation vs. technological computation), but that speed advantage is far more powerful in virtual spaces. An AI would still be restricted in the physical world by physical laws, but few similar restrictions exist inside a computer.

 

So AI will turn us into virtual Gods and give us the power to do whatever we want with almost no restrictions or consequences. Sounds great, right?

 

Except I think we're far too young of a species to be handed this sort of power. It's too soon and we're too immature. 

 

To me this is a bit like handing an 8 year old a smartphone with zero restrictions. Except far, far worse. 

 

We could be in deep trouble even if things continue to improve in the physical world from now on. 

 

...Don't even get me started on our potential ability to "frame jack" or accelerate subjective time within virtual spaces. This could potentially allow people to live decades in VR spaces while days pass on the outside.



#13
funkervogt

    Member

  • Members
  • 1,105 posts

Quoting starspawn0:

"Also imagine political persuasion bots; or nationalist troll-bots, that scour the internet for mention of anything even vaguely critical of Russia, say, and then will show up to challenge it -- and belittle and attack the writer into submission, teaching them a lesson not to mess with Russia!"

This is exactly the kind of stupid thing humans will use narrow AI for. I think the solution will be the end of the Wild West era of social media and comment sections, and a big push for platforms to automatically reject content that can't be proven to have come from a human being.



#14
Sephiroth6633

    Member

  • Members
  • 26 posts
  • Location: UK

I am not worried about what they would do once they become self-aware. Perhaps they would just demand equal rights, like in the video game Detroit: Become Human. However, I am more worried that China may let a human-level AI take the lead as president in the mid-2030s. I mean, what if it goes rogue due to a glitch or bug? That would be more of a life-and-death situation.






