Welcome to FutureTimeline.forum

The great A.I. catastrophe of 2030


2 replies to this topic

#1
starspawn0

    Member

  • Members
  • 1,638 posts

In the early days of 2030, the world finally woke up to the dangers of runaway A.I.  The dangers had been foretold for decades in science fiction, films, and philosophical writings, but it all seemed so distant and theoretical that it was quietly ignored.  Now, here in the year 2040, we can finally take stock of the damage -- and it is considerable! -- and of how it occurred, in the hope of learning something.  When all was said and done, millions of people died in the Great A.I. Catastrophe of 2030, though it could easily have been far more, if not for incredible luck and the tireless efforts of thousands of cybersecurity experts.

 

It all started when a hacker, Mr. Thomas A. Anderson, unleashed his "automated hacker" onto the world.  The system used off-the-shelf A.I. components to break into computer systems, then recorded how it achieved each break-in, so that Anderson could later access any computer he wanted.  Think of it as a kind of web-crawler, except that instead of just following superficial links and files, it finds ways to dig deeper and access the underlying computer networks.
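To make the web-crawler analogy concrete, here is a minimal breadth-first crawler sketch (the link graph and names below are made up for illustration; nothing here probes real systems). The crawler follows links outward from a starting point, recording everything it finds -- the post's "automated hacker" does the same, only its "links" are ways into the underlying networks:

```python
from collections import deque

def crawl(start, get_links, max_pages=100):
    """Breadth-first crawl: visit pages, queue newly discovered links."""
    seen = {start}              # pages already discovered (never revisit)
    queue = deque([start])      # frontier of pages still to visit
    order = []                  # pages in the order they were visited
    while queue and len(order) < max_pages:
        page = queue.popleft()
        order.append(page)
        for link in get_links(page):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return order

# toy link graph standing in for the web
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
order = crawl("a", lambda p: graph.get(p, []))
# order == ["a", "b", "c"]
```

Note the `seen` set: a well-behaved crawler remembers where it has already been, which is exactly the bookkeeping that matters later in this story.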

 

Anderson's system wasn't just taught standard hacker tricks, like searching for default passwords, hidden files, and buffer overflows; it was also capable of "social engineering" -- writing phishing emails and even calling people and asking them to help it gain access to their computers.  It could even read Reddit and Y Combinator (Hacker News) posts to get ideas for better hacks.  The tools necessary to pull this off were standard by about 2027; not much ingenuity was needed on Anderson's part.

 

Anderson, however, made two errors:  (1) he assumed that the intelligence of his system was quite limited and would yield only superficial hacks; and (2) he thought his hacks wouldn't cause any real harm to society -- just a little fun.

 

Regarding (1):  Anderson had been told, over and over, that the reasoning and planning ability of modern machine learning methods was "far less than that of a 5-year-old child", no matter how much data you fed them.  He assumed that his system could manage maybe 1 or 2 levels of logical inference; it turned out to be capable of much more -- about 20 or 30 levels of logical inference and planning.  This possibility was known to experts, but was buried in academic papers; and the endless criticisms in the popular-science articles Anderson read, with their constant use of the word "shallow" to describe the capabilities of the systems he worked with, led him to believe that what he had built would be nothing more than a harmless toy to show off to his friends.

 

Within about a day of his hacking tool being deployed, thousands and thousands of people with medical implants fell dead.  Somehow the tool had broken into their pacemakers, brain implants, diabetes monitors, dialysis machines, and so on, and shut them down.

 

A day later, planes fell from the sky as the computer networks inside them went haywire and the systems in control towers failed.

 

And soon after, stop lights blinked in strange patterns, leading to thousands of traffic accidents and deaths.

 

Cellular and internet networks broke down.

 

Power stations in some areas went out, killing people who depended on them; some even exploded.

 

At least one train was derailed; and some driverless cars behaved strangely, killing their occupants and others around them -- several literally drove off a cliff.

 

Nobody could have imagined that a single computer program would cause so much devastation, but it happened.  The cause was this: each time the program hacked into a system, it installed a local copy of itself, so that it could run exploits from that remote server.  But the program didn't keep track of whether it had broken into a system before, so each time it broke in again it installed another copy -- copy after copy after copy accumulated as the program re-entered the same systems, until all the memory was used up and the system couldn't work at all.
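This bug has a real-world precedent: the 1988 Morris worm crashed machines for essentially the same reason, re-infecting hosts it already occupied until they were overloaded.  The mechanism can be sketched in a few lines (a pure simulation; host names and the `run_worm` helper are invented for illustration):

```python
def run_worm(hosts, rounds, remember_infections):
    """Simulate worm copies accumulating on hosts over repeated break-ins."""
    copies = {h: 0 for h in hosts}           # resident worm copies per host
    for _ in range(rounds):                  # each round, the worm reaches every host again
        for h in hosts:
            if remember_infections and copies[h] > 0:
                continue                     # fixed version: already resident, install nothing
            copies[h] += 1                   # buggy version: install yet another copy
    return copies

buggy = run_worm(["implant-net", "tower-net"], rounds=1000, remember_infections=False)
fixed = run_worm(["implant-net", "tower-net"], rounds=1000, remember_infections=True)
# buggy: 1000 copies on each host (memory eventually exhausted)
# fixed: 1 copy on each host
```

One missing "have I been here before?" check is the difference between a stealthy intruder and a denial-of-service weapon.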

 

Anderson watched in horror as the world burned, but by this point he was powerless to stop it.  The program wormed its way into networks in every corner of the globe, and it took years for security teams to disable it -- the worm still haunts a few networks that nobody thought to "disinfect", though most of the world's computers are now immune to its hacks.

 

Anderson's trial went pretty much as you would expect.  He is now serving a life sentence in the ADX Florence super-max prison in Colorado.  Asked at his trial if he had any regrets, he said, "We are like children, playing with fire."



#2
Erowind

    Anarchist without an adjective

  • Members
  • 1,287 posts

This is a plausible scenario. It will be society's own fault for relying on inefficient, badly secured computers for everything. There is no good reason that planes aren't flown manually, nor that our power grids are connected in such vulnerable ways. Analog computing works well for a lot of things and is less susceptible to sabotage: attacking it requires physically breaking the thing.



#3
starspawn0

    Member

  • Members
  • 1,638 posts

A dress rehearsal for what I described happened with the Stuxnet virus:

 

https://en.wikipedia.org/wiki/Stuxnet

 

It was originally intended just to infect computers in Iran and the Siemens Programmable Logic Controllers (PLCs) attached to them, but the virus spread to computers elsewhere -- e.g. in India and other countries.  It's a good thing the virus wasn't too smart; if it had included advanced A.I., there's no telling how great the damage might have been.

 

PLCs apparently are used to control "smart street lights" and traffic lights.  And for all I know, there is a networked computer controlling them in some cities (I don't know a lot about them).

 

In the years ahead, the complexity of our technology will keep growing, and it will become necessary to computerize the management of whole systems.  A modern airplane, for example, couldn't function without a lot of computers -- they're used not just for autopilot, but also to adapt to damage, which makes modern airplanes a lot safer than those of the past.  Those systems are probably closed networks, which makes them hard to hack.  But software upgrades might be performed while the planes are on the ground, and that would be a way in for a virus.

 

One might think medical implants would also be secure, but it's not so: many of them contain little computers that can be controlled externally.  There have been many breathless articles about the dangers of hacking medical implants -- but people don't seem to be alarmed about it.

 

The potential for disaster is a lot higher than people think.  A.I. will make the problem a lot worse.  





