
Dangers of Near-Future Artificial Intelligence

Tags: artificial intelligence, AI, dangers, media synthesis, 2029, 2019


#1
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 20,770 posts
  • Location: New Orleans, LA

Inspired by the recent controversy over OpenAI's decision not to release the full GPT-2 model. The reasoning they gave, as we all know, is that GPT-2 is too dangerous to release publicly.
This may very well be the first major instance of AI potentially being a public danger, so I feel it's necessary to create a thread discussing this.
I recall there being another thread on the dangers of AI, but it was focused more on the possibility of the Paperclip Maximizer and AGI gone wrong. This is a bit closer to home: AI in the next ten years as a force of malevolence, misapplied benevolence, and unintended consequences, where a lot of the danger lies with humans.

In my LessWrong post, I brought up one possible danger (one I heard elsewhere and felt was worth repeating):

Imagine a phisher using style-transfer algorithms to "steal" your mother's voice and then calling you asking for your Social Security number. Someone will be the first. We have no widespread telephone authentication system in place to prevent such a thing, because it is so unthinkable to us at the present moment.

You probably trust your mother's voice more than you trust your keys or your own handwriting. If someone with malicious intent had that voice, how screwed would you (and she) be? Or imagine a kidnapper coming to take a child out of a kindergarten class, after calling the school in the mother's voice to say someone else will be picking the child up that day. Or imagine someone who wants to ruin your reputation for good fabricating voice messages in which you confess to torturing your own mother (who may now be dead). Without authentication methods, there's really no good way to prevent this.
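A countermeasure doesn't strictly need new telephone infrastructure, though. Here's a minimal sketch in Python of a shared-secret challenge-response, assuming the secret was exchanged in person beforehand (the secret, helper names, and protocol details here are hypothetical, not any existing standard). The point is that a cloned voice alone cannot answer a challenge it doesn't hold the key for:

    import hmac
    import hashlib
    import secrets

    # Hypothetical secret, agreed face-to-face, never spoken over the phone.
    SHARED_SECRET = b"exchanged-in-person-beforehand"

    def make_challenge() -> str:
        """The callee reads a short random nonce aloud to the caller."""
        return secrets.token_hex(8)

    def respond(challenge: str, secret: bytes) -> str:
        """The genuine caller computes this on their own device."""
        return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()[:8]

    def verify(challenge: str, response: str, secret: bytes) -> bool:
        """A cloned voice without the secret can't produce the right value."""
        return hmac.compare_digest(respond(challenge, secret), response)

    challenge = make_challenge()
    answer = respond(challenge, SHARED_SECRET)       # caller's side
    print(verify(challenge, answer, SHARED_SECRET))  # True only with the secret

Even a low-tech version of this (a family code word) beats trusting the voice itself.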


Another potential danger is exploiting people's gullibility with fake news: imagine AI that can generate news articles that sound convincing, illustrate them with believable (but algorithmically generated) images, and link them to "sources" that are themselves synthesized by algorithm. Many people don't check sources. As long as something seems well-sourced, we have a tendency to believe it more. And even when we do check sources, we may not check multiple sources.
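To give a sense of how low the bar already is: the small GPT-2 checkpoint that OpenAI did release can be driven in a few lines through the Hugging Face transformers library. A minimal sketch, with an illustrative prompt and sampling settings (not anything OpenAI ships):

    # Requires: pip install transformers torch
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")  # released 124M model

    outputs = generator(
        "Scientists announced today that",  # illustrative headline prompt
        max_length=80,            # total tokens, prompt included
        num_return_sequences=3,   # three different fake articles per prompt
        do_sample=True,           # sampling makes each article unique
    )
    for out in outputs:
        print(out["generated_text"], "\n---")

The output isn't Pulitzer material, but it doesn't have to be; it only has to survive a skim.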

We could do this today, obviously. Propaganda bubbles work by creating a whole network of curated sources written by one side, if not one person, and those go back thousands of years. 

AI makes it so much easier. I remember someone observing that making a website in the 2010s is as easy as clicking some hypertext and choosing a few pictures and maybe a preset theme, whereas in the 1990s designing websites for clients was a full-time job, because of the level of skill needed just to make a few pages. Ten years from now, it might be possible to automate the entire process: the design, the theme, the pages, the text, the social media presence, and whatnot. So if you wanted to create some fake news and give it some traction, all you'd need are a few websites, some media synthesis, and a good sense of timing.
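The last step of that pipeline is already trivial. Here's a sketch of pouring generated text into a page template using nothing but the Python standard library (the template, file layout, and URL are all hypothetical); chain it to a text generator and an image synthesizer and no human needs to touch the site at all:

    from string import Template
    from pathlib import Path

    # Hypothetical one-page "news" template.
    PAGE = Template("""<!doctype html>
    <html><head><title>$headline</title></head>
    <body><h1>$headline</h1><p>$body</p>
    <p>Source: <a href="$source_url">$source_name</a></p></body></html>""")

    def publish(headline: str, body: str, source_name: str, source_url: str,
                out_dir: str = "site") -> Path:
        """Render one article; the 'source' link can point at another
        generated page, giving the illusion of corroboration."""
        Path(out_dir).mkdir(exist_ok=True)
        path = Path(out_dir) / (headline.lower().replace(" ", "-") + ".html")
        path.write_text(PAGE.substitute(headline=headline, body=body,
                                        source_name=source_name,
                                        source_url=source_url))
        return path

    publish("Example Headline", "Generated article text goes here.",
            "Another Synthesized Outlet", "https://example.com/story.html")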


Yet another danger involves something closer to my heart, something more in line with what you could call "digital diabetes": getting addicted to media synthesis to the point that you lose all sense of reality, or at least any desire to deal with reality. If you think NEETs and hikikomori are worrisome now, imagine a point in the near future when you could create and recreate anything you desire: any musical album, any music scene, any concert, any television show, any new season of an old television show, any redone show, any game, any mod for said game, and so on. Have badly written fanfiction left over from your middle and high school years? Why not feed it into media-generating algorithms? And this is awesome, yes, but it also presents the danger of choosing entirely artificial realities over the real world at a time when there is still more to be done in the real world. We don't yet live in a 100% automated society ruled over by benevolent AI, whether today or twenty years from now. I can only think of myself five or six years ago, when I would have loved nothing more than to never get out of bed again and experience fake music scenes and high-end adaptations of my own work. There are plenty of people like that, people who could become so hopelessly addicted to artificial media that they won't want to come out of it.


Less personal and much creepier (unless you count the creeps who'll be using this tech) is the possibility of procedurally generated content akin to Elsagate. If you've not heard of Elsagate, good. In short, it's a controversy surrounding the flood of pedophilic cartoons and live-action videos on YouTube involving cartoonish representations of characters like Elsa (from Frozen), Spiderman, etc. doing violent, sexual, and just plain bizarre things. Popular topics include scat play (that is, playing with "mud" that is clearly supposed to be human shit), drinking alcohol (when the characters literally look like children and are often in diapers), violent attacks with decapitations and gory injuries, body inflation (filling the body up to extreme proportions), child pregnancy, injections with needles, and more. While many of the videos are manually created (which is already disturbing), plenty more seem to be procedurally produced. I have no interest in watching any of them to prove this, but those investigating the phenomenon claim that a lot of Elsagate videos come out far too frequently to be made by hand.
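That frequency claim is easy to believe once you see how little machinery template generation needs. A Python sketch (all the lists are illustrative stand-ins): crossing four short lists already yields 81 unique video "concepts", and realistic list sizes yield tens of thousands:

    from itertools import product

    # Illustrative stand-ins for the lists such a pipeline would use.
    characters = ["Elsa", "Spiderman", "Hulk"]
    companions = ["baby", "doctor", "police officer"]
    scenarios  = ["learns colors", "goes to the dentist", "plays with slime"]
    settings   = ["at school", "in the pool", "at the hospital"]

    # The mangled grammar is typical of real Elsagate titles.
    titles = [
        f"{c} and {p} {s} {w}"
        for c, p, s, w in product(characters, companions, scenarios, settings)
    ]
    print(len(titles))  # 3 * 3 * 3 * 3 = 81 from four three-item lists
    print(titles[0])    # "Elsa and baby learns colors at school"

Feed each title to a stock-animation renderer on a timer and you get upload rates no human channel could match.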



And remember my friend, future events such as these will affect you in the future.


#2
Erowind

    Anarchist without an adjective

  • Members
  • 1,096 posts

OpenAI might already be doing this. But if I were them, I'd still release the research, only to closed academic intranets. Responsible people should still have access to the tech.


The Elsagate thing is such a clusterfuck. Many different forms of content have been identified, some of them harmless and others exploitative.


On the harmless end of things, the classic short series Happy Tree Friends was misidentified as Elsagate, as was the old art-short series Don't Hug Me I'm Scared.


On the mixed side of things, some of the Gacha Life videos are made by actual children, and we are witnessing these kids express themselves as they mature. Some of the really screwy stuff, I imagine, could be coming from kids who are themselves rape victims, but most of it is malicious for sure. Some Gacha Life shorts are probably made by pedophiles, and there are definitely ad networks taking advantage too. (My inclination is that the majority of the fucked-up stuff actually comes from ad networks, but I could be wrong.) Gacha is mixed because some of the content is just a harmless evolution of how kids have always expressed themselves on the web; it only looks scary because it's different from what we grew up with. And on that point, kids being sexual as they mature isn't inherently bad, as long as that expression is relative to their peers (other kids) and they've been given the tools to deal with those feelings without abusing each other. Good parents make the expression a non-issue. (I personally remember looking at Avatar: The Last Airbender porn as a kid and talking about it with a friend who had also discovered pornographic content around the same time.) The problem is that children really shouldn't be learning about themselves in an environment infested with internet wackos and capitalist scum who exploit the developing sexuality of children for profit.


On the really screwed-up side of things are the algorithmically generated gore, rape, and torture videos targeted at children, for God knows what reason. I wouldn't put it past capitalists to do this if it's profitable, but honestly, it seems like a stretch even for them. Whoever is doing it needs to be traced and stopped.


Edit: Another category of Elsagate content is shill YouTubers with no morals who exploit children for profit. Not all of the for-profit content comes from ad networks; small and independent businesses have to get in on the money too!


Double Edit: Removed cursing because FTN is a nice place for nice people. 


