Inspired by the recent controversy over OpenAI's decision not to release the full GPT-2 model. The reasoning they gave, as we all know, is that GPT-2 is too dangerous to release publicly.
This may very well be the first major instance of AI potentially being a public danger, so I feel it's necessary to create a thread discussing this.
I recall there being another thread on the danger of AI, but it was focused more on the possibility of the Paperclip Maximizer and AGI gone wrong. This is a bit closer to home: AI in the next ten years as a force of malevolence, misapplied benevolence, and unintended consequences, where a lot of the danger lies with humans.
In my LessWrong post, I brought up one possible danger (that I heard from elsewhere and felt necessary to repeat):
Imagine a phisher using voice style transfer algorithms to "steal" your mother's voice and then call you asking for your social security number. Someone will be the first. We have no widespread telephone encryption system in place to prevent this, because it's so unthinkable to us at the present moment.
You probably trust your mother's voice more than you trust your keys or your own handwriting. If someone with malicious intent had it, how screwed would you (and she) be? Or imagine a kidnapper who wants to take a child out of a kindergarten class: they call the school with the mother's cloned voice, telling the staff to expect a different person to pick the child up that day. Or imagine someone who wanted to ruin your reputation for good creating fake voice messages claiming you tortured your own mother (who may now be dead). Without encryption methods, there's really no good way to prevent this.
Another potential danger is exploiting people's gullibility with fake news: imagine an algorithm that can generate news articles that sound convincing, accompanied by believable (but algorithmically generated) images, and linked to "sources" that are also synthesized by algorithm. Many people don't check sources. As long as something seems well-sourced, we have a tendency to believe it more. And even when we do check sources, we may not check multiple sources.
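To make concrete how mechanically cheap plausible-sounding text is to produce: GPT-2 does this with a large neural network, but even a toy word-level Markov chain (a deliberately crude stand-in, nothing like OpenAI's actual method, and using a made-up mini-corpus) generates "news-flavored" word salad from a handful of sentences:

```python
import random

def build_model(text, order=2):
    """Map each `order`-word prefix to the words that follow it in the corpus."""
    words = text.split()
    model = {}
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model.setdefault(key, []).append(words[i + order])
    return model

def generate(model, length=15, seed=0):
    """Walk the chain from a random prefix, emitting `length` words total."""
    rng = random.Random(seed)
    key = rng.choice(list(model.keys()))
    out = list(key)
    while len(out) < length:
        choices = model.get(key)
        if not choices:  # dead end: restart from a random prefix
            key = rng.choice(list(model.keys()))
            choices = model[key]
        out.append(rng.choice(choices))
        key = tuple(out[-2:])
    return " ".join(out)

# Hypothetical mini-corpus standing in for scraped news text.
corpus = ("officials said the report was released on friday . "
          "the report said officials were aware of the risks . "
          "officials were aware the report was incomplete .")

model = build_model(corpus)
print(generate(model, length=15))
```

Output from a chain this small is obvious gibberish up close, but the point scales: a model trained on enough real articles recombines their phrasing into something that skims as legitimate, which is exactly the property fake-news pipelines would exploit.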
We could do this today, obviously. Propaganda bubbles work by creating a whole network of curated sources written by one side, if not one person, and that tactic goes back thousands of years.
AI makes it so much easier. I remember someone observing that making a website in the 2010s is as easy as clicking some hypertext and choosing a few pictures and preset themes, whereas in the 1990s there were people whose whole job was to design websites for clients because of the high level of skill needed just to make a few pages. Ten years from now, it might be possible to automate the entire process: the design, theme, pages, text, social media presence, and so on. So if you wanted to create some fake news and potentially give it some traction, all you'd need are a few websites, some synthesized media, and a good sense of timing.
Yet another danger involves something closer to my heart, and is more in line with what you could call "digital diabetes": getting addicted to media synthesis to the point that you lose all sense of reality or, at least, any desire to deal with reality. If you think NEETs and hikikomori are worrisome now, imagine a point in the near future when you could create and recreate anything you desire: any musical album, any musical scene, any concert, any television show, any new season of an old television show, any remade show, any game, any mod for said game, and so on. Have badly written fanfiction left over from your middle and high school years? Why not feed it into media-generating algorithms? And this is awesome, yes, but it also presents the danger of choosing entirely artificial realities over the real world at a time when there is still more to be done in the real world. We don't yet live in a 100% automated society run by benevolent AI, and we won't whether you check today or in twenty years. I can only think of myself five or six years ago, when I would have loved nothing more than to never get out of bed again and experience fake musical scenes and high-end adaptations of my work. There are plenty of people like this, who could become so hopelessly addicted to artificial media that they won't want to come out of it.
Less personal and much creepier (unless you count some of the creeps who'll be using this tech) is the possibility of procedurally generated content akin to Elsagate. If you've not heard of Elsagate, good. In short, it's a controversy surrounding loads of pedophilic cartoons and live-action videos on YouTube involving cartoonish representations of characters like Elsa (from Frozen), Spiderman, etc. doing violent, sexual, and just plain bizarre things. Popular topics include scat play (that is, playing with "mud" that is clearly supposed to be human shit), drinking alcohol (when the characters literally look like children and are often in diapers), violent attacks with decapitations and gory injuries, body inflation (filling up the body to extreme proportions), child pregnancy, injections with needles, and more. While many videos are manually created (which is already disturbing), plenty more seem to be procedurally produced. I have no interest in watching any of them to prove this, but those investigating the phenomenon claim that a lot of Elsagate videos come out much too frequently to be man-made.