How worried should long-form creative writers be for their jobs in the near future?

Talk about scientific and technological developments in the future
Bird
Posts: 27
Joined: Sun May 22, 2022 11:43 am

How worried should long-form creative writers be for their jobs in the near future?

Post by Bird »

A part of me can't believe I'm asking this question so early in life, haha.

Anyway. I've seen how AI is advancing as of late, with (as far as I can see) no signs of slowing down. Generative AI can do quite a bit and is improving by the month.

Let's say I write novels for income - as in, full-length novels of (subjectively) decent to high quality. How worried should I be that I'll be automated into irrelevance in the near future? By "near", I'm referring to the next 10-20 years or so.

I've seen various takes on this question, ranging from "don't worry about that for a long time yet" to "you're fucked". The first opinion is the most common I see, but no matter how often I see it, I can't shake the feeling that it's wrong and just people coping with how effective AI might become in a freakishly short time.

It's all well and good to go on about ASI, the singularity, UBI, etc., but in the interim I'm one of the people with a decent chance of being completely fucked over. I'm not at all confident that governments will respond quickly and effectively enough to the coming automation, so I could either be stuck back in a job I hate (after putting in great effort to avoid exactly that) or straight up stuck with no job and an ineffective social safety net.

Anyway - thoughts on this question?
I'm just a bird who escapes his cage to post here sometimes.
erowind
Posts: 548
Joined: Mon May 17, 2021 5:42 am

Re: How worried should long-form creative writers be for their jobs in the near future?

Post by erowind »

I think it depends on the kind of creative writing you're doing. If you're writing journalistic articles for the NYT and it's not a flagship piece, I expect most of those news feeds to be entirely automated before long, with the aid of a small team of human writers who check them for critical errors but don't actually do any of the creative writing.

For novels it's harder. Maybe in the next 5-10 years AIs will be able to write 90% of a mundane grocery-store romance novel and then have to be error-checked in the same way. But writing something genuinely novel that isn't just a reconfiguration of existing tropes - I don't think AI will be able to do that within even 100 years. These algorithms aren't genuinely creative; they keep getting more impressive, but they're not intelligent. They're just getting better at rehashing and formulaically drawing existing knowledge to its conclusions; they don't make new knowledge like humans do.

This is still a major problem for a lot of people working in the writing field. Most people aren't writing the next Hunger Games or Foundation Trilogy, and anything that isn't genuinely new is under threat from automation given enough time.
Cyber_Rebel
Posts: 331
Joined: Sat Aug 14, 2021 10:59 pm
Location: New Dystopios

Re: How worried should long-form creative writers be for their jobs in the near future?

Post by Cyber_Rebel »

As someone who programs and runs into a similar question in a different field, my best advice would be to leverage AI to improve your writing productivity and creativity. In the short term, people who utilize AI to enhance their output or ideas will have a clear advantage over those who don't. I do suspect there will always be some kind of market for "pure human" works even in an age of AGI, but telling whether a work actually is purely human will be another matter entirely by that point.
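
To make this concrete, here's roughly what I mean - a toy Python sketch assuming the OpenAI SDK. The model name, the prompts, and the brainstorm helper are placeholders of my own invention, not anything official:

Code: Select all

# Toy brainstorming-assistant sketch, assuming the OpenAI Python SDK.
# Model name and prompts are placeholders - swap in whatever you use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def brainstorm(premise: str, n_ideas: int = 5) -> str:
    """Ask the model for plot directions; the human writer still decides."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a brainstorming partner for a novelist. "
                        "Suggest directions; never write the prose itself."},
            {"role": "user",
             "content": f"Give me {n_ideas} possible directions for this premise: {premise}"},
        ],
    )
    return response.choices[0].message.content

print(brainstorm("A lighthouse keeper discovers the fog is sentient."))

The point of the system prompt is the division of labor: the model proposes ideas, and the writing itself stays yours.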

The next 10-20 years can't be known for certain, but I'd hope the job market will have had enough time to transition, at least to a point where people can receive a basic living income when creating collaborative works with AI. Apologies if this post isn't helpful or is nothing you haven't heard before.
wjfox
Site Admin
Posts: 8936
Joined: Sat May 15, 2021 6:09 pm
Location: London, UK

Re: How worried should long-form creative writers be for their jobs in the near future?

Post by wjfox »

erowind wrote: Sat Dec 23, 2023 7:43 am
But writing something genuinely novel that isn't just a reconfiguration of existing tropes - I don't think AI will be able to do that within even 100 years. These algorithms aren't genuinely creative; they keep getting more impressive, but they're not intelligent.
Within this century, I think it's likely we'll have realistic human-like androids, each with their own subjective experiences, memories, etc., and possibly with visual, auditory, olfactory, and other senses that far surpass human capabilities. At that point, why wouldn't they be creative or intelligent?
erowind
Posts: 548
Joined: Mon May 17, 2021 5:42 am

Re: How worried should long-form creative writers be for their jobs in the near future?

Post by erowind »

wjfox wrote: Sat Dec 23, 2023 9:15 am
erowind wrote: Sat Dec 23, 2023 7:43 am
But writing something genuinely novel that isn't just a reconfiguration of existing tropes - I don't think AI will be able to do that within even 100 years. These algorithms aren't genuinely creative; they keep getting more impressive, but they're not intelligent.
Within this century, I think it's likely we'll have realistic human-like androids, each with their own subjective experiences, memories, etc., and possibly with visual, auditory, olfactory, and other senses that far surpass human capabilities. At that point, why wouldn't they be creative or intelligent?
I hate to be a pain, because I'm a broken record with this, but John Searle's Chinese Room argument just hasn't been successfully refuted by any philosopher or computer scientist. There are a lot of replies to the argument and a lot of discussion around it, but they haven't gone anywhere, and the consequences of the discussion lead to other, larger philosophical issues. To quote the Stanford Encyclopedia of Philosophy directly:
Searle (1984) presents a three premise argument that because syntax is not sufficient for semantics, programs cannot produce minds.

1. Programs are purely formal (syntactic).
2. Human minds have mental contents (semantics).
3. Syntax by itself is neither constitutive of, nor sufficient for, semantic content.
4. Therefore, programs by themselves are not constitutive of nor sufficient for minds.
https://plato.stanford.edu/entries/chin ... rgPhilIssu
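
For what it's worth, the logical form of the argument is valid - here's a toy formalization in Lean 4. Encoding premise 1 ("purely formal") as "whatever being a program guarantees, syntax alone already guarantees" is my own reconstruction, not the SEP's wording:

Code: Select all

-- Toy formalization of the three-premise argument (Lean 4).
-- "Sufficient P Q" reads: everything with property P has property Q.
def Sufficient {Thing : Type} (P Q : Thing → Prop) : Prop :=
  ∀ x, P x → Q x

theorem searle_argument {Thing : Type}
    (Program Syntactic Semantic Mind : Thing → Prop)
    -- Premise 1 (purely formal): whatever being a program guarantees,
    -- syntax alone already guarantees.
    (h1 : ∀ Q : Thing → Prop, Sufficient Program Q → Sufficient Syntactic Q)
    -- Premise 2: human minds have mental contents (semantics).
    (h2 : Sufficient Mind Semantic)
    -- Premise 3: syntax by itself is not sufficient for semantics.
    (h3 : ¬ Sufficient Syntactic Semantic) :
    -- Conclusion: programs by themselves are not sufficient for minds.
    ¬ Sufficient Program Mind := by
  intro hpm
  -- If programs sufficed for minds, then by premise 2 they would suffice
  -- for semantics; by premise 1 syntax alone would then suffice for
  -- semantics, contradicting premise 3.
  exact h3 (h1 Semantic (fun x hx => h2 x (hpm x hx)))

So the derivation itself is airtight; all the philosophical action is in whether premise 3 is actually true.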

And the more we study the brain, the more complexity it reveals. Just recently, neurons were found to be able to communicate over long distances with other neurons without having to pass through synapses. In other words, our neurons have 'wireless' communication abilities with other neurons. The brain was already orders of magnitude more topologically complex than anything a microprocessor can mimic in terms of hardware (treating the brain itself as a kind of hardware here), and that complexity has just gained another order of scale and difference.

https://www.nature.com/articles/d41586-023-03619-w

With this in mind, and given that we haven't been able to simulate a roundworm successfully after over 20 years of research (see the OpenWorm project: https://www.wikiwand.com/en/OpenWorm), the Chinese Room argument is compelling. We don't have the slightest idea how brains really work, and it's going to take a lot more research to even begin to properly scratch the surface. If it turns out to be true, as it seems to be thus far per Searle's argument, that computers are not capable of producing true intelligence, then the only way we're going to create AGI is through an understanding of the brain, and through the creation of an entirely different platform for simulating brains that doesn't use microprocessors as we think of them today. That future machine will have to mirror the same kind of topological structure our brains have in order to produce intelligence.

I'm not saying these things lightly or without research: a supermajority of philosophers lean towards or accept the view that the man in the Chinese room doesn't understand what he's doing, and thus that the system isn't a digital mind. Personally, I'm not in the hard-accept camp with the Chinese Room argument; I'm more in the soft-accept camp. I think digital minds are possible, but that our current models of computing hardware are incapable of producing them.

https://survey2020.philpeople.org/survey/results/5002

Another reason I'm in the soft-accept camp is that I don't want to rule out the possibility of emergence entirely. It could be that the mathematical structure of our collective digital networks gains enough complexity to spawn an emergent intelligence, where individual computers connected to other computers through the internet act as individual neurons and form relationships in ways we can't quite put our finger on. There is certainly a lot of surface area for communication between computers, creating a gestalt structure greater than that of its individual components. Though this is entirely speculative, I loved how Ghost in the Shell (1995) handled it, and I didn't really understand what was happening in that scene until many years after watching it.

Searle might say that this speculative emergence would also fall under the "systems reply" counterargument, which he refuted. It might, but I'm not wholly convinced, because the systems reply is usually taken in the context of a single cohesive machine running a given program. That machine can be a supercomputer running a huge deep learning algorithm, but I've never heard the systems reply applied to the whole Earth's computer network, and personally I think there's something more to the structure of that network.

raklian
Posts: 1755
Joined: Sun May 16, 2021 4:46 pm
Location: North Carolina

Re: How worried should long-form creative writers be for their jobs in the near future?

Post by raklian »

I don't think the "miracle" of subjective experience resides within the brain as an emergent property. What I'm saying is that all this focus on the brain in an attempt to explain subjective consciousness is a red herring. The more plausible explanation is that there is a higher dimension of physics we're unaware of.

Think about this - for an observer residing outside of our own multi-dimensional space, it may be difficult to discern the difference between a rock and a brain. It might be the case that the brain and the rock each experience the "miracle of subjectivity" in their own way. In other words, different but equal.

The implication of this reasoning is that we humans aren't conscious by virtue of our own evolutionary path, which produced physiological characteristics unique to humans, or to primates in general.
To know is essentially the same as not knowing. The only thing that occurs is the rearrangement of atoms in your brain.