Five Reasons Why Fantasy Authors Shouldn’t Be Worried About AI Writing Books.

TL;DR: AI sucks at writing books and will continue to do so for some time.

Yeah, yeah, we all know the prelude to a blog post like this. If you’ve had the misfortune to be on the internet in the past year, you’ve probably seen either a bunch of stone-headed tech bros gloating how they’ve finally discovered a “hack” to be creative that doesn’t require any of the effort, skill, talent, or passion that they lack, or the hand wringing of people who don’t want to believe them but have been so bombarded by screenshot after screenshot of dumb-shit takes from the former that they’re starting to see the latter group as an actual threat to the fabric of creative society.

This is an Artificial Intelligence article. And before we go on, I assure you not a single word of it has been written by one. (Unless you count a spell checker, in which case you’ll be grateful that I’m using one.) Want proof? Just look at the first paragraph. No AI in its right binary mind would allow such a run-on sentence to exist. That 93-word failure of grammar is something only flesh can produce!

But I’m a science-fantasy author, so why am I writing this? Any time the technology becomes too “technical” and “logical” I simply hand-wave it away with the excuse that it’s nanomachines or forgotten magic. There is a reason why I’m interested in this, and why I know what I know, but that’s at the end of the article, so read on. (Or just skip to that.)

However, as an INDEPENDENT author, I’m on the front line when it comes to people asking stupid questions, or attempting stupid answers, about grifting up a fortune in published “books” generated by a prompt-driven AI.

Over the past few years, there have been various articles, scandals and platform crackdowns aimed at people who’ve tried to pass these things off as books to the unaware consumer, but there’s only so much filtering, automated banning and rule-tweaking you can do to keep these bad actors from using indie marketplaces to shill “their” “wares.”

AI-driven Large Language Models (LLMs, from here on out) have existed for a while, but it’s only in the past few years that they’ve come to prominence. Like I hinted at above, I’ll spare you the overview of what ChatGPT is and how some people have used it to automate their email responses, because that doesn’t matter here.

The real “threat” we’re concerned about is what will happen a few years down the road. Tech bros and people actually in the know have theorised, correctly or incorrectly, that these LLMs will eventually get to a point where they can generate passable enough pieces of text that don’t make you feel like you’re reading a bunch of LinkedIn posts.

At present, LLMs are unsuited to any extended work that requires a hint of creativity, as they struggle to hold a theme, a metaphor, or even basic consistency beyond a few thousand words. Because this is something we already know, and can even go and see for ourselves, I won’t count it as one of my five reasons not to worry, though it does relate to my first point.



1: It’s shit, and by definition, can only ever be “average.”

Okay, fine. You got me. I’ll explain a little bit about how all this stuff functions. Keep in mind, I have no technical expertise and am pulling it out of my ass, but what are you going to do, google a better source? Good luck! You’ll only get an AI-generated article that’s stuffed to the gills with Search Engine Optimisation terms that will make you want to claw out your eyeballs.

Currently, LLMs work by raking through an enormous pool of data, I’m talking millions upon billions of words, works, even blog posts like this one, to learn the statistical patterns of how words follow one another. It can then plunder those patterns whenever it’s prompted, spitting out a response one predicted word at a time, with “success” meaning the output resembles what’s already in the dataset.

This is the goal here. Not to make something that’s good, but to make something that matches the dataset.
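To make that concrete, here’s a toy sketch in Python. Real LLMs are neural networks, not lookup tables, and are vastly more capable than this, but the underlying goal is the same: the toy model below can only ever emit word sequences it has already seen in its training text.

```python
import random
from collections import defaultdict

# A toy bigram "language model": it can ONLY emit word pairs that
# already exist in its training text. (Real LLMs are far more
# sophisticated, but the goal is the same: match the data.)
def train(text):
    model = defaultdict(list)
    words = text.split()
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model, start, length=10):
    word, output = start, [start]
    for _ in range(length):
        if word not in model:
            break
        word = random.choice(model[word])  # pick a continuation seen in training
        output.append(word)
    return " ".join(output)

corpus = "the dragon burned the village and the dragon slept"
model = train(corpus)
print(generate(model, "the"))  # e.g. "the village and the dragon slept"
```

Every “new” sentence it produces is just a reshuffle of the corpus. Scale that up by a few billion words and you have the gist of the problem.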

Slight tangent here, but have you ever seen a machine-learning model teach itself to play Super Mario World? I’ve linked the video below, but if you don’t want to watch it because you’ve got a thing against pixelated Italian plumbers, I understand.

In this video, Mario has been imbued with all the power that neural networks and genetic algorithms could afford him back in 2015 to learn how to play the game.

The automated “player” was given a set of VERY simple instructions: move towards the right of the screen as far as it can, with a death or a lack of movement (such as being stuck on a wall) counting as a failure state to be avoided. While current LLMs are running on more advanced instructions than this, bear with me, I’m trying to make a point.

It took the AI a long time to even figure out how to move Mario in the first place. When it did, it took just as long to figure out how to do everything else. Even after figuring it all out, it still didn’t play naturally, like a human. It would have Mario jump, duck and spin without any apparent need. And while you could add more and more layers of neural networks and generations of evolution, it still wouldn’t find its way to resembling an actual human. Apply this kind of idea to writing a book, and you’d only ever get something that’s passable.
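For the curious, the heart of that kind of experiment is a fitness function: a score that tells the genetic algorithm which attempts to breed from. This is purely a sketch of the idea; the real project used the NEAT algorithm in Lua, and every name and number below is hypothetical, made up for illustration.

```python
from dataclasses import dataclass

# A loose illustration of "reward rightward progress, punish death
# and stalling." All fields and penalty values here are invented.
@dataclass
class Run:
    max_x: int        # furthest rightward distance reached this attempt
    frames_idle: int  # consecutive frames with no rightward progress
    died: bool        # whether Mario hit a failure state

def evaluate_fitness(run: Run) -> int:
    fitness = run.max_x
    if run.died:
        fitness -= 500        # dying is heavily penalised
    if run.frames_idle > 60:  # stuck against a wall for ~1 second
        fitness -= 100
    return fitness

print(evaluate_fitness(Run(max_x=800, frames_idle=10, died=False)))  # 800
print(evaluate_fitness(Run(max_x=300, frames_idle=90, died=True)))   # -300
```

A genetic algorithm keeps the highest-scoring networks, mutates them, and runs the next generation, over and over. Nothing in that loop rewards playing *like a human*, only scoring well, which is why the result looks so alien.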

Now, that’s a very basic explanation, and if you’d like to correct me on it, I can’t be bothered to listen, because then I’d have to go back and edit this entire blog post. The point is that an AI can only ever generate something that sits in the middle of its parts. It won’t be able to make quality, and even if it could, it wouldn’t be able to do so consistently. It would be like chucking a bunch of books in a blender and then trying to reassemble a novel from the pulp.

Sure, it might be enough to fool some prospective readers into buying, but they’ll likely find themselves disappointed by what they find and will be more wary next time. It’s a one-time scam to pass off poor quality work. And if you do find yourself satisfied with that quality of work, then I pity you.

Anyway, this isn’t something to be worried about. There’s already plenty of it out there.

 

2: People have been publishing absolute slop since the dawn of the written word.

My sub-heading might be a bit of an exaggeration, but you get the idea, right?

I’ll focus purely on the things that’ve happened in recent years, because I don’t need to regale you with the invention of the printing press or the rise of mass-market paperbacks.

In the past ten or fifteen years, independent publishing has exploded along with the rise of e-books. The most prohibitively costly part of getting a book out into the world used to be the very act itself, as the process of laying out, printing, distributing and selling these page-turners cost a small fortune, and that was if you had enough of a network to get them into bookstores.

This left publishing strictly in the domain of traditional publishers, who had a staff of editors, typesetters and marketers to decide which stories would sell and which ones wouldn’t. Mind you, that didn’t prevent them from putting out stinkers, or from passing on solid gold, right?

With massive marketplaces like Amazon, which, like it or not, holds the majority share of e-book sales and will do for a long time to come, and with Print-On-Demand services, which eliminated the need to actually distribute stock to online retailers, the act of putting out a book is suddenly accessible to all. Social media is a boon to marketing as well, so a prospective author now has a decent, if not better, chance of surpassing what a traditional publisher could do for them, as long as they’re a good writer.

But we can’t ALL be good writers.

I’m not going to lie to you. There are some really bad books out there, and either they die in obscurity, clogging page 89 of your search results, or they get fuelled by deep-pocketed indie authors with marketing teams behind them, generating enough manufactured buzz to achieve a reasonable level of success.

Now I don’t say this to slag off any other indie authors or prospective writers. I’m sure they’re already aware of the reality of the swill we’re all floating around in. The point I’m making is that if a bunch of people come along trying to make a quick buck or short-cut the art of writing using AI-generated stories, they’re going to find that the niche for crap books has already been filled.

If you want to be successful as an author, you have to write good books. Books that actually have art and passion to them, that create something new, and creating something new isn’t what an AI is built to do, really.

 

3: AI does not “invent” truly new things, and when it does, it’s usually a malfunction.

Many great artists of the past would use hallucinations and otherworldly states to assist in conjuring their next fantastical work. Salvador Dalí would wait in a chair holding a metal spoon above a platter until he was on the verge of sleep. When he inadvertently released the spoon and the clatter of its fall woke him up, he used this groggy state to help create. Stephenie Meyer concocted the Twilight series from dreams she had. (We can argue if that’s for better or worse later, but you have to admit, it launched the careers of Robert Pattinson and Kristen Stewart, phenomenal actors in their own right.) Even I am not immune, but I can’t tell you about the dream I had, as it’d be a spoiler for a book I’m writing at this moment.

On the other hand, LLMs and other AIs suffer from their own kinds of hallucinations. After raking through their datasets, they can sometimes produce outputs that SEEM correct, that look similar enough to what they have to reference, but aren’t actually the case. Speaking of cases, the best example of this is the lawyer in America who used ChatGPT to help write a legal brief and didn’t realise it was feeding him entirely made-up arguments, citing cases that didn’t even exist!

Point is, hallucinations aren’t always supposed to happen. Some machine-learning systems use a process called SMOTE (Synthetic Minority Oversampling Technique), which adds artificial datapoints between real ones, interpolating along the line from a real example to one of its neighbours, so that an under-represented slice of the dataset carries enough weight to be learned properly. I only actually discovered the term when I was running a concept I was going to put in a book past a friend who is better versed in this stuff than me, and they basically said I’d re-invented the wheel. (Aren’t I clever?)
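If you’re curious what that looks like in practice, here’s a minimal sketch of the core SMOTE trick, assuming plain Python lists of 2D points. A real project would reach for a library like imbalanced-learn rather than rolling its own:

```python
import random

# Minimal sketch of the SMOTE idea: synthesise a new datapoint
# somewhere on the straight line between a real minority-class
# sample and one of its neighbours. (Real SMOTE picks among a
# sample's k nearest neighbours; this toy version just grabs any
# two points to keep things short.)
def smote_point(sample, neighbour):
    t = random.random()  # how far along the line to place the new point
    return [s + t * (n - s) for s, n in zip(sample, neighbour)]

minority = [[1.0, 2.0], [1.2, 2.1], [0.9, 1.8]]
a, b = random.sample(minority, 2)
print(smote_point(a, b))  # e.g. [1.07, 1.93], a plausible synthetic point
```

Note the synthetic points never leave the neighbourhood of the real ones, which is exactly my point about AI “invention”: the new stuff is always in between the old stuff.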

LLMs have a hard time creating new names that properly roll off the tongue, or making places, environments and themes that haven’t been done before, simply because they’re set up to mimic stuff that HAS been done before. Maybe one day a new model will come along that’s a bit better at this, but it won’t be something the general public, and especially not bad-faith actors, will have access to or will know how to use. This brings me nicely to my next point.

 

4: AI could be used for things so much greater than this.

I’m not saying that being a writer ISN’T great. It is. I love it.

What I’m saying, economically, computationally, realistically, even, is that asking an LLM to write you a book is one of the lowest forms of use-case you could apply to these marvels of technology. It would be like buying an F-35 to get you to the shops and back and nothing else. A hundred million dollars wasted to perform a task that could just as easily be accomplished with your own legs or other mobility aid. (And yes, that’s roughly how much those flying monsters cost, which makes it all the more concerning that the US military just “lost” one back in September. Don’t worry, they found it again.)

It's such a misuse of a powerful tool that only the most stubbornly arrogant and morally bankrupt fools of the world would decide it’s appropriate. I can guarantee they’d be the only ones using it that way, because the people who see the true potential of AI, who really understand how it works, will be busy pursuing that potential instead.

I’m not even going to try to name or list examples of the leaps and bounds that can and have been made using AI, because by the time you’re reading this, and most likely even by the time I’m finishing this sentence, that list will be out of date. More is being discovered every day, and that will continue for years to come, which is why I’m so frustrated that the only “news” I hear about AI is reports of people trying to use it to steal the basic human joy of creativity.

 

5: The AI can’t be creative because it has to make sense. Real people don’t.

Creativity IS the point. Fiction is a funny concept because it’s something that needs to make sense while simultaneously not making sense. AI, at this moment, has to make sense. It has to stick to its dataset. It can’t create more of its dataset on its own; when a generative AI’s outputs pollute its own training data, the model starts to degrade, producing results only it can understand, a failure mode researchers call “model collapse.”

Even if we just look at writing on a sentence level, a general LLM that most of us have “access” to will not want to create phrases that are oxymoronic or ironic, because that construction and order of words won’t match what’s already written into its dataset. Some models do have randomness wired into how they pick each word (a setting usually called “temperature”), so they’ll occasionally throw out a result that deviates from the norm of their dataset, but they can’t consistently use that to summon genuinely new words or sentences. You’d just be stuck reading the same rehashed catchphrases and clichés as the model attempts to cobble them together into a story.
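For a rough idea of what that “deviation” dial looks like, here’s a minimal sketch of temperature sampling, assuming we already have scores (logits) for each candidate next word. The words and numbers are invented for illustration:

```python
import math
import random

# Minimal sketch of temperature sampling. Low temperature clings to
# the dataset's favourite choice; high temperature gambles on unlikely
# ones, which is as close to "deviating" as the model gets.
def sample(logits, temperature=1.0):
    scaled = [score / temperature for score in logits.values()]
    total = sum(math.exp(s) for s in scaled)
    probs = [math.exp(s) / total for s in scaled]
    return random.choices(list(logits), weights=probs)[0]

logits = {"sword": 3.0, "blade": 2.0, "spoon": 0.1}
print(sample(logits, temperature=0.2))  # almost always "sword"
print(sample(logits, temperature=2.0))  # "spoon" turns up far more often
```

Notice that even at high temperature, “spoon” was already on the menu. The dial reshuffles the dataset’s options; it doesn’t add new ones.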

If you told an AI to deviate from convention for emphasis or effect, it could try, but it likely won’t understand when or how; it would go out of its way to break a rule without having much of a way to check whether the break landed effectively, because effective rule-breaking isn’t something it can look up in its dataset. I, on the other hand, am not bound by the laws of grammar or logic (much to the chagrin of my editor, at times) and know, most of the time, at least, when to break them.

Zooming back out to plot structure, I’ll also know when and how to handwave things away as “just being magic” and have reasons and themes behind that effect, but an AI likely wouldn’t. If you have one advanced enough to decide this, then please refer back to point 4.

 

Anyway, you’ve been reading long enough. I’ll tell you why I’m so interested in AI and its potential failings, and if you’ve read Molten Flux, you might already have picked up on why. If you haven’t, then go read it; here’s the link.

In just a few short months, I’ll be releasing its sequel, Blazing Flux, which will follow Ryza through the aftermath of his battle against the Locusts. This time he faces his toughest fight yet. Molten Flux itself. But it’s starting to change… See below to read more and catch your first peek at the finished cover art!

Stay tuned to hear more about the book, and also more of my ramblings through these blog posts! I had a lot of fun writing this, and it also let me procrastinate cleaning up around the house, so let me know if you’d like to see more posts like it.
