The version of this post on my website contains an image referred to in the post. SOL doesn't (as far as I'm aware) allow images in blog posts, otherwise I'd post it here too.
***************
When I was uploading my work to my Ream Stories page, I noticed a question that no other site had asked me before.
"Was Generative AI used to write this story?”
It’s a “required” answer, and I can understand why it’s being asked; I really can. But it did spark a question in my head—Is “using” AI cheating? And the answer to myself was a resounding—well, I guess it depends on what you mean by “using”.
Let me explain.
I “use” three AI tools: Grammarly, Copilot and WordTune. And I use each of them in different ways, for different purposes and with different frequencies.
Grammarly is, to all intents and purposes, a very, very sophisticated version of the spell check and grammar check that’s been a part of Microsoft Word and other word processors for decades. But rather than just highlighting where you’ve made a “mistake” and offering a simple solution like adding or deleting a comma, it can restructure whole sentences or even whole paragraphs. I have it turned on in Word (or Google Docs—I use both) and, for the most part, I ignore the red lines it produces while I write. Then, every so often, I’ll go through those red lines and decide, as a writer, which suggestions I want to accept and which to reject.
And that’s the key with all AI tools, really. How are you using them? Do you follow the suggestions blindly, or do you make informed choices about them based on your personal writing style and preferences? I’d suggest that one of these is the “correct” way to use AI and the other… well…
Copilot is a completely different tool and a far more “dangerous” one in terms of “cheating”. And I don’t actually use it for writing at all, even though it is absolutely capable of generating entire articles that read well and make sense. It can even generate complete, simple stories with a beginning, a middle and an end.
My personal use tends to be for image generation. I regularly ask Copilot to generate a featured image for a blog post or to illustrate a scene I’ve written.
This second use is proving really valuable for me. It gives me an idea of whether the picture I’m painting with words matches the image I have in my head as I write. Let me give you an example.
I asked Copilot to illustrate this paragraph from “A Wounded Heart”…
“She was walking along the train station platform at as brisk a pace as her high heels and the heavy suitcase she was dragging behind her on its wheels would allow. She was at least smiling though. She was dressed casually, in three-quarter-length faded blue jeans, a white t-shirt with some branded logo on that I didn’t recognise and a light jacket that reflected that it was still quite warm even though it was now September. Her long blonde hair was in a ponytail that bounced behind her as she walked.”
One of the images it generated can be seen in the version of this post on my website. It is very much the kind of picture I had in my head when I wrote that scene, and the one I still see every time I read it.
I’ve also, on occasion, asked Copilot to write a blog post on a particular topic. I’ve never used the generated text as an actual blog post, but I have used it as a writing prompt for my own post. Doing this gives me a rough structure to work with and ensures I don’t miss out a potentially important point.
There is one other situation I’ve used Copilot for, and that’s song lyrics. I was writing a scene that required a song to be sung, and the lyrics of the song were important to the scene. I’m no songwriter, though, and could easily have wasted many hours trying to develop some lyrics. But Copilot has a plug-in for Suno, the AI music generator, and I asked it to write me a song. I then used the lyrics of that song as the basis for the one in the scene. I changed a large proportion of the words but kept the structure and the rhyming couplets. So, what’s in my manuscript isn’t AI-generated, but it is based on something that was.
Is that cheating?
Again, the key here is not just taking what the AI has generated and using it verbatim but instead using it as an inspiration to create something yourself.
And that’s how I’ve used the last tool, too. WordTune describes itself as “…an AI-powered reading and writing companion capable of fixing grammatical errors, understanding context and meaning, suggesting paraphrases or alternative writing tones, and generating written text based on context.”
In one sense, it’s a lot like Grammarly, but while Grammarly focuses on grammar and structure, WordTune is capable of adding to what you’ve already written. It has a “Continue writing” function that looks back at the context of what’s already on the page and, well, continues it.
This is my least-used AI tool. In fact, I could probably count on one hand the number of times I’ve clicked that “Continue writing” button in the past few months. When I do click it, it gives me half a dozen ways of continuing the paragraph, and what I usually do is cycle through them a couple of times, decide which, if any, I like best and insert it into the text. Then I’ll usually rewrite it slightly (or a lot) to sound more like “me”.
Again, it’s not using the tool to write for me; it’s using it as inspiration to get me over a tricky spot when the words don’t flow by themselves.
Is that cheating?
I don’t believe it is.
I think it’s fine to use AI as long as it’s used almost like a “collaborator”, or a beta reader if you prefer: the kind that points out the mistakes and missteps you’ve made, helps guide you towards a cleaner, more readable manuscript, and maybe gives you a nudge in the right direction when you hit a hurdle you’re having trouble getting over.
I had a human version of this when I was writing The Lies We Lead. I gave a friend access to the Google Doc and permission to comment on it, and comment on it she did. There were times when she was “watching” me write in the sense that she had the document open at the same time as me, and because Docs works in “real-time”, she could see what I was writing as I wrote it. She could even see when I deleted sections I’d written that I wasn’t happy with and replaced them.
And she would make comments in real-time when we both had the document open, too.
In a sense, I was using her intelligence as a tool for my writing. It just wasn’t an artificial intelligence; from my side, it was still just a series of comments on the screen.
Is that really that much different from the current crop of AI tools? Aren’t they best used as real-time collaborators?
That, then, I believe, is how AI should be used. Not to generate great swathes of text to copy and paste into your manuscript, but as a genuine tool, not much different from the spell checkers we’ve all been using for the past thirty years or more, or from a friend looking over your shoulder and making comments as you write.
Really, it’s the same as it has been for every single piece of new technology that this species of ours has come up with since the first cave dwellers picked up a rock and used it to crack open a dinosaur egg to get to the yolky goodness inside.