An Editor’s Thoughts on AI
The fallacy of moral absolutes in the era of ChatGPT
I stumbled onto a comment a few days ago on Substack about artificial intelligence.
It read, “I block people who use AI. I block people who tell other people it’s ok to use AI. I block people who use AI for editing, too. For ideas. For anything related to creative writing.
Sometimes I think I’m overreacting. But no, I’m not. If you use AI, you’re not a writer. If you have a problem with that sentence, unfollow and unsubscribe please.”
Initially, I wanted to agree. As someone who owns a Medium publication that’s fielded its fair share of AI submissions, I’ve developed some pretty strong feelings on the subject. I respond with disgust whenever someone submits an article that I can tell has been AI-generated. It’s not usually all that difficult to discern when an “author” has outsourced the task of writing to a robot and prompted it with some generic essay topic.
As a writer who’s spent entire decades honing his English, the notion of someone with a middle school understanding of the language just typing a hasty prompt into Google Gemini or Grok and pretending that they’ve done genuine work is enough to spur contempt. The same can be said of the people who try to pass off AI-generated artwork as their own, or take pride in the finished products because they’ve pieced together a couple of directives that the programs could run with. Many who leverage these tools do it in a way that can plainly be called cheating.
And yet, it’s a recurrent theme that those who adopt the most sweeping anti-AI stances struggle to apply their standards evenly.
One of the most useful, everyday applications I’ve found for AI is its ability to handle the kinds of inquiries that overwhelm traditional search engines. Throughout so much of my life, there were questions I wanted answers to, but typing them into a search bar would only land me on a page with a few of the same keywords. I learned to accept that any denser queries I had would need to remain unaddressed — or harbored until I could find the right expert to make sense of them.
Another aspect of ChatGPT (specifically the voice assistant) that I value is that it’s the rare technological innovation that’s empowered me to be more in touch with the world around me. When I want answers while traveling, I don’t need to scroll through Reddit boards, tune out my scenery, or extricate myself from the people I’m with. When walking along a trail with friends and one of us begins to wonder why a peculiar tree looks the way it does, I can simply ask. It only takes a couple of taps. And we’ll delight in the ability to so futuristically find answers.
I also appreciate the ability to have ongoing conversations and ask questions that extend directly from previous ones.
For example, I can ask, “What’s the weather like in Belize in June and July?” and then immediately follow up with, “How burdensome is their rainy season, and what are some of the best ways to prepare for it?” or “Might any neighboring Latin American countries be more viable that time of year?” It’s not without its deficits; it still hallucinates, and still needs to be verified. But with most queries on most topics, it excels. In many ways, it finally delivers on the wide-eyed hopes that people had for Siri when it first debuted.
In addition to satisfying my curiosities about the world, it allows me to improve as a writer when I’m not even writing. Whether I’m walking my dog or out driving, a lot of my best ideas hit me when I’m away from my computer. And as I do when I’m writing, I wonder about the best ways to express them, and the slight distinctions between words that may seem similar at face value.
It says a lot about my passion that I’ll spend my free time dissecting words like “curt,” “terse,” “brusque,” and “laconic,” and consulting ChatGPT as I explore their subtle shades of meaning. It’s evidence that no matter where I go, my passion for writing follows. And I think it’s remarkable that this technology has granted me an accessible means to wrestle with the inner workings of my language anywhere with cellphone service.
I earnestly believe that AI has the potential to aid writers in expressing their thoughts better than they already can — and, most importantly, in ways that linger even when the program is no longer in front of them. It’s a shame that what we so often see are people with little linguistic experience trying to bypass the journey of becoming writers entirely: shameful cases of people entering the field and immediately surrendering the expressive process to automation.
It’s because I’m so passionate about what I do that I refuse to let AI do it for me. No matter how good these models get, it won’t change the fact that I began writing because I love it. This isn’t some menial job that people do because they have no other choice. No burden would be lifted if I told some robot software to write my essay for me — only joy sapped. The would-be fun, enlightening, and cathartic routine of expressing my ideas would become hollow.
But even “hollow” overstates what it is when we tell AI to just elucidate our thoughts for us. The creative process evaporates. The delightfully painstaking trial and error of figuring out how to communicate our wisdom and our experiences is stripped away. The difference couldn’t be more colossal.
I was quick to apply the GarageBand analogy when AI first entered the fray; if software that democratized music creation didn’t put an end to musicians, why would AI threaten creators now? But I’ve grown to view it as an oversimplification of an issue too big for words.
The threat that AI poses to creators of all sorts is difficult to exaggerate. Within 10 years, I expect life on Earth to look entirely different because of it. More than solely an asset for creators, AI has the power to steal from us, imitate our likeness, and possibly one day in the not-so-distant future, replace us.
But that doesn’t mean it’s devoid of merit, or that it has nothing to offer creators in their endeavors to become more creative. The real challenge is in determining at what point a program that can help us think more analytically about words and their relationships becomes a cybernetic performance enhancer. When it enables us to express things that we’re incapable of relaying on our own.
I think one of the most effective litmus tests for what constitutes cheating is whether authors can verbally discuss the insights in their stories. Sometimes a conversation over text is enough to reveal the con. When I red-flag submissions for AI use, the accused often respond in authorial voices that are night-and-day different from the ones behind their pieces. Perfectly structured sentences disappear, run-ons abound, and the charade becomes plain.
A student can copy homework all semester, but it will do them no good come finals. When grifters pass off AI-produced writing as their own, the scheme might make it past less perceptive readers for days or even months. But eventually the veneer will come undone.
The hard-line “all AI use is cheating” stance is also tricky to apply universally when so many websites, tools, and applications lean on the technology. From Siri and Facebook to Grammarly and Dictionary.com, there are all sorts of algorithms and automated intelligences that we’ve already assimilated — even come to depend on. The notion that all AI is bad just doesn’t stand up to scrutiny, and living by that belief would practically demand that we delete all of our social media profiles and move off the grid.
When I began writing professionally, I frequently found myself visiting thesauruses and asking Siri to define certain words, making sure that I’d tailored them as best I could to the contexts at hand. I doubt many would see an ethical conundrum in that. But a year later, when I tell those same peers, “Sometimes I’ll use ChatGPT to unpack the distinctions between synonyms, how they work in specific sentences, and discuss their origins,” they often resort to reactionary judgment or condemnation.
On one hand, such concerns shouldn’t be completely downplayed. But on the other, I can’t help but see the same reflexive fear of the new that defined our attitudes toward “talkies,” color TVs, and computers when my contemporaries flatly dismiss every function that AIs can potentially serve.
When we write an entire piece before asking chatbots about the mistakes we’ve made, they act as little more than glorified spell-checkers. And yet, writers are often met with fire-and-brimstone fury when they admit AI played a role in helping them pick up on a word they’ve repeated, or another they’ve misspelled, or one that would have been better served by a slightly less pointed synonym. Purists may even turn to the scathing criticism that those writers aren’t writers at all.
The truth is, I think it takes pre-existing talent to know how to employ these tools usefully. Without that eye, it’s easy to believe that what Gemini produces when told to generate articles is adequate. It takes a writer to recognize the questions worth asking, which answers are helpful, which suggestions are actionable, and how to draw from that feedback in a way that enables continued growth.
Growth is what should happen when we have our work critiqued, and not everyone has a human editor willing to volunteer their time.
As the world changes, I always try my best to advocate for empathy and understanding. I keep returning to the image of ants in a jar: agitated into conflict, yet oblivious to the outside force disrupting their world. As the development of AI continues to accelerate, we’re the ants who are blameless in our infighting.
Something on which every AI cynic and I agree is that these generative AI programs were given to us with scarily little warning, vetting, or oversight, and they’ve led to a litany of problems as a result. It can be tempting to assign responsibility to everyone who uses this software that was so rashly peddled to the public, and to believe each and every one of us is part of the greater problem. Maybe there’s a kernel of truth in the idea. We feed the machines data and they draw patterns from it, and much of that data wasn’t given voluntarily. For many, theft is at the heart of their revulsion toward the new technology.
But theft isn’t unique to the world of generative AI. It’s the backbone of the modern internet. Social media platforms, search engines, and algorithm-driven feeds have long relied on the unconsented extraction of our data, preferences, and behavior to fuel their profits. Large language models like ChatGPT and Gemini are a direct extension of that. That’s not to downplay the breaches of liberty that have tainted our digital lives from day one, only to point out that intellectual thievery is more hopelessly ubiquitous than jaywalking.
As with so many of the digital tools that have reshaped our lives, it’s not fair to cast all blame on users. Our anger over this AI storm should be directed almost entirely at the higher-ups who knew what they were doing when they freed their genies from bottles.
We can spend our days trading insults with those who’ve been algorithmically trained to hate us, or we can blame the tech titans who were well aware that their apps encouraged these divides. We can make criminals of all of the people who are using AI, or we can take a step back and appreciate the magnitude of this inflection point. What it means to be a species during the dawn of artificial intelligence. To be simultaneously thrilled by its potential and horrified by its potential consequences.
There are many uses of AI that I ethically refuse to stand behind. When I know an article is AI-generated, I’ll reject it faster than you can say “machine learning,” and I’ll report the “authors” for eating into the income of those who’ve worked for their talents. But I don’t hate them. I try to remember that this kind of abuse was hard to fathom a few years ago, and the very dilemma is just another strange feature of living in frenetic times.
There must be something deeply liberating about generative AI for those who’ve always had stories and ideas they wanted to share, yet lacked the means to articulate them. And I can only imagine how confusing it must feel for those people to finally see them take form only to be told that they’re not entitled to them. That the thoughts aren’t their own, and that the finished products on the screen before them are the results of borrowed time and talent.
The ethical quandaries are myriad, and there are no objectively correct answers on how we should approach them.
We might be neck-deep in murky waters. But we’ve also been catapulted into an era of unprecedented possibility. There’s no use denying that we’ve reached a momentous fork in the road.
We’re scrambling in real time to determine what constitutes authentic humanity in this new age. I certainly don’t have the answers myself. But I can say with confidence that putting our heads in the sand and assigning labels to one another won’t get us through this storm.
Artificial intelligence is a Pandora’s box we’ve already opened, and we won’t be sealing it again anytime soon. That doesn’t mean we should just accept this deluge with open arms or without asking big questions. But nor do I think we can allow our fears to set the limits of our imagination.