Resisting the Pull of Generative AI

By Jeff Raderstrong

AI’s writing abilities have improved dramatically in the last few years: it can produce compelling fiction, and in certain cases humans prefer AI writing to human writing. Writerly responses range from outright outrage to maybe-this-isn’t-so-bad. While the philosophical debate continues, the reality remains: AI can write, maybe better than 90% of the population, and maybe better than you. What’s a writer to do?

This is an especially important question for those of us in the “business writing” community, where there’s creativity, sure, but things are a little more rote. Taking a blog post and turning it into a series of LinkedIn posts isn’t that complicated. AI can’t write books yet (although I’ve worked with some clients who’ve tried!) but it can easily take a series of articles and re-work them into a pretty compelling chapter draft.

AI will likely reshape a lot of business copywriting, in the same way we no longer have typists at a CEO’s beck and call. But I am slowly dipping my toe into using AI for its generative qualities, meaning its ability to write sentences that are designed for humans to read. I think a lot of business writers are as well. I’ve begun to notice some unmistakable traces of AI quietly seeping into content creators’ newsletters. It doesn’t announce itself—it shows up in more muted, subtle ways.

Giving in to Generative AI?

Yet outsourcing too much to AI can cause our writing muscles to stagnate and wither away. (Just as most people’s handwriting has become atrocious, and few can parallel park without a backup camera.) So we need to be honest and realistic about how AI will transform business writing and the skill of human writers. Our response cannot be as simple as “use it” or “don’t use it.” The writing tasks I do for clients run across a wide spectrum, and generative AI can be helpful in some parts of that spectrum. Many of my clients are actively using AI, and I feel it’s my responsibility to help guide them through what’s helpful and what’s not.

I’ve created something of a framework for myself on when to use AI, inspired by a similar framework from the good people at Move 37. While others discuss AI use in ethical or moral terms (which is an important and separate conversation), I am trying to keep this a little more grounded in what AI can do, what it can’t, and what we want to do about that.

Let AI do it

The vast majority of my AI use, maybe 85% to 95%, is for research. There’s no reason for me to use Google anymore. I used to spend time painstakingly researching every individual claim a client would make for background evidence; now I can put a list of research prompts into AI and it does that searching for me in about five seconds.

I’m also increasingly using AI for the task I described above: taking one piece of writing and turning it into something else. For example, I’ll upload a draft of a blog post I wrote and ask it to turn that into a fundraising email. Or I’ll upload two newsletters and have them condensed into one blog post.

To be honest, I don’t know how much time this saves me, and I worry that letting AI complete this task is the equivalent of letting my leg muscles atrophy while I’m strengthening my arms. But I find it very boring to take something I wrote prior (or someone else wrote) and then translate that into a different form of writing for a different purpose. I find that AI-speak can sneak its way into the final product (see below) and it takes time to then edit.

Other writers I’ve spoken with use AI to create outlines, and I think there is some value in that. However, I’ve found that the outlines it produces are best when there’s some “input” already created beyond the prompts themselves, such as a set of blog posts or an article series. So I’d place that task into the category described above. To be clear, I have never uploaded a full draft of a book into AI, as there are privacy and IP concerns in doing so (although my clients have done this despite my best efforts to educate them!). I pay for a pro-level AI subscription for confidentiality purposes: it keeps my data private and does not use my inputs to train the model.

There are other minor tasks outside these areas where AI is helpful: I’ve had it format citations (a lifesaver!) and help me untangle tricky grammatical structures, similar to what I assume Grammarly would do if I had a subscription. There are also times when I’ve been staring at a sentence or paragraph for too long and don’t know what to do about it, so I ask AI: What’s wrong with this? Its answer is helpful maybe 50% of the time.

AI is terrible at this

I’ve found that generative AI breaks down with anything longer than several hundred words. It’s pretty good at taking a blog post and turning it into an email, but it can’t take a blog post and write a book chapter. Filling in gaps is very, very hard for AI. Which makes sense, because all it’s doing is regurgitating its inputs. Telling it to build out sections of an outline with no additional information is like trying to fill in the missing pieces of a puzzle with ketchup.

I’ve had a few experiences now where I’ve edited books that were clearly written by AI, in full or in part. They made some kind of sense at the start, but the longer they went on, the more the thread was lost, until everything dissolved into nonsense.

This lack of input and direction negates most of generative AI’s value in producing text that humans will want to read. It’s why so many posts on LinkedIn now sound the same: people aren’t doing the work of distilling their ideas in an actionable and engaging way; they’re just hoping AI will do it for them.

AI is very good at producing output, but not good at shaping that output into something that matters. (I’d say a lot of humans have this same problem.) That’s why I’m not really worried about my job – even if AI moves past its “slop” phase, I think people will still come to me to help them figure out what they want to write and why.

I can’t let AI take this from me

This last category is separate from the previous because there are some things I will never let AI do even if it’s way better at it than me. Maybe one day AI can write better than any living human. That’s fine. I’m still going to write.

Because writing is thinking (a common phrase, but one I saw most recently from Professor Matthew Connelly, so I’ll cite him). The act of writing transforms someone, whether you do it alone or in collaboration. Every single client I’ve worked with on a project has ended up with a completely different perspective on their work, their business, or their life.

So anything I want to think about, or discover, I’ll write about. This includes all my fiction work, because it’s personal and I cannot do it without the self-discovery that comes from writing. That also includes any of the deep, long-form writing I do with clients or any new writing where we are thinking through what they want to say and why. They pay me not to produce content, but to help them transform themselves and their business. Writing is the best way to do that – and the pieces we create are a nice output.

I know a lot of writers are worried about AI, but I hope this way of thinking about it can help people understand what AI can be a tool for, and what it can’t. And maybe allow us all to focus more on the transformation that comes through writing and worry less about everything else.

Jeff Raderstrong is a writer and ghostwriter who helps people feel seen. He works with executives, entrepreneurs, coaches, consultants, and speakers to establish credibility and create authentic connections with others. His work has been featured in TIME, Forbes, Newsweek, MSNBC, and more. Learn more about him at www.raderstrong.com.
