67 Comments
R. H. Snow:

A great article, Colin!

So, AI is learning to write like Humans. It uses the same techniques as Humans, the same rhythm as Humans...

Of note is the fact that all these particular literary devices are in heavy use in the King James Bible. Its rhythm is the rhythm of the Gospel Preacher, rife with alliteration and repetition, fraught with what are now considered to be obscure words -

and I will continue writing that way until the day I die. I will not give up the best of Humanity just so I may be differentiated from a program; let AI identify its work.

I will remain Human.

Colin Gorrie:

The King James Bible is a good point of comparison!

It makes me think that writing like a human is not just a matter of being able to use literary devices, but also of tiring of them from time to time!

Kris Martin:

I have a graduate degree in technical communication and decades of experience as a technical writer. When AI started getting popular, it seemed as if people were suddenly learning to write with a little more sophistication. Now I recognize it as AI slop, of course. Lately I’ve changed my style and approaches to avoid some of the now-overused devices identified here. Frustrating.

Rosy Pedrini:

This is the most interesting article I have read in a while.

Colin Gorrie:

Thank you, Rosy!

Dr Anne Whitehouse:

Totally agree, Rosy. I see much of this in my own writing. It’s my voice, and I was using it decades before LLMs were born. Nobody complained then, and my book was very well received. I’m not going to change who I am simply to prove I’m human. That would no longer be my voice.

It’s either stay as me or revert to the academic scientist passive voice used in my scientific papers and PhD thesis. “It is well established that” etc.

And nobody wants to read that! 😂

Boulis:

Hm, not a fan of passive voice no matter the author, human or inhuman.

Dr Anne Whitehouse:

I don’t think anyone is, but it is compulsory for scientific papers.

cortex_ghost:

Great article explaining -why- LLMs are abhorrent. They trigger the 'uncanny valley' in our reading in the same way generative AI (aka art plagiarism) does for visuals.

Colin Gorrie:

Uncanny valley is the perfect way to describe it!

Juan:

Thanks for this, I was finally able to identify the specific person that LLM-generated text reminded me of.

Separately: did you come up with this style of quoting from a TV show (“Seinfeld 9:3”)? It looks kind of biblical, I like it!

Colin Gorrie:

I came up with that citation style for Seinfeld specifically, somewhat as a joke, since it was the nearest thing to a canonical text for me as a teenager.

Jennifer A. Newton-Savard:

I’ve been overusing dashes for decades, ever since I was a research assistant in grad school, learning from the professor whose handwritten scholarly articles I was typing up how effective—and versatile—dashes can be. :-)

Colin Gorrie:

Even more than a semicolon, it lets you empower the reader, as if to say "determine for yourself the relationship between these clauses"!

Jennifer A. Newton-Savard:

Yes, I love that!

Helen Barrell:

I find it disturbing that AI has told you it's designed to sound authoritative. I'm not comfortable with something sounding "authoritative" when it makes errors and hallucinates (and isn't real, and is created by big tech, and steals, and causes environmental damage... need I go on?). The fact that it's trying to sound "authoritative" explains that dreadful, patronising tone it has. I suspect it's all the tricolons, which read as over-explaining when AI uses them, and the antitheses, which just sound too pat. "It's not this, it's that" sounds like "I know this; you don't," and that really gets my goat!

Omar Acevedo:

I've found that when I use AI it's anything but authoritative! But I suppose it mimics the voice of the person using it—and I tend to be more philosophical in my use of language.

Joe:

It’s designed to SOUND authoritative - presumably so you’ll trust the tool.

LV:

Before LLMs came along, I actually trained myself to use a lot of parallelism and contrast because in my work, this is very effective at explaining the gist of complex topics to senior leaders who do not have time to get into the weeds.

Colin Gorrie:

In that case, perhaps we should be flattered that the LLMs are treating us like senior leaders!

Louis Fromage:

You make the study of language absolutely engrossing. Thank you!

Colin Gorrie:

Thank you, Louis!

Kathlyn:

Brilliant - that’s exactly how it feels reading one of those AI written pieces. It’s like it’s got all of the pieces of the jigsaw, but no clue how to assemble them: it knows edges are important, so there’s just no middle bits left!

Colin Gorrie:

Love that metaphor!

Rob Rough:

I have long been irritated by the “it’s not x, it’s y” formula - long before it was taken up by AI. Or maybe what really irritates me is a variant that goes something like this …

“John swaggered across the room, casting a disdainful eye at those within. Some described him as ‘confident’ - but his manner is not true confidence. True confidence doesn’t make others feel uncomfortable.”

There’s usually an insistence that a “positive” word has to have a wholly positive meaning or effect - whereas in reality there is no reason why a confident person shouldn’t make others feel uncomfortable. The meaning of “confident” is merely what it says in the dictionary, and the dictionary doesn’t say anything about lack of discomfort in others.

I’ve seen this trick pulled with many words but I’ll just give one more example … “true” patriotism doesn’t involve nasty nativism - it can only be good - etc etc etc.

Fr. Scott Bailey, C.Ss.R.:

Thank you for an informative article, Colin. AI can be a useful tool and I’m not averse to using it as such, but I’d rather do the work myself. It’s more enjoyable.

B.o.G.:

Also, the em-dash and the overuse of brackets are neurodivergent speak; it's kinda funny that ChatGPT does that, being the "most normie" of them. Claude doesn't do it so much.

Kathlyn:

Thanks for pointing that out, it’s so true! I’m a huge user of the em-dash, and hadn’t realised how weird it was until AI started doing it in fake books. 🙄

Bruce Dale:

I guess only geezers (like me) still use parentheses.

Colin Gorrie:

When you get to the point of nesting them, then you'll know you've ascended to another plane of power!

Ken Grace:

Great article, Colin. I think you've nailed it. I've shared it with my professional network on LinkedIn.

Colin Gorrie:

Thanks, Ken!

Samar Havir:

A great article. I have noticed the same patterns in AI-generated text and I agree with you. We should use AI as an assistant rather than an author, which raises a question for me: should we moderate the use of AI, or just get used to it?

K K:

This is a truly excellent post! I read it about a month ago and since then I keep noticing these patterns. Thank you so much for making them salient for someone who doesn't have the habit of spotting rhetorical tools!

Colin Gorrie:

Thank you! I’m so glad to hear you’ve found it helpful in picking out the patterns.
