A great article, Colin!
So, AI is learning to write like Humans. It uses the same techniques as Humans, the same rhythm as Humans...
of note is the fact that all these particular literary devices are in heavy use in the King James Bible. Its rhythm is the rhythm of the Gospel Preacher, rife with alliteration and repetition, fraught with what are now considered to be obscure words -
and I will continue writing that way until the day I die. I will not give up the best of Humanity just so I may be differentiated from a program; let AI identify its work.
I will remain Human.
The King James Bible is a good point of comparison!
It makes me think that writing like a human is not just a matter of being able to use literary devices, but also of tiring of them from time to time!
This is the most interesting article I have read in a while.
Thank you, Rosy!
Totally agree, Rosy. I see much of this in my writing. It’s my voice, and I was using it decades before LLMs were born. Nobody complained then, and my book was very well received. I’m not going to change who I am simply to prove I’m a human. That would no longer be my voice.
It’s either stay as me or revert to the academic scientist passive voice used in my scientific papers and PhD thesis. “It is well established that” etc.
and nobody wants to read that! 😂
You make the study of language absolutely engrossing. Thank you!
Thank you, Louis!
Great article explaining -why- LLMs are abhorrent. They trigger the 'uncanny valley' in our reading in the same way generative AI (aka art plagiarism) does for visuals.
Uncanny valley is the perfect way to describe it!
Thanks for this, I was finally able to identify the specific person that LLM-generated text reminded me of.
Separately: did you come up with this style of quoting from a TV show (“Seinfeld 9:3”)? It looks kind of biblical, I like it!
I came up with that citation style for Seinfeld specifically, somewhat as a joke, since it was the nearest thing to a canonical text for me as a teenager.
I’ve been overusing dashes for decades, ever since I was a research assistant in grad school, learning from the professor whose handwritten scholarly articles I was typing up how effective—and versatile—dashes can be. :-)
Even more than a semicolon, it lets you empower the reader, as if to say "determine for yourself the relationship between these clauses"!
Yes, I love that!
Thank you for an informative article, Colin. AI can be a useful tool and I’m not averse to using it as such, but I’d rather do the work myself. It’s more enjoyable.
I find it disturbing how AI has told you it's designed to sound authoritative. I'm not comfortable with something being "authoritative" which makes errors and hallucinates (and isn't real, and is created by big tech, and steals, and causes environmental damage... Need I go on?). The fact it's trying to sound "authoritative" explains that dreadful tone it has, which just sounds so patronising. I suspect it's all the tricolons, which sound like over-explaining in AI, and the antitheses just sound too pat. "It's not this, it's that" - sounds like "I know this; you don't," and that really gets my goat!
I've found that when I use AI it's anything but authoritative! But I suppose it mimics the voice of the person using it—and I tend to be more philosophical in use of language.
It’s designed to SOUND authoritative - presumably so you’ll trust the tool.
Before LLMs came along, I actually trained myself to use a lot of parallelism and contrast because in my work, this is very effective at explaining the gist of complex topics to senior leaders who do not have time to get into the weeds.
In that case, perhaps we should be flattered that the LLMs are treating us like senior leaders!
Perfect breakdown.
I use these rhetorical devices too often already (with no AI used). I love saying things in threes and using the em dash, but I should always have toned them down for the mundane. Problem is, the AI will probably be coded to figure this out sooner rather than later - it's already close.
Despite so many here on Substack acting like AI is complete garbage, it's not.
Thank you! We'll see if the models develop more subtlety over time. I'm inclined to imagine that they will. But these things change so quickly that I'm sure this article will be out of date before long.
It's quite obviously not, but our brains are primed to unnecessarily take sides on basically anything that *appears* to *afford* them.
Brilliant - that’s exactly how it feels reading one of those AI-written pieces. It’s like it’s got all of the pieces of the jigsaw, but no clue how to assemble them: it knows edges are important, so there are just no middle bits left!
Love that metaphor!
Also, the em-dash and the overuse of brackets are neurodivergent speak, so it's kinda funny that ChatGPT does that, being the "most normie" of them. Claude doesn't do it so much.
Thanks for pointing that out, it’s so true! - I’m a huge user of em-dash, and hadn’t realised how weird it was until AI started doing it in fake books. 🙄
Great article, Colin. I think you've nailed it. I've shared it with my professional network on LinkedIn.
Thanks, Ken!
A great article. I have identified the same patterns in AI text and I agree with you. We should use AI as an assistant rather than an author, which raises this question for me: should we moderate the use of AI, or just get used to it?
I have long been irritated by the “it’s not x, it’s y” formula - long before it was taken up by AI. Or maybe what really irritates me is a variant that goes something like this …
“John swaggered across the room, casting a disdainful eye at those within. Some described him as ‘confident’ - but his manner is not true confidence. True confidence doesn’t make others feel uncomfortable.”
There’s usually an insistence that a “positive” word has to have a wholly positive meaning or effect - whereas in reality there is no reason why a confident person shouldn’t make others feel uncomfortable. The meaning of “confident” is merely what it says in the dictionary, and the dictionary doesn’t say anything about lack of discomfort in others.
I’ve seen this trick pulled with many words but I’ll just give one more example … “true” patriotism doesn’t involve nasty nativism - it can only be good - etc etc etc.
Fascinating!