
Maybe LLMs are a good thing for writing

A diplomat is a man who uses many more words than necessary to express much less than he knows. - Shilard Fitz-Oesterlen, Gwent - The Witcher Card Game

A thought struck me the other day while writing an application, one that roughly boiled down to: “How do I make this obviously not written by ChatGPT?”

While the AI technocrats among us may immediately jump to the ‘Resistance is futile, give up, you feeble meatsack’ viewpoint, I feel the reality is still a little more nuanced. My initial response was to make a joke about the meta nature of my concerns, or to find some other way of reducing the formality of the syntax and making the tone more conversational. That then got me thinking about why this felt like it would work.

See, to me, LLMs (large language models) will consistently produce grammatically perfect, fabulously eloquent bullshit. And I don’t just mean the bullshit that causes lawyers to lose their jobs or universities to freak out. It’s bullshit of the variety you hear from politicians at live debates, or from the PR departments of large companies on damage control (or, say, a potential candidate reaching out to a party they want to make a good impression on…). While the words may be expertly crafted and tick all the proverbial boxes of exceptional prose, they carry no emotional weight and are about as intellectually nourishing as a Drive Thru Happy Meal at 2:00 AM.

Formal writing (as it’s more commonly referred to) of this kind has long been my least favourite form of writing for the above reasons: at its best, it is a medium for someone to tell you exactly what they think you want to hear, with absolutely no promise of sincerity behind it. It’s formulaic to the point where entire industries are built around it, and it occupies vast swathes of human waking hours being produced and consumed. It’s also what LLMs are exceptionally good at producing, arguably to the point where they are significantly better at it than the majority of us.

Now, when the vast majority of people can produce a perfect piece of formal writing with nothing but the guidance of a few sentences, perhaps its intrinsic value decreases? Doubtless, when every candidate is producing a bulletproof cover letter handwritten by robots, recruitment departments will need to find another metric by which to judge a candidate’s potential value. Will this lead to a gradual decline in both the expectations of and the industry built around formal (bullshit) writing? Would this ultimately end up with people writing what they want to write about something, rather than what they feel they need to?

Even if that were not to be the case (after all, the above is pure conjecture), LLMs are unlikely to fade away just yet (even if they already suck at math), and we mere humans are going to need to find a way to adapt. Trying to compete syntactically is a losing battle against an opponent with an ever-growing database of the entire literary history of the human race. No, what I suspect/hope will happen is that people will start to show a little more ‘character’ in their writing, and maybe present a slightly more genuine picture of themselves, rather than the pristine, saccharine idol they have built through years of cover letters and CVs.

Sure, you can instruct your AI scribe to be more informal and make jokes, but humans have a pretty good sense for detecting a lack of sincerity (just try to find a mission statement from any billion-dollar company that doesn’t make you internally roll your eyes). And sincerity is hard when you’re producing text that is a heavily optimised summation of all the English text you can consume and tokenise.

To me, the best storytelling (and indeed writing) focuses less on the story being told and more on how it is told. The Lord of the Rings is a timeless classic enjoyed by millions, but the story is made somewhat less exciting if you choose to experience it through the Wikipedia plot summary rather than in the words of J.R.R. Tolkien. Every literary great has a unique style of writing that has encouraged millions to invest time in their worlds.

On a less grand scale, I find blog posts are often engaging more because of how they’re written than because of what the actual subject matter might be (I discover a lot of new interests through articles I enjoyed reading on something new to me). We love writing that we can relate to. This, to me, is the hardest thing for ChatGPT et al. to replace.

While LLMs could stand to muscle humans out when it comes to formal writing, I think this could in turn force us into finding our own niche within written communication. We will need to find ways to make our work stand out from the textbook answers provided by machines, and that will push us to invest more character and personality into what we write.



Perhaps this is a good thing for writing… I don’t know. If you have any thoughts on the above I’d love to hear from you.

This post is licensed under CC BY 4.0 by the author.