Is AI better at writing than humans?

AI is taking on more human writing every day, relieving us of time-consuming tasks – and, unfortunately, sometimes whole jobs. It produces news content, takes care of inconvenient school essays, and it’s often the first result you see in a Google search.

But while tools like ChatGPT produce deceptively high-quality content that’s hard to distinguish from human writing on the surface, they fall down in more meaningful areas. AI may pass the Turing test, but it’s still in the depths of the uncanny valley, and it hasn’t yet cracked the innermost workings of the human mind.

So how does AI really stack up against human writing? And how can you tell the difference between the two?

Distinguishing human writing from AI can be tricky, and even AI has trouble identifying its own output. In 2023, OpenAI launched a tool to do the job, but soon removed it because it wasn’t reliable. The tool only managed to spot a quarter of AI-written texts, while incorrectly labelling human writing as AI 9% of the time.

In education, teachers understandably want to know when students have relied heavily on AI instead of producing their own work. To limit AI’s threat to academic integrity, the plagiarism detector Turnitin now estimates the percentage of a document generated by tools like ChatGPT. While some claim this is 98% accurate, others are more sceptical and worry it regularly leads to false accusations of cheating.

What about humans? How good are we at telling our own species from a computer? According to a US survey, people can detect writing generated by the GPT-3.5 language model around 47% of the time, falling to 37% when GPT-4 is used. That’s worse than chance: random guessing would score about 50%, so participants weren’t just failing to spot the AI – it was actively misleading them. Still, while AI is getting better at mimicking human language, it’s not yet fooling everyone.


How is AI different from human writing?

Considering how often both people and machines struggle to identify their own kind’s writing, are there any reliable ways to do it? Here’s what I’ve discovered from my experience and research.


AI is predictable

While AI is trained on human writing, the way it produces text is very different. Each human brain is a unique ecosystem of information, experiences, and personality traits, giving every person a distinct writing style that fluctuates based on mood and responses to new information.

AI also relies on a vast pool of data and is getting better at understanding pieces of information and how they relate to each other. But while human thought and writing are infinitely diverse, ChatGPT writes by working out the most typical response to your prompt based on an analysis of all the information it has. Once it gets started, it continues by picking the word or phrase that most commonly appears next in the sequence.
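
To make that concrete, here’s a toy sketch of the idea in Python. The tiny frequency table is entirely made up for illustration – real models score every token in a huge vocabulary using a neural network – but the greedy “pick the most likely next word” loop captures the basic shape of the process:

```python
# Toy next-word prediction: a made-up table of which words tend to
# follow which, standing in for a real model's learned probabilities.
FOLLOWERS = {
    "I": {"believe": 0.4, "enjoy": 0.3, "think": 0.3},
    "believe": {"in": 0.7, "that": 0.3},
    "in": {"the": 0.6, "making": 0.4},
    "the": {"importance": 0.5, "things": 0.5},
    "importance": {"of": 1.0},
}

def generate(start: str, steps: int = 5) -> str:
    words = [start]
    for _ in range(steps):
        options = FOLLOWERS.get(words[-1])
        if not options:
            break  # nothing learnt about this word, so stop
        # Greedy choice: always take the single most probable next word,
        # which is exactly why the output drifts towards generic phrasing.
        words.append(max(options, key=options.get))
    return " ".join(words)

print(generate("I"))  # -> "I believe in the importance of"
```

Run repeatedly, a greedy loop like this always produces the same, most average continuation – real chatbots add a little randomness, but the pull towards the statistically typical phrase remains.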

ChatGPT essentially gives you the most generic response possible – an average human writing style that’s quite unlike any individual human writer, like those average face composites that bear no resemblance to anyone you’ve ever met.

These composites are consistently rated more attractive than the faces used to generate them, and something similar happens with AI writing. Perhaps the reason many judge it to be of such high quality is that generic writing smooths the rough edges, removing anything too odd or offensive. But some would argue it also robs writing of the idiosyncrasies that make it enjoyable.


It tries too hard to be conversational

Studies show we find it difficult to separate AI from human writing. But what is it that confuses us so much? In one study, people mistook long and unusual word choices for signs of AI, when in fact they were more typical of human writing. At the same time, participants wrongly thought contractions (“don’t”, “isn’t” etc.) indicated human writing instead of AI.

In some situations, AI uses colloquial language to sound less robotic. But its tendency to sound “more human than human” (a phrase used in the study write-up) is, ironically, one of the ways it gives itself away.

You might think that because of the way humans speak, our writing should be less formal than AI, but this isn’t necessarily the case. We’re often quite reserved outside of intimate contexts like text messages to friends, only letting our guard down when we feel we won’t be judged for it. Just think of how some of your work colleagues write emails and reports compared with the way they speak to you in person. (It’s worth noting, though, that overly formal language is one of the things good writers avoid.)

AI, on the other hand, takes a casual tone in informal contexts in a bid to sound more human (even when real humans would be more formal). As the study asked AI models to write dating profiles and other personal descriptions, it’s likely they erred on the side of familiarity.

Here’s part of what ChatGPT came up with when I asked it to write a dating profile (with no prompts prescribing tone or language):

I believe in the importance of making time for the things that bring joy – whether that’s cooking up something new in the kitchen, catching live music, or staying active outdoors. If you’re up for spontaneous road trips and deep conversations, we might just get along!

As well as contractions, it’s full of informal phrases like “cooking up”, “catching […] music”, “up for” and “we might just get along”. Some might feel this is too casual for someone they’ve never met before, and I’m not convinced anyone uses language like this in real-life conversations. It’s as if AI is trying so hard to be relatable, it does the opposite instead.

But while ChatGPT uses laid-back vocabulary, other aspects of the language are too formal: “I believe in the importance of making time for the things that bring joy” could simply be “I believe it’s important to make time for the things that bring joy”. Turning the adjective “important” into the noun “importance” is called nominalisation, and it’s a hallmark of stuffy writing. Of course, beneath this cloak of grandeur is a pretty banal remark, and a good writer would scrap this in favour of something more meaningful.


At other times, it’s too formal

As we’ve just seen, while generative AI is prone to cheesy attempts at being conversational, it also swings too far the other way. According to a recent linguistic study, ChatGPT-3.5 uses more complex vocabulary than humans (you might think this contradicts the study above, but hear me out).

AI also uses more nouns, conjunctions and direct objects, while humans use more verbs and prepositional phrases. This might make AI writing seem rigid and clunky compared with a more dynamic human approach. The dating profile excerpt above is a great example of this, with ChatGPT unnecessarily using a noun instead of an adjective or even a verb.

On top of this, AI relies more on conjuncts like “therefore” and “however” to transition between ideas, which could make it sound formulaic in contrast with humans’ more subtle and varied language.

When it comes to phonology, the study showed AI prefers stronger stress patterns (the way we emphasise parts of words) and uses more consonants than humans, which could result in a harsher-sounding, less personable tone.

Why does this study point to AI having a more formal, detached style, while the previous study showed the opposite? Perhaps it comes down to circumstances. This study asked AI to write English language proficiency exam tasks, which is a far more formal context than the dating profiles in the first study. The models are more likely to drop their attempt at human familiarity in this situation.

ChatGPT can vary its style this way because it’s been trained using such a variety of data. While it’s been exposed to plenty of human conversations, it has also learnt from Wikipedia articles, online books, and website content judged to be high quality.

However, it’s clear that when AI imitates human language, it doesn’t always choose the right features for the context or strike the right tone. As we’ll see later, this is because it lacks judgement and perspective, meaning even the most specific prompts can’t make it sound entirely human.


AI isn’t logical

AI is rarely concise. And I don’t just mean it uses verbose sentences and flowery vocabulary. It’s also repetitive. If you look carefully, you might find the same point made multiple times, though ChatGPT often words it so differently that you don’t notice. The best human writers spare their readers unnecessary words.

In longer texts, you’ll notice a lack of cohesion and coherence, with AI jumping back and forth between points and failing to make meaningful connections between them. It’s not great at building consistent and methodical arguments.

Then there’s the fact that ChatGPT and other tools are prone to drastic errors and wild misinterpretations. You may have heard of hallucinations, where AI simply makes up information that sounds like it could be right.

In the study where people underestimated how seemingly human AI could sound, participants nevertheless flagged repetitive and nonsensical text within AI writing. Even when it manages to pull the wool over our eyes with anthropomorphic language, AI simply doesn’t understand what it’s saying in the same way a human would. While it can collect and present relevant information in different ways, it can’t abstract from it to make rational and unique observations. This brings us on to our final major difference.


AI lacks human experience and judgement

Regardless of whether it’s being formal or casual, AI hasn’t yet cracked what it means to communicate like a human. You can prompt ChatGPT until the cows come home, but it still lacks humans’ sense of what sounds good and what’s likely to elicit the right emotional response from readers.

While tools like ChatGPT are extremely capable, they lack artificial emotional intelligence. Even with access to all the facts, ChatGPT’s knowledge is relatively superficial; it doesn’t have feelings about, or real-world personal experience of, the things it’s learnt about. It’s also very literal, and it isn’t as good at dealing with nuance and the implications behind language as we are.

All of this means that while ChatGPT is good at sounding factual and objective (even though it might be making things up!), it lacks a perspective of its own. AI writing can be hard to distinguish from human-written essays, articles and books, but it struggles with more personal formats. It’s best, therefore, not to use it to persuade or build rapport – in one-to-one emails, sales copy and even business proposals – as these can end up sounding stilted. You should also avoid using it for topics you feel strongly about, or on which you have lots of specialised knowledge.


A real example

I realise some of this seems vague without a concrete example, so I asked ChatGPT-3.5 to write an introductory paragraph for a blog post about the limitations of AI. Here’s what it came up with:

Artificial Intelligence (AI) has revolutionised industries, reshaping how we live, work, and interact with technology. However, despite its remarkable advancements, AI is far from infallible. As we increasingly rely on these intelligent systems, it’s crucial to recognise their inherent limitations. Understanding these constraints—from biases in data to the lack of true understanding—helps us set realistic expectations and guides us toward responsible and ethical use of AI in our daily lives.

The paragraph makes sense and is free from grammar and spelling mistakes. But it’s also free from personality. Here’s why it isn’t very effective:

  • It says a lot without communicating much, labouring the simple point that while AI is changing our lives, it raises problems that we need to deal with.
  • It’s obvious. This point has been made so many times that it now goes without saying. Like much AI writing, it regurgitates truisms and says nothing new or interesting.
  • It’s not just the sentiment that’s unoriginal; the wording is too. The paragraph relies on trite phrases and sweeping statements like “AI has revolutionised industries”, “reshaping how we live, work, and interact”, “remarkable advancements” and “inherent limitations”.
  • This language is overly formal as well. While it might sound profound, it’s not very meaningful, and this certainly isn’t how humans speak to each other.
  • It lacks a personal perspective. Even though it uses the first person “we”, it offers no emotions, experiences or opinions beyond the predictable cautious approach to AI.

Is AI too different to be useful?

None of this is meant as a scathing attack on generative AI. In fact, tools like ChatGPT can be extremely helpful to the writing process, offering examples and ideas to guide you in the right direction and speeding up research.

But chatbots can’t reproduce your lived experiences and viewpoints, and they aren’t a substitute for honing your own writing skills. At the same time, AI isn’t a professional copywriter (psst, I am!).


Develop your writing skills with my online course, which lets you learn quickly through detailed examples, interactive exercises, and feedback. You can also get personalised support with one-to-one coaching.
