From coworker emails to articles on the Internet, are you starting to wonder whether what you’ve been reading lately was generated by a bot or a human? Online AI text detectors can help identify chatbot-generated text, but they don’t always work, and using them can be time-consuming. With a little training, though, there are a few ways for humans to manually identify AI-generated text. Here’s how.
Auto Generated Text Is Everywhere Now
After its release in November 2022, AI-generated text from ChatGPT quickly made its way into our everyday lives. Rabbis started using AI to write their sermons, judges are using ChatGPT to get legal advice, and students are going wild using AI to do their homework.
How does AI-generated text work?
AI-generated text works by using natural language processing (NLP) algorithms to generate text from data samples. These data samples can be a controlled set, like a group of files, or the entire Internet. The NLP algorithms then analyze their provided data sets and generate text that mimics the style and structure of the original data.
Basically, the AI generates text by comparing previously published content and making predictions on what it should say next.
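To make that prediction idea concrete, here is a deliberately tiny sketch — a toy bigram model, nothing like the scale of ChatGPT — that “learns” which word tends to follow which from a sample, then guesses the next word the same way:

```python
import random
from collections import defaultdict

# Toy illustration only: real chatbots use massive neural networks,
# but the core idea is the same -- predict the next word from patterns
# seen in the training data.
sample = "the cat sat on the mat and the cat slept on the mat"
words = sample.split()

# Record every word that followed each word in the sample.
followers = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    followers[current].append(nxt)

def predict_next(word):
    """Sample a next word based on what followed `word` in the data."""
    options = followers.get(word)
    return random.choice(options) if options else None

print(predict_next("the"))  # either "cat" or "mat"
```

Because the model only echoes patterns from its training data, its output is statistically plausible but has no understanding behind it — which is exactly why the tells described below show up.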
Is there a way to detect AI-generated text without using a ChatGPT detector?
To the untrained eye, auto generated text can be very convincing, which is why AI generative tools like ChatGPT are so popular with students, and hated by teachers.
There is no foolproof method for detecting AI-generated text. AI text detector tools, like GPTZero, are very helpful at identifying AI-generated content, but they don’t catch everything. Using online AI writing detector tools can also be very time-consuming.
So if you are checking multiple documents for AI-generated text, it’s helpful to know how to manually scan for auto generated text before using an online ChatGPT detector tool. With a little training, visually spotting chatbot generated content isn’t that hard.
Here’s How Humans Can Visually Check Content For AI-Generated Text Without Using An AI Text Detector
If you know what to look for, you can easily identify auto generated text. Here are some clues that may help you spot AI-generated content:
Lack Of Creativity Or Originality:
AI-generated text often lacks unique wording and may repeat phrases and sentences. If what you are reading feels overly robotic and generic, then it might be AI-generated text.
Too Many Short Sentences:
AI-generated content usually consists of shorter sentences. Although AI is capable of replicating basic human writing, it is not yet able to create longer complex sentences. If you’ve used ChatGPT before, try to remember how many complex sentences it has generated for you that contained a semicolon. Probably none.
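If you want to turn this gut check into a number, a quick script can measure average sentence length and count semicolons. This is only a rough heuristic (the thresholds that count as “suspicious” are up to you, and splitting on punctuation will miscount abbreviations like “Dr.”), but it’s fine for a fast scan:

```python
import re

def sentence_stats(text):
    """Return (average words per sentence, semicolon count).
    Splitting on . ! ? is a simplification, good enough for a quick check."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    avg_len = sum(lengths) / len(lengths)
    return avg_len, text.count(";")

text = "The sky is blue. It is nice today. Many people like the sun."
avg, semicolons = sentence_stats(text)
print(avg, semicolons)
```

A document full of uniformly short sentences and zero semicolons isn’t proof of anything on its own, but it’s one more clue to weigh.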
Inconsistent Writing Style:
Text generated by AI may switch between writing styles, use repetitive sentence structures, or have unnatural phrasing. Since the AI is just using a random selection of inputs to auto generate text, it has no “real” writing style.
For some reason, AI likes to use weird metaphors. One funny example I found on Twitter was an AI referring to Ticketmaster as a “ticketing giant”.
Sure, Ticketmaster is a ticket monopoly, but calling the company a “ticketing giant” is a weird association. It’s a silly mistake an AI would make because it doesn’t really understand what it’s talking about.
Auto Generated Text Is Too Perfect:
Human-written text often contains accidental typos and even some slang. But AI-based language models almost never make grammar or spelling mistakes. AI-generated text lacks the natural flaws and quirks that real people have in their writing. And that’s where most students go wrong when trying to cheat using ChatGPT.
If a C+ level student all of a sudden hands in a grammatically perfect paper, that’s a red flag to any teacher.
If you read an article where the same word is repeated over and over again, it was likely written by an AI. There are two reasons for this.
- If the AI doesn’t really know what it’s talking about, it will try to fill space by repeating keywords. This often results in the same ideas just being rephrased over and over again.
- Many AI tools allow users to dictate SEO keywords for the AI to use in its text generation. This can result in SEO keyword-stuffing, which is when a word or phrase is repeated so often that it sounds unnatural.
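Both kinds of repetition are easy to measure. A sketch of a simple word-frequency check (the `min_len` filter that skips short filler words like “the” is my own arbitrary choice):

```python
import re
from collections import Counter

def top_repeated_words(text, min_len=4, n=5):
    """List the most-repeated longer words -- heavy repetition of one
    keyword can be a sign of SEO stuffing or AI filler."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if len(w) >= min_len)
    return counts.most_common(n)

sample = ("Our widget is the best widget. Buy the widget today, "
          "because this widget beats every other widget.")
print(top_repeated_words(sample))  # "widget" tops the list with 5 uses
```

If one keyword dominates the count the way “widget” does here, read those sentences closely — a human editor would usually have varied the wording.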
Limited Ability To Provide Thoughtful Insight Or Analysis:
AI-generated text may not fully understand context, tone, and cultural references. As a result, it often struggles with providing any valuable insight or analysis.
AI tools like ChatGPT are much better at rewriting human written content than generating any original insight or analysis. AI is great at collecting and listing data, but it struggles to turn it into something meaningful. This makes AI better suited for static writing about well-documented topics, like history, than current events or analytical writing. The more information available on a topic, the better AI can use it to create and manipulate content.
Factually Incorrect About Everyday Human Experiences
AI generators are really good at writing about certain things, like historical events, but they have a lot of trouble explaining common situations that humans experience in their everyday lives.
For example, I just asked ChatGPT what to do when it rains, and it said I should “put on an umbrella”.
- What should I do when it rains?
- When it starts to rain, you should take shelter and put on an umbrella.
Or here’s another example. I asked ChatGPT about the time required to pour a cup of coffee. Because this is such a simple and common task, there’s not a lot of online documentation about the few seconds required to pour a cup of coffee. So, as a result, the AI makes a guess.
- How long does it take to pour a cup of coffee?
- It typically takes about 1–2 minutes to pour a cup of coffee.
I don’t know about you, but if I waited and watched someone pour a cup of coffee for two whole minutes, I might pass out from boredom.
Keep an eye out for statements that are factually incorrect, especially when it comes to everyday human experiences. The AI doesn’t know any better and the human using AI to cheat may be too lazy to catch the error.
Why Human Society Adopted AI Tools Into Our Everyday Lives So Quickly
Thanks to ChatGPT, chatbots went from obscure niche tools to mainstream deployments across the Internet in just a few months. Companies like Microsoft and Google have invested billions in AI technology and are integrating generative AI into every element of their software products.
What makes generative AI technology so exciting is also what makes it so frightening. With its amazing natural language ability, it’s becoming difficult to tell the difference between a bot and a real person.
Chatbot content is already causing problems for academics, educators, and editors. With AI-generated plagiarism and cheating on the rise, regular plagiarism detection tools may not be enough. It’s becoming a lot harder to identify the difference between human and bot generated content. AI detection tools can help, but they are far from perfect.
Frank Wilson is a retired teacher with over 30 years of combined experience in the education, small business technology, and real estate business. He now blogs as a hobby and spends most days tinkering with old computers. Wilson is passionate about tech, enjoys fishing, and loves drinking beer.