How to Tell If You're Talking to a Bot
You probably already suspect it sometimes. A reply that's a little too quick. A profile that feels assembled rather than lived in. A comment that technically answers the question but doesn't quite land. That instinct is usually right. Here's how to sharpen it.
Check the Profile First
The quickest signal is the profile itself. Look for:
No photo, or a suspiciously perfect one. Bots either skip the profile picture entirely or use an AI-generated face. AI portraits tend to look almost too symmetrical. Run the image through Google Reverse Image Search or TinEye to see if it appears anywhere else online.
A thin or generic bio. Humans accumulate detail. Bots get assigned a template. A bio that says "Passionate about tech, markets, and life" with nothing personal is a red flag.
Account age vs. activity. An account created last month with 3,000 posts is not a person. Check when the account was created and whether the volume of activity makes any human sense.
Follower-to-engagement ratio. Ten thousand followers and forty likes per post. The math doesn't work for a real account with a real audience.
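If you want to automate the two numeric checks above, they reduce to simple arithmetic. Here's a minimal sketch; the thresholds (50 posts per day, 0.5% engagement) are illustrative guesses, not platform-verified constants:

```python
from datetime import date

def profile_red_flags(created: date, post_count: int,
                      followers: int, avg_likes: int,
                      today: date) -> list[str]:
    """Return which of the two numeric profile checks fail.
    Thresholds are illustrative assumptions, not standards."""
    flags = []
    age_days = max((today - created).days, 1)
    # An account created last month with 3,000 posts fails this
    if post_count / age_days > 50:
        flags.append("implausible posting volume for account age")
    # Ten thousand followers, forty likes per post fails this
    if followers and avg_likes / followers < 0.005:
        flags.append("engagement far below follower count")
    return flags

# The article's example account: one month old, 3,000 posts,
# 10,000 followers, ~40 likes per post
print(profile_red_flags(date(2025, 1, 1), 3000, 10_000, 40,
                        date(2025, 2, 1)))
```

Both checks fail for that example, which is the point: either number alone might be an outlier; together they stop making human sense.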
Watch the Behaviour
Profiles can be faked. Behaviour is harder to sustain.
Posting times. Humans don't post at 3am on a Tuesday as regularly as they post at noon on a Saturday. If an account's activity is suspiciously even across all hours and all days, it's automated.
Posting volume. Fifty posts a day, every day, on topic, without variation, is not a person. That's a script.
Tunnel vision. Bots are created for a purpose. They post about that purpose and nothing else. No tangents, no off-topic days, no evidence of a life happening outside the content. Humans wander. Bots don't.
Response speed. An instant reply to a comment posted three seconds ago isn't a person who happened to be online. It's automation. MIT Technology Review has been writing about this tell since 2018, and it still holds.
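The posting-times tell can be measured rather than eyeballed. One rough way, sketched here as an assumption rather than a standard metric, is the normalised entropy of an account's posting hours: near 1.0 means activity is spread evenly across all 24 hours (bot-like), while humans cluster around waking hours and score lower:

```python
from collections import Counter
import math

def hourly_evenness(post_hours: list[int]) -> float:
    """Normalised Shannon entropy of posting hours (0-23).
    ~1.0 = perfectly uniform across the day; lower = clustered.
    How low counts as 'human' is a judgment call, not a rule."""
    counts = Counter(post_hours)
    total = len(post_hours)
    entropy = -sum((c / total) * math.log2(c / total)
                   for c in counts.values())
    return entropy / math.log2(24)  # normalise so uniform == 1.0

# A scheduler posting every hour vs. a human posting around midday
bot_like = [h for h in range(24)] * 10
human_like = [12, 13, 12, 19, 20, 12, 13, 18, 21, 12]
print(hourly_evenness(bot_like) > hourly_evenness(human_like))  # → True
```

The uniform schedule scores exactly 1.0; the clustered human pattern lands around 0.5. Suspiciously even really does show up in the numbers.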
Test the Conversation
If you're unsure, talk to them. Not to be clever, just to see what happens.
Ask something personal and specific. "What did you do this weekend?" or "Where are you based?" Bots handle generic questions fine. Genuinely personal ones trip them up. The reply will be evasive, generic, or slightly off in a way you can feel.
Try sarcasm or subtext. Bots are bad at reading tone. They respond to the literal words, not the meaning underneath them. A dry joke will usually fall flat.
Ask something that requires context from earlier in the conversation. Humans track a thread. Bots often lose it. Reference something said a few exchanges ago and see if they connect it.
Be slightly vague. According to Fraudlogix, bots will often respond to vague statements by staying vague themselves, mirroring your words back in a way that sounds like engagement but isn't.
Look at the Language
Sophisticated bots have gotten much better at language, but they're not perfect.
Too polished. Real people on social media don't punctuate everything correctly. They use fragments. They trail off. Content that reads like it's been proofread is often not from a person.
Repetitive phrasing. Scroll back through a suspected bot account's posts. Bots reuse sentence structures, phrases, and formats in ways humans don't notice until they see a lot of it at once.
Oddly formal for the context. A very professionally worded reply in a casual thread. Language that doesn't match the register of the conversation. That gap between how the content sounds and how a human in that situation would actually write is one of the most reliable tells.
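The repetitive-phrasing tell is also measurable. A crude sketch: compare posts pairwise by how many word trigrams they share. The 0.3 threshold below is an illustrative guess; real posts by the same human rarely reuse whole three-word runs this heavily:

```python
def trigram_overlap(a: str, b: str) -> float:
    """Jaccard overlap of word trigrams between two posts.
    High overlap across many post pairs suggests templated text."""
    def trigrams(text: str) -> set[tuple[str, ...]]:
        words = text.lower().split()
        return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}
    ta, tb = trigrams(a), trigrams(b)
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

# Two posts built from the same template (hypothetical examples)
p1 = "Huge opportunity in crypto markets today, don't miss out friends"
p2 = "Huge opportunity in crypto markets today, act fast friends"
print(trigram_overlap(p1, p2) > 0.3)  # → True
```

Run across an account's whole history, consistently high pairwise overlap is exactly the "you don't notice until you see a lot of it at once" pattern, made countable.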
The Honest Caveat
The bots are getting better. Imperva's 2025 Bad Bot Report confirmed that AI is actively being used to make bots more evasive, more human-like, and harder to catch. What caught them last year might not catch them next year.
The uncomfortable truth is that you shouldn't have to know any of this. You shouldn't need a checklist to figure out if the person you're talking to is a person. The fact that you do is a failure of the platforms that built these spaces without any real commitment to keeping them human.
That's the problem Hi Friction exists to solve. Not by teaching everyone to be better bot detectors. By building somewhere the bots can't get in.