Why Is the Internet Filled With Bots?
It didn't happen by accident. Nobody sat down one day and decided to ruin the internet. It happened gradually, then all at once, driven by a set of incentives that nobody stopped to question until the damage was already done.
The Metric That Broke Everything
In the early days of social media, success was measured by real things. Did people show up? Did they come back? Did they tell their friends?
Then the metrics changed. Follower counts. Engagement rates. Impressions. Reach. Numbers that could be gamed, bought, and inflated. And once you could buy a number, people did.
Fake followers arrived first. Then fake engagement. Then entire networks of automated accounts designed to make things look more popular than they were. By the time platforms noticed, the incentive to clean it up was competing with the incentive to keep the numbers looking good.
What Platforms Knew and Did Anyway
Facebook removes an average of 4.5 billion fake accounts a year, roughly 1.5 times its entire active user base, just to keep the numbers from completely falling apart. And yet the accounts keep coming back, because creating them is trivially cheap and the rewards for doing so are real.
X (formerly Twitter) has never resolved its bot problem. Estimates have consistently placed bot accounts at somewhere between 9% and 15% of the platform, and some researchers put the figure higher. Elon Musk vowed to defeat the spam bots when he agreed to buy the platform, then cited that same bot prevalence when he tried to back out of the deal, and dropped the cleanup efforts once he owned it.
The platforms knew. They always knew. They just had a financial reason to look the other way.
Then AI Made It Infinitely Worse
The fake account problem was bad enough when bots were clunky and obvious. Repetitive posting patterns, no profile photo, generic bios, response times that didn't match human behaviour. Annoying, but detectable.
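Those old-school signals were crude enough that you could score them with a few lines of code. A minimal sketch of that era's heuristic detection, with field names and thresholds invented purely for illustration (real detectors use far richer behavioural features):

```python
# Toy heuristic bot scorer for the "clunky and obvious" era of bots.
# Every field name and threshold here is an illustrative assumption,
# not any platform's real detection logic.

GENERIC_BIOS = {"", "Crypto enthusiast", "DM for promo"}  # hypothetical examples

def bot_score(account: dict) -> float:
    """Return a naive bot likelihood in [0, 1]: each tripped signal adds equal weight."""
    signals = [
        account.get("has_profile_photo") is False,      # no profile photo
        account.get("bio", "") in GENERIC_BIOS,         # generic or empty bio
        account.get("posts_per_hour", 0) > 10,          # repetitive, high-volume posting
        account.get("median_reply_seconds", 60) < 2,    # inhumanly fast responses
    ]
    return sum(signals) / len(signals)
```

A blatant bot trips every signal and scores 1.0; a typical human account trips none and scores 0.0. The point of the sketch is how shallow these checks are: generative AI defeats every one of them by simply filling in a photo, a plausible bio, and human-paced activity.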
Generative AI changed that completely. According to Imperva's 2025 Bad Bot Report, automated traffic surpassed human traffic for the first time in a decade in 2024, accounting for 51% of all web traffic. AI tools have lowered the barrier to entry so far that anyone can now create sophisticated, convincing, scalable bots. No technical knowledge required.
The bots got smarter. The content got polished. And distinguishing a real account from a fake one became something that even trained researchers struggle with.
The Cost of All This
The cost isn't just aesthetic. It's not just that your feed feels emptier and more manufactured than it used to, though it does.
Bot-driven engagement manipulates what you see and what you believe is popular. It distorts political discourse. It enables financial scams. It degrades trust in online spaces so thoroughly that people stop engaging altogether, which is arguably the worst outcome of all. A disengaged public is easier to manipulate than an engaged one.
And the humans who are still there? They're doing an unpaid, unacknowledged job of making a broken ecosystem feel worth using. Their real content subsidises the fake content that surrounds it.
Someone Has to Do Something About It
We built Hi Friction because we got angry. Because the status quo is a choice. Because platforms could verify their users and have chosen, repeatedly, not to because it would hurt their growth numbers.
We're making the opposite choice. Every user verified at signup, monitored continuously, and asked periodically to prove they're still human. Not a checkbox ticked and forgotten. An ongoing commitment to keeping this space real.
It takes more effort. That's exactly why it's worth it.