I run a roofing company in the Berkshires, so I'm definitely not an academic researcher. But I've noticed something weird happening with my business over the past year that connects to what you're asking about. When I check my website analytics, I'm seeing thousands of "visits" that don't match reality at all. My contact form gets submissions that look perfect on the surface--proper grammar, reasonable questions about roof repairs--but when I call the numbers back, half are disconnected or people say they never contacted us. I've started tracking this because it wastes hours of my time every week following up on ghost leads. The really unsettling part is how these fake inquiries have gotten better at mimicking real customers. Early on, they were obviously spam. Now they reference specific services from my website, mention local weather conditions, even ask about my 15-20 year workmanship warranty by name. Someone or something is clearly scraping my site and generating content that sounds legitimate enough to fool me initially. What bothers me most isn't just the wasted time--it's that I can't trust my own business metrics anymore. When I used to get 50 inquiries a month, I knew roughly what that meant for my schedule. Now I get 200, but only 30 are real people. That makes it impossible to plan labor, order materials, or understand if my actual business is growing or dying underneath all this synthetic noise.
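To make that kind of lead triage concrete, here is a minimal sketch of how a small shop might pre-screen form submissions before spending phone time on them. The field names, disposable-domain list, and thresholds are all hypothetical; a filter this crude would only catch the obvious cases, not the well-mimicked inquiries described above.

```python
# Rough triage of contact-form leads: flag obvious synthetic patterns
# before spending callback time on them. Field names are hypothetical.
import re

DISPOSABLE_DOMAINS = {"mailinator.com", "tempmail.com"}  # illustrative list, not exhaustive

def flag_lead(lead, seen_phones):
    """Return a list of reasons this lead looks suspicious (empty list = worth a callback)."""
    reasons = []
    phone = re.sub(r"\D", "", lead.get("phone", ""))
    if len(phone) != 10:                      # US-style numbers only in this sketch
        reasons.append("malformed phone")
    elif phone in seen_phones:
        reasons.append("duplicate phone across leads")
    domain = lead.get("email", "").rpartition("@")[2].lower()
    if domain in DISPOSABLE_DOMAINS:
        reasons.append("disposable email domain")
    if lead.get("seconds_on_form", 999) < 5:  # form completed faster than a person could type
        reasons.append("form completed too quickly")
    seen_phones.add(phone)
    return reasons

leads = [
    {"phone": "413-555-0101", "email": "jane@example.com", "seconds_on_form": 140},
    {"phone": "0000000", "email": "bot@mailinator.com", "seconds_on_form": 2},
]
seen = set()
for lead in leads:
    print(lead["email"], flag_lead(lead, seen) or "looks worth a callback")
```

The point of the 200-versus-30 comparison above is the ratio: even a crude screen like this only helps if it restores a number you can actually plan labor and materials around.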
I run an electrical contracting company in South Florida, not exactly the research background you're asking for, but I've got a ground-level view of something relevant. When I'm preparing permit applications or sourcing materials for commercial jobs, I've noticed municipal inspection departments and supplier customer service have fundamentally changed in the past 18 months. I used to call the building department and argue with an actual inspector who knew our previous jobs and had opinions about our work. Now I get routed through AI phone systems that sound helpful but can't actually resolve anything specific to our project history. Same with electrical supply vendors--their "customer service" chat systems give me technical specs that look right but are sometimes for discontinued products or wrong voltage ratings. I caught one that would've delayed a hospital backup power job by weeks. The real problem isn't that AI exists in these systems--it's that I can't tell anymore when I'm talking to a human with accountability versus a language model that sounds confident but has no stake in whether my job passes inspection. I've started demanding callbacks from actual inspectors by name because I need someone who'll be there next week when the work gets reviewed. My 6 employees have noticed it too--they waste time calling suppliers for parts only to find out the "available inventory" they were quoted doesn't actually exist. What connects to your research is this: in a business where mistakes mean failed inspections, project delays, or safety violations, I can't afford to trust information that sounds authoritative but has no human verification behind it. The synthetic responses aren't just annoying--they create real liability when I'm responsible for FAA-compliant obstruction lighting or emergency power systems.
I run a cybersecurity firm in Texas, and I'm seeing the "dead internet" problem from the security side--our threat detection systems are drowning in what I call "ambiguity attacks." When we analyze network traffic for clients now, 40-60% of flagged activity sits in this weird zone where we genuinely cannot determine if it's a sophisticated bot probe, legitimate AI agent behavior, or an actual human using AI tools. Here's what keeps me up: We used to train our systems to distinguish human patterns from automated threats, but AI agents now deliberately mimic human inconsistency--random mouse movements, varied typing speeds, "natural" browsing patterns. Meanwhile, real humans using ChatGPT or Copilot generate perfectly structured, bot-like content. The signature we relied on for decades just evaporated. The cybersecurity implications are worse than people realize. One client got hit with what looked like a standard phishing campaign, but forensics showed the attack emails were individually generated by an AI that had scraped the company's LinkedIn, parsed their website copy, and mimicked their internal communication style. It wasn't a human running the AI--it was autonomous agents testing vulnerabilities 24/7 across thousands of targets simultaneously. What worries me most is that we're patching this problem with more AI, creating detection systems that watch other AI systems, which means even the "security layer" of the internet is becoming synthetic. We're building a web where AI guards against AI, and humans are just hoping both sides keep us in the loop.
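To make the "evaporated signature" concrete, here is a minimal sketch of the sort of timing-variance heuristic that behavioral detection has historically leaned on. The thresholds and sample data are made up; the point is that once agents inject human-like jitter and humans paste in uniformly structured AI output, this signal stops separating the two classes.

```python
# Naive behavioral heuristic: humans historically showed high variance in
# inter-event timing (keystrokes, clicks); scripted bots were near-metronomic.
# Thresholds and samples below are illustrative only.
from statistics import pstdev

def looks_automated(event_gaps_ms, threshold_ms=40.0):
    """Crude call: low timing variance -> 'automated'.
    This is the kind of signal that no longer separates bots from humans."""
    return pstdev(event_gaps_ms) < threshold_ms

human_typing      = [180, 95, 310, 60, 220, 145]   # bursty, uneven
old_style_bot     = [100, 100, 101, 99, 100, 100]  # near-uniform
agent_with_jitter = [175, 90, 305, 65, 215, 150]   # agent sampling a human-like distribution

for name, gaps in [("human", human_typing),
                   ("old-style bot", old_style_bot),
                   ("agent with jitter", agent_with_jitter)]:
    print(f"{name}: automated? {looks_automated(gaps)}")
```

Run as written, the old-style bot is the only one flagged; the jittered agent scores exactly like the human, which is the collapse described above.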
I run an IT and cybersecurity company, and I've watched the Dead Internet Theory play out in our own security monitoring systems. About eight months ago, our dark web monitoring tools started flagging what looked like credential dumps--except when we traced them back, they were AI-generated fake credentials being sold to other bots in circular marketplaces that no human was actually participating in. The scariest part from a security perspective is client education. We run mandatory phishing training for healthcare and government contractors who handle HIPAA and CUI data. Our tests used to have a 12-15% click-through rate on fake phishing emails. Now we're seeing 31% because employees genuinely can't distinguish AI-generated "urgent IT security alerts" from our legitimate company communications. The synthetic content has become so convincing that it's actively degrading human threat detection. What keeps me up at night is regulatory compliance documentation. We help clients meet CMMC, SOC2, and federal ATO requirements--frameworks that assume human auditors reviewing human-generated policies. I'm already seeing AI-written security policies being submitted to certifying bodies, who then use AI tools to review them. When both sides are synthetic, who's actually accountable when a breach happens and patient data or defense information gets exposed? The business impact is measurable: our incident response time has doubled because we now spend the first 20 minutes of every "urgent" ticket determining if we're dealing with a real human problem or an AI system that hallucinated an error that doesn't exist.
I've spent years observing how synthetic systems affect human learning environments, and what we're experiencing now simply wouldn't have been possible a couple of years ago. This is no longer bots spamming comment boxes. Something changed around 2022, when state-of-the-art generative models made producing convincing content dirt cheap compared with compensating real people. The worst part is that we're losing the ability to tell an actual human who's bad at code apart from AI output engineered to seem relatable. I spend time on technical discussion boards, and the questions I see now are too polished. Five years ago, someone stuck on a bug would ask messy questions: unfinished sentences, frustration typos, partial screenshots that didn't quite capture the issue. That's gone now. Everything reads like it's been through three rounds of editing. The economics are the truly frightening part. Why pay for real user research when you can spin up 10,000 fake user accounts with plausible browsing histories and engagement patterns? I've watched SEO shift from writing for people, to writing for Google, to writing for the AI that Google uses to understand text. Somewhere along the way we stopped writing for other humans at all. What concerns me most is how developers are learning to write code right now. They're getting help from Stack Overflow answers and tutorial comments that may never have been written by someone who actually had to debug that specific issue at 2 a.m. on a deadline. That changes everything about how knowledge gets passed on.
The Dead Internet Theory worries me because AI content is overwhelming real human work. At Magic Hour, we started labeling AI-generated content, but I noticed that making creation too easy just floods the platform with synthetic posts. What actually worked for us was adding some friction. When we let people see video edits openly and asked users to label their work clearly, real creativity started standing out again. Sometimes you need to slow things down a bit to keep the internet human.
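One way to picture that friction is a submission gate that refuses a post until the creator makes an explicit AI-use disclosure. The sketch below is hypothetical and not Magic Hour's actual system; the field names and rules are assumptions used only to show the shape of the idea.

```python
# Hypothetical submission gate: require an explicit AI-use disclosure before a
# post goes live. Field names and rules are illustrative only.

REQUIRED_DISCLOSURES = {"none", "ai_assisted", "fully_ai_generated"}

def validate_post(post: dict) -> list[str]:
    """Return a list of problems; an empty list means the post can be published."""
    problems = []
    disclosure = post.get("ai_disclosure")
    if disclosure not in REQUIRED_DISCLOSURES:
        problems.append("missing or invalid ai_disclosure")
    if disclosure in {"ai_assisted", "fully_ai_generated"} and not post.get("edit_history_public"):
        problems.append("AI-labeled work must expose its edit history")
    return problems

print(validate_post({"title": "Roof timelapse", "ai_disclosure": "none"}))
print(validate_post({"title": "Generated short", "ai_disclosure": "fully_ai_generated",
                     "edit_history_public": False}))
```

The deliberate cost here is the extra step at publish time; that small slowdown is the "friction" that lets labeled human work stand out.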