I work in B2B marketing for small-to-mid SaaS and services companies, usually under 100 staff. Most outbound is founder-led or run by a tiny sales team (1-3 reps). As a fractional CMO, I led AI SDR implementation alongside the founder. ACVs were usually in the low-to-mid five figures. I've tried a mix of AI email and sequencing tools and "AI SDR in a box" platforms. For my clients, AI SDR hasn't worked well enough to remain the main outbound channel.

Before AI SDR, reply rates were modest but healthy; meetings were fewer and pipeline was smaller, but intent was higher and founders felt proud of the emails. After AI SDR, by month 2-3, raw reply volume sometimes went up, but quality dropped: more neutral or negative replies and fewer sales-qualified meetings. Close rates from AI-sourced meetings were clearly lower than from human-led outbound, though I don't have a clean percentage split across all clients. Most pilots stalled in month 2 or 3. The first warning sign was reply sentiment and founder discomfort with tone, not the numbers. Deals from AI leads also moved more slowly and fell out more often, so pipeline looked bigger than it was.

The main cost wasn't licence fees; it was burned lists and trust. In a few cases we hit high-value accounts with off-message or too-frequent emails. I can't give a precise revenue number, but I'm confident we shortened the life of some lists by months. Teams also spent several hours a week fixing issues: cleaning contact data, handling odd replies, rewriting prompts, and dealing with CRM sync problems. That killed the "hands-off SDR" idea.

Critical gaps: weak real account research, poor grasp of nuance in positioning and who not to contact, and clunky control over when to stop or hand off to a human. The final straw was usually a founder seeing a bad email go to someone important, plus the clear gap in close rate versus manual outbound.
Most have since gone back to human-led outbound with AI as a writing and research assistant, which has given better lead quality and more confidence.
Industry: SaaS media and data platform. Company size: 10. Led by: founder / RevOps. ACV: low to mid four figures. AI SDR tried: multi-vendor pilots.

The project stalled around month two. Reply volume looked fine initially, but reply quality degraded fast. Neutral responses turned negative once prospects realized follow-ups lacked context. Burned leads were the first red flag, not meeting volume. We estimate low six figures in lost pipeline from damaged first impressions and follow-up fatigue. The team spent significant time rewriting prompts, cleaning lead lists, and manually apologizing to prospects. The missing piece was judgment: the AI handled sequencing, but not intent, timing, or nuance. We shifted to AI-assisted research and drafting, with humans controlling send logic. Value improved immediately. Albert Richer, Founder, WhatAreTheBest.com
Education services; ~50-200 employees. Marketing and the founder handled the tool implementation. We tested an AI SDR for re-engaging cold leads and speeding up follow-ups. The good news: replies were coming back sooner than expected (hours instead of days). The bad news: lead quality got noisy as the AI sent more urgency-based messaging. We had our first positive reply in week 1, but by month 2 we were getting more "stop emailing me" messages unless we tightened the rules. What did work was a hybrid-lane approach: the AI handled the first touch plus one follow-up, then a human took over once a lead asked a real question. If I could redo one thing, I'd start with stricter guardrails: fewer sends, tighter ICP filters, and a hard cap on follow-ups per contact.
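The guardrails this respondent describes (a tight ICP filter, a hard follow-up cap, and a human handoff on any real question) can be expressed as simple send-gating logic. The sketch below is purely illustrative: the `Lead` fields, thresholds, and action names are assumptions for this example, not part of any specific AI SDR product.

```python
# Hypothetical sketch of the guardrails above: ICP filter before the first
# touch, a hard cap on follow-ups per contact, and handoff to a human as
# soon as a lead asks a real question. All names are illustrative.
from dataclasses import dataclass

MAX_FOLLOW_UPS = 1  # AI handles first touch + one follow-up only


@dataclass
class Lead:
    email: str
    industry: str
    employee_count: int
    follow_ups_sent: int = 0
    asked_question: bool = False


def in_icp(lead: Lead) -> bool:
    """Tight ICP filter; the example criteria mirror this respondent's segment."""
    return lead.industry == "education services" and 50 <= lead.employee_count <= 200


def next_action(lead: Lead) -> str:
    if lead.asked_question:
        return "handoff_to_human"  # a real question ends the AI lane
    if not in_icp(lead):
        return "suppress"          # never contact out-of-ICP leads
    if lead.follow_ups_sent >= MAX_FOLLOW_UPS:
        return "stop"              # hard cap reached
    return "ai_send"
```

The point of ordering the checks this way is that the handoff rule wins over everything else: once a lead engages, no automated rule should fire again.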
Fintech/payments; ~10-50 employees. Led by sales and revenue operations. The AI SDR had the greatest impact on how quickly and consistently prospects were followed up with after they showed intent to buy (the signal). Prospects received follow-up from us within the first week of their interest, versus the weeks or months it took before AI. We had two surprises. First, bad data (not deepfake messages) turned out to be the bigger problem: if titles or firmographic information were incorrect, the AI-generated messages read as confident yet inaccurate. Second, while AI-generated meeting requests were successful and improved our sales team's ability to schedule meetings, we found we needed to add a human qualification step before booking. Today we run a hybrid model: the AI SDR handles initial touches and routes new leads to our sales team, and all subsequent communications and meeting scheduling are handled by our sales team. The hybrid approach has been successful and is still in use at month three.
Small marketing agency; ~1-25 employees; founder/lead on marketing operations. Our AI SDR pilot failed in month two. I didn't initially realize there was a problem with reply rates; what alerted me was reply sentiment. We got many more responses, but they were negative ("wrong fit," "not relevant," "remove my name from your email list"). Negative replies are very costly if you sell on trust. I also didn't see the cleanup cost at the time. We spent several hours per week rewriting SDR prompts (because the AI had gotten them wrong), correcting poor personalization, and sending follow-up emails ("sorry, I think I sent an email that was not quite right") to repair relationships we had damaged through poorly constructed SDR emails. In our business environment, the AI SDR lacked reliable brand-voice control and a strong do-not-message memory across accounts. We therefore moved to a lower-volume, human-led approach, using AI only for research briefs. The result: fewer sends, better-quality leads, and less damage.
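The "do-not-message memory across accounts" this respondent found missing is, at its core, a shared suppression list checked before every send. A minimal sketch, assuming nothing about any particular tool (the class and method names here are invented for illustration):

```python
# Minimal sketch of a cross-account do-not-message memory: a single
# suppression set, shared by every sequence, checked before each send.
# Names and the plain in-memory set are illustrative assumptions.
class SuppressionList:
    """Shared opt-out memory consulted before any outbound email."""

    def __init__(self) -> None:
        self._blocked: set[str] = set()

    def suppress(self, email: str) -> None:
        # Record an opt-out, e.g. "remove my name from your email list".
        # Normalizing case/whitespace keeps duplicates from slipping through.
        self._blocked.add(email.strip().lower())

    def can_message(self, email: str) -> bool:
        return email.strip().lower() not in self._blocked
```

In practice this would live in shared storage (the CRM or a database) rather than in memory, so every account and every sequence consults the same list.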
I work in B2B SaaS and services, usually with teams in the 11-50 range depending on the engagement. I'm usually the one who drives the AI SDR rollout on client projects, often paired with a RevOps lead.

One fintech client came in with a reply rate hovering around 1%. Their junior SDRs were exhausted, and they were averaging about eight meetings a month. Three months after switching to an AI SDR setup, reply rates jumped to around 5% and meetings landed at 17 a month. By month six, replies slid back to about 3.5%, but the SQLs were stronger and overall pipeline value was up roughly 40%. We saw the first positive response three days after launch. We already had a tight ICP and made small daily adjustments, which helped a lot.

Right now the mix sits at about 60% AI and 40% human. Human-sourced meetings still convert better (around 28% versus 22%), but the AI keeps the top of the funnel moving in a way the team couldn't on their own. The biggest upside surprise was landing meetings with accounts we'd been trying to break into for over a year. The downside? One of the sequences started booking calls with competitors because the intent filters weren't sharp enough.

Our stack grew from five tools to nine. To keep the AI SDR on track, we needed stronger enrichment, cleaner filtering, and better analytics. It works, but it's unforgiving if your data is messy. What made the biggest difference was the speed of testing: the AI cycled through around 15 different pain-point angles in two weeks, something a human team would have needed months to figure out. That fast feedback loop became a huge advantage.
We created an AI phone agent that picks up the phone. Our AI sales rep knows everything about the products we sell, and it pauses when the caller speaks. It listens, and we built in some light humor that works and is tailored to the business. It's amazing how well it works for something that took only a few hours to set up.