Rolling out AI SDRs was messier than we expected. Our team leaned on them too much, and clients started getting follow-ups that were clearly not written by a person. We noticed a few regulars just stopped responding. What actually worked was having Sarah review the AI conversations daily and training the team on when to jump in. Looking back, setting clear human-AI rules from day one would have saved us a lot of trouble.
As managing partner at M&A Executive Search, I experimented with AI SDRs to streamline our outreach to potential executive candidates.
1. The biggest challenge we faced was maintaining personalization during outreach, because the AI struggled to adapt to our brand voice.
2. An unexpected challenge was data quality: some AI-generated emails lacked professional information, which really put our credibility at risk.
3. Initially there was tension because the team was not fully on board with integrating AI, but with clear communication they eventually warmed up to the idea.
4. If I could start over, I would not depend fully on AI from the start. I would automate carefully and have a team review the results to verify information, so human judgement still ensures the best quality.
Getting our AI sales reps to work with the human team at Lusha was a mess at first. The AI would email prospects at 2 AM and handoffs were constantly dropped. After it kept flagging bad leads, our team just started ignoring its suggestions completely. If I could go back, I'd map out exactly where the AI's job ends and a person's begins. Clear boundaries would have saved us weeks of headaches.
I'm Managing Director at Threadgold Consulting. The first time we used AI sales reps for a client with a complex ERP system, it completely messed up our consultants' schedules. The AI would book a meeting, and so would our own scheduling person, in the same slot. Suddenly our team was handling double bookings and apologizing to clients. A few people felt left in the dark, and you could see their frustration. We should have just sat everyone down first, shown them exactly how the AI worked, and set up clear communication channels from day one.
The AI sales reps we used at Tutorbase just sent out generic template emails, which simply doesn't work in education. One partner actually forwarded an email back and asked if we'd even looked at his website. We switched to having the AI draft first, then a person rewrite it before sending. That saved our relationships. If first impressions matter, don't let AI handle the first contact alone.
The worst day was a school district RFP where the AI SDR sounded brilliant and still got us in trouble. It mentioned "pre-approved surfacing specs" and a "10-12 week install window," both of which ignored union calendars and permit lead times. Procurement caught the mismatch, and a friendly facilities manager stopped replying. The email looked polished. The problem was invisible: compliance drift. In public-sector sales, the one detail you skip (prevailing wage, bond capacity, site prep) decides whether you build or get blocked. We recovered by making the model respect the rules first, not the copy. I added a compliance gate before send: location, union rules, seasonality, and permit risk. If any check returns "unknown," the draft lands with a sales engineer. I replaced free-form promises with approved phrases ("indicative lead time," "pending permits," "subject to site evaluation") so we never outrun ops. Territory awareness kept us focused on districts we'd served in the last 24 months, and we watched Escalation Rate and Promise Accuracy like hawks. The surprise benefit: credibility climbed. Superintendents prefer a careful "it depends" to a confident "we can." AI can find the door, but in regulated B2B, humans still read the fine print before stepping through it.
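To make that gate concrete, here is a minimal sketch of the "any unknown goes to a human" rule, assuming the checks are simple lookups against CRM and permit data. Every name, field, and status below is hypothetical, not this answerer's actual tooling.

```python
from typing import Callable, Dict

# Each check returns "pass", "fail", or "unknown". Real versions would hit a
# CRM, a union calendar, a permit database, etc. (all illustrative here).
CHECKS: Dict[str, Callable[[dict], str]] = {
    "location":    lambda p: "pass" if p.get("served_last_24_months") else "fail",
    "union_rules": lambda p: p.get("union_calendar_status", "unknown"),
    "seasonality": lambda p: p.get("seasonality_status", "unknown"),
    "permit_risk": lambda p: p.get("permit_status", "unknown"),
}

def compliance_gate(prospect: dict) -> str:
    """Allow auto-send only when every check passes; otherwise route to a human."""
    results = {name: check(prospect) for name, check in CHECKS.items()}
    if all(status == "pass" for status in results.values()):
        return "send"
    return "route_to_sales_engineer"  # any fail OR unknown blocks auto-send

prospect = {
    "served_last_24_months": True,
    "union_calendar_status": "pass",
    "seasonality_status": "pass",
    # permit_status is missing -> treated as "unknown"
}
print(compliance_gate(prospect))  # route_to_sales_engineer
```

The key design choice is that "unknown" is handled the same way as "fail": the default is human review, not auto-send.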
My nightmare wasn't angry prospects, it was angry partners. The AI SDR offered "launch pricing" to an affiliate list that wasn't supposed to get promotions, then sent time-limited deals after the promo had ended in two regions. CTR was great. Net margin and trust were not. Overnight, my CRM looked like a puzzle with missing pieces: duplicate accounts, mis-attributed conversions, and partner tickets piling up. The model had optimized for clicks while quietly breaking the channel and calendar rules that keep the business healthy. I fixed it by making the bot channel- and calendar-aware. It can ask for interest, but only humans reveal prices or codes. The model checks promotion windows and regional holidays before it drafts a single line. We limited it to approved buyer personas and banned edge-case segments that create fraud risk. Two numbers steer the ship now: Net Margin per AI-sourced deal and Partner Escalations. If the margin dips or tickets spike, the AI pauses, we run a failure review on the 10 worst threads, then retrain. The funny part? Revenue didn't slow, but refunds did. Customers still converted, partners stayed happy, and reps stopped babysitting the bot. In e-commerce, AI SDRs don't just need a good pitch; they need to obey the rules of revenue. If the model can't read a policy, it shouldn't press send.
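As a rough illustration of that two-metric kill switch, here is a short sketch assuming weekly metrics and invented thresholds; the answer doesn't give the actual floors or ceilings, so the numbers below are placeholders.

```python
from dataclasses import dataclass

@dataclass
class WeeklyMetrics:
    net_margin_per_ai_deal: float  # average net margin on AI-sourced deals
    partner_escalations: int       # partner tickets opened this week

# Illustrative thresholds only; tune to your own margin and ticket baselines.
MIN_MARGIN = 400.0
MAX_ESCALATIONS = 5

def should_pause(m: WeeklyMetrics) -> bool:
    """Pause the AI SDR when margin dips or partner tickets spike."""
    return (m.net_margin_per_ai_deal < MIN_MARGIN
            or m.partner_escalations > MAX_ESCALATIONS)

week = WeeklyMetrics(net_margin_per_ai_deal=310.0, partner_escalations=2)
if should_pause(week):
    print("Pause sends, review the 10 worst threads, retrain.")
else:
    print("Keep running.")
```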
The most significant issue we ran into with AI SDRs was quality control at scale. The tools made processes faster and more reliable overall, but they lacked the nuance required for effective personalization, especially when engaging mid-market and enterprise accounts. We recognized early on that prospects would simply tune out whenever outreach sent on our behalf felt templated or disengaged, and while it was difficult to quantify how much that hurt brand perception, we knew it did. Another unforeseen hurdle was that the AI tools created a false sense of confidence. Early indicators like open and response rates were promising, but once we looked more closely, a huge portion of those replies were negative or completely unqualified. The AI was simply optimizing for replies, not actual, qualified conversations. The rollout also affected team morale more than we anticipated. Senior SDRs felt they were losing relevance to another tech-driven model. It forced us to re-evaluate roles and treat the AI as a guide rather than a replacement for the SDR role. Once we did that, and gave reps ownership of the AI-generated outreach, morale returned and performance improved. If I had a chance to do it all over again, I would start by training the AI on our top-performing messaging instead of the two or three out-of-the-box templates, and I would put more guard rails around tone, relevancy, and compliance in place up front. AI can definitely be a force multiplier, but it has to multiply your best strategy, not just the fastest version of it.
The hardest part with our AI sales assistant was getting it to sound like us. It sent some weirdly formal emails that even confused prospects. Ironically, my team ended up spending more time reviewing and fixing the AI's drafts, which actually slowed down our product launch. Looking back, we should have had the marketing and AI teams working together from day one.
At Xponent21, our AI sales reps and the human team started stepping on each other's toes. The AI and a rep would send the same email to the same prospect, which led to some really awkward calls. The AI's timing was just off from our salespeople's rhythm. If I did it again, I'd get everyone in a room and map the whole process on a whiteboard first. Spotting those overlaps early would have let the team see how the AI actually helped.
The real headache was team morale. Our reps saw the AI qualification tool and thought it was there to compete with them. It would sometimes miss subtle but important cues, especially in talks with potential creators, so we missed out on a few good leads. We fixed this by setting clear rules on when a human had to step in, which helped calm everyone's fears. If I did it again, I'd position it as a partner from the start, not a replacement.
When we launched our AI SDRs, our biggest mistake was the onboarding. We thought sales reps would just jump in, but a lot of them didn't trust the AI lead scores. They just wouldn't use it, which slowed everything down and created a lot of friction. I get it now. You can't just hand them a new tool. You have to show them what the AI is good at and where it needs a human touch. Otherwise, the process gets gummed up and everyone gets frustrated.
The AI sales reps we built sounded too robotic. Clients noticed, with some even replying to ask if a real person wrote it. The AI also got stuck when conversations went off-script, like when a journalist asked an unexpected question, so we missed opportunities. If I did it again, I'd have our sales team involved from the start, building templates from their actual conversations instead of using the vendor's generic advice.
My AI sales reps kept writing messages that sounded like a robot wrote them. In one campaign, our response rates tanked because the follow-ups just listed features instead of solving actual problems. The founders were confused, and I had no way to show my team how to build real connections. If I did it again, I'd watch those first campaigns closely and rewrite everything to sound like a person talking, not whatever the AI generates by default.
The hardest part with AI sales reps was getting them to sound human for our construction clients. We tested one tool, and it kept sending canned responses when a contractor asked about specific project details. Teaching the AI construction terms took weeks of work. Looking back, I should have spent more time on real construction examples upfront and had the sales team review the AI's responses daily.