A pattern I keep seeing in automation is that a pilot misses ROI because it was built on top of a broken workflow. If the inputs are inconsistent or no one owns the exceptions, the automation works in theory but fails in the real environment. When a pilot underperforms, the right move is to audit the process and the data before touching the model: look at failure patterns, exception volume, and whether the workflow should be redesigned first. As for when to expand, timing matters. Scale only after the first workflow runs with stable inputs, documented exceptions, and at least one full cycle without manual rescue. Expanding too early creates sprawl; expanding too late slows momentum. A clean, repeatable process is the real green light.
In my experience, a pilot misses targets when the workflow wasn't standardized well enough before automating. I've had pilots show only a 5-7 percent time savings at first, then jump to 25 percent once we cleaned inputs and set ownership rules. The takeaway is to diagnose process quality before declaring automation a failure. I expand only after two cycles where the team uses the automated workflow at 80 percent or higher. Expanding too fast creates rework, especially with document-heavy processes where edge cases blow up the gains.
(1) Pilots don't always meet their initial targets. One client deployed OCR and ML for invoice processing but achieved only 40% straight-through processing, well short of their projected 80%. Rather than abandoning the initiative, the team treated it as a lab experiment: they improved the quality of the training data and limited processing to specific document types. Sometimes ROI surfaces in unexpected departments and only becomes apparent a few months later.
(2) Expand automation only when at least one team member is using the system independently, without reminders. People adopt automation through habitual use, the way they rely on plumbing without thinking about it. One client's second deployment failed because employees hadn't fully onboarded with the initial system yet.
(3) Your organization needs a mechanism to regulate the flow of automation requests. One client started receiving frivolous requests, like automating lunch orders. To manage this, they created an Automation Review Board made up of a business lead, a technical expert, and an operations specialist; all three must approve a project before developers begin work.
(4) The shift from pilot mode to business as usual happens when companies stop treating automations as one-off initiatives and fold them into their standard planning cycles. One enterprise client did this by embedding automation OKRs directly into their operational performance metrics, turning automation teams from pilot projects into units that contribute to company performance and bonus targets.
(5) I avoid saying "CoE" in early conversations with clients. Instead, I describe it as an internal automation agency that delivers cost-effective, fast solutions and filters out unqualified requests.
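The unanimous-approval rule from the review board above can be sketched in a few lines. This is a hypothetical illustration, not the client's actual tooling; the role names and the `AutomationRequest` class are assumptions.

```python
# Hypothetical sketch of a unanimous-approval gate for an Automation
# Review Board. Role names and data shapes are illustrative assumptions.
from dataclasses import dataclass, field

REQUIRED_ROLES = {"business_lead", "technical_expert", "operations_specialist"}

@dataclass
class AutomationRequest:
    title: str
    approvals: set = field(default_factory=set)  # board roles that have signed off

    def approve(self, role: str) -> None:
        if role not in REQUIRED_ROLES:
            raise ValueError(f"unknown board role: {role}")
        self.approvals.add(role)

    def ready_for_development(self) -> bool:
        # Developers start only when every board role has approved.
        return self.approvals == REQUIRED_ROLES

req = AutomationRequest("Invoice OCR expansion")
req.approve("business_lead")
req.approve("technical_expert")
print(req.ready_for_development())  # False: operations sign-off still missing
req.approve("operations_specialist")
print(req.ready_for_development())  # True: all three roles approved
```

The point of the structure is that no single enthusiastic sponsor can push a request into development; the gate only opens when all three perspectives have signed off.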
I present current time and cost savings--e.g., "$180k annual savings so far"--before projecting future potential like "10x that next year." CEOs respond well to these specific, repeatable wins, especially when they see a clear path to scale.
What if the automation pilot doesn't deliver the ROI we projected? In my experience, a pilot that misses its ROI target is usually telling you more about the process than the technology. Most pilots fail because the workflow has hidden exceptions or inconsistent data that the team didn't uncover early. The right move isn't to shut it down. Do a short root-cause analysis and map the outliers. Fixing the process usually unlocks the ROI you expected.
What I've seen is that when a pilot doesn't hit the projected ROI, it's rarely the automation that failed; it's usually that the workflow wasn't clean enough going in. Before pulling the plug, I always ask, "Did we automate the process, or the problems?" Fixing the upstream mess often unlocks the return you expected.

The right time to expand is after two things happen: the workflow is stable, and the people who use it weekly trust it. Expanding too early just scales confusion.

To prevent automation sprawl, you need basic governance: a single intake form, a simple scoring rubric, and one person who can say "not yet." Sprawl starts when every team automates in isolation.

Transitioning from pilot mode to business as usual happens when ownership shifts from the project team to the people doing the work. That means documentation, training loops, and clear exception paths.

And when pitching a CoE to a CEO or CFO, drop the jargon. Frame it as a way to protect ROI and reduce rework. A CoE is just a small group making sure automation stays valuable instead of chaotic.
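The intake-and-rubric idea above can be made concrete with a minimal scoring sketch. The criteria, weights, and approval threshold here are illustrative assumptions, not a standard; the point is that one gatekeeper applies the same rubric to every request.

```python
# Minimal intake-scoring sketch. Criteria, weights, and the threshold
# are illustrative assumptions, not an established rubric.
WEIGHTS = {
    "hours_saved_per_month": 0.5,   # quantified time savings
    "input_stability": 0.3,         # how clean and consistent the inputs are
    "exception_ownership": 0.2,     # someone clearly owns the edge cases
}

def score_request(ratings: dict) -> float:
    """Weighted score from 1-5 ratings on each criterion."""
    return sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS)

def triage(ratings: dict, threshold: float = 3.5) -> str:
    # Below threshold, the gatekeeper's answer is "not yet", not "no".
    return "approve" if score_request(ratings) >= threshold else "not yet"

print(triage({"hours_saved_per_month": 5, "input_stability": 4, "exception_ownership": 4}))  # approve
print(triage({"hours_saved_per_month": 4, "input_stability": 2, "exception_ownership": 1}))  # not yet
```

Even a toy rubric like this kills sprawl at intake: a high-savings request with unstable inputs and no exception owner scores below the bar, which is exactly the "automating the problems" trap described above.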