What's worked best for me is a lightweight intake channel with guardrails. We give every team a single form with three fields: problem description, monthly hours wasted, and the system/data involved. That keeps ideas focused on real friction instead of "nice to have" dreams. We also review submissions monthly so people see movement. In one cycle we cut our queue by 30 percent because users learned quickly what qualifies as an automation-ready process.

For scoring, I keep it simple: a three-factor score in which each factor is rated 1 to 5: hours saved per month, risk reduction, and reusability across teams. Anything scoring 12 or higher gets prioritized. For example, one document-classification workflow scored a 14 because it saved 20+ hours monthly and was reused by two other teams. This level of scoring is easy to maintain and still ties each build to measurable business impact.
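To make the tally concrete, here is a minimal sketch of that three-factor score, assuming a simple sum of the 1-5 ratings against the 12+ bar; the function name and example inputs are illustrative, not from a real intake queue.

```python
# Minimal sketch of the three-factor score described above.
# The factors and the 12+ threshold come from the answer;
# the example ratings below are illustrative.

def score_idea(hours_saved: int, risk_reduction: int, reusability: int) -> int:
    """Sum three factors, each rated 1 to 5."""
    for factor in (hours_saved, risk_reduction, reusability):
        if not 1 <= factor <= 5:
            raise ValueError("each factor must be rated 1 to 5")
    return hours_saved + risk_reduction + reusability

# Example: an idea scoring 14, clearing the 12+ bar.
total = score_idea(hours_saved=5, risk_reduction=4, reusability=5)
print(total, "-> prioritized" if total >= 12 else "-> backlog")
```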
1 / The first stage of your automation journey should focus on high-pain, low-complexity operations because they provide the most immediate value. One of our clients, for example, spent six hours each week manually renaming documents to meet internal naming conventions, a task with no strategic benefit. We automated this process, which led to instant buy-in from the operations team. Once implemented, the system quickly reached full utilization and delivered measurable efficiency gains.

2 / We use a three-question test to evaluate whether a process is suitable for automation. First, does the process involve structured data that can be organized automatically? Second, are the operations rule-based? Third, do we have full access to the system's screens and data? If two out of three answers are yes, then automation, whether via RPA, AI, or a hybrid approach, becomes viable (a simple version of this check is sketched below). In several projects, what initially seemed like non-automatable processes turned out to be solvable once we found creative workarounds.

3 / The automation criteria really depend on the specific department's needs. In finance, accuracy is critical and error elimination is the top priority. Sales teams, on the other hand, value time recovery more than anything. Leadership tends to focus on ROI and measurable cost savings. But when we look across the board, time savings often rise to the top because they directly impact all other performance goals, including accuracy and cost.

4 / We use a simple Google Form to collect automation ideas. Users are told to describe their task as if explaining it to a new employee with no context: no technical jargon, and no more than three steps. Each week, the team evaluates submissions based on the effort required to automate. This system works well across different functions. The main challenge isn't submission quality; it's making sure everyone knows this option exists and feels encouraged to participate.
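Here is a minimal sketch of the three-question test from point 2, assuming each question reduces to a yes/no flag; the function name and the example process are illustrative.

```python
# Sketch of the two-out-of-three viability test from point 2.
# The three questions mirror the answer; everything else is illustrative.

def is_automation_viable(structured_data: bool, rule_based: bool, full_access: bool) -> bool:
    """Automation (RPA, AI, or hybrid) is viable with at least two 'yes' answers."""
    return sum([structured_data, rule_based, full_access]) >= 2

# Example: rule-based process with structured data but a locked-down system.
print(is_automation_viable(structured_data=True, rule_based=True, full_access=False))  # True
```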
Hello! We work with a lot of clients on automation, and we've seen what works and what falls apart. The most successful organizations rally their teams early and focus on clear wins. Here are my thoughts.

Which processes should we automate first? Start by plotting all your potential AI use cases (i.e., opportunities to automate something) by anticipated business or technical value versus effort to implement. When getting started, position your organization for success by picking high-value, low-effort tasks.

How do we evaluate whether a workflow is automatable? Look for repeatable steps, clean inputs, and predictable outcomes. If the process is "messy" or people are making constant judgment calls, you may need a human in the loop, which is still possible but not a place to start.

What criteria matter most? Error elimination usually delivers the fastest business value. Time and cost savings follow naturally once quality improves.

How do we crowdsource ideas without chaos? Give teams a simple intake form and ask for problems, not solutions. Then let a small review group filter and prioritize. When everyone tries to design the automation themselves, it spirals. For great submissions, pair a financial reward with recognition and a spotlight in the organization, since people value both money and visibility.

What's a realistic scoring framework? Use something simple: value, effort, risk. Score each from 1 to 5. High value, low effort, low risk moves to the top. You don't need complex tools to get started.
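A minimal sketch of that value/effort/risk framework follows; combining the three 1-5 scores as value minus effort minus risk is one reasonable interpretation, not a formula prescribed in the answer, and the example ideas are invented for illustration.

```python
# Sketch of the value/effort/risk scoring framework above.
# Each dimension is scored 1-5; the priority formula (value - effort - risk)
# is an assumed interpretation of "high value, low effort, low risk on top".

ideas = [
    {"name": "invoice matching", "value": 5, "effort": 2, "risk": 1},
    {"name": "contract drafting", "value": 4, "effort": 5, "risk": 4},
    {"name": "email triage", "value": 3, "effort": 2, "risk": 2},
]

# Highest priority first: high value, low effort, low risk floats to the top.
ideas.sort(key=lambda i: i["value"] - i["effort"] - i["risk"], reverse=True)
for idea in ideas:
    print(idea["name"], idea["value"] - idea["effort"] - idea["risk"])
```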
What I've seen is that you start with the processes that are high volume and high pain. If you choose only high pain, you burn too much time on edge cases. If you choose only high volume, you automate tasks nobody's frustrated by. The sweet spot is where repetitive work and real friction overlap. That's usually document intake, routing, and data extraction.

To judge if a workflow is automatable, I ask one question: can the team describe the steps the same way twice? If the answer is no, fix the workflow first. Automation only amplifies clarity or confusion.

When scoring ideas, error elimination usually matters more than time saved. Bad data creates rework downstream, and that's where companies bleed hours.

To crowdsource ideas without chaos, we use a simple rule: every submission must include the trigger, the current steps, and the failure points. No essays. No venting.

A realistic scoring model is just four numbers: volume, pain, error rate, and ease of mapping. If a task scores high on three of the four, it's worth automating.
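A minimal sketch of that four-number model, assuming "scores high" means 4 or more on a 1-5 scale; that cutoff is an assumption, as is the example task.

```python
# Sketch of the four-number model above: volume, pain, error rate,
# and ease of mapping. HIGH = 4 on a 1-5 scale is an assumed cutoff,
# not stated in the answer.

HIGH = 4

def worth_automating(volume: int, pain: int, error_rate: int, ease_of_mapping: int) -> bool:
    """Worth automating if at least three of the four numbers are high."""
    scores = (volume, pain, error_rate, ease_of_mapping)
    return sum(score >= HIGH for score in scores) >= 3

# Example: painful, error-prone, high-volume task that is hard to map.
print(worth_automating(volume=5, pain=4, error_rate=4, ease_of_mapping=2))  # True
```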
We're building an automation Center of Excellence (even if we don't directly call it that) with a simple rule: start where the business will actually feel the impact. That usually means high-pain before high-volume. Volume is attractive on paper, but pain creates urgency, champions, and budget. We automate what helps our ML teams sleep at night: workflows that bury them in manual document handling, messy data extraction, or document systems that constantly break.

A workflow is automatable when it meets three checks: the inputs are mostly structured or at least predictable, the rules can be written down clearly, and exceptions don't outnumber the actual work. If a human can explain the logic in 30 seconds, a machine can usually be trained to do it. If not, we know we're still in "workflow redesign" land, not automation land.

The most important success metric depends on the job, but we rank them like this: 1) error elimination, 2) time saved, 3) cost, because teams lose trust fast when automation is speedy but wrong. We look for wins that reduce mistakes first, then friction, then spend.

To collect automation ideas without chaos, we run two-week "idea sprints" where teams submit problems, not solutions. We group ideas, merge duplicates, and score only the ones tied to real document processing or data extraction pain points. This keeps enthusiasm high without creating noise.

Our scorecard is intentionally basic and doesn't need fancy tools. We score workflows 1-5 on Clarity (can rules be written?), Repeatability (does it happen often?), Inputs (consistent docs/data?), Exceptions (manageable?), and Impact (does solving this help people?). Total the scores: anything 20+ moves forward, 15-19 gets reviewed, and anything below 15 waits. Simple, fair, and actionable; a quick sketch of the tally appears below.

For crowdsourcing at scale, tools like Miro help separate signal from noise by letting ideas be clustered and ranked visually, which keeps the process organized even when excitement runs high.
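A minimal sketch of that five-criterion scorecard: the criteria names and the 20+/15-19/below-15 tiers come from the answer above, while the function name and the example workflow scores are illustrative.

```python
# Sketch of the five-criterion scorecard above: Clarity, Repeatability,
# Inputs, Exceptions, Impact, each scored 1-5. Thresholds match the answer;
# the example workflow is illustrative.

def triage(scores: dict[str, int]) -> str:
    """Total the five 1-5 scores and map the total to a triage decision."""
    total = sum(scores.values())
    if total >= 20:
        return f"{total}: moves forward"
    if total >= 15:
        return f"{total}: gets reviewed"
    return f"{total}: waits"

print(triage({"clarity": 5, "repeatability": 4, "inputs": 4,
              "exceptions": 3, "impact": 5}))  # 21: moves forward
```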
To establish a successful Automation Center of Excellence, prioritize automating processes with high pain points over high-volume tasks. Focusing on high pain areas addresses significant productivity challenges, improves employee morale, and enhances customer satisfaction. Start with processes that require excessive manual effort, cause bottlenecks, or are error-prone, such as streamlining document processing to reduce delays and improve accuracy.